http://mathoverflow.net/questions/97105/modular-representations-with-unequal-characteristic-reference-request
## Modular representations with unequal characteristic - reference request

Let $G$ be a finite group, and let $K$ be a finite field whose characteristic does not divide $|G|$. I am interested in the theory of finitely generated modules over $K[G]$. Of course many problems are not present here because $K[G]$ is semisimple and all modules are projective. My case is partly covered by Section 15.5 of Serre's book "Linear Representations of Finite Groups". However, Serre likes to assume that $K$ is "sufficiently large", meaning that it has a primitive $m$-th root of unity, where $m$ is the least common multiple of the orders of the elements of $G$. I do not want to assume this, so some Galois theory of finite extensions of $K$ will come into play. I do not think that anything desperately complicated happens, but it would be convenient if I could refer to the literature rather than having to write it out myself. Is there a good source for this?

[UPDATED]: In particular, I would like to be able to control the dimensions over $K$ of the simple $K[G]$-modules. As pointed out in Alex Bartel's answer, these need not divide the order of $G$. I am willing to assume that $G$ is a $p$-group for some prime $p\neq\text{char}(K)$.

[UPDATED AGAIN]: OK, here is a sharper question. Put $m=|K|$ (which is a power of a prime different from $p$) and let $t$ be the order of $m$ in $(\mathbb{Z}/p)^\times$. Let $L$ be a finite extension of $K$, let $G$ be a finite abelian $p$-group, and let $\rho:G\to L^\times$ be a homomorphism that does not factor through the unit group of any proper subfield containing $K$. Then $\rho$ makes $L$ into an irreducible $K$-linear representation of $G$, and every irreducible arises in this way.
If I've got this straight, we see that the possible degrees of nontrivial irreducible $K$-linear representations of abelian $p$-groups are the numbers $tp^k$ for $k\geq 0$. I ask: if we let $G$ be a nonabelian $p$-group, does the set of possible degrees get any bigger?

- 2 Maybe Geoff's answer to mathoverflow.net/questions/91132/… could help you? – Someone May 16 2012 at 11:32
- @Someone: thanks, that's a useful pointer. – Neil Strickland May 16 2012 at 12:27
- Could you clarify what you mean by "control the dimension"? The dimensions of the simple modules over $K$ are multiples of the dimensions of the simple modules over $\bar{K}$, but I am not even sure what "controlling the dimensions" over $\bar{K}$ would entail. – Alex Bartel May 16 2012 at 15:04
- @Alex: I have updated the question again. – Neil Strickland May 16 2012 at 16:25

## 2 Answers

Your last statement is not true in general. Let $G=C_3$ and take your favourite finite field that does not contain the cube roots of unity, e.g. $K=\mathbb{F}_5$. Then the two non-trivial one-dimensional representations over $\bar{K}$ are not defined over $K$, but their sum is, since it's the regular representation minus the trivial. In general, $K[G]$-modules for $K$ finite of characteristic co-prime to $|G|$ behave pretty much like modules over characteristic zero fields (the simple $K[G]$-modules are just sums over Galois orbits of the absolutely simple ones), the major simplification being that there are no Schur indices. I am not sure that there is much more to say about this. The second volume of Curtis and Reiner contains a whole chapter on rationality questions, i.e. fields that are not "sufficiently large" in the sense of Serre. Most of it is for characteristic zero, but as I say, a lot of it carries over.
Edit Re updated question: if $G$ is a finite $p$-group and $K$ is a finite field of characteristic different from $p$, then it is indeed true that any irreducible representation of $G$ over $K$ has dimension $tp^k$, for some $k$ and some $t\;|\;(p-1)$. Indeed, the absolutely irreducible representations have dimension a power of $p$, and the field of definition of any absolutely irreducible representation is $K$ adjoined with the $p^r$-th roots of unity for some $r$, so the Galois orbit of a representation has size dividing $(p-1)p^{r-1}$ for some $r$.

-

The question you raise doesn't come up often enough to be dealt with explicitly in textbooks and such, I think. One way to extract an answer (possibly overkill) is to look more closely at the cde-triangle in the formulation of Serre or Curtis & Reiner. Changing your notation a bit, take `$p$` not dividing `$|G|$` and then form a triple `$(K,A,k)$` with `$A$` a complete d.v.r. (such as `$\mathbb{Z}_p$`) having `$K$` as fraction field and `$k$` as the finite residue field of characteristic `$p$`. Without assuming that these fields are "large enough", one knows that the decomposition homomorphism `$d: \mathrm{R}_K(G) \rightarrow \mathrm{R}_k(G)$` is surjective: see my older question here.

EDIT: Looking back at what I wrote next, it seems too superficial. Maybe a more careful comparison of the behavior under field extensions is really needed. Back to the drawing board.

ADDED: Maybe I've missed something, but I think what Serre does in his Section 15.5 avoids any use of the assumption that the fields are large enough. So this should dispose of the original question asked, while equating dimensions of correlated simple modules over `$K$` and `$k$`. (Serre tends to be careful about specifying where it matters that fields are large enough.)
Even though simple modules may decompose further over field extensions, working with any fixed `$p$`-modular system seems to yield trivial Cartan and decomposition matrices. (In this situation, the fact that `$d$` is surjective follows from the assumption that `$p$` doesn't divide the group order.) In any case Alex has addressed the modified questions well. -
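The degree pattern $tp^k$ described above can be checked numerically for cyclic groups: the irreducible $K$-representation attached to a faithful character of $C_{p^r}$ over $K=\mathbb{F}_m$ has dimension equal to the multiplicative order of $m$ modulo $p^r$. A minimal sketch, not from the thread itself (the choice $p=3$, $m=5$ is arbitrary):

```python
def mult_order(m, n):
    """Multiplicative order of m modulo n (assumes gcd(m, n) == 1)."""
    k, x = 1, m % n
    while x != 1:
        x = (x * m) % n
        k += 1
    return k

p, m = 3, 5           # p-group prime and field size |K| = m (here K = F_5)
t = mult_order(m, p)  # order of m in (Z/p)^x

# Degrees of the irreducibles attached to characters of order p^r, r = 1..5:
degrees = [mult_order(m, p**r) for r in range(1, 6)]
print(t, degrees)     # -> 2 [2, 6, 18, 54, 162], i.e. t * p^k
```

For this choice of $m$ the order modulo $p^r$ is exactly $t\,p^{r-1}$; in general it is $t\,p^{k}$ for some $k < r$, which is the set of degrees the question describes.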
http://physics.stackexchange.com/questions/39302/simple-derivation-of-the-tension-of-a-cord-on-a-suspended-mass
# Simple derivation of the tension of a cord on a suspended mass

Just wanted to disclaim this problem -- it is homework. However, I am not asking for the solution; I am wondering if anyone can tell me what I may be doing wrong in solving this specific tension problem. The problem is described by this picture:

Now, assuming the masses of the cords suspending the mass $M$ are insignificant, the question is what is the value of $T_2$ (given specific values of $\theta$ and the mass of $M$).

So, here's my solution: Since the object is at rest, the net force on it is zero, so resolving the forces vertically and horizontally gives $$mg = T_1 \sin(\theta)$$ and $$T_2 = T_1 \cos(\theta)$$ Therefore, $$T_1 = \frac{mg}{\sin(\theta)}$$ and, furthermore, $$T_2 = \frac{mg \cos(\theta)}{\sin(\theta)} = mg \cot(\theta).$$

So, that is my final solution; however, multiple sources are telling me it is wrong (these sources being 1. my teacher's automated grading software and 2. a textbook answer to the same problem with different values of the mass and $\theta$, whose result differs from what I am finding). Is this a correct means of reaching the conclusion?

-

## 1 Answer

Maybe I am missing something simple, but I cannot find any errors in your solution.

- Thank you, your confirmation is valuable to me. – mjgpy3 Oct 8 '12 at 0:18
- Actually accounting for the weight of the cords is missing, but I am willing to overlook this. – ja72 Oct 8 '12 at 14:15
- 2 @ja72: It is explicitly indicated in the wording of the problem: "assuming masses of the cords suspending the mass M are insignificant". – akhmeteli Oct 8 '12 at 15:37
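The force balance above is easy to check numerically. A minimal sketch (the mass and angle values are illustrative, not from the problem, whose figure is not reproduced here):

```python
import math

def tensions(m, theta, g=9.81):
    """Tensions in the two cords for a mass m at rest, from force balance:
    vertical:   m*g = T1*sin(theta)
    horizontal: T2  = T1*cos(theta)
    """
    t1 = m * g / math.sin(theta)
    t2 = t1 * math.cos(theta)   # equals m*g / tan(theta), i.e. m*g*cot(theta)
    return t1, t2

t1, t2 = tensions(m=2.0, theta=math.radians(30))
# for m = 2 kg, theta = 30 degrees: T1 ≈ 39.24 N, T2 ≈ 33.98 N
```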
http://mathhelpforum.com/differential-geometry/151956-show-continuous-maps-preserve-connectedness.html
# Thread:

1. ## show that continuous maps preserve connectedness

Hi guys. Suppose we have two metric spaces $X$ and $Y$, and we have a function $f:X\to Y$ which is continuous on $X$. If $S$ is a connected subset of $X$, then it's easy enough to show that $f(S)=\{f(x):x\in S\}$ is connected. However, suppose there is a function $g:X\to Y$ which is only continuous on $S$, but not necessarily on the rest of $X$. How do we show that $g(S)$ is connected? Any help would be much appreciated. Thanks!

2. Originally Posted by hatsoff
Hi guys. Suppose we have two metric spaces $X$ and $Y$, and we have a function $f:X\to Y$ which is continuous on $X$. If $S$ is a connected subset of $X$, then it's easy enough to show that $f(S)=\{f(x):x\in S\}$ is connected. However, suppose there is a function $g:X\to Y$ which is only continuous on $S$, but not necessarily the rest of $X$. How do we show that $g(S)$ is connected? Any help would be much appreciated. Thanks!

Sketch: suppose $g(S)=U\cup V\,,\,U,V\subset Y$ open $\Longrightarrow S\subset g^{-1}(U)\cup g^{-1}(V)\,,\,g^{-1}(U)\,,\,g^{-1}(V)$ open in $S$ (why?). By connectedness of $S$ one of these two last sets must be empty, so...

Tonio

3. Thanks, but in your sketch you assume that $g(S)$ is open, which may not be the case. If on the other hand we say simply that $g(S)\subseteq U\cup V$, then I won't be able to show that $g^{-1}(U)$ and $g^{-1}(V)$ are open. Or am I misunderstanding something---always a possibility!---?

4. Originally Posted by hatsoff
Thanks, but in your sketch you assume that $g(S)$ is open, which may not be the case.

I didn't mean that, but it's what could be deduced from my typos. I assume that $g(S)$ is the union (oops! and I should have written disjoint union) of two sets open in $g(S)$ (another typo there), which means both $U,V$ are the intersection of $g(S)$ with open subsets of $Y$, and NOT necessarily open in $Y$. Sorry for that.
Tonio

5. So does anyone else have an idea? I was thinking that maybe we could show that for any function $g:X\to Y$, if the restriction of $g$ to $S\subseteq X$ is continuous, then there is a function $h:X\to Y$ which is continuous on $X$, and whose restriction to $S$ is equal to that of $g$. In that case, I could show that $S$ is connected implies $h(S)=g(S)$ is connected. However, I do not know how to prove that a function whose restriction to $S$ is continuous may be extended to a function continuous on all of $X$. Heck, I'm not even sure it's true! Help would be much appreciated!

6. Assuming $S$ is nonempty: by contraposition we can assume $g(S)= U\cup V$ where $U,V\subset Y$ are open, nonempty and disjoint. (That is, assuming $g(S)$ is not connected.) Then as Tonio said, $g^{-1}(U), g^{-1}(V)$ are open, disjoint and nonempty, and $S= g^{-1}(U)\cup g^{-1}(V)$... and what does this mean?

7. If $g(S)$ is disconnected, then we cannot assume that $g(S)=U\cup V$ for disjoint open sets $U$ and $V$. Rather, we can only assume that $g(S)\subseteq U\cup V$ for such sets. And since $g$ is only known to be continuous on $S$, that means $g^{-1}(U)$ and $g^{-1}(V)$ may be non-open. Thanks for the attempt though!

8. Originally Posted by hatsoff
If $g(S)$ is disconnected, then we cannot assume that $g(S)=U\cup V$ for disjoint open sets $U$ and $V$. Rather, we can only assume that $g(S)\subseteq U\cup V$ for such sets. And since $g$ is only known to be continuous on $S$, then that means $g^{-1}(U)$ and $g^{-1}(V)$ may be nonopen. Thanks for the attempt though!

How did you define connectedness? The definition I'm familiar with says that a space $X$ is disconnected if there exist two disjoint open sets $U,V$ such that $X=U \cup V$.
Also, you can simply define $h = g|_S$; then $h:S \to Y$ is continuous and thus $h(S) = g(S)$ is connected.

9. Originally Posted by Defunkt
How did you define connectedness? The definition I'm familiar with says that a space $X$ is disconnected if there exist two disjoint open sets $U,V$ such that $X=U \cup V$.

$g(S)$ is not a space. It's just a subset of the space $Y$. My definition is as follows: A set $S$ is disconnected iff there are disjoint open sets $U,V$ covering $S$ with $U\cap S\neq\emptyset\neq V\cap S$. It is connected iff it is not disconnected.

10. Originally Posted by hatsoff
$g(S)$ is not a space. It's just a subset of the space $Y$. My definition is as follows: A set $S$ is disconnected iff there are disjoint open sets $U,V$ covering $S$ with $U\cap S\neq\emptyset\neq V\cap S$. It is connected iff it is not disconnected.

What do you mean? Then how can we give it the subspace topology? EDIT: I mean, how do you define a space? And then why is $g(S)$ not one?

11. Originally Posted by hatsoff
If $g(S)$ is disconnected, then we cannot assume that $g(S)=U\cup V$ for disjoint open sets $U$ and $V$.

I don't know what definition of connectedness you're using... But a topological space is connected if it cannot be written as a disjoint union of 2 non-empty open sets $U,V$. Thus "not connected" means we can find such $U,V$.

12. Originally Posted by hatsoff
If $g(S)$ is disconnected, then we cannot assume that $g(S)=U\cup V$ for disjoint open sets $U$ and $V$. Rather, we can only assume that $g(S)\subseteq U\cup V$ for such sets. And since $g$ is only known to be continuous on $S$, then that means $g^{-1}(U)$ and $g^{-1}(V)$ may be nonopen. Thanks for the attempt though!

It seems like you haven't yet fully understood the definition of connectedness, but there is another equivalent definition which can be helpful here: A set $S$ in a top.
space $X$ is disconnected iff there exists a continuous function from $X$ onto $\{0,1\}$, where this last space is the usual two-point one with the topology inherited from the usual topology on $\mathbb{R}$.

Let us go back now to your problem: if $g(S)$ is disconnected there exists $h:g(S)\rightarrow \{0,1\}$ continuous and onto, but then $h\circ g:S\rightarrow \{0,1\}$ is continuous and onto, contradiction.

Tonio

13. I understand the definition of connectedness which I related earlier. Although I haven't gone and proved it, I can see how your alternate definition might be equivalent. That said, I figured out how to prove the contrapositive: If $g(A)$ is disconnected, where $g:X\to Y$ is continuous on $A$, then there are disjoint open sets $U,V$ covering $g(A)$ with $g(A)\cap U$ and $g(A)\cap V$ nonempty. Putting $U'=\bigcup\{B(d(x,A\cap g^{-1}(V))/2,x):x\in A\cap g^{-1}(U)\}$ and $V'=\bigcup\{B(d(x,A\cap g^{-1}(U))/2,x):x\in A\cap g^{-1}(V)\}$ (balls around each point of radius half its distance to the other preimage) satisfies disconnectedness for $A$.

Thanks for the help, guys!
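The argument the thread converges on, phrased via the subspace topology on $S$, can be written out as a short sketch (an editor's summary under the thread's definitions, not a verbatim quote of any post):

```latex
\textbf{Claim.} If $S \subseteq X$ is connected and $g|_S : S \to Y$ is
continuous, then $g(S)$ is connected.

\textbf{Proof sketch.} Suppose $g(S) \subseteq U \cup V$ with $U, V$ open
in $Y$, $U \cap V = \emptyset$, and both $U \cap g(S)$ and $V \cap g(S)$
nonempty. By continuity of $g|_S$, the sets $U' = S \cap g^{-1}(U)$ and
$V' = S \cap g^{-1}(V)$ are open in the subspace topology of $S$. They are
disjoint, nonempty, and satisfy $S = U' \cup V'$, so $S$ is disconnected,
a contradiction. \qed
```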
http://mathoverflow.net/questions/100970?sort=votes
## before taking on a graduate student [closed]

This post asks for resources to handle the following situation: "I, suddenly, have students", and many wonderful answers have been provided. My question here is sort of a prequel: recently, a post-qualification second-year graduate student $X$ expressed an interest in working with me. What should I put $X$ through before agreeing to advise him or her?

I don't mean to ask about the part where I point $X$ to appropriate literature from my field and help $X$ to understand the textbooks and survey papers as needed: that I can do. But how should I measure $X$'s aptitude and gauge interest levels, etc.? I don't want this to be a long and drawn-out process, for obvious reasons.

The zoomed-out version of my question probably should be: What do you look for in a graduate student before you agree to advise him or her?

If it turns out from the responses that this question is overly subjective and leads to wildly varying opinions, I can make the question community wiki.

- 14 Incidentally, now academia.stackexchange.com is in beta, your question will definitely look more on-topic there. – Federico Poloni Jun 29 at 20:09
- 11 This question was clearly of general interest: 500+ views and 13 up votes in 9 hours, not to mention two useful and non-argumentative answers. This is important information not just for those starting their post-doctoral careers but also for first- and second-year grad students. I would like to thank the answerers for their help, and inform the closers that their views are shortsighted at best if they sincerely believe that a (possibly edited) version of this question is not useful to many research mathematicians and their potential students. – Vidit Nanda Jun 30 at 5:06
- 10 I don't think it helps much to ask such a question at academia.stackexchange.com - any non-obvious advice would be very math-specific.
– ABayer Jun 30 at 12:30
- 7 The fact that nobody's given an argumentative answer isn't evidence that nobody wants to. I've been tempted to give a mildly argumentative answer (that potential advisors should make sure students are well informed - what Tricia wrote is great - and try to find the right fit, but should not filter for aptitude in a US-style system, since everyone who has been admitted to grad school and remains in good standing needs and deserves an advisor). However, I don't think MO is the right venue or format for such a debate. – Henry Cohn Jun 30 at 13:22
- 7 Speaking as one of the most active users at academia.stackexchange, this question would be very welcome there, @ABayer's comment notwithstanding. Nothing about advising students is "obvious" to everyone, in math or any other field. – JeffE Jul 1 at 0:55

## 4 Answers

I usually suggest the student spend a couple of weeks, or more if they want, taking a look at a little bit of material in my area and then come talk more with me, so that we can both get a sense of whether the area is a good fit for the student. Typically, I print out a survey paper in my area and suggest the student start reading part of it; I also ask the student to try to do a couple of exercises from a book in my area, suggesting the student pay attention, while doing this, both to whether the material seems to mesh well with the student's talents and also to whether they find it enjoyable/engaging. I find it very informative to see how this goes and to chat with the student after they've grappled a little with this material; I tell them I don't think either one of us should make a definite decision until after they do this. Of course I try to make it clear that it's fine to come back with questions about the material. This also buys me a little time to learn more about the student -- like Lee Mosher mentioned, e.g. how they did in coursework.
Some of the students who have approached me seemed to think combinatorics (my area) would be the easy route through grad school, which I do not believe is true at all -- so I try to communicate that it's an area where on the one hand questions may sometimes have simple statements, but on the other hand may require a lot of ingenuity to solve. Based on this experience with students, I think it's also important to make sure students aren't making decisions based on misconceptions. So I second everything Lee said too. Others here have surely had many more students than I've had so far, but I did have the experience of being approached by a large number of students right after starting a tenure-track job, so I gave this a lot of thought then.

-

I have a strong opinion on this topic: a potential advisor must have an excellent reason to refuse to advise a graduate student who meets certain minimal requirements. The department has accepted the student. Presumably the student has successfully passed prelim exams, etc. The department as a whole has promised to help the student in their pursuit of a PhD. In my opinion, it is inappropriate to tell a student that, although the department promised to help, I personally think you do not deserve / warrant my advice.

There are obvious exceptions: too many current advisees, personality conflicts leading to an unhealthy working relationship, etc. Agreeing to advise is not a guarantee that the student will make progress, and if they do not, both parties should re-evaluate the relationship. However, faculty have an obligation to the PhD candidates admitted to the program. They should accept advisees unless there is an excellent reason to refuse.
- 8 I think there is a difference in how much of this responsibility should fall upon a tenured faculty member vs. a not-yet-tenured assistant professor. – Patricia Hersh Jun 30 at 17:06
- 6 While this is what is asked for, I would just like to stress that this answer refers to a US-style system. And, for example, in various European systems the general situation is completely different. – quid Jun 30 at 17:33
- 12 Whether you think the student deserves your advice is not the only issue in deciding whether to become a student's advisor. You can also fulfill your responsibility to the student by telling them (if it's true) that you don't think you would be their best choice for advisor, and helping them to find an advisor that better fits the student's specific research interests, expectations, working style, level of independence, etc. – JeffE Jul 1 at 1:03
- 5 @JeffE -- It sounds to me like you are addressing a different question than the one posed by the OP. I agree that there are times when a different advisor would be a better fit with a student. Unfortunately, many of the faculty I have met who use "better fit" as a reason to refuse advisees are the same ones who refuse administrative duties, who show up for office hours as the mood strikes them, etc.: it is just an excuse for avoiding a duty they don't want. – Jason Starr Jul 1 at 14:14
- 1 @Jason: that may sometimes be the case, but not always. I think there are also quite a few people who are overwhelmed by the amount of advising, administrative, etc. responsibility that falls upon them, in some cases because others aren't carrying their weight, and I think it's an important issue for such people to figure out how to be effective in deciding which such things to take on and in finding good ways to make sure the other things get done too.
– Patricia Hersh Jul 1 at 14:34

I have a strong opinion on this topic: a potential advisor must have an excellent reason to refuse to advise a graduate student who meets certain minimal requirements.

I have developed an equally strong opinion in the opposite direction: a graduate student should show his ability to surpass the so-called "minimal requirements" by far and large to even start talking about my becoming his adviser. Unless you want to end up defending a PhD yourself a second time with your tongue and hands disconnected from your body and operated by remote control, you'd better make the student undergo a few severe tests over an extended period of time. If he survives, he's worth trying. A good place to start is to give him a tough but self-contained paper in your field and ask him to read it within a month and present it to you.

I do not believe in any "promises" or "obligations" to graduate students. We give them an opportunity to learn and to prove themselves worthy, but that's about it. Sorry for "being argumentative", but since we touched on moral grounds in this question, you should keep in mind that moral standards vary a lot from place to place and from person to person, so I would hate to have you swayed by Jason's argument without being aware that not everyone shares his point of view. In short, make your own choice on the matter. You have as clear a head and as keen eyes when evaluating a potential candidate as everyone else.

- 18 I agree with this quite strongly. We produce too many weak Ph.D.'s, and too few mathematicians I know seem to understand the negative impact of this on our field and ourselves. Most seem to believe that if the person ends up outside of academic mathematics, then no damage is done. I learned during a brief tenure working on Wall Street that the opposite is true. Sharp non-mathematicians can detect weak math Ph.D.'s, and it shows them that a math Ph.D. is not worth as much as they might have thought.
– Deane Yang Jul 1 at 21:18
- 1 I generally support fedja's point of view, and I want to add another argument supporting it. Often, students looking for an adviser have no clear idea of the effort required and your expectations. Giving them an initial taste of your demands is, in my view, very helpful to all involved. If that initial experience is too scary for a prospective student, then maybe it is wiser for him/her to find a different adviser. There is, however, a delicate balance. When I assign some difficult reading and a tight deadline, I also add that he/she can stop by and ask questions. – Liviu Nicolaescu Jul 2 at 16:19

To gauge aptitude, you can look at how the student did on the qualifying exam or exams that are closest to your subject. To gauge interest, you can see if the student has taken advantage of nearby seminars and conferences in your subject, and you can talk to the student about what they got out of those experiences.

-
http://en.wikipedia.org/wiki/Large_numbers
# Large numbers

This article is about large numbers in the sense of numbers that are significantly larger than those ordinarily used in everyday life, for instance in simple counting or in monetary transactions. The term typically refers to large positive integers, or more generally, large positive real numbers, but it may also be used in other contexts.

Very large numbers often occur in fields such as mathematics, cosmology, cryptography and statistical mechanics. Sometimes people refer to numbers as being "astronomically large". However, it is easy to mathematically define numbers that are much larger even than those used in astronomy.

## Using scientific notation to handle large and small numbers

See also: scientific notation, logarithmic scale, and orders of magnitude

Scientific notation was created to handle the wide range of values which occur in scientific study. 1.0 × 10^9, for example, means one billion, a 1 followed by nine zeros: 1 000 000 000, and 1.0 × 10^−9 means one billionth, or 0.000 000 001. Writing 10^9 instead of nine zeros saves readers the effort and hazard of counting a long series of zeros to see how large the number is.
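As an illustration (not part of the original article), most programming languages accept scientific notation directly; a short sketch in Python:

```python
# Scientific notation in Python: the 'e' suffix and the ':e' format spec
# both encode "mantissa x 10^exponent".
billion = 1.0e9      # 1.0 x 10^9
billionth = 1.0e-9   # 1.0 x 10^-9

print(f"{1_000_000_000:.1e}")  # -> 1.0e+09
print(billion * billionth)     # ~ 1.0 (up to floating-point rounding)
```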
## Large numbers in the everyday world

Examples of large numbers describing everyday real-world objects are:

• The number of bits on a computer hard disk (as of 2010[update], typically about 10^13 bits, for 500-1000 GB)
• The estimated number of atoms in the observable Universe (10^80)
• The number of cells in the human body (more than 10^14)
• The number of neuronal connections in the human brain (estimated at 10^14)
• The Avogadro constant, the number of "elementary entities" (usually atoms or molecules) in one mole; the number of atoms in 12 grams of carbon-12 (approximately 6.022 × 10^23 per mole)

## Astronomically large numbers

Other large numbers, as regards length and time, are found in astronomy and cosmology. For example, the current Big Bang model of the Universe suggests that it is 13.8 billion years (4.3 × 10^17 seconds) old, and that the observable universe is 93 billion light years across (8.8 × 10^26 metres), and contains about 5 × 10^22 stars, organized into around 125 billion (1.25 × 10^11) galaxies, according to Hubble Space Telescope observations. There are about 10^80 fundamental particles in the observable universe, by rough estimation.[1]

According to Don Page, physicist at the University of Alberta, Canada, the longest finite time that has so far been explicitly calculated by any physicist is $10^{10^{10^{10^{10^{1.1}}}}} \mbox{ years}$, which corresponds to the scale of an estimated Poincaré recurrence time for the quantum state of a hypothetical box containing a black hole with the estimated mass of the entire universe, observable or not, assuming a certain inflationary model with an inflaton whose mass is 10^−6 Planck masses.[2][3] This time assumes a statistical model subject to Poincaré recurrence.
A much simplified way of thinking about this time: in a model where our universe's history repeats itself arbitrarily many times due to properties of statistical mechanics, this is the time scale at which it will first return to a state somewhat similar (for a reasonable choice of "similar") to its current one.

Combinatorial processes rapidly generate even larger numbers. The factorial function, which defines the number of permutations of a set of fixed objects, grows very rapidly with the number of objects. Stirling's formula gives a precise asymptotic expression for this rate of growth. Combinatorial processes generate very large numbers in statistical mechanics. These numbers are so large that they are typically only referred to using their logarithms.

Gödel numbers, and similar numbers used to represent bit-strings in algorithmic information theory, are very large, even for mathematical statements of reasonable length. However, some pathological numbers are even larger than the Gödel numbers of typical mathematical propositions.

Logician Harvey Friedman has done work related to very large numbers, such as with Kruskal's tree theorem and the Robertson–Seymour theorem.

## Computers and computational complexity

Moore's Law, generally speaking, estimates that the number of transistors on a square inch of a microprocessor will double about every 18 months. This sometimes leads people to believe that eventually, computers will be able to solve any mathematical problem, no matter how complicated (see the Turing test). This is not the case;[citation needed] computers are fundamentally limited by the constraints of physics, and certain upper bounds on what to expect can reasonably be formulated.[according to whom?]
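Stirling's formula mentioned above, $n! \approx \sqrt{2\pi n}\,(n/e)^n$, can be checked numerically; a small sketch:

```python
import math

# Stirling's approximation: n! ~ sqrt(2*pi*n) * (n/e)^n.
# The ratio n! / stirling(n) tends to 1 as n grows (roughly 1 + 1/(12n)).
def stirling(n):
    return math.sqrt(2 * math.pi * n) * (n / math.e) ** n

for n in (10, 50, 100):
    ratio = math.factorial(n) / stirling(n)
    print(n, ratio)   # ratio shrinks toward 1 as n increases
```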
Also, there are certain theoretical results which show that some problems, such as the halting problem, are inherently beyond the reach of complete computational solution, no matter how powerful or fast the computation.

Between 1980 and 2000, hard disk sizes increased from about 10 megabytes ($10^7$ bytes) to over 100 gigabytes ($10^{11}$ bytes). A 100 gigabyte disk could store the given names of all of Earth's seven billion inhabitants without using data compression. But what about a dictionary-on-disk storing all possible passwords containing up to 40 characters? Assuming each character equals one byte, there are about $2^{320}$ such passwords, which is about $2 \times 10^{96}$. In his paper Computational capacity of the universe,[4] Seth Lloyd points out that if every particle in the universe could be used as part of a huge computer, it could store only about $10^{90}$ bits, less than one millionth of the size such a dictionary would require.

However, storing information on hard disk and computing it are very different functions. Storage currently has the limitations stated above, but computational speed is a different matter. It is quite conceivable that the stated limitations regarding storage have no bearing on the limitations of actual computational capacity, especially if the current research into quantum computers results in a "quantum leap" (but see holographic principle).

Still, computers can easily be programmed to start creating and displaying all possible 40-character passwords one at a time. Such a program could be left to run indefinitely. Assuming a modern PC could output 1 billion strings per second, it would take one billionth of $2 \times 10^{96}$ seconds, or $2 \times 10^{87}$ seconds, to complete its task, which is about $6 \times 10^{79}$ years. By contrast, the universe is estimated to be 13.8 billion ($1.38 \times 10^{10}$) years old.
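The arithmetic above is easy to verify with exact integer arithmetic; a minimal sketch, using the article's assumed output rate of $10^9$ strings per second:

```python
# Counting 40-character passwords at one byte (8 bits) per character,
# then estimating enumeration time at a hypothetical 10^9 strings/second.
n_passwords = 2 ** 320                    # 40 * 8 = 320 bits of choices
assert 10**96 < n_passwords < 10**97      # about 2 x 10^96

seconds = n_passwords // 10**9            # one billionth of 2 x 10^96 seconds
years = seconds // (365 * 24 * 3600)
assert 10**79 < years < 10**80            # about 6 x 10^79 years, as stated
```

Python's unbounded integers make the exact count trivial to manipulate, even though the count itself dwarfs any conceivable storage.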
Computers will presumably continue to get faster, but the same paper mentioned before estimates that the entire universe functioning as a giant computer could have performed no more than $10^{120}$ operations since the Big Bang. This is trillions of times more computation than is required for displaying all 40-character passwords, but computing all 50-character passwords would outstrip the estimated computational potential of the entire universe. Problems like this grow exponentially in the number of computations they require, and are one reason why exponentially difficult problems are called "intractable" in computer science: for even small sizes like the 40 or 50 characters described earlier, the number of computations required exceeds even theoretical limits on mankind's computing power. The traditional division between "easy" and "hard" problems is thus drawn between programs that do and do not require exponentially increasing resources to execute.

Such limits are an advantage in cryptography, since any cipher-breaking technique which requires more than, say, the $10^{120}$ operations mentioned before will never be feasible. Such ciphers must be broken by finding efficient techniques unknown to the cipher's designer. Likewise, much of the research throughout all branches of computer science focuses on finding efficient solutions to problems that work with far fewer resources than are required by a naïve solution. For example, one way of finding the greatest common divisor between two 1000-digit numbers is to compute all their factors by trial division. This will take up to $2 \times 10^{500}$ division operations, far too many to contemplate. But the Euclidean algorithm, using a much more efficient technique, takes only a fraction of a second to compute the GCD for even huge numbers such as these.
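The contrast is easy to demonstrate; a sketch of the Euclidean algorithm on roughly 1000-digit integers (the specific numbers and their shared factor are constructed here purely for illustration):

```python
def gcd(a: int, b: int) -> int:
    """Euclidean algorithm: repeatedly replace (a, b) with (b, a mod b)."""
    while b:
        a, b = b, a % b
    return a

# Two ~1000-digit numbers built, for this sketch, to share a known factor d.
d = 10**500 + 57                 # hypothetical common divisor
x = d * (10**499 + 3)
y = d * (10**499 + 9)

# The cofactors differ by 6 and are coprime, so gcd(x, y) is exactly d.
# This finishes in a fraction of a second; trial division never would.
assert gcd(x, y) == d
```

Each step of the loop at least halves one operand every two iterations, so the number of divisions is proportional to the number of digits, not to the size of the numbers themselves.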
As a general rule, then, PCs in 2005 could perform $2^{40}$ calculations in a few minutes. A few thousand PCs working for a few years could solve a problem requiring $2^{64}$ calculations, but no amount of traditional computing power will solve a problem requiring $2^{128}$ operations (which is about what would be required to brute-force the encryption keys in 128-bit SSL commonly used in web browsers, assuming the underlying ciphers remain secure). Limits on computer storage are comparable. Quantum computing may make certain such problems feasible, but it has practical and theoretical challenges which may never be overcome.

## Examples

• $10^{10}$ (10,000,000,000), called "ten billion" in the short scale or "ten milliard" in the long scale.
• googol = $10^{100}$ (10,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000)
• centillion = $10^{303}$ or $10^{600}$, depending on the number naming system
• googolplex = $10^{\text{googol}}=10^{10^{100}}$
• Skewes' numbers: the first is approximately $10^{10^{10^{34}}}$, the second $10^{10^{10^{1000}}}$

The total amount of printed material in the world is roughly $1.6 \times 10^{18}$ bits; the contents can therefore be represented by a number somewhere in the range 0 to roughly

$2^{1.6 \times 10^{18}}\approx 10^{4.8 \times 10^{17}}$

Compare:

• $1.1^{1.1^{1.1^{1000}}} \approx 10^{10^{1.02\times10^{40}}}$
• $1000^{1000^{1000}}\approx 10^{10^{3000.48}}$

The first number is much larger than the second, due to the greater height of its power tower, in spite of the small base 1.1. Comparing the magnitude of each successive exponent in the second number with $10^{10^{10}}$ shows how differently each level affects the final exponent.
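These tower exponents can be recovered with ordinary logarithms, applying $a^b = 10^{b\log_{10}a}$ one level at a time; a sketch:

```python
import math

# Evaluate 1.1^(1.1^(1.1^1000)) level by level, working with log10.
# Innermost: 1.1^1000 = 10^l with
l = 1000 * math.log10(1.1)                 # l ~ 41.39
# Next level: 1.1^(10^l) = 10^(10^l * log10(1.1)) = 10^(10^(l + log10(log10(1.1))))
top = l + math.log10(math.log10(1.1))      # ~ 40.01
assert 40.0 < top < 40.1
# So 1.1^(1.1^1000) = 10^(10^40.01) = 10^(1.02e40); the outermost level only
# subtracts ~1.38 inside the double exponent, leaving ~10^10^(1.02e40).

# 1000^(1000^1000) = 10^(3 * 10^3000) = 10^(10^(3000 + log10 3))
assert abs((3000 + math.log10(3)) - 3000.477) < 0.001
```

Both results match the approximations given above, and the computation never leaves ordinary double-precision floats.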
## Systematically creating ever faster increasing sequences

Main article: fast-growing hierarchy

Given a strictly increasing integer sequence/function $f_0(n)$ (n ≥ 1) we can produce a faster growing sequence $f_1(n) = f_0^n(n)$ (where the superscript n denotes the nth functional power). This can be repeated any number of times by letting $f_k(n) = f_{k-1}^n(n)$, each sequence growing much faster than the one before it. Then we could define $f_\omega(n) = f_n(n)$, which grows much faster than any $f_k$ for finite k (here ω is the first infinite ordinal number, representing the limit of all finite numbers k). This is the basis for the fast-growing hierarchy of functions, in which the indexing subscript is extended to ever-larger ordinals.

For example, starting with $f_0(n) = n + 1$:

• $f_1(n) = f_0^n(n) = n + n = 2n$
• $f_2(n) = f_1^n(n) = 2^n n > 2 \uparrow n$ for n ≥ 2 (using Knuth up-arrow notation)
• $f_3(n) = f_2^n(n) > (2\uparrow)^n n \ge 2 \uparrow^2 n$ for n ≥ 2
• $f_{k+1}(n) > 2 \uparrow^k n$ for n ≥ 2, k < ω
• $f_\omega(n) = f_n(n) > 2 \uparrow^{n-1} n > 2 \uparrow^{n-2} (n + 3) - 3 = A(n, n)$ for n ≥ 2, where A is the Ackermann function (of which $f_\omega$ is a unary version)
• $f_{\omega+1}(64) > f_\omega^{64}(6) >$ Graham's number ($= g_{64}$ in the sequence defined by $g_0 = 4$, $g_{k+1} = 3 \uparrow^{g_k} 3$)
• This follows by noting $f_\omega(n) > 2 \uparrow^{n-1} n > 3 \uparrow^{n-2} 3 + 2$, and hence $f_\omega(g_k + 2) > g_{k+1} + 2$
• $f_\omega(n) > 2 \uparrow^{n-1} n = (2 \to n \to n{-}1) = (2 \to n \to n{-}1 \to 1)$ (using Conway chained arrow notation)
• $f_{\omega+1}(n) = f_\omega^n(n) > (2 \to n \to n{-}1 \to 2)$ (because if $g_k(n) = X \to n \to k$ then $X \to n \to k{+}1 = g_k^n(1)$)
• $f_{\omega+k}(n) > (2 \to n \to n{-}1 \to k{+}1) > (n \to n \to k)$
• $f_{\omega 2}(n) = f_{\omega+n}(n) > (n \to n \to n) = (n \to n \to n \to 1)$
• $f_{\omega 2 + k}(n) > (n \to n \to n \to k)$
• $f_{\omega 3}(n) > (n \to n \to n \to n)$
• $f_{\omega k}(n) > (n \to n \to \dots \to n \to n)$ (chain of k+1 n's)
• $f_{\omega^2}(n) = f_{\omega n}(n) > (n \to n \to \dots \to n \to n)$ (chain of n+1 n's)

## Standardized system of writing very large numbers

A standardized way of writing very large numbers allows them to be easily sorted in increasing order, and one can get a good idea of how much larger a number is than another one.
To compare numbers in scientific notation, say $5\times 10^4$ and $2\times 10^5$, compare the exponents first: here 5 > 4, so $2\times 10^5 > 5\times 10^4$. If the exponents are equal, compare the mantissas (coefficients): $5\times 10^4 > 2\times 10^4$ because 5 > 2.

Tetration with base 10 gives the sequence $10 \uparrow \uparrow n=10 \to n \to 2=(10\uparrow)^n 1$, the power towers of numbers 10, where $(10\uparrow)^n$ denotes a functional power of the function $f(n)=10^n$ (the function also expressed by the suffix "-plex" as in googolplex, see the Googol family).

These are very round numbers, each representing an order of magnitude in a generalized sense. A crude way of specifying how large a number is, is specifying between which two numbers in this sequence it is.

More accurately, numbers in between can be expressed in the form $(10\uparrow)^n a$, i.e., with a power tower of 10s and a number at the top, possibly in scientific notation, e.g. $10^{10^{10^{10^{10^{4.829}}}}}$, a number between $10\uparrow\uparrow 5$ and $10\uparrow\uparrow 6$ (note that $10 \uparrow\uparrow n < (10\uparrow)^n a < 10 \uparrow\uparrow (n+1)$ if $1 < a < 10$). (See also extension of tetration to real heights.)

Thus googolplex is $10^{10^{100}} = (10\uparrow)^2 100 = (10\uparrow)^3 2$

Another example:

$2 \uparrow\uparrow\uparrow 4 = \begin{matrix} \underbrace{2_{}^{2^{{}^{.\,^{.\,^{.\,^2}}}}}}\\ \qquad\quad\ \ \ 65,536\mbox{ copies of }2 \end{matrix} \approx (10\uparrow)^{65,531}(6.0 \times 10^{19,728}) \approx (10\uparrow)^{65,533} 4.3$

(between $10\uparrow\uparrow 65,533$ and $10\uparrow\uparrow 65,534$)

Thus the "order of magnitude" of a number (on a larger scale than usually meant) can be characterized by the number of times n one has to take the $\log_{10}$ to get a number between 1 and 10. Thus, the number is between $10\uparrow\uparrow n$ and $10\uparrow\uparrow (n+1)$.
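This characterization can be computed directly for numbers small enough to hold as exact integers; a minimal sketch (the function name is mine):

```python
import math

def tower_height(n):
    """Count how many times log10 must be applied to n to land below 10.
    Returns (h, a): n is roughly (10^)^h a, between 10^^h and 10^^(h+1)."""
    h, x = 0, n
    while x >= 10:
        x = math.log10(x)   # math.log10 accepts arbitrarily large Python ints
        h += 1
    return h, x

h, a = tower_height(10**100)     # googol = (10^)^2 2
assert h == 2 and abs(a - 2.0) < 1e-9

h, a = tower_height(2**65536)    # 2^^5, listed later as ~ (10^)^2 4.3
assert h == 2 and 4.2 < a < 4.4
```

Once the value drops below the float range the loop proceeds in ordinary floating point, so only the first one or two iterations touch the big integer.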
As explained, a more accurate description of a number also specifies the value of this number between 1 and 10, or the previous number (taking the logarithm one time less) between 10 and $10^{10}$, or the next, between 0 and 1.

Note that

$10^{(10\uparrow)^{n}x}=(10\uparrow)^{n}10^x$

I.e., if a number x is too large for a representation $(10\uparrow)^{n}x$ we can make the power tower one higher, replacing x by $\log_{10}x$, or find x from the lower-tower representation of the $\log_{10}$ of the whole number. If the power tower would contain one or more numbers different from 10, the two approaches would lead to different results, corresponding to the fact that extending the power tower with a 10 at the bottom is then not the same as extending it with a 10 at the top (but, of course, similar remarks apply if the whole power tower consists of copies of the same number, different from 10).

If the height of the tower is large, the various representations for large numbers can be applied to the height itself. If the height is given only approximately, giving a value at the top does not make sense, so we can use the double-arrow notation, e.g. $10\uparrow\uparrow(7.21\times 10^8)$. If the value after the double arrow is a very large number itself, the above can recursively be applied to that value.

Examples:

$10\uparrow\uparrow 10^{\,\!10^{10^{3.81\times 10^{17}}}}$ (between $10\uparrow\uparrow\uparrow 2$ and $10\uparrow\uparrow\uparrow 3$)

$10\uparrow\uparrow 10\uparrow\uparrow (10\uparrow)^{497}(9.73\times 10^{32})=(10\uparrow\uparrow)^{2} (10\uparrow)^{497}(9.73\times 10^{32})$ (between $10\uparrow\uparrow\uparrow 4$ and $10\uparrow\uparrow\uparrow 5$)

Similarly to the above, if the exponent of $(10\uparrow)$ is not exactly given then giving a value at the right does not make sense, and we can, instead of using the power notation of $(10\uparrow)$, add 1 to the exponent of $(10\uparrow\uparrow)$, so we get e.g. $(10\uparrow\uparrow)^{3} (2.8\times 10^{12})$.
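The replacement of x by $\log_{10}x$ described above can be mechanized; a small sketch (the function name `normalize` and the sample values are illustrative, not from the text):

```python
import math

def normalize(n, x):
    """Rewrite (10^)^n x so that the top value lies in [1, 10),
    using 10^((10^)^n x) = (10^)^n 10^x: replacing x by log10(x) raises n by 1."""
    while x >= 10:
        x = math.log10(x)
        n += 1
    return n, x

# (10^)^2 (2.8e12) becomes a taller tower with a small number on top:
n, x = normalize(2, 2.8e12)
assert n == 4 and 1 <= x < 10
```

This is the same move in reverse: a number given with a large top value is traded for a power tower one or two levels higher.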
If the exponent of $(10\uparrow \uparrow)$ is large, the various representations for large numbers can be applied to this exponent itself. If this exponent is not exactly given then, again, giving a value at the right does not make sense, and we can, instead of using the power notation of $(10\uparrow \uparrow)$, use the triple arrow operator, e.g. $10\uparrow\uparrow\uparrow(7.3\times 10^{6})$. If the right-hand argument of the triple arrow operator is large the above applies to it, so we have e.g. $10\uparrow\uparrow\uparrow(10\uparrow\uparrow)^{2} (10\uparrow)^{497}(9.73\times 10^{32})$ (between $10\uparrow\uparrow\uparrow 10\uparrow\uparrow\uparrow 4$ and $10\uparrow\uparrow\uparrow 10\uparrow\uparrow\uparrow 5$). This can be done recursively, so we can have a power of the triple arrow operator.

We can proceed with operators with higher numbers of arrows, written $\uparrow^n$. Compare this notation with the hyper operator and the Conway chained arrow notation:

$a\uparrow^n b$ = ( a → b → n ) = hyper(a, n + 2, b)

An advantage of the first is that, when considered as a function of b, there is a natural notation for powers of this function (just like when writing out the n arrows): $(a\uparrow^n)^k b$. For example:

$(10\uparrow^2)^3 b$ = ( 10 → ( 10 → ( 10 → b → 2 ) → 2 ) → 2 )

and only in special cases does the long nested chain notation reduce; for b = 1 we get:

$10\uparrow^3 3 = (10\uparrow^2)^3 1$ = ( 10 → 3 → 3 )

Since b can also be very large, in general we write a number with a sequence of powers $(10 \uparrow^n)^{k_n}$ with decreasing values of n (with exactly given integer exponents ${k_n}$) and at the end a number in ordinary scientific notation. Whenever a ${k_n}$ is too large to be given exactly, the value of ${k_{n+1}}$ is increased by 1 and everything to the right of $(10\uparrow^{n+1})^{k_{n+1}}$ is rewritten.

For describing numbers approximately, deviations from the decreasing order of values of n are not needed.
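The $a\uparrow^n b$ operator itself is a two-line recursion, usable only for tiny arguments since the values explode immediately; a sketch (the function name `arrow` is mine):

```python
def arrow(a, n, b):
    """Knuth's a (n arrows) b:  a ^n b = a ^(n-1) (a ^n (b-1)),
    with a ^1 b = a**b and a ^n 0 = 1."""
    if n == 1:
        return a ** b
    if b == 0:
        return 1
    return arrow(a, n - 1, arrow(a, n, b - 1))

assert arrow(10, 1, 2) == 100              # 10^2
assert arrow(3, 2, 3) == 7625597484987     # 3^^3
assert arrow(2, 2, 4) == 65536             # 2^^4
assert arrow(2, 3, 3) == 65536             # 2^^^3 = 2^^4
```

Anything much beyond these test cases (e.g. `arrow(3, 3, 3)`) already exceeds what can be stored, which is the point of the notation.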
For example, $10 \uparrow (10 \uparrow \uparrow)^5 a=(10 \uparrow \uparrow)^6 a$, and $10 \uparrow (10 \uparrow \uparrow \uparrow 3)=10 \uparrow \uparrow (10 \uparrow \uparrow 10 + 1)\approx 10 \uparrow \uparrow \uparrow 3$. Thus we have the somewhat counterintuitive result that a number x can be so large that, in a way, x and $10^x$ are "almost equal" (for arithmetic of large numbers see also below).

If the superscript of the upward arrow is large, the various representations for large numbers can be applied to this superscript itself. If this superscript is not exactly given then there is no point in raising the operator to a particular power or in adjusting the value on which it acts. We can simply use a standard value at the right, say 10, and the expression reduces to $10 \uparrow^n 10=(10 \to 10 \to n)$ with an approximate n. For such numbers the advantage of using the upward arrow notation no longer applies, and we can also use the chain notation.

The above can be applied recursively for this n, so we get the notation $\uparrow^n$ in the superscript of the first arrow, etc., or we have a nested chain notation, e.g.:

(10 → 10 → (10 → 10 → $3 \times 10^5$) ) = $10 \uparrow ^{10 \uparrow ^{3 \times 10^5} 10} 10 \!$

If the number of levels gets too large to be convenient, a notation is used where this number of levels is written down as a number (like using the superscript of the arrow instead of writing many arrows). Introducing a function $f(n)=10 \uparrow^{n} 10$ = (10 → 10 → n), these levels become functional powers of f, allowing us to write a number in the form $f^m(n)$ where m is given exactly and n is an integer which may or may not be given exactly (for the example: $f^2(3 \times 10^5)$). If n is large we can use any of the above for expressing it. The "roundest" of these numbers are those of the form $f^m(1)$ = (10→10→m→2).
For example, $(10 \to 10 \to 3\to 2) = 10 \uparrow ^{10 \uparrow ^{10^{10}} 10} 10 \!$

Compare the definition of Graham's number: it uses numbers 3 instead of 10 and has 64 arrow levels and the number 4 at the top; thus $G < 3\rightarrow 3\rightarrow 65\rightarrow 2 <(10 \to 10 \to 65\to 2)=f^{65}(1)$, but also $G < f^{64}(4)<f^{65}(1)$.

If m in $f^m(n)$ is too large to give exactly, we can use a fixed n, e.g. n = 1, and apply the above recursively to m, i.e., the number of levels of upward arrows is itself represented in the superscripted upward-arrow notation, etc. Using the functional power notation of f this gives multiple levels of f. Introducing a function $g(n)=f^{n}(1)$, these levels become functional powers of g, allowing us to write a number in the form $g^m(n)$ where m is given exactly and n is an integer which may or may not be given exactly. We have (10→10→m→3) = $g^m(1)$. If n is large we can use any of the above for expressing it.

Similarly we can introduce a function h, etc. If we need many such functions it is better to number them instead of using a new letter every time, e.g. as a subscript, so we get numbers of the form $f_k^m(n)$ where k and m are given exactly and n is an integer which may or may not be given exactly. Using k=1 for the f above, k=2 for g, etc., we have (10→10→n→k) = $f_k(n)=f_{k-1}^n(1)$. If n is large we can use any of the above for expressing it. Thus we get a nesting of forms ${f_k}^{m_k}$ where going inward the k decreases, and with as inner argument a sequence of powers $(10 \uparrow^n)^{p_n}$ with decreasing values of n (where all these numbers are exactly given integers) and at the end a number in ordinary scientific notation. When k is too large to be given exactly, the number concerned can be expressed as ${f_n}(10)$=(10→10→10→n) with an approximate n.
Note that the process of going from the sequence $10^{n}$=(10→n) to the sequence $10 \uparrow^n 10$=(10→10→n) is very similar to going from the latter to the sequence ${f_n}(10)$=(10→10→10→n): it is the general process of adding an element 10 to the chain in the chain notation; this process can be repeated again (see also the previous section). Numbering the subsequent versions of this function, a number can be described using functions ${f_{qk}}^{m_{qk}}$, nested in lexicographical order with q the most significant number, but with decreasing order for q and for k; as inner argument we have a sequence of powers $(10 \uparrow^n)^{p_n}$ with decreasing values of n (where all these numbers are exactly given integers) and at the end a number in ordinary scientific notation.

For a number too large to write down in the Conway chained arrow notation we can describe how large it is by the length of that chain, for example only using elements 10 in the chain; in other words, we specify its position in the sequence 10, 10→10, 10→10→10, ... If even the position in the sequence is a large number we can apply the same techniques again for that.
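The chain notation's evaluation rules can likewise be written as a short recursion, again usable only for tiny chains. This sketch assumes the standard reduction rules: a trailing 1 can be dropped, a → b = a^b, X → 1 → q = X, and X → p → q = X → (X → (p−1) → q) → (q−1):

```python
def chain(c):
    """Evaluate a Conway chained-arrow expression c[0] -> c[1] -> ... -> c[-1]."""
    if len(c) == 1:
        return c[0]
    if c[-1] == 1:                  # X -> 1  =  X
        return chain(c[:-1])
    if len(c) == 2:                 # a -> b  =  a^b
        return c[0] ** c[1]
    if c[-2] == 1:                  # X -> 1 -> q  =  X
        return chain(c[:-2])
    p, q = c[-2], c[-1]
    # X -> p -> q  =  X -> (X -> (p-1) -> q) -> (q-1)
    return chain(c[:-2] + [chain(c[:-2] + [p - 1, q]), q - 1])

assert chain([3, 3]) == 27
assert chain([2, 2, 2]) == 4                # 2^^2
assert chain([2, 3, 2]) == 16               # 2^^3
assert chain([3, 3, 2]) == 7625597484987    # 3^^3
```

Since a → b → n equals $a\uparrow^n b$, this agrees with the up-arrow recursion on chains of length three.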
### Examples of numbers in numerical order

Numbers expressible in decimal notation:

• $2^2$ = 4
• $2^{2^2}$ = 2 ↑↑ 3 = 16
• $3^3$ = 27
• $4^4$ = 256
• $5^5$ = 3125
• $6^6$ = 46,656
• $2^{2^{2^{2}}}$ = 2 ↑↑ 4 = 2 ↑↑↑ 3 = 65,536
• $7^7$ = 823,543
• $10^6$ = 1,000,000 = 1 million
• $8^8$ = 16,777,216
• $9^9$ = 387,420,489
• $10^9$ = 1,000,000,000 = 1 billion
• $10^{10}$ = 10,000,000,000
• $10^{12}$ = 1,000,000,000,000 = 1 trillion
• $3^{3^3}$ = 3 ↑↑ 3 = 7,625,597,484,987 ≈ $7.63 \times 10^{12}$
• $10^{15}$ = 1,000,000,000,000,000 = 1 million billion = 1 quadrillion

Numbers expressible in scientific notation:

• googol = $10^{100}$
• $4^{4^4}$ = 4 ↑↑ 3 ≈ $1.34 \times 10^{154} \approx (10 \uparrow)^2 2.2$
• Approximate number of Planck volumes comprising the volume of the observable universe = $8.5 \times 10^{184}$
• $5^{5^5}$ = 5 ↑↑ 3 ≈ $1.91 \times 10^{2184} \approx (10 \uparrow)^2 3.3$
• $2^{2^{2^{2^2}}} = 2 \uparrow \uparrow 5 = 2^{65,536} \approx 2.0 \times 10^{19,729} \approx (10 \uparrow)^2 4.3$
• $6^{6^6}$ = 6 ↑↑ 3 ≈ $2.66 \times 10^{36,305} \approx (10 \uparrow)^2 4.6$
• $7^{7^7}$ = 7 ↑↑ 3 ≈ $3.76 \times 10^{695,974} \approx (10 \uparrow)^2 5.8$
• $M_{43,112,609} \approx 3.16 \times 10^{12,978,188} \approx 10^{10^{7.1}} = (10 \uparrow)^2 7.1$, the 47th and, as of December 2011, the largest known Mersenne prime.
• $8^{8^8}$ = 8 ↑↑ 3 ≈ $6.01 \times 10^{15,151,335} \approx (10 \uparrow)^2 7.2$
• $9^{9^9}$ = 9 ↑↑ 3 ≈ $4.28 \times 10^{369,693,099} \approx (10 \uparrow)^2 8.6$
• $10^{10^{10}}$ = 10 ↑↑ 3 = $10^{10,000,000,000} = (10 \uparrow)^3 1$
• $3^{3^{3^{3}}} = 3 \uparrow \uparrow 4 \approx 1.26 \times 10^{3,638,334,640,024} \approx (10 \uparrow)^3 1.10$

Numbers expressible in $(10 \uparrow)^n k$ notation:

• googolplex = $10^{10^{100}} = (10 \uparrow)^3 2$
• $2^{2^{2^{2^{2^2}}}} = 2 \uparrow \uparrow 6 = 2^{2^{65,536}} \approx 2^{(10 \uparrow)^2 4.3} \approx 10^{(10 \uparrow)^2 4.3} = (10 \uparrow)^3 4.3$
• $10^{10^{10^{10}}}=10 \uparrow \uparrow 4=(10 \uparrow)^4 1$
• $3^{3^{3^{3^3}}} = 3 \uparrow \uparrow 5 \approx 3^{10^{3.6 \times 10^{12}}} \approx (10 \uparrow)^4 1.10$
• $2^{2^{2^{2^{2^{2^2}}}}} = 2 \uparrow \uparrow 7 \approx (10 \uparrow)^4 4.3$
• $10 \uparrow\uparrow 5 = (10 \uparrow)^5 1$
• $3 \uparrow\uparrow 6 \approx (10 \uparrow)^5 1.10$
• $2 \uparrow\uparrow 8 \approx (10 \uparrow)^5 4.3$
• $10 \uparrow\uparrow 6 = (10 \uparrow)^6 1$
• $10 \uparrow\uparrow\uparrow 2 = 10 \uparrow\uparrow 10 = (10 \uparrow)^{10} 1$
• $2 \uparrow\uparrow\uparrow\uparrow 3 = 2 \uparrow\uparrow\uparrow 4 = 2 \uparrow\uparrow 65,536 \approx (10 \uparrow)^{65,533} 4.3$, between $10 \uparrow\uparrow 65,533$ and $10 \uparrow\uparrow 65,534$

Bigger numbers:

• $3 \uparrow\uparrow\uparrow 3 = 3 \uparrow\uparrow (3 \uparrow\uparrow 3) \approx 3 \uparrow\uparrow 7.6 \times 10^{12} \approx 10 \uparrow\uparrow 7.6 \times 10^{12}$, between $(10 \uparrow\uparrow)^2 2$ and $(10 \uparrow\uparrow)^2 3$
• $10\uparrow\uparrow\uparrow 3=(10 \uparrow \uparrow)^3 1$ = ( 10 → 3 → 3 )
• $(10\uparrow\uparrow)^2 11$
• $(10\uparrow\uparrow)^2 10^{\,\!10^{10^{3.81\times 10^{17}}}}$
• $10\uparrow\uparrow\uparrow 4=(10 \uparrow \uparrow)^4 1$ = ( 10 → 4 → 3 )
• $(10\uparrow\uparrow)^{2} (10\uparrow)^{497}(9.73\times 10^{32})$
• $10\uparrow\uparrow\uparrow 5=(10 \uparrow \uparrow)^5 1$ = ( 10 → 5 → 3 )
• $10\uparrow\uparrow\uparrow 6=(10 \uparrow \uparrow)^6 1$ = ( 10 → 6 → 3 )
• $10\uparrow\uparrow\uparrow 7=(10 \uparrow \uparrow)^7 1$ = ( 10 → 7 → 3 )
• $10\uparrow\uparrow\uparrow 8=(10 \uparrow \uparrow)^8 1$ = ( 10 → 8 → 3 )
• $10\uparrow\uparrow\uparrow 9=(10 \uparrow \uparrow)^9 1$ = ( 10 → 9 → 3 )
• $10 \uparrow \uparrow \uparrow \uparrow 2 = 10\uparrow\uparrow\uparrow 10=(10 \uparrow \uparrow)^{10} 1$ = ( 10 → 2 → 4 ) = ( 10 → 10 → 3 )
• The first term in the definition of Graham's number,
$g_1 = 3 \uparrow\uparrow\uparrow\uparrow 3 = 3 \uparrow\uparrow\uparrow (3 \uparrow\uparrow\uparrow 3) \approx 3 \uparrow\uparrow\uparrow (10 \uparrow\uparrow 7.6 \times 10^{12}) \approx 10 \uparrow\uparrow\uparrow (10 \uparrow\uparrow 7.6 \times 10^{12})$, which is between $(10 \uparrow\uparrow\uparrow)^2 2$ and $(10 \uparrow\uparrow\uparrow)^2 3$ (See Graham's number#Magnitude)
• $10\uparrow\uparrow\uparrow\uparrow 3=(10 \uparrow \uparrow\uparrow)^3 1$ = ( 10 → 3 → 4 )
• $4 \uparrow \uparrow \uparrow \uparrow 4$ = ( 4 → 4 → 4 ) $\approx (10 \uparrow \uparrow \uparrow)^2 (10 \uparrow \uparrow)^3 154$
• $10\uparrow\uparrow\uparrow\uparrow 4=(10 \uparrow \uparrow\uparrow)^4 1$ = ( 10 → 4 → 4 )
• $10\uparrow\uparrow\uparrow\uparrow 5=(10 \uparrow \uparrow\uparrow)^5 1$ = ( 10 → 5 → 4 )
• $10\uparrow\uparrow\uparrow\uparrow 6=(10 \uparrow \uparrow\uparrow)^6 1$ = ( 10 → 6 → 4 )
• $10\uparrow\uparrow\uparrow\uparrow 7=(10 \uparrow \uparrow\uparrow)^7 1$ = ( 10 → 7 → 4 )
• $10\uparrow\uparrow\uparrow\uparrow 8=(10 \uparrow \uparrow\uparrow)^8 1$ = ( 10 → 8 → 4 )
• $10\uparrow\uparrow\uparrow\uparrow 9=(10 \uparrow \uparrow\uparrow)^9 1$ = ( 10 → 9 → 4 )
• $10 \uparrow \uparrow \uparrow \uparrow \uparrow 2 = 10\uparrow\uparrow\uparrow\uparrow 10=(10 \uparrow \uparrow\uparrow)^{10} 1$ = ( 10 → 2 → 5 ) = ( 10 → 10 → 4 )
• ( 2 → 3 → 2 → 2 ) = ( 2 → 3 → 8 )
• ( 3 → 2 → 2 → 2 ) = ( 3 → 2 → 9 ) = ( 3 → 3 → 8 )
• ( 10 → 10 → 10 ) = ( 10 → 2 → 11 )
• ( 10 → 2 → 2 → 2 ) = ( 10 → 2 → 100 )
• ( 10 → 10 → 2 → 2 ) = ( 10 → 2 → $10^{10}$ ) = $10 \uparrow ^{10^{10}} 10 \!$
• The second term in the definition of Graham's number, $g_2 = 3 \uparrow^{g_1} 3 > 10 \uparrow^{g_1 - 1} 10$.
• ( 10 → 10 → 3 → 2 ) = (10 → 10 → (10 → 10 → $10^{10}$) ) = $10 \uparrow ^{10 \uparrow ^{10^{10}} 10} 10 \!$
• $g_3 = (3 \to 3 \to g_2) > (10 \to 10 \to g_2 - 1) > (10 \to 10 \to 3 \to 2)$
• $g_4 = (3 \to 3 \to g_3) > (10 \to 10 \to g_3 - 1) > (10 \to 10 \to 4 \to 2)$
• ...
• $g_9 = (3 \to 3 \to g_8)$ is between ( 10 → 10 → 9 → 2 ) and ( 10 → 10 → 10 → 2 )
• ( 10 → 10 → 10 → 2 )
• $g_{10} = (3 \to 3 \to g_9)$ is between ( 10 → 10 → 10 → 2 ) and ( 10 → 10 → 11 → 2 )
• ...
• $g_{63} = (3 \to 3 \to g_{62})$ is between ( 10 → 10 → 63 → 2 ) and ( 10 → 10 → 64 → 2 )
• ( 10 → 10 → 64 → 2 )
• Graham's number, $g_{64}$[5]
• ( 10 → 10 → 65 → 2 )
• ( 10 → 10 → 10 → 3 )
• ( 10 → 10 → 10 → 4 )

## Comparison of base values

The following illustrates the effect of a base different from 10, namely base 100. It also illustrates representations of numbers and the arithmetic.

$100^{12}=10^{24}$, with base 10 the exponent is doubled.

$100^{100^{12}}=10^{2\times 10^{24}}$, ditto.

$100^{100^{100^{12}}}=10^{10^{2\times 10^{24}+0.3}}$, the highest exponent is very little more than doubled.

• $100\uparrow\uparrow 2=10^ {200}$
• $100\uparrow\uparrow 3=10^ {2 \times 10^ {200}}$
• $100\uparrow\uparrow 4=(10\uparrow)^2 (2 \times 10^ {200}+0.3)=(10\uparrow)^2 (2\times 10^ {200})=(10\uparrow)^3 200.3=(10\uparrow)^4 2.3$
• $100\uparrow\uparrow n=(10\uparrow)^{n-2} (2 \times 10^ {200})=(10\uparrow)^{n-1} 200.3=(10\uparrow)^{n}2.3<10\uparrow\uparrow (n+1)$ (thus if n is large it seems fair to say that $100\uparrow\uparrow n$ is "approximately equal to" $10\uparrow\uparrow n$)
• $100\uparrow\uparrow\uparrow 2=(10\uparrow)^{98} (2 \times 10^ {200})=(10\uparrow)^{100} 2.3$
• $100\uparrow\uparrow\uparrow 3=10\uparrow\uparrow(10\uparrow)^{98} (2 \times 10^ {200})=10\uparrow\uparrow(10\uparrow)^{100} 2.3$
• $100\uparrow\uparrow\uparrow n=(10\uparrow\uparrow)^{n-2}(10\uparrow)^{98} (2 \times 10^ {200})=(10\uparrow\uparrow)^{n-2}(10\uparrow)^{100} 2.3<10\uparrow\uparrow\uparrow (n+1)$ (compare $10\uparrow\uparrow\uparrow n=(10\uparrow\uparrow)^{n-2}(10\uparrow)^{10}1<10\uparrow\uparrow\uparrow (n+1)$; thus if n is large it seems fair to say that $100\uparrow\uparrow\uparrow n$ is "approximately equal to" $10\uparrow\uparrow\uparrow n$)
• $100\uparrow\uparrow\uparrow\uparrow 2=(10\uparrow\uparrow)^{98}(10\uparrow)^{100} 2.3$ (compare $10\uparrow\uparrow\uparrow\uparrow 2=(10\uparrow\uparrow)^{8}(10\uparrow)^{10}1$)
• $100\uparrow\uparrow\uparrow\uparrow 3=10\uparrow\uparrow\uparrow(10\uparrow\uparrow)^{98}(10\uparrow)^{100} 2.3$ (compare $10\uparrow\uparrow\uparrow\uparrow 3=10\uparrow\uparrow\uparrow(10\uparrow\uparrow)^{8}(10\uparrow)^{10}1$)
• $100\uparrow\uparrow\uparrow\uparrow n=(10\uparrow\uparrow\uparrow)^{n-2}(10\uparrow\uparrow)^{98}(10\uparrow)^{100} 2.3$ (compare $10\uparrow\uparrow\uparrow\uparrow n=(10\uparrow\uparrow\uparrow)^{n-2}(10\uparrow\uparrow)^{8}(10\uparrow)^{10}1$; if n is large this is "approximately" equal)

## Accuracy

For a number $10^n$, one unit change in n changes the result by a factor of 10. In a number like $10^{\,\!6.2 \times 10^3}$, with the 6.2 the result of proper rounding using significant figures, the true value of the exponent may be 50 less or 50 more. Hence the result may be a factor $10^{50}$ too large or too small. This seems like extremely poor accuracy, but for such a large number it may be considered fair (a large error in a large number may be "relatively small" and therefore acceptable).

### Accuracy for very large numbers

In the case of an approximation of an extremely large number, the relative error may be large, yet there may still be a sense in which we want to consider the numbers as "close in magnitude". For example, consider

$10^{10}$ and $10^9$

The relative error is

$1 - \frac{10^9}{10^{10}} = 1 - \frac{1}{10} = 90\%$

a large relative error. However, we can also consider the relative error in the logarithms; in this case, the logarithms (to base 10) are 10 and 9, so the relative error in the logarithms is only 10%.

The point is that exponential functions magnify relative errors greatly: if a and b have a small relative error, then $10^a$ and $10^b$ have a larger relative error, and $10^{10^a}$ and $10^{10^b}$ will have an even larger relative error. The question then becomes: on which level of iterated logarithms do we wish to compare two numbers? There is a sense in which we may want to consider $10^{10^{10}}$ and $10^{10^9}$ to be "close in magnitude".
The relative error between these two numbers is large, and the relative error between their logarithms is still large; however, the relative error in their second-iterated logarithms is small:

$\log_{10}(\log_{10}(10^{10^{10}})) = 10$ and $\log_{10}(\log_{10}(10^{10^9})) = 9$

Such comparisons of iterated logarithms are common, e.g., in analytic number theory.

### Approximate arithmetic for very large numbers

There are some general rules relating to the usual arithmetic operations performed on very large numbers:

• The sum and the product of two very large numbers are both "approximately" equal to the larger one.
• $(10^a)^{\,\!10^b}=10^{a 10^b}=10^{10^{b+\log _{10} a}}$

Hence:

• A very large number raised to a very large power is "approximately" equal to the larger of the following two values: the first value and 10 to the power the second. For example, for very large n we have $n^n\approx 10^n$ (see e.g. the computation of mega) and also $2^n\approx 10^n$. Thus $2\uparrow\uparrow 65536 \approx 10\uparrow\uparrow 65533$; see the examples above.

## Large numbers in some noncomputable sequences

The busy beaver function Σ is an example of a function which grows faster than any computable function. Its value for even relatively small input is huge. The values of Σ(n) for n = 1, 2, 3, 4 are 1, 4, 6, 13 (sequence in the OEIS). Σ(5) is not known but is definitely ≥ 4098. Σ(6) is at least $3.5 \times 10^{18267}$.

## Infinite numbers

Main article: cardinal number
See also: large cardinal, Mahlo cardinal, and totally indescribable cardinal

Although all the numbers above are very large, they are all still decidedly finite. Certain fields of mathematics define infinite and transfinite numbers. For example, aleph-null is the cardinality of the infinite set of natural numbers, and aleph-one is the next greatest cardinal number. $\mathfrak{c}$ is the cardinality of the reals. The proposition that $\mathfrak{c} = \aleph_1$ is known as the continuum hypothesis.
## Notations Some notations for extremely large numbers: • Knuth's up-arrow notation / hyper operators / Ackermann function, including tetration • Conway chained arrow notation • Steinhaus-Moser notation; apart from the method of construction of large numbers, this also involves a graphical notation with polygons; alternative notations, like a more conventional function notation, can also be used with the same functions. These notations are essentially functions of integer variables, which increase very rapidly with those integers. Ever faster increasing functions can easily be constructed recursively by applying these functions with large integers as argument. Note that a function with a vertical asymptote is not helpful in defining a very large number, although the function increases very rapidly: one has to define an argument very close to the asymptote, i.e. use a very small number, and constructing that is equivalent to constructing a very large number, e.g. the reciprocal. ## Notes and references 1. Information Loss in Black Holes and/or Conscious Beings?, Don N. Page, Heat Kernel Techniques and Quantum Gravity (1995), S. A. Fulling (ed), p. 461. Discourses in Mathematics and its Applications, No. 4, Texas A&M University Department of Mathematics. arXiv:hep-th/9411193. ISBN 0-9630728-3-8. 2. Lloyd, Seth (2002). "Computational capacity of the universe". Phys. Rev. Lett. 88 (23): 237901. arXiv:quant-ph/0110141. Bibcode:2002PhRvL..88w7901L. doi:10.1103/PhysRevLett.88.237901. PMID 12059399. 3. Regarding the comparison with the previous value: $10\uparrow ^n 10 < 3 \uparrow ^{n+1} 3$, so starting the 64 steps with 1 instead of 4 more than compensates for replacing the numbers 3 by 10
http://mathoverflow.net/questions/109048?sort=votes
## non-artificial examples of non-smooth and non-admissible representations of GL_2

Let $F$ be a finite degree extension of $\mathbf{Q}_p$ and consider the locally profinite group $G:=GL_2(\mathbf{Q}_p)$.

P1: Give an interesting example (a non-artificial one, i.e., one that arises in real life for a representation theorist) of a non-smooth representation $\rho$ of $G$ on a topological $\mathbf{C}$-vector space $V$. Here I would like the representation $\rho:G\rightarrow Aut(V)$ to be at least continuous in the following sense: for every $v\in V$ I want the orbit map $\rho^v:G\rightarrow V$, $g\mapsto \rho(g)(v)$, to be continuous.

P2: Give an interesting example of a smooth representation of $G$ on a topological $\mathbf{C}$-vector space which is not admissible.

added: Is it possible to construct such representations by inducing from an appropriate closed subgroup of $G$?

-

## 2 Answers

The "smoothness" prevents taking Hilbert-space completions in general, for example. That is, with $G=GL_n(F)$ for a $p$-adic field $F$ and $n\ge 1$, the Hilbert space $V=L^2(G)$ with right translation by $G$ is a continuous representation in the strong topology but is not smooth. In fact, this would be the case for any unimodular totally disconnected (non-discrete) group $G$. The subspace of smooth vectors in $L^2(G)$ is dense, of course.

The smooth vectors in $L^2(G)$ are a natural example of a smooth repn that is (too big to be) admissible. Not hard to check.

Edit: and responding to the further question above, note that the other good answer, about the action of G on compactly-supported (complex-valued) functions on the Bruhat-Tits building (or tree, for $SL_2(\mathbb Q_p)$), is indeed the (compactly-supported, smooth) induced representation of the trivial representation on the Iwahori subgroup, up to the whole $SL_n(\mathbb Q_p)$.
Crazily enough, if one lifts a "cuspidal" repn from $SL_n(\mathbb F_p)$ to $K=SL_n(\mathbb Z_p)$, and then induces that to $SL_n(\mathbb Q_p)$, a finite direct sum of supercuspidal repns is obtained, so this induced repn is admissible... in contrast to inducing the trivial repn from $K$. (Also, of course, principal series induced from admissibles on the Levi components are admissible.) - 4 @Paul: I erased my answer when I noticed it was the same as yours (except that, for political reasons, I had chosen left translations...) – Alain Valette Oct 7 at 13:42 @Alain ... "Political"! :) – paul garrett Oct 7 at 16:02 I see, so I had obvious examples just under my nose! Thanks Paul – Hugo Chapdelaine Oct 7 at 18:47

Another example, this time of a natural representation which is smooth but not admissible. The group $G$ acts naturally on its Bruhat-Tits tree $X$. Let $C(X)$ be the space of complex-valued functions on (the set of vertices of) $X$ with finite support. Then $G$ has a natural representation on $C(X)$, which is smooth (the stabilizer of a vertex is a maximal compact subgroup, so the stabilizer of a finitely supported function is an intersection of finitely many such subgroups, hence open) but not admissible (for example, the space of invariants in $C(X)$ under a maximal compact subgroup is the underlying space of the unramified (or spherical) Hecke algebra of $G$, which has infinite dimension over $\mathbb C$). - Thanks Joel for the nice example. Is there a good (recent) reference on representations of topological groups (locally compact is fine with me) which gives a good overview of the various "categories" of representations (smooth, admissible, etc.)?
– Hugo Chapdelaine Oct 7 at 18:52 @Hugo: A nice reference for learning representations of locally profinite groups is Bushnell-Henniart, The Local Langlands Conjecture for GL(2). – François Brunault Oct 7 at 20:37 @François, I did not see many examples in this book outside of admissible representations. After all, their goal is to prove the local Langlands correspondence. I would like a reference where one can get a feeling for the various types of representations: for example unitary versus non-unitary, continuous versus not strongly continuous, smooth but not admissible. For example, if you look at $GL_2(\mathbb R)$ with the discrete topology, "how many more" representations do you get than by looking only at smooth ones? Basically, I want to see various ways of organizing representations of topological groups. – Hugo Chapdelaine Oct 8 at 13:41
http://openwetware.org/index.php?title=User:David_J_Weiss/Notebook/people/weiss/Formal&diff=prev&oldid=373815
# User:David J Weiss/Notebook/people/weiss/Formal

### From OpenWetWare
## Experimental Determination of the Electron Charge to Mass Ratio

Author: David Weiss

Experimentalists: David Weiss, Elizabeth Allen

University of New Mexico, Department of Physics and Astronomy, MSC07 4220, 800 Yale Blvd NE, Albuquerque, New Mexico 87131-0001 USA

Contact info: [email protected]

## Abstract

The ratio of electric charge to mass of the electron is a fundamental quantity in physics and a useful one for future students interested in the subject. From it one can see how little the electron is affected by gravity and how strongly its behavior is governed by the electric field; it is also one of the most important values in quantum mechanics. We measured the ratio by observing the trajectory of electrons in a known, constant magnetic field. The ratio of electric charge to mass for the electron can then be found as a function of the observed radius of the beam, the magnetic field, and the accelerating energy. This can be done with an electron gun, a Helmholtz coil, and a couple of power sources. With these we determined how a beam of electrons curves within a magnetic field, measured the resulting radii, and, with some manipulation, extracted the charge-to-mass ratio of the electron. From my experimental data we found a ratio of e/m of $2.3\pm0.23\times10^{11}$ coul/kg (the exponent, lost from the original image, is restored here to match the values reported below). This was one standard deviation away from the accepted value. There was still some systematic and random error prevalent throughout the experiment; we will discuss the reasons and sources of these errors.
## Introduction

The charge of the electron is one of the most basic concepts in the entire study of electromagnetism and atomic particles. The first person to find the electron was J.J. Thomson. He did so in a series of experiments which used cathode ray tubes, and it wasn't until the third of these experiments that he found the charge-to-mass ratio of the electron, in 1897 [2]. These results led him to formulate his "plum pudding" model of the atom. That experiment is a lot like the one detailed here. For these experiments he was awarded the Nobel Prize in Physics in 1906. After Thomson, R.A. Millikan found the charge of the electron through experimentation. His experiments involved dropping oil droplets into a chamber that could be charged, to see how the droplets reacted in an electric field; from these experiments the charge on the electron follows [3]. He was awarded the Nobel Prize in Physics for this work in 1923, after some controversy due to Felix Ehrenhaft's claim that he had found a smaller charge than Millikan; these claims turned out to be wrong and the prize was given to Millikan. Without these fundamental experiments the charge of the electron could not have been found, and without this fundamental constant some of the work in chemistry, atomic physics, and quantum mechanics could not have been done. The experiment I did is similar to Thomson's in that I am using an electron gun to "boil" off electrons and measure how they behave in a magnetic field. I vary the force on the electrons (the Lorentz force [4]) by changing the voltage on the electron gun, and I vary the magnetic field by changing the current applied to the Helmholtz coils [5], to show how an electron responds to a changing field and a changing force.
## Experiment and Materials

### Instrumentation and Assembly

An electron gun is housed in a bulb containing a small amount of gas so that the electron beam is visible. A Helmholtz coil is attached to this apparatus so that a uniform magnetic field can be generated. This is a manufactured piece, so there is no need to worry about aligning everything properly (e/m Experimental Apparatus Model TG-13, Uchida Yoko, as shown in Fig. 1). There are three different power supplies, each connecting to a different part of the e/m apparatus. A connection needs to be made between the 6-9 Vdc, 2 A power supply (SOAR Corporation DC Power Supply Model 7403, 0-36 V, 3 A, as shown in Fig. 3) and the Helmholtz coil, with a multimeter in series (BK PRECISION Digital Multimeter Model 2831B, Fig. 3). The 6.3 V power supply (Hewlett-Packard DC Power Supply Model 6384A, Fig. 2) needs to be connected to the heater jacks. A power source rated at 150-300 V (Gelman Instrument Company Deluxe Regulated Power Supply, Fig. 3) needs to be connected, through another multimeter (BK PRECISION Digital Multimeter Model 2831B, Fig. 2), to the electron gun.

Fig. 1) e/m Experimental Apparatus (Model TG-13), left to right: connections for the Helmholtz coil power supply, current adjustment knob, focus adjustment, connection to voltmeter for the electron gun, connections for the electron gun power supply, connections for the heater power supply

Fig. 2) (bottom) Hewlett-Packard DC Power Supply (Model 6384A, 4-5.5 V, 0-8 A) connected in series to (top) BK PRECISION Digital Multimeter Model 2831B

Fig. 3) (left of e/m apparatus) SOAR Corporation DC Power Supply Model 7403; (top right) BK PRECISION Digital Multimeter Model 2831B; (bottom right) Gelman Instrument Company Deluxe Regulated Power Supply

### Procedure and Methods

The general procedure can be found in Professor Gold's lab manual [1]. We first turned on the power to the heater and let it warm up for approximately 2 minutes; we knew this was done when we observed the cathode glowing red.
After we warmed up the heater, we applied a voltage of 200 V to the electron gun and observed the beam of electrons glowing green. We then applied a current to the Helmholtz coils and observed the electron beam take a circular orbit. We proceeded to take our data on the radii of the electron beam, one reading on the right side and one on the left, in addition to the voltage on the electron gun and the current through the Helmholtz coils. We took the data on the radii of the beam by looking at a ruler attached to the back of the e/m apparatus (Fig. 1). We observed how the radius of the beam was affected by changing the voltage while holding the current constant, and vice versa. In our experiment we first held the current through the coils constant at 1.35 A while varying the voltage on the electron gun from a maximum of 250 V to a minimum of 146 V; we observed that, with the current held constant, the radius of the electron beam increased as we applied more voltage. For the next set of measurements we kept the voltage constant at 143 V, with the current ranging from 0.9 A to 1.33 A, and observed that the radius increased as we decreased the current through the coils. The data on radius versus current and radius versus voltage can be found on my data page for this lab.

### Results and Discussion

The data given in Table 1 (Fig. 4) show the results obtained while keeping the voltage constant at 143 volts. The graph of radius vs 1/I^2, comparing the data to a best-fit line (using Microsoft Excel, Fig. 5), shows that most of the data fit the linear fit to within one standard deviation. The value of e/m was determined to be 2.74±0.38×10^11 coul/kg within a 68% confidence interval, an error of approximately 55.7% (the percentage is corrected here to match the stated values) when compared to the value of 1.76×10^11 coul/kg from the paper "The Electronic Atomic Weight and e/m Ratio" [6]. From Table 1 (Fig.
4) the value of e/m while holding the current constant at 1.35 amps was 1.85±0.58×10^11 coul/kg. Using Microsoft Excel, a plot of my data, radii^2 vs voltage (Fig. 6), is given compared to a linear fit; as there is no obvious systematic deviation, this is a good fit for the data. This value of e/m is in error by 5.11% when compared to the value of 1.76×10^11 coul/kg from "The Electronic Atomic Weight and e/m Ratio" [6]. From Table 1 (Fig. 4) the value of e/m was also calculated while the voltage and current were both varied. This value was _±_×10^11 coul/kg. The linear graphs showing the varying values are radii^2 vs voltage (Fig. 7) and radii vs 1/I^2 (Fig. 8), showing again that even while the voltage and current are varied a linear fit still holds. An error of _% was found when compared to the value of 1.76×10^11 coul/kg in "The Electronic Atomic Weight and e/m Ratio" [6].

## Conclusion

From my experiment to find the ratio e/m, the best result was obtained when I held the current constant and varied the voltage, giving a value of $1.85\pm0.12\times10^{11}$ coul/kg.

Based on my calculations in the section above, I obtain the best result when I take the value from the constant-current data, for which the error is approximately 5.11%. This is not a bad result considering that there is a significant amount of systematic and random error present in this lab. The sources of random (non-biased) error in this lab are as follows. Collisions occur between the helium atoms in the bulb and the electrons discharged from the electron gun; some of the energy used to accelerate the electrons is wasted in the form of visible light due to these collisions, so the energy that is measured is an overestimate. Some of the systematic errors are caused by the imprecision of the ruler and in aligning it.
We tried to overcome some of these errors by taking two readings, one on the left side and one on the right, and averaging the two, but this can only decrease the error, not eliminate it completely. Another error is that when the voltage is raised or the current is lowered, the discharge of the electron beam doesn't line up with where it impacts the electron gun. This can be adjusted by means of the focus knob, but doing so changes the radius of the electron beam, so the focus needs to be readjusted every time the voltage or current is changed; otherwise the data can be altered and the results thrown off. The experiment showed me that an electron beam in the presence of a uniform magnetic field travels in a circle, whose radius is directly related to the strength of the magnetic field and the velocity at which the electrons leave the electron gun. If I were to do this experiment again I would not expect to obtain a value so close to the accepted value for e/m, due to the large amount of systematic error that can be introduced into the experiment. Summing up: even when there are potentially large amounts of error, you can still find fundamental constants.

## Acknowledgments

I would like to thank my lab partner Elizabeth Allen, my lab professor Dr. Steven Koch, and our lab TA Pranav Rathi for their assistance and support in this lab.

## References

1. M. Gold, Physics 307L: Junior Laboratory, UNM Physics and Astronomy (2006), [1]

2. J.J. Thomson, "Cathode Rays", The London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science, Vol. 44, 293-316 (1897).

3. R.A. Millikan, "On the elementary electrical charge and the Avogadro constant", The Physical Review, Series II, 2: 109-143 (1913).

4. O. Darrigol, "Electrodynamics from Ampère to Einstein", Oxford University Press, ISBN 0-198-50593-0, 327 (2000) [2]

5. R. Merritt, C. Purcell, and G. Stroink, "Uniform magnetic field produced by three, four, and five square coils",
Review of Scientific Instruments, Volume 54, Issue 7, 879 (1983).

6. R.C. Gibbs and R.C. Williams, "The Electronic Atomic Weight and e/m Ratio", The Physical Review, Volume 44, Issue 12, 1029 (1933).
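As a numerical appendix, the e/m extraction used in this report can be sketched in a few lines. This is only an illustration: the Helmholtz coil parameters (n = 130 turns, coil radius R = 0.15 m) are typical published values for a TG-13-style apparatus, and the sample reading is hypothetical, not taken from the data tables above.

```python
import math

MU0 = 4e-7 * math.pi  # vacuum permeability, T*m/A

def helmholtz_field(current, turns=130, radius=0.15):
    """Field at the center of a Helmholtz pair (assumed coil geometry)."""
    return (4.0 / 5.0) ** 1.5 * MU0 * turns * current / radius

def e_over_m(voltage, current, beam_radius):
    """e/m from eV = (1/2) m v^2 and e v B = m v^2 / r, so e/m = 2V / (B r)^2."""
    b_field = helmholtz_field(current)
    return 2.0 * voltage / (b_field * beam_radius) ** 2

# Hypothetical reading: V = 200 V, I = 1.35 A, beam radius r = 4.5 cm.
ratio = e_over_m(200.0, 1.35, 0.045)  # on the order of 10^11 coul/kg
```

With these assumed parameters the computed ratio lands near the accepted 1.76×10^11 coul/kg, which is a useful sanity check on any single reading.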
http://math.stackexchange.com/questions/44548/applications-of-systems-of-linear-equations
Applications of systems of linear equations

Sorry if this question is overly simplistic. It's just something I haven't been able to figure out. I've been reading through quite a few linear algebra books and have gone through the various methods of solving linear systems of equations, in particular systems of $n$ equations in $n$ unknowns. While I understand the techniques used to solve these for the most part, I don't understand how these situations present themselves. I was wondering if anyone could provide a simple real-world example or two from data analysis, finance, economics, etc. in which the problem being worked on led to a system of $n$ equations in $n$ unknowns. I don't need the solution worked out; I just need to know the problem that resulted in the system. - I deleted my answer because I realized there was already an answer by Rahul Narain which is better explained and is essentially the same (Kirchhoff's circuit laws). – Américo Tavares Jun 12 '11 at 21:56 It is very disheartening that no one can come up with a straightforward example to explain to a math student how matrices are useful. Anyone have an example that doesn't require an engineering degree? – JackOfAll May 4 '12 at 22:51

4 Answers

One of the most frequent occasions where linear systems of $n$ equations in $n$ unknowns arise is in least-squares optimization problems. Let us look at an example. Say that we are studying two physical quantities $y$ and $x$ and we conjecture that $y$ is a second-order polynomial function of $x$, i.e. $y=\alpha x^2 + \beta x + \gamma$ for some unknown real numbers $\alpha$, $\beta$, $\gamma$. Say now that we perform experiments and obtain measurements $(x_1,y_1), \cdots, (x_{100},y_{100})$.
Applying the polynomial model to the measurements yields $y_i=\alpha x_i^2 + \beta x_i + \gamma$ for $i=1, \cdots, 100$, or in matrix form $X k=y$, where $k=[\alpha \, \, \beta \, \, \gamma]^T$, $y=[y_1 \cdots y_{100}]^T$, and the $i^{th}$ row of $X$ is the row vector $[x_i^2 \, \, x_i \, \, 1]$. Now, as you might observe, we have $100$ equations in $3$ unknowns, i.e. our linear system $X k=y$ is overdetermined. Practically speaking, this system is consistent (i.e. it has a solution) only if $y$ really is related to $x$ via a second-order polynomial equation (i.e. our conjecture is true) and additionally there is no noise in our measurements. So assume that at least one of these two conditions fails. Then the system $X k=y$ will not in general have a solution, and one might instead consider finding a vector $k$ that minimizes $||X k - y||_2^2$, i.e. the square of the error. The solution of this optimization problem is the solution to the $3 \times 3$ system $X^T X k = X^T y$. This formulation comes up all the time in engineering, e.g. in signal prediction. So, least-squares problems lead to square (i.e. $n \times n$) linear systems of equations. - Thanks Manos, this is very helpful. – miggety Jun 11 '11 at 11:57 You are welcome! – Manos Jun 11 '11 at 14:01

I have no knowledge of finance or economics, but problems in physics and engineering often give rise to linear systems. For example, say you have a complicated network of resistors, and you apply a potential difference to two junctions in the network (connect them to the ends of a battery, say). What is the current going through each resistor? If the network can be split up into series and parallel combinations of resistors, it's easy to solve, but in general it's not possible to do this.
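The least-squares answer above reduces 100 equations in 3 unknowns to the 3×3 normal equations $X^T X k = X^T y$. A minimal numerical sketch (the data here are synthetic, standing in for the measurements $(x_i, y_i)$):

```python
import numpy as np

# Synthetic measurements: y = 2x^2 - 3x + 1 plus a little noise,
# standing in for the experimental data (x_i, y_i).
rng = np.random.default_rng(0)
x = np.linspace(-1.0, 1.0, 100)
y = 2 * x**2 - 3 * x + 1 + 0.01 * rng.standard_normal(100)

# Design matrix X: row i is [x_i^2, x_i, 1].
X = np.column_stack([x**2, x, np.ones_like(x)])

# The overdetermined 100x3 system X k = y is replaced by the
# square 3x3 normal equations X^T X k = X^T y.
k = np.linalg.solve(X.T @ X, X.T @ y)  # k = [alpha, beta, gamma]
```

With noiseless data `k` recovers the coefficients exactly; with noise it is the minimizer of $||Xk - y||_2^2$.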
Then the only way to find the solution is linear algebra: you have $n$ unknowns, the relative potentials at all the other junctions, and $n$ equations, from Kirchhoff's current law at those junctions, and you can solve this system to find the unique solution. If you don't think this is a real-world problem, you should listen to MC Frontalot: people call him up in the middle of the night to ask him the impedance of a resistor icosahedron. -

A package of gummi bears contains 20 grams of sugars, 5 grams of fat, and 1 gram of protein. A package of butter contains 6 grams of sugars, 15 grams of fat, and 2 grams of protein. A chicken contains 2 grams of sugars, 4 grams of fat, and 12 grams of protein. (The numbers are fake.) Joe has bought some gummi bears, butter, and chickens. He ate all of it, and in total it contained 1 ton of sugars, 1 ton of fat, and 1 ton of protein. How many packages of gummi bears, packages of butter, and chickens did he buy? - 3 How is it a real-life problem? It looks like the completely artificial problems that you can find in high-school textbooks that are designed just to see if one is able to replace « gummi bears » by x, « butter » by y, etc. – gallais Jun 10 '11 at 14:35 @gallais: I second that this formulation is not very "real-life" (to digest 1 ton of fat ;-) ). However, a real-life instance of this type of question was one I had to solve: to recompute the amounts of certain metallic substances in a set of items which had left the factory door and had then been inventoried. That simply involved inverting the matrix of composition coefficients for those items. In short: any regression or factor analysis in statistics (which is real life) requires solving systems of linear equations...
– Gottfried Helms Jan 11 '12 at 19:37

Here is an example from cost accounting: In Chapter 15 of Cost Accounting: A Managerial Emphasis, 14th ed., by Horngren, Datar, and Rajan, one of the sections covers a method of allocating the costs of support departments such as Personnel and Legal to operating departments such as the refrigerator and dishwasher manufacturing divisions of a kitchen appliance manufacturer. Special consideration needs to be given to the common scenario where support departments support each other. For example, the Personnel department may support the Legal department by hiring paralegals and attorneys, while the Legal department might support the Personnel department by evaluating compensation plans and interpreting regulatory requirements. The most justifiable method presented in the chapter is called the "reciprocal method" or "matrix method". It uses a system of linear equations to calculate the total cost of each support department, factoring in reciprocal support, so that the total cost can be allocated to the operating departments. For example, assume the following facts:

• The Personnel Department's budgeted fixed costs are \$1,200,000, not including reciprocal support costs.

• The Legal Department's budgeted fixed costs are \$2,000,000, not including reciprocal support costs.

• Personnel Department costs are allocated on the basis of recruiter hours.

• Legal Department costs are allocated on the basis of attorney hours.

• The Personnel Department is budgeted to utilize 5% of the Legal Department's attorney hours.

• The Legal Department is budgeted to utilize 10% of the Personnel Department's recruiter hours.

Let $P_F$ be the Personnel Department's total fixed costs and $L_F$ be the Legal Department's total fixed costs.
Then: $$\begin{align} P_F &= \$1,200,000 + 5\% \times L_F\\ L_F &= \$2,000,000 + 10\% \times P_F \end{align}$$ We have: $$\begin{align} L_F &= \$2,000,000 + 10\% \times (\$1,200,000 + 5\% \times L_F) \\ 99.5\% \times L_F &= \$2,120,000 \\ L_F &= \$2,130,653.27 \end{align}$$ $$\begin{align} P_F &= \$1,200,000 + 5\% \times \$2,130,653.27\\ &= \$1,306,532.66 \end{align}$$ The related matrix is: $$\begin{bmatrix} 1 & -0.05 \\ -0.1 & 1 \end{bmatrix}$$ The system of linear equations is larger with more support departments. -
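The same two-department system can be solved directly in matrix form; here is a sketch with the numbers above (with more support departments, only the size of the matrix grows):

```python
import numpy as np

# Unknowns: total fixed costs [P_F, L_F].
#    P_F - 0.05 * L_F = 1,200,000
#  -0.10 * P_F + L_F  = 2,000,000
A = np.array([[1.00, -0.05],
              [-0.10, 1.00]])
b = np.array([1_200_000.0, 2_000_000.0])

P_F, L_F = np.linalg.solve(A, b)
# Matches the hand calculation: P_F ≈ 1,306,532.66 and L_F ≈ 2,130,653.27.
```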
http://calculus7.org/2012/03/16/embeddings-vs-embeddings/
being boring

## Embeddings vs embeddings

Posted on 2012-03-16

In mathematics, an embedding (or imbedding) is one instance of some mathematical structure contained within another instance, such as a group that is a subgroup. When some object $X$ is said to be embedded in another object $Y$, the embedding is given by some injective and structure-preserving map $f \colon X \to Y$. Thus speaks Wikipedia. And so we must insist on $f(X)$ being isomorphic to $X$ in whatever category we are dealing with; otherwise there is no reason to say that $Y$ contains $X$ in any way. This is why geometers distinguish immersions from embeddings, and note that even an injective immersion may fail to be an embedding.

[Figure: not an embedding]

And yet, people speak of the Sobolev (Соболев) embedding theorem and its relatives due to Morrey, Rellich, Кондрашов… The common feature of these theorems is that one normed space $X$ is set-theoretically contained in another normed space $Y$, and the inclusion map is continuous (or even compact in the case of Rellich-Кондрашов). But this inclusion map is not an isomorphism; the image of $X$ inside of $Y$ may look nothing like $X$ itself. For instance, the Sobolev theorem says that, in the two-dimensional setting, the space of functions with integrable gradient ($W^{1,1}$) is "embedded" into $L^2$, even though there is nothing inside of $L^2$ that looks like $W^{1,1}$.

Let's consider a much simpler example: the space of absolutely summable sequences $\ell_1$ is contained in the space of square-summable sequences $\ell_2$. This inclusion is a continuous, indeed non-expanding, map: $\|x\|_2\le \|x\|_1$ because $\displaystyle \sum_i |x_i|^2\le \left(\sum_i |x_i|\right)^2$. But there is no subspace of $\ell_2$ that is isomorphic to $\ell_1$, isomorphisms being invertible linear maps.
This can be shown without any machinery. Suppose $f\colon \ell_1\to\ell_2$ is a linear map such that $C^{-1}\|x\|_1 \le \|f(x)\|_2 \le C\|x\|_1$ for some constant $C$, for all $x\in \ell_1$. Consider the unit basis vectors $e_i$, i.e., $e_1=(1,0,0,0,\dots)$, etc. Denote $v_i=f(e_i)$. For any choice of $\pm$ signs, $\|\sum_{i=1}^n \pm e_i \|_1 =n$ and, therefore,

$\displaystyle (*) \qquad \left\|\sum_{i=1}^n \pm v_i \right\|_2^2 \ge C^{-2} n^2$.

On the other hand, taking the average over all sign assignments in $\displaystyle \left\|\sum_{i=1}^n \pm v_i \right\|_2^2$ kills all cross terms, leaving us with $\sum_{i=1}^n \| v_i \|_2^2$, which is $\le C^2 n$. As $n\to\infty$, we obtain a contradiction between the latter inequality and (*).

Parallelogram (or parallelepiped) law: in a Hilbert space, the sum of the squared diagonals of a parallelepiped is precisely controlled by the sum of the squares of its sides. In $\ell_p$ for $1\le p<2$ the diagonals may be “too long”, but cannot be “too short”. In $\ell_p$ for $2\le p<\infty$ it’s the opposite. (And in $\ell_{\infty}$ anything goes, since it contains an isometric copy of every $\ell_p$.) Further development of these computations with diagonals leads to the notions of type and cotype of a Banach space: type gives an upper bound on the length of diagonals, cotype gives a lower bound.

This entry was posted in Uncategorized and tagged Banach space, cotype, embedding, type.
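The sign-averaging step is easy to check numerically. Here is a quick sketch (my own illustration, not part of the original post) verifying that averaging $\|\sum_i \pm v_i\|_2^2$ over all $2^n$ sign patterns kills the cross terms, leaving exactly $\sum_i \|v_i\|_2^2$:

```python
import itertools
import random

# Random vectors v_1, ..., v_n in R^d, standing in for the f(e_i).
random.seed(0)
n, d = 5, 7
v = [[random.gauss(0, 1) for _ in range(d)] for _ in range(n)]

def norm_sq(w):
    return sum(x * x for x in w)

# Average ||sum_i s_i v_i||^2 over all 2^n sign choices s in {+1, -1}^n.
total = 0.0
for signs in itertools.product([1, -1], repeat=n):
    w = [sum(s * vi[j] for s, vi in zip(signs, v)) for j in range(d)]
    total += norm_sq(w)
average = total / 2 ** n

# The cross terms cancel (E[s_i s_j] = delta_ij), leaving sum_i ||v_i||^2.
assert abs(average - sum(norm_sq(vi) for vi in v)) < 1e-9
```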
http://math.stackexchange.com/questions/124050/how-to-approximate-an-integral-using-the-composite-trapezoid-rule?answertab=oldest
# How to approximate an integral using the Composite Trapezoid Rule

I'm trying to estimate the value of the following integral on the interval $[0,1]$: $$I = \int_0^1 \frac{1}{1+x} dx$$

So, using the composite trapezoid rule (and with $n=4$, i.e. I'm only using the first 4 $x_i$ to do the approximation), I get the following expression: $$I = \frac{67}{60} - \frac{1}{96} (2(1+\xi_1)^{-3} + 2(1+\xi_2)^{-3} + 2(1+\xi_3)^{-3} + 2(1+\xi_4)^{-3})$$

But I'm lost when it comes to calculating the error and finding a value for $\xi$. What's the general way of finding the error in a case like this?

The formula I'm using: $$I = \frac{h}{2} \sum_{i=1}^n [f(x_{i-1}) + f(x_i)] - \frac{h^3}{12} \sum_{i=1}^n f^{''}(\xi_i)$$

- Ah, I see. Every time I've ever seen the term "composite trapezoid rule" it has referred to approximating the area under $y=f(x)$ using several (as opposed to one) trapezoids -- that is the first part of your formula, not including the summation with fourth order derivatives. – Bill Cook Mar 24 '12 at 21:44
- Thanks for including the formula you are using. I'm still baffled as to where it comes from. It looks like an error term is being removed for each subinterval. But the 4th order derivatives don't jive with the trapezoid rule (seems like they should be 2nd derivatives). Where does this come from? – Bill Cook Mar 24 '12 at 21:49
- Ah yes, my bad :) They are second derivatives. – MaxMackie Mar 24 '12 at 21:58
- Ok. So your formula is: for each subinterval, pair the trapezoid approximation with its error term and then sum up. From my experience, one uses such formulas to approximate or bound the error, not to "find the error". If you are looking to bound the error, you can find a bound for $f''(\xi_i)$ restricted to the $i$-th subinterval (or, to simplify life, you could use a single bound for the entire interval). Then the second summation (with your bound(s) subbed for the second derivatives) will give you a bound for the error.
– Bill Cook Mar 24 '12 at 22:20

## 1 Answer

You have to find an upper bound for this error sum. Therefore you should take the maximum value of $f''$ on each subinterval (according to your formula). With $f(x)=\frac{1}{1+x}$, we get $f''(x)=\frac{2}{(1+x)^3}$. The largest value of this function on the interval $[x_0,x_1]$ is $f''(x_0)$, since $f''(x)$ is a decreasing function on $[0,1]$.

For $n=4$, we get the intervals $[0,0.25]$, $[0.25,0.5]$, $[0.5,0.75]$, $[0.75,1]$ with corresponding maximum values for $f''(x)$ of $f''(0)=2$, $f''(0.25)=1.024$, $f''(0.5)=0.592$ and $f''(0.75)=0.373$. So an upper bound for your problem would be $$\frac{h^3}{12} \sum_{i=1}^n |f^{''}(\xi_i)| \le \frac{0.25^3}{12}(2+1.024+0.592+0.373)=0.005194.$$
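As a sanity check of this bound (my own addition, not part of the thread): the snippet below computes the composite trapezoid approximation for $n=4$, compares it to the true value $\ln 2$, and confirms the actual error stays below the bound built from the left-endpoint values of $f''$.

```python
import math

f = lambda x: 1 / (1 + x)
n, a, b = 4, 0.0, 1.0
h = (b - a) / n
xs = [a + i * h for i in range(n + 1)]

# Composite trapezoid rule: (h/2) * sum over subintervals of f(x_{i-1}) + f(x_i).
T = (h / 2) * sum(f(xs[i - 1]) + f(xs[i]) for i in range(1, n + 1))

exact = math.log(2)          # true value of the integral
actual_error = abs(T - exact)

# Error bound: (h^3/12) * sum of max |f''| per subinterval; f''(x) = 2/(1+x)^3
# is decreasing, so the maximum sits at the left endpoint of each subinterval.
fpp = lambda x: 2 / (1 + x) ** 3
bound = (h ** 3 / 12) * sum(fpp(xs[i - 1]) for i in range(1, n + 1))

assert actual_error <= bound        # roughly 0.0039 <= 0.0052
assert abs(bound - 0.005194) < 1e-3
```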
http://mathoverflow.net/questions/54343/is-there-a-preferable-convention-for-defining-the-wedge-product/54355
## Is there a preferable convention for defining the wedge product?

There are different conventions for defining the wedge product $\wedge$. In Kobayashi-Nomizu, there is $\alpha\wedge\beta:=Alt(\alpha\otimes\beta)$; in Spivak, we find $\alpha\wedge\beta:=\frac{k!l!}{(k+l)!}Alt(\alpha\otimes\beta)$, where $\alpha$ and $\beta$ are any forms of degree $k$ and $l$ respectively, and $Alt(\cdot)$ takes the alternating part of the tensor. But is there a rationale to prefer one of them over the others? If not, which do you prefer, and for what reason?

- What are the choices here? – Suresh Venkat Feb 4 2011 at 18:25
- At least Kobayashi-Nomizu and Bourbaki use different conventions. – Giuseppe Feb 4 2011 at 18:31
- In my opinion, you are better off defining the exterior algebra as the quotient of the tensor algebra by the relation $\alpha\wedge \alpha=0$. Forcing it inside the tensor algebra is ugly and unnatural. – Donu Arapura Feb 4 2011 at 18:54
- Although, I should add that it is sometimes convenient. – Donu Arapura Feb 4 2011 at 19:09
- Excuse me, I do not understand: should I not take the quotient of the tensor algebra by the relation $\alpha\otimes\alpha=0$? – Giuseppe Feb 4 2011 at 19:26

## 6 Answers

I think a lot of people run into this issue. The way I think about it is the following. Take your finite-dimensional vector space $V$ and form its tensor algebra $T(V)$. Define $\mathcal{J}$ to be the 2-sided ideal in $T(V)$ generated by elements of the form $v \otimes v$, and then define the exterior algebra to be $\Lambda(V) = T(V) / \mathcal{J}$. This exhibits the exterior algebra as a quotient of the tensor algebra. The different conventions you see for the wedge product arise from different embeddings of the exterior algebra into the tensor algebra.
Define on $V^{\otimes n}$ the map $$A_n (v_1 \otimes \dots \otimes v_n) = \frac{1}{n!} \sum_{\pi \in S_n} sgn(\pi) v_{\pi(1)} \otimes \dots \otimes v_{\pi(n)}$$ (or possibly with $\pi^{-1}$ instead of $\pi$, although I guess it doesn't matter), and then define on the tensor algebra the map $$A = \bigoplus_{n=0}^{\infty} A_n.$$ Then you can show easily that $A_n^2 = A_n$ for all $n$, so that $A$ is a projection. The point is that $\mathcal{J} = \mathrm{ker} (A)$, so that you can identify the quotient $\Lambda(V)$ with $\mathrm{im} A$, i.e. we have now embedded the exterior algebra as a subspace of the tensor algebra.

This is where the two conventions differ. I have defined $A_n$ with a $\frac{1}{n!}$ in front, but some don't do so. Of course, this doesn't change the kernel of the map, but it does change the embedding of the exterior algebra into the tensor algebra. The important point is that $A$ is not an algebra map of $T(V)$ to itself, so the embedding $\Lambda(V) \to T(V)$ is not an embedding of algebras.

Now you ask how to describe the exterior product in terms of the product in $T(V)$. Take $\alpha \in \Lambda^k(V)$ and $\beta \in \Lambda^l(V)$ with representatives $\tilde{\alpha} \in \mathrm{im}(A_k)$ and $\tilde{\beta} \in \mathrm{im}(A_l)$, respectively. Then $A_{k+l}(\tilde{\alpha} \otimes \tilde{\beta})$ is the representative of $\alpha \wedge \beta$ that you're looking for. Essentially, it boils down to whether or not you put the $\frac{1}{n!}$ in front of your alternating map.

The issue here is that there are really two tasks: (1) define the algebra of differential forms on a manifold, and (2) implement them as multilinear functions on tangent vectors.
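(As an aside, the idempotence $A_n^2 = A_n$ and the fact that tensors symmetric in two slots lie in the kernel are easy to verify numerically. This sketch is mine, not part of the answer; it stores a tensor as a dict from index tuples to coefficients.)

```python
import itertools
import math
import random

def perm_sign(p):
    # Parity of a permutation, via its inversion count.
    inv = sum(1 for i in range(len(p)) for j in range(i + 1, len(p)) if p[i] > p[j])
    return -1 if inv % 2 else 1

def alt(T, n, d):
    # A_n(T)(i_1,...,i_n) = (1/n!) * sum over pi of sgn(pi) T(i_{pi(1)},...,i_{pi(n)}).
    out = {}
    for idx in itertools.product(range(d), repeat=n):
        out[idx] = sum(
            perm_sign(p) * T[tuple(idx[k] for k in p)]
            for p in itertools.permutations(range(n))
        ) / math.factorial(n)
    return out

random.seed(1)
n, d = 3, 3
T = {idx: random.gauss(0, 1) for idx in itertools.product(range(d), repeat=n)}

A_T = alt(T, n, d)
A_A_T = alt(A_T, n, d)

# A_n is a projection: A_n^2 = A_n.
assert all(abs(A_A_T[i] - A_T[i]) < 1e-12 for i in T)

# Tensors of the form v (x) v (x) w are killed by A_n.
v = [random.gauss(0, 1) for _ in range(d)]
w = [random.gauss(0, 1) for _ in range(d)]
S = {(i, j, k): v[i] * v[j] * w[k]
     for (i, j, k) in itertools.product(range(d), repeat=3)}
assert all(abs(x) < 1e-12 for x in alt(S, n, d).values())
```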
The natural way to define them is along the lines of what Donu said: the differential forms at a point are like a polynomial algebra over the vector space `$V = T^*_pM$`, except supercommutative (or graded-commutative) instead of commutative. The supercommutativity condition is identical to taking a quotient of the tensor algebra, which is the free non-commutative algebra over the vector space $V$.

But then for the second task, you would like a monomial, in a standard basis of cotangent vectors, to take values in $\{0,1,-1\}$ if you pair it with a standard basis of vectors. For example, you would like to say $$dx \wedge dy = dx \otimes dy - dy \otimes dx,$$ because that evaluates to $1$ on $(\hat{x},\hat{y})$ and $-1$ on $(\hat{y},\hat{x})$. In order to do this, you have to implement the wedge product with antisymmetrization and with factorials, actually the reciprocal of the factor you give: $$\alpha \wedge \beta = \frac{(a+b)!}{a!b!} \mathrm{Alt}(\alpha \otimes \beta).$$

If I were explaining the subject, I would handle points (1) and (2) separately. It is common to conflate the two concerns. It amounts to either defining forms as a subspace of tensors (the usual solution), or as a quotient space of tensors. The real issue is that they need to be both, and that double role leads you to the factorial factors.

- Thank you very much for the careful response to my question. I have to quietly think about it. Please excuse me if the question was not properly adapted to MathOverflow. – Giuseppe Feb 4 2011 at 20:21
- Why do you need the factors to be 0, 1, -1? – Martin Gisser Feb 22 at 13:58

The answers by Greg and MTS are quite thorough, so there is not much more to say about that. However, I would like to explain my comment that viewing differential forms as antisymmetric tensors is often inadvisable, although I don't want to seem too dogmatic about this. My first argument is pedagogical.
Making the above identification can be confusing (as evidenced by the question) and is frequently beside the point. A few years back, when I taught a vector calculus class, I decided to do differential forms. The students had no idea about tensor products or multilinear algebra, so it would have been a bad idea to attempt this approach. Instead, I told them that $dx$ etc. were symbols subject to the chain rule $dx = \frac{\partial x}{\partial u}du+\ldots$, and that they could be multiplied in such a way that $dx\wedge dy= - dy\wedge dx$. I gave a heuristic explanation in terms of oriented areas of "infinitesimal" rectangles for why this should be so... I won't claim that the experiment was entirely successful, but it could have been a lot worse.

My second argument is more mathematical. Differential forms can be defined within algebraic geometry for quite general spaces. Here the approach using antisymmetric tensors can lead to serious problems: in characteristic $p>0$, the denominators will be undefined in general. As an interesting side note, in the algebraic proof of the Hodge theorem by Deligne and Illusie they do find it necessary to make this identification, but they have to restrict the dimension of the space to be less than $p$ for precisely this reason. In the limiting case of characteristic $0$, this is a nonissue.

Each one of the two conventions has its own advantage: the one with the normalizing coefficient makes the exterior algebra sit inside the tensor algebra (as the subspace of alternating tensors) and the "Alt" map be a projection onto that subspace, hence the identity on alternating tensors, while the convention without the normalizing factor is better suited for a ground field of positive characteristic, as otherwise the denominator of the normalizing factor would be zero.

I prefer (alas!) the Kobayashi-Nomizu "algebraic" version.
Besides the formal benefits of having a projector and not needing to carry around combinatorial factors, here is a differential-geometric case: if $\nabla \alpha$ is the Levi-Civita connection applied to the 1-form $\alpha$, then the exterior differential $d\alpha$ is the antisymmetric part of $\nabla\alpha$. In the Spivak "geometer" version it would be $\frac 1 2 d\alpha$. (The symmetric part of this decomposition involves the Lie derivative of the metric. See the nice book by W.A. Poor, Differential Geometric Structures, or Peter Petersen's Riemannian Geometry, 2nd ed. Who has found this wondrous decomposition?)

I hope I'm confused on this: 1) two different "canonical" exterior differentials would be quite a scandalous mess; 2) I don't know any textbook mentioning this issue. P.S.: see the appendix of http://arxiv.org/abs/math-ph/0212043 showing what bad can happen.

It is a convention. It doesn't matter. This is like asking whether there should be a $2\pi$ or a $\sqrt{2\pi}$ in the definition of Fourier transform. In other words, not very interesting, and definitely not a research math question.

- The convention OP asks about absolutely matters. One way to see that it matters: the different choices give different algebras in characteristic $p$ (one's more like polynomials, the other like divided powers). – Theo Johnson-Freyd Feb 5 2011 at 5:15
- As far as the Fourier transform is concerned, I think it is beneficial for students to be aware that there are different conventions and know the effects this can have (I don't mean memorize the effects, but realize it alters things here and there). This becomes particularly noticeable when you do Fourier analysis on R^n and not just R, where I think the way different conventions behave justifies why inserting 2*pi into the exponential part of the Fourier integral is, notationally, the simplest convention. – KConrad Feb 5 2011 at 21:40
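As a concrete footnote to the thread (my own sketch, not from any of the answers): with the geometer's factor $\frac{(a+b)!}{a!b!}$ in front of $\mathrm{Alt}$, the monomial $dx \wedge dy$ really does evaluate to $\pm 1$ on standard basis vectors, as in Greg Kuperberg's answer.

```python
# 1-forms on R^2, represented as functions on vectors.
dx = lambda u: u[0]
dy = lambda u: u[1]

def tensor(alpha, beta):
    # (alpha (x) beta)(u, v) = alpha(u) * beta(v)
    return lambda u, v: alpha(u) * beta(v)

def alt2(T):
    # Alt of a bilinear form: average over the two orderings with signs.
    return lambda u, v: (T(u, v) - T(v, u)) / 2

def wedge(alpha, beta):
    # Geometer's convention for two 1-forms: factor (1+1)!/(1!1!) = 2,
    # so dx ^ dy = dx (x) dy - dy (x) dx.
    return lambda u, v: 2 * alt2(tensor(alpha, beta))(u, v)

xhat, yhat = (1, 0), (0, 1)
w = wedge(dx, dy)
assert w(xhat, yhat) == 1
assert w(yhat, xhat) == -1
```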
http://mathoverflow.net/questions/96142?sort=oldest
## Monotonicity of a combination of Bessel functions

Prove that the following function is decreasing (as a function of $a$) for $a > 0$ when $0 < r < 1$: $${K_2(ar)I_2(a)-I_2(ar)K_2(a)\over I_2(a)}I_2(ar).$$

The problem arose in the analysis of a model for yield stress fluids. We have numerical evidence, but I would be interested in an analytical proof.

## 1 Answer

This function is quite interesting, as it reaches a maximum at $a=0$. This can be seen without difficulty by computing the first and second derivatives. It is $$B(a,r)=\frac{K_2(ar)I_2(a)-I_2(ar)K_2(a)}{I_2(a)}I_2(ar),$$ which gives $$B(0,r)=\frac{1}{4}(1-r^4)>0,$$ $$B'(0,r)=0,$$ $$B''(0,r)=-\frac{1}{12}r^2(1-r^2)^2<0$$ for the given interval. This means that the function is decreasing for $a$ near $0$. The question is whether there are some other points where the first derivative can become zero, changing the concavity of the curve. The first derivative has the following involved expression: $$B'(a,r)=\frac{1}{2 I^2_2(a)}\left[I_2(ar)^2 (I_1(a)+I_3(a)) K_2(a)+\right.$$ $$I_2(a) I_2(ar) (-2 r (I_1(ar)+I_3(ar)) K_2(a)+$$ $$I_2(ar) (K_1(a)+K_3(a))+r I_2^2(a) ((I_1(ar)+I_3(ar)) K_2(ar)$$ $$\left.-I_2(ar) (K_1(ar)+K_3(ar)))\right]$$

Now we notice that $I_2$ is a monotonic increasing function that never becomes zero and $K_2$ is a monotonic decreasing function that never becomes zero on the given intervals, and similarly for $I_1,\ I_3$ and $K_1,\ K_3$. All these functions are positive. So it is not difficult to realize that the first derivative is monotonic and never hits zero again, being just a balance of functions that never reach zero unless $a=0$ and have monotonic behavior: the $K_i$ are decreasing functions and the $I_i$ increasing functions. We also note that $$\lim_{a\rightarrow\infty}B'(a,r)=0,$$ which can be proved using the asymptotic formulas for these Bessel functions.
Now, combining monotonicity and positivity of these Bessel functions, starting from 0 and reaching 0 asymptotically at increasing values of the argument, they can just reach an extremum and never cross zero again. This can also be seen with a simple plot evaluated at $r=0.1,0.3,0.5,0.7,0.9$.

- I understand that $I_2$ and $K_2$ do not become zero, but why does this imply $B'$ does not become zero? – Michael Renardy May 7 2012 at 11:37
- I have improved the answer. You are doing a balance of positive terms in the first derivative. This balance can only be zero at zero, where the maximum is reached. Then, the $I$s increase exponentially while the $K$s go down exponentially. – Jon May 7 2012 at 13:38
- I still cannot follow. $B'$ has positive and negative contributions. You have to show the negative ones outweigh the positive ones, and I do not see how you get this out of the sign of $I$ and $K$ and their derivatives alone. I tried to check your expressions for $B'$. I could get them to agree neither with $B'$ nor with each other. Maybe there are typos. Your second expression seems to have an unbalanced parenthesis. – Michael Renardy May 7 2012 at 14:34
- Ok, I finally found out where you are missing the parenthesis. I still do not follow your argument beyond the positivity and monotonicity of the I's and K's. – Michael Renardy May 7 2012 at 15:00
- I should say that the derivative does not change sign and so does not cross zero again. I will try to expand the answer to make this evident. – Jon May 8 2012 at 7:54
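As a numerical spot-check of the claimed monotonicity (my own sketch, not part of the thread): the snippet below implements $I_2$ by its power series and $K_2$ by the integral representation $K_2(x)=\int_0^\infty e^{-x\cosh t}\cosh(2t)\,dt$ (valid for $x>0$), then evaluates $B(a,r)$ on a coarse grid. This is evidence, not a proof, which is exactly the situation the question describes.

```python
import math

def I2(x, terms=60):
    # Modified Bessel I_2 via its power series: sum_k (x/2)^(2k+2) / (k! (k+2)!).
    return sum((x / 2) ** (2 * k + 2) / (math.factorial(k) * math.factorial(k + 2))
               for k in range(terms))

def K2(x, tmax=15.0, steps=30000):
    # Modified Bessel K_2 via K_2(x) = int_0^inf exp(-x cosh t) cosh(2t) dt,
    # computed with a plain trapezoid rule (adequate for this rough check).
    dt = tmax / steps
    total = 0.0
    for i in range(steps + 1):
        t = i * dt
        weight = 0.5 if i in (0, steps) else 1.0
        total += weight * math.exp(-x * math.cosh(t)) * math.cosh(2 * t)
    return total * dt

def B(a, r):
    return (K2(a * r) * I2(a) - I2(a * r) * K2(a)) * I2(a * r) / I2(a)

r = 0.5
values = [B(a, r) for a in (0.5, 1.0, 2.0, 4.0)]
assert all(v1 > v2 for v1, v2 in zip(values, values[1:]))   # decreasing in a
assert abs(B(0.05, r) - (1 - r ** 4) / 4) < 1e-2            # limit B(0,r) = (1-r^4)/4
```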
http://mattleifer.info/2011/08/01/the-choi-jamiolkowski-isomorphism-youre-doing-it-wrong/comment-page-1/
# The Choi-Jamiolkowski Isomorphism: You're Doing It Wrong!

Posted on 1 August, 2011

As the dear departed Quantum Pontiff used to say: New Paper Dance! I am pretty happy that this one has finally been posted because it is my first arXiv paper since I returned to work, and also because it has gone through more rewrites than Spiderman: The Musical.

What is the paper about, I hear you ask? Well, mathematically, it is about an extremely simple linear algebra trick called the Choi-Jamiolkowski isomorphism. This is actually two different results: the Choi isomorphism and the Jamiolkowski isomorphism, but people have a habit of lumping them together. This trick is so extremely well-known to quantum information theorists that it is not even funny. One of the main points of the paper is that you should think about what the isomorphism means physically in a new way. Hence the “you’re doing it wrong” in the post title.

### First Level Isomorphisms

For the uninitiated, here is the simplest way of describing the Choi isomorphism in a single equation: $\Ket{j}\Bra{k} \qquad \qquad \equiv \qquad \qquad \Ket{j} \otimes \Ket{k},$ i.e. the isomorphism works by turning a bra into a ket. The thing on the left is an operator on a Hilbert space $$\mathcal{H}$$ and the thing on the right is a vector in $$\mathcal{H} \otimes \mathcal{H}$$, so the isomorphism says that $$\mathcal{L}(\mathcal{H}) \equiv \mathcal{H} \otimes \mathcal{H}$$, where $$\mathcal{L}(\mathcal{H})$$ is the space of linear operators on $$\mathcal{H}$$.

Here is how it works in general. If you have an operator $$U$$ then you can pick a basis for $$\mathcal{H}$$ and write $$U$$ in this basis as $U = \sum_{j,k} U_{j,k} \Ket{j}\Bra{k},$ where $$U_{j,k} = \Bra{j}U\Ket{k}$$.
Then you just extend the above construction by linearity and write down a vector $\Ket{\Phi_U} = \sum_{j,k} U_{j,k} \Ket{j} \otimes \Ket{k}.$ It is pretty obvious that we can go in the other direction as well: starting with a vector on $$\mathcal{H}\otimes\mathcal{H}$$, we can write it out in a product basis, turn the second ket into a bra, and then we have an operator.

So far, this is all pretty trivial linear algebra, but when we think about what this means physically it is pretty weird. One of the things that is represented by an operator in quantum theory is dynamics, in particular a unitary operator represents the dynamics of a closed system for a discrete time-step. One of the things that is represented by a vector on a tensor product Hilbert space is a pure state of a bipartite system. It is fairly easy to see that (up to normalization) unitary operators get mapped to maximally entangled states under the isomorphism, so, in some sense, a maximally entangled state is “the same thing” as a unitary operator.

This is weird because there are some things that make sense for dynamical operators that don’t seem to make sense for states and vice-versa. For example, dynamics can be composed. If $$U$$ represents the dynamics from $$t_0$$ to $$t_1$$ and $$V$$ represents the dynamics from $$t_1$$ to $$t_2$$, then the dynamics from $$t_0$$ to $$t_2$$ is represented by the product $$VU$$. Using the isomorphism, we can define a composition for states, but what on earth does this mean?

Before getting on to that, let us briefly pause to consider the Jamiolkowski version of the isomorphism. The Choi isomorphism is basis dependent: you get a slightly different state if you write down the operator in a different basis. To make things basis independent, we replace $$\mathcal{H}\otimes\mathcal{H}$$ by $$\mathcal{H}\otimes\mathcal{H}^*$$, where $$\mathcal{H}^*$$ denotes the dual space to $$\mathcal{H}$$, i.e. it is the space of bras instead of the space of kets.
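(A concrete aside, not from the post: in code, the Choi map $U \mapsto \Ket{\Phi_U}$ is just row-major reshaping of the matrix $U$, and it agrees with acting with $U$ on one half of the unnormalized $\Ket{\Phi^+} = \sum_j \Ket{jj}$. With this flattening $U$ lands on the first tensor factor; placing it on the second factor instead, as some conventions do, differs only by a swap of the factors.)

```python
import random

random.seed(2)
d = 3

def kron(A, B):
    # Kronecker product of two square matrices given as nested lists.
    n, m = len(A), len(B)
    return [[A[i // m][j // m] * B[i % m][j % m] for j in range(n * m)]
            for i in range(n * m)]

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

U = [[random.gauss(0, 1) for _ in range(d)] for _ in range(d)]  # any matrix works
I = [[1.0 if i == j else 0.0 for j in range(d)] for i in range(d)]

# Unnormalized maximally entangled vector |Phi+> = sum_j |j> (x) |j>.
phi_plus = [1.0 if i // d == i % d else 0.0 for i in range(d * d)]

# Choi vector |Phi_U> = sum_{j,k} U_{jk} |j> (x) |k>: row-major flattening of U.
phi_U = [U[i // d][i % d] for i in range(d * d)]

# The same vector, obtained by acting with U on one half of |Phi+>.
assert all(abs(x - y) < 1e-12
           for x, y in zip(phi_U, matvec(kron(U, I), phi_plus)))
```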
In Dirac notation, the Jamiolkowski isomorphism looks pretty trivial. It says $\Ket{j}\Bra{k} \qquad \qquad \equiv \qquad \qquad \Ket{j} \otimes \Bra{k}.$ This is axiomatic in Dirac notation, because we always assume that tensor product symbols can be omitted without changing anything. However, this version of the isomorphism is going to become important later.

### Conventional Interpretation: Gate Teleportation

In quantum information, the Choi isomorphism is usually interpreted in terms of “gate teleportation”. To understand this, we first reformulate the isomorphism slightly. Let $$\Ket{\Phi^+}_{AA'} = \sum_j \Ket{jj}_{AA'}$$, where $$A$$ and $$A'$$ are quantum systems with Hilbert spaces of the same dimension. The vectors $$\Ket{j}$$ form a preferred basis, and this is the basis in which the Choi isomorphism is going to be defined. Note that $$\Ket{\Phi^+}_{AA'}$$ is an (unnormalized) maximally entangled state. It is easy to check that the isomorphism can now be reformulated as $\Ket{\Phi_U}_{AA'} = I_A \otimes U_{A'} \Ket{\Phi^+}_{AA'},$ where $$I_A$$ is the identity operator on system $$A$$.

The reverse direction of the isomorphism is given by $U_A \Ket{\psi}\Bra{\psi}_A U_A^{\dagger} = \Bra{\Phi^+}_{A'A''} \left ( \Ket{\psi}\Bra{\psi}_{A''} \otimes \Ket{\Phi_U}\Bra{\Phi_U}_{A'A} \right )\Ket{\Phi^+}_{A'A''},$ where $$A''$$ is yet another quantum system with the same Hilbert space as $$A$$.

Now let’s think about the physical interpretation of the reverse direction of the isomorphism. Suppose that $$U$$ is the identity. In that case, $$\Ket{\Phi_U} = \Ket{\Phi^+}$$ and the reverse direction of the isomorphism is easily recognized as the expression for the output of the teleportation protocol when the $$\Ket{\Phi^+}$$ outcome is obtained in the Bell measurement. It says that $$\Ket{\psi}$$ gets teleported from $$A''$$ to $$A$$.
Of course, this outcome only occurs some of the time, with probability $$1/d$$, where $$d$$ is the dimension of the Hilbert space of $$A$$, a fact that is obscured by our decision to use an unnormalized version of $$\Ket{\Phi^+}$$.

Now, if we let $$U$$ be a nontrivial unitary operator then the reverse direction of the isomorphism says something more interesting. If we use the state $$\Ket{\Phi_U}$$ rather than $$\Ket{\Phi^+}$$ as our resource state in the teleportation protocol then, upon obtaining the $$\Ket{\Phi^+}$$ outcome in the Bell measurement, the output of the protocol will not simply be the input state $$\Ket{\psi}$$, but will be that state with the unitary $$U$$ applied to it. This is called “gate teleportation”.

It has many uses in quantum computing. For example, in linear optics implementations, it is impossible to perform every gate in a universal set with 100% probability. To avoid damaging your precious computational state, you can apply the indeterministic gates to half of a maximally entangled state and keep doing so until you get one that succeeds. Then you can teleport your computational state using the resulting state as a resource and end up applying the gate that you wanted. This allows you to use indeterministic gates without having to restart the computation from the beginning every time one of these gates fails.

Using this interpretation of the isomorphism, we can also come up with a physical interpretation of the composition of two states. It is basically a generalization of entanglement swapping. If you take $$\Ket{\Phi_U}$$ and $$\Ket{\Phi_{V}}$$ and perform a Bell measurement across the output system of the first and the input system of the second then, upon obtaining the $$\Ket{\Phi^+}$$ outcome, you will have the state $$\Ket{\Phi_{UV}}$$.
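The $$\Ket{\Phi^+}$$ branch of gate teleportation is easy to simulate. The sketch below is mine, not the post's: it uses a real rotation for $$U$$ to keep the arithmetic real, orders the three systems as $$(A'', A', A)$$, and checks that projecting $$A''A'$$ onto the unnormalized $$\Ket{\Phi^+}$$ leaves $$U\Ket{\psi}$$ on $$A$$.

```python
import math

d = 2
theta = 0.7
# A real rotation as our unitary U, and an arbitrary (normalized) input |psi>.
U = [[math.cos(theta), -math.sin(theta)],
     [math.sin(theta),  math.cos(theta)]]
psi = [0.6, 0.8]

def kron_vec(u, v):
    return [a * b for a in u for b in v]

# Unnormalized |Phi+> = |00> + |11>, and the resource state
# |Phi_U> = (I (x) U)|Phi+> = sum_m |m> (x) U|m> on systems A'A.
phi_plus = [1.0, 0.0, 0.0, 1.0]
phi_U = [0.0] * (d * d)
for m in range(d):
    for k in range(d):
        phi_U[m * d + k] = U[k][m]

# Total state |psi>_{A''} (x) |Phi_U>_{A'A}, systems ordered (A'', A', A).
total = kron_vec(psi, phi_U)

# Project A''A' onto the (unnormalized) |Phi+> Bell outcome:
# out[a] = sum_j <jj|_{A''A'} total[(j, j, a)].
out = [sum(total[j * d * d + j * d + a] for j in range(d)) for a in range(d)]

# The output is U|psi>: the gate has been teleported onto the input state.
expected = [sum(U[a][j] * psi[j] for j in range(d)) for a in range(d)]
assert all(abs(x - y) < 1e-12 for x, y in zip(out, expected))
```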
In this way, you can perform your entire computational circuit in advance, before you have access to the input state, and then just teleport your input state into the output register as the final step. In this way, the Choi isomorphism leads to a correspondence between a whole host of protocols involving gates and protocols involving entangled states. We can also define interesting properties of operations, such as the entanglement of an operation, in terms of the states that they correspond to. We then use the isomorphism to give a physical meaning to these properties in terms of gate teleportation.

However, one weak point of the correspondence is that it transforms something deterministic (the application of a unitary operation) into something indeterministic (getting the $$\Ket{\Phi^+}$$ outcome in a Bell measurement). Unlike the teleportation protocol, gate teleportation cannot be made deterministic by applying correction operations for the other outcomes, at least not if we want these corrections to be independent of $$U$$. The states you get for the other outcomes involve nasty things like $$U^*, U^T, U^\dagger$$ applied to $$\Ket{\psi}$$, depending on exactly how you construct the Bell basis, e.g. choice of phases. These can typically not be corrected without applying $$U$$. In particular, that would screw things up in the linear optics application wherein $$U$$ can only be implemented non-deterministically.

Before turning to our alternative interpretation of Choi-Jamiolkowski, let's generalize things a bit.

### Second Level Isomorphisms

In quantum theory we don't just have pure states, but also mixed states that arise if you have uncertainty about which state was prepared, or if you ignore a subsystem of a larger system that is in a pure state. These are described by positive, trace-one operators, denoted $$\rho$$, called density operators. Similarly, dynamics does not have to be unitary.
For example, we might bring in an extra system, interact them unitarily, and then trace out the extra system. These are described by Completely-Positive, Trace-Preserving (CPT) maps, denoted $$\mathcal{E}$$. These are linear maps that act on the space of operators, i.e. they are operators on the space of operators, and are often called superoperators.

Now, the set of operators on a Hilbert space is itself a Hilbert space with inner product $$\left \langle N, M \right \rangle = \Tr{N^{\dagger}M}$$. Thus, we can apply Choi-Jamiolkowski on this space to define a correspondence between superoperators and operators on the tensor product. We can do this in terms of an orthonormal operator basis with respect to the trace inner product, but it is easier to just give the teleportation version of the isomorphism. We will also generalize slightly to allow for the possibility that the input and output spaces of our CPT map may be different, i.e. it may involve discarding a subsystem of the system we started with, or bringing in extra ancillary systems.

Starting with a CPT map $$\mathcal{E}_{B|A}: \mathcal{L}(\mathcal{H}_A) \rightarrow \mathcal{L}(\mathcal{H}_B)$$ from system $$A$$ to system $$B$$, we can define an operator on $$\mathcal{H}_A \otimes \mathcal{H}_B$$ via $\rho_{AB} = \mathcal{E}_{B|A'} \otimes \mathcal{I}_{A} \left ( \Ket{\Phi^+}\Bra{\Phi^+}_{AA'}\right ),$ where $$\mathcal{I}_A$$ is the identity superoperator. This is a positive operator, but it is not quite a density operator, as it satisfies $$\PTr{B}{\rho_{AB}} = I_A$$, which implies that $$\PTr{AB}{\rho_{AB}} = d$$ rather than $$\PTr{AB}{\rho_{AB}} = 1$$. This is analogous to using unnormalized states in the pure-state case.

The reverse direction of the isomorphism is then given by $\mathcal{E}_{B|A} \left ( \sigma_A \right ) = \Bra{\Phi^+}_{A'A}\sigma_{A'} \otimes \rho_{AB}\Ket{\Phi^+}_{A'A}.$ This has the same interpretation in terms of gate teleportation (or rather CPT-map teleportation) as before.
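The normalization condition $$\PTr{B}{\rho_{AB}} = I_A$$ is equivalent to trace preservation of $$\mathcal{E}$$, and is easy to check for a concrete channel. The following sketch (mine, not the post's) builds the Choi operator of a depolarizing channel on a qubit and takes the partial trace:

```python
d = 2
p = 0.3

def depolarize(M):
    # E(M) = (1 - p) M + p Tr(M) I/d  -- a standard CPT map.
    tr = sum(M[i][i] for i in range(d))
    return [[(1 - p) * M[i][j] + (p * tr / d if i == j else 0.0)
             for j in range(d)] for i in range(d)]

# Choi operator rho_AB = sum_{j,k} |j><k|_A (x) E(|j><k|)_B,
# i.e. E applied to one half of the projector onto |Phi+> = sum_j |jj>.
rho = [[0.0] * (d * d) for _ in range(d * d)]
for j in range(d):
    for k in range(d):
        Ejk = depolarize([[1.0 if (r, c) == (j, k) else 0.0 for c in range(d)]
                          for r in range(d)])
        for r in range(d):
            for c in range(d):
                rho[j * d + r][k * d + c] = Ejk[r][c]

# Partial trace over B: (Tr_B rho)[j][k] = sum_r rho[(j,r),(k,r)].
tr_B = [[sum(rho[j * d + r][k * d + r] for r in range(d)) for k in range(d)]
        for j in range(d)]

# Trace preservation of E shows up as Tr_B(rho_AB) = I_A.
assert all(abs(tr_B[j][k] - (1.0 if j == k else 0.0)) < 1e-12
           for j in range(d) for k in range(d))
```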
The Jamiolkowski version of this isomorphism is given by $\varrho_{AB} = \mathcal{E}_{B|A'} \otimes \mathcal{I}_{A} \left ( \Ket{\Phi^+}\Bra{\Phi^+}_{AA'}^{T_A}\right ),$ where $$T_A$$ denotes the partial transpose over $$A$$ in the basis used to define $$\Ket{\Phi^+}$$. Although it is not obvious from this formula, this operator is independent of the choice of basis, as $$\Ket{\Phi^+}\Bra{\Phi^+}_{AA'}^{T_A}$$ is actually the same operator for any choice of basis. I'll keep the reverse direction of the isomorphism a secret for now, as it would give a strong hint towards the punchline of this blog post.

### Probability Theory

I now want to give an alternative way of thinking about the isomorphism, in particular the Jamiolkowski version, that is in many ways conceptually clearer than the gate teleportation interpretation. The starting point is the idea that quantum theory can be viewed as a noncommutative generalization of classical probability theory. This idea goes back at least to von Neumann, and is at the root of our thinking in quantum information theory, particularly in quantum Shannon theory. The basic idea of the generalization is that probability distributions $$P(X)$$ get mapped to density operators $$\rho_A$$ and sums over variables become partial traces. Therefore, let's start by thinking about whether there is a classical analog of the isomorphism, and, if so, what its interpretation is. Suppose we have two random variables, $$X$$ and $$Y$$. We can define a conditional probability distribution of $$Y$$ given $$X$$, $$P(Y|X)$$, as a positive function of the two variables that satisfies $$\sum_Y P(Y|X) = 1$$ independently of the value of $$X$$.
Given a conditional probability distribution and a marginal distribution, $$P(X)$$, for $$X$$, we can define a joint distribution via $P(X,Y) = P(Y|X)P(X).$ Conversely, given a joint distribution $$P(X,Y)$$, we can find the marginal $$P(X) = \sum_Y P(X,Y)$$ and then define a conditional distribution $P(Y|X) = \frac{P(X,Y)}{P(X)}.$ Note that I'm going to ignore the ambiguities in this formula that occur when $$P(X)$$ is zero for some values of $$X$$.

Now, suppose that $$X$$ and $$Y$$ are the input and output of a classical channel. I now want to think of the probability distribution of $$Y$$ as being determined by a stochastic map $$\Gamma_{Y|X}$$ from the space of probability distributions over $$X$$ to the space of probability distributions over $$Y$$. Since $$P(Y) = \sum_{X} P(X,Y)$$, this has to be given by $P(Y) = \Gamma_{Y|X} \left ( P(X)\right ) = \sum_X P(Y|X) P(X),$ or $\Gamma_{Y|X} \left ( \cdot \right ) = \sum_{X} P(Y|X) \left ( \cdot \right ).$ What we have here is a correspondence between a positive function of two variables (the conditional probability distribution) and a linear map that acts on the space of probability distributions (the stochastic map). This looks analogous to the Choi-Jamiolkowski isomorphism, except that, instead of a joint probability distribution, which would be analogous to a quantum state, we have a conditional probability distribution. This suggests that we made a mistake in thinking of the operator in the Choi isomorphism as a state. Maybe it is something more like a conditional state.

### Conditional States

Let's just plunge in and make a definition of a conditional state, and then see how it makes sense of the Jamiolkowski isomorphism. For two quantum systems, $$A$$ and $$B$$, a conditional state of $$B$$ given $$A$$ is defined to be a positive operator $$\rho_{B|A}$$ on $$\mathcal{H}_A \otimes \mathcal{H}_B$$ that satisfies $\PTr{B}{\rho_{B|A}} = I_A.$ This is supposed to be analogous to the condition $$\sum_Y P(Y|X) = 1$$.
Notice that this is exactly how the operators that are Choi-isomorphic to CPT maps are normalized. Given a conditional state, $$\rho_{B|A}$$, and a reduced state $$\rho_A$$, I can define a joint state via $\rho_{AB} = \sqrt{\rho_A} \rho_{B|A} \sqrt{\rho_A},$ where I have suppressed the implicit $$\otimes I_B$$ required to make the products well defined. The conjugation by the square root ensures that $$\rho_{AB}$$ is positive, and it is easy to check that $$\PTr{AB}{\rho_{AB}} = 1$$. Conversely, given a joint state, I can find its reduced state $$\rho_A = \PTr{B}{\rho_{AB}}$$ and then define the conditional state $\rho_{B|A} = \sqrt{\rho_A^{-1}} \rho_{AB} \sqrt{\rho_A^{-1}},$ where I am going to ignore cases in which $$\rho_A$$ has any zero eigenvalues so that the inverse is well-defined (this is no different from ignoring the division by zero in the classical case). Now, suppose you are given $$\rho_A$$ and you want to know what $$\rho_B$$ should be. Is there a linear map that tells you how to do this, analogous to the stochastic map $$\Gamma_{Y|X}$$ in the classical case? The answer is obviously yes. We can define a map $$\mathfrak{E}_{B|A}: \mathcal{L} \left ( \mathcal{H}_A\right ) \rightarrow \mathcal{L} \left ( \mathcal{H}_B\right )$$ via $\mathfrak{E}_{B|A} \left ( \rho_A \right ) = \PTr{A}{\rho_{B|A} \rho_A},$ where we have used the cyclic property of the trace to combine the $$\sqrt{\rho_A}$$ terms, or $\mathfrak{E}_{B|A} \left ( \cdot \right ) = \PTr{A}{\rho_{B|A} (\cdot)}.$ The map $$\mathfrak{E}_{B|A}$$ so defined is just the Jamiolkowski isomorphic map to $$\rho_{B|A}$$ and the above equation gives the reverse direction of the Jamiolkowski isomorphism that I was being secretive about earlier. The punchline is that the Choi-Jamiolkowski isomorphism should not be thought of as a mapping between quantum states and quantum operations, but rather as a mapping between conditional quantum states and quantum operations. 
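The joint-to-conditional constructions above are easy to check numerically. A minimal numpy sketch, with a randomly generated joint state chosen purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
dA, dB = 2, 2

# Random joint density operator rho_AB on H_A (x) H_B
G = rng.normal(size=(dA * dB, dA * dB)) + 1j * rng.normal(size=(dA * dB, dA * dB))
rho_AB = G @ G.conj().T
rho_AB /= np.trace(rho_AB)

# Reduced state rho_A = Tr_B rho_AB
R4 = rho_AB.reshape(dA, dB, dA, dB)
rho_A = np.einsum('abcb->ac', R4)

# Inverse square root of rho_A via its eigendecomposition
w, V = np.linalg.eigh(rho_A)
inv_sqrt_A = V @ np.diag(1.0 / np.sqrt(w)) @ V.conj().T

# Conditional state rho_{B|A} = (rho_A^{-1/2} (x) I_B) rho_AB (rho_A^{-1/2} (x) I_B)
M = np.kron(inv_sqrt_A, np.eye(dB))
rho_BgA = M @ rho_AB @ M

# It is normalized exactly like a Choi operator: Tr_B rho_{B|A} = I_A
TrB = np.einsum('abcb->ac', rho_BgA.reshape(dA, dB, dA, dB))
assert np.allclose(TrB, np.eye(dA))

# Conjugating back with sqrt(rho_A) recovers the joint state
sqrt_A = V @ np.diag(np.sqrt(w)) @ V.conj().T
N = np.kron(sqrt_A, np.eye(dB))
assert np.allclose(N @ rho_BgA @ N, rho_AB)
```

The first assertion is the defining condition $$\PTr{B}{\rho_{B|A}} = I_A$$; the second checks that the two directions of the construction really are inverses of one another.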
It is no more surprising than the fact that classical stochastic maps are determined by conditional probability distributions. If you think of it in this way, then your approach to quantum information will become conceptually simpler in a lot of ways. These ways are discussed in detail in the paper.

### Causal Conditional States

There is a subtlety that I have glossed over so far that I'd like to end with. The map $$\mathfrak{E}_{B|A}$$ is not actually completely positive, which is why I did not denote it $$\mathcal{E}_{B|A}$$, but when preceded by a transpose on $$A$$ it defines a completely positive map. This is because the Jamiolkowski isomorphism is defined in terms of the partial transpose of the maximally entangled state. Also, so far I have been talking about two distinct quantum systems that exist at the same time, whereas in the classical case, I talked about the input and output of a classical channel. A quantum channel is given by a CPT map $$\mathcal{E}_{B|A}$$ and its Jamiolkowski representation would be $\mathcal{E}_{B|A} \left (\rho_A \right ) = \PTr{A}{\varrho_{B|A}\rho_A},$ where $$\varrho_{B|A}$$ is the partial transpose over $$A$$ of a positive operator and it satisfies $$\PTr{B}{\varrho_{B|A}} = I_A$$. This is the appropriate notion of a conditional state in the causal scenario, where you are talking about the input and output of a quantum channel rather than two systems at the same time. The two types of conditional state are related by a partial transpose. Despite this difference, a good deal of unification is achieved between the way in which acausally related (two subsystems) and causally related (input and output of channels) degrees of freedom are described in this framework. For example, we can define a "causal joint state" as $\varrho_{AB} = \sqrt{\rho_A} \varrho_{B|A} \sqrt{\rho_A},$ where $$\rho_A$$ is the input state to the channel and $$\varrho_{B|A}$$ is the operator that is Jamiolkowski-isomorphic to the CPT map.
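The role of the partial transpose in the causal case can also be checked numerically. A small numpy sketch (the bit-flip channel is just an illustrative choice): taking the partial transpose of the Choi operator over $$A$$ gives an operator $$\varrho_{B|A}$$ for which the channel's action is a plain multiply-and-partial-trace, with no transpose on the input state:

```python
import numpy as np

d = 2
p = 0.25
X = np.array([[0.0, 1.0], [1.0, 0.0]])
# Kraus operators of a bit-flip channel (illustrative choice)
K = [np.sqrt(1 - p) * np.eye(d), np.sqrt(p) * X]

# Choi operator built from the unnormalized |Phi+> = sum_i |ii>
phi = np.zeros(d * d)
for i in range(d):
    phi[i * d + i] = 1.0
Phi = np.outer(phi, phi)
choi = sum(np.kron(np.eye(d), Kk) @ Phi @ np.kron(np.eye(d), Kk).T for Kk in K)

# Causal conditional state: partial transpose of the Choi operator over A
C4 = choi.reshape(d, d, d, d)                    # indices (a, b, a', b')
varrho = C4.transpose(2, 1, 0, 3).reshape(d * d, d * d)

# Still normalized like a conditional state: Tr_B varrho = I_A
assert np.allclose(np.einsum('abcb->ac', varrho.reshape(d, d, d, d)), np.eye(d))

# E(rho_A) = Tr_A[ varrho_{B|A} rho_A ] -- no transpose on rho_A needed
rho = np.array([[0.7, 0.1], [0.1, 0.3]])
prod = (varrho @ np.kron(rho, np.eye(d))).reshape(d, d, d, d)
E_rho = np.einsum('abad->bd', prod)
direct = sum(Kk @ rho @ Kk.T for Kk in K)
assert np.allclose(E_rho, direct)
```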
This unification is another main theme of the paper, and allows a quantum version of Bayes' theorem to be defined that is independent of the causal scenario.

### The Wonderful World of Conditional States

To end with, here is a list of some things that become conceptually simpler in the conditional states formalism developed in the paper:

• The Born rule, ensemble averaging, and quantum dynamics are all just instances of a quantum analog of the formula $$P(Y) = \sum_X P(Y|X)P(X)$$.
• The Heisenberg picture is just a quantum analog of $$P(Z|X) = \sum_Y P(Z|Y)P(Y|X)$$.
• The relationship between prediction and retrodiction (inferences about the past) in quantum theory is given by the quantum Bayes' theorem.
• The formula for the set of states that a system can be 'steered' to by making measurements on a remote system, as in EPR-type experiments, is just an application of the quantum Bayes' theorem.

If this has whetted your appetite, then this and much more can be found in the paper.

This entry was posted in Quantum Quandaries.

### 6 Responses to The Choi-Jamiolkowski Isomorphism: You're Doing It Wrong!

1. Blake Stacey

I was very happy to see this paper appear on the arXiv last night, as after a first read-through, it looks like this might help me cook up a quantum generalization of something I've been working on in classical information theory. Nice blog post, too! I'm sure I'll have questions once I get the chance to work through the algebra in more depth.

2. Cesar Rodriguez-Rosario

This interpretation of the isomorphism in terms of conditional states is great! The intuition you provide is very nice, I'm a fan. I'll go read the paper now to learn more details.

3. Blake Stacey

I noticed a couple of minor cosmetic points in section A of the Introduction. Corrections are in brackets: Generally, a region will refer to a collection of elementary region[s].
and e.g., the input and output spaces for a quantum channel are assigned different labels[.] (I've spent so much time this year rewriting manuscripts that I can't help but copy-edit. Sigh.)

4. Guillaume

Thanks for explaining the difference between the Choi isomorphism and the Jamiolkowski isomorphism in your paper, that was helpful to me!

5. marozols

Hi Matt. Thanks a lot for taking your time to write this post; it really clarified things for me. I was always puzzled by how the maximally entangled state and the Choi matrix are normalized, but now it all makes sense when I think of them as conditional probability distributions. I'm surprised that these kinds of analogies are not widely known and part of the standard curriculum, since they make the quantum concepts much more intuitive and presumably easier to learn. Btw, I wonder if your remark about doing the whole computation before the input is supplied also applies classically.

6. mleifer

Thanks. I'm glad you appreciated it. The remark about doing the whole computation before the input does apply classically, but it is pretty useless. What it corresponds to is choosing an input at random and then making a copy of it. The correlated state of the two copies then corresponds to the maximally entangled state. Then, you run the computation on one of the copies. When you want to perform the computation on your chosen input, you then just check whether it is the same as the randomly chosen input, this being analogous to the Bell measurement. If it is, then you accept the output of the computation in the second copy, and if it isn't, then you reject it and start the whole process again. Obviously, this is a pretty useless procedure, as the probability of correctly guessing the input goes down exponentially with the number of bits, but this is conceptually no different from what is going on in quantum gate teleportation.
http://simple.wikipedia.org/wiki/Radioactive_decay
# Radioactive decay

The trefoil symbol is used to indicate radioactive material.

Radioactive decay is the process where the nucleus of an atom changes into another type of nucleus and releases a particle at the same time. Nuclei that change like this are called radioactive or unstable. Most atoms on earth are not radioactive and are stable. But atoms that have more or fewer neutrons than a stable atom can be radioactive. For example, most carbon atoms in the world have six protons and six neutrons in their nucleus. This carbon is called carbon-12, because 12 is the number of protons plus the number of neutrons in the carbon-12 nucleus (six protons + six neutrons = 12). Carbon-12's mass number is 12. If two more neutrons are added to carbon-12, it becomes carbon-14. Carbon-14 still acts chemically like carbon, because carbon is defined by having six protons and six electrons. Carbon-14 attracts six electrons no matter how many neutrons it has. In fact, carbon-14 exists in all living things that contain carbon; all plants and animals contain carbon-14. However, carbon-14 is radioactive and so it can be detected. Carbon-14, in the small amounts found around us in nature, is harmless.

Alpha decay, beta decay and gamma decay are the most common types of radioactive decay. They are different from each other because different types of decay produce different particles. The starting radioactive nucleus is called the parent nucleus and the nucleus that it changes into is called the daughter nucleus. The high-energy particles produced by radioactive materials are called radiation.

## Nuclear transformations and energy

Radioactive decay changes an atom from one that has higher energy inside its nucleus into one with lower energy.
The change of energy of the nucleus is given to the particles that are created. The energy released by radioactive decay may be carried away by gamma-ray electromagnetic radiation (a type of light), by a beta particle, or by an alpha particle. In all of those cases, the energy lost by the nucleus is carried away. And in all of those cases, the total of the positive and negative charges of the atom's protons and electrons sums to zero before and after the change.

## Alpha decay

During alpha decay, the atomic nucleus releases an alpha particle. Alpha decay causes the nucleus to lose two protons and two neutrons. Alpha decay causes the atom to change into another element, because the atom loses two protons (and two electrons). For example, if americium were to go through alpha decay, it would change into neptunium, because neptunium is defined by having two protons fewer than americium. Alpha decay usually happens in the heaviest elements, such as uranium, thorium, plutonium, and radium. Alpha particles cannot travel through more than a few centimeters of air. Alpha radiation cannot hurt humans when the alpha radiation source is outside the human body, because human skin does not let the alpha particles go through. Alpha radiation can be very harmful if the source is inside the body, such as when people breathe in dust or gas containing materials which decay by emitting alpha particles (radiation).

## Beta decay

There are two kinds of beta decay, beta-plus and beta-minus. In beta-minus decay, the nucleus gives out a negatively charged electron and a neutron changes into a proton:

$n^0 \rightarrow p^+ + e^- + \bar{\nu}_e$

where

$n^0$ is the neutron
$\ p^+$ is the proton
$e^-$ is the electron
$\bar{\nu}_e$ is the anti-neutrino

Beta-minus decay happens in nuclear reactors. In beta-plus decay, the nucleus releases a positron, which is like an electron but positively charged, and a proton changes into a neutron:

$\ p^+ \rightarrow n^0 + e^+ + {\nu}_e$.
where

$\ p^+$ is the proton
$n^0$ is the neutron
$e^+$ is the positron
${\nu}_e$ is the neutrino

Beta-plus decay happens inside the sun and in some types of particle accelerators.

## Gamma decay

Gamma decay happens when a nucleus releases a high-energy packet of electromagnetic energy called a gamma ray. Gamma rays do not have electrical charge, but they do have angular momentum. Gamma rays are usually emitted from nuclei just after other types of decay. Gamma rays can be used to see through material, to kill bacteria in food, to find some types of disease, and to treat some kinds of cancer. Gamma rays have the highest energy of any electromagnetic wave, and gamma-ray bursts from space are the most energetic releases of energy known, even more energetic than supernovas.
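The way each kind of decay changes the number of protons and neutrons can be summed up in a short sketch (the function name is just for illustration):

```python
# How each decay mode changes a nucleus with Z protons and N neutrons.
# Alpha decay removes 2 protons and 2 neutrons; beta-minus turns a
# neutron into a proton; beta-plus turns a proton into a neutron;
# gamma decay only releases energy, leaving Z and N unchanged.

def decay(Z, N, mode):
    if mode == "alpha":
        return Z - 2, N - 2
    if mode == "beta-minus":
        return Z + 1, N - 1
    if mode == "beta-plus":
        return Z - 1, N + 1
    if mode == "gamma":
        return Z, N
    raise ValueError("unknown decay mode: " + mode)

# Americium-241 (95 protons, 146 neutrons) alpha-decays to neptunium-237.
Z, N = decay(95, 146, "alpha")
assert (Z, N) == (93, 144)      # neptunium-237: 93 + 144 = 237

# Carbon-14 (6 protons, 8 neutrons) beta-minus decays to nitrogen-14.
Z, N = decay(6, 8, "beta-minus")
assert (Z, N) == (7, 7)
```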
http://unapologetic.wordpress.com/2011/07/04/the-hopf-fibration/
# The Unapologetic Mathematician

## The Hopf Fibration

As a nontrivial example of a foliation, I present the "Hopf fibration". The name I won't really explain quite yet, but we'll see it's a one-dimensional foliation of the three-dimensional sphere.

So, first let's get our hands on the three-sphere $S^3$. This is by definition the collection of vectors of length $1$ in $\mathbb{R}^4$, but I want to consider this definition slightly differently. Since the complex plane $\mathbb{C}$ is isomorphic to the real plane $\mathbb{R}^2$ as a real vector space, we find the isomorphism $\mathbb{R}^4\cong\mathbb{C}^2$. Now we use the inner product on $\mathbb{C}$ to define $S^3$ as the collection of vectors $(z_1,z_2)$ with $\lvert z_1\rvert^2+\lvert z_2\rvert^2=1$.

Now for each $\alpha\in\mathbb{R}$ we can define a foliation. The leaf through the point $(z_1,z_2)$ is the curve $\left(z_1e^{it},z_2e^{i\alpha t}\right)$. Since multiplying by $e^{it}$ and $e^{i\alpha t}$ doesn't change the norm of a complex number, this whole curve is still contained within $S^3$. Every point in $S^3$ is clearly contained in some such curve, and being contained within the same curve is an equivalence relation on points: any point is in the same curve as itself; if $w_1=z_1e^{it}$ and $w_2=z_2e^{i\alpha t}$, then $z_1=w_1e^{i(-t)}$ and $z_2=w_2e^{i\alpha(-t)}$; and if $w_1=z_1e^{is}$, $w_2=z_2e^{i\alpha s}$, $x_1=w_1e^{it}$ and $x_2=w_2e^{i\alpha t}$, then $x_1=z_1e^{i(s+t)}$ and $x_2=z_2e^{i\alpha(s+t)}$. This shows that the curves do indeed partition $S^3$.

Now we need to show that the tangent spaces to the leaves provide a distribution on $S^3$. Since this will be a one-dimensional distribution, we just need to find an everywhere nonzero vector field tangent to the leaves, and the derivative of the curve through each point will do nicely.
At $(z_1,z_2)$ we get the derivative

$\displaystyle\frac{d}{dt}\left(z_1e^{it},z_2e^{i\alpha t}\right)\Big\vert_0=(iz_1,i\alpha z_2)$

It should be clear that this defines a smooth vector field over all of $S^3$, though it may not be clear from the formulas that these vectors are actually tangent to $S^3$. To see this we can either (messily) convert back to real coordinates or we can think geometrically and see that the tangent to a curve within a submanifold must be tangent to that submanifold.

The Hopf fibration is what results when we pick $\alpha=1$, but the case of irrational $\alpha$ is very interesting. In this case we find that some leaves curve around and meet themselves, forming circles, while others never meet themselves, forming homeomorphic images of the whole real line. What this tells us is that not all the leaves of a foliation have to look like each other. To see this, we try to solve the equations

$\displaystyle\begin{aligned}z_1&=z_1e^{it}\\z_2&=z_2e^{i\alpha t}\end{aligned}$

The first equation tells us that either $z_1=0$ or $t=2\pi n$. In the first case, we simply have the circle $\lvert z_2\rvert=1$. In the second case, the second equation tells us that either $z_2=0$ or $2\pi m=\alpha t=2\pi\alpha n$. The case where $z_2=0$ is similar to the case $z_1=0$, but if neither coordinate is zero then we find $\alpha=\frac{m}{n}$. But we assumed that $\alpha$ is irrational, so we get no nontrivial solutions for $t$ here.

Since the curves don't change the length of either component, we can get other examples of foliations. For instance, if we let $\lvert z_1\rvert=\lvert z_2\rvert=\frac{1}{\sqrt{2}}$, then the curve will stay on the torus $S^1\times S^1$ where each circle has radius $\frac{1}{\sqrt{2}}$ in its copy of $\mathbb{C}$. Looking at all the curves on this surface gives a foliation of the torus.
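Both facts (the curve stays in $S^3$, and the derivative is tangent to it) are easy to verify numerically. Here is a small sketch with an arbitrarily chosen starting point and an irrational $\alpha$:

```python
import numpy as np

alpha = np.sqrt(2)                 # an irrational slope
z1, z2 = 0.6 + 0.0j, 0.8j          # |z1|^2 + |z2|^2 = 0.36 + 0.64 = 1

# The leaf through (z1, z2): t -> (z1 e^{it}, z2 e^{i alpha t})
def leaf(t):
    return z1 * np.exp(1j * t), z2 * np.exp(1j * alpha * t)

# The curve never leaves S^3: the norm is constant along it
for t in np.linspace(0.0, 20.0, 201):
    w1, w2 = leaf(t)
    assert np.isclose(abs(w1)**2 + abs(w2)**2, 1.0)

# The derivative at t = 0 is (i z1, i alpha z2); tangency to the sphere
# means the real inner product with the position vector vanishes
v1, v2 = 1j * z1, 1j * alpha * z2
assert np.isclose((np.conj(z1) * v1 + np.conj(z2) * v2).real, 0.0)
```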
If $\alpha$ is irrational, the curve winds around and around the donut-shaped surface, never quite coming back to touch itself, but eventually coming arbitrarily close to any given point on the surface.

Posted by John Armstrong | Differential Topology, Topology
http://www.physicsforums.com/showthread.php?p=4179396
Physics Forums (Thread Closed)

## limit : alternate defination

Quote by ato i really dont know you mean by "computer style code" . You are writing in a format similar to the pseudo-code used by programmers. If you want epsilon-delta in symbolic logic, it'll look more like the following: ##(\forall \epsilon \, : \epsilon > 0)(\exists \delta \, : \, \delta > 0)(\forall x)((0 < |x-a| < \delta)\rightarrow(0<|f(x)-L|<\epsilon))## 1. using AND,OR comes under logic. it helps to say what we want say to in more flexible form . I'm pretty sure Fredrik knows what conjunction and disjunction are. 2. using square brackets are needed to show exactly which statements are joined by AND OR => We use round brackets actually. A => B AND C could either be misunderstood as A => [B AND C] or [ A => B ] AND C The first one. Conjunction has higher precedence than implication. Quote by Fredrik I have no idea what you mean by δ1, ε1, δ2, and so on. i mean δ1 means first element of (0,∞) , but i think you are going to ask δ1 or δ2 is. i would say i dont know , i dont need to know. what i know is (0,∞) is set and if it has at least two element then δ1 and δ2 exist and using them in a statement is not a problem. however i would insist/request you/anyone to give confirmation on lim f(x) at a is L if and only if f([x1,x2]) is real interval for all x1 ,x2 ∈ domainf update: lim f(x) at a is L if and only if f([a-δ,a+δ]) is real interval for at least one δ > 0 it is wrong because for example f([0-π,0+π]) is a real interval even if lim f(x) at 0 = 1/2 where f(x) = sin x . Mentor Quote by ato i mean δ1 means first element of (0,∞) , but i think you are going to ask δ1 or δ2 is. i would say i dont know , i dont need to know. what i know is (0,∞) is set and if it has at least two element then δ1 and δ2 exist and using them in a statement is not a problem. But you're not just talking about two deltas, you're talking about an infinite sequence of deltas.
What is the significance of that sequence? If you think that there's an infinite sequence ##\delta_1,\delta_2,\dots## such that for all x in (0,∞) there's a positive integer n such that ##\delta_n=x##, then you're wrong. That's what "(0,∞) is not countable" means. If you meant that ##\delta_1,\delta_2,\dots## is just some arbitrary sequence in (0,∞), then you need to say so. However, I don't see a reason to bring a sequence into this. I thought you were just trying to rewrite what the epsilon-delta definition is saying in a way that you're more comfortable with. And the epsilon-delta definition doesn't mention any sequences. Quote by ato however i would insist/request you/anyone to give confirmation on The statement to the right of "if and only if" is not equivalent to ##\lim_{x\to a}f(x)=L##. The only observation I needed to make to know that for sure is that it doesn't contain a or L. Mentor Quote by ato nothing => [ [ b =/= L] => [ lim f(c) =/= b ]] Quote by Mark44 I have no idea what you're trying to say with that. It is logically meaningless. Quote by ato what i meant is i cannot find a statement that i could replace with "nothing" such as [ replacing_statement => [ [ b =/= L] => [ lim f(c) =/= b ]] ] is true. I still have no idea what you mean by this. This symbol -- => -- is usually taken to mean "implies". Also, to indicate "not equals" many people write !=, which is notation that comes from the C programming language. Even better is to use ≠, a symbol that is available in the Quick Symbols that appear when you click Go Advanced below the input pane. You can't write stuff like "replacing_statement => b ≠ L" and expect to be understood, without having said what "replacing_statement" represents. Quote by ato what i meant by [ [ b =/= L] => [ lim f(c) =/= b ]] is [ [ [ b =/= L ] and [ lim f(x) at c = L] ] => [ lim f(x) at c =/= b ]] however this is always true. What does "lim f(x) at c" mean? Is this what you mean? 
$$\lim_{x \to c} f(x)$$ This limit either exists (and is equal to some number, say L) or it doesn't exist. We don't say "limit of f(x) at c." This limit can exist whether or not f(c) happens to be defined. If it turns out that f is defined at c, the limit doesn't have to be the same value. Quote by ato which is why you might thought is meaningless. however what i actually wanted to say is [ replacing_statement => lim f(x) at c = L ] and [ L =/= Lwrong ] and [ replacing_statement =/> lim f(x) at c = Lwrong ] This is pretty much gibberish. Mentor From your update in post #19 Quote by ato f([0-π,0+π]) is a real interval even if lim f(x) at 0 = 1/2 where f(x) = sin x . I'm not sure what you mean by f([0-π,0+π]). The sine function maps the interval [##-\pi, \pi##] to the interval [-1, 1]. I don't know what you mean by "lim f(x) at 0 = 1/2". Are there typos in this? The sine function is continuous for all reals, so $$\lim_{x \to c} sin(x) = sin(c)$$. Mentor Quote by Mark44 What does "lim f(x) at c" mean? Is this what you mean? $$\lim_{x \to c} f(x)$$ This limit either exists (and is equal to some number, say L) or it doesn't exist. We don't say "limit of f(x) at c." As you know, what we do say is "the limit of f(x) as x goes to c". I've always felt that this is a little bit odd. We're talking about the limit of a function, and that function is denoted by f, not f(x). So there should be a way of saying "the limit of f(x) as x goes to c" without mentioning a dummy variable. I would like to say "the limit of f at c". Obviously, this is to be interpreted as "(the limit of f) at c", not "the limit of (f at c)". I wouldn't approve of the hybrid "the limit of f(x) at c", because when you mention the dummy variable x, you also have to say that it goes to something. I may have contributed to some confusion by using the phrase "f has a limit at c if..." (without this explanation) in one of my earlier posts in this thread. 
Regarding the =/= stuff that you're quoting in the middle of post #21, it seems to me that what he's saying is just this: $$\lim_{x\to c}f(x)=L\neq b\ \Rightarrow\ \lim_{x\to c}f(x)\neq b.$$ Hey ato, why not just accept the standard definition of limit? Let's suppose your definition of a limit is indeed correct, in that it allows you to prove all the standard theorems of calculus without introducing any inconsistencies. Even then, why even consider a "new definition" if an older definition already does the job in the most simple possible way, unless you can find some specific application to which your definition is tailored? I think you might consider accepting the work of: He was the original creator of the limit definition, and he did it 300 years ago. It's the standard you see in most texts, because his definition is simple yet correct. BiP Mentor Quote by Fredrik As you know, what we do say is "the limit of f(x) as x goes to c". I've always felt that this is a little bit odd. We're talking about the limit of a function, and that function is denoted by f, not f(x). So there should be a way of saying "the limit of f(x) as x goes to c" without mentioning a dummy variable. I don't see why this should be a consideration. c is a number on the x (typically) axis, and we're working with numbers that are "near" c (and are obviously values on the x-axis). Quote by Fredrik I would like to say "the limit of f at c". Obviously, this is to be interpreted as "(the limit of f) at c", not "the limit of (f at c)". I understand what you're saying, but I disagree. If a limit L exists, the closer x is to c, the closer f(x) -- not f -- is to L. What I'm doing here is a sort of paraphrase of the ##\epsilon - \delta## definition. Quote by Fredrik I wouldn't approve of the hybrid "the limit of f(x) at c", because when you mention the dummy variable x, you also have to say that it goes to something. I may have contributed to some confusion by using the phrase "f has a limit at c if..." 
(without this explanation) in one of my earlier posts in this thread. Regarding the =/= stuff that you're quoting in the middle of post #21, it seems to me that what he's saying is just this: $$\lim_{x\to c}f(x)=L\neq b\ \Rightarrow\ \lim_{x\to c}f(x)\neq b.$$ Mentor Quote by Mark44 I understand what you're saying, but I disagree. If a limit L exists, the closer x is to c, the closer f(x) -- not f -- is to L. What I'm doing here is a sort of paraphrase of the ##\epsilon - \delta## definition. Yes, we obviously have to say "f(x)" in a sentence that starts with "the closer x is to c...". I'm not saying that we shouldn't use sentences that start that way. I'm just saying that the phrase "the limit of f at c" makes more sense than "the limit of f(x) as x goes to c", since we're dealing with the limit of a function denoted by f. Your comment about the epsilon-delta definition made me realize that the reason why people use notation and terminology that mentions a dummy variable is that the definition they're working with mentions a dummy variable. I guess this has some pedagogical advantages. We could of course use an equivalent definition that doesn't mention a dummy variable: For each open interval A that contains L, there's an open interval B such that c is a member of B and f(B) is a subset of A. Edit: Oops, this is wrong! This definition should say "...and f(B-{c}) is a subset of A". If this had been the most popular definition, the standard notation would probably be ##\lim_c f## instead of ##\lim_{x\to c}f(x)##. We could say similar things about the notation ##\int_a^b f(x)\,\mathrm{d}x## for a Riemann integral. It would make at least as much sense to write ##\int_a^b f##, but it's nice to have a notation for the integral that makes it look like a Riemann sum. The notation reminds us of the definition. This is probably also why a notation without dummy variables is totally dominant in the context of Lebesgue integration. 
In that case, we don't want to be reminded of Riemann sums.

Quote by Mark44
From your update in post #19 I'm not sure what you mean by f([0-π,0+π]). The sine function maps the interval [##-\pi, \pi##] to the interval [-1, 1]. I don't know what you mean by "lim f(x) at 0 = 1/2". Are there typos in this? The sine function is continuous for all reals, so $$\lim_{x \to c} \sin(x) = \sin(c)$$.

f([0-π,0+π]) is a set (not a function), constructed by taking each and every number from [0-π,0+π], calculating f(that_number), and putting it into the set f([0-π,0+π]). I borrowed this notation from Fredrik's post, where he used f(B). If there is any other more standard and accepted notation, please tell me and I will use that. Whatever the notation is, I don't say "f([0-π,0+π]) is a real interval such that f-1(-π) and f-1(π) are the endpoints."

What I mean by "lim f(x) at 0 = 1/2" is limx → 0 f(x) = 1/2. And what I mean by this is: let's say somehow we came up with a convoluted expression (call it g(x)) for sin x, such that when g(0) is calculated we get an undefined form. Since the limit helps us to define the undefined, we seek the limit's (my wrong definition here) help. The limit says lim g(x) at 0 is L if and only if f([0-δ,0+δ]) is a real interval for at least one δ > 0. Let's put L = 1/2 and try to prove the required conditions for at least one δ. As you can see, f([0-π,0+π]) is a real interval even if limx → 0 g(x) = 1/2, because the 0 which was needed from g(0) is provided by f(-π) and f(π). Hence limx → 0 g(x) = 1/2 comes out "correct". But that's not what we want, so I made a change in what I wrote:

lim f(x) at a is L if and only if f([x1,x2]) is a real interval for all x1, x2 ∈ domainf

So even though "f([0-π,0+π]) is a real interval" is true, "f([0-x1,0+x2]) is a real interval" is false for every 0 < x1 < π, 0 < x2 < π, and since we could not prove the required conditions, "limx → 0 g(x) = 1/2" can't be proved true.
However, I would mention an addendum:

limx→af(x) = L ⇔ f([x1,x2]) is a real interval for all x1, x2 ∈ domainf

and

limx→af(x) != L ⇔ f([x1,x2]) is not a real interval for at least one x1 and x2 ∈ domainf

So now we can also prove "limx → 0 g(x) != 1/2" true.

Quote by Mark44
This is pretty much gibberish.

Quote by Fredrik
it seems to me that what he's saying is just this: $$\lim_{x\to c}f(x)=L\neq b\ \Rightarrow\ \lim_{x\to c}f(x)\neq b.$$

No, that's not what I was saying. Here you could use "L != b" to prove "limx→c != b". For example, above I did not use the fact "L != b" while proving "limx → 0 g(x) != 1/2". Now you could replace replacing_statement with my definition and you would see that it does not imply what it should not imply. In fact it proves it false, which is even better.

Quote by Fredrik
The statement to the right of "if and only if" is not equivalent to ##\lim_{x\to a}f(x)=L##. The only observation I needed to make to know that for sure is that it doesn't contain a or L.

What does not contain a or L? Could you elaborate?

Quote by Bipolarity
Hey ato, why not just accept the standard definition of limit? Let's suppose your definition of a limit is indeed correct, in that it allows you to prove all the standard theorems of calculus without introducing any inconsistencies. Even then, why even consider a "new definition" if an older definition already does the job in the most simple possible way, unless you can find some specific application to which your definition is tailored?

I don't understand exactly what the epsilon-delta definition is. I see two versions when I read the epsilon-delta definition:

1. for all ε > 0, for at least one δ > 0, f([a-δ,a+δ]) ⊂ [L-ε,L+ε]
2. for all ε > 0, for at least one δ > 0, f([a-δ,a+δ]) ⊂ f([f-1(L-ε),f-1(L+ε)]), assuming f-1(L-ε) and f-1(L+ε) are unique

The 1st version is quite equivalent to my definition. In other words, I don't understand epsilon-delta; I understand at least one equivalent version of it. About its application?
I don't know. I don't expect it to. The definition served its purpose; that's enough for me. But if I find anything (as a result of my_def) in the future that does not agree with current calculus, I will consider it a failure. I have not yet. If I do, I would certainly post here.

Quote by ato
f([0-π,0+π]) is a set (not a function), constructed by taking each and every number from [0-π,0+π], calculating f(that_number), and putting it into the set f([0-π,0+π]). I borrowed this notation from Fredrik's post, where he used f(B). If there is any other more standard and accepted notation, please tell me and I will use that.

It's the correct notation. It just looks strange because you write 0, when most of us would just drop it.

Quote by ato
whatever the notation is, i don't say "f([0-π,0+π]) is a real interval such that f-1(-π) and f-1(π) are the endpoints."

You keep bringing up this concept of "##f([x_1,x_2])## is a real interval". Why? It's not true in general. And it's not important when discussing limits.

Quote by ato
since limit helps us to define the undefined

Limits do no such thing. If you are talking about indeterminate forms, then that is a different matter.

Quote by ato
i don't understand exactly what the epsilon-delta definition is. i see two versions when i read the epsilon-delta definition. 1. for all ε > 0, for at least one δ > 0, f([a-δ,a+δ]) ⊂ [L-ε,L+ε] <SNIP> the 1st version is quite equivalent to my definition.

You seem to have changed your definition multiple times, so I can't keep up. You need to write your definition out in full.

Mentor

Quote by ato
what does not contain a or L? could you elaborate?

You asked for a comment about the statement

lim f(x) at a is L if and only if f([x1,x2]) is a real interval for all x1, x2 ∈ domainf

I'm telling you that the first statement

lim f(x) at a is L

can't possibly be equivalent to the second statement

f([x1,x2]) is a real interval for all x1, x2 ∈ domainf

since the second statement doesn't contain L or a.

Quote by ato
i don't understand exactly what the epsilon-delta definition is.
i see two versions when i read the epsilon-delta definition:

1. for all ε > 0, for at least one δ > 0, f([a-δ,a+δ]) ⊂ [L-ε,L+ε]
2. for all ε > 0, for at least one δ > 0, f([a-δ,a+δ]) ⊂ f([f-1(L-ε),f-1(L+ε)]), assuming f-1(L-ε) and f-1(L+ε) are unique

If you replace the closed intervals with open intervals (i.e. change f([a-δ,a+δ]) ⊂ [L-ε,L+ε] to f((a-δ,a+δ)) ⊂ (L-ε,L+ε)) in the first definition, you have a definition that's almost the epsilon-delta definition. However, your definition would fail for functions that are defined at a but not continuous there, for example the f defined by f(x)=0 for all x≠a and f(a)=1. I think I may have contributed to this confusion by posting two definitions where I made this same mistake. I apologize for that. I think this would work however: "For each ε > 0, there's a δ > 0 such that f((a-δ,a+δ)-{a}) ⊂ (L-ε,L+ε)".

The second definition will at best fail for a lot more functions (even if we replace the closed intervals by open intervals and remove the point a from the first interval). It might also be completely wrong. I haven't thought it through well enough to know. Maybe it works for strictly increasing functions or something. Also, you can't say something like "assuming f-1(L-ε) and f-1(L+ε) are unique" after the part of the statement that relies on this. You would have to start the definition by saying something like "for all strictly increasing functions f, we define ##\lim_{x\to c}f(x)## as..."

Quote by Fredrik
If you replace the closed intervals with open intervals (i.e. change f([a-δ,a+δ]) ⊂ [L-ε,L+ε] to f((a-δ,a+δ)) ⊂ (L-ε,L+ε)), you have a definition that's almost the epsilon-delta definition. However, your definition would fail for functions that are defined at a but not continuous there, for example the f defined by f(x)=0 for all x≠a and f(a)=1. I think this would work however: "For each ε > 0, there's a δ > 0 such that f((a-δ,a+δ)-{a}) ⊂ (L-ε,L+ε)".
The mistake I found, which I mentioned in post 19, also applies to this (sorry for not correcting it here). The gist of it is that we cannot use ⊂ there, because what I want to say is to change "for all [y1,y2] ⊂ f([a-δ,a+δ])" to "y1, y2 ∈ f([a-δ,a+δ])". Alternatively, ⊂ says something about (L-ε,L+ε), not about f((a-δ,a+δ)-{a}). I want the function to remain/become continuous after choosing a limit at a point. By continuous I mean f(x) increases through all real numbers (it should not skip a number) if x is increased continuously (without skipping a number) from x1 to x2. That's when I say f([x1,x2]) is a real interval, because for a set to be a real interval it should include all the numbers between any two of its elements.

Let me show a step-by-step transition from your definition (above version) to my definition (recent version).

1. "For each ε > 0, there's a δ > 0 such that f((a-δ,a+δ)-{a}) ⊂ (L-ε,L+ε)"

Why ⊂ has to go: for example, consider f as a monotonically increasing function. Then you could write the above statement as "For each ε > 0, f((f-1(L-ε),f-1(L+ε))-{a}) ⊂ (L-ε,L+ε)". But f((f-1(L-ε),f-1(L+ε))-{a}) ⊂ (L-ε,L+ε) will always be true irrespective of L. What we need to say is that f((a-δ,a+δ)-{a}) is a continuous set, that f((a-δ,a+δ)-{a}) is a real interval. So we can say

2. "For each δ > 0, f((a-δ,a+δ)-{a}) is a real interval" or "For at least one δ > 0, f((a-δ,a+δ)-{a}) is a real interval"

But excluding a is not a good idea, because let's say lim f at a is valid; even then f((a-δ,a+δ)-{a}) would not be a real interval (at least for monotonically increasing functions). So, including a, we can say

3. "For each δ > 0, f((a-δ,a+δ)) is a real interval" or "For at least one δ > 0, f((a-δ,a+δ)) is a real interval"

Let's rule out "For at least one δ > 0, f((a-δ,a+δ)) is a real interval". As I mentioned in post 19, consider f(x) = sin x, and suppose we knowingly define lim sin x as x goes to 0 = 1/2, a wrong value.
So to prove that this is actually a correct limit, we need to prove "f((a-δ,a+δ)) is a real interval" for at least one δ. Since f([-π,π]) is a real interval, lim sin x as x goes to 0 = 1/2 would follow. But if we include the "for each δ" clause, we can prevent that, and we can show L_wrong is indeed wrong, because f((-π/6,π/6)) is not a real interval. So let's test

4. "For each δ > 0, f((a-δ,a+δ)) is a real interval"

You are correct that this version would not give the expected limit lim f(x) as x goes to a for f(x)=0 for all x!=a and f(a)=1, but then the traditional limit does not either; in other words, the traditional limit would say that the limit does not exist for f(x) as x goes to a, whereas this version tries to assign a value to f such that f becomes continuous. However, this version would also fail for every x != a, because {0,1} is not a real interval. So let's use this version:

5. "For at least one δ > 0, f((a-Δx,a+Δx)) is a real interval for all Δx, where 0<Δx<δ"

Now we can find out the limit at all points except a, for f(x)=0 for all x!=a and f(a)=1.

Quote by Fredrik
You asked for a comment about the statement

lim f(x) at a is L if and only if f([x1,x2]) is a real interval for all x1, x2 ∈ domainf

I'm telling you that the first statement

lim f(x) at a is L

can't possibly be equivalent to the second statement

f([x1,x2]) is a real interval for all x1, x2 ∈ domainf

since the second statement doesn't contain L or a.

Why do you think L or a has to be mentioned? Do you think that, if

statement A : [ anything1 a anything2 ]
statement B : [ anything3 ]

then [ A <=> B ] is false? If yes, why would you think that? I can't think of any benefit it would give; or worse, it gives wrong results, like this:

[ f is an increasing function ] <=> [ for every x1, x2 ∈ domainf, [ x1 < x2 <=> f(x1) < f(x2) ] ]

would be false, which it's not supposed to be. Or change L in the LHS, and if the RHS does not change (it does, that's what I say), then let me know and I would give more arguments against this.
If the RHS does change, then you would notice that for the correct L, f([x1,x2]) becomes a real interval, and vice versa. And that's what I said.

Mentor

Quote by ato
why do you think L or a has to be mentioned?

OK, there are equivalences where a variable is mentioned in one of the statements but not the other, for example $$x=1 \text{ and } x\neq 1 \Leftrightarrow 0\neq 0.$$ However, in the case of the two statements we're talking about, it couldn't possibly be more obvious that the two statements are not equivalent. Note for example that, given a function f and a number a, the truth value (true or false) of the first statement depends on the value of the variable L, but the truth value of the second statement does not.

Ato, you are creating a definition in order to solve a problem that was solved a few centuries ago. Your statements may be correct in your head (or maybe not) but do not follow normal practices in logic. Your statement

$\lim_{x\rightarrow a}f(x)=L\iff f([x_1,x_2])$ is a real interval for all $x_1,x_2$ on the function's domain

is not the definition of a limit. Take the example of $f(x)=\frac{\sin x}{x}$, which has a limit of 1 as $x\rightarrow 0$, but $f([0,\pi])$ is not an interval because $f(0)$ is not defined. Perhaps the idea that you have in your mind is correct, but your notation is wrong. You should stop trying to come up with a new way to say something before you understand the old way. Learn calculus before you attempt to rewrite it. Even if you are some genius who came up with a better way to define limits, you are not capable of speaking the language of mathematics to get your point across. Go back to your books (or get new ones) and learn how limits work. They do. The definition is good. It really doesn't make sense to discuss this with you until you understand the definition of a limit. Also, http://www.maths.tcd.ie/~dwilkins/LaTeXPrimer/ would make my eyes hurt a little less.

Mentor

I wholeheartedly agree with everything that DrewD said.
I just want to add that we have put together a guide for using LaTeX here in the forum. Link.

This thread has gone on for long enough. Ato, I would suggest that you read a good calculus book such as Spivak or Apostol. Limits are very well understood these days, and the definition we have right now works. Please make some effort to understand our definition. If you do, then we might discuss things again.

Thread Closed
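As a postscript to the thread: the standard epsilon-delta criterion that the mentors were defending can at least be probed numerically for DrewD's example f(x) = sin(x)/x at a = 0. This is only a sanity check, not a proof, and the particular choice delta = sqrt(6·eps) is mine (it comes from the elementary bound |sin(x)/x - 1| ≤ x²/6), not from the thread.

```python
import math

# Numerical probe (not a proof) of the epsilon-delta definition for
# f(x) = sin(x)/x at a = 0, where L = 1 even though f(0) is undefined.
def f(x):
    return math.sin(x) / x

L = 1.0
for eps in (0.1, 0.01, 0.001):
    # a valid delta for this f, using |sin(x)/x - 1| <= x^2/6
    delta = math.sqrt(6 * eps)
    # sample the punctured interval (-delta, delta) - {0}
    for k in range(1, 1001):
        x = delta * k / 1001
        assert abs(f(x) - L) < eps
        assert abs(f(-x) - L) < eps
```

Every sample of the punctured interval lands inside (L - eps, L + eps), exactly as the "f((a-δ,a+δ)-{a}) ⊂ (L-ε,L+ε)" formulation from the thread requires.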
http://physics.stackexchange.com/questions/41209/why-has-the-trace-of-the-energy-momentum-tensor-to-vanish-for-conserved-scaling?answertab=oldest
Why has the trace of the energy-momentum tensor to vanish for conserved scaling currents to exist?

In this paper, the authors say that the trace of the energy-momentum tensor has to vanish to allow for the existence of conserved dilatation or scaling currents, as defined on p 10, Eq (22): $$\Theta^{\mu} = x_{\nu} \Theta^{\mu\nu} + \Sigma^\mu.$$ $\Theta^{\mu\nu}$ is the energy-momentum tensor and $\Sigma^\mu = d_{\phi}\pi_i^{\mu}\phi^i$ is the internal part. This fact is just mentioned and I don't understand why this has to be the case, so can somebody explain it to me?

– Qmechanic♦ Nov 19 '12 at 22:36

1 Answer

Without the internal part: the divergence is $$\nabla_\mu (x_\nu \Theta^{\mu\nu}) = x_\nu \nabla_\mu \Theta^{\mu\nu} + \frac12(\nabla_\mu x_\nu + \nabla_\nu x_\mu) \Theta^{\mu\nu},$$ where I used that $\Theta^{\mu\nu}$ is symmetric. Recalling that the energy-momentum tensor is divergence-free, the first term drops out. Assuming that $x^\nu$ generates a dilation/scaling symmetry (and not a bona fide symmetry), we know that its deformation satisfies $$\nabla_\mu x_\nu + \nabla_\nu x_\mu \propto \mathcal{L}_x g_{\mu\nu} \propto g_{\mu\nu},$$ where $\mathcal{L}$ is the Lie derivative. (In the case that $x^\nu$ generates a bona fide symmetry, the term vanishes by Killing's equation.) Hence in this case, for the current to be conserved (that is, divergence-free), we need that $g_{\mu\nu} \Theta^{\mu\nu} = 0$; that is, the energy-momentum tensor is trace-free.
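The flat-space version of the identity in the answer can be checked numerically. The sketch below works in 2D Euclidean space, where $\partial_\mu x_\nu = \delta_{\mu\nu}$, so $\partial_\mu(x_\nu T^{\mu\nu}) = x_\nu \partial_\mu T^{\mu\nu} + \delta_{\mu\nu}T^{\mu\nu}$: for a symmetric, divergence-free tensor the divergence of the dilatation current is exactly the trace. The "Airy" construction $T^{\mu\nu} = \epsilon^{\mu a}\epsilon^{\nu b}\partial_a\partial_b \chi$ used to manufacture such a tensor, the choice of potential, and the finite-difference step are all my own (not from the answer).

```python
import random

# Flat 2D Euclidean toy check: for a symmetric, divergence-free T^{mu nu},
# d_mu (x_nu T^{mu nu}) = trace(T).  T is built from a potential chi via
# T^{mu nu} = eps^{mu a} eps^{nu b} d_a d_b chi (symmetric and divergence-free
# by construction).
def chi(x, y):
    return x**3 * y**2        # any smooth function works

def d(f, i, p, h=1e-3):
    # central difference of f along coordinate i at point p
    q1, q2 = list(p), list(p)
    q1[i] += h
    q2[i] -= h
    return (f(*q1) - f(*q2)) / (2 * h)

def T(x, y):
    # with eps^{12} = -eps^{21} = 1:
    # T^{11} = d2 d2 chi,  T^{12} = T^{21} = -d1 d2 chi,  T^{22} = d1 d1 chi
    d11 = d(lambda *p: d(chi, 0, p), 0, (x, y))
    d22 = d(lambda *p: d(chi, 1, p), 1, (x, y))
    d12 = d(lambda *p: d(chi, 1, p), 0, (x, y))
    return ((d22, -d12), (-d12, d11))

random.seed(0)
for _ in range(5):
    p = (random.uniform(-1, 1), random.uniform(-1, 1))
    # divergence-free: sum_mu d_mu T^{mu nu} ~ 0
    for nu in range(2):
        assert abs(sum(d(lambda *q: T(*q)[mu][nu], mu, p)
                       for mu in range(2))) < 1e-3
    # conservation of the dilatation current: d_mu (x_nu T^{mu nu}) = trace T
    div_D = sum(d(lambda *q: sum(q[nu] * T(*q)[mu][nu] for nu in range(2)),
                  mu, p) for mu in range(2))
    trace = T(*p)[0][0] + T(*p)[1][1]
    assert abs(div_D - trace) < 1e-3
```

In curved space or with a nontrivial internal part $\Sigma^\mu$ the bookkeeping changes, but the mechanism is the same: the only obstruction left after using symmetry and conservation of $\Theta^{\mu\nu}$ is the trace term.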
http://mathoverflow.net/revisions/54656/list
Here are some extra remarks about the argument. First, this combinatorial principle. Another illustration of the same principle is the fact that a graph is $k$-colorable if and only if every finite subgraph is $k$-colorable. If the graph only has countably many vertices, then there is a standard proof by induction; it is only the uncountable case that requires something fancier such as the ultrafilter lemma or Tychonoff's theorem. In fact, the case that's needed is almost the same as the $k=2$ case of colorability. This case, and the orientability argument, is even easier than the general case because the local coloring or orientation is essentially unique. Second, cleaning up the atlas so that every chart is an interval and a non-empty intersection of any two charts is an interval. The first condition is sometimes part of the definition of an atlas. But if not, every open set in $\mathbb{R}$ is a countable union of intervals (or every open set in $\mathbb{R}^n$ is a countable union of balls) and you can just make them separate charts. As for the second condition, using the intermediate value theorem and the Hausdorff condition, two interval charts can only intersect at one end or at both ends. If they intersect at both ends, then all of the other charts are redundant and the manifold is a circle.

I think that it's easiest to model an orientable manifold as one whose gluing maps are in $\mathrm{Diff}^+$. Actually the following proof also works in the topological category. You can assume that all of the charts of the 1-manifold are open intervals, and that any two charts also intersect in an interval. Then, once you orient one of the intervals, the orientation spreads to its neighbors. Now, there is a principle in combinatorics that the orientations will all be consistent unless there is a finite obstruction. What would this obstruction look like?
Using the Hausdorff condition, and throwing away redundant charts, you can clean up any finite collection of charts until you either have a sequence of charts or a cyclic sequence of charts chained together at the ends. Then it's clear in either case that there is no finite obstruction. Note that there are non-Hausdorff 1-manifolds that are not orientable; the Hausdorff condition is thus essential to the proof.
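The combinatorial principle in the answer (orient one chart, let the orientation spread, and look for a finite obstruction) can be modeled as sign propagation on a chart-intersection graph. The encoding below is a hypothetical illustration, not something from the answer: each overlap records whether the transition map preserves orientation, and a BFS propagates signs and detects any inconsistent cycle.

```python
from collections import deque

def orient_charts(n, overlaps):
    """Try to orient n interval charts consistently.

    overlaps: list of (i, j, preserves) where `preserves` says whether the
    transition map between charts i and j is orientation-preserving.
    Returns a list of +/-1 signs, or None if no consistent choice exists.
    """
    adj = [[] for _ in range(n)]
    for i, j, p in overlaps:
        adj[i].append((j, p))
        adj[j].append((i, p))
    sign = [0] * n
    for start in range(n):
        if sign[start]:
            continue
        sign[start] = 1                      # orient one chart arbitrarily...
        queue = deque([start])
        while queue:                         # ...and let the orientation spread
            u = queue.popleft()
            for v, preserves in adj[u]:
                want = sign[u] if preserves else -sign[u]
                if sign[v] == 0:
                    sign[v] = want
                    queue.append(v)
                elif sign[v] != want:        # a finite obstruction: a bad cycle
                    return None
    return sign

# a chain of charts glued orientation-preservingly: consistently orientable
assert orient_charts(3, [(0, 1, True), (1, 2, True)]) == [1, 1, 1]
# a circle built from a cyclic chain: still orientable
assert orient_charts(3, [(0, 1, True), (1, 2, True), (2, 0, True)]) is not None
# two charts glued both ways at once (as in bad non-Hausdorff examples):
# the obstruction is found
assert orient_charts(2, [(0, 1, True), (0, 1, False)]) is None
```

The last case is exactly the shape of obstruction the answer rules out for Hausdorff 1-manifolds: after cleaning up the atlas, the charts chain together linearly or cyclically, and no inconsistent cycle can occur.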
http://mathhelpforum.com/differential-geometry/143044-subsets-unit-sphere-c-2-a.html
# Thread:

1. ## subsets of unit sphere in C^2

View $S^3$ as the unit sphere in $\mathbb{C}^2$. Now,

1. What are the path connected components of the subset of $S^3$ described by the equation $x^3 + y^6 = 0$, where the x and y refer to the coordinates (in $\mathbb{C}$)?
2. Is it true that the similar subset $x^2 + y^5 = 0$ is homeomorphic to the circle?
3. What is the fundamental group of $S^3 - K$, where K is the subset in the 2nd part of the problem?

thnx

2. Originally Posted by shoplifter
View $S^3$ as the unit sphere in $\mathbb{C}^2$. Now, 1. What are the path connected components of the subset of $S^3$ described by the equation $x^3 + y^6 = 0$, where the x and y refer to the coordinates (in $\mathbb{C}$)? 2. Is it true that the similar subset $x^2 + y^5 = 0$ is homeomorphic to the circle? 3. what is the fundamental group of $S^3 - K$, where K is the subset in the 2nd part of the problem? thnx

What do you think? You can't expect us to just give you answers; we need to see some effort on your part!

3. Originally Posted by shoplifter
View $S^3$ as the unit sphere in $\mathbb{C}^2$. Now, 1. What are the path connected components of the subset of $S^3$ described by the equation $x^3 + y^6 = 0$, where the x and y refer to the coordinates (in $\mathbb{C}$)? 2. Is it true that the similar subset $x^2 + y^5 = 0$ is homeomorphic to the circle? 3. what is the fundamental group of $S^3 - K$, where K is the subset in the 2nd part of the problem? thnx

Are you working with the usual topology on $\mathbb{C}^2$? The Zariski topology?

4. I apologize - I only wanted to get started; I've thought about this for a week and haven't progressed at all. We're looking at $S^3$ as the unit sphere in $\mathbb{C}^2$, so this is the standard topology.

5. Originally Posted by shoplifter
i apologize - i only wanted to get started, i've thought about this for a week and haven't progressed at all.
we're looking at $S^3$ as the unit sphere in $\mathbb{C}^2$, so this is the standard topology

Let's start with the one that seems easiest: whether the curve defined by $x^2+y^5=0$ is homeomorphic to $\mathbb{S}^1$. I have literally put no real work into this, and so please do not think I am dropping hints because I know something, but my intuition (God knows how useful that is) says they are not. $\mathbb{S}^1$ is a $1$-manifold; is that subspace one?

6. Actually my prof said they are homeomorphic. Should I be looking at ways to solve the simultaneous equations (because I have two, right? the one for $S^3$, and the one for the subset itself)? Because I tried, and it doesn't help at all.
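One way to make progress on the simultaneous-equations idea from post 6 is to guess an explicit parametrization and verify it numerically. The ansatz below is my own guess (not from the thread): try $x(t) = i\,a\,e^{5it}$, $y(t) = b\,e^{2it}$, so that $x^2+y^5 = (b^5-a^2)e^{10it}$ vanishes provided $a^2=b^5$, and $|x|^2+|y|^2 = a^2+b^2 = 1$ provided $b^5+b^2=1$.

```python
import cmath
import math

# Solve b^5 + b^2 = 1 by bisection (the left side is increasing on [0, 1]).
lo, hi = 0.0, 1.0
for _ in range(80):
    mid = (lo + hi) / 2
    if mid**5 + mid**2 < 1:
        lo = mid
    else:
        hi = mid
b = (lo + hi) / 2
a = math.sqrt(b**5)          # so a^2 = b^5 and a^2 + b^2 = 1

# check that t -> (x(t), y(t)) stays on the curve and on S^3
for k in range(200):
    t = 2 * math.pi * k / 200
    x = 1j * a * cmath.exp(5j * t)
    y = b * cmath.exp(2j * t)
    assert abs(x**2 + y**5) < 1e-9                   # x^2 + y^5 = 0
    assert abs(abs(x)**2 + abs(y)**2 - 1) < 1e-9     # on the unit sphere
```

Since gcd(5, 2) = 1, the map $t \mapsto (x(t), y(t))$ is injective on $[0, 2\pi)$, so it traces an embedded circle inside the solution set, which is at least consistent with the professor's claim in post 6.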
http://unapologetic.wordpress.com/2008/07/01/column-rank/?like=1&source=post_flair&_wpnonce=420e8d64d7
# The Unapologetic Mathematician

## Column Rank

Let's go back and consider a linear map $T:V\rightarrow W$. Remember that we defined its rank to be the dimension of its image. Let's consider this a little more closely.

Any vector in the image of $T$ can be written as $T(v)$ for some vector $v\in V$. If we pick a basis $\left\{e_i\right\}$ of $V$, then we can write $T(v)=T(v^ie_i)=v^iT(e_i)$. Thus the vectors $T(e_i)\in W$ span the image of $T$, and thus they contain a basis for the image. More specifically, we can get a basis for the image by throwing out some of these vectors until those that remain are linearly independent. The number of vectors that remain must be the dimension of the image — the rank — and so must be independent of which vectors we throw out. Looking back at the maximality property of a basis, we can state a new characterization of the rank: it is the cardinality of the largest linearly independent subset of $\left\{T(e_i)\right\}$.

Now let's consider in particular a linear transformation $T:\mathbb{F}^m\rightarrow\mathbb{F}^n$. Remember that these spaces of column vectors come with built-in bases $\left\{e_i\right\}$ and $\left\{f_j\right\}$ (respectively), and we have a matrix $T(e_i)=t_i^jf_j$. For each index $i$, then, we have the column vector

$\displaystyle T(e_i)=\begin{pmatrix}t_i^1\\t_i^2\\\vdots\\t_i^n\end{pmatrix}$

appearing as a column in the matrix $\left(t_i^j\right)$. So what is the rank of $T$? It's the maximum number of linearly independent columns in the matrix of $T$. This quantity we will call the "column rank" of the matrix.

Posted by John Armstrong | Algebra, Linear Algebra

## 2 Comments »

1. [...] Yesterday we defined the column rank of a matrix to be the maximal number of linearly independent columns. Flipping over, we can consider the [...]

Pingback by | July 2, 2008 | Reply

2. [...] , and we've got column vectors to consider. If all are linearly independent, then the column rank of the matrix is .
Then the dimension of the image of is , and thus is [...]

Pingback by | October 20, 2008 | Reply
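The "throw out columns until those that remain are linearly independent" characterization from the post is easy to check numerically. The example matrix below is mine, not from the post; a greedy pass keeps a column exactly when it is independent of the columns kept so far.

```python
import numpy as np

# A 3x3 matrix whose third column is the sum of the first two,
# so the column rank should be 2.
T = np.array([[1., 2., 3.],
              [2., 4., 6.],
              [0., 1., 1.]])

# greedy version of "throw out vectors until those that remain are independent"
kept = []
for j in range(T.shape[1]):
    trial = kept + [T[:, j]]
    if np.linalg.matrix_rank(np.column_stack(trial)) == len(trial):
        kept = trial

# the number of surviving columns agrees with the rank of the whole matrix
assert np.linalg.matrix_rank(T) == len(kept) == 2
```

As the post notes, which dependent columns get thrown out is not canonical, but the number that survive (the column rank) is.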
http://lucatrevisan.wordpress.com/tag/spectral-partitioning/
in theory

"Marge, I agree with you - in theory. In theory, communism works. In theory." -- Homer Simpson

## CS359G Lecture 4: Spectral Partitioning

January 26, 2011 in CS359G | Tags: Cheeger inequality, Spectral partitioning | 3 comments

In which we prove the difficult direction of Cheeger's inequality.

As in the past lectures, consider an undirected ${d}$-regular graph ${G=(V,E)}$, call ${A}$ its adjacency matrix, and ${M := \frac 1d A}$ its scaled adjacency matrix. Let ${\lambda_1 \geq \cdots \geq \lambda_n}$ be the eigenvalues of ${M}$, with multiplicities, in non-increasing order. We have been studying the edge expansion of a graph, which is the minimum of ${h(S)}$ over all non-trivial cuts ${(S,V-S)}$ of the vertex set (a cut is trivial if ${S=\emptyset}$ or ${S=V}$), where the expansion ${h(S)}$ of a cut is

$\displaystyle h(S) := \frac{Edges(S,V-S)}{d\cdot \min \{ |S|, |V-S| \} }$

We have also been studying the (uniform) sparsest cut problem, which is the problem of finding the non-trivial cut that minimizes ${\phi(S)}$, where the sparsity ${\phi(S)}$ of a cut is

$\displaystyle \phi(S) := \frac{Edges(S,V-S)}{\frac dn |S| \cdot |V-S|}$

We are proving Cheeger's inequalities:

$\displaystyle \frac {1-\lambda_2}{2} \leq h(G) \leq \sqrt{2 \cdot (1-\lambda_2) } \ \ \ \ \ (1)$

and we established the left-hand side inequality in the previous lecture, showing that the quantity ${1-\lambda_2}$ can be seen as the optimum of a continuous relaxation of ${\phi(G)}$, so that ${1-\lambda_2 \leq \phi(G)}$, and ${\phi(G) \leq 2 h(G)}$ follows by the definition. Today we prove the more difficult, and interesting, direction. The proof will be constructive and algorithmic. The proof can be seen as an analysis of the following algorithm.
Algorithm: SpectralPartitioning • Input: graph ${G=(V,E)}$ and vector ${{\bf x} \in {\mathbb R}^V}$ • Sort the vertices of ${V}$ in non-decreasing order of values of entries in ${{\bf x}}$, that is let ${V=\{ v_1,\ldots,v_n\}}$ where ${x_{v_1} \leq x_{v_2} \leq \ldots x_{v_n}}$ • Let ${i\in \{1,\ldots,n-1\}}$ be such that ${h(\{ v_1,\ldots, v_i \} )}$ is minimal • Output ${S= \{ v_1,\ldots, v_i \}}$ We note that the algorithm can be implemented to run in time ${O(|V|+|E|)}$, assuming arithmetic operations and comparisons take constant time, because once we have computed ${h(\{ v_1,\ldots,v_{i} \})}$ it only takes time ${O(degree(v_{i+1}))}$ to compute ${h(\{ v_1,\ldots,v_{i+1} \})}$. We have the following analysis of the quality of the solution: Lemma 1 (Analysis of Spectral Partitioning) Let ${G=(V,E)}$ be a d-regular graph, ${{\bf x}\in {\mathbb R}^V}$ be a vector such that ${{\bf x} \perp {\bf 1}}$, let ${M}$ be the normalized adjacency matrix of ${G}$, define $\displaystyle \delta:= \frac{ \sum_{i,j} M_{i,j} |x_i - x_j |^2}{\frac 1n \sum_{i,j} |x_i - x_j |^2}$ and let ${S}$ be the output of algorithm SpectralPartitioning on input ${G}$ and ${x}$. Then $\displaystyle h(S) \leq \sqrt{2\delta}$ Remark 1 If we apply the lemma to the case in which ${{\bf x}}$ is an eigenvector of ${\lambda_2}$, then ${\delta = 1-\lambda_2}$, and so we have $\displaystyle h(G) \leq h(S) \leq \sqrt{2 \cdot (1-\lambda_2)}$ which is the difficult direction of Cheeger’s inequalities. Remark 2 If we run the SpectralPartitioning algorithm with the eigenvector ${{\bf x}}$ of the second eigenvalue ${\lambda_2}$, we find a set ${S}$ whose expansion is $\displaystyle h(S) \leq \sqrt{2\cdot (1-\lambda_2) } \leq 2 \sqrt{h(G)}$ Even though this doesn’t give a constant-factor approximation to the edge expansion, it gives a very efficient, and non-trivial, approximation. 
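To make the sweep concrete, here is a short numerical sketch of the SpectralPartitioning algorithm above. The instance (a 6-cycle, so d = 2) and the tolerances are my own choices, not from the lecture.

```python
import numpy as np

# SpectralPartitioning sweep on the 6-cycle (d = 2).
n, d = 6, 2
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[i, (i - 1) % n] = 1.0
M = A / d

vals, vecs = np.linalg.eigh(M)       # eigenvalues in ascending order
lam2, x = vals[-2], vecs[:, -2]      # lambda_2 and its eigenvector (x ⟂ 1)

order = np.argsort(x)                # sort vertices by eigenvector entry
best_h = float('inf')
for i in range(1, n):                # sweep over all prefixes {v_1, ..., v_i}
    S = set(order[:i].tolist())
    cut = sum(1 for u in range(n) for v in range(u + 1, n)
              if A[u, v] and ((u in S) != (v in S)))
    best_h = min(best_h, cut / (d * min(len(S), n - len(S))))

# both sides of Cheeger's inequalities hold for the sweep cut:
assert (1 - lam2) / 2 <= best_h + 1e-9
assert best_h <= np.sqrt(2 * (1 - lam2)) + 1e-9
```

On this instance $\lambda_2 = 1/2$, and the sweep finds a three-vertex arc with $h(S) = 1/3$, comfortably between the two sides $(1-\lambda_2)/2 = 1/4$ and $\sqrt{2(1-\lambda_2)} = 1$.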
As we will see in a later lecture, there is a nearly linear time algorithm that finds a vector ${x}$ for which the expression ${\delta}$ in the lemma is very close to ${1-\lambda_2}$, so, overall, for any graph ${G}$ we can find a cut of expansion ${O(\sqrt {h(G)})}$ in nearly linear time.

## Max Cut-Gain and the Smallest Eigenvalue

September 23, 2008 in math, theory | Tags: Expanders, Max Cut, Moses Charikar, Spectral partitioning | 3 comments

In June, I wrote about my work on using spectral partitioning to approximate Max Cut. I have now posted a revised paper with a couple of new things. One is an improved analysis, due to Moses Charikar, of the algorithm described in the June paper. Moses shows that if one starts from a graph in which the optimum cuts a $1-\epsilon$ fraction of edges, then the algorithm cuts at least a $1-4\sqrt \epsilon + 8\epsilon$ fraction of edges (and also at least half of the edges). Cutting more than a $1-\frac 2 \pi \sqrt \epsilon + o_\epsilon(1)$ fraction of edges is Unique-Games-hard. Optimizing the fraction

$\displaystyle \frac{ \max \{ \ \frac 12 \ , \ (1-4\sqrt \epsilon + 8\epsilon) \ \} }{1-\epsilon}$

we see that the approximation ratio of the algorithm is always at least $.531$.

The second new result answers a question raised by an anonymous commenter in June: what about Max Cut Gain?

## Max Cut and the Smallest Eigenvalue

June 13, 2008 in math, theory | Tags: Expanders, Max Cut, Spectral partitioning | 3 comments

In the Max Cut problem, we are given an undirected graph $G=(V,E)$ and we want to find a partition $(L,\bar L)$ of the set of vertices such that as many edges as possible have one endpoint in $L$ and one endpoint in $\bar L$, and are hence cut by the partition. It is easy, as recognized since the 1970s, to find a partition that cuts half of the edges and that, thus, is at least half as good as an optimal solution.
No approximation better than 1/2 was known for this problem, until Goemans and Williamson famously proved that one could achieve a .878… approximation using semidefinite programming (SDP). No other approximation algorithm achieving an approximation asymptotically better than 1/2 is known, and it seems that a fundamental difficulty is the following. Suppose we prove that a certain algorithm achieves approximation $> 51\%$. Then, given a graph in which the optimum is, say, $< 50.4 \%$, the algorithm and its analysis must provide a certificate that the optimum cut in the given graph is $< 50.4/51 < 99\%$, and there is no general technique to prove upper bounds to the Max Cut optimum of a general graph other than Semidefinite Programming. (And see here and here for negative results showing that large classes of linear programming relaxations are unable to give such certificates.)

Spectral techniques can prove upper bounds to Max Cut in certain cases (and can be seen as special cases of the upper bounds provided by the Goemans-Williamson relaxation). In the simplified case in which $G=(V,E)$ is a $d$-regular graph, let $A$ be the adjacency matrix of $G$ and $\lambda_1 \geq \lambda_2 \geq \cdots \geq \lambda_n$ be the eigenvalues of $A$; then it is easy to show that

$(1) \ \ \ \displaystyle maxcut(G) \leq \frac 12 + \frac 12 \cdot \frac {|\lambda_n|}{d}$

where $maxcut(G)$ is the fraction of edges cut by an optimal solution. Unfortunately (1) does not quite have an approximate converse: there are graphs where $|\lambda_n|=d$ but $maxcut(G) = \frac 12 + o_d(1)$. The following fact, however, is always true and well known:

• $|\lambda_n|=d$ if and only if $G$ contains a bipartite connected component.

Is there an “approximate” version of the above statement characterizing the cases in which $d-|\lambda_n|$ is small? Surprisingly, as far as I know the question had not been considered before.
For comparison, the starting point of the theory of edge expansion is the related fact

• $\lambda_2=d$ if and only if $G$ is disconnected,

which can be rephrased as:

• $\lambda_2=d$ if and only if there is a non-empty $S\subseteq V$, $|S| \leq |V|/2$ such that $edges(S,V-S)=0$.

Cheeger’s inequality characterizes the case in which $d-\lambda_2$ is small:

• If there is a non-empty $S\subseteq V$, $|S| \leq |V|/2$ such that $edges(S,V-S) \leq \epsilon \cdot d \cdot |S|$, then $\lambda_2 \geq d\cdot (1-2\epsilon)$;
• If $\lambda_2 \geq d\cdot (1-\epsilon)$ then there is a non-empty $S\subseteq V$, $|S| \leq |V|/2$ such that $edges(S,V-S) \leq \sqrt{2 \epsilon} \cdot d \cdot |S|$.

For a subset $S\subseteq V$, and a bipartition $L,R=S-L$ of $S$, we say that an edge $(i,j)$ fails [to be cut by] the bipartition if $(i,j)$ is incident on $S$ but it is not the case that one endpoint is in $L$ and one endpoint is in $R$. (This means that either both endpoints are in $L$, or both endpoints are in $R$, or one endpoint is in $S$ and one endpoint is not in $S$.) Then we can express the well-known fact about $\lambda_n$ as

• $|\lambda_n|=d$ if and only if there is $S\subseteq V$ and a bipartition of $S$ with zero failed edges.

In this new paper I prove the following approximate version:

• If there is a non-empty $S\subseteq V$, and a bipartition of $S$ with at most $\epsilon \cdot d \cdot |S|$ failed edges, then $|\lambda_n| \geq d\cdot (1-4\epsilon)$;
• If $|\lambda_n| \geq d\cdot (1-\epsilon)$, then there is a non-empty $S\subseteq V$, and a bipartition of $S$ with at most $\sqrt{2 \epsilon} \cdot d \cdot |S|$ failed edges.

The following notation makes the similarity with Cheeger’s inequality clearer. Define the edge expansion of a graph $G$ as

$\displaystyle h(G) = \min_{S\subseteq V, \ |S| \leq \frac {|V|}{2} } \frac {edges(S,V-S)} {d|S|}$

Let us define the bipartiteness ratio of $G$ as

$\displaystyle \beta(G) = \min_{S, \ L \subseteq S, \ R=S-L} \frac{edges(L) + edges(R) + edges(S,V-S)}{d|S|}$

that is, as the minimum ratio between the failed edges of a bipartition of a set $S$ and $d|S|$. Then Cheeger’s inequality gives

$\displaystyle \frac 12 \cdot \frac{d-\lambda_2}{d} \leq h(G) \leq \sqrt{2 \cdot \frac{d-\lambda_2}{d} }$

and our results give

$\displaystyle \frac 14 \cdot \frac{d-|\lambda_n|}{d} \leq \beta(G) \leq \sqrt{2 \cdot \frac{d-|\lambda_n|}{d}}$

This translates into an efficient algorithm that, given a graph $G$ such that $maxcut(G) \geq 1-\epsilon$, finds a set $S$ and a bipartition of $S$ such that at least a $1- 4\sqrt{\epsilon}$ fraction of the edges incident on $S$ are cut by the bipartition. Removing the vertices in $S$ and continuing recursively on the residual graph yields a .50769… approximation algorithm for Max Cut. (The algorithm stops making recursive calls, and uses a random partition, when the partition of $S$ found by the algorithm has too many failed edges.)

The paper is entirely a by-product of the ongoing series of posts on edge expansion: the question of the relation between spectral techniques and Max Cut was asked by a commenter, and the probabilistic view of the proof of Cheeger’s inequality that I wrote up in this post was very helpful in understanding the gap between $\lambda_n$ and $-d$.
## The Limitations of the Spectral Partitioning Algorithm

May 23, 2008

We continue to talk about the problem of estimating the expansion of a graph $G=(V,E)$, focusing on the closely related sparsest cut, defined as

$\displaystyle \phi(G):= \min_{S} \frac {edges(S,V-S) } { \frac 1n \cdot |S| \cdot |V-S| }$

The spectral partitioning algorithm first finds a vector $x \in {\mathbb R}^n$ minimizing

$(1) \ \ \ \displaystyle \frac{ \sum_{i,j} A(i,j) |x(i)-x(j)|^2 } { \frac 1n \cdot \sum_{i,j} |x(i)-x(j)|^2}$

(where $A$ is the adjacency matrix of $G$) and then finds the best cut $(S,V-S)$ where $S$ is of the form $\{ i \in V: \ x(i) \geq t \}$ for a threshold $t$. We proved that if the quantity in (1) is $\epsilon$ and $G$ is $d$-regular, then the algorithm will find a cut of sparsity at most $\sqrt{8 \epsilon d}$, and that if $x$ is the eigenvector of the second eigenvalue, then it is an optimal solution to (1), and the cost of an optimal solution to (1) is a lower bound to $\phi(G)$. This means that the algorithm finds a cut of sparsity at most $\sqrt{8 \phi(G) d}$. Can the analysis be improved?

## The Spectral Partitioning Algorithm

May 11, 2008

[In which we prove the "difficult part" of Cheeger's inequality by analyzing a randomized rounding algorithm for a continuous relaxation of sparsest cut.]

We return to this month’s question: if $G=(V,E)$ is a $d$-regular graph, how well can we approximate its edge expansion $h(G)$ defined as

$h(G) := \min_{S\subseteq V} \frac{edges(S,V-S)} {\min \{ |S|, \ |V-S| \} }$

and its sparsest cut $\phi(G)$ defined as

$\phi(G) := \min_{S\subseteq V} \frac{edges(S,V-S)} { \frac 1n \cdot |S| \cdot |V-S|} = \min_{x\in \{0,1\}^n } \frac{ \sum_{i,j} A(i,j) \cdot |x(i)-x(j)|}{\frac 1n \sum_{i,j} |x(i)-x(j)|}$,

where $A$ is the adjacency matrix of $G$.
We have looked at three continuous relaxations of $\phi(G)$, the spectral gap, the Leighton-Rao linear program, and the Arora-Rao-Vazirani semidefinite program. As we saw, the spectral gap of $G$, defined as the difference between largest and second largest eigenvalue of $A$, can be seen as the solution to a continuous optimization problem: $d-\lambda_2 = \min_{x\in {\mathbb R}^n} \frac {\sum_{i,j} A(i,j) \cdot |x(i)-x(j)|^2}{\frac 1n \sum_{i,j} |x(i)-x(j)|^2}$. It follows from the definitions that $d-\lambda_2 \leq \phi \leq 2h$ which is the “easy direction” of Cheeger’s inequality, and the interesting thing is that $d-\lambda_2$ is never much smaller, and it obeys $(1) \ \ \ d-\lambda_2 \geq \frac {h^2}{2d} \geq \frac {\phi^2}{8d}$, which is the difficult part of Cheeger’s inequality. When we normalize all quantities by the degree, the inequality reads as $\frac 1 8 \cdot \left( \frac \phi d \right)^2 \leq \frac 1 2 \cdot \left( \frac h d \right)^2 \leq \frac{d-\lambda_2}{d} \leq \frac \phi d \leq 2 \frac h d$ . I have taught (1) in three courses and used it in two papers, but I had never really understood it, where I consider a mathematical proof to be understood if one can see it as a series of inevitable steps. Many steps in the proofs of (1) I had read, however, looked like magic tricks with no explanation. Finally, however, I have found a way to describe the proof that makes sense to me. (I note that everything I will say in this post will be completely obvious to the experts, but I hope some non-expert will read it and find it helpful.) 
We prove (1), as usual, by showing that given any $x\in {\mathbb R}^n$ such that

$(2) \ \ \ \frac {\sum_{i,j} A(i,j) \cdot |x(i)-x(j)|^2}{\frac 1n \sum_{i,j} |x(i)-x(j)|^2} = \epsilon$

we can find a threshold $t$ such that the cut $(S,V-S)$ defined by $S:= \{ i: x(i) \geq t \}$ satisfies

$(3) \ \ \ \frac{edges(S,V-S)} { \min \{ |S|,\ |V-S| \} } \leq \sqrt{2 d \epsilon}$

and $\frac{edges(S,V-S)} { \frac 1n \cdot |S| \cdot |V-S| } \leq \sqrt{8 d \epsilon}$.

This not only gives us a proof of (1), but also an algorithm for finding sparse cuts when they exist: take a vector $x$ which is an eigenvector of the second eigenvalue (or simply a vector for which the Rayleigh quotient in (2) is small), sort the vertices $i$ according to the value of $x(i)$, and find the best cut among the “threshold cuts.” This is the “spectral partitioning” algorithm.

This means that proving (1) amounts to studying an algorithm that “rounds” the solution of a continuous relaxation to a combinatorial solution, and there is a standard pattern to such arguments in computer science: we describe a randomized rounding algorithm, study its average performance, and then argue that there is a fixed choice that is at least as good as an average one. Here, in particular, we would like to find a distribution $T$ over thresholds such that if we define $S:= \{ i: x(i) \geq T\}$ as a random variable in terms of $T$ we have

${\mathbb E} [ edges(S,V-S) ] \leq {\mathbb E} [\min \{ |S|,\ |V-S| \} ] \cdot \sqrt{2 d \epsilon}$

and so, using linearity of expectation,

${\mathbb E} [ edges(S,V-S) - \min \{ |S|,\ |V-S| \} \cdot \sqrt{2 d \epsilon}] \leq 0$

from which we see that there must be a threshold in our sample space such that (3) holds. I shall present a proof that explicitly follows this pattern. It would be nice if we could choose $T$ uniformly at random in the interval $\left[\min_i x(i), \max_i x(i)\right]$, but I don’t think it would work. (Can any reader see a counterexample?
I couldn’t, but we’ll come back to it.) Instead, the following works: assuming with no loss of generality that the median of $\{ x(1),\ldots,x(n) \}$ is zero, $T$ can be chosen so that $|T|\cdot T$ is distributed uniformly in the interval $\left[\min_i x(i), \max_i x(i)\right]$. (This means that thresholds near the median are less likely to be picked than thresholds far from the median.) This choice seems to be a magic trick in itself, voiding the point I made above, but I hope it will become natural as we unfold our argument.
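Sampling a threshold with $|T|\cdot T$ uniform is just an inversion of the map $t \mapsto |t|\cdot t$. A two-line sketch (my own illustration; it assumes $x$ has already been shifted so that its median is zero):

```python
import math
import random

def sample_threshold(x):
    """Sample T so that |T| * T is uniform on [min(x), max(x)]."""
    u = random.uniform(min(x), max(x))
    # invert t -> |t| * t, i.e. T = sign(u) * sqrt(|u|)
    return math.copysign(math.sqrt(abs(u)), u)
```

Since the density of $T$ at $t$ is proportional to $2|t|$, thresholds near the median (where $t \approx 0$) are indeed picked less often than thresholds far from it.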
http://mathhelpforum.com/pre-calculus/88937-finding-out-if-limits-exist-graph.html
# Thread:

1. ## Finding out if limits exist from a graph

I seem to be having some trouble grasping this concept. I want to know how I can tell if a limit exists by looking at a graph (cartesian plane) given specific points. For instance I have the problem:

Given the points (0, -3) [it's an open dot at this point] and (-2, -5) [a closed dot here] and y = h(x), determine the following limits (if they exist):

a. lim h(x) as x -> -2
b. lim h(x) as x -> 0

What math techniques can I use to approach a problem like this? Thanks for any help.

2. If the function is defined at only two listed points (or maybe only the one?), there can be no limit, as far as I know. Is any information provided regarding the function's behavior outside of the two listed points? Thank you!

3. Originally Posted by stapel: If the function is defined at only two listed points (or maybe only the one?), there can be no limit, as far as I know. Is any information provided regarding the function's behavior outside of the two listed points? Thank you!

No additional information is provided about the function. All the information that is provided is listed as such in my post above. I'm still a bit confused about this problem btw, so any additional comments you can make would be great. Thanks.

4. Are you sure the graph does not consist of lines or curves through the "points (0, -3) [it's an open dot at this point] and (-2, -5) [a closed dot here]"? If that is the case, then $\lim_{x\rightarrow 0}h(x)= -3$ and $\lim_{x\rightarrow -2}h(x)= -5$.

5. Originally Posted by HallsofIvy: Are you sure the graph does not consist of lines or curves through the "points (0, -3) [it's an open dot at this point] and (-2, -5) [a closed dot here]"? If that is the case, then $\lim_{x\rightarrow 0}h(x)= -3$ and $\lim_{x\rightarrow -2}h(x)= -5$.

There is one line on this graph and it passes through the points I have given, and as I have said there's an open dot at the point (0,-3) and a closed dot at the other one.
Does your limit existence still hold true now that you know this? I'm pretty sure I understand what you're trying to say. Thanks!

6. Originally Posted by hemi: There is one line on this graph and it passes through the points I have given, and as I have said there's an open dot at the point (0,-3) and a closed dot at the other one. Does your limit existence still hold true now that you know this? I'm pretty sure I understand what you're trying to say. Thanks!

You might consider telling us what you have been told open and closed dots signify (we can guess, but it is good for you to practice giving all the information you have when asking for help). Also consider whether there is some other contextual information that you have, and that we are unaware of, which might make our job a bit easier. CB
http://mathhelpforum.com/algebra/149260-challenging-grade-6-question-grade-9-a.html
Thread:

1. Challenging grade 6 question or grade 9?

Abel, Bob, Conan, Dave and Elijah divided a certain number of marbles amongst themselves in the following way: Abel took 1 marble and 20% of the remaining marbles, then Bob took 1 marble and 20% of the remaining marbles. Conan, Dave and Elijah did the same. At least how many marbles were there in the beginning?

I have two answers but do not know which is correct. Is it a matter of how you interpret the 20% of the remaining? Math is meant to be black and white, but when different ppl say different things ... confusing. The two candidate answers are 3121 and 10.

Answer 1

[1][ ][ ][ ][ ][ ] --> 1 + 5/4 (1 + 5/4 (1 + 5/4 (1 + 5/4 (1 + R)))) units not drawn to scale
[1][ ][ ][ ][ ][ ] --> 1 + 5/4 (1 + 5/4 (1 + 5/4 (1 + R))) units not drawn to scale
[1][ ][ ][ ][ ][ ] --> 1 + 5/4 (1 + 5/4 (1 + R)) units not drawn to scale
[1][ ][ ][ ][ ][ ] --> 1 + 5/4 (1 + R)
[1][][][][][] --> 1 + R

As R is a multiple of 5, try until you get the first expression as a whole number. Solving with Microsoft Office - Excel, the first whole number is 3121. There were 3121 marbles in the beginning.

Answer 2

Each person has 1 marble plus 20% of the 'remainder' --> there are 5 marbles + 5x20% of the 'remainder' --> 100% of the remainder is used / split across the 5 people. So, what is the smallest number that satisfies these conditions?

- 100% of the Remainder / 5 is a positive whole number (cannot have a fraction of a marble nor a negative marble)
- the Remainder is greater than zero. (If not, then 0/5 = 0 also produces a whole number, but I don't think they are looking for that answer; I would think that the Remainder needs to be greater than 0.)

With this assumption, the Remainder needs to be a multiple of 5. The smallest multiple of 5 is 5. --> 5 marbles + 5 marbles = 10 marbles. The smallest number of marbles is 10.

2. i think it wants answer 1 as well. edit typo fixed

3. Hello spaarky

Your first answer is correct: the lowest number you can start with is 3121.
This leaves 1020 marbles at the end, which is $4^5-4$. In fact, all solutions will leave a number of marbles in the form $4^5n-4,\;n=1,\; 2,\; 3,\; ...$ I'll leave it to someone else (or you!) to prove it! Grandad
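Answer 1 is easy to confirm by direct simulation (a quick sketch of my own; it enforces the implicit constraint that every 20% share is a whole number of marbles):

```python
def leftover(n):
    """Simulate the five takes; return None if any step breaks a marble."""
    for _ in range(5):
        n -= 1                 # the person takes 1 marble...
        if n < 0 or n % 5:
            return None        # ...but 20% of the remainder must be whole
        n -= n // 5            # ...and then takes 20% of what remains
    return n

start = next(n for n in range(1, 10**4)
             if leftover(n) is not None and leftover(n) > 0)
```

The search returns 3121, with 1020 = 4^5 - 4 marbles left over, matching the analysis above. Note also that leftover(10) is None: with 10 marbles, Abel takes 1 and then 20% of the remaining 9 is not a whole number, which is where Answer 2's interpretation breaks down.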
http://physics.stackexchange.com/questions/13422/numerical-computation-of-the-rayleigh-lamb-curves
# Numerical computation of the Rayleigh-Lamb curves

The Rayleigh-Lamb equations: $$\frac{\tan (pd)}{\tan (qd)}=-\left[\frac{4k^2pq}{\left(k^2-q^2\right)^2}\right]^{\pm 1}$$ (two equations, one with the +1 exponent and the other with the -1 exponent) where $$p^2=\frac{\omega ^2}{c_L^2}-k^2$$ and $$q^2=\frac{\omega ^2}{c_T^2}-k^2$$ show up in physical considerations of the elastic oscillations of solid plates. Here, $d$ is the thickness of an elastic plate, $c_L$ the velocity of longitudinal waves, and $c_T$ the velocity of transverse waves. These equations determine for each positive value of $\omega$ a discrete set of real "eigenvalues" for $k$. My problem is the numerical computation of these eigenvalues and, in particular, to obtain curves displaying these eigenvalues. What sort of numerical method can I use with this problem? Thanks.

Edit: Using the numerical values $d=1$, $c_L=1.98$, $c_T=1$, the plots should look something like this (black curves correspond to the -1 exponent, blue curves to the +1 exponent; the horizontal axis is $\omega$ and the vertical axis is $k$).

- Once you insert $\omega$ you have an equation $f(k)=0$ that you want to solve for $k$. This is a 1-d non-linear root-finding problem (as you can see from reading a few introductory paragraphs from each section in the book Numerical Recipes; I would not use their code, though). As it is 1d you can bracket the root and search for it using the secant or another method. – Alice Aug 12 '11 at 11:44
- Can you solve for the eigenvalues analytically? and then just use the software to plot the graph? : ) – Timtam Aug 13 '11 at 12:39
- @Timtam: Solving for the eigenvalues analytically is part of the problem – becko Aug 18 '11 at 2:06
- @Alice: The problem is there is more than one root. Blindly bracketing for roots can miss some of them. – becko Aug 18 '11 at 2:06
- @becko. Only three free parameters, $d$, $\omega/c_T$, $\omega/c_L$ that will specify the roots.
For most situations $d$ likely tells you the spacing with roots every $k^2 d$ mod $2 \pi$. Once you bracket the roots you can run a root finder. If you plot the function for various regimes of your three parameters you can check that you have appropriately bracketed the roots. – Alice Aug 18 '11 at 12:49

## 1 Answer

How about instead of finding the roots and then making the plots, you skip right to the plots using a Monte Carlo method? Choose a random k, then choose a random ω and calculate the left-hand side and the right-hand side of the R-L equations. If the RHS is close enough to the LHS (you pick how close), put a point on the plot (blue or black, depending on which branch you used). The more points you process, and the more stringent the condition of RHS=LHS equality you pick, the more accurate the plot will look. A problem with this approach is that when you put a point on the plot, your algorithm doesn't know which branch it belongs to. But if it is the plot you are after, you will have no problem figuring it out by eye when the calculation is complete. To numerically read from that Monte Carlo plot, you can sort the solutions you find (sort them in k-space or in ω-space), and do some kind of search using interpolation. Here is approximate C code to explain what I mean:

````
#include <stdio.h>
#include <stdlib.h>
#include <math.h>

int main(void) {
  srand(1);
  const int N = 1000000000;
  const float eps = 0.01f;
  /* material constants filled in with the example values from the question */
  const float d = 1.0f, cl = 1.98f, ct = 1.0f;
  const float kmax = 20.0f;
  const float omegamax = 20.0f;
  float k, omega, p2, q2, p, q, lhs, rhs;
  for (int i = 0; i < N; i++) {
    k = (float)rand() / (float)RAND_MAX * kmax;
    omega = (float)rand() / (float)RAND_MAX * omegamax;
    p2 = omega * omega / (cl * cl) - k * k;
    q2 = omega * omega / (ct * ct) - k * k;
    if (p2 <= 0.0f || q2 <= 0.0f)
      continue; /* skip evanescent branches (tan -> tanh), a simplification */
    p = sqrtf(p2);
    q = sqrtf(q2);
    lhs = tanf(p * d) / tanf(q * d);
    /* +1 exponent branch; use -1/rhs for the other branch */
    rhs = -4.0f * k * k * p * q / ((k * k - q * q) * (k * k - q * q));
    if (fabsf(lhs - rhs) < eps * fabsf(lhs + rhs))
      printf("%f %f\n", k, omega);
  }
  return 0;
}
````

and that's it.

- +1 That's a good idea certainly worth exploring. I'll keep you posted. – becko Aug 19 '11 at 16:22
- I am glad it is along the lines of what you wanted. For a lightweight problem like this, a brute-force Monte Carlo approach may pay off.
The calculation will not be that long, especially if you parallelize it, but you will not need to put a lot of time and effort into the search for a more elegant solution. – drlemon Aug 19 '11 at 16:41
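For completeness, the bracket-and-bisect approach suggested in the comments can be sketched as follows. This is my own sketch, not validated against reference dispersion curves: it uses the example values $d=1$, $c_L=1.98$, $c_T=1$ from the question, scans only the region where both $p$ and $q$ are real, and discards the spurious sign changes at the poles of the tangents.

```python
import math

D, CL, CT = 1.0, 1.98, 1.0  # example values from the question

def f(k, omega):
    """Residual of the '+1 exponent' Rayleigh-Lamb equation (real p, q only)."""
    p2 = (omega / CL) ** 2 - k ** 2
    q2 = (omega / CT) ** 2 - k ** 2
    if p2 <= 0 or q2 <= 0:
        return None  # evanescent branch (tan -> tanh) is not handled here
    p, q = math.sqrt(p2), math.sqrt(q2)
    return math.tan(p * D) / math.tan(q * D) + 4 * k * k * p * q / (k * k - q * q) ** 2

def roots(omega, n=2000):
    """Bracket sign changes of f on a k-grid, bisect each bracket, drop tan poles."""
    grid = [i * omega / (CT * n) for i in range(1, n)]
    out = []
    for a, b in zip(grid, grid[1:]):
        fa, fb = f(a, omega), f(b, omega)
        if fa is None or fb is None or fa * fb > 0:
            continue
        for _ in range(60):            # plain bisection
            m = 0.5 * (a + b)
            fm = f(m, omega)
            if fa * fm <= 0:
                b = m
            else:
                a, fa = m, fm
        r = 0.5 * (a + b)
        fr = f(r, omega)
        if fr is not None and abs(fr) < 1e-6:   # genuine root, not a pole jump
            out.append(r)
    return out
```

For instance, roots(5.0) brackets a root of the +1 branch between k = 1.8 and k = 1.9; the residual filter is what distinguishes true zeros from the sign jumps of tan across its poles.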
http://mathoverflow.net/revisions/102062/list
# Fullness of pullback functor in algebraic geometry

Given $f:X\to Y$ a morphism of schemes (or stacks if it's not harder), I am interested in a geometric reformulation of the condition that the functor $f^*:D^b(Coh(Y))\to D^b(Coh(X))$ is full. I can only find full and faithful appearing together in the literature, and I need to extricate the two conditions. Does anyone know a simple formulation, or a good reference?

Things I know which might help:

1. For affine schemes, it seems to be well-known that $f^*$ is full and faithful if and only if $f$ is an open immersion. The explanation presented here translates full faithfulness of $f^*$ to saying that the diagonal map $X\to X\times_Y X$ is an isomorphism, which in turn gives you that $f$ is mono. Other arguments about flatness give you that it's an immersion. I'm not sure how to modify this for $f^*$ only full, as the full and faithful assumptions seem to get applied in tandem. I'm also not sure how particular this line of inquiry is to affine schemes.

2. Intuitively, asking for $f^*$ to be full seems a lot like asking that any time you have a sheaf $F$ on $Y$, and a section of it defined only on $X$ (i.e. a section of $f^*F$), it can be extended to a section on all of $Y$. And so that would seem to indicate that the image of $f$ should have codimension-two complement. However, that intuition only really applies to underived $f^*$, and maybe deriving $f^*$ eliminates the codimension-2 requirement? Also, this intuition assumes that $f$ is mono, so that $f^*$ is just restriction, which I don't think is true a priori.

As a side note: I'd be interested in the same question (geometric characterization of fullness) for $f_*$ and $f^!$.
http://math.stackexchange.com/questions/97021/how-does-one-prove-int-0-infty-prod-k-1-infty-operatorname-rm-sinc-lef/97043
How does one prove $\int_0^\infty \prod_{k=1}^\infty \operatorname{\rm sinc}\left( \frac{t}{2^{k+1}} \right) \mathrm{d} t = 2 \pi$

Looking into the distribution of a Fabius random variable: $$X := \sum_{k=1}^\infty 2^{-k} u_k$$ where $u_k$ are i.i.d. uniform variables on a unit interval, I encountered the following expression for its probability density: $$f_X(x) = \frac{1}{\pi} \int_0^\infty \left( \prod_{k=1}^\infty \operatorname{\rm sinc}\left( \frac{t}{2^{k+1}} \right) \right) \cos \left( t \left( x- \frac{1}{2} \right) \right) \mathrm{d} t$$ It seems, numerically, that $f_X\left(\frac{1}{2} \right) = 2$, but my several attempts to prove this were not successful. Any ideas how to approach this are much appreciated.

2 Answers

From Theorem 1 (equation (19) on page 5) of Surprising Sinc Sums and Integrals, we have $$\frac{1}{\pi} \int_0^\infty \left( \prod_{k=1}^N \operatorname{\rm sinc}\left( \frac{t}{2^{k+1}} \right) \right) \mathrm{d} t=2$$ for all $N<\infty$. I suppose you can justify letting $N\to \infty$ to get your result. One of the surprises in that paper concerns a similar integral $$\int_0^\infty \left( \prod_{k=0}^N \operatorname{\rm sinc}\left( \frac{t}{2k+1} \right) \right) \mathrm{d} t.$$ This turns out to be equal to $\pi/2$ when $0\leq N\leq 6$, but is slightly less than $\pi/2$ when $N=7$.

Since $u_k$ and $1-u_k$ are equal in distribution, it follows that $$X \stackrel{d}{=} \sum_{k=1}^\infty 2^{-k} \left(1-u_k\right) = \sum_{k=1}^\infty 2^{-k} - X = 1 - X$$ Since $u_k \geq 0$, it follows that $\mathbb{P}\left(0 \leqslant X \leqslant 1\right) = 1$. Therefore $f_X(x) = 0$ for $x < 0$ or $x>1$. It also shows that $f_X(x) = f_X(1-x)$ for $0<x<1$.
Notice that $$\begin{eqnarray} f_X^\prime(x) &=& \frac{1}{\pi} \int_0^\infty \left(- t \sin\left( t \left(x-\frac{1}{2} \right) \right) \right) \operatorname{\rm sinc}\left( \frac{t}{4} \right) \prod_{k=2}^\infty \operatorname{\rm sinc}\left( \frac{t}{2^{k+1}} \right) \mathrm{d} t \end{eqnarray}$$ Now using $t \cdot \operatorname{\rm sinc}\left( \frac{t}{4} \right) = 4 \sin\left( \frac{t}{4} \right)$, and $$\sin\left( t \left(x-\frac{1}{2} \right) \right) \sin\left(\frac{t}{4} \right) = \frac{1}{2} \left[ \cos\left( \frac{t}{2} \left( (2 x-1) -\frac{1}{2} \right) \right) - \cos\left( \frac{t}{2} \left( 2 x -\frac{1}{2} \right) \right) \right]$$ which gives $$\begin{eqnarray} f_X^\prime(x) &=& 4 f_X(2 x) - 4 f_X(2x-1) \end{eqnarray}$$ Now let $0 < z \leqslant \frac{1}{2}$. Then $f_X^\prime(z) = 4 f_X(2z)$ and thus $$f_X(z) - f_X(0) = \int_0^{z} 4 f_X(2x) \mathrm{d} x = 2 \left(F_X(2z) - F_X(0)\right)$$ Clearly $F(0) = 0$, $F(1) = 1$ and $F_X\left(\frac{1}{2} \right) = \frac{1}{2}$, since the distribution is symmetric about $x=\frac{1}{2}$, thus $$f_X\left(\frac{1}{2}\right) = f_X\left(0\right) + 2 \qquad f_X\left(\frac{1}{4}\right) = f_X\left(0\right) + 1$$ If I now assume that $f_X(0) = 0$, the result follows. Added A missing proof of $f_X(0)=0$, as well as alternative proof of $f_X\left(\frac{1}{2}\right) = 2$ results from writing: $$X \stackrel{d}{=} \frac{u_1}{2} + \frac{1}{2} \sum_{k=2}^\infty 2^{-k+1} u_k \stackrel{d}{=} \frac{u_1+X}{2}$$ This equality in distribution implies the following integral equation for $f_X$: $$f_X(x) = 2 \int_0^1 f_X(2x -u) \mathrm{d} u$$ Substituting $x=0$ we get $f_X(0) = 2 \int_0^1 f_X(-u) \mathrm{d} u =0$, since $f_X(x)=0$ for $x<0$. Incidentally, using $x=\frac{1}{2}$ gives the desired identity as well: $$f_X\left(\frac{1}{2}\right) = 2 \int_0^1 f_X(1-u) \mathrm{d} u = 2 \int_0^1 f_X(u) \mathrm{d} u = 2$$ - 2 The probabilist in me likes this kind of argument! – Byron Schmuland Jan 6 '12 at 21:01
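The finite-$N$ identity quoted in the first answer is easy to sanity-check numerically. A sketch of my own (truncating the product at $N=10$ factors and the integral at $t=400$; both cut-offs are ad hoc, but they are enough here because the truncated integrand decays polynomially):

```python
import math

def sinc(x):
    return math.sin(x) / x if x else 1.0

def integrand(t, N=10):
    # finite truncation of prod_{k=1}^N sinc(t / 2^(k+1))
    prod = 1.0
    for k in range(1, N + 1):
        prod *= sinc(t / 2 ** (k + 1))
    return prod

def simpson(f, a, b, m):
    # composite Simpson's rule with m (even) intervals
    h = (b - a) / m
    s = f(a) + f(b)
    s += 4 * sum(f(a + (2 * i - 1) * h) for i in range(1, m // 2 + 1))
    s += 2 * sum(f(a + 2 * i * h) for i in range(1, m // 2))
    return s * h / 3

value = simpson(integrand, 0.0, 400.0, 40000) / math.pi
```

The computed value agrees with the claimed constant 2 to several digits, consistent with $\int_0^\infty \prod \operatorname{sinc}(t/2^{k+1})\,\mathrm{d}t = 2\pi$.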
http://physics.stackexchange.com/questions/44250/faddeev-popov-ghost-propagator-in-canonical-quantization/44454
# Faddeev-Popov ghost propagator in canonical quantization

Obtaining the propagator for the Faddeev-Popov (FP) ghosts in the path integral language is straightforward. It is simply $$\langle T(c(x) \bar c(y))\rangle~=~\int\frac{d^4 p}{(2\pi)^4}\frac{i e^{-ip\cdot(x-y)}}{p^2-\xi m^2+i\epsilon}.$$ But I am unable to properly derive it following the canonical quantization route. The problem is that for anti-commuting fields, the time-ordered product is defined as $$\langle T(c(x) \bar c(y))\rangle=\theta(x^0-y^0)\langle c(x) \bar c(y)\rangle-\theta(y^0-x^0)\langle \bar c(y) c(x)\rangle$$ with a minus sign between the two terms. This minus sign is preventing me from closing the contours in the correct way to obtain the expression in my first equation. The only way I can save this is to say that FP ghosts are special, and their time-ordered product is defined with a plus sign instead of the minus sign. Is this legitimate? What is the right way to get the ghost propagator following the canonical quantization route?

Comment to the last term on the rhs. of the last eq. of the question (v1): The bar should be on top of $c(y)$, not $c(x)$. – Qmechanic♦ Nov 14 '12 at 23:25

1. What is $\xi$? If it is the $\pm 1$ metric convention, it also affects the $i\epsilon$ term. 2. Have you added an overall phase to $Z[j]$ to absorb the $i$ factor in the kinetic term of the ghost field? 3. Could you write the relevant action/hamiltonian, the Heisenberg-picture ghost fields as a function of creation/annihilation operators, and the anticommutation relations, to see your conventions? – drake Nov 16 '12 at 21:27

By the way, is that a book problem or something you are thinking about? Because I would guess that the question doesn't make sense. I think that the canonical formalism does not follow directly from the path integral version of the FP trick. The closest could be BRST quantization, but there $c$ and $\bar c$ are independent real fields (so your propagator is 0).
I think that in a noncovariant formalism like the canonical one, it makes more sense to fix the temporal gauge instead of a covariant one. – drake Nov 17 '12 at 1:56

Last question: how do you deduce the $i\epsilon$ terms from the path integral if you don't know the vacuum wave functional? Or do you know it? – drake Nov 17 '12 at 2:00

Another issue: the Hamiltonian for a scalar fermionic field is not bounded from below. – drake Nov 17 '12 at 2:34

## 1 Answer

The solution to this problem comes from the sneaky fact (Kugo, 1978) that while the FP ghost field is hermitian, $c^\dagger (x) = c(x)$, the anti-ghost field is anti-hermitian, $\bar c^\dagger (x)=-\bar c (x)$. As a result, the plane-wave expansions for the ghost/anti-ghost fields (Becchi, 2008, Scholarpedia) are: $$c^a(x)={1 \over(2 \pi)^{3/2}} \int_{k_0= | \vec k|} {d \vec k \over 2 k_0}( \gamma^a( \vec k)e^{-ik \cdot x}+ (\gamma^a)^\dagger( \vec k)e^{ik \cdot x})$$ $$\bar c^a(x)={1 \over(2 \pi)^{3/2}} \int_{k_0= | \vec k|} {d \vec k \over 2 k_0}( \bar \gamma^a( \vec k)e^{-ik \cdot x}- (\bar\gamma^a)^\dagger( \vec k)e^{ik \cdot x}) \ ,$$ with a minus sign between the two terms in the mode expansion for the anti-ghost field. Thus, when evaluating the time-ordered correlator (propagator), the minus sign in the plane-wave expansion compensates the minus sign in the definition of the time-ordering shown in my question above. Thus, I am able to derive the standard Feynman propagator for the FP ghost field.

Interesting. Weinberg however defines both fields as hermitian in the 2nd volume of his QFT book. I guess he introduces an $i$ factor that makes it real. And this has to do with the $i$ in the kinetic term that sometimes is reabsorbed in a redefinition of $Z[j]$. I also think that the minus sign in $\bar c$ makes the Hamiltonian bounded from below. What I'm wondering is what happens with the hermiticity of the hamiltonian. Good question and answer!
– drake Nov 18 '12 at 21:18

It would be nice if you completed your answer with the full free canonical quantization, if you already have everything clear. Essentially: commutation relations between ghost fields, the non-interacting ghost Hamiltonian in Fock space, and the vacuum wave functional. – drake Nov 18 '12 at 21:22

@drake Will do as soon as possible. – QuantumDot Nov 19 '12 at 1:14

I'll give a bounty if I learn how to do it. – drake Nov 19 '12 at 1:22

You have placed a bounty -- but I can't give a thorough response before the bounty has elapsed. I am in the process of completing a paper, and writing job applications. Would you hold off on the bounty for later? – QuantumDot Nov 20 '12 at 6:16
http://mathoverflow.net/questions/118988?sort=newest
## Automorphisms of $SL_n$ as a variety

What are the automorphisms of $SL_n$ as an algebraic variety? In other words, let $k$ be an algebraically closed field of characteristic 0 (e.g., $k=\mathbb{C}$). Let $\tau$ be an automorphism of $SL_n$ regarded as an algebraic variety over $k$. Assume that $\tau$ takes the unit element $e$ of $G$ to itself. Is it true that $\tau$ is an automorphism of $SL_n$ as an algebraic group over $k$?

What about inversion (for $n>1$)? – ACL Jan 15 at 16:27

@ACL: Thank you, Antoine. Are there any other automorphisms? – Mikhail Borovoi Jan 15 at 16:33

@Mikhail: Your edit needs some more editing. Aside from this, what motivates the original question? – Jim Humphreys Jan 15 at 17:04

@Jim: I have removed the edit. The original question was motivated by my previous question mathoverflow.net/questions/118356/… and a comment of Tom Goodwillie. I am trying to construct a finite subgroup $H\subset G=SL_{n,\mathbb{C}}$ and an automorphism $\sigma$ of $\mathbb{C}$ such that the $\mathbb{C}$-varieties $G/H$ and $\sigma(G/H)=G/\sigma H$ are not isomorphic. – Mikhail Borovoi Jan 15 at 17:50

Note that the group generated by automorphisms, left translations and inversion is finite-dimensional (actually $2(n^2-1)$); while the example by Mariano gives a faithful action of an infinite-dimensional abelian group. – Yves Cornulier Jan 15 at 18:10

## 2 Answers

The coordinate ring when $n=2$ is $A=k[a,b,c,d]/(ad-bc-1)$. If $f\in k[b,c]$, there is an automorphism $\phi:A\to A$ such that $\phi(a)=a+bf$, $\phi(c)=c+df$, $\phi(b)=b$ and $\phi(d)=d$.
One could conjecture that the automorphism group in this case is generated by $SL_2$, inversion and this sort of triangular automorphisms, much as in the Makar-Limanov–Jung–van der Kulk theorem for $k[x,y]$ (This is a very optimistic conjecture, though: this is a $3$-dimensional affine variety quite close to affine space and there are non-tame automorphisms of the latter...) In general, I doubt we know the automorphism group.

Of course, this trick works for all $n$. – Mariano Suárez-Alvarez Jan 15 at 16:44

Thank you, Mariano. – Mikhail Borovoi Jan 15 at 17:49

The automorphism group is massive! Flexible varieties and automorphism groups, I. Arzhantsev, H. Flenner, S. Kaliman, F. Kutzschebauch, M. Zaidenberg, http://arxiv.org/abs/1011.5375.
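Mariano's map is easy to verify by hand: $(a+bf)d - b(c+df) = ad - bc = 1$, so the defining relation of $A$ is preserved. A throwaway numerical check (the specific point and the polynomial $f$ below are arbitrary choices for illustration):

```python
# A point on SL_2: pick a, b, c and solve for d so that a*d - b*c = 1.
a, b, c = 2.0, 0.7, -1.3
d = (1.0 + b * c) / a

# An arbitrary polynomial f in b and c.
f = b**2 + 3.0 * b * c + c

# The triangular automorphism: a -> a + b*f, c -> c + d*f; b and d fixed.
a2, b2, c2, d2 = a + b * f, b, c + d * f, d

relation = a2 * d2 - b2 * c2    # should again equal 1
```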
http://math.stackexchange.com/questions/8374/family-of-function-with-fractional-derivatives
Family of functions with fractional derivatives

I would like a family of functions $f_a(x)$ such that $f_a$ is $a\in\mathbb{R}$ fractionally differentiable but not $a+\epsilon$ fractionally differentiable. Does anyone know such functions which are easily described? Thanks in advance.

1 Answer

$x^a$ is fractionally differentiable at $x=0$, with $b$-fractional derivative proportional to $x^{a-b}$ for $b \leq a$. However, if $b > a$, then the fractional derivative blows up at the origin. If you want a collection of functions which have $a$-fractional derivatives but such that the $b$-fractional derivatives for $b>a$ simply do not exist, consider the $a$-fractional anti-derivatives of $|x|$. These are easy to find, because $|x| = x$ for $x>0$ and $|x| = -x$ for $x \leq 0$, so you can just use the formula for the $k$th anti-derivative of $x$.

Do you also have an example which doesn't blow up? – alext87 Oct 31 '10 at 8:42
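To make the answer concrete: with the Riemann–Liouville convention, the $b$-fractional derivative of $x^a$ is $\frac{\Gamma(a+1)}{\Gamma(a-b+1)}x^{a-b}$, which is finite at the origin for $b \le a$ and blows up there for $b > a$. A small sketch (assuming SciPy is available):

```python
from scipy.special import gamma

def frac_deriv_power(a, b, x):
    """Riemann-Liouville fractional derivative of x**a, evaluated at x."""
    return gamma(a + 1.0) / gamma(a - b + 1.0) * x ** (a - b)

# Sanity check against ordinary calculus: d/dx of x**2 at x=3 is 2*3 = 6.
val = frac_deriv_power(2.0, 1.0, 3.0)

# Half-derivative of x at x=1: Gamma(2)/Gamma(3/2) = 2/sqrt(pi) ~ 1.128.
half = frac_deriv_power(1.0, 0.5, 1.0)
```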
http://mathhelpforum.com/advanced-statistics/153924-least-squares-fitting.html
# Thread: 1. ## Least Squares Fitting

Hello,

It is often said that the square of the correlation coefficient, $r^2$, is the fraction of the variation in y that is explained by its relationship with x. I seek assistance with determining how this translates to the following: $$1 - r^2 = \frac{\text{variance of the deviations from the trendline}}{\text{variance of the } y \text{ data}}$$ i.e. the variance of the data's deviations from the trendline, divided by the variance of the y data, equals $1-r^2$.

Best regards, wirefree

2. Keenly look forward to a response to my query above.

Best regards, wirefree
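For what it's worth, the identity holds with variances (squared standard deviations) for an ordinary least-squares line with an intercept, and it is easy to confirm numerically (the data below is just made up for illustration):

```python
import numpy as np

rng = np.random.RandomState(42)
x = rng.rand(200)
y = 3.0 * x + 1.0 + 0.5 * rng.randn(200)

# Fit the least-squares trendline and form the deviations from it.
slope, intercept = np.polyfit(x, y, 1)
resid = y - (slope * x + intercept)

r = np.corrcoef(x, y)[0, 1]

lhs = 1.0 - r**2
rhs = np.var(resid) / np.var(y)   # variance of deviations / variance of y
```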
http://cstheory.stackexchange.com/questions/14664/complexity-results-for-lower-elementary-recursive-functions
Complexity results for Lower-Elementary Recursive Functions? Intrigued by Chris Pressey's interesting question on elementary-recursive functions, I was exploring more and unable to find an answer to this question on the web. The elementary recursive functions correspond nicely to the exponential hierarchy, $\text{DTIME}(2^n) \cup \text{DTIME}(2^{2^n}) \cup \cdots$. It seems straightforward from the definition that decision-problems decidable (term?) by lower-elementary functions should be contained in EXP, and in fact in DTIME$(2^{O(n)})$; these functions also are constrained to output strings linear in their input length [1]. But on the other hand, I don't see any obvious lower bounds; at first glance it seems conceivable that LOWER-ELEMENTARY could strictly contain NP, or perhaps fail to contain some problems in P, or most likely some possibility I've not yet imagined. It would be epicly cool if LOWER-ELEMENTARY = NP but I suppose that is too much to ask for. So my questions: 1. Is my understanding so far correct? 2. What is known about the complexity classes bounding the lower elementary recursive functions? 3. (Bonus) Do we have any nice complexity-class characterizations when making further restrictions on recursive functions? I was thinking in particular of the restriction to $\log(x)$-bounded summations, which I think run in polynomial time and produce linear output; or constant-bounded summations, which I think run in polynomial time and produce output of length at most $n + O(1)$. [1]: We can show (I believe) that lower-elementary functions are subject to these restrictions by structural induction, supposing that the functions $h,g_1,\dots,g_m$ have complexity $2^{O(n)}$ and outputs of bitlength $O(n)$ on an input of length $n$. 
When $f(x) = h(g_1(x),\dots,g_m(x))$, letting $n := \log x$, each $g$ has output of length $O(n)$, so $h$ has an $O(n)$-length input (and therefore $O(n)$-length output); the complexity of computing all $g$s is $m2^{O(n)}$ and of $h$ is $2^{O(n)}$, so $f$ has complexity $2^{O(n)}$ and output of length $O(n)$ as claimed. When $f(x) = \sum_{i=1}^x g(i)$, the $g$s have outputs of length $O(n)$, so the value of the sum of outputs is $2^n 2^{O(n)} \in 2^{O(n)}$, so their sum has length $O(n)$. The complexity of summing these values is bounded by $2^n$ (the number of summations) times $O(n)$ (the complexity of each addition), giving $2^{O(n)}$, and the complexity of computing the outputs is bounded by $2^{n}$ (the number of computations) times $2^{O(n)}$ (the complexity of each one), giving $2^{O(n)}$. So $f$ has complexity $2^{O(n)}$ and output of length $O(n)$ as claimed. - The Wikipedia article you link to states that lower-elementary functions have polynomial growth (but it gives no reference.) Showing that a P-complete problem can or cannot be solved with elementary functions would be a good step towards pinning it down further. It does not, offhand, look impossible to simulate a Turing machine for n steps -- maybe a bounded sum corresponding to the number of steps of another bounded sum corresponding to each state transition? – Chris Pressey Dec 9 '12 at 11:32 @Chris - My guess was that "polynomial growth" refers to the number of bits in the output being no more than linear in the number of bits in the input. I agree that the simulation seems very plausible, and seems doable in polynomial time (but might take some details to verify this!). – usul Dec 9 '12 at 17:22 Sorry, that first part might not be clear, but it's because then on input of value $x$ the output has value at most polynomial in $x$.
– usul Dec 9 '12 at 19:11

Concerning question 3: the functions definable in the variant with $\log(x)$-bounded summation are all in the complexity class uniform $\mathsf{TC}^0$. With constant-bounded summation you get a subclass of uniform $\mathsf{AC}^0$. – Jan Johannsen Dec 10 '12 at 10:15

@Xoff I believe it is all in the summation: We are summing from $1$ to $x$, where (on an input of $n$ bits) $x$ can have size $2^n$, so our sum will be $2^n$ times the size of each summand. – usul Mar 27 at 0:53

1 Answer

Concerning (bonus) question 3: the functions definable in the variant with $\log(x)$-bounded summation are all in the complexity class uniform $\mathsf{TC}^0$. This follows from the construction in Chandra, Stockmeyer and Vishkin, "Constant depth reducibility", SIAM J. Comput. 13 (1984), showing that the sum of $n$ numbers of $n$ bits each can be computed by polynomial-size constant-depth circuits with majority gates. With constant-bounded summation you get a subclass of uniform $\mathsf{AC}^0$. Constant-bounded summation can be reduced to addition and composition, and addition can be computed by constant-depth boolean circuits using the carry-lookahead method.
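As a toy illustration of the bookkeeping in footnote [1] (a sketch, not a proof): take the bounded sum $f(x) = \sum_{i=1}^x g(i)$ with $g(i) = i$, whose outputs have length at most $n$ on inputs of length $n$; the sum then fits in about $2n$ bits, i.e. its length is still $O(n)$.

```python
def bounded_sum(x):
    # sum_{i=1}^{x} g(i) with g(i) = i, computed term by term
    return sum(range(1, x + 1))

x = 2**20
n = x.bit_length()                      # input length in bits
out_bits = bounded_sum(x).bit_length()  # output length in bits

# 2^n summands of at most n bits each, so the value fits in about 2n bits.
```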
http://www.physicsforums.com/showthread.php?t=400183
Physics Forums

## Spacetime Diagrams - Help With Drawing

1. The problem statement, all variables and given/known data

I need to draw carefully labelled spacetime diagrams (one per section) to illustrate the following:

a) The length of a 'rod' which is exactly $1.0$ light seconds long when at rest along the x′-axis in one frame, and which is moving along the x-axis at $0.866c$ in the diagram rest frame.

b) The addition of velocities as viewed from a spaceship, for the case of a missile launched at $+0.80c$ relative to the spaceship, itself travelling at $+0.60c$ relative to the Earth. Take the launch time to be at $t=0$ as the spaceship passes the Earth.

2. Relevant equations

Within the question statement and solutions, as relevant.

3. The attempt at a solution

I can't figure out how to draw the scenario in (a). No idea. Have managed to do some simpler ones and the one in the second part of this post, but I just can't get my head around how to draw this one. I've had a go at (b); this is what I have for the spacetime diagram (see the included image), but I'm not sure if the scales and labels are correct. I just used the values 3, 4, 5 since those were convenient numbers that seemed to work from some quick calculations; I think I can just do that: http://yfrog.com/5nspacetimediag1j

Any help, particularly with (a) as I think (b) is pretty much correct hopefully, would be much appreciated.
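Not a substitute for the diagrams themselves, but the numbers that should appear on them can be checked with a few lines (standard length-contraction and velocity-addition formulae, working in units of $c$):

```python
import math

# (a) A rod of rest length 1.0 light-second moving at v = 0.866c:
v = 0.866
gamma = 1.0 / math.sqrt(1.0 - v**2)   # Lorentz factor, ~2 here
contracted = 1.0 / gamma              # observed length, ~0.5 light-seconds

# (b) Missile at +0.80c relative to a ship moving at +0.60c relative to Earth:
u_prime, v_ship = 0.80, 0.60
u = (u_prime + v_ship) / (1.0 + u_prime * v_ship)   # ~0.946c seen from Earth
```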
http://www.linuxforu.com/2011/05/what-is-scientific-programming/
# What is Scientific Programming?

By Adit Gupta on May 1, 2011 in How-Tos, Open Gurus, Tools / Apps

This article will take you into the world of scientific programming, from simple numerical computations to some complex mathematical models and simulations. We will explore various computational tools, but our focus will remain scientific programming with Python. I have chosen Python because it combines remarkable power with clean, simple and easy-to-understand syntax. That some of the most robust scientific packages have been written in Python makes it a natural choice for scientific computational tasks.

Scientific programming, or in broader terms, scientific computing, deals with solving scientific problems with the help of computers, so as to obtain results more quickly and accurately. Computers have long been used for solving complex scientific problems; however, advancements in computer science and hardware technologies over the years have also allowed students and academicians to play around with robust scientific computation tools. Although tools like Mathematica and Matlab remain commercial, the open source community has also developed some equally powerful computational tools, which can be easily used by students and independent researchers. In fact, these tools are so robust that they are now also used at educational institutions and research labs across the globe. So, let's move on to setting up a scientific environment.

## Setting up the environment

Most UNIX and Linux distributions have Python installed by default. We will use Python 2.6.6 for the purposes of this article. It's recommended to install IPython, as it offers enhanced introspection, additional shell syntax, syntax highlighting and tab-completion; it is available in most distributions' repositories. Next, we'll install the two most basic scientific computational packages for Python: NumPy and SciPy. The former is the fundamental package needed for scientific computing with Python.
It contains a powerful N-dimensional array object, sophisticated functions, tools for integrating C/C++ and Fortran code, and useful linear algebra, Fourier transform, and random-number capabilities. The SciPy library is built to work with NumPy arrays, and provides many user-friendly and efficient numerical routines for numerical integration and optimisation. Open the Synaptic Package Manager and install the `python-numpy` and `python-scipy` packages. Now that we have NumPy and SciPy installed, let's get our hands dirty with some mathematical functions and equations!

Figure 1: NumPy and SciPy Installation

## Numerical computations with NumPy, SciPy and Maxima

NumPy offers efficient array computations with fixed-size, homogeneous, multi-dimensional array types, and a plethora of functions to perform various array operations. Array-programming environments like NumPy generalise operations on scalars to apply transparently to vectors, matrices and other higher-dimensional arrays. Python does not have a default array data type, and processing data with Python lists and for loops is dramatically slower compared to corresponding operations in compiled languages like FORTRAN, C and C++. NumPy comes to the rescue, with its dynamically typed environment for array computation, similar to basic Matlab. You can create a simple array with the array function in NumPy:

```In [1]: import numpy as np
In [2]: a = np.array([1,2,3,4,5])
In [3]: b = np.array([6,7,8,9,10])
In [4]: type(b)  # check the data type
Out[4]: <type 'numpy.ndarray'>
In [5]: a + b
Out[5]: array([ 7,  9, 11, 13, 15])```

You can also convert a flat array to a matrix-shaped array using the shape attribute:

```In [1]: import numpy as np
In [2]: c = np.array([1,4,5,7,2,6])
In [3]: c.shape = (2,3)
In [4]: c
Out[4]: array([[1, 4, 5],
               [7, 2, 6]])  # reshaped to 2 rows and 3 columns```
## Matrix operations

Now let us take a look at some simple matrix operations. The following matrix can be simply defined as: $M = \begin{pmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 9 \end{pmatrix}$

```# Defining a matrix and matrix multiplication
In [1]: import numpy as np
In [2]: x = np.array([[1,2,3],[4,5,6],[7,8,9]])
In [3]: y = np.array([[1,4,5],[2,6,5],[6,8,3]])  # another matrix
In [4]: z = np.dot(x,y)  # matrix multiplication using the dot function
In [5]: z
Out[5]: array([[ 23,  40,  24],
               [ 50,  94,  63],
               [ 77, 148, 102]])```

You can also create matrices in NumPy using the matrix class. However, it's preferable to use arrays, since most NumPy functions return arrays, and not matrices. Moreover, matrix objects have a maximum rank of 2; to hold Rank-3 data, you need an array. Also, arrays are closer in semantics to tensor algebra, compared to matrix objects. The following example shows how to transpose a matrix and define a diagonal matrix:

```In [1]: import numpy as np
In [2]: x = np.array([[1,2,3],[4,5,6],[7,8,9]])
In [3]: xT = np.transpose(x)  # take the transpose of the matrix
In [4]: xT
Out[4]: array([[1, 4, 7],
               [2, 5, 8],
               [3, 6, 9]])
In [5]: n = np.diag(range(1,4))  # defining a diagonal matrix
In [6]: n
Out[6]: array([[1, 0, 0],
               [0, 2, 0],
               [0, 0, 3]])```

## Linear algebra

You can also solve linear algebra problems using the `linalg` package contained in SciPy. Let us look at a few more examples, calculating a matrix inverse and a determinant:

```# Matrix inverse
In [1]: import numpy as np
In [2]: m = np.array([[1,3,3],[1,4,3],[1,3,4]])
In [3]: np.linalg.inv(m)  # take the inverse with the linalg.inv function
Out[3]: array([[ 7., -3., -3.],
               [-1.,  1.,  0.],
               [-1.,  0.,  1.]])

# Calculating a determinant
In [4]: z = np.array([[0,0],[0,1]])
In [5]: np.linalg.det(z)
Out[5]: 0.0  # z is a singular matrix, hence its determinant is zero```

## Integration

The `scipy.integrate` package provides several integration techniques, which can be used to solve simple and complex integrations. The package provides various methods to integrate functions. We will be discussing a few of them here.
Let us first understand how to integrate the following functions: $$\int_0^3 x^2 \,\mathrm{d}x \qquad\qquad \int_1^3 \frac{2^{\sqrt{x}}}{\sqrt{x}} \,\mathrm{d}x$$

```# Simple integration of x^2
In [1]: from scipy.integrate import quad
In [2]: from numpy import sqrt
In [3]: quad(lambda x: x**2, 0, 3)
Out[3]: (9.0, 9.9922072216264089e-14)

# Integration of 2^sqrt(x)/sqrt(x)
In [4]: quad(lambda x: 2**sqrt(x)/sqrt(x), 1, 3)
Out[4]: (3.8144772785946079, 4.2349205016052412e-14)```

The first argument to quad is a "callable" Python object (i.e., a function, method, or class instance). We have used a lambda function as the argument in this case. (A lambda function is one that takes any number of arguments, including optional arguments, and returns the value of a single expression.) The next two arguments are the limits of integration. The return value is a tuple, with the first element holding the estimated value of the integral, and the second element holding an upper bound on the error.

## Differentiation

We can get a derivative at a point via automatic differentiation, supported by FuncDesigner and OpenOpt, which are scientific packages based on SciPy. Note that automatic differentiation is different from symbolic and numerical differentiation. In symbolic differentiation, the function is differentiated as an expression, and is then evaluated at a point. Numerical differentiation makes use of the method of finite differences. Automatic differentiation, by contrast, is the decomposition of differentials provided by the chain rule. A complete understanding of automatic differentiation is beyond the scope of this article, so I'd recommend that interested readers refer to Wikipedia. Automatic differentiation works by decomposing the vector function into elementary sequences, which are then differentiated by a simple table lookup. Unfortunately, a deeper understanding of automatic differentiation is required to make full use of the scientific packages provided in Python.
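For comparison, the finite-difference method mentioned above fits in a few lines of plain Python (the step size `h` is an arbitrary choice, trading truncation error against rounding error):

```python
import math

def central_diff(f, x, h=1e-6):
    # O(h**2)-accurate central-difference approximation to f'(x)
    return (f(x + h) - f(x - h)) / (2.0 * h)

# The derivative of sin at 0 is cos(0) = 1.
approx = central_diff(math.sin, 0.0)
```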
Hence, in this article, we'll focus on symbolic differentiation, which is easier to understand and implement. We'll be using a powerful computer algebra system known as Maxima for symbolic differentiation. Maxima is a version of the MIT-developed MACSYMA system, modified to run under CLISP. Written in Lisp, it allows differentiation, integration, solutions for linear or polynomial equations, factoring of polynomials, expansion of functions in the Laurent or Taylor series, computation of the Poisson series, matrix and tensor manipulations, and two- and three-dimensional graphics. Open the Synaptic Package Manager and install the `maxima` package. Once installed, you can run it by executing the `maxima` command in the terminal. We'll be differentiating the following simple functions with the help of Maxima: $$\frac{d}{dx}\,x^4 \qquad \frac{d}{dx}\left(\sin x + \tan x\right) \qquad \frac{d}{dx}\,\frac{1}{\log x}$$ Figure 2 displays Maxima in action.

Figure 2: Differentiation of some simple functions

You simply have to define the function in `diff()` and Maxima will calculate the derivative for you.

```(%i1) diff(x^4);
(%o1) 4 x^3 del(x)
(%i2) diff(sin(x) + tan(x));
(%o2) (sec^2(x) + cos(x)) del(x)
(%i3) diff(1/log(x));
(%o3) - del(x)/(x log^2(x))```

The command `diff(expr,var,num)` will differentiate the expression in Slot 1 with respect to the variable entered in Slot 2 a number of times, determined by a positive integer in Slot 3. Unless a dependency has been established, all parameters and variables in the expression are treated as constants when taking the derivative. Similarly, you can also calculate higher-order differentials with Maxima.

## Ordinary differential equations

Maxima can also be used to solve ODEs. We'll dive straight into some examples to understand how to solve ODEs with Maxima. Consider the following differential equations: $$\frac{dx}{dt} = e^{-t} + x \qquad\qquad \frac{d^2x}{dt^2} - 4x = 0$$ Consider Figure 3.
Figure 3: Solving simple differential equations

Figure 4: Getting solutions at a point of differential equations

Let's rewrite our example ordinary differential equations using the noun form of `diff`, which uses a single quote. Then use `ode2`, and call the general solution `gsoln`. The function `ode2` solves an ordinary differential equation (ODE) of the first or second order. This takes three arguments: an ODE given by `eqn`, the dependent variable `dvar`, and the independent variable `ivar`. When successful, it returns either an explicit or implicit solution for the dependent variable. `%c` is used to represent the integration constant in the case of first-order equations, and `%k1` and `%k2` the constants for second-order equations. We can also find the solution at predefined points using `ic1`, and call this particular solution `psoln`. Consider the following non-linear first-order differential equation: $$\left(x^2 y\right)\frac{dy}{dx} = xy + x^3 - 1$$ Let's first define the equation, and then solve it with `ode2`. Further, let us find the particular solution at the point `x=1`, `y=1` using `ic1`. We can also solve ODEs with NumPy and SciPy using the FuncDesigner and OpenOpt packages. However, both these packages make use of automatic differentiation to solve ODEs. Hence, Maxima was chosen over these packages. ODEs can also be solved using the `scipy.integrate.odeint` package. We will later use this package for mathematical modelling.

## Curve plotting with MatPlotLib

It's said that a picture is worth a thousand words, and there's no denying the fact that it's much more convenient to make sense of a scientific experiment by looking at plots as compared to looking just at the raw data. In this article, we'll be focusing on MatPlotLib, which is a Python package for 2D plotting that produces production-quality graphs. MatPlotLib is customisable and extensible, and is integrated with LaTeX markup, which is really useful when writing scientific papers.
Let us make a simple plot with the help of MatPlotLib:

```
#! /usr/bin/python
# Simple plot with MatPlotLib
import matplotlib.pyplot as plt

x = range(10)
plt.plot(x, [xi**3 for xi in x])
plt.show()
```

Figure 5: Simple plot with MatPlotLib

Let us take another example using the `arange` function; `arange(x,y,z)` is a part of NumPy, and it generates a sequence of elements from `x` up to (but not including) `y` with spacing `z`.

```
#! /usr/bin/python
# Simple plot with MatPlotLib
import matplotlib.pyplot as plt
import numpy as np

x = np.arange(0, 20, 2)
plt.plot(x, [xi**2 for xi in x])
plt.show()
```

We can also add labels, legends, the grid and axis names to the plot. Take a look at Figure 6, and the following code:

Figure 6: The plot after utilising the arange function

```
import matplotlib.pyplot as plt
import numpy as np

x = np.arange(0, 20, 2)
plt.title('Sample Plot')
plt.xlabel('X axis')
plt.ylabel('Y axis')
plt.plot(x, [xi**3 for xi in x], label='Fast')
plt.plot(x, [xi**4 for xi in x], label='Slow')
plt.legend()
plt.grid(True)
plt.savefig('plot.png')  # save before show(), which clears the current figure
plt.show()
```

Figure 7: Multiline plot with MatPlotLib

You can create various types of plots using MatPlotLib. Let us take a look at the pie plot and the scatter plot.

```
import matplotlib.pyplot as plt

plt.figure(figsize=(10,10))
plt.title('Distribution of Dark Energy and Dark Matter in the Universe')
x = [74.0, 22.0, 3.6, 0.4]
labels = ['Dark Energy', 'Dark Matter', 'Intergalactic gas', 'Stars, etc.']
plt.pie(x, labels=labels, autopct='%1.1f%%')
plt.show()
```

Figure 8: Pie chart with MatPlotLib
Figure 9: Scatter Plot with MatPlotLib

```
import matplotlib.pyplot as plt
import numpy as np

plt.title('Scatter Plot')
x = np.random.randn(200)
y = np.random.randn(200)
plt.xlabel('X axis')
plt.ylabel('Y axis')
plt.scatter(x, y)
plt.show()
```

Similarly, you can plot histograms and bar charts using the `plt.hist()` and `plt.bar()` functions, respectively.
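As a quick illustration of `plt.hist()`, here is a sketch that bins 1000 normally distributed values (the random data, seed and file name are our own, chosen only for the example); the `Agg` backend is selected so the script also runs without a display:

```python
import matplotlib
matplotlib.use('Agg')  # render off-screen; no display required
import matplotlib.pyplot as plt
import numpy as np

np.random.seed(0)
samples = np.random.randn(1000)  # 1000 normally distributed values

# plt.hist returns the bin counts, the bin edges and the drawn patches
counts, edges, patches = plt.hist(samples, bins=20)
plt.title('Histogram of normal samples')
plt.xlabel('Value')
plt.ylabel('Frequency')
plt.savefig('hist.png')
```

Bar charts work along the same lines: pass the bar positions and heights to `plt.bar()`.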
In our next example, we will generate a plot by using data from a text file:

```
import matplotlib.pyplot as plt
import numpy as np

data = np.loadtxt('ndata.txt')
x = data[:,0]
y = data[:,1]
plt.figure(1, figsize=(6,4))
plt.grid(True)
lw = 1
plt.xlabel('x')
plt.plot(x, y, 'b', linewidth=lw)
plt.show()
```

After executing this program, it results in the plot shown in Figure 10.

Figure 10: Plotting by fetching data from the text file
Figure 11: Spring-Mass System

So, what’s happening here? First of all, we fetch data from the text file using the `loadtxt` function, which splits each non-empty line into a sequence of strings; empty or commented lines are simply skipped. The fetched data is then distributed into variables using slicing. The `figure` function creates a new figure of the specified dimensions, whereas the `plot` function creates a new line plot.

## Mathematical modelling

Now that we have a basic understanding of various computation tools, we can move on to some more complex problems related to mathematics and physics. Let’s take a look at one of the problems provided by the SciPy community. The example is available on the Internet (at the SciPy website). However, some of the methods explained in this example are deprecated; hence, we’ll rebuild the example so that it works correctly with the latest versions of SciPy and NumPy. We’re going to build and simulate a model based on a coupled spring-mass system, which is essentially a harmonic oscillator, in which a spring is stretched or compressed by a mass, thereby developing a restoring force in the spring, which results in harmonic motion when the mass is displaced from its equilibrium position. For an undamped system, the motion of Block 1 is given by the following differential equation:

m1 d^2x1/dt^2 + (k1 + k2)x1 - k2x2 = 0

For Block 2:

m2 d^2x2/dt^2 + k2x2 - k2x1 = 0

In this example, we’ve taken a coupled spring-mass system which is subjected to a frictional force, thereby resulting in damping.
Note that damping tends to reduce the amplitude of oscillations in an oscillatory system. For our example, let us assume that the lengths of the springs, when subjected to no external forces, are L1 and L2, and that the friction coefficients are b1 and b2. The following differential equations define such a system:

m1 d^2x1/dt^2 + b1 dx1/dt + k1(x1 - L1) - k2(x2 - x1 - L2) = 0

…and:

m2 d^2x2/dt^2 + b2 dx2/dt + k2(x2 - x1 - L2) = 0

We’ll be using the SciPy `odeint` function to solve this problem. The function works with first-order differential equations; hence, we’ll rewrite the two second-order equations as four first-order equations:

dx1/dt = y1

dy1/dt = (-b1 y1 - k1(x1 - L1) + k2(x2 - x1 - L2)) / m1

dx2/dt = y2

dy2/dt = (-b2 y2 - k2(x2 - x1 - L2)) / m2

Now, let’s write a simple Python script to define this problem. Save it as `two_springs.py`, since the solver script below imports it under that name:

```
#! /usr/bin/python

def vectorfield(w, t, p):
    x1, y1, x2, y2 = w
    m1, m2, k1, k2, L1, L2, b1, b2 = p
    f = [y1,
         (-b1*y1 - k1*(x1 - L1) + k2*(x2 - x1 - L2)) / m1,
         y2,
         (-b2*y2 - k2*(x2 - x1 - L2)) / m2]
    return f
```

In this script, we have simply defined the above-mentioned equations programmatically. The argument `w` holds the state variables, `t` is the time, and `p` is the vector of parameters. In short, we have simply defined the vector field for the spring-mass system in this script. Now, let’s define a script that uses `odeint` to solve the equations for a given set of parameter values, initial conditions and time interval. The script prints the points in the solution to the terminal.

```
#! /usr/bin/python
from scipy.integrate import odeint
import two_springs

# Parameter values
# Masses:
m1 = 1.0
m2 = 1.5
# Spring constants
k1 = 8.0
k2 = 40.0
# Natural lengths
L1 = 0.5
L2 = 1.0
# Friction coefficients
b1 = 0.8
b2 = 0.5

# Initial conditions
# x1 and x2 are the initial displacements; y1 and y2 are the initial velocities
x1 = 0.5
y1 = 0.0
x2 = 2.25
y2 = 0.0

# ODE solver parameters
abserr = 1.0e-8
relerr = 1.0e-6
stoptime = 10.0
numpoints = 250

# Create the time samples for the output of the ODE solver.
t = [stoptime * float(i) / (numpoints - 1) for i in range(numpoints)]

# Pack up the parameters and initial conditions:
p = [m1, m2, k1, k2, L1, L2, b1, b2]
w0 = [x1, y1, x2, y2]

# Call the ODE solver.
wsol = odeint(two_springs.vectorfield, w0, t, args=(p,), atol=abserr, rtol=relerr)

# Print the solution.
for t1, w1 in zip(t, wsol):
    print t1, w1[0], w1[1], w1[2], w1[3]
```

The `scipy.integrate.odeint` function integrates a system of ordinary differential equations. It takes the following parameters:

• `func: callable(y, t0, ...)` — It computes the derivative of `y` at `t0`.
• `y0: array` — This is the initial condition on `y` (can be a vector).
• `t: array` — It is a sequence of time points for which to solve for `y`. The initial value point should be the first element of this sequence.
• `args: tuple` — Indicates extra arguments to pass to `func`.

In our example, we have also passed the `atol` and `rtol` keyword arguments to control the absolute and relative errors of the solver. The `zip` function takes one or more sequences as arguments, and returns a series of tuples that pair up parallel items taken from those sequences. Redirect the solution printed by this script to a text file named `two_springs.txt` (e.g., `python two_springs_solver.py > two_springs.txt`). The following script uses MatPlotLib to plot the solution generated by `two_springs_solver.py`:

```
#! /usr/bin/python
# Plot the solution of the coupled spring-mass system

from pylab import *
from matplotlib.font_manager import FontProperties
import numpy as np

data = np.loadtxt('two_springs.txt')
t  = data[:,0]
x1 = data[:,1]
y1 = data[:,2]
x2 = data[:,3]
y2 = data[:,4]

figure(1, figsize=(6,4))
xlabel('t')
grid(True)
lw = 1
plot(t, x1, 'b', linewidth=lw)
plot(t, x2, 'g', linewidth=lw)
legend((r'$x_1$', r'$x_2$'), prop=FontProperties(size=16))
title('Mass Displacements for the Coupled Spring-Mass System')
savefig('two_springs.png', dpi=72)
```

On running the script, we get the plot shown in Figure 12.
It clearly shows how the mass displacements are reduced with time for damped systems.

Figure 12: Plot of the spring-mass system

In this article, we have covered some of the most basic operations in scientific computing. However, we can also model and simulate more complex problems with NumPy and SciPy. These tools are now actively used for research in quantum physics, cosmology, astronomy, applied mathematics, finance and various other fields. With this basic understanding of scientific programming, you’re now ready to explore deeper realms of this exciting world!

Article written by Adit, who works as a technology evangelist at Sourcebits Technologies, Bangalore. He is passionate about programming, astrophysics and computer science.
turbulence [tur-byuh-luhns] /ˈtɜrbyələns/

Turbulence is the state of violent or agitated behavior in a fluid. Turbulent behavior is characteristic of systems of large numbers of particles, and its unpredictability and randomness have long thwarted attempts to fully understand it, even with such powerful tools as statistical mechanics. Although much is still unknown about turbulence, recent developments in nonlinear dynamics have led to an understanding of the onset of turbulence, and the advent of the supercomputer has enabled better models of turbulent states to be developed. Until the early 1970s, it was held that laminar, or smooth, flow made a gradual transition to turbulent flow by the addition of instabilities, one at a time, until the flow became unpredictable. Experimental work, however, has shown that the onset of turbulence occurs abruptly, and in fact is characterized by the so-called strange attractors of nonlinear dynamics. Increased understanding of turbulent flow through supercomputer models is leading to advances in such diverse areas as the design of better airplane wings and artificial heart valves.

Turbulent flow is fluid flow in which the fluid undergoes irregular fluctuations, or mixing. The speed of the fluid at a point continuously undergoes changes in magnitude and direction, which results in swirling and eddying as the bulk of the fluid moves in a specific direction. Common examples of turbulent flow include atmospheric and ocean currents, blood flow in arteries, oil transport in pipelines, lava flow, flow through pumps and turbines, and the flow in boat wakes and around aircraft wing tips. (Encyclopedia Britannica, 2008. Encyclopedia Britannica Online.)

In fluid dynamics, turbulence or turbulent flow is a fluid regime characterized by chaotic, stochastic property changes. This includes low momentum diffusion, high momentum convection, and rapid variation of pressure and velocity in space and time.
Flow that is not turbulent is called laminar flow. The (dimensionless) Reynolds number characterizes whether flow conditions lead to laminar or turbulent flow; for pipe flow, for example, a Reynolds number above about 4000 indicates turbulent flow, while a Reynolds number between 2100 and 4000 corresponds to transitional flow. At very low speeds the flow is laminar, i.e., the flow is smooth (though it may involve vortices on a large scale). As the speed increases, at some point the transition is made to turbulent flow. In turbulent flow, unsteady vortices appear on many scales and interact with each other. Drag due to boundary layer skin friction increases. The structure and location of boundary layer separation often changes, sometimes resulting in a reduction of overall drag. Because laminar-turbulent transition is governed by the Reynolds number, the same transition occurs if the size of the object is gradually increased, or the viscosity of the fluid is decreased, or the density of the fluid is increased. Turbulence causes the formation of eddies of many different length scales. Most of the kinetic energy of the turbulent motion is contained in the large-scale structures. The energy "cascades" from these large-scale structures to smaller-scale structures by an inertial and essentially inviscid mechanism. This process continues, creating smaller and smaller structures and thus a hierarchy of eddies. Eventually this process creates structures that are small enough that molecular diffusion becomes important and viscous dissipation of energy finally takes place. The scale at which this happens is the Kolmogorov length scale. In two-dimensional turbulence (as can be approximated in the atmosphere or ocean), energy actually flows to larger scales. This is referred to as the inverse energy cascade and is characterized by a $k^{-5/3}$ power spectrum. This is the main reason why large-scale weather features such as hurricanes occur.
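The pipe-flow thresholds quoted above (laminar below about 2100, transitional between 2100 and 4000, turbulent above 4000) are easy to encode. A sketch, using the definition Re = vD/ν for mean velocity v, pipe diameter D and kinematic viscosity ν; the cutoffs are the approximate ones given in the text, and the numerical inputs are illustrative assumptions:

```python
def flow_regime(velocity, diameter, kinematic_viscosity):
    """Classify pipe flow from the Reynolds number Re = v*D/nu."""
    re = velocity * diameter / kinematic_viscosity
    if re < 2100:
        regime = 'laminar'
    elif re <= 4000:
        regime = 'transitional'
    else:
        regime = 'turbulent'
    return re, regime

# Water (nu ~ 1e-6 m^2/s, assumed) moving at 1 m/s through a 5 cm pipe:
re, regime = flow_regime(1.0, 0.05, 1.0e-6)
print(re, regime)  # Re = 50000, well into the turbulent range
```

The same Reynolds-number comparison also explains the remark about size, viscosity and density: each enters Re directly, so changing any of them can push the flow across the transition.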
Turbulent diffusion is usually described by a turbulent diffusion coefficient. This turbulent diffusion coefficient is defined in a phenomenological sense, by analogy with the molecular diffusivities, but it does not have a true physical meaning, being dependent on the flow conditions and not a property of the fluid itself. In addition, the turbulent diffusivity concept assumes a constitutive relation between a turbulent flux and the gradient of a mean variable, similar to the relation between flux and gradient that exists for molecular transport. In the best case, this assumption is only an approximation. Nevertheless, the turbulent diffusivity is the simplest approach for quantitative analysis of turbulent flows, and many models have been postulated to calculate it. For instance, in large bodies of water like oceans this coefficient can be found using Richardson's four-thirds power law and is governed by the random walk principle. In rivers and large ocean currents, the diffusion coefficient is given by variations of Elder's formula. When designing piping systems, turbulent flow requires a higher input of energy from a pump (or fan) than laminar flow. However, for applications such as heat exchangers and reaction vessels, turbulent flow is essential for good heat transfer and mixing. While it is possible to find some particular solutions of the Navier-Stokes equations governing fluid motion, all such solutions are unstable at large Reynolds numbers. Sensitive dependence on the initial and boundary conditions makes fluid flow irregular both in time and in space, so that a statistical description is needed. The Russian mathematician Andrey Kolmogorov proposed the first statistical theory of turbulence, based on the aforementioned notion of the energy cascade (an idea originally introduced by Richardson) and the concept of self-similarity. As a result, the Kolmogorov microscales were named after him.
It is now known that the self-similarity is broken, so the statistical description has since been modified. Still, the complete description of turbulence remains one of the unsolved problems in physics. According to an apocryphal story, Werner Heisenberg was asked what he would ask God, given the opportunity. His reply was: "When I meet God, I am going to ask him two questions: Why relativity? And why turbulence? I really believe he will have an answer for the first." A similar witticism has been attributed to Horace Lamb (who had published a noted textbook on hydrodynamics)—his choice being quantum mechanics (instead of relativity) and turbulence. Lamb was quoted as saying in a speech to the British Association for the Advancement of Science, "I am an old man now, and when I die and go to heaven there are two matters on which I hope for enlightenment. One is quantum electrodynamics, and the other is the turbulent motion of fluids. And about the former I am rather optimistic."

Examples of turbulence

• Smoke rising from a cigarette. For the first few centimeters, the flow remains laminar, and then becomes unstable and turbulent as the rising hot air accelerates upwards. Similarly, the dispersion of pollutants in the atmosphere is governed by turbulent processes.
• Flow over a golf ball. (This can be best understood by considering the golf ball to be stationary, with air flowing over it.) If the golf ball were smooth, the boundary layer flow over the front of the sphere would be laminar at typical conditions. However, the boundary layer would separate early, as the pressure gradient switched from favorable (pressure decreasing in the flow direction) to unfavorable (pressure increasing in the flow direction), creating a large region of low pressure behind the ball that creates high form drag. To prevent this from happening, the surface is dimpled to perturb the boundary layer and promote transition to turbulence.
This results in higher skin friction, but moves the point of boundary layer separation further along, resulting in lower form drag and lower overall drag.
• The mixing of warm and cold air in the atmosphere by wind, which causes the clear-air turbulence experienced during airplane flight, as well as poor astronomical seeing (the blurring of images seen through the atmosphere).
• Most of the terrestrial atmospheric circulation.
• The oceanic and atmospheric mixed layers and intense oceanic currents.
• The flow conditions in much industrial equipment (such as pipes, ducts, precipitators, gas scrubbers, dynamic scraped surface heat exchangers, etc.) and machinery (for instance, internal combustion engines and gas turbines).
• The external flow over all kinds of vehicles, such as cars, airplanes, ships and submarines.
• The motions of matter in stellar atmospheres.
• A jet exhausting from a nozzle into a quiescent fluid. As the flow emerges into this external fluid, shear layers originating at the lips of the nozzle are created. These layers separate the fast-moving jet from the external fluid, and at a certain critical Reynolds number they become unstable and break down to turbulence.
• Race cars unable to follow each other through fast corners due to turbulence created by the leading car, causing understeer.
• In windy conditions, trucks on the motorway get buffeted by the wakes of other vehicles.
• Round bridge supports under water. In the summer, when the river flows slowly, the water goes smoothly around the support legs. In the winter the flow is faster, and hence the Reynolds number is higher, so the flow may start off laminar but quickly separates from the leg and becomes turbulent.

Kolmogorov 1941 Theory

Richardson's notion of turbulence was that a turbulent flow is composed of "eddies" of different sizes. The sizes define a characteristic length scale for the eddies, which are also characterized by velocity scales and time scales (turnover time) dependent on the length scale.
The large eddies are unstable and eventually break up, originating smaller eddies, and the kinetic energy of the initial large eddy is divided among the smaller eddies that stemmed from it. These smaller eddies undergo the same process, giving rise to even smaller eddies which inherit the energy of their predecessor eddy, and so on. In this way, the energy is passed down from the large scales of the motion to smaller scales until reaching a sufficiently small length scale such that the viscosity of the fluid can effectively dissipate the kinetic energy into internal energy. In his original theory of 1941, Kolmogorov postulated that for very high Reynolds number, the small-scale turbulent motions are statistically isotropic (i.e. no preferential spatial direction can be discerned). In general, the large scales of a flow are not isotropic, since they are determined by the particular geometrical features of the boundaries (the size characterizing the large scales will be denoted L). Kolmogorov's idea was that in Richardson's energy cascade this geometrical and directional information is lost as the scale is reduced, so that the statistics of the small scales have a universal character: they are the same for all turbulent flows when the Reynolds number is sufficiently high. Thus, Kolmogorov introduced a second hypothesis: for very high Reynolds numbers the statistics of small scales are universally and uniquely determined by the viscosity $\nu$ and the rate of energy dissipation $\varepsilon$. With only these two parameters, the unique length that can be formed by dimensional analysis is $\eta = \left(\frac{\nu^3}{\varepsilon}\right)^{1/4}$. This is today known as the Kolmogorov length scale (see Kolmogorov microscales). A turbulent flow is characterized by a hierarchy of scales through which the energy cascade takes place.
Dissipation of kinetic energy takes place at scales of the order of the Kolmogorov length $\eta$, while the input of energy into the cascade comes from the decay of the large scales, of order L. These two scales at the extremes of the cascade can differ by several orders of magnitude at high Reynolds numbers. In between there is a range of scales (each one with its own characteristic length r) that has formed at the expense of the energy of the large ones. These scales are very large compared with the Kolmogorov length, but still very small compared with the large scale of the flow (i.e. $\eta \ll r \ll L$). Since eddies in this range are much larger than the dissipative eddies that exist at Kolmogorov scales, kinetic energy is essentially not dissipated in this range; it is merely transferred to smaller scales until viscous effects become important as the order of the Kolmogorov scale is approached. Within this range inertial effects are still much larger than viscous effects, and it is possible to assume that viscosity does not play a role in their internal dynamics (for this reason this range is called the "inertial range"). Hence, a third hypothesis of Kolmogorov was that at very high Reynolds number the statistics of scales in the range $\eta \ll r \ll L$ are universally and uniquely determined by the scale r and the rate of energy dissipation $\varepsilon$. The way in which the kinetic energy is distributed over the multiplicity of scales is a fundamental characterization of a turbulent flow.
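To get a feel for how small the Kolmogorov length η = (ν³/ε)^(1/4) is compared with the large scale L, it can be evaluated numerically. A sketch; the values for ν and ε are illustrative assumptions, roughly appropriate for air and a vigorously stirred flow:

```python
nu = 1.5e-5    # kinematic viscosity of air, m^2/s (assumed)
epsilon = 1.0  # energy dissipation rate, W/kg (assumed)

eta = (nu**3 / epsilon) ** 0.25  # Kolmogorov length scale, in metres
print(eta)  # roughly a quarter of a millimetre
```

For a flow whose large scale L is of order a metre, this puts four orders of magnitude between the two ends of the cascade.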
For homogeneous turbulence (i.e., statistically invariant under translations of the reference frame) this is usually done by means of the energy spectrum function $E(k)$, where k is the modulus of the wavevector corresponding to some harmonics in a Fourier representation of the flow velocity field u(x):

$\mathbf{u}(\mathbf{x}) = \iiint_{\mathbb{R}^3} \hat{\mathbf{u}}(\mathbf{k})\, e^{i \mathbf{k} \cdot \mathbf{x}} \,\mathrm{d}^3 \mathbf{k}$,

where û(k) is the Fourier transform of the velocity field. Thus, E(k)dk represents the contribution to the kinetic energy from all the Fourier modes with k < |k| < k + dk, and therefore,

$\text{Total kinetic energy} = \int_{0}^{\infty} E(k)\,\mathrm{d}k$.

The wavenumber k corresponding to length scale r is $k = 2\pi/r$. Therefore, by dimensional analysis, the only possible form for the energy spectrum function according to Kolmogorov's third hypothesis is

$E(k) = C \varepsilon^{2/3} k^{-5/3}$,

where C would be a universal constant. This is one of the most famous results of Kolmogorov 1941 theory, and considerable experimental evidence has accumulated that supports it. In spite of this success, Kolmogorov theory is at present under revision. The theory implicitly assumes that the turbulence is statistically self-similar at different scales. This essentially means that the statistics are scale-invariant in the inertial range.
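The -5/3 exponent can be recovered from dimensional analysis alone, which is a useful check on the spectrum formula above. A sketch of the argument:

```latex
% E(k)\,dk is a kinetic energy per unit mass, so, in length (L) and time (T):
[\varepsilon] = \mathrm{L^2\,T^{-3}}, \qquad
[k] = \mathrm{L^{-1}}, \qquad
[E(k)] = \mathrm{L^3\,T^{-2}}.

% Assume a power-law form and match dimensions:
E(k) = C\,\varepsilon^{a} k^{b}
\;\Rightarrow\;
\mathrm{L^3\,T^{-2}} = \mathrm{L}^{2a-b}\,\mathrm{T}^{-3a}
\;\Rightarrow\;
-3a = -2,\quad 2a - b = 3
\;\Rightarrow\;
a = \tfrac{2}{3},\quad b = -\tfrac{5}{3}.
```

With the third hypothesis admitting only $\varepsilon$ and $r$ (equivalently $k$) as parameters, no other exponents are possible.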
A usual way of studying turbulent velocity fields is by means of velocity increments:

$\delta \mathbf{u}(r) = \mathbf{u}(\mathbf{x} + \mathbf{r}) - \mathbf{u}(\mathbf{x})$;

that is, the difference in velocity between points separated by a vector r (since the turbulence is assumed isotropic, the velocity increment depends only on the modulus of r). Velocity increments are useful because they emphasize the effects of scales of the order of the separation r when statistics are computed. The statistical scale-invariance implies that the scaling of velocity increments should occur with a unique scaling exponent $\beta$, so that when r is scaled by a factor $\lambda$, $\delta \mathbf{u}(\lambda r)$ should have the same statistical distribution as $\lambda^{\beta} \delta \mathbf{u}(r)$, with $\beta$ independent of the scale r. From this fact, and other results of Kolmogorov 1941 theory, it follows that the statistical moments of the velocity increments (known as structure functions in turbulence) should scale as

$\langle [\delta \mathbf{u}(r)]^n \rangle = C_n \varepsilon^{n/3} r^{n/3}$,

where the brackets denote the statistical average, and the $C_n$ would be universal constants. There is considerable evidence that turbulent flows deviate from this behavior. The scaling exponents deviate from the n/3 value predicted by the theory, becoming a non-linear function of the order n of the structure function. The universality of the constants has also been questioned. For low orders the discrepancy with the Kolmogorov n/3 value is very small, which explains the success of Kolmogorov theory with regard to low-order statistical moments.
In particular, it can be shown that when the energy spectrum follows a power law $E(k) \propto k^{-p}$, with $1 < p < 3$, the second-order structure function also follows a power law, of the form $\langle [\delta \mathbf{u}(r)]^2 \rangle \propto r^{p-1}$. Since the experimental values obtained for the second-order structure function only deviate slightly from the 2/3 value predicted by Kolmogorov theory, the value for p is very near 5/3 (differences are about 2%). Thus the "Kolmogorov -5/3 spectrum" is generally observed in turbulence. However, for high-order structure functions the difference from the Kolmogorov scaling is significant, and the breakdown of the statistical self-similarity is clear. This behavior, and the lack of universality of the $C_n$ constants, are related to the phenomenon of intermittency in turbulence. This is an important area of research in this field, and a major goal of the modern theory of turbulence is to understand what is really universal in the inertial range.

References

• Falkovich, Gregory and Sreenivasan, Katepalli R. "Lessons from hydrodynamic turbulence", vol. 59, no. 4, pages 43–49 (April 2006).
• U. Frisch. Turbulence: The Legacy of A. N. Kolmogorov. Cambridge University Press, 1995.
• T. Bohr, M.H. Jensen, G. Paladin and A. Vulpiani. Dynamical Systems Approach to Turbulence. Cambridge University Press, 1998.

Original scientific research papers

• Kolmogorov, Andrey Nikolaevich (1941). "The local structure of turbulence in incompressible viscous fluid for very large Reynolds numbers". Proceedings of the USSR Academy of Sciences 30, 299–303; translated into English.
• Kolmogorov, Andrey Nikolaevich (1941). "Dissipation of energy in locally isotropic turbulence". Proceedings of the USSR Academy of Sciences 32, 16–18; translated into English.
# What is a Markov Chain?

What is an intuitive explanation of a Markov chain, and how do they work? Please provide at least one practical example.

-

Also, feel free to retag this question :) – falcon Jul 23 '10 at 8:16

probability is not the best tag for Markov chains, but it could be useful. – mau Jul 23 '10 at 8:30

## 5 Answers

A Markov chain is a discrete random process with the property that the next state depends only on the current state (Wikipedia), so $P(X_n \mid X_1, X_2, \ldots, X_{n-1}) = P(X_n \mid X_{n-1})$. An example could be when you are modelling the weather. You then make the assumption that the weather of today can be predicted using only the knowledge of yesterday. Let's say we have Rainy and Sunny. When it is rainy on one day, the next day is sunny with probability 0.3. When it is sunny, the probability of rain the next day is 0.4. Now, when it is sunny today, we can predict the weather of the day after tomorrow by calculating the probability of rain tomorrow times the probability of sun after rain, plus the probability of sun tomorrow times the probability of sun after sun. In total, the probability of sun the day after tomorrow is $P(R|S) \cdot P(S|R) + P(S|S) \cdot P(S|S) = 0.4 \cdot 0.3 + 0.6 \cdot 0.6 = 0.48$.

-
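The two-day calculation above is really just a matrix multiplication: put the one-day transition probabilities into a matrix P, and the entries of P² are the two-day probabilities. A pure-Python sketch using the 0.4/0.3 figures from the example:

```python
# States: index 0 = Sunny, 1 = Rainy.
# P[i][j] = probability of moving from state i to state j in one day.
P = [[0.6, 0.4],   # Sunny -> Sunny 0.6, Sunny -> Rainy 0.4
     [0.3, 0.7]]   # Rainy -> Sunny 0.3, Rainy -> Rainy 0.7

def mat_mul(A, B):
    """Multiply two square matrices given as lists of rows."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

P2 = mat_mul(P, P)
print(P2[0][0])  # probability of Sunny -> Sunny in two days (~0.48)
```

More generally, the (i, j) entry of the n-th power of P is the probability of being in state j after n days, starting from state i.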
The corresponding matrix is

````
      1    2    3    4    5    6
   ------------------------------
1 |   0   1/4  1/4  1/4  1/4   0
2 |  1/4   0   1/4  1/4   0   1/4
3 |  1/4  1/4   0    0   1/4  1/4
4 |  1/4  1/4   0    0   1/4  1/4
5 |  1/4   0   1/4  1/4   0   1/4
6 |   0   1/4  1/4  1/4  1/4   0
````

As usual, Wikipedia and MathWorld are your friends.

-

Markov chains, especially hidden Markov models, are huge in computational linguistics. A hidden Markov model is one where we can't directly view the state, but we do have some information about what the state might be. For example, consider breaking down a sentence into parts of speech such as verbs, adjectives, etc. We don't know what the parts of speech are, but we can attempt to deduce them from the words. For example, the word "run" might be used 80% of the time as a verb, 18% of the time as a noun and 2% of the time as an adjective. We also have Markov relations between the parts of speech; so, for example, an adjective might be followed by a noun 70% of the time and by another adjective 30% of the time. We can use the Viterbi algorithm to decide which sequence is most likely to have generated the observed output (the algorithm takes into account both the probability of such a sequence of parts of speech occurring together in a sentence and the relative chance that such parts of speech would be responsible for us observing the given words).

-

my dissertation for M.Sc. in mathematics (back in 1986...) was about hidden Markov models in speech recognition :-) – mau Jul 23 '10 at 13:30

Markov chains are used in Markov Chain Monte Carlo (MCMC). This computational technique is extremely common in Bayesian statistics. In Bayesian statistics, you want to compute properties of a posterior distribution. You'd like to draw independent samples from this distribution, but often this is impractical. So you construct a Markov chain that has as its limiting distribution the distribution you want. So, for example, to get the mean of your posterior distribution you could take the mean of the states of your Markov chain.
(Ergodic theory blesses this process.) - I had a programming project in college where we generated large amounts of pseudo-English text using Markov chains. The assignment is here, although I don't know if that link will be good forever. From that page: For example, suppose that [our Markov chains are of length] 2 and the sample file contains ````I like the big blue dog better than the big elephant with the big blue hat on his tusk. ```` Here is how the first three words might be chosen: • A two-word sequence is chosen at random to become the initial prefix. Let's suppose that "the big" is chosen. • The first word must be chosen based on the probability that it follows the prefix (currently "the big") in the source. The source contains three occurrences of "the big". Two times it is followed by "blue", and once it is followed by "elephant". Thus, the next word must be chosen so that there is a 2/3 chance that "blue" will be chosen, and a 1/3 chance that "elephant" will be chosen. Let's suppose that we choose "blue" this time. • The next word must be chosen based on the probability that it follows the prefix (currently "big blue") in the source. The source contains two occurrences of "big blue". Once it is followed by "dog", and the other time it is followed by "hat". Thus, the next word must be chosen so that there is a 50-50 probability of choosing "dog" vs. "hat". Let's suppose that we choose "hat" this time. • The next word must be chosen based on the probability that it follows the prefix (currently "blue hat") in the source. The source contains only one occurrence of "blue hat", and it is followed by "on". Thus, the next word must be "on" (100% probability). • Thus, the first three words in the output text would be "blue hat on". You keep going like that, generating text that is completely nonsensical, but ends up having sort of the same "tone" as the original text.
For example, if your sample file is the complete text of Alice In Wonderland (one of the texts we tried it on) then your nonsense comes out kind of whimsical and Carrollian (if that's a word). If your sample file is The Telltale Heart, you get somewhat dark, morbid nonsense. Anyway, while not a rigorous, formal definition, I hope this helps give you a sense of what a Markov chain is. - +1 Fantastic example, this is exactly the context in which I had encountered Markov Chains. Wish I could vote up more than once. – falcon Jul 31 '10 at 6:37
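The weather example from the first answer can be checked numerically: squaring the one-step transition matrix gives the two-step probabilities, reproducing the 0.48 computed above. This is a sketch in Python; the state ordering and helper names are my own.

```python
# Two-state weather Markov chain from the first answer.
# States: 0 = Sunny, 1 = Rainy.
# P[i][j] = probability of moving from state i to state j in one day.
P = [
    [0.6, 0.4],  # Sunny -> Sunny 0.6, Sunny -> Rainy 0.4
    [0.3, 0.7],  # Rainy -> Sunny 0.3, Rainy -> Rainy 0.7
]

def matmul(A, B):
    """Multiply two small square matrices given as nested lists."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# Two-step transition probabilities: P squared.
P2 = matmul(P, P)

# Starting from Sunny, probability of Sunny the day after tomorrow:
print(P2[0][0])  # approximately 0.48, matching the hand computation
```

The same idea extends to any horizon: the k-step transition probabilities are the entries of the k-th power of the matrix.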
http://citizendia.org/Exponential_function
The exponential function is a function in mathematics. The application of this function to a value x is written as exp(x). Equivalently, this can be written in the form $e^x$, where e is a mathematical constant, the base of the natural logarithm, which equals approximately 2.718281828 and is also known as Euler's number. The exponential function is nearly flat (climbing slowly) for negative values of x, climbs quickly for positive values of x, and equals 1 when x is equal to 0. Its y value always equals the slope at that point. As a function of the real variable x, the graph of $y=e^x$ is always positive (above the x axis) and increasing (viewed left-to-right). It never touches the x axis, although it gets arbitrarily close to it (thus, the x axis is a horizontal asymptote to the graph). Its inverse function, the natural logarithm, ln(x), is defined for all positive x.
The exponential function is occasionally referred to as the anti-logarithm. However, this terminology seems to have fallen into disuse in recent times. Sometimes, especially in the sciences, the term exponential function is more generally used for functions of the form $ka^x$, where $a$, called the base, is any positive real number not equal to one. This article will focus initially on the exponential function with base $e$, Euler's number. In general, the variable $x$ can be any real or complex number, or even an entirely different kind of mathematical object; see the formal definition below.

## Properties

Most simply, exponential functions multiply at a constant rate. For example, the population of a bacterial culture which doubles every 20 minutes can (approximately, as this is not really a continuous problem) be expressed as an exponential, as can the value of a car which decreases by 10% per year. Using the natural logarithm, one can define more general exponential functions. The function $\,\!\, a^x=(e^{\ln a})^x=e^{x \ln a}$, defined for all $a > 0$ and all real numbers $x$, is called the exponential function with base $a$.
Note that this definition of $\, a^x$ rests on the previously established existence of the function $\, e^x$, defined for all real numbers. (Here, we neither formally nor conceptually clarify whether such a function exists or what non-natural exponents are supposed to mean.) Note that the equation above holds for $a = e$, since $\,\!\, e^{x \ln e}=e^{x \cdot 1}=e^x.$ Exponential functions "translate between addition and multiplication", as is expressed in the first three and the fifth of the following exponential laws:

$\,\!\, a^0 = 1$
$\,\!\, a^1 = a$
$\,\!\, a^{x + y} = a^x a^y$
$\,\!\, a^{x y} = \left( a^x \right)^y$
$\,\!\, {1 \over a^x} = \left({1 \over a}\right)^x = a^{-x}$
$\,\!\, a^x b^x = (a b)^x$

These are valid for all positive real numbers $a$ and $b$ and all real numbers $x$ and $y$. Expressions involving fractions and roots can often be simplified using exponential notation: $\,{1 \over a} = a^{-1}$ and, for any $a > 0$, real number $b$, and integer $n > 1$: $\,\sqrt[n]{a^b} = \left(\sqrt[n]{a}\right)^b = a^{b/n}.$

## Derivatives and differential equations

The importance of exponential functions in mathematics and the sciences stems mainly from properties of their derivatives. In particular, $\,{d \over dx} e^x = e^x.$ That is, $e^x$ is its own derivative. Functions of the form $\,Ke^x$ for constant $K$ are the only functions with that property. (This follows from the Picard-Lindelöf theorem, with $\,y(t) = e^t, y(0)=K$ and $\,f(t,y(t)) = y(t)$.) Other ways of saying the same thing include:

• The slope of the graph at any point is the height of the function at that point.
• The rate of increase of the function at x is equal to the value of the function at x.
• The function solves the differential equation $\,y'=y$.
• exp is a fixed point of derivative as a functional.

In fact, many differential equations give rise to exponential functions, including the Schrödinger equation and Laplace's equation as well as the equations for simple harmonic motion. For exponential functions with other bases: $\,{d \over dx} a^x = (\ln a) a^x.$ Thus, any exponential function is a constant multiple of its own derivative. If a variable's growth or decay rate is proportional to its size — as is the case in unlimited population growth (see Malthusian catastrophe), continuously compounded interest, or radioactive decay — then the variable can be written as a constant times an exponential function of time.
Furthermore, for any differentiable function $f(x)$, we find, by the chain rule: $\,{d \over dx} e^{f(x)} = f'(x)e^{f(x)}.$

## Formal definition

The exponential function (in blue), and the sum of the first n+1 terms of the power series on the left (in red).

The exponential function $e^x$ can be defined in a variety of equivalent ways, as an infinite series. In particular it may be defined by a power series: $e^x = \sum_{n = 0}^{\infty} {x^n \over n!} = 1 + x + {x^2 \over 2!} + {x^3 \over 3!} + {x^4 \over 4!} + \cdots$. Note that this definition has the form of a Taylor series. Using an alternate definition for the exponential function should lead to the same result when expanded as a Taylor series.
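As a quick sketch, the power series above can be summed directly and compared against the library exponential; the truncation at 20 terms is my own choice.

```python
import math

def exp_series(x, terms=20):
    """Approximate e**x by summing the first `terms` terms of its power series."""
    total = 0.0
    term = 1.0                # x**0 / 0! = 1
    for n in range(terms):
        total += term
        term *= x / (n + 1)   # next term: x**(n+1) / (n+1)!
    return total

print(exp_series(1.0))        # close to e = 2.718281828...
print(math.exp(1.0))
```

For |x| near 1 the factorial in the denominator makes the tail negligible after a couple of dozen terms; for larger |x| more terms (or the range reduction described later in the article) are needed.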
A less common definition defines $e^x$ as the solution $y$ to the equation $x = \int_{1}^y {dt \over t}.$ It can also be considered to be the following limit: $e^x = \lim_{n \rightarrow \infty} \left(1 + \frac{x}{n}\right)^{n}$

## Numerical value

To obtain the numerical value of the exponential function, the infinite series can be rewritten as: $\,e^x = {1 \over 0!} + x \, \left( {1 \over 1!} + x \, \left( {1 \over 2!} + x \, \left( {1 \over 3!} + \cdots \right)\right)\right)$ $\,= 1 + {x \over 1} \left(1 + {x \over 2} \left(1 + {x \over 3} \left(1 + \cdots \right)\right)\right)$ This expression will converge quickly if we can ensure that $x$ is less than one. To ensure this, we can use the following identity. $\,e^x\, =e^{z+f}\, = e^z \times \left[{1 \over 0!} + f \, \left( {1 \over 1!} + f \, \left( {1 \over 2!} + f \, \left( {1 \over 3!} + \cdots \right)\right)\right)\right]$

• Where $\,z$ is the integer part of $\,x$
• Where $\,f$ is the fractional part of $\,x$
• Hence, $\,f$ is always less than 1, and $\,f$ and $\,z$ add up to $\,x$.

The value of the constant $e^z$ can be calculated beforehand by multiplying $e$ by itself $z$ times.

## Computing exp(x) for real x

An even better algorithm can be found as follows. First, notice that the answer $y = e^x$ is usually a floating point number represented by a mantissa $m$ and an exponent $n$, so $y = m\,2^n$ for some integer $n$ and suitably small $m$.
Thus, we get: $\,y = m\,2^n = e^x.$ Taking logarithms of both sides gives us: $\,\ln(y) = \ln(m) + n\ln(2) = x.$ Thus, we get $n$ by dividing $x$ by $\ln(2)$ and finding the greatest integer that is not greater than the result, that is, the floor function: $\,n = \left\lfloor\frac{x}{\ln(2)}\right\rfloor.$ Having found $n$ we can then find the fractional part $u$ like this: $\,u = x - n\ln(2).$ The number $u$ is small and in the range $0 \le u < \ln(2)$, so we can use the previously mentioned series to compute $m$: $\,m = e^u = 1 + u(1 + u(\frac{1}{2!} + u(\frac{1}{3!} + u(\cdots)))).$ Having found $m$ and $n$ we can then produce $y$ by simply combining the two into a floating point number: $\,y = e^x = m\,2^n.$

## Continued fractions for e^x

Via Euler's identity: $\,\ e^x=1+x+\frac{x^2}{2!}+\cdots=1+\cfrac{x}{1-\cfrac{x}{x+2-\cfrac{2x}{x+3-\cfrac{3x}{x+4-\cfrac{4x}{x+5-\cfrac{5x}{\ddots}}}}}}$ More advanced techniques are necessary to construct the following: $\,\ e^{2m/n}=1+\cfrac{2m}{(n-m)+\cfrac{m^2}{3n+\cfrac{m^2}{5n+\cfrac{m^2}{7n+\cfrac{m^2}{9n+\cfrac{m^2}{\ddots}}}}}}\,$ Setting m = x and n = 2 yields $\,\ e^x=1+\cfrac{2x}{(2-x)+\cfrac{x^2}{6+\cfrac{x^2}{10+\cfrac{x^2}{14+\cfrac{x^2}{18+\cfrac{x^2}{\ddots}}}}}}\,$

## On the complex plane

Exponential function on the complex plane. The transition from dark to light colors shows that the magnitude of the exponential function is increasing to the right. The periodic horizontal bands indicate that the exponential function is periodic in the imaginary part of its argument.

As in the real case, the exponential function can be defined on the complex plane in several equivalent forms.
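The earlier "Computing exp(x) for real x" algorithm (split x = n·ln 2 + u with 0 ≤ u < ln 2, sum the series for e^u, recombine as m·2^n) can be sketched as follows; the function name and the truncation depth are my own choices.

```python
import math

def exp_reduced(x, terms=30):
    """Compute e**x by range reduction: x = n*ln(2) + u with 0 <= u < ln(2)."""
    n = math.floor(x / math.log(2))   # exponent part (floor, so u >= 0)
    u = x - n * math.log(2)           # reduced argument, 0 <= u < ln 2
    # The series for e**u converges quickly because u is small.
    m = 0.0
    term = 1.0
    for k in range(terms):
        m += term
        term *= u / (k + 1)
    return m * 2.0 ** n               # recombine mantissa and exponent

for x in (-3.7, 0.0, 1.0, 10.5):
    print(x, exp_reduced(x), math.exp(x))
```

A real libm would use `math.ldexp` for the final scaling and a carefully rounded representation of ln 2, but the structure is the same.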
Some of these definitions mirror the formulas for the real-valued exponential function. Specifically, one can still use the power series definition, where the real value is replaced by a complex one: $\,\!\, e^z = \sum_{n = 0}^\infty\frac{z^n}{n!}$ Using this definition, it is easy to show why ${d \over dz} e^z = e^z$ holds in the complex plane. Another definition extends the real exponential function. First, we state the desired property $e^{x + iy} = e^x e^{iy}$. For $e^x$ we use the real exponential function. We then proceed by defining only: $e^{iy} = \cos(y) + i\sin(y)$. Thus we use the real definition rather than ignore it. [1] When considered as a function defined on the complex plane, the exponential function retains the important properties $\,\!\, e^{z + w} = e^z e^w$, $\,\!\, e^0 = 1$, $\,\!\, e^z \ne 0$, $\,\!\, {d \over dz} e^z = e^z$ for all $z$ and $w$. It is a holomorphic function which is periodic with imaginary period $\,2 \pi i$ and can be written as $\,\!\, e^{a + bi} = e^a (\cos b + i \sin b)$ where $a$ and $b$ are real values.
This formula connects the exponential function with the trigonometric functions and with the hyperbolic functions. Thus we see that all elementary functions except for the polynomials spring from the exponential function in one way or another. Extending the natural logarithm to complex arguments yields a multi-valued function, ln(z). We can then define a more general exponentiation: $\,\!\, z^w = e^{w \ln z}$ for all complex numbers $z$ and $w$. This is also a multi-valued function. The exponential laws stated above remain true if interpreted properly as statements about multi-valued functions. The exponential function maps any line in the complex plane to a logarithmic spiral in the complex plane with the center at the origin. Two special cases might be noted: when the original line is parallel to the real axis, the resulting spiral never closes in on itself; when the original line is parallel to the imaginary axis, the resulting spiral is a circle of some radius.
Plots of $z = \operatorname{Re}(e^{x+iy})$, $z = \operatorname{Im}(e^{x+iy})$, and $z = |e^{x+iy}|$.

## Computation of exp(z) for a complex z

This is fairly straightforward given the formula $\,e^{x + yi} = e^xe^{yi} = e^x(\cos(y) + i \sin(y)) = e^x\cos(y) + ie^x\sin(y).$ Note that the argument $y$ to the trigonometric functions is real.

## Computation of $\,a^b$ where both a and b are complex

This is also straightforward given the formulae: if $a = x + yi$ and $b = u + vi$, we can first convert $a$ to polar co-ordinates by finding a $\,\theta$ and an $\,r$ such that: $\,re^{{\theta}i} = r\cos\theta + i r\sin\theta = a = x + yi$ or $\, x = r\cos\theta$ and $\,y = r\sin\theta.$ Thus, $\,x^2 + y^2 = r^2$ or $\,r = \sqrt{x^2 + y^2}$ and $\,\tan\theta = \frac{y}{x}$ or $\,\theta = \operatorname{atan2}(y, x).$ Now, we have that: $\,a = re^{{\theta}i} = e^{\ln(r) + {\theta}i}$ so: $\,a^b = (e^{\ln(r) + {\theta}i})^{u + vi} = e^{(\ln(r) + {\theta}i)(u + vi)}$ The exponent is thus a simple multiplication of two complex values yielding a complex result which can then be brought back to regular cartesian format by the formula: $\,e^{p + qi} = e^p(\cos(q) + i\sin(q)) = e^p\cos(q) + ie^p\sin(q)$ where $p$ is the real part of the multiplication: $\,p = u\ln(r) - v\theta$ and $q$ is the imaginary part of the multiplication: $\,q = v\ln(r) + u\theta.$ Note that $\,x, y, u, v, r,$ $\,\theta$, $\,p$ and $\,q$ are all real values in these computations. Also note that since we compute and use $\,\ln(r)$ rather than $r$ itself, you don't have to compute the square root; instead simply compute $\,\ln(r) = \frac12\ln(x^2 + y^2)$. Watch out for potential overflow, though, and possibly scale down $x$ and $y$ by a suitable power of 2 prior to computing $\,x^2 + y^2$ if they are so large that the sum would overflow. If you instead run the risk of underflow, scale up by a suitable power of 2 prior to computing the sum of the squares.
In either case you then get the scaled version of $\,x$ (call it $\,x'$) and the scaled version of $\,y$ (call it $\,y'$), so that $\,x = x'2^s$ and $\,y = y'2^s$, where $\,2^s$ is the scaling factor. Then you get $\,\ln(r) = \frac12\ln(x'^2 + y'^2) + s\ln(2)$, where $\,x'$ and $\,y'$ are scaled so that the sum of the squares will not overflow or underflow. If $\,x$ is very large while $\,y$ is very small, so that you cannot find such a scaling factor, you will overflow anyway; the sum is then essentially equal to $\,x^2$, since $y$ is negligible, and thus you get $\,r = |x|$ and $\,\ln(r) = \ln(|x|)$ in this case. The same happens when $\,x$ is very small and $\,y$ is very large. If both are very large or both are very small, you can find a scaling factor as mentioned earlier.

Note that this function is, in general, multivalued for complex arguments. This is because rotation of a single point through any angle plus 360 degrees, or 2π radians, is the same as rotation through the angle itself. So the θ above is not unique: θk = θ + 2πk for any integer k would do as well. The convention, though, is that when $a^b$ is taken as a single value it must be that for $k = 0$, i.e. we use the smallest possible (in magnitude) value of θ, which has a magnitude of, at most, π.

## Matrices and Banach algebras

The definition of the exponential function given above can be used verbatim for every Banach algebra, and in particular for square matrices (in which case the function is called the matrix exponential).
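A minimal sketch of the matrix exponential via the same power series, for small matrices in pure Python; the truncation depth is arbitrary, and a production implementation would typically use scaling-and-squaring instead.

```python
import math

def mat_exp(A, terms=30):
    """Exponential of a small square matrix via the power series sum A**k / k!."""
    n = len(A)
    result = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]  # I
    power = [row[:] for row in result]   # running A**k, starting at A**0 = I
    fact = 1.0
    for k in range(1, terms):
        # power = power @ A
        power = [[sum(power[i][m] * A[m][j] for m in range(n)) for j in range(n)]
                 for i in range(n)]
        fact *= k
        for i in range(n):
            for j in range(n):
                result[i][j] += power[i][j] / fact
    return result

# The exponential of a diagonal matrix is the elementwise exponential
# of its diagonal entries:
E = mat_exp([[1.0, 0.0], [0.0, 2.0]])
print(E[0][0], E[1][1])  # close to e and e**2
```

For matrices A and B that commute the series reproduces the law e^(A+B) = e^A e^B from the text; for non-commuting arguments it does not, which is where the Baker-Campbell-Hausdorff formula mentioned below comes in.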
In this case we have $\,\ e^{x + y} = e^x e^y \mbox{ if } xy = yx$, $\,\ e^0 = 1$, $\,\ e^x$ is invertible with inverse $\,\ e^{-x}$, and the derivative of $\,\ e^x$ at the point $\,\ x$ is the linear map which sends $\,\ u$ to $\,\ ue^x$. In the context of non-commutative Banach algebras, such as algebras of matrices or operators on Banach or Hilbert spaces, the exponential function is often considered as a function of a real argument: $\,\ f(t) = e^{t A}$ where $A$ is a fixed element of the algebra and $t$ is any real number. This function has the important properties $\,\ f(s + t) = f(s) f(t)$, $\,\ f(0) = 1$, $\,\ f'(t) = A f(t)$.

## On Lie algebras

The exponential map sending a Lie algebra to the Lie group that gave rise to it shares the above properties, which explains the terminology.
In fact, since R is the Lie algebra of the Lie group of all positive real numbers with multiplication, the ordinary exponential function for real arguments is a special case of the Lie algebra situation. Similarly, since the Lie algebra M(n, R) of all square real matrices belongs to the Lie group of all invertible square matrices, the exponential function for square matrices is a special case of the Lie algebra exponential map. In general, when the argument of the exponential function is noncommutative, the formula is given explicitly by the Baker-Campbell-Hausdorff formula.

## Double exponential function

Main article: double exponential function

The term double exponential function can have two meanings:

• a function with two exponential terms, with different exponents
• a function $\,f(x) = a^{a^x}$; this grows even faster than an exponential function; for example, if a = 10: f(−1) = 1.26, f(0) = 10, f(1) = $10^{10}$, f(2) = $10^{100}$ = googol, ..., f(100) = googolplex.

Factorials grow faster than exponential functions, but slower than double-exponential functions.
Fermat numbers, generated by $\,F(m) = 2^{2^m} + 1$, and double Mersenne numbers, generated by $\,MM(p) = 2^{(2^p-1)}-1$, are examples of double exponential functions.

## Similar properties of e and the function e^z

The function $e^z$ is not in C(z) (i.e. it is not the quotient of two polynomials with complex coefficients). For n distinct complex numbers {a1, ..., an}, the set $\{e^{a_1 z},\dots, e^{a_n z}\}$ is linearly independent over C(z). The function $e^z$ is transcendental over C(z).

## Periodicity

For all integers n and complex x: $e^{x} = e^{x \, \pm \, 2i\pi n}$ Proof: $\begin{align}e^{x} &= e^{x}1 \\ &= e^{x}1^{\pm n} \\ &= e^{x}(e^{2i\pi})^{\pm n} \\ &= e^{x}e^{\pm 2i\pi n} \\ &= e^{x \, \pm \, 2i\pi n}\end{align}$ For all positive integers n and complex a and x: $a^{x} = e^{\ln a^{x}} = e^{x \ln a} = e^{x \ln a \, \pm \, 2i\pi n}$

## References

1. Ahlfors, Lars V. (1953). Complex analysis. McGraw-Hill Book Company, Inc.
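The periodicity identity above is easy to confirm numerically with Python's cmath (a quick sketch; the sample point z is arbitrary):

```python
import cmath

z = 0.3 + 1.7j
for n in (1, -2, 5):
    w = z + 2j * cmath.pi * n
    # e**z and e**(z + 2*pi*i*n) agree up to floating point error
    print(n, abs(cmath.exp(z) - cmath.exp(w)))
```

Each printed difference is on the order of machine precision, reflecting the imaginary period 2πi.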
http://mathoverflow.net/questions/118088/projection-of-a-point-to-a-convex-hull-in-d-dimensions/118097
## Projection of a point to a convex hull in d dimensions

Hi, I've got n points in d dimensions (typically n is around 30k-60k and d is 5 or 6). I'm using qhull to calculate the Delaunay triangulation and the convex hull of the set of points. You can assume each point was drawn from a multidimensional normal distribution. I need the triangulation for function interpolation, which works quite well once you calculate the simplex/barycentric coordinates of the query point p. The problem is how to handle points that are outside the convex hull (which occurs fairly infrequently, but does occur). I need a way to project the point onto the hull's surface and calculate where on the (d-1)-dimensional face it hit, so that I can interpolate this point (essentially clipping the point to the region of the hull). Is there an efficient algorithm out there that does this? I came across this on the web but am not clear how to apply it across the entire hull efficiently. Thanks -

## 2 Answers

The natural projection of your exterior point $a$ is to the point $b$ on the polytope that is closest to $a$, i.e., which minimizes the distance $|ab|$. This can be formulated as a quadratic programming problem, for which there are many algorithms. Quite some time ago, Gilbert worked out some methods: (1) E. G. Gilbert, "Minimizing the quadratic form on a convex set", SIAM J. Contr., vol. 4, pp. 61-79, 1966. (2) E. G. Gilbert, D. W. Johnson, and S. S. Keerthi, "A fast procedure for computing the distance between complex objects in three dimensional space", IEEE J. Robot. Automat., vol. 4, pp. 193-203, 1988 (PDF link). The first sentence of the 2nd paper above is: "An efficient and reliable algorithm for computing the Euclidean distance between a pair of convex sets in $\mathbb{R}^m$ is described." This algorithm has become known as the GJK algorithm. I doubt this is the last word on the topic.
There is a huge literature on collision detection in $\mathbb{R}^3$—which often amounts to finding the minimum distance from a point to a polyhedron—but I don't know how much of it scales gracefully to dimensions $5$ or $6$. -

I found a very simple algorithm that returns the barycentric coordinates of an arbitrary point in n dimensions. So what I've done is to find all the outermost simplexes (in qhull you just check that at least 1 entry in the neighbours list is -1), and by brute force check the distance from the query point (in each simplex's barycentric coordinates) to the simplex's projection and pick the smallest one. -
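To make the brute-force idea from the second answer concrete in the simplest case, here is a two-dimensional sketch: project the query point onto each boundary segment of a convex polygon and keep the nearest hit. The names are my own; for d = 5 or 6 at the question's scale one would use a QP/GJK-style method as in the first answer.

```python
def closest_point_on_segment(p, a, b):
    """Project point p onto segment ab and return the closest point."""
    ax, ay = a; bx, by = b; px, py = p
    dx, dy = bx - ax, by - ay
    denom = dx * dx + dy * dy
    t = 0.0 if denom == 0 else ((px - ax) * dx + (py - ay) * dy) / denom
    t = max(0.0, min(1.0, t))          # clamp to the segment
    return (ax + t * dx, ay + t * dy)

def project_to_convex_polygon(p, vertices):
    """Brute force: nearest point on the boundary of a convex polygon."""
    best, best_d2 = None, float("inf")
    for i in range(len(vertices)):
        q = closest_point_on_segment(p, vertices[i],
                                     vertices[(i + 1) % len(vertices)])
        d2 = (q[0] - p[0]) ** 2 + (q[1] - p[1]) ** 2
        if d2 < best_d2:
            best, best_d2 = q, d2
    return best

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
print(project_to_convex_polygon((2.0, 0.5), square))  # (1.0, 0.5)
```

In higher dimensions the segment projection generalizes to projecting onto each boundary simplex (itself a small constrained least-squares problem), which is exactly why the QP formulation in the first answer is the standard route.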
http://cms.math.ca/10.4153/CMB-1998-060-2
# The right regular representation of a compact right topological group

• Alan Moran

Canad. Math. Bull. 41 (1998), 463-472
http://dx.doi.org/10.4153/CMB-1998-060-2
Published: 1998-12-01 Printed: Dec 1998

## Abstract

We show that for certain compact right topological groups, $\overline{r(G)}$, the strong operator topology closure of the image of the right regular representation of $G$ in ${\cal L}({\cal H})$, where ${\cal H} = L^2(G)$, is a compact topological group, and introduce a class of representations, ${\cal R}$, which effectively transfers the representation theory of $\overline{r(G)}$ over to $G$. Amongst the groups for which this holds is the class of equicontinuous groups which have been studied by Ruppert in [10]. We use familiar examples to illustrate these features of the theory and to provide a counter-example. Finally we remark that every equicontinuous group which is at the same time a Borel group is in fact a topological group.

MSC Classifications: 22D99 - None of the above, but in this section
http://www.physicsforums.com/showthread.php?p=4156966
Physics Forums

## If a is even, prove a^(-1) is even...

1. The problem statement, all variables and given/known data

If $a$ is even, prove $a^{-1}$ is even.

2. Relevant equations

We know that every permutation in $S_n, n>1$ can be written as a product of 2-cycles. Also note that the identity can be expressed as (12)(12) for this to be possible.

3. The attempt at a solution

Suppose $a$ is a permutation made up of 2-cycles, say $a_1, \ldots, a_n$. We know that:

$a^{-1} = (a_1 \cdots a_n)^{-1} = a_{1}^{-1} \cdots a_{n}^{-1}$

Now since we can write (ab) = (ba) for any 2-cycle, we know:

$a^{-1} = (a_1 \cdots a_n)^{-1} = a_{1}^{-1} \cdots a_{n}^{-1} = a_1 \cdots a_n = a$

So if $a$ is an even permutation, it means that $|a|$ is even, say $|a|=n$. Then $|a^{-1}|$ is also even since $|a| = |a^{-1}|$ for 2-cycles. Thus if $a$ is even, then $a^{-1}$ is also even. Is this correct?

Recognitions: Homework Help Science Advisor

Quote by Zondrina [the original post, quoted in full]
It's correct if you can get rid of all that unclearly defined symbolism and verbiage that's giving me a headache. What's the definition of 'even permutation' in simple English? Please don't use symbols!

Quote by Dick [the reply above, quoted in full]

If a permutation $a$ can be expressed as a product of an even number of 2-cycles, then every possible decomposition of $a$ into a product of 2-cycles must have an even number of 2-cycles.

Recognitions: Homework Help Science Advisor

Quote by Zondrina [the definition above, quoted in full]

I'll just take the 'definition' part of that. $a$ is a product, right? Write it as a product. So $a=a_1 a_2 \cdots a_n$ where the $a$'s are transpositions (2-cycles) and $n$ is even. Now express $a^{-1}$ as a product of transpositions. Be careful about factor order.
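The fact discussed in this thread, that a permutation and its inverse always have the same parity, can be checked by brute force. The sketch below (plain Python, not from the thread) computes the sign of a permutation by counting inversions, which agrees with the transposition-count definition of parity, and compares it with the sign of the inverse:

```python
from itertools import permutations

def sign(p):
    """Sign of a permutation p of range(n): +1 if even, -1 if odd.
    Computed by counting inversions (pairs i < j with p[i] > p[j])."""
    inv = sum(1 for i in range(len(p)) for j in range(i + 1, len(p)) if p[i] > p[j])
    return -1 if inv % 2 else 1

def inverse(p):
    """Inverse permutation: the q with q[p[i]] = i for all i."""
    q = [0] * len(p)
    for i, v in enumerate(p):
        q[v] = i
    return q

# Exhaustive check over all of S_4: a and a^{-1} always share a parity.
assert all(sign(list(p)) == sign(inverse(list(p))) for p in permutations(range(4)))
```

Counting inversions sidesteps the factor-order subtlety raised in the last reply: the inversion count of $a^{-1}$ equals that of $a$, since $(i,j)$ is an inversion of $a$ exactly when $(a(j),a(i))$ is an inversion of $a^{-1}$.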
http://physics.stackexchange.com/questions/33601/how-do-i-find-work-done-by-friction-over-a-curve-represented-by-a-polynomial?answertab=votes
# How do I find work done by friction over a curve represented by a polynomial?

I am facing a problem in Physics.

Problem: What will be the work done by the frictional force over a polynomial curve if a body slides on this polynomial curve ($y = a+bx+cx^2+dx^3+\ldots$) from rest, from the height $h_1$ to height $h_2$ (where $h_1 > h_2$)?

I tried to solve this as follows: the frictional force is $F = k mg \cos\theta$, where $mg \cos\theta$ is the normal force at that point and $k$ is the coefficient of friction.

Total work done = line integral over the polynomial (dot product of $F$ and displacement).

But I do not know how to go ahead from this point.

-

3 Is there any more information available, such as initial and final speeds of the body? The problem needs to be defined better. – DarenW Aug 7 '12 at 6:39

"Total work done = line integral over the polynomial (dot product of F and displacement)." I think that is the solution already... If you're looking for total work done, and you've already found it, what more do you want to do? :) – Rody Oldenhuis Aug 7 '12 at 7:56

@DarenW he states that the body starts "from rest". Anyway, how does the velocity matter in the calculation of the work? – Rody Oldenhuis Aug 7 '12 at 7:57

1 @user1220376 The normal force you wrote down is for a body that is at rest. When a body moves on a curve the normal force is different because there is acceleration in the normal direction (as you get in circular motion). Beyond that, see Qmechanic's answer below. – Guy Gur-Ari Aug 7 '12 at 20:22

## 3 Answers

I) The easy way to calculate the work $W_{\rm fric}$ done by friction (if one also knows initial and final speeds of the body, cf. DarenW's comment) is to use energy conservation $$W_{\rm fric}~=~ -\Delta E_{\rm kin} -\Delta E_{\rm pot}.$$

II) Else one would have to set up Newton's 2nd law along the curve, which is a second-order vector-valued ODE, and solve it. - The stuff below doesn't help you with the problem--- that's Qmechanic's answer.
You're supposed to use conservation of energy to infer the work done. But you asked what is the work done by friction for sliding on a polynomial curve: For a given polynomial, you know the height is $y(x)$, so the speed, ignoring friction, would be $$v = \sqrt{2g (h_0 - y(x)) }$$ The centripetal force to keep you on the curve is $$F_c = {m v^2 \over R}$$ Where $R$ is the radius of curvature: $${1\over R(x)} = {y'' \over (1+y'^2)^{3/2}}$$ while the normal force is reduced by the cosine of the slope angle: $$N = {mg\over \sqrt{1+y'^2}}$$ The work done by friction is the coefficient of friction $\mu$ times the total of these two forces, integrated over the curve: $${dW\over \mu} = \Bigl({ m v^2 \over R} + {mg\over \sqrt{1+y'^2}}\Bigr) ds$$ Where $ds = \sqrt{1+y'^2}\, dx$, so that this is $${W\over \mu} = \int \Bigl({ m v^2 y'' \over 1+y'^2} + mg\Bigr) dx$$ Notably, the square roots cancel, and the second part, the friction for slow velocities, is just $\mu mg\Delta x$ for any curve: it's how far in $x$ you moved. - The problem is too general for being solved this way and there is a condition of contact you have not used. The normal force must never be zero for the mobile not to take off. – Shaktyai Aug 7 '12 at 14:19 @Shaktyai: I have implicitly used the condition by saying that $v = \sqrt{2g(h_0 - y) - W}$. You don't get any information about your problem from this, it is just a general interesting form of the work done by friction. You can't solve this equation, it's just an interesting fact that the slow velocity friction is proportional to the horizontal distance travelled, while the other part is not too bad for small $y'$ either. – Ron Maimon Aug 7 '12 at 19:59 I would use the conservation of energy: $E_c(1)-E_c(2)+mg(h_1-h_2)=W$. At least you know that without knowing initial and final velocities nothing much can be said. -
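The slow-velocity limit noted at the end of the long answer above, where friction work reduces to $\mu m g\,\Delta x$ for *any* curve because the $\sqrt{1+y'^2}$ factors in $N$ and $ds$ cancel, can be verified numerically. A minimal sketch (all numbers arbitrary, not from the original problem), integrating $\mu N\,ds$ over a cubic track by the midpoint rule:

```python
import math

mu, m, g = 0.3, 2.0, 9.81          # friction coefficient, mass, gravity (arbitrary values)

def yp(x):                          # slope y'(x) of an arbitrary cubic track
    return 0.5 - 0.4 * x + 0.15 * x**2

# W = ∫ mu * N ds with N = m g / sqrt(1 + y'^2) (slow-velocity limit, no
# centripetal term) and ds = sqrt(1 + y'^2) dx.
x0, x1, n = 0.0, 3.0, 100_000
h = (x1 - x0) / n
W = 0.0
for i in range(n):
    s = yp(x0 + (i + 0.5) * h)
    N = m * g / math.sqrt(1.0 + s * s)       # normal force on the slope
    ds = math.sqrt(1.0 + s * s) * h          # arc-length element
    W += mu * N * ds

print(W, mu * m * g * (x1 - x0))   # the two values agree for any curve shape
```

The square roots cancel term by term in the sum, so the agreement is exact up to floating-point rounding, regardless of the polynomial chosen.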
http://physics.stackexchange.com/questions/28834/event-horizons-without-singularities?answertab=votes
# Event horizons without singularities Someone answered this question by saying that black hole entropy conditions and no-hair theorems are asymptotic in nature -- the equations give an ideal solution which is approached quickly but never actually reached from the point of view of an observer outside the event horizon. Since then I've been wondering whether singularities are ever really created, and if not, why do we worry about naked singularities? Quick recap: to an external observer, an object falling into a black hole experiences time dilation such that it appears to take an infinite amount of time to cross the event horizon and ends up sitting frozen at the border. So here's my reasoning: the above should also apply during the formation of the black hole in the first place. The gravitational field approaches infinite density as the constituent matter approaches a central point, but to an outside observer, it takes an infinite amount of time for the singularity to form. In other words, it never happens. As I understand it, naked singularities are dismissed with hand-waving, "we'll fix it when we go quantum," but I don't see that as necessary. It seems to me that singularities never actually form, although event horizons clearly do. Does this mean that we can stop worrying? What happens in naked singularity scenarios when there is no singularity yet? - Actually, it's believed that naked singularities cannot classically form from ordinary'' matter except in extremely exceptional circumstances. No need to invoke quantum mechanics at all. – Jerry Schirmer May 23 '12 at 12:53 ## 1 Answer So here's my reasoning: the above should also apply during the formation of the black hole in the first place. That isn't true. Black holes don't start from a point in (for example) the centre of a collapsing star and grow outwards. It's actually the opposite - the event horizon forms outside the collapsing star. 
That means the matter forming the black hole is already inside the event horizon, and GR tells us that anything inside the event horizon falls into the singularity in a finite time. You never have to worry about whether the matter does or doesn't cross the event horizon in a finite time. It seems odd for the event horizon to spring into existence outside the star, but it's because large black holes are easier to make than small black holes. The Schwarzschild radius depends on the mass, but if you assume a uniform star the mass depends on the star's radius cubed. This means the average density of the black hole, i.e. the mass divided by the volume within the event horizon, is lower for large black holes than for small ones. So I think even the most sceptical would have to concede that singularities really exist. However, the question of whether naked singularities exist is another and different question. If you do the maths then GR tells you that they can be created, e.g. by charging a Reissner–Nordström black hole to its extremal value. The question is whether the maths is related to reality. - How does this picture change if one uses dynamic and isolated horizons, rather than event horizons? The latter are somewhat aesthetically unpleasing in many ways, not least of which, as you said, that they already exist in places which do not have high matter density and are extremely teleological in nature. – genneth May 23 '12 at 14:43 I'm not sure I see the connection with singularities, naked or otherwise. The main problem here is the old chestnut that nothing can fall through an event horizon because it takes infinite Schwarzschild co-ordinate time to reach the horizon. – John Rennie May 23 '12 at 15:12 I almost accepted this -- "GR tells us that anything inside the event horizon falls into the singularity in a finite time" but this is from the POV of the falling object, right? From an outside-the-horizon observer isn't it also infinite? If not, why not?
– spraff May 27 '12 at 22:24 @spraff: it's true that if you use Schwarzschild co-ordinates it takes infinite co-ordinate time for an object outside the event horizon to reach the event horizon. However, if an object starts inside the event horizon, i.e. because the event horizon forms outside it, then even in Schwarzschild co-ordinates it will reach the singularity in a finite time (though the Schwarzschild co-ordinates don't make a lot of sense inside the event horizon so you need to be careful what you mean by "time"). This is the key point. The collapsing star is inside the horizon when the horizon forms. – John Rennie May 28 '12 at 5:59 @JohnRennie, I don't think your answer really answers the question. We can separate out one event - "event horizon forms" - and think of what happens just before it. That would be some particle finally crossing the boundary of a Schwarzschild-radius sphere. So the question is: does this event happen in finite time for an outside observer, or does the particle, from his POV, only approach that boundary asymptotically? – Fixpoint Jun 22 '12 at 23:30
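The answer's point that the average density within the horizon is lower for large black holes can be made quantitative: $r_s = 2GM/c^2$ grows linearly in $M$, so the mean density $M / \frac{4}{3}\pi r_s^3$ scales as $1/M^2$. A quick check with SI values (illustrative only, not part of the original thread):

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s

def schwarzschild_radius(M):
    """Schwarzschild radius r_s = 2GM/c^2 for mass M in kg."""
    return 2.0 * G * M / c**2

def mean_density(M):
    """Mass divided by the Euclidean volume inside the horizon: scales as 1/M^2."""
    r = schwarzschild_radius(M)
    return M / (4.0 / 3.0 * math.pi * r**3)

M_sun = 1.989e30
# A solar-mass hole is denser than an atomic nucleus; a 10^9 solar-mass hole
# has a mean "density" below that of water.
print(mean_density(M_sun))         # ~ 1.8e19 kg/m^3
print(mean_density(1e9 * M_sun))   # ~ 18 kg/m^3
```

This is why the horizon of a sufficiently massive collapsing star can form while the matter is still at quite ordinary densities.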
http://physics.stackexchange.com/questions/6209/paradoxical-interaction-between-a-massive-charged-sphere-and-a-point-charge
# Paradoxical interaction between a massive charged sphere and a point charge

Suppose we have a sphere of radius $r$ and mass $m$ and a negatively charged test particle at distance $d$ from its center, $d\gg r$. If the sphere is electrically neutral, the particle will fall toward the sphere because of gravity. As we deposit electrons on the surface of the sphere, the Coulomb force will overcome gravity and the test particle will start to accelerate away. Now suppose we keep adding even more electrons to the sphere. If we have $n$ electrons, the distribution of their pairwise distances has a mean proportional to $r$, and there are $n(n-1)/2$ such pairs, so the electrostatic binding energy is about $n^2e^2/r$. If this term is included in the total mass-energy of the sphere, the gravitational force on the test particle would seem to increase quadratically with $n$, and therefore eventually overcome the linearly-increasing Coulomb force. The particle slows down, turns around, and starts falling again. This seems absurd; what is wrong with this analysis? - 1 Can you clarify why it sounds absurd? – Mark Eichenlaub Mar 2 '11 at 5:34 Just because electrons are supposed to repel each other and gravity is supposed to be a weak force. I wonder if there is another argument that gives a different answer? – Dan Brumleve Mar 2 '11 at 6:12 3 I think the other comments are right: this result is correct and is not actually paradoxical. But I agree it's counterintuitive, and I think it's a nice thought experiment. – Ted Bunn Mar 2 '11 at 14:27 2 How do you keep accumulating electrons on the sphere anyway? – Raskolnikov Mar 8 '11 at 23:42 1 Apologies if I'm just stating the obvious, but: does it seem less counterintuitive to you once you consider that the requisite $n$ for which gravity dominates goes to infinity as $G_N \to 0$?
– Matt Reece Mar 9 '11 at 1:27

## 3 Answers

The statement that the gravitational attraction will eventually dominate the Coulomb repulsion as $n$ increases is correct. You probably think the rest mass of the electrons will invoke gravitational attraction, but that part is negligible for high electron densities on the sphere. The gravitational attraction due to (the curvature of spacetime caused by) the binding energy is far greater at high densities. I don't know where the breakpoint is, but suppose I look way past that limit, i.e. an incredibly dense sphere (or shell) of electrons. If I make it dense enough, that could very well become a black hole. Note that this would be a strange black hole, since the 'mass' of such a black hole consists almost entirely of the binding energy, and not the rest masses of the electrons. From this point of view it might be easier to imagine that the gravitational attraction will eventually dominate. - 3 Black holes of a given mass have a maximum charge, which I believe may be much less than you'd get if they were composed solely of electrons. Nobody has taken this into account so far. – Peter Shor Mar 10 '11 at 16:00 How does the maximum charge grow with mass? What happens if I add more charge than allowed? Does the mass grow at least accordingly? Doesn't that fix the minimum ratio of electrical charge versus gravitational mass for physical systems? – lurscher Mar 10 '11 at 17:19 @Peter Shor: there may be various upper limits on the maximum charge of a black hole. I take one into account (below). @lurscher: yes to both. If charge is added, the mass grows at least accordingly, and yes, that fixes a maximum charge density. I suspect a black hole of pure massless charged particles would have exactly that density. If you'd add more charge, you'd essentially add extra binding energy and as a result the radius and mass of the black hole would increase.
So @Peter again: therefore there is a maximum charge given the mass of a black hole. It could be even lower for other reasons, IDK. – JBSnorro Mar 10 '11 at 18:01 As a brainstorming answer, let's calculate the binding energy in another way: suppose we have $N$ electrons on the sphere; the electrostatic energy to bring a new electron onto the sphere is $Ne^2/R$. A newly added electron only sees the net electrostatic potential $Ne/R$, and the net potential acting on far-away electrons is $N( e - m_{e} G )/R$. What is actually quadratic in $N$ is the energy required to bind together $N$ electrons on the sphere; it is $e^2/R + 2e^2/R + \cdots + (N-1)e^2/R = N(N-1)e^2/2R$. This also contributes to the gravitational weight, but there will be a maximum capacity beyond which the electrons will escape the sphere. The capacitance of a sphere is given by $4 \pi \epsilon R$. So in this case the binding energy is bounded by the capacitance of the conductor used for your sphere. - The passive gravitational mass of the electron is, by experiment, less than 9% of the theoretically expected value, as you can see in my answer on this question. The result was so unexpected that no one believed in the correctness of the result. I believe that the active gravitational mass of the electron will also be of the same order (or zero). One nice theory could collapse because of one single experiment. I am very much impressed: almost no response here fails to include black holes. Maybe that is because they attract upvotes. On this site there are already 2000 references to this term. - That is interesting; however, this thought experiment works just as well with protons, because it is the gravitational mass of the electric potential energy that is causing the dominant effect, not the gravitational rest mass. – Dan Brumleve Mar 13 '11 at 20:51 @Helder: the result is so unexpected, that many continue to not believe the correctness of the result. -1. – Ron Maimon Oct 8 '11 at 21:03
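Matt Reece's comment, that the crossover $n$ diverges as $G_N \to 0$, can be made explicit. Equating the Coulomb repulsion on a distant test electron, $k_e n e^2/d^2$, with the gravitational pull of the field-energy mass, $G \bigl(k_e n^2 e^2 / 2rc^2\bigr) m_e / d^2$, both $e$ and $k_e$ (and $d$) cancel, leaving $n_{\rm cross} = 2 r c^2 / (G m_e)$. A sketch of the arithmetic (SI constants; $r = 1\,$cm chosen arbitrarily):

```python
G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8         # speed of light, m/s
m_e = 9.109e-31     # electron mass, kg

def crossover_n(r):
    """Number of electrons at which gravity from the electrostatic binding
    energy overtakes Coulomb repulsion on a distant test electron.
    The charge e and the Coulomb constant cancel out of the force balance."""
    return 2.0 * r * c**2 / (G * m_e)

print(crossover_n(0.01))   # r = 1 cm  ->  roughly 3e55 electrons
```

The enormous value of $n_{\rm cross}$, and its $1/G$ scaling, are exactly why the effect feels paradoxical even though the analysis in the question is sound (ignoring, as Peter Shor notes, the extremal-charge bound).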
http://mathoverflow.net/questions/56148/wittens-qft-and-jones-poly-paper/56149
## Witten’s QFT and Jones Poly paper

Data: $M$ is an oriented 3-dim manifold, $E$ is a $G$-bundle over $M$, with $G$ a compact simple Lie group.

Question: How does $\pi_3(G)\cong \mathbb{Z}$ imply that there exist non-trivial gauge transformations (i.e., continuous maps $M\rightarrow G$ which are not homotopic to the trivial map)?

If anyone would like to read it from the source, check the paragraph leading up to equation 1.4. - If you like rational homotopy theory you could use the fact that rationally all these Lie groups look like products of odd spheres... Then you would be analyzing maps of algebras $\bigwedge(v_{2i+1}) \to C^*(M)$. The map which sends $v_3$ to a generator in $H^3(M)$ should be the one... This should all be okay even in the non-simply connected case since $\pi_1$ acts trivially on the higher homotopy groups for Lie groups. – Daniel Pomerleano Feb 21 2011 at 10:43

## 4 Answers

I think the important information is that $H^3(G,\mathbb{R}) \neq 0$. By the way, every $G$-bundle over $M$ is trivializable, under the conditions you have mentioned. That's why a gauge transformation can be regarded as a (smooth) map $g: M \to G$. Now you look at the behaviour of the Chern-Simons 3-form $CS(A)$ of a connection $A$ on $E$ under a gauge transformation $g$. The formula is $$CS(g^*A) = CS(A) + g^*H + \text{exact terms},$$ where $H$ is the canonical 3-form of $G$ that represents a non-trivial element of $H^3(G,\mathbb{R})$. Now you can find a gauge transformation $g$ such that $g^*H$ is not exact. In that sense you have non-trivial gauge transformations. EDIT: The comment that every $G$-bundle is trivializable is only true if $G$ is additionally assumed to be simply connected, sorry. So you either assume that (so does Witten) or you must see gauge transformations as maps $g:P \to G$, rather, and the Chern-Simons form $CS(A)$ as a form on $P$, not on $M$.
- But, isn't it true that $H^3\big(U(1),\mathbb{R}\big)\neq 0$? If so, then shouldn't we also have nontrivial gauge transformations in this case? However, we know that $k$ is not quantized for $U(1)$ theories - quantization comes directly from nontrivial gauge transformations. I'm sure that I am mixing something up here... just not sure what. – Kevin Wray Feb 21 2011 at 18:36 @klw1026 - $U(1)$ is the circle, which is $1$-dimensional, whence $H^3(U(1);\mathbb{R})=0$. – Somnath Basu Feb 21 2011 at 19:14 Sorry, I didn't mean to say $U(1)$. – Kevin Wray Feb 21 2011 at 19:19 @klw1026: $U(1)$ is not simple. It's a subtle terminology, but simple contains the assumption of being non-abelian. Also, I think we need the assumption of simple-connectedness, see my edit. – Konrad Waldorf Feb 21 2011 at 20:07 By the way, Chern-Simons theory for $G=U(1)$ is non-trivial. That's because $H^4(BU(1),\mathbb{Z})$ is non-trivial. – Konrad Waldorf Feb 21 2011 at 20:16

As Konrad Waldorf noted, in this case $G$-bundles are trivializable (since $\pi_2(G)$ is trivial). So gauge transformations are just maps $$\phi:M\rightarrow G$$ and these have a homotopy invariant that can be non-trivial, the degree of the map. One way to compute this is as $$\int_M \phi^*\omega_3$$ where $\omega_3$ is a generator of $H^3(G)$. Or, as usual for a degree, just pick an element of $G$, and count points (with sign) in the inverse image. - As Paul mentions, it's only for the case $G=SU(2)$ that this is just the degree of the map. – Peter Woit Feb 22 2011 at 3:28 @kwl1026. Gauge transformations are sections of the bundle $P\times_{\mathrm{Ad}} G$, where $P\to M$ is the principal $G$-bundle and $G$ acts on itself by conjugation. When $G$ is abelian the adjoint action is trivial so, e.g.
the $U(1)$ gauge group is always $Map(M,U(1))$ whether or not $P$ is trivial. Its homotopy classes are then $[M, U(1)]= H^1(M;Z)$, which is zero (for $M$ a closed 3-manifold) if and only if $M$ is a rational homology sphere. An elementary answer to your original question for $SU(2)=S^3$ is that obstruction theory shows that the primary obstruction gives an isomorphism $[M,S^3]\to H^3(M;Z)$. An induction using the fibration $SU(n)\to SU(n+1)\to S^{2n+1}$ and cellular approximation shows that $[M,SU(n)]=[M,SU(2)]$. Other tricks can get you there for other $G$. It is true that the difference in Chern-Simons invariants (suitably normalized) coincides with this isomorphism (composed with $H^3(M;Z)\to Z$), as indicated by Konrad. For $SU(2)$ it also agrees with the degree, as mentioned by Peter. If $P$ is non-trivial you have to work a little harder, since you are asking what is the set of homotopy classes of sections of the fiber bundle $P\times_{\mathrm{Ad}} G$. A useful reference is Donaldson's book on Floer homology. - Yes, I agree with what everyone has said. I am just trying to understand Witten's reasoning when he says that since $\pi_3(G) \cong \mathbb{Z}$ (i.e., when $\pi_3(G)$ is non-trivial) we have nontrivial gauge transformations. I understand Konrad's description (since $\pi_1(G)=\pi_2(G) =0$ there is no obstruction to a global section - giving gauge transformations as maps $M\rightarrow G$), but he is starting with a cohomological statement. So, is it that Witten is just thinking about the Hurewicz isomorphism to go to homology, then cohomology, or is it straightforward from $\pi_3(G)\neq 0$? – Kevin Wray Feb 22 2011 at 0:40 1 For any (simply connected) space $G$, the obstructions to null-homotoping $f:M\to G$ lie in $H^i(M;\pi_i(G))$. If $\pi_1(G)=0=\pi_2(G)$, then since $M$ is 3-dimensional there is only one obstruction in $H^3(M;\pi_3(G))$. So $\pi_3(G)=Z$ means the only (primary) obstruction is in $H^3(M;Z)$.
Although I'm citing obstruction theory, this is elementary since you can easily null-homotope $f$ on the 2-skeleton of $M$, so the map factors through $M/(\text{2-skeleton})=S^3$ for an appropriate cell structure. Incidentally, for $G=Spin(4)$, $\pi_3(G)=Z\oplus Z$ since $Spin(4)=SU(2)\times SU(2)$. – Paul Feb 22 2011 at 0:55 Ok, this I like! For some reason though I was thinking that the obstruction (extending over the $3$-skeleton) was an element in $H^3\big(M;\pi_2(G)\big)$, not in $H^3\big(M;\pi_3(G)\big)$. Maybe that is the obstruction to extending a section over the $G$-bundle? – Kevin Wray Feb 22 2011 at 1:30 1 This is a homotopy obstruction, not an extension obstruction. Of course they are related: think of it as a relative extension problem for $(M\times I, M\times\{0,1\})$, then use the suspension iso $H^i(M\times I, M\times\{0,1\})=H^{i-1}(M)$. This explains the shift. – Paul Feb 22 2011 at 13:10 @Paul: Yes, I realized this last night. Everything is ok now. Thanks for the explanation. – Kevin Wray Feb 23 2011 at 0:13 First consider the case $M = S^3$. Generalizing, consider the connected sum of a generic $M$ with a sphere: $M = M \# S^3$. Edit Here's what I was thinking (still not sure if it's all correct, but it seems closer to the spirit of Witten's paper than the obstruction arguments). Consider a gauge transformation $f': M \rightarrow G$. Also, consider a gauge transformation $g' : S^3 \rightarrow G$ not homotopic to the identity. Continuity allows us to change $f'$ to a map $f$ homotopic to $f'$ such that in a neighborhood $U$ of $p \in M$ the map $f$ maps to the identity of $G$. We can define a map $g$ to have similar properties in a neighborhood $V$ of $q \in S^3$. Do the connected sum around $p$ and $q$ and obtain $M \# S^3 = M$ as well as a gauge transformation $h$ on $M \# S^3 = M$ obtained by joining $f$ and $g$. Now, assume $h$ is homotopic to the identity. The homotopy taking $h$ to the identity can be used to construct a homotopy of $g$ to the identity.
(Here we use the fact that $\pi_2(G)$ is trivial to continue the homotopy over the ball removed from $S^3$.) But no such homotopy of $g$ to the identity exists. Thus, $h$ is not homotopic to the identity. Hence, $\pi_3(G) = \mathbf{Z}$ implies there exist continuous maps $M \rightarrow G$ not homotopic to the identity. - Yes, I understand the $M=S^3$ case, but I'm not following what you are implying by considering the connected sum $M\# S^3$ – Kevin Wray Feb 21 2011 at 7:23 Presumably you should be able to take a map which is constant on $M\backslash D^3$ (and on the connecting tube) and essential on $(S^3\backslash D^3,\partial D^3) \simeq (S^3,pt)$? Then we could prove this is essential too by looking at the induced map on $H_3$ or something like that. (Or maybe there's an easier direct argument proving that such a construction can't be nullhomotopic.) – Aaron Mazel-Gee Feb 21 2011 at 9:00 Yeah, at least in the simply connected case, that's probably what she means. We can just collapse the 2-skeleton of $M$ and look at the map $S^3 \to G$ given by a generator of $\pi_3$. By Hurewicz and the fact that the collapsing map induces an iso on $H_3$, the map must be non-trivial on homology. In the non-simply connected case, pass to the universal cover and note that the map $H_3(\tilde{G})\to H_3(G)$ is multiplication by some non-zero number. – Daniel Pomerleano Feb 21 2011 at 10:57 The tilde is supposed to be the universal cover, but it's late and I'm too lazy to fix the formatting... – Daniel Pomerleano Feb 21 2011 at 10:58
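The obstruction-theory argument from the comments in this thread can be collected into one display. Under the assumptions used there ($M$ a closed oriented 3-manifold, $G$ compact, simple, and simply connected, so $\pi_1(G)=\pi_2(G)=0$ and $\pi_3(G)\cong\mathbb{Z}$):

```latex
% Homotopy classes of gauge transformations, detected by the primary obstruction:
[M,\,G] \;\cong\; H^3\bigl(M;\pi_3(G)\bigr) \;\cong\; H^3(M;\mathbb{Z}) \;\cong\; \mathbb{Z},
\qquad
\text{realized by the ``winding number''}\quad
n(\phi) \;=\; \int_M \phi^*\omega_3 ,
```

where $\omega_3$ is a suitably normalized generator of $H^3(G;\mathbb{R})$, as in Peter Woit's answer; the lower obstructions vanish because $\pi_1(G)=\pi_2(G)=0$, and the last isomorphism uses the orientation of $M$.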
http://physics.stackexchange.com/questions/24582/renormalization-group-in-d-3
# renormalization group in d=3

Do we really understand why the renormalization group in $d=2+\varepsilon$ and $d=4-\varepsilon$, with $\varepsilon$ set to 1, gives "good" values for critical exponents in $d=3$? Are these cases exceptional? Is it also the case in high-energy physics (particle physics, string theory, quantum gravity)? - I would say we do for $4-\epsilon$, but $2+\epsilon$ is something I don't know. Where do you start for calculating Ising critical exponents in 2d? The 2d fixed points are special, and their perturbation theory doesn't seem to me to be an easy link to 3d. What's a reference for $2+\epsilon$? – Ron Maimon Apr 30 '12 at 4:48

## 1 Answer

S. Ginsburg (or Ginzburg; I do not know exactly how his name has been transliterated) published a paper on a true 3D renormalization group in Sov. Phys. JETP in 1975, as far as I remember. In his approach all these artificial tricks with a small $\varepsilon$ that later turns out to be set equal to 1 are absent, and a rigorous renormalization-group theory is built. The results, say for the position of the fixed point, were nevertheless the same as in Wilson's theory. My understanding is that one can use the $\varepsilon$-expansion just as a trick that leads to a correct answer (as established empirically), but those who do not like it and want to be safe should go for Ginsburg's 3D renormalization group. It is, by the way, not more complicated than the $\varepsilon$-expansion approach.
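To make the question concrete, here is the standard one-loop Wilson–Fisher result for the correlation-length exponent of the $O(n)$ model in $d=4-\varepsilon$, naively extrapolated to $\varepsilon=1$. The coefficient is the textbook first-order value; nothing in this sketch comes from the thread itself:

```python
# First-order (one-loop) Wilson-Fisher epsilon expansion for the O(n) model
# in d = 4 - eps:  nu = 1/2 + (n + 2) / (4 * (n + 8)) * eps + O(eps^2).
# Setting eps = 1 is exactly the "trick" the question asks about.

def nu_first_order(eps, n=1):
    """Correlation-length exponent nu to first order in eps."""
    return 0.5 + (n + 2) / (4 * (n + 8)) * eps

print(nu_first_order(0.0))        # d = 4: mean-field value 0.5
print(nu_first_order(1.0, n=1))   # d = 3 Ising estimate ~0.583 (measured ~0.630)
```

Even the crude first-order extrapolation lands within about 10% of the accepted 3D Ising value; the higher-order series is asymptotic and needs resummation, which is the empirical "goodness" the question refers to.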
http://motls.blogspot.com/2012/11/babar-directly-measures-time-reversal.html?m=1
# The Reference Frame Our stringy Universe from a conservative viewpoint ## Monday, November 19, 2012 ### BaBar directly measures time reversal violation Microscopic processes involving particles proceed differently if forced to go backwards One of them is Babar the Elephant. Don't ask me which one – I would guess it's the daddy. Instead, I can offer you Peter F.'s elephant who can paint an elephant with a flower. Physical Review Letters just published a paper Observation of Time Reversal Violation in the B0 Meson System (arXiv, July 2012) by the BaBar collaboration at Stanford's SLAC that directly proves the violation of T, or the time-reversal symmetry. Even though the result isn't new anymore, the publication was an opportunity for some vibrations in the media: Stanford press release The T-violation is equivalent to the CP-violation, via the CPT-theorem, as I discuss below, but comments about the discovered "microscopic arrow of time" weren't just a new sexy way to describe experiments looking for CP-violation. They have actually seen the T-violation "directly". Physicists have known what would happen in this experiment for decades; but they actually performed it for the first time now (the detailed idea behind this experiment has been around since the late 1990s when the long experiment was actually getting started). What did they do? They studied B-mesons – the same particles whose decays were recently claimed to send supersymmetry to the hospital. Mesons are particles constructed out of 1 quark and 1 antiquark, roughly speaking, and "B" means that bottom quarks and/or antiquarks are involved. The high frequency of the letter "B" in "BaBar" has the same reason. In fact, "BaBar" is $$B\bar B$$ [bee-bee-bar] as pronounced by Babar the Elephant. 
The BaBar Collaboration looked for various processes in which $$B^0$$ and $$\bar B^0$$, two "flavor eigenstates" of the neutral B-mesons, transform either to $$J/\psi K_L^0$$ (called $$B_+$$) or $$c\bar cK_S^0$$ (called $$B_-$$). And if I simplify things just a little bit, statistics applied to 468 million entangled $$B\bar B$$-pairs produced in $$\Upsilon (4S)$$ decays showed that some "asymmetries" that should be zero if T were a symmetry were decidedly nonzero. One may say that the transformation of $$B^0$$ into $$B_-$$ was detectably faster than the inverse process. We often talk about 2-sigma or 3-sigma "bumps" and 5 standard deviations is a threshold for a "discovery". So you may want to ask how many sigmas these BaBar folks actually have to claim that the microscopic laws have an inherent arrow of time. Their signal is actually 14 sigma or 17 sigma, depending on the details, so the probability of a false positive is something like $$10^{-43}$$. Compare it with the 10-percent risk of false positives tolerated in soft scientific disciplines. Note that one doesn't need an exponentially huge amount of data. If most of the errors are statistical in character, you only need about a 10 times greater dataset to go from 5 standard deviations to 15 standard deviations. Just 10 times more data and the risk of a false positive drops from $$10^{-6}$$ to $$10^{-43}$$.

Reminding you of C, P, T, CP, CPT, and all that

Our bodies (and many other things) are "almost" left-right symmetric. For a long time, physicists believed (and most laymen probably still believe) that the fundamental particles in Nature had to be left-right-symmetric as well, and behave in a left-right symmetric manner, too. And if some particles (such as amino acids) are left-right-asymmetric (look like a screw), there must exist their mirror images with exactly the same masses and other properties. This assumption seemed to be satisfied by all the phenomena known before the 1950s.
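The $\sqrt{N}$ scaling mentioned a few sentences back is easy to check numerically. This sketch uses a one-sided Gaussian tail as the significance measure; the numbers are illustrative and not the collaboration's actual systematics-aware figures:

```python
from math import erfc, sqrt

def p_value(nsigma):
    """One-sided Gaussian tail probability for an nsigma excess."""
    return 0.5 * erfc(nsigma / sqrt(2.0))

# Purely statistical significance grows like sqrt(N) with dataset size,
# so a 10x larger dataset turns 5 sigma into about 15.8 sigma.
sigma_before = 5.0
sigma_after = sigma_before * sqrt(10.0)

print(p_value(sigma_before))   # ~2.9e-7
print(p_value(sigma_after))    # ~1.3e-56: the false-positive risk collapses
```

The point is the super-exponential payoff: a modest factor in data multiplies the exponent of the false-positive probability, which is why 14–17 sigma claims are not exotic.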
But experiments in the 1950s showed that this left-right symmetry that physicists call "parity" and denote "P" is actually violated in Nature. There are particles that are spinning much like the wheels of a car going forward – but they prefer to shoot to the left side, not right side, for no apparent reason. While $$SO(3)$$ is a symmetry, $$O(3)$$ is not. Left-right-asymmetric physics (physicists say "chiral" physics, referring to the word "kheir/χειρ" for a "hand" because the left hand and the right hand differ which is why we use various hands for various right-hand rules) is easily constructed using spinors, mathematical objects generalizing vectors that may be described as "square roots of vectors". In particular, in 3+1 dimensions, or any even total dimensionality, one may write down equations for "Weyl [chiral] spinors" that will force a particle to resemble a left-handed screw, or right-handed screw, but forbid its opposite motion. And indeed, all the neutrinos are left-handed while the antineutrinos – and it's the antineutrino that you get by a decay of a neutron – are always right-handed. Nature has an inherent left-right asymmetry built into it. Note that this correlation also violates the C symmetry, or the charge-conjugation symmetry that replaces particles by antiparticles. If you act with C on a (possible) left-handed neutrino, you get a left-handed antineutrino which is not allowed. For a decade, people thought that a more sophisticated symmetry, CP, that you obtain by the simultaneous action of P and C is obeyed by Nature. If you mirror-reflect all the objects and particles and replace all particles by their antiparticles, you should get another allowed state, one that has the same mass/energy and behaves in the "same way". However, in the 1960s, even this CP-symmetry was found to be violated. 
The spectrum of allowed objects is pretty much CP-symmetric in Nature and in all Lagrangian quantum field theories we may write down, but the pairs related by CP behave differently. The complex phase in the CKM matrix is the only truly established source of CP-violation we know in Nature. New physical effects such as supersymmetry imply that new sources of CP-violation probably exist. They're probably also badly needed to obtain the high matter-antimatter asymmetry that had to exist when the Cosmos was young, before almost everything annihilated, so that we're still here. But no clear proofs of other sources of CP-violation are available at this moment although some hints of discrepancies exist. So C and P are not symmetries; they are violated even by the spectrum of allowed objects. CP is respected by the spectrum of allowed objects but the dynamics (especially mixing angles etc.) imply that it is not an exact symmetry. As you can see, the CP-violation is even weaker than the C-violation and the P-violation. But there is a combination of operations that has to be a symmetry in every relativistic quantum field theory, the CPT-symmetry. This fact was proved by Wolfgang Pauli and is called the CPT-theorem. The CPT operation does C and P at the same moment and it also performs the time reversal – it reverts the direction of the arrow of time. Note that among C, P, T, only T is an "antilinear operator", which means that \[ T\ket{\lambda \psi} = \lambda^* T\ket{\psi} \] including the asterisk which means complex conjugation (that's the reason for the prefix "anti-"). Various combinations of C, P, T are linear or antilinear depending on whether T is included. Note that the complex conjugation is needed for the time reversal already in ordinary non-relativistic quantum mechanics because the complex conjugation is the only sensible way to change $$\exp(+ipx/\hbar)$$ to $$\exp(-ipx/\hbar)$$, i.e.
to change the sign of the momentum $$p$$ – and the velocity $$v=dx/dt$$ – which is needed for particles to evolve backwards. Why the CPT has to be a symmetry in quantum field theory – and almost certainly is an exact symmetry in string theory as well? It's because it may be interpreted as the "rotation of the spacetime by 180 degrees" which is a symmetry because it belongs to the Lorentz group analytically extended to complex values of the parameters (which is allowed). Work in the momentum space and extend the time coordinate to imaginary values\[ t\to \tau = it. \] Analytically continue all fields or Green's functions and amplitudes (as functions of the momenta, to be kosher, because only as functions of the momenta, the functions are holomorphic) to the imaginary values of the time component. Now, the 4-dimensional spacetime with points $$(x,y,z,\tau)$$ becomes a Euclidean 4-dimensional space. The rotations between $$z$$ and $$\tau$$ are nothing else than $$tz$$-boosts extended to imaginary values of the "boost rapidity". By the analyticity, if the ordinary real boosts are symmetries, so must be the imaginary boosts. The imaginary rapidity is nothing else than the ordinary angle. Take the angle to be $$\pi$$. This will revert the sign of both $$\tau$$ and $$z$$ – which means that it will perform both P and T. Now, if you analytically continue it back, the effect is clearly nothing else than the flipping of signs of $$t$$ and $$z$$, so you naively get the PT transformation and prove it is a symmetry because it is just a $$\pi$$-rotation. However, you actually get a CPT transformation. Purely geometrically, by looking at the shape of the world lines, you can't distinguish PT from CPT because C only acts "internally" and doesn't change the shape of the world lines etc. 
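The boost-to-rotation continuation just described can be condensed into formulas. This is my transcription of the sketch above (the sign and angle conventions are mine):

```latex
% tz-boost with rapidity eta; continue t -> tau = i t and eta -> i theta,
% using cosh(i theta) = cos(theta), sinh(i theta) = i sin(theta):
\begin{pmatrix} \tau' \\ z' \end{pmatrix}
  = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}
    \begin{pmatrix} \tau \\ z \end{pmatrix},
\qquad
\theta = \pi \;\Longrightarrow\; (\tau, z) \mapsto (-\tau, -z).
% Continuing back to real time, (t, z) -> (-t, -z): the PT flip realized as a
% 180-degree Euclidean rotation, which acts on oriented world lines as CPT
% because reversing t also reverses the particle/antiparticle arrow.
```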
The reason why the rotation by 180 degrees is CPT and not just PT is that the reflection of T also reverts the "arrow" on all the world lines, and particles moving backwards in time are actually antiparticles. (I could formulate an equivalent argument more mathematically and convincingly, but it's enough here, I hope.) So CPT is always a symmetry. If you replace all particles by antiparticles; change their configuration to its mirror image; and invert the sign of all the velocities, then the subsequent evolution in time will look exactly like the evolution of the original system backwards in time (reflected in space as well and enjoying the inverted labels for all particles/antiparticles). Because CP isn't a symmetry and CPT is a symmetry, T – which is a composition of the CP and CPT transformations, a composition of a symmetry and a non-symmetry – clearly refuses to be a symmetry, too. That's also why they could directly detect a violation of T in the BaBar experiment.

This has nothing to do with the arrow of time in statistical physics

Thank God, even Sean Carroll knows and acknowledges this fact. I must emphasize that these effects are only large enough in special systems interacting via the weak interactions and they're weak, anyway. In reality, we know the "arrows of time" that have been discussed many times on this blog. We forget but rarely "unforget", eggs break but not unbreak, we mostly get older but not younger, the heat goes from warmer bodies to cooler ones but not vice versa, friction slows down vehicles but doesn't speed them up, and so on. Decoherence produces nearly diagonal density matrices out of pure and coherent states but the opposite process – emergence of quantum coherence out of decoherent chaos – doesn't occur. These manifestations of the "arrow of time" have nothing whatsoever to do with the violation of T that was discussed at the beginning of the article and that was experimentally demonstrated by BaBar.
The microscopic BaBar-like T-violation is neither a necessary nor a sufficient condition for the existence of the arrow of time in thermodynamics etc. Even if you had microscopically time-reversal-symmetric laws of physics, they would produce time-reversal-asymmetric macroscopic laws with friction and the second law. It's because the origin of all these "macroscopic asymmetries" is in the logical arrow of time – the fact that the probabilities of evolution between ensembles of microstates have to be averaged over initial states but summed over final states, so the initial states and final states have to be treated differently, because of the basic laws of logic and probability calculus. Again, the microscopic T-violation isn't a necessary condition for the entropy to increase and for other signs of the arrow of time in the macroscopic world around us. The opposite relationship is also wrong; the microscopic T-violation wouldn't be sufficient for the macroscopic one, either. If you tried to deny the existence of the logical arrow of time, the BaBar-like T-violation in the microscopic laws of physics wouldn't be sufficient to produce the "huge" asymmetries between the processes that go in one direction and those that (usually don't) proceed in the opposite direction simply because the microscopic T-violation is far too weak and doesn't have a "uniform arrow" that would give the future its futureness and award the past with its pastness, anyway. I plan to dedicate some article to statistical physics in the foreseeable future again. Right now, one must emphasize that the experimental detection of the T-violation is a detection of an asymmetry in the fundamental equations of physics that apply when the initial state and the final state are fully specified and known – so ignorance, the main prerequisite needed for thermodynamics to emerge, is absent.

#### 17 comments:

1.
Yuri Possible explanation of violation: t2-t1=dt because mt2-mt1=dtm et2-et1=dte m and e not constants during the evolution of the Universe. That confirms the existence of generation of particles. 2. Fred Hi Lubos - I look forward to reading what you write about stat physics and time. I watched the Feynman lecture about the direction of time with my 13 year old son and we found it fascinating. I recall that some people claim that the requirement of increasing entropy is what drives time in one direction - that it is a more fundamental explanation. I have never understood this - how does one necessarily follow from the other? Why couldn't they both be independent facts? 3. is there any relation between CPT and exchange symmetry? 4. Dilaton it was very pleasant reading. The argument involving the worldlines to see why PT and CPT can not be distinguished, I have not yet seen stated elsewhere and I like this. I look forward to reading some nice reminders about statistical physics here on TRF too :-) 5. Hi Lubos, good luck with a new article on arrow of time/statistical mechanics, but I don't think you can beat the deniers (determinists) with logic - they basically have it right in their minds, IF the universe is deterministic then it could just as well go backwards as forwards (entropy wise). You can only defeat them by arguing that the universe doesn't evolve deterministically 6. Dear James, misunderstandings in statistical physics - which may be even classical - and misunderstandings of quantum physics are a priori two different things. However, I agree with you that they're linked in the way you mention. In both cases, these people believe in some totally wrong and totally naive "realism" which means that they believe that physics should describe "how things are". But quantum physics is a tool to (more generally) "say valid statements about Nature" and indeed, this was really the case in classical statistical physics, too.
A statement about thermodynamics inevitably works with ignorance and ensembles and propositions about such things inseparably have a logical arrow of time incorporated in them. That's why I have often called Boltzmann the "forefather of quantum mechanics" and why the misunderstanding of the second law of thermodynamics and of the postulates of quantum mechanics usually co-exist. 7. Dan Hi Lubos, I enjoyed your article; thanks! I am having trouble with believing that this is an ironclad test of time reversal violation. It seems that another hypothesis is that it shows an asymmetry between matter and antimatter. For instance, the branching ratios for decay are different for matter and antimatter. Sure, the theory says that an antiparticle traveling backward in time is exactly the same as a particle traveling forward. But what if that is not true? In other words, we are assuming a symmetry between matter and antimatter, but what if no perfect theory has this symmetry? Another hypothesis, valid even if there is symmetry between matter and antimatter, is that the Upsilon has memory of the fact that it was created by matter (the accelerator beam is made of matter). How do they discard these hypotheses? I have found nothing on this. Best regards, Dan 8. Thanks, Dan, for your interest. The only generally valid relationship between matter and antimatter is the CPT-theorem that holds for all relativistic quantum field theories and almost certainly for its extensions - string theory is the only example worth mentioning. What I wrote about the antiparticles' being particles moving backwards in time was a sketch of a rigorous proof of the CPT-theorem. I would even claim that a good physicist who has never heard about the CPT-theorem could use my sketch to rediscover the CPT-theorem and write its pretty rigorous version himself. 
So theoretically, it's an indisputable, rigorously proved result that antimatter evolving backwards and in the mirror obeys the same rule as matter evolving forward without a mirror. The CPT-theorem also holds according to all the experiments and observations we know. The previous sentence is equivalent to the fact that all the effects violating the past-future symmetry boil down to the same effects and same terms in the Lagrangian as the observationally known effects displaying a difference between matter and antimatter (when the mirroring of the space is added as well). The CPT-invariant theories (mostly quantum field theories, which are enough) simply keep on describing all the known empirical data, but if the CPT-theorem were significantly wrong, e.g. as wrong as the CP-symmetry or even P-symmetry, a discrepancy would already have been discovered. 9. Dan So, first, I want to make clear, when I criticize the description of the experiment as a "direct observation of time reversal violation", I am not talking about the specific time-reversal asymmetry in the standard model. I am not assuming CPT. I don't think you can invoke a symmetry (CPT) that explains an experiment supposedly on time reversal only. You either directly measure something, or you don't. You can't invoke a correspondence between particles and antiparticles, even if it has worked well in describing all experiments. That standard would allow numerous theories consistent with all known observations (viz. Copernicus). You can't demonstrate a thing that your theory, which is incomplete, says is equivalent. You can't assume anything, like local interactions for example. Local interactions lead to singularities in field theory, as far as I am aware, and running of the coupling constants, and renormalization is great for getting physical results but has no known physical origin with local interactions. I might be completely wrong on this point.
For this experiment invoking the hypothesis that the Upsilon has memory that it was created by matter, as I have done, is a bit unfair, because it would make the measurement of time reversal asymmetry impossible. But a philosopher might argue that it is indeed unprovable. Notwithstanding this argument, I think most everybody would agree that time reversal is defined as interchanging initial and final states. Defined, not understood to be equivalent in terms of an existing incomplete theory. In the present context, I think the following are plausible hypotheses fully consistent with theory and experiment. I may not be correct. What if there were a CPT eigenvalue that was conserved, such that the entire observable universe was in a condensate of one or the other of the CPT eigenstates. Call it left. The one particle CPT eigenstates are degenerate, left and right. This quantum number determines the direction of beta decay of Cobalt 60 for instance. The universe is more symmetric than it appears Now which way does this hypothetical, unobserved quantum number transform with respect to CPT? Well it could transform either way, right, depending on the actual symmetry of the universe? What if this quantum number determines the result of the babar experiment? I think that you must be saying, there is no superior theory, written much differently in terms of the mathematics, without anything like CPT symmetry, but with a time coordinate, which is symmetric with respect to t<->-t, there is a clear symmetry under a global operation that interchanges t and -t , but in which the Upsilon has memory of the fact that it was created by matter, or has a left eigenvalue, which causes the babar result. There is no possibility. In quantum mechanics you can define an effective Hamiltonian in a reduced space that is a function of energy, H(E_i)Psi_i = E_i Psi_i, and in the time dependent version there is a memory (non-markovian) term. How about that? 
What if the observable universe is actually a subset of the full universe, such that there is such an effective Hamiltonian, for instance? 10. Dan Also, the universe is understood to be asymmetric with respect to matter/antimatter, with the observed abundance of the former. Some would argue that that unambiguously disqualifies any theory that assumes any such symmetry. 11. Sorry, Dan, your comments make no sense. You are not only "not assuming" the Standard Model: you are deliberately overlooking the evidence supporting the Standard Model - which is all the evidence. So your comments can't have anything to do with the reality. The Standard Model (like many field theories of this kind) implies that C, P, CP, T are broken (in well-defined ways) but CPT is preserved. As long as the Standard Model's predictions hold for everything, I may say the same about Nature: Nature breaks C, P, CP, and T, but it preserves CPT. You can't falsify CPT just by your desire to arrogantly and stupidly pretend that no observations exist. 12. Dan Remember we are talking about PROVING something, not showing that everything is consistent with known observations. Anyhow I thought we were having a civil conversation but clearly I am wrong. I'll go ahead and insult you back below. >So your comments can't have anything to do with the reality. You did not follow my reference to Copernicus. The geocentric theory fit all observations until it didn't. You are being an arrogant scientist, not me. If you assume that the center of the universe is the earth, then you can prove that Mars travels in a highly unphysical way as it circles the earth, through direct observation. You would be wrong. Peace. 13. Dan Just to make it CRYSTAL: Before the Copernican revolution one could have made a DIRECT OBSERVATION of Mars moving around the earth in a loopy way. Lots of folks did and they were wrong. 14. Dear Dan, it's not true that we are talking just about proving something.
I am not talking just about "proving" something all the time especially because in this situation - and in most situations in science - it's just not possible to "prove" something. It's not how science works. If you need someone else to say the same thing, watch e.g. this video: The time reversal symmetry has nothing to do with Copernicus. Have you ever tried to read your own comments and imagine how incredibly obnoxious, stupid crackpot you are in the eyes of others? 15. Dan Lubos, You can directly observe parity violation in a laboratory. You can set up the magnetic coil, collect the beta ray, observe that things are left-right asymmetric. Parity violation means setting up an experiment that is left-right symmetric and observing a left-right asymmetry in the result. Parity violation has been proven to exist. Fair statement? I think so. Table salt has been proven to be a compound of sodium and chlorine. Fair statement? I think so. So yes, in science we can prove things. If you are a philosopher, you can put all forms of proof in question. I'm not talking about that. We all agree on what salt, sodium, and chlorine are. Given that agreement, we can do an experiment that shows the former is composed of the latter. Has time reversal violation been proven to exist -- been directly observed -- been shown to happen -- like parity violation? No. Arguably, it cannot. However, the man on the street would say that reversing initial and final states is basically time reversal. Start talking to him about antiparticles actually being particles traveling backwards in time, and he'll want to know what you're selling. At the very least, he'll want you to prove it. And you can't prove that an antiparticle is the same thing as a particle traveling backwards in time, no matter their equivalence in your theory with CPT. Google scholar the terms "Memory kernel" and "chiral condensate." 16. Dan GEOCENTRIC MODEL <--> CPT SYMMETRY You are pretending to be dense, right?!? 17. 
i think that the violation of PT must to exist,independently of cpt.the asymmetry of left-right handed to the rotational invariance,implies the existence of speed of light as CONSTANT AND LIMIT TO UNIVERSE.AND THE NON-EXISTENCE OF ANTiMATTER.THE ANTIPARTICLES ARE PRODUCTS OF VIOLATION OF SYMMETRY IN THE SPACETIME CONTINUOS,PERMITING THAT THE SPACETIME IN THE QUADRIDIMENSIONAL MANIFOLDS BE LATTICES.IT IS DISCRETES AND NON TOTALLY SMOOTH IN ALL IT STRUCTURE. THE VIOLATION OF PT DOES APPEAR THE ANTIPARTICLES THAT ARE ENERGY STATES LOCALLY BUNDLEDLED IN THE VACCUM.THE ASYMMETR IN SPACETIME THAT IS SEEN IN THE SPACE AND TIME, SEPARATELY,OCCUR THAT THE TRANSFORMATIONS OF MASS INTO ENERGY AND VICEVERSA AREN"T SYMMETRICS WITH THE INCRESE OF SPEED IN RETATIONS TO OTHERS INERTIAL SYSTEMS IN RELATIVE MOTIONS THEN T is BROKEN TOGHETER TO P,CONNECTING SPACE AND TIME IN SPACETIME CONTINUOS.

## Who is Lumo?

Luboš Motl, Pilsen, Czech Republic
http://physicspages.com/2013/02/15/lie-derivative-higher-rank-tensors/
Notes on topics in science ## Lie derivative: higher-rank tensors Required math: algebra, calculus Required physics: none Reference: d’Inverno, Ray, Introducing Einstein’s Relativity (1992), Oxford Uni Press. – Section 6.2; Problem 6.1. We looked at a conceptual derivation of the Lie derivative earlier, but now we can look at another derivation which allows us to generalize the derivative to tensors of any rank. The idea of a Lie derivative is that we use a vector field ${X}$ to provide a congruence of curves along which derivatives of a tensor are calculated. If we have a tensor field defined over a manifold, then that tensor field has a particular value at each point in the manifold. Suppose for the sake of illustration we take a mixed tensor with one contravariant and one covariant index: ${T_{b}^{a}}$. Then at two neighbouring points ${P}$ and ${Q}$, this tensor has values ${T_{b}^{a}\left(P\right)}$ and ${T_{b}^{a}\left(Q\right)}$. Remember that in the case where we were finding the derivative of a vector field (a rank-one tensor field), this vector field consists of vectors that are tangent to a congruence of curves (a different congruence from the one we’re using to define the direction of the derivative – see the earlier post on the Lie derivative). In the case of a higher-rank tensor field, the tensors it contains are in a sense tangents to higher-dimensional surfaces. Thus the tensors ${T_{b}^{a}\left(P\right)}$ and ${T_{b}^{a}\left(Q\right)}$ are tangents to a surface in the ‘congruence of surfaces’ produced by the tensor field. Returning to our vector field ${X}$ that is being used to define the direction of the derivative, we can look at the curve from its congruence that passes through ${P}$ and ${Q}$ (we’re assuming that we’ve chosen the vector field in such a way that one of its congruence of curves does pass through these two points). 
Let’s say that ${Q}$ is a distance ${\delta u}$ along this curve from ${P}$, that is, if ${P}$ has coordinates ${x^{a}}$, then ${Q}$ has coordinates $\displaystyle x^{\prime a}=x^{a}+\delta uX^{a}\left(x\right) \ \ \ \ \ (1)$ If we apply this transformation to all points on the surface from the congruence of ${T_{b}^{a}}$ that passes through ${P}$, we’ll generate a new surface that passes through ${Q}$. In general, this surface will not be a surface from the congruence of ${T_{b}^{a}}$, so if we define ${T_{b}^{\prime a}}$ to be the tensor that is tangent to this new surface, then in general, ${T_{b}^{\prime a}\left(x^{\prime}\right)\ne T_{b}^{a}\left(x^{\prime}\right)}$. That is, the tensor obtained by dragging the surface along from ${P}$ to ${Q}$ by means of the transformation 1 will be different from the tensor at ${Q}$ which is defined as a tangent to the surface at ${Q}$ which is in the congruence of ${T_{b}^{a}}$. We can calculate these two tensors as follows. We treat the dragging operation from ${P}$ to ${Q}$ as a coordinate transformation, so under this transformation we have, using the rules for transforming covariant and contravariant components: $\displaystyle T_{b}^{\prime a}=\frac{\partial x^{\prime a}}{\partial x^{c}}\frac{\partial x^{d}}{\partial x^{\prime b}}T_{d}^{c}$ We need expressions for these two partial derivatives. We can get both by taking the derivative of 1. 
First, we get

$\displaystyle \frac{\partial x^{\prime a}}{\partial x^{c}}=\delta_{c}^{a}+\delta u\frac{\partial X^{a}}{\partial x^{c}}$

For the other derivative, we get

$\displaystyle \begin{aligned} x^{d} &= x^{\prime d}-\delta uX^{d}\\ \frac{\partial x^{d}}{\partial x^{\prime b}} &= \delta_{b}^{d}-\delta u\frac{\partial X^{d}}{\partial x^{\prime b}} \end{aligned}$

We can use the chain rule on the last line to get

$\displaystyle \begin{aligned} \frac{\partial x^{d}}{\partial x^{\prime b}} &= \delta_{b}^{d}-\delta u\frac{\partial X^{d}}{\partial x^{c}}\frac{\partial x^{c}}{\partial x^{\prime b}}\\ &= \delta_{b}^{d}-\delta u\frac{\partial X^{d}}{\partial x^{c}}\left(\delta_{b}^{c}-\delta u\frac{\partial X^{c}}{\partial x^{\prime b}}\right)\\ &= \delta_{b}^{d}-\delta u\frac{\partial X^{d}}{\partial x^{b}}+O\left(\delta u^{2}\right) \end{aligned}$

Using these formulas, we can get an expression for the dragged-along tensor, to order ${O\left(\delta u\right)}$:

$\displaystyle \begin{aligned} T_{b}^{\prime a} &= \left(\delta_{c}^{a}+\delta u\frac{\partial X^{a}}{\partial x^{c}}\right)\left(\delta_{b}^{d}-\delta u\frac{\partial X^{d}}{\partial x^{b}}\right)T_{d}^{c}\\ &= T_{b}^{a}+\delta u\left(\frac{\partial X^{a}}{\partial x^{c}}T_{b}^{c}-\frac{\partial X^{d}}{\partial x^{b}}T_{d}^{a}\right) \end{aligned}$

All the tensor quantities on the RHS are evaluated at point ${P}$, that is, at the unprimed coordinates ${x^{a}}$.

Now, for the original tensor field, we can get an expression for its actual value at ${Q}$ by using a Taylor expansion to first order in ${\delta u}$:

$\displaystyle T_{b}^{a}\left(x+\delta uX\right)=T_{b}^{a}\left(x\right)+\delta uX^{c}\frac{\partial T_{b}^{a}}{\partial x^{c}}$

where again all quantities on the RHS are evaluated at ${P}$. If we now take the difference between the ‘actual’ tensor at ${Q}$ and the dragged-along tensor at ${Q}$, and divide by ${\delta u}$, we get

$\displaystyle \begin{aligned} \mathfrak{L}_{X}T_{b}^{a} &\equiv \lim_{\delta u\rightarrow0}\frac{T_{b}^{a}\left(x^{\prime}\right)-T_{b}^{\prime a}\left(x^{\prime}\right)}{\delta u}\\ &= X^{c}\frac{\partial T_{b}^{a}}{\partial x^{c}}-\frac{\partial X^{a}}{\partial x^{c}}T_{b}^{c}+\frac{\partial X^{d}}{\partial x^{b}}T_{d}^{a} \end{aligned}$

This is the Lie derivative of the tensor field ${T_{b}^{a}}$ with respect to the vector field ${X}$. Note that the contravariant index ${a}$ contributes a term ${-\frac{\partial X^{a}}{\partial x^{c}}T_{b}^{c}}$ while the covariant index ${b}$ contributes ${+\frac{\partial X^{d}}{\partial x^{b}}T_{d}^{a}}$.
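As a sanity check, the formula just derived can be compared numerically with its defining difference quotient. The sketch below (pure Python; the 2-D vector field and tensor field are my own illustrative choices, not from the post) drags the tensor with the first-order coordinate transformation and compares $(T-T')/\delta u$ for small $\delta u$ against the formula:

```python
# Illustrative 2-D fields (arbitrary choices for the check)
def X(p):
    x, y = p
    return [y, x * y]

def T(p):
    x, y = p
    return [[x * y, x + y],   # T[a][b]: a contravariant, b covariant
            [x * x, y * y]]

def partial(f, p, c, h=1e-6):
    """Central-difference partial derivative of a (possibly nested) list-valued f."""
    q, r = list(p), list(p)
    q[c] += h
    r[c] -= h
    def sub(u, v):
        if isinstance(u, list):
            return [sub(a, b) for a, b in zip(u, v)]
        return (u - v) / (2 * h)
    return sub(f(q), f(r))

def lie_formula(p):
    """X^c dT^a_b/dx^c - (dX^a/dx^c) T^c_b + (dX^d/dx^b) T^a_d."""
    dT = [partial(T, p, c) for c in range(2)]   # dT[c][a][b]
    dX = [partial(X, p, c) for c in range(2)]   # dX[c][a]
    Xv, Tv = X(p), T(p)
    return [[sum(Xv[c] * dT[c][a][b] - dX[c][a] * Tv[c][b] for c in range(2))
             + sum(dX[b][d] * Tv[a][d] for d in range(2))
             for b in range(2)] for a in range(2)]

def lie_dragging(p, du=1e-6):
    """Difference quotient (T(x') - T'(x')) / du with the dragged tensor T'."""
    dX = [partial(X, p, c) for c in range(2)]
    J = [[(1.0 if a == c else 0.0) + du * dX[c][a] for c in range(2)]
         for a in range(2)]                       # J^a_c = dx'^a / dx^c
    det = J[0][0] * J[1][1] - J[0][1] * J[1][0]
    Jinv = [[J[1][1] / det, -J[0][1] / det],
            [-J[1][0] / det, J[0][0] / det]]
    Xv, Tv = X(p), T(p)
    pp = [p[0] + du * Xv[0], p[1] + du * Xv[1]]   # x' = x + du X(x)
    # dragged tensor T'^a_b(x') = J^a_c (J^{-1})^d_b T^c_d(x)
    Tdrag = [[sum(J[a][c] * Jinv[d][b] * Tv[c][d]
                  for c in range(2) for d in range(2))
              for b in range(2)] for a in range(2)]
    Tat = T(pp)
    return [[(Tat[a][b] - Tdrag[a][b]) / du for b in range(2)] for a in range(2)]

p = [0.7, 1.3]
F = lie_formula(p)
D = lie_dragging(p)
```

The two computations agree to first order in $\delta u$, which is exactly what the limit definition requires.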
It’s fairly obvious from the derivation (just plug in an additional first-order expansion of ${\frac{\partial x^{\prime a}}{\partial x^{c}}}$ for each contravariant index and a ${\frac{\partial x^{d}}{\partial x^{\prime b}}}$ for each covariant index) that each additional index will contribute a term of the same type as that shown here. That is, in general we will get

$\displaystyle \mathfrak{L}_{X}T_{cd...}^{ab...}=X^{e}\frac{\partial T_{cd...}^{ab...}}{\partial x^{e}}-\frac{\partial X^{a}}{\partial x^{e}}T_{cd...}^{eb...}-\frac{\partial X^{b}}{\partial x^{e}}T_{cd...}^{ae...}-...+\frac{\partial X^{e}}{\partial x^{c}}T_{ed...}^{ab...}+\frac{\partial X^{e}}{\partial x^{d}}T_{ce...}^{ab...}+...$

As a simple example of this formula, we can find the Lie derivative of the Kronecker delta ${\delta_{b}^{a}}$:

$\displaystyle \begin{aligned} \mathfrak{L}_{X}\delta_{b}^{a} &= X^{c}\partial_{c}\delta_{b}^{a}-\delta_{b}^{c}\partial_{c}X^{a}+\delta_{c}^{a}\partial_{b}X^{c}\\ &= 0-\partial_{b}X^{a}+\partial_{b}X^{a}\\ &= 0 \end{aligned}$

The first term is zero since ${\delta_{b}^{a}}$ has the same constant value in all coordinate systems.

(Problem 6.1 in d’Inverno asks that you also work this derivative out ‘from first principles’, which presumably means you need to work out the Lie derivative term for covariant indices, since he gives only the derivation for contravariant indices in the book. Since I’ve already done that above, that presumably answers that part of the question.)
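The vanishing of the Lie derivative of the Kronecker delta can also be checked numerically. In this sketch the vector field is an arbitrary smooth choice; since the delta is constant, only the two non-derivative terms of the general formula survive, and they cancel index by index:

```python
import math

def X(p):
    x, y = p
    return [math.sin(x) + y, x * y]   # arbitrary smooth vector field

def dX(p, c, h=1e-6):
    """Central-difference partials: returns [d X^0/dx^c, d X^1/dx^c]."""
    q, r = list(p), list(p)
    q[c] += h
    r[c] -= h
    return [(a - b) / (2 * h) for a, b in zip(X(q), X(r))]

delta = [[1.0, 0.0], [0.0, 1.0]]
p = [0.4, -1.1]
J = [dX(p, c) for c in range(2)]      # J[c][a] = d X^a / d x^c

# X^c d_c(delta) vanishes because delta is constant; the remaining two
# terms, -delta^c_b d_c X^a and +delta^a_d d_b X^d, cancel exactly:
L = [[-sum(J[c][a] * delta[c][b] for c in range(2))
      + sum(J[b][d] * delta[a][d] for d in range(2))
      for b in range(2)] for a in range(2)]
```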
The Lie derivative obeys the product rule in the sense that, for any two tensor fields ${A}$ and ${B}$:

$\displaystyle \mathfrak{L}_{X}\left(AB\right)=A\left(\mathfrak{L}_{X}B\right)+\left(\mathfrak{L}_{X}A\right)B$

This follows from the definition. The Lie derivative contains 3 types of terms. First there is the term ${X^{c}\frac{\partial T}{\partial x^{c}}}$. If ${T=AB}$, the ordinary product rule applies and we get

$\displaystyle \begin{aligned} X^{c}\frac{\partial T}{\partial x^{c}} &= X^{c}\frac{\partial\left(AB\right)}{\partial x^{c}}\\ &= X^{c}\left(A\frac{\partial B}{\partial x^{c}}+B\frac{\partial A}{\partial x^{c}}\right) \end{aligned}$

For the other two types of term, we have one term for each index in the tensor, of the form

$\displaystyle -\frac{\partial X^{a}}{\partial x^{e}}T_{cd...}^{eb...}\;\mbox{(contravariant)}$

$\displaystyle +\frac{\partial X^{e}}{\partial x^{c}}T_{ed...}^{ab...}\;\mbox{(covariant)}$

If ${T_{cd...}^{ab...}=A_{c...}^{a...}B_{d...}^{b...}}$, then these terms split into two groups. One group will have a sum over a contravariant or covariant index of ${A}$, with ${B}$ just multiplied into the result, while the other group will have a sum over an index of ${B}$ with ${A}$ multiplied into the result.
That is,

$\displaystyle -\frac{\partial X^{a}}{\partial x^{e}}T_{cd...}^{eb...}-\frac{\partial X^{b}}{\partial x^{e}}T_{cd...}^{ae...}-...+\frac{\partial X^{e}}{\partial x^{c}}T_{ed...}^{ab...}+\frac{\partial X^{e}}{\partial x^{d}}T_{ce...}^{ab...}+...$

$\displaystyle =-B_{d...}^{b...}\frac{\partial X^{a}}{\partial x^{e}}A_{c...}^{e...}-A_{c...}^{a...}\frac{\partial X^{b}}{\partial x^{e}}B_{d...}^{e...}-...+B_{d...}^{b...}\frac{\partial X^{e}}{\partial x^{c}}A_{e...}^{a...}+A_{c...}^{a...}\frac{\partial X^{e}}{\partial x^{d}}B_{e...}^{b...}+...$

Adding up the product rule result for the first term with these other terms gives the product rule for the Lie derivative as a whole.

Combining this result with the derivative of ${\delta_{b}^{a}}$ above, we get

$\displaystyle \begin{aligned} \mathfrak{L}_{X}\left(\delta_{b}^{a}T_{a}^{b}\right) &= \delta_{b}^{a}\mathfrak{L}_{X}T_{a}^{b}+T_{a}^{b}\mathfrak{L}_{X}\delta_{b}^{a}\\ &= \delta_{b}^{a}\mathfrak{L}_{X}T_{a}^{b} \end{aligned}$

However, the first quantity ${\delta_{b}^{a}T_{a}^{b}}$ is a contraction of the tensor ${T}$, in the sense that

$\displaystyle \delta_{b}^{a}T_{a}^{b}=T_{a}^{a}$

which is a scalar. So we have the result that the Lie derivative commutes with contraction:

$\displaystyle \delta_{b}^{a}\mathfrak{L}_{X}T_{a}^{b}=\mathfrak{L}_{X}\left(T_{a}^{a}\right)$

(Incidentally, equation 6.13 in d’Inverno has the indices the wrong way round on the LHS. The repeated indices have to occur in pairs with one upper and one lower, as shown here.)

By , on Friday, 15 February 2013 at 15:24, under Physics, Relativity. Tags: Lie derivatives, product rule, tensor dragging.
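The product rule above can be spot-checked numerically for a rank-$(1,1)$ product $T_{b}^{a}=A^{a}B_{b}$. In this sketch (pure Python, with arbitrarily chosen smooth fields), the Lie derivative of the product field is computed directly from the general formula and compared with $A\,(\mathfrak{L}_{X}B)+(\mathfrak{L}_{X}A)\,B$:

```python
import math

def X(p):
    x, y = p
    return [y * y, x + y]

def A(p):                      # contravariant rank-1 field (arbitrary choice)
    x, y = p
    return [math.sin(x), x * y]

def B(p):                      # covariant rank-1 field (arbitrary choice)
    x, y = p
    return [x + 2 * y, math.cos(y)]

def d(f, p, c, h=1e-6):
    """Central-difference partial derivative of a list-valued function."""
    q, r = list(p), list(p)
    q[c] += h
    r[c] -= h
    return [(u - v) / (2 * h) for u, v in zip(f(q), f(r))]

def lie_vec(p):                # (L_X A)^a = X^c d_c A^a - A^c d_c X^a
    Xv, Av = X(p), A(p)
    dA = [d(A, p, c) for c in range(2)]
    dX = [d(X, p, c) for c in range(2)]
    return [sum(Xv[c] * dA[c][a] - Av[c] * dX[c][a] for c in range(2))
            for a in range(2)]

def lie_cov(p):                # (L_X B)_b = X^c d_c B_b + B_c d_b X^c
    Xv, Bv = X(p), B(p)
    dB = [d(B, p, c) for c in range(2)]
    dX = [d(X, p, c) for c in range(2)]
    return [sum(Xv[c] * dB[c][b] + Bv[c] * dX[b][c] for c in range(2))
            for b in range(2)]

def lie_prod(p):               # L_X(A^a B_b) via the general formula
    def T(q):                  # the product field, flattened to length 4
        Av, Bv = A(q), B(q)
        return [Av[a] * Bv[b] for a in range(2) for b in range(2)]
    Xv, Av, Bv = X(p), A(p), B(p)
    dT = [d(T, p, c) for c in range(2)]
    dX = [d(X, p, c) for c in range(2)]
    return [sum(Xv[c] * dT[c][2 * a + b]
                - dX[c][a] * Av[c] * Bv[b]
                + dX[b][c] * Av[a] * Bv[c] for c in range(2))
            for a in range(2) for b in range(2)]

p = [0.3, 0.8]
lhs = lie_prod(p)
LA, LB = lie_vec(p), lie_cov(p)
Av, Bv = A(p), B(p)
rhs = [LA[a] * Bv[b] + Av[a] * LB[b] for a in range(2) for b in range(2)]
```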
http://math.stackexchange.com/questions/118717/estimating-the-difference-between-linear-operators
# Estimating the Difference Between Linear Operators If $A, \phi \in \mathsf{GL}(V)$ where $V$ is a normed vector space and we are using the operator norm on $\mathsf{GL}(V)$, I'm trying to show that $$\frac{ \|\phi^{-1}\|^{2} \| A - \phi \| }{1 - \| \phi^{-1}(A - \phi) \|} < \| \phi^{-1} \|^2 \|A - \phi\|$$ whenever $$\|A - \phi \| < \frac{1}{ \|\phi^{-1}\|}$$ I have tried various approaches to this but none of them seem to work out. About the only thing I've been able to conclude is that the denominator is nonnegative but this doesn't seem to help with the estimation. I'm sure there's some algebra trick I could employ to see this but, if so, I cannot see it and would appreciate any constructive pointers on how to proceed. If it helps, the context of this is a proof to show that the function that carries an operator to its inverse is continuous. - 4 This is obviously false as stated. – scineram Mar 11 '12 at 0:12 4 If somebody on here tells me one more time that something is "obvious", I'm going to change my username to ItsNotFreakingObvious. In fact, I'm going to do that anyway, no need to wait. If it were "obvious" to me I would not have asked the question in the first place. – ItsNotObvious Mar 11 '12 at 3:11 ## 1 Answer Observe that $\|\phi^{-1}(A-\phi)\|\leq \| \phi^{-1}\| \| A-\phi\|<1$. Thus, for $A\ne \phi$, it follows that $0<\| \phi^{-1}(A-\phi)\|<1$. Hence the opposite inequality holds by multiplying each side by the denominator and cancelling common factors. - I don't understand what you mean by "hence the opposite inequality holds by multiplying each side by the denominator and cancelling common factors". Each side of what? Which denominator? What inequality is opposite of what? – ItsNotObvious Mar 11 '12 at 3:08 The equation you are trying to prove has the term $\| \phi^{-1}\|^2\| A-\phi\|$ on both sides of the inequality. So you can cancel them if they are positive. – azarel Mar 11 '12 at 3:38
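For intuition, the one-dimensional case $V = \mathbb{R}$ (where the operator norm is just the absolute value) already shows why the inequality as stated fails and why the reversed inequality from the answer holds. The numbers below are an arbitrary choice satisfying the hypothesis $\|A-\phi\| < 1/\|\phi^{-1}\|$:

```python
# 1x1 case: GL(V) = nonzero reals, operator norm = absolute value.
phi = 1.0
A = 1.3                          # ||A - phi|| = 0.3 < 1 = 1/||phi^{-1}||
inv_norm = abs(1.0 / phi)        # ||phi^{-1}||
diff = abs(A - phi)              # ||A - phi||

lhs = inv_norm ** 2 * diff / (1.0 - abs((1.0 / phi) * (A - phi)))
rhs = inv_norm ** 2 * diff
# the denominator lies in (0, 1), so dividing by it makes the left side LARGER
```

Since $0 < 1 - \|\phi^{-1}(A-\phi)\| < 1$, dividing by it can only increase the quantity, which is the "cancelling common factors" step the answer alludes to.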
http://math.stackexchange.com/questions/63944/how-do-you-prove-the-following-inequality-concerning-complex-logarithms
# How do you prove the following inequality concerning complex Logarithms? If $0<|w|<1/2$, then $2|w|/3<|\operatorname{Log}(1+w)|$ using power series and modulus inequalities. - It doesn't appear true for $w=0$ – Ross Millikan Sep 12 '11 at 17:48 What's the power series for $\operatorname{Log}(1+w)$? (That's a hint, not a question:) – marty cohen Sep 12 '11 at 19:45 ## 1 Answer Though it is probably not the cleanest method, it is possible to use the minimum modulus principle and explicitly calculate the minimum value of $|\log(1+w)/w|$ on $|w| = 1/2$. As noted by AD., the function $$f(w) = \log(1+w)/w$$ is analytic on $|z| < 1$. Further, it is nonconstant and zero-free there, so we can appeal to the minimum modulus principle to conclude that $$|f(w)| > \min_{|z| = 1/2} |f(z)|$$ for $|w| < 1/2$. Now, $$\left|\log\!\left(1+\frac{e^{i\theta}}{2}\right)\right|^2 = \frac{1}{4}\left[\log\!\left(\frac{5}{4}+\cos\theta\right)\right]^2 + \arctan\!\left(\frac{\sin\theta}{2+\cos\theta}\right)^2$$ is symmetric about $\theta = \pi$. By beating the derivatives senseless and using the fact that $x \operatorname{arccot} x < 1$, it is possible to show that this function is strictly increasing on $(0,\pi)$. Thus $$\min_{|z| = 1/2} |f(z)| = 2 \min_{0 \leq \theta \leq \pi/2} \left|\log\!\left(1+\frac{e^{i\theta}}{2}\right)\right| = 2\log\!\left(\frac{3}{2}\right) > \frac{2}{3},$$ from which the result follows. - I am willing to include the proof that the function is strictly increasing if anyone would like. I left it out only because it's tedious. – Antonio Vargas May 16 '12 at 7:22 Yes that is the way, removed my hints since there was an error. (+1) – AD. May 16 '12 at 19:51
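A quick numerical check (not a proof) of the minimum-modulus computation in the answer: sampling $|\operatorname{Log}(1+w)|/|w|$ on the circle $|w|=1/2$ shows the minimum is attained at $w = 1/2$ with value $2\log(3/2) \approx 0.811 > 2/3$:

```python
import cmath
import math

# Sample |Log(1+w)| / |w| on the circle |w| = 1/2
N = 4000
vals = [abs(cmath.log(1 + 0.5 * cmath.exp(2j * math.pi * k / N))) / 0.5
        for k in range(N)]
m = min(vals)
target = 2 * math.log(1.5)    # attained at theta = 0, i.e. w = 1/2
```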
http://mathoverflow.net/questions/121771/second-homotopy-of-a-torus-complement-in-the-4-sphere
## Second homotopy of a torus complement in the 4-sphere

Let $T$ be the boundary of a solid torus in $S^4$. Are there any theorems or methods which would help one to compute $\pi_2(S^4 -T)$? Or to say if, e.g., it had finite rank and no torsion? More generally, suppose we have a closed manifold $X$ and we remove a submanifold $Y$ of at least codimension 2. Is there a general method for computing the homotopy groups of $X-Y$? The above case was the original example that got me to think about this but a more general result would also perhaps be helpful.

- Is your $T$ topologically equal to $S^1\times S^1$? – Wlodzimierz Holsztynski Feb 14 at 4:46 2 If $X$ is the universal covering space of $S^4 - T$, $H_2 X$ is isomorphic to $\pi_2(S^4 - T)$, this is just the fact that $X$ is simply connected, together with the Hurewicz theorem that says the first non-trivial homotopy and homology groups agree above dimension $2$, and that covering spaces preserve higher homotopy groups. This is a standard technique to compute homotopy groups, for example, Serre made great mileage out of this in his dissertation. – Ryan Budney Feb 14 at 5:35 See also Sam Lomonaco's Pacific Math Journal paper. Here he is dealing with spheres, but the crossed module structure of $\pi_2$ should be interesting to you as well. – Scott Carter Feb 15 at 1:40

## 1 Answer

The paper Martins, João Faria, The fundamental crossed module of the complement of a knotted surface. Trans. Amer. Math. Soc. 361 (2009), no. 9, 4593–4630, looks relevant. Note that for links in $\mathbb R^3$ it is standard to use the fundamental group and a Seifert-van Kampen Theorem. So for the case in question this paper uses the fundamental crossed module and a $2$-dimensional Seifert-van Kampen Theorem.
http://mathoverflow.net/questions/86344?sort=votes
## Interpolation of derivatives

If $U$ is an open interval of $\mathbb{R}$ and $f : U \to \mathbb{R}$ is an $L^2(U)$ function with second derivative $f'' \in L^2(U)$ (in the weak sense), is $f \in W^{1,1}(U)$?

EDIT: Removed false inequality.

- Sorry, but I can't seem to fix the TeX... – John H Jan 22 2012 at 1:05 1 I think linear functions might break your inequality what with the zero second derivative and non-zero first derivative. – BSteinhurst Jan 22 2012 at 2:31 1 I fixed your LaTeX, but don't immediately see how to fix your inequality – Yemon Choi Jan 22 2012 at 2:45 You are totally right, that inequality is false. – John H Jan 22 2012 at 2:51

## 1 Answer
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 22, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9361106157302856, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/tagged/type-theory+lambda-calculus
# Tagged Questions 1answer 468 views ### If $f(x)=g(x)$ for all $x:A$, why is it not true that $\lambda x{.}f(x)=\lambda x{.}g(x)$? There's something about lambda calculus that keeps me puzzled. Suppose we have $x:A\vdash f(x):P(x)$ and $x:A\vdash g(x):P(x)$ for some dependent type $P$ over a type $A$. Then it is not necessarily ... 6answers 2k views ### Learning Lambda Calculus What are some good online/free resources (tutorials, guides, exercises, and the like) for learning Lambda Calculus? Specifically, I am interested in the following areas: Untyped lambda calculus ...
http://math.stackexchange.com/questions/183601/on-the-definition-of-the-hausdorff-distance
# On the definition of the Hausdorff distance

$\newcommand{\dist}{\mathrm{dist}\,}$ Let $M$ be a metric space and $\emptyset\neq A,B\subset M$ bounded closed subsets. The Hausdorff distance is defined as $$h(A,B)=\max\{\dist(A,B),\dist(B,A)\},$$ where $$\dist(A,B)=\sup_{x\in A}\inf_{y\in B}d(x,y).$$

Why do we define $\dist(A,B)$ in this way? Isn't it possible to replace the supremum by the infimum in the definition of $\dist\!$, that is, define $$\dist_{\mathrm{new}}(A,B)=\inf_{x\in A}\inf_{y\in B}d(x,y).$$ What is the impact of this 'new' definition on the 'Hausdorff distance' given by $$h_{\mathrm{new}}(A,B)=\max\{\dist_{\mathrm{new}}(A,B),\dist_{\mathrm{new}}(B,A)\}?$$

- 2 One problem that arises if you replace the sup by an inf is that the resulting distance function fails to be a pseudometric, as the triangle property fails to hold. For example, consider the sets $A=\{1\}$, $C = \{-1\}$ and $B=\{z\in\mathbb{C} | |z|=1 \}$ in $\mathbb{C}$ with the usual topology. We then have $d(A,B)=d(B,C)=0$, but $d(A,C)=2$. – John Wordsworth Aug 17 '12 at 12:16 3 Another problem (related to @Old John's observation) is that your suggested distance is zero already if $A$ and $B$ share a point while the Hausdorff distance is a genuine distance function. – t.b. Aug 17 '12 at 12:18
Furthermore, if we take a locally compact metric space $X$, Hausdorff distance turns the set $\mathcal K(X)$ of non-empty compact subsets of $X$ into a well-behaved metric space (into which $X$ naturally isometrically embeds). Your definition could not do such a thing, because it would fail pretty much all axioms of metric except nonnegativity and symmetry. That's not to say that what you defined does not make sense (though, as suggested by t.b., the symmetrisation is unnecessary, because $\inf_{x\in A}\inf_{y\in B}d(x,y)=\inf_{(x,y)\in A\times B} d(x,y)=\inf_{y\in B}\inf_{x\in A}d(x,y)$). It does measure how “close” sets are to one another. It's just that it's not what Hausdorff distance is about. - More specifically, the suggested distance measures the minimal distance of points $(a,b)$ with $a \in A$ and $b \in B$. The symmetrization is unnecessary, since the suggested "distance" simply is $\operatorname{dist}_{\rm new}(A,B) = \inf_{a \in A, b \in B} d(a,b)$ which is already symmetric in $A$ and $B$. – t.b. Aug 17 '12 at 12:40 @t.b.: good point, I incorporated that comment into the answer. – tomasz Aug 17 '12 at 12:47 Here's my intuition on how you could have invented the Hausdorff distance. Hopefully it helps. You want a metric that tells you how far two compact sets are from being the same. And since these sets happen to be subsets of a metric space, you ought to define your metric in terms of the distances between the points of $A$ and $B$. Suppose $A\ne B$. Then either there is a point in $A$ that is not in $B$, or there is a point in $B$ that is not in $A$ (or both). Let's say there is an $a\in A$ with $a\not\in B$. How far is $a$ from being in $B$? Well, the least you have to move $a$ to get it into $B$ is the distance to the closest point in $B$, which is $\inf_{b\in B} d(a,b)$. So that's the distance from $a$ to $B$, which we might as well call $d(a,B)$. 
Observe that if $a\in B$ then $d(a,B)=0$, and because $B$ is compact, if $a\not\in B$ then $d(a,B)>0$. Now there are lots of points $a\in A$, some of which may be in $B$, and some may not. As long as there exists any $a\not\in B$, that is, any $a$ such that $d(a,B)>0$, we want to know about it. So we ought to take the supremum: $d_1(A,B)=\sup_{a\in A}d(a,B)$. This also means that every point in $A$ is at most $d_1(A,B)$ away from $B$. Finally, $d_1(A,B)$ only tells us if there is a point in $A$ that is far from $B$. We want the Hausdorff distance $d_H(A,B)$ to be large if either there is a point in $A$ far from $B$, or there is a point in $B$ far from $A$. So we define it to be the maximum of both $d_1(A,B)$ and $d_1(B,A)$. And we're done. -
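For finite point sets the two candidate "distances" can be compared directly. The sketch below implements both and reproduces the counterexample from the comments ($A=\{1\}$, $C=\{-1\}$, $B$ a finite sample of the unit circle in $\mathbb{C} \cong \mathbb{R}^2$): the inf–inf variant violates the triangle inequality and vanishes on distinct sets, while the Hausdorff distance still sees that $A$ and $B$ are far apart:

```python
import math

def d(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def dist(A, B):                 # sup_{a in A} inf_{b in B} d(a, b)
    return max(min(d(a, b) for b in B) for a in A)

def hausdorff(A, B):
    return max(dist(A, B), dist(B, A))

def dist_new(A, B):             # the inf-inf variant from the question
    return min(d(a, b) for a in A for b in B)

A = [(1.0, 0.0)]
C = [(-1.0, 0.0)]
B = [(math.cos(math.pi * k / 180), math.sin(math.pi * k / 180))
     for k in range(360)]       # 360-point sample of the unit circle

tri_lhs = dist_new(A, C)                       # = 2
tri_rhs = dist_new(A, B) + dist_new(B, C)      # ~ 0
```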
http://mathhelpforum.com/number-theory/69428-solved-prove-composite.html
# Thread: 1. ## [SOLVED] Prove Composite Problem: Show that $8^{n}+1$ is composite for all $n\geq1$. What I have so far: In general, $a^{n}+1 = (a+1)(a^{(n-1)} - a^{(n-2)} + a^{(n-3)} - \cdots + 1)$ but this only holds for $n$ odd. Is there a way to factor when $n$ is even? Or would I have to find a solution specific to $a=8$? 2. Originally Posted by star_tenshi Problem: Show that $8^{n}+1$ is composite for all $n\geq1$. What I have so far: In general, $a^{n}+1 = (a+1)(a^{(n-1)} - a^{(n-2)} + a^{(n-3)} - \cdots + 1)$ but this only holds for $n$ odd. Is there a way to factor when $n$ is even? Or would I have to find a solution specific to $a=8$? Note: You may want to wait for a more NT minded member to answer this in case of error Have you tried induction? 3. Originally Posted by star_tenshi Problem: Show that $8^{n}+1$ is composite for all $n\geq1$. What I have so far: In general, $a^{n}+1 = (a+1)(a^{(n-1)} - a^{(n-2)} + a^{(n-3)} - \cdots + 1)$ but this only holds for $n$ odd. Is there a way to factor when $n$ is even? Or would I have to find a solution specific to $a=8$? You can use that identity with n=3, if you write 8 as 2^3. 4. Originally Posted by Opalg You can use that identity with n=3, if you write 8 as 2^3. Yes, I know that, but I am looking at the case where n is even. Even if I were to use $8^{n} + 1 = (2^{3})^m + 1 = 2^{3m} + 1$, $3m$ is not always odd. 5. Make use of the identity: $a^3 + b^3 = (a+b)(a^2 -ab + b^2)$ Here imagine: $a = 2^n$ and $b = 1$ For all $n \geq 1$, we have: $\begin{aligned} 8^n + 1 & = \left(2^3\right)^n + 1 \\ & = \left(2^n\right)^3 + 1 \\ & = \left(2^n + 1\right)\left((2^n)^2 - 2^n + 1\right) \end{aligned}$ So what can we conclude? 6. OHHH! Now I see what Opalg was trying to convey. Thanks again o_O for clearing that up! We can conclude that $8^{n} + 1$ is composite. Then to generalize, any number that can be expressed as $a^{3} + b^{3}$ is composite because it can be broken down into the factors $(a+b)(a^{2}-ab+b^{2})$!
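The factorization found in the thread is easy to verify computationally, since Python handles the big integers exactly:

```python
def factor_pair(n):
    """8^n + 1 = (2^n + 1)((2^n)^2 - 2^n + 1), from a^3 + b^3 with a = 2^n, b = 1."""
    a = 2 ** n
    return a + 1, a * a - a + 1

all_composite = all(
    f * g == 8 ** n + 1 and 1 < f < 8 ** n + 1
    for n in range(1, 201)
    for f, g in [factor_pair(n)]
)
```

Both factors exceed 1 for every $n \geq 1$, so $8^n + 1$ is composite.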
http://math.stackexchange.com/questions/26611/local-min-and-differentiability-of-a-function
# Local min and differentiability of a function

Suppose there is a function $f: X \rightarrow \mathbb{R}$, where $X \subseteq \mathbb{R}^n$.

1. If $x^*$ is a local minimizer of $f$ over $X$, must $x^*$ fall into one of the following two cases:

• if $f$ is differentiable at $x^*$, then $x^*$ must be a stationary point, i.e. $\nabla{f}(x^*)=0$

• $f$ is non-differentiable at $x^*$?

In other words, are there other possible cases for a local minimizer besides the two? I seem to have seen this as a conclusion from somewhere that I now cannot recall. However, the following proposition seems to challenge this.

2. From p. 194 of Nonlinear Programming by Dimitri P. Bertsekas, where $X$ is a convex set:

Proposition 2.1.2 (Optimality Condition): (a) If $x^*$ is a local minimum of $f$ over $X$, then $$\nabla{f}(x^*)'(x - x^*) \geq 0, \quad \forall x \in X.$$ (b) If $f$ is convex over $X$, then the necessary condition of part (a) is also sufficient for $x^*$ to minimize $f$ over $X$.

If the conclusion in Part 1 is true, then if $f$ is differentiable at $x^*$, we will have $\nabla{f}(x^*)=0$. Under this logic, I was confused why we have the proposition (a) as above in the book? How to interpret the proposition geometrically? Thanks and regards!

- @Tim: I think you wanted $\nabla$ when you typed `\grad`, correct? The LaTeX code is `\nabla`. – Arturo Magidin Mar 12 '11 at 22:04 @Arturo: Thanks! Yes, and nice to learn about that. – Tim Mar 12 '11 at 22:11 I think it depends on how "local minimum" is defined. In general, a further condition is needed, e.g., that the local minimum be in the interior of $X$. Take $X = [0,1]$ and the function $f(x) = x$ to see why. – cardinal Mar 12 '11 at 22:13 @cardinal: The definition for a point to be a local minimizer of a function is that there exists a neighbourhood of the point s.t. the function value at any point in the neighbourhood is always greater than or equal to that at the point.
This implies that a local minimizer must be in the interior of the domain of the function. – Tim Mar 12 '11 at 22:22 @Tim, if your (sub)space is $X = [0,1]$, then what are the open sets over $X$? Then, what are the neighborhoods? – cardinal Mar 12 '11 at 22:46 show 2 more comments ## 1 Answer Dimitri Bertsekas is right. Your conclusion 1 cannot hold in general because when $X \subset \mathbb{R}^n$ and $x$ lies on the boundary of $X$, you are talking about a constrained minimizer. Note that $X$ must be closed for the optimization problem to make sense. Consider for instance $X=[0,1] \subset \mathbb{R}$ and $f(x) = -e^x$. The condition given in part (a) of Proposition 2.1.2 says that there exists no (first-order) feasible descent direction for $f$ at $x^*$ within $X$. Note that the convexity of $X$ is assumed here. In general, $x \in X$ is a constrained minimizer if, by definition, there is $\epsilon > 0$ such that for all $y \in X$ with $\|x-y\| < \epsilon$, you have $f(x) \leq f(y)$. This applies even if $f$ is not differentiable. It's not difficult to show that if $f$ is differentiable, this implies that $-\nabla f(x)$ lies in the cone normal to $X$ at $x$. You'll find the definition of the normal and tangent cones, e.g., in the book by Bazaraa, Sherali and Shetty or pretty much any good optimization book. When $X$ is convex, this is exactly what part (a) of Proposition 2.1.2 says. -
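The answer's example can be checked numerically. A minimal sketch, using the example $X=[0,1]$, $f(x)=-e^x$ from the answer above, with a grid standing in for the feasible set (the grid size is an arbitrary choice):

```python
import numpy as np

# f(x) = -exp(x) on X = [0, 1]: the constrained minimizer is x* = 1,
# where the gradient is NOT zero, yet Proposition 2.1.2(a) holds:
# f'(x*) (x - x*) >= 0 for every x in X.

def f(x):
    return -np.exp(x)

def fprime(x):
    return -np.exp(x)

X = np.linspace(0.0, 1.0, 101)       # grid over the feasible set [0, 1]
x_star = X[np.argmin(f(X))]          # grid minimizer; f is decreasing, so 1.0

print(x_star)                                            # 1.0
print(fprime(x_star))                                    # about -2.718, not 0
print(bool(np.all(fprime(x_star) * (X - x_star) >= 0)))  # True
```

This illustrates the point of the answer: at a boundary minimizer the gradient need not vanish, but the variational inequality of Proposition 2.1.2(a) still holds.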
http://mathhelpforum.com/calculus/40170-differential-equation-taylor-polynomial-euler-s-method.html
# Thread: 1. ## Differential Equation + Taylor Polynomial + Euler's Method Consider the differential equation $\frac{dy}{dx} = 5x^2 - \frac{6}{y-2}$ for $y \neq 2$. Let $y=f(x)$ be the particular solution to this differential equation with the initial condition $f(-1) = -4$. a. Evaluate $\frac{dy}{dx}$ and $\frac{d^2 y}{dx^2}$ at $(-1, -4)$ b. Is it possible for the x-axis to be tangent to the graph of $f$ at some point? Explain why or why not. c. Find the second-degree Taylor polynomial for $f$ about $x=-1$. d. Use Euler's method, starting at $x=-1$ with two steps of equal size, to approximate $f(0)$. Show the work that leads to your answer. Thank you to anyone who can help! 2. Originally Posted by bakanagaijin Consider the differential equation $\frac{dy}{dx} = 5x^2 - \frac{6}{y-2}$ for $y \neq 2$. Let $y=f(x)$ be the particular solution to this differential equation with the initial condition $f(-1) = -4$. a. Evaluate $\frac{dy}{dx}$ and $\frac{d^2 y}{dx^2}$ at $(-1, -4)$ b. Is it possible for the x-axis to be tangent to the graph of $f$ at some point? Explain why or why not. c. Find the second-degree Taylor polynomial for $f$ about $x=-1$. d. Use Euler's method, starting at $x=-1$ with two steps of equal size, to approximate $f(0)$. Show the work that leads to your answer. Thank you to anyone who can help! a) to find $\left( \frac{dy}{dx} \right)_{-1}$ use the formula given: $\left( \frac{dy}{dx} \right)_{-1}= 5(-1)^2 - \frac{6}{y_{-1}-2}$ to find $\left( \frac{d^2y}{dx^2} \right)_{-1}$, first find $\frac{d^2 y}{dx^2}$ by differentiating $\frac{dy}{dx}$ with respect to x: $\frac{d^2 y}{dx^2} = 10x + \frac{6}{(y-2)^2} \frac{dy}{dx}$ Then use the value of $\left( \frac{dy}{dx} \right)_{-1}$ to find $\left( \frac{d^2y}{dx^2} \right)_{-1}$. b) If the x-axis is to be tangent to the curve $f(x)$, you require that $\frac{dy}{dx}=0$ and $y=0$ for some value of x. So you must determine whether $0=5x^2 - \frac{6}{0-2}$ has any real solutions. c) This is a basic application of the formula.
$f(x) \approx f(-1)+ (x+1)f'(-1) +\frac{(x+1)^2}{2!}f''(-1)$ Use your values from part a) to do this. d) Like part c), this is just an application of Euler's formula $y_{n+1} \approx y_n + h\left( \frac{dy}{dx} \right)_{n}$ where $h = 0.5$. Bobak
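As a sanity check on the outline above, the derivatives, the Taylor polynomial, and the two Euler steps can all be computed directly. A sketch assuming only the ODE and initial condition quoted in the thread:

```python
# Numerical companion to the hints, assuming the ODE
# dy/dx = 5x^2 - 6/(y - 2) with f(-1) = -4 from the thread.

def slope(x, y):
    """dy/dx = 5x^2 - 6/(y - 2), valid for y != 2."""
    return 5 * x**2 - 6 / (y - 2)

# (a) first and second derivatives at (-1, -4)
x0, y0 = -1.0, -4.0
dy = slope(x0, y0)                       # 5 + 1 = 6.0
d2y = 10 * x0 + 6 / (y0 - 2)**2 * dy     # -10 + 1 = -9.0

# (c) second-degree Taylor polynomial about x = -1
def taylor2(x):
    return y0 + dy * (x + 1) + d2y / 2 * (x + 1)**2

# (d) Euler's method from x = -1 to x = 0, two steps of h = 0.5
x, y, h = x0, y0, 0.5
for _ in range(2):
    y += h * slope(x, y)
    x += h

print(dy, d2y, y)        # 6.0, about -9.0, 0.625
```

The two Euler steps are exactly the hand computation: $y(-0.5) \approx -4 + 0.5\cdot 6 = -1$, then $y(0) \approx -1 + 0.5\cdot 3.25 = 0.625$.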
http://crypto.stackexchange.com/questions/5066/hill-cipher-known-plaintext-attack/5093
# Hill Cipher known plaintext attack I know a plaintext–ciphertext pair of length 6 for a Hill cipher whose key is a [3x3] matrix. Based on what I've read and learned, to attack and crack keys of [n x n], if we know a plaintext–ciphertext pair of length $n^2$ then we have a set of $n$ vector equations in the unknown key entries, and this is generally solvable. However, in my case there is only a length of $6$ instead of $n^2 = 9$. My question is: how will I solve this problem and find the key? Or, since this is an obvious homework question, what should be my way of thinking? I cannot think of anything else but matrix multiplications and inverses, but they do not help me at all. - 1 You're right, as far as I can tell that should not be enough to solve for the matrix. Do you have any additional information, like undecrypted ciphertext? Or could the keyspace be limited somehow? – Ilmari Karonen Oct 16 '12 at 16:16 I have another ciphertext that I'm required to decrypt, of length 12, and as a hint I know its first letter. The hint should be useful, I believe, but I could not use it. – ecem Oct 17 '12 at 9:58 Is it correct that you are given 6+12 bytes of ciphertext, and 6+1 bytes of the corresponding plaintext, and want the 11 other bytes of plaintext? – fgrieu Oct 17 '12 at 11:37 What are the second value and the third value of the ciphertext of length 12? (Or maybe you can post all of the values? We won't give the solution, but hint towards the method.) – bob Oct 17 '12 at 12:10 2 @bob we know that "erkays" is encrypted to "devuao" and the ciphertext to be decrypted is "LQDUIRXSFFHO" and we know its first letter is S. That's all the information. – ecem Oct 17 '12 at 12:28 show 5 more comments ## 3 Answers However, there is still some information to be exploited: as you know already, the matrix used to encrypt (the secret key) needs to be invertible in order to allow decryption.
Since you basically know (say) the first column and the second column of the secret key from your plaintext/ciphertext, you derive from the invertibility of the matrix that the third one must not be collinear with any linear combination of the first two. You can also take the following view. Let us assume that you look for the decryption matrix: $$M=\pmatrix{A_0 & b_0 & c_0 \\ A_1 & b_1 & c_1 \\ A_2 & b_2 & c_2 \\} .$$ From your knowledge of the plaintext/ciphertext pairs, you can rewrite it: $$M=\pmatrix{ A_0 & \alpha_0 A_0 + \beta_0 & \gamma_0 A_0 + \delta_0 \\ A_1 & \alpha_1 A_1 + \beta_1 & \gamma_1 A_1 + \delta_1 \\ A_2 & \alpha_2 A_2 + \beta_2 & \gamma_2 A_2 + \delta_2 \\ }$$ where the $\alpha_i$, $\beta_i$, $\gamma_i$, and $\delta_i$ are known. Now for the ciphertext $z$ that you try to decrypt, it can be the case that $M\times z$ exhibits some strange properties. For instance, it could be that $M\times z$ does not depend on some or all of the $A_i$, that is: $$z_0+\alpha_i z_1 + \gamma_i z_2 = 0\pmod{26}.$$ - +1 for being first to suggest using the decryption matrix as unknown; we now are in agreement. – fgrieu Oct 19 '12 at 5:55 There are 18 plaintext and ciphertext letters $p_j$ and $c_j$, $0\le j<18$ (with $j<6$ for the "first plaintext"), all of which are known except $p_7..p_{17}$. Let $M=\pmatrix{m_{0,0}&m_{0,1}&m_{0,2}\\m_{1,0}&m_{1,1}&m_{1,2}\\m_{2,0}&m_{2,1}&m_{2,2}}$ be the key matrix (unknown, except that it is invertible). We have 18 linear equations in $\mathbb{Z}_{26}$ $$c_j=m_{j\bmod3,0}\cdot p_{3{\lfloor{j/3}\rfloor}}+m_{j\bmod3,1}\cdot p_{3{\lfloor{j/3}\rfloor}+1}+m_{j\bmod3,2}\cdot p_{3{\lfloor{j/3}\rfloor}+2}$$ with 20 unknowns, and the tiny additional information that $M$ is invertible. By an entropy argument, this can't be solved in the general case except by exploiting redundancy in the second plaintext.
One (naïve) possibility to solve the problem could be: using a computer, for each of the $26\cdot26=676$ combinations of $p_7$ and $p_8$, solve the first 9 equations (when possible and there is a unique invertible solution for $M$, which I guess is the case for most combinations of $p_7$ and $p_8$), and display the resulting second plaintext $p_6..p_{17}$; then find the most likely one using vision and brain. Update: But wait, we should use as unknown the decryption matrix $\hat M$ and the equations $$p_j=\hat m_{j\bmod3,0}\cdot c_{3{\lfloor{j/3}\rfloor}}+\hat m_{j\bmod3,1}\cdot c_{3{\lfloor{j/3}\rfloor}+1}+\hat m_{j\bmod3,2}\cdot c_{3{\lfloor{j/3}\rfloor}+2}$$ The 3 unknowns $\hat m_{0,0},\hat m_{0,1},\hat m_{0,2}$ can (likely) be found just by solving the system of 3 equations involving $p_0, p_3, p_6$. We can then compute $p_9, p_{12}, p_{15}$ without any guesswork. Similarly, any guess of any of the 8 remaining plaintext letters $p_j$ gives one extra equation involving $\hat m_{j\bmod3,0},\hat m_{j\bmod3,1},\hat m_{j\bmod3,2}$, which (likely) is enough to deduce these 3 unknowns and 3 other plaintext letters. That greatly eases tabulating the possible plaintexts, from which only a few will hopefully emerge as making sense. That could even be workable just with pencil and paper. In a computer search, the possible plaintexts could be ranked by their likelihood given the frequency of digrams in English text. Further update: It could well be that the system of equations for $\hat m_{0,0},\hat m_{0,1},\hat m_{0,2}$ has several solutions (I guess 2, 13, or 26), making the problem harder. It could also be that this system has no solution, in which case we could rule out the statement as faulty. - It is interesting to see that your update expands the steps described in my comment that you doubted above. (We're not unlucky here since the system---leading to the discovery of the first row of the decryption matrix---has exactly two solutions.)
btw: +1 for the clean description of the standard key recovery strategy (although you could have avoided the cumbersome indexing scheme: it's compact for you to write but a pain to read) – bob Oct 18 '12 at 6:51 As far as I understand from both of your answers and comments, I should try to recover the decryption matrix first, and the encryption key will be just its inverse. I'll try to solve the problem with the information you provided; when I get an answer I'll let you know again. – ecem Oct 18 '12 at 9:25 The first row of the decryption matrix is easily recovered (there are two possibilities unless I erred). fgrieu is correct that the decryption matrix cannot be determined entirely (there are two unknown parameters), but there are three-letter ciphertext blocks that will have as an image a determined value, that is, one independent of those parameters. Using this info (additional plaintext/ciphertext pairs) one might be able to infer a very low number of possible encryption matrices (far fewer than exhausting the two parameters). – bob Oct 18 '12 at 13:06 Also, do not forget that since you might know the underlying language and since you know the first, fourth, seventh, and tenth letters of the plaintext (from the two possible instances of the first row of the decryption matrix) you might be able to guess additional information. – bob Oct 18 '12 at 13:08 Well, I went and solved the puzzle using brute force and Maple. I won't spoil the actual answer, but here are some tips that ought to make the process a bit quicker. • Solving the linear system modulo 2 gives you the parity of the second and third letters of the unknown plaintext. Note that all vowels in the English alphabet map to even numbers using the standard Hill cipher encoding! • Conversely, solving the system modulo 13 tells you the fourth letter of the unknown plaintext (up to rot13). This, together with the mod 2 solution, should help you guess what the second and third letters could be.
• Even after correctly guessing the second and third letters, and ruling out any non-invertible decoding matrices, the modulo 2 solution will still not be fully determined. Thus, some brute force and/or guesswork will be required to decode the remaining letters. Hopefully, however, one of the possible plaintexts will show up as obviously more plausible than the others. P.S. One more hint: 007. - +1 "All the vowels have the same parity." – bob Oct 18 '12 at 18:25 +1 also for the idea of reducing to two problems modulo $\mathbb{Z}_2$ and $\mathbb{Z}_{13}$ – fgrieu Oct 19 '12 at 5:51
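The mod-2 / mod-13 splitting used in the last answer works because $26 = 2 \cdot 13$ (Chinese remainder theorem): a key matrix is invertible mod 26 exactly when its determinant is a unit both mod 2 and mod 13, i.e. $\gcd(\det M, 26)=1$. A small sketch of this invertibility check; the example matrix is an arbitrary illustration, not the key of this puzzle:

```python
from math import gcd

# A 3x3 Hill key over Z_26 is usable iff it is invertible mod 26,
# i.e. gcd(det M, 26) == 1: the determinant must be odd and nonzero
# mod 13 (CRT, since 26 = 2 * 13).

def det3(M):
    """Determinant of a 3x3 integer matrix (cofactor expansion)."""
    (a, b, c), (d, e, f), (g, h, i) = M
    return a * (e*i - f*h) - b * (d*i - f*g) + c * (d*h - e*g)

def invertible_mod26(M):
    return gcd(det3(M) % 26, 26) == 1

M = [[6, 24, 1],
     [13, 16, 10],
     [20, 17, 15]]          # det = 441, and 441 mod 26 = 25

print(det3(M) % 26)         # 25
print(invertible_mod26(M))  # True, since gcd(25, 26) = 1
```

The same gcd test is what rules out candidate decoding matrices in the brute-force step described above.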
http://www.physicsforums.com/showthread.php?t=226792
Physics Forums ## About polar vectors and pseudo vectors "Polar vector or real vector is a vector which possesses direction inherently (e.g. displacement); the direction of a polar vector remains unchanged irrespective of the coordinate system chosen. If the components of a polar vector are reversed, the vector obtained is different from the original vector. Components of a polar vector change sign when the coordinate system is inverted, but components of a pseudo vector do not change in such a case. A pseudo vector remains unchanged even if its components are reversed (e.g. angular velocity)." Are the above statements I read about pseudo vectors correct in general? The angular velocity vector remains unchanged even if its components are reversed? I am unclear about this; could someone kindly elaborate? You have to be more careful to distinguish between the components of a vector (with respect to a coordinate system) and the vector itself. If x, y, and z of a coordinate system are changed into -x, -y, -z, the components of a polar vector will change into their negatives, which will keep the vector unchanged. The components of a pseudovector will not change, which means that the pseudovector will now point in the opposite direction. Is torque T a polar vector or a pseudo vector? Quote by manjuvenamma Is torque T a polar vector or a pseudo vector? In general, the cross product of two vectors produces a pseudovector. A torque is defined as a cross product and therefore is a pseudovector. Don't want to split hairs, but since the word vectors covers both polar (or proper, or true) and pseudo, "the cross product of two polar vectors produces a pseudo vector".
Also, the curl of a proper vector field is a pseudo-vector field. Quote by Shooting Star Don't want to split hairs, but since the word vectors covers both polar (or proper, or true) and pseudo, "the cross product of two polar vectors produces a pseudo vector". Also, the curl of a proper vector field is a pseudo-vector field. The cross product of two pseudo vectors also produces a pseudo vector. The word vector, with no adjective, usually denotes a polar vector. Quote by pam The cross product of two pseudo vectors also produces a pseudo vector. Yes. I forgot to mention that. Thanks for reminding me. This made me remember that the cross product of a polar vector and a pseudo vector is also a polar vector. I mean, of course we all know it, but we don't think about it separately, yet use it in things like F = q(vXB), v = ΩXR etc. Thinking about these things, I ran into a sort of a puzzle, which took me some time to figure out. I thought I must share it. Suppose B and C are pseudo vectors. If A is a polar vector, then AX(BXC) = (A.C)B - (A.B)C. The LHS is a polar vector, but the RHS is a linear combination of two pseudo vectors. As we did not own a car, my mother was fond of referring to her washing machine as our "pseudo-automobile" because it was not a car but had four wheels. My baby brother grew up speaking in this way and became a Great Physicist, since he grasped the notion of pseudovector at once: it is not a vector but has three components. The rest of us were damned to lifelong confusion about axial and polar vectors. But there is Good News: forget about axial and polar vectors! You don't need them! Here is a quote: "Books on vector algebra commonly make a distinction between polar vectors and axial vectors, with a x b identified as an axial vector if a and b are polar vectors. This confusing practice of admitting two kinds of vectors is wholly unnecessary. An "axial vector" is nothing more than a bivector disguised as a vector.
So with bivectors at our disposal, we can do without axial vectors. As we have defined it, the quantity a x b is a vector in exactly the same sense that a and b are vectors." from David Hestenes, "New Foundations for Classical Mechanics" (second edition), p. 61. A somewhat longer essay on this topic is found in the attachment (axial.pdf). Some sense can be made of pseudo vectors when they are replaced with the wedge product and a duality operator, which are valid in spaces other than three dimensions as well, whereas the cross product is unique to three. Accepting an equation like this: $$\nabla \times B - \mu \epsilon (dE/dt) = \mu J$$ requires a leap of faith that I'd forgotten I'd taken, along with many others. One is required to accept that each term in this equation represents a vector in the three spatial dimensions. Fortunately all the hard work was done a long time ago in recasting these sorts of equations in a logical form. Quote by Shooting Star Yes. I forgot to mention that. Thanks for reminding me. This made me remember that the cross product of a polar vector and a pseudo vector is also a polar vector. I mean, of course we all know it, but we don't think about it separately, yet use it in things like F = q(vXB), v = ΩXR etc. Thinking about these things, I ran into a sort of a puzzle, which took me some time to figure out. I thought I must share it. Suppose B and C are pseudo vectors. If A is a polar vector, then AX(BXC) = (A.C)B - (A.B)C. The LHS is a polar vector, but the RHS is a linear combination of two pseudo vectors. I think the coefficient, A.C, is a pseudoscalar.... so (A.C)B is actually a polar vector. Similarly for the other vector.
Force and R are vectors (polar vectors), so their vector product is a pseudo vector. If you exert a force on something from right to left in front of a mirror, the force on its mirror image acts from left to right. Have a good time.
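The transformation rules discussed in this thread are easy to verify numerically. A small illustration with numpy (the particular vectors are arbitrary): under inversion every polar vector $v$ goes to $-v$, so the cross product of two inverted polar vectors is unchanged, which is exactly the pseudovector behaviour, and the BAC-CAB identity from the puzzle holds componentwise regardless of which factors are pseudovectors.

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0])      # polar vector
b = np.array([-4.0, 0.5, 2.0])     # polar vector
d = np.array([0.0, 1.0, -1.0])     # polar vector

# Cross product of two polar vectors is a pseudovector: inverting both
# inputs leaves the product unchanged.
assert np.allclose(np.cross(-a, -b), np.cross(a, b))

# The BAC-CAB identity from the puzzle, with B and C pseudovectors:
B = np.cross(a, b)
C = np.cross(a, d)
lhs = np.cross(a, np.cross(B, C))
rhs = B * np.dot(a, C) - C * np.dot(a, B)
assert np.allclose(lhs, rhs)

print("cross-product parity checks passed")
```

The identity check is purely numerical; the parity bookkeeping (pseudoscalar coefficients times pseudovectors giving polar vectors) is the conceptual resolution discussed above.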
http://mathoverflow.net/revisions/77784/list
## Return to Answer

**Revision 5 (current)**

As you noticed, it is sufficient to consider the case $$F=\bigcup_{i=1}^n F_i$$ where $F_1$, $F_2,\dots, F_n$ are disjoint convex figures with nonempty interior. Let $s$ be the mean shadow of $F$. Denote by $K$ the convex hull of $F$. Note that $$\mathop{\rm length}(\partial K\cap F)\le s.$$ We will prove the following claim: one can bite from $F$ some arbitrarily small area $a$ so that the mean shadow decreases by an amount of almost $2{\cdot}\pi{\cdot}\tfrac{a}{s}$ (say, $\ge 2{\cdot}\pi{\cdot}\tfrac{a}{s}{\cdot}(1-\tfrac{a}{s^2})$ will do). Once this is proved, we can bite away the whole of $F$ by very small pieces; when nothing remains, you add things up and get the inequality you need.

The claim is easy to prove in case $\partial F$ has a corner (i.e., the curvature of $\partial F$ has an atom at some point). Note that the total curvature of $\partial K$ is $2{\cdot}\pi$, therefore there is a point $p\in \partial K$ with curvature $\ge 2{\cdot}\pi{\cdot}\tfrac1s$. The point $p$ has to lie on $\partial F$, since $\partial K\backslash \partial F$ is a collection of line segments. Moreover, if there are no corners, we can assume that $p$ is not an endpoint of a segment of $\partial K\cap F$. Around this point one may bite from $F$ some small area $a$ so that the mean shadow decreases by almost $2{\cdot}\pi{\cdot}\tfrac{a}{s}$.

This proof is a bit technical to formalize, but it is possible. (If I had to write it down, I would rather find another one.)

**Revision 1 (initial)**

As you noticed, it is sufficient to consider the case of a disjoint union of a finite number of convex figures $F_1$, $F_2,\dots, F_n$. Choose a point $p_i\in F_i$ in each figure. Now set $F_i(t)=F_i+(1-t){\cdot}\vec p_i$, so $F_i=F_i(0)$. Note that the mean shadow of $\bigcup_i F_i(t)$ decreases in $t$ and its area stays the same until a couple of the $F_i(t)$ touch each other. Once that happens, one can exchange the touching figures for their convex hull. As a result we get $n-1$ convex figures with bigger total area and smaller mean shadow. This gives a reduction in $n$.
http://medlibrary.org/medwiki/Grain_growth
# Grain growth

Grain growth is the increase in size of grains (crystallites) in a material at high temperature. This occurs when recovery and recrystallisation are complete and further reduction in the internal energy can only be achieved by reducing the total area of grain boundary. The term is commonly used in metallurgy but is also used in reference to ceramics and minerals. ## Importance of grain growth Most materials exhibit the Hall–Petch effect at room temperature and so display a higher yield stress when the grain size is reduced. At high temperatures the opposite is true, since the open, disordered nature of grain boundaries means that vacancies can diffuse more rapidly down boundaries, leading to more rapid Coble creep. Since boundaries are regions of high energy, they make excellent sites for the nucleation of precipitates and other second phases, e.g. Mg–Si–Cu phases in some aluminium alloys or martensite platelets in steel. Depending on the second phase in question, this may have positive or negative effects. ## Rules of grain growth Grain growth has long been studied primarily by the examination of sectioned, polished and etched samples under the optical microscope. Although such methods enabled the collection of a great deal of empirical evidence, particularly with regard to factors such as temperature or composition, the lack of crystallographic information limited the development of an understanding of the fundamental physics. Nevertheless, the following became well-established features of grain growth: 1. Grain growth occurs by the movement of grain boundaries and not by coalescence (i.e.
like water droplets) 2. Boundary movement is discontinuous and the direction of motion may change suddenly. 3. One grain may grow into another grain whilst being consumed from the other side 4. The rate of consumption often increases when the grain is nearly consumed 5. A curved boundary typically migrates towards its centre of curvature 6. When grain boundaries in a single phase meet at angles other than 120 degrees, the grain included by the more acute angle will be consumed so that the angles approach 120 degrees. ## Driving force The boundary between one grain and its neighbour (grain boundary) is a defect in the crystal structure and so it is associated with a certain amount of energy. As a result, there is a thermodynamic driving force for the total area of boundary to be reduced. If the grain size increases, accompanied by a reduction in the actual number of grains per volume, then the total area of grain boundary will be reduced. The local velocity of a grain boundary at any point is proportional to the local curvature of the grain boundary, i.e.: $v=M \sigma \kappa$, where $v$ is the velocity of the grain boundary, $M$ is the grain boundary mobility (which generally depends on the orientation of the two grains), $\sigma$ is the grain boundary energy and $\kappa$ is the sum of the two principal surface curvatures. For example, the shrinkage velocity of a spherical grain embedded inside another grain is $v= M \sigma \frac{2}{R}$, where $R$ is the radius of the sphere. This driving pressure is very similar in nature to the Laplace pressure that occurs in foams. In comparison to phase transformations, the energy available to drive grain growth is very low and so it tends to occur at much slower rates and is easily slowed by the presence of second phase particles or solute atoms in the structure. ## Ideal grain growth (Figure: computer simulation of grain growth in 3D using a phase field model.)
Ideal grain growth is a special case of normal grain growth where boundary motion is driven only by local curvature of the grain boundary. It results in the reduction of the total amount of grain boundary surface area i.e. total energy of the system. Additional contributions to the driving force by e.g. elastic strains or temperature gradients are neglected. If it holds that the rate of growth is proportional to the driving force and that the driving force is proportional to the total amount of grain boundary energy, then it can be shown that the time t required to reach a given grain size is approximated by the equation $d^2 - {d_0}^2 = kt \,\!$ where d0 is the initial grain size, d is the final grain size and k is a temperature dependent constant given by an exponential law: $k = k_0 \exp \left ( \frac{-Q}{RT} \right ) \,\!$ where k0 is a constant, T is the absolute temperature and Q is the activation energy for boundary mobility. Theoretically, the activation energy for boundary mobility should equal that for self-diffusion but this is often found not to be the case. In general these equations are found to hold for ultra-high purity materials but rapidly fail when even tiny concentrations of solute are introduced. ## Normal vs abnormal Fig 1. The distinction between continuous (normal) grain growth, where all grains grow at roughly the same rate, and discontinuous (abnormal) grain growth, where one grain grows at a much greater rate than its neighbours. In common with recovery and recrystallisation, growth phenomena can be separated into continuous and discontinuous mechanisms. In the former the microstructure evolves from state A to B (in this case the grains get larger) in a uniform manner. In the latter, the changes occur heterogeneously and specific transformed and untransformed regions may be identified. 
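The ideal grain-growth law above, $d^2 - d_0^2 = kt$ with the Arrhenius constant $k = k_0 \exp(-Q/RT)$, is straightforward to evaluate. In the sketch below the values of $k_0$ and $Q$ are arbitrary placeholders for illustration, not data for any real material:

```python
import math

# Ideal grain growth: d^2 - d0^2 = k t, with k = k0 * exp(-Q / (R T)).
# k0 and Q below are hypothetical; only the qualitative trend matters.

R = 8.314           # gas constant, J/(mol K)
k0 = 1.0e-8         # hypothetical pre-exponential factor, m^2/s
Q = 2.0e5           # hypothetical activation energy, J/mol

def grain_size(d0, T, t):
    """Grain size after annealing for time t (s) at absolute temperature T (K)."""
    k = k0 * math.exp(-Q / (R * T))
    return math.sqrt(d0**2 + k * t)

d0 = 10e-6          # 10 micron initial grain size
for T in (800.0, 1000.0, 1200.0):
    print(T, grain_size(d0, T, t=3600.0))
```

The exponential factor makes growth strongly temperature-dependent: raising the annealing temperature increases the final grain size for the same hold time, consistent with the law's Arrhenius form.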
Discontinuous grain growth is characterised by a subset of grains growing at a high rate and at the expense of their neighbours and tends to result in a microstructure dominated by a few very large grains. In order for this to occur the subset of grains must possess some advantage over their competitors such as a high grain boundary energy, locally high grain boundary mobility, favourable texture or lower local second-phase particle density.

## Factors hindering growth

If there are additional factors preventing boundary movement, such as Zener pinning by particles, then the grain size may be restricted to a much lower value than might otherwise be expected. This is an important industrial mechanism in preventing the softening of materials at high temperature.

## References

• F. J. Humphreys and M. Hatherly (1995); Recrystallization and Related Annealing Phenomena, Elsevier

Licensed under the Creative Commons Attribution-ShareAlike 3.0 Unported License, using material from the Wikipedia article on "Grain growth", available in its original form here: http://en.wikipedia.org/w/index.php?title=Grain_growth
http://mathoverflow.net/questions/30529?sort=oldest
## Digraph intermediate connectivity

What's the name for a digraph such that for each pair of vertices $u,v$, there is either a path from $u$ to $v$ or a path from $v$ to $u$? I'd call it just connected, since this is an intermediate property between weak and strong connectivity, and is in fact equivalent to the existence of a path containing all vertices. However, I'm not an expert on the subject, and I have been unable to find any reference about this so far.

## 4 Answers

Just "connected" is fine. For example, Wikipedia and Tutte agree. However, since "the number of systems of terminology presently used in graph theory is equal, to a close approximation, to the number of graph theorists" (R. P. Stanley, 1986), you might want to include the definition anyway.

Thank you! I had not seen the definition on Wikipedia. It was added by an anonymous user on last Oct 13, apparently. I had asked the same question in the talk page, but nobody answered me there. The reference from Tutte is completely satisfactory for me. – Ale De Luca Jul 5 2010 at 18:13

Such a digraph is called traceable. For example, it is defined as such in the paper http://www.ams.org/bull/1976-82-01/S0002-9904-1976-13955-9/S0002-9904-1976-13955-9.pdf

Thank you very much for your answer. It seems, however, that the definition of a traceable graph in that paper requires the existence of a Hamiltonian path. This is not equivalent to what I am asking; I am not interested in whether the path visits each vertex exactly once, as long as it does visit all of them.
For example, the digraph $\{(a,b),(b,a),(a,c),(c,a),(a,d)\}$ is "connected" as I'm meaning it, but it is not traceable, as the shortest path visiting all vertices is $bacad$ (or $cabad$), thus visiting $a$ twice. The same graph is also not strongly connected, as no edge starts from $d$. – Ale De Luca Jul 5 2010 at 0:29

Such a graph is called a semiconnected graph. You can find references to it in Cormen et al. and in Diestel's book on graph theory: http://diestel-graph-theory.com/

Thanks… but again, it doesn't quite seem to be the same thing. In Diestel's book I could only find a definition for semiconnectedness of submultigraphs of infinite graphs. It is not obvious to me how this would apply to digraphs in general. – Ale De Luca Jul 5 2010 at 18:10

Another term that has been used is "unilateral" or "unilaterally connected". I don't have a particularly strong opinion in favor of this terminology, but I am slightly opposed to just calling it "connected". (I usually assume "connected" means "weakly connected" for digraphs.) However, I must admit a reference by Tutte is good. Some references for "unilateral":

• Graph theory applications by L. R. Foulds
• On minimal feedback vertex sets of a digraph by Frank Harary (I think Harary's graph theory book uses it also)

thank you very much! – Ale De Luca Jul 7 2010 at 17:16
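The property under discussion is straightforward to test directly by computing reachability from every vertex. A sketch in Python (the helper names are mine, not taken from any of the cited references):

```python
from itertools import product

def reachable(adj, s):
    """Set of vertices reachable from s by a directed path (including s)."""
    seen, stack = {s}, [s]
    while stack:
        u = stack.pop()
        for v in adj.get(u, ()):
            if v not in seen:
                seen.add(v)
                stack.append(v)
    return seen

def unilateral(adj, vertices):
    """True if for every pair u, v there is a path u -> v or v -> u."""
    reach = {v: reachable(adj, v) for v in vertices}
    return all(v in reach[u] or u in reach[v]
               for u, v in product(vertices, repeat=2))

# The digraph from the comments above: not strongly connected (no edge
# leaves d), not traceable, but it does satisfy the property asked about.
adj = {'a': ['b', 'c', 'd'], 'b': ['a'], 'c': ['a']}
print(unilateral(adj, 'abcd'))  # True
```

For large digraphs one would test this more efficiently via the condensation into strongly connected components, but the pairwise check above is enough to experiment with small examples.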
http://unapologetic.wordpress.com/2009/02/17/determining-generalized-eigenvectors/?like=1&source=post_flair&_wpnonce=fe7f791e1e
# The Unapologetic Mathematician

## Determining Generalized Eigenvectors

Our definition of a generalized eigenvector looks a lot like the one for an eigenvector. But finding them may not be as straightforward as our method for finding eigenvectors. In particular, we’re asking that the vector be in the kernel of not one transformation, but some transformation in a whole infinite list of them. We can’t just go ahead and apply each transformation, hoping to find one that sends our vector to zero. Maybe the form of this list can help us. We’re really just taking the one transformation $T-\lambda1_V$ and applying it over and over again. So we could start with $v$ and calculate $(T-\lambda1_V)v$, and then $(T-\lambda1_V)^2v$, and so on, until we end up with $(T-\lambda1_V)^nv=0$. That’s all well and good if $v$ is a generalized eigenvector, but what if it isn’t? At what point do we stop and say we’re never going to get to zero? The first thing we have to notice is that as we go along the list of transformations, the kernel never shrinks. That is, if $(T-\lambda1_V)^iv=0$ then $\displaystyle(T-\lambda1_V)^{i+1}v=(T-\lambda1_V)(T-\lambda1_V)^iv=(T-\lambda1_V)0=0$ Thus, we have an increasing sequence of subspaces $\displaystyle 0=\mathrm{Ker}\left((T-\lambda1_V)^0\right)\subseteq\mathrm{Ker}\left((T-\lambda1_V)^1\right)\subseteq\mathrm{Ker}\left((T-\lambda1_V)^2\right)\subseteq...$ Next we have to recognize that this sequence is strictly increasing until it levels out. That is, if $\mathrm{Ker}\left((T-\lambda1_V)^{i-1}\right)=\mathrm{Ker}\left((T-\lambda1_V)^i\right)$ then $\mathrm{Ker}\left((T-\lambda1_V)^i\right)=\mathrm{Ker}\left((T-\lambda1_V)^{i+1}\right)$. Then, of course, we can use an inductive argument to see that all the kernels from that point on are the same. But why does this happen? Well, let’s say that $(T-\lambda1_V)^iv=0$ implies that $(T-\lambda1_V)^{i-1}v=0$ (the other implication we’ve already taken care of above).
Then we can see that $(T-\lambda1_V)^{i+1}v=0$ implies that $(T-\lambda1_V)^iv=0$ by rewriting them: $\displaystyle\begin{aligned}(T-\lambda1_V)^{i+1}v&=0\\\Rightarrow(T-\lambda1_V)^i(T-\lambda1_V)v&=0\\\Rightarrow(T-\lambda1_V)^{i-1}(T-\lambda1_V)v&=0\\\Rightarrow(T-\lambda1_V)^iv&=0\end{aligned}$ where we have used the assumed implication between the second and third lines. So once this sequence stops growing at one step, it never grows again. That is, if the kernels ever stabilize the sequence looks like $\displaystyle\begin{aligned}...\subsetneq\mathrm{Ker}\left((T-\lambda1_V)^{n-1}\right)\subsetneq\mathrm{Ker}\left((T-\lambda1_V)^n\right)\\=\mathrm{Ker}\left((T-\lambda1_V)^{n+1}\right)=...\end{aligned}$ and $\mathrm{Ker}\left((T-\lambda1_V)^n\right)$ is as large as it ever gets. So does the sequence top out? Of course, it has to! Indeed, each step before it stops growing raises the dimension by at least one, so if it didn’t stop by step $d=\dim(V)$ it would get bigger than the space $V$ itself, which is absurd because these are all subspaces of $V$. So $\mathrm{Ker}\left((T-\lambda1_V)^d\right)$ is the largest of these kernels. Where does this leave us? We’ve established that if $v$ is in the kernel of any power of $T-\lambda1_V$ it will be in $\mathrm{Ker}\left((T-\lambda1_V)^d\right)$. Thus the space of generalized eigenvectors with eigenvalue $\lambda$ is exactly this kernel. Now finding generalized eigenvectors is just as easy as finding eigenvectors.

Posted by John Armstrong | Algebra, Linear Algebra
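The stabilizing chain of kernels translates directly into a computation: the generalized eigenspace for $\lambda$ is the kernel of $(T-\lambda1_V)^d$ with $d=\dim(V)$. A sketch with numpy, using a hypothetical example matrix:

```python
import numpy as np

# Hypothetical example: lambda = 2 is the only eigenvalue, the ordinary
# eigenspace is 1-dimensional, but the generalized eigenspace is all of R^3.
A = np.array([[2.0, 1.0, 0.0],
              [0.0, 2.0, 1.0],
              [0.0, 0.0, 2.0]])
lam = 2.0
d = A.shape[0]

def kernel_dim(M, tol=1e-10):
    """Dimension of the kernel of M, counted via its singular values."""
    s = np.linalg.svd(M, compute_uv=False)
    return int(np.sum(s < tol))

# The chain Ker((A - lam I)^i) grows strictly until it stabilizes, and by
# the argument above it has stabilized by i = d = dim V.
B = A - lam * np.eye(d)
dims = [kernel_dim(np.linalg.matrix_power(B, i)) for i in range(d + 1)]
print(dims)  # [0, 1, 2, 3]: strictly increasing, then capped at d
```

So in practice one never needs to test an infinite list of powers: computing the kernel of the $d$-th power alone already gives the full generalized eigenspace.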
http://en.wikipedia.org/wiki/Molecular_energy_state
# Energy level

A quantum mechanical system or particle that is bound—that is, confined spatially—can only take on certain discrete values of energy. This contrasts with classical particles, which can have any energy. These discrete values are called energy levels. The term is commonly used for the energy levels of electrons in atoms or molecules, which are bound by the electric field of the nucleus, but can also refer to energy levels of nuclei or vibrational or rotational energy levels in molecules. The energy spectrum of a system with such discrete energy levels is said to be quantized. If the potential energy is set to zero at infinite distance from the atomic nucleus or molecule, the usual convention, then bound electron states have negative potential energy. If more than one quantum mechanical state is at the same energy, the energy levels are "degenerate". They are then called degenerate energy levels.

## Explanation

Quantized energy levels result from the relation between a particle's energy and its wavelength. For a confined particle such as an electron in an atom, the wave function has the form of standing waves. Only stationary states with energies corresponding to integral numbers of wavelengths can exist; for other states the waves interfere destructively, resulting in zero probability density. Elementary examples that show mathematically how energy levels come about are the particle in a box and the quantum harmonic oscillator.

## History

The first evidence of quantized energy levels in atoms was the observation of spectral lines in light from the sun in the early 1800s by Joseph von Fraunhofer and William Hyde Wollaston. The theoretical explanation for energy levels was discovered in 1913 by Danish physicist Niels Bohr in the Bohr theory of the atom.
The modern quantum mechanical theory, based on the Schrödinger equation, was advanced by Erwin Schrödinger and Werner Heisenberg in 1926.

## Atoms

### Intrinsic energy levels

#### Orbital state energy level - atom/ion with nucleus + one electron

Assume there is one electron in a given atomic orbital in a hydrogen-like atom (ion). The energy of its state is mainly determined by the electrostatic interaction of the (negative) electron with the (positive) nucleus. The energy levels of an electron around a nucleus are given by

$E_n = - h c R_{\infty} \frac{Z^2}{n^2}$

(typically between 1 eV and $10^3$ eV), where $R_{\infty}$ is the Rydberg constant, $Z$ is the atomic number, $n$ is the principal quantum number, $h$ is Planck's constant, and $c$ is the speed of light. For hydrogen-like atoms (ions) only, the Rydberg levels depend only on the principal quantum number $n$.

#### Multi-electron atoms include electrostatic interaction of an electron with other electrons

If there is more than one electron around the atom, electron–electron interactions raise the energy level. These interactions are often neglected if the spatial overlap of the electron wavefunctions is low. For multi-electron atoms, interactions between electrons cause the preceding equation to be no longer accurate as stated simply with $Z$ as the atomic number. Instead an approximate correction may be used, where $Z$ is substituted with an effective nuclear charge symbolized as $Z_{\rm eff}$:

$E_{n,l} = - h c R_{\infty} \frac{{Z_{\rm eff}}^2}{n^2}$

In such cases, the orbital types (determined by the azimuthal quantum number $l$) as well as their levels within the molecule affect $Z_{\rm eff}$ and therefore also affect the various atomic electron energy levels. The Aufbau principle of filling an atom with electrons for an electron configuration takes these differing energy levels into account.
For filling an atom with electrons in the ground state, the lowest energy levels are filled first, consistent with the Pauli exclusion principle, the Aufbau principle, and Hund's rule.

#### Fine structure splitting

Fine structure arises from relativistic kinetic energy corrections, spin–orbit coupling (an electrodynamic interaction between the electron's spin and motion and the nucleus's electric field) and the Darwin term (contact-term interaction of s-shell electrons inside the nucleus). These affect the levels by a typical order of magnitude of $10^{-3}$ eV.

#### Hyperfine structure

Main article: Hyperfine structure

This even finer structure is due to the electron spin–nuclear spin interaction, resulting in a typical change in the energy levels by a typical order of magnitude of $10^{-4}$ eV.

### Energy levels due to external fields

#### Zeeman effect

Main article: Zeeman effect

There is an interaction energy associated with the magnetic dipole moment, $\boldsymbol{\mu}_L$, arising from the electronic orbital angular momentum, $\mathbf{L}$, given by

$U = -\boldsymbol{\mu}_L\cdot\mathbf{B}$

with

$\boldsymbol{\mu}_L = \dfrac{e\hbar}{2m}\mathbf{L} = \mu_B\mathbf{L}$.

The magnetic moment arising from the electron spin must also be taken into account. Due to relativistic effects (Dirac equation), there is a magnetic moment, $\boldsymbol{\mu}_S$, arising from the electron spin,

$\boldsymbol{\mu}_S = -\mu_Bg_S\mathbf{S}$,

with $g_S$ the electron-spin g-factor (about 2), resulting in a total magnetic moment, $\boldsymbol{\mu}$,

$\boldsymbol{\mu} = \boldsymbol{\mu}_L + \boldsymbol{\mu}_S$.

The interaction energy therefore becomes

$U_B = -\boldsymbol{\mu}\cdot\mathbf{B} = \mu_B B (M_L + g_SM_S)$.

#### Stark effect

Main article: Stark effect

## Molecules

Chemical bonds between atoms in a molecule form because they make the situation more stable for the involved atoms, which generally means the total energy for the involved atoms in the molecule is lower than if the atoms were not so bonded.
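The Zeeman interaction energy above, $U_B = \mu_B B (M_L + g_S M_S)$, gives level shifts that are easy to evaluate numerically. A quick sketch in Python (SI constants, with $g_S$ taken as exactly 2 as in the text):

```python
# Zeeman shift of a level: U_B = mu_B * B * (M_L + g_S * M_S),
# converted from joules to electronvolts.
MU_B = 9.2740100783e-24   # Bohr magneton, J/T
EV = 1.602176634e-19      # joules per electronvolt
G_S = 2.0                 # electron-spin g-factor, approximately 2

def zeeman_shift_eV(B, M_L, M_S):
    """Energy shift (eV) of a level with quantum numbers M_L, M_S in field B (T)."""
    return MU_B * B * (M_L + G_S * M_S) / EV

# In a 1 T field, the M_S = +1/2 and -1/2 levels split by roughly 1e-4 eV,
# consistent with the orders of magnitude quoted for fine structure above:
split = zeeman_shift_eV(1.0, 0, 0.5) - zeeman_shift_eV(1.0, 0, -0.5)
print(split)
```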
As separate atoms approach each other to covalently bond, their orbitals affect each other's energy levels to form bonding and anti-bonding molecular orbitals. The energy level of the bonding orbitals is lower, and the energy level of the anti-bonding orbitals is higher. For the bond in the molecule to be stable, the covalent bonding electrons occupy the lower-energy bonding orbital, which may be signified by such symbols as σ or π depending on the situation. Corresponding anti-bonding orbitals can be signified by adding an asterisk to get σ* or π* orbitals. A non-bonding orbital in a molecule is an orbital with electrons in outer shells which do not participate in bonding, and its energy level is the same as that of the constituent atom. Such orbitals can be designated as n orbitals. The electrons in an n orbital are typically lone pairs. In polyatomic molecules, different vibrational and rotational energy levels are also involved. Roughly speaking, a molecular energy state, i.e. an eigenstate of the molecular Hamiltonian, is the sum of the electronic, vibrational, rotational, nuclear, and translational components, such that:

$E = E_\mathrm{electronic}+E_\mathrm{vibrational}+E_\mathrm{rotational}+E_\mathrm{nuclear}+E_\mathrm{translational}$

where $E_\mathrm{electronic}$ is an eigenvalue of the electronic molecular Hamiltonian (the value of the potential energy surface) at the equilibrium geometry of the molecule. The molecular energy levels are labelled by the molecular term symbols. The specific energies of these components vary with the specific energy state and the substance. In molecular physics and quantum chemistry, an energy level is a quantized energy of a bound quantum mechanical state.

### Energy level diagrams

There are various types of energy level diagrams for bonds between atoms in a molecule: for example, molecular orbital diagrams, Jablonski diagrams, and Franck–Condon diagrams.
## Energy level transitions

Figure: an increase in energy level from $E_1$ to $E_2$ results from absorption of a photon (shown as a red squiggly arrow) of energy $h\nu$; a decrease from $E_2$ to $E_1$ results in emission of such a photon.

Electrons in atoms and molecules can change (make transitions in) energy levels by emitting or absorbing a photon (of electromagnetic radiation) whose energy must be exactly equal to the energy difference between the two levels. Electrons can also be completely removed from a chemical species such as an atom, molecule, or ion. Complete removal of an electron from an atom can be a form of ionization, which is effectively moving the electron out to an orbital with an infinite principal quantum number, in effect so far away as to have practically no more effect on the remaining atom (ion). For various types of atoms, there are 1st, 2nd, 3rd, etc. ionization energies for removing 1, 2, 3, etc. of the highest-energy electrons from the atom in a ground state. Energy in corresponding opposite quantities can also be released, often in the form of photon energy, when electrons are added to positively charged ions or sometimes atoms. Molecules can also undergo transitions in their vibrational or rotational energy levels. Energy level transitions can also be nonradiative, meaning emission or absorption of a photon is not involved. If an atom, ion, or molecule is at the lowest possible energy level, it and its electrons are said to be in the ground state. If it is at a higher energy level, it is said to be excited, or any electrons that have higher energy than the ground state are excited. Such a species can be excited to a higher energy level by absorbing a photon whose energy is equal to the energy difference between the levels. Conversely, an excited species can go to a lower energy level by spontaneously emitting a photon equal to the energy difference.
A photon's energy is equal to Planck's constant (h) times its frequency (ν) and thus is proportional to its frequency, or inversely to its wavelength. Correspondingly, many kinds of spectroscopy are based on detecting the frequency or wavelength of the emitted or absorbed photons to provide information on the material analyzed, including information on the energy levels and electronic structure of materials obtained by analyzing the spectrum. An asterisk is commonly used to designate an excited state. An electron transition in a molecule's bond from a ground state to an excited state may have a designation such as σ→σ*, π→π*, or n→π* meaning excitation of an electron from a σ bonding to a σ antibonding orbital, from a π bonding to a π antibonding orbital, or from an n non-bonding to a π antibonding orbital. Reverse electron transitions for all these types of excited molecules are also possible to return to their ground states, which can be designated as σ*→σ, π*→π, or π*→n. A transition in an energy level of an electron in a molecule may be combined with a vibrational transition and called a vibronic transition. A vibrational and rotational transition may be combined by rovibrational coupling. In rovibronic coupling, electron transitions are simultaneously combined with both vibrational and rotational transitions. Photons involved in transitions may have energy of various ranges in the electromagnetic spectrum, such as X-ray, ultraviolet, visible light, infrared, or microwave radiation, depending on the type of transition. In a very general way, energy level differences between electronic states are larger, differences between vibrational levels are intermediate, and differences between rotational levels are smaller, although there can be overlap. Translational energy levels are practically continuous and can be calculated as kinetic energy using classical mechanics. 
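Combining the Rydberg-level formula from earlier with $E = h\nu$ gives the wavelength of any hydrogen-like transition. A short sketch:

```python
# Photon wavelength for a hydrogen transition, from E_n = -h c R_inf / n^2
# and E_photon = h * nu = h * c / wavelength.
H = 6.62607015e-34       # Planck constant, J s
C = 2.99792458e8         # speed of light, m/s
RYD = 1.0973731568e7     # Rydberg constant, 1/m

def level_energy(n, Z=1):
    """Energy (J) of level n in a hydrogen-like atom of atomic number Z."""
    return -H * C * RYD * Z ** 2 / n ** 2

def emission_wavelength(n_hi, n_lo):
    """Wavelength (m) of the photon emitted in the n_hi -> n_lo transition."""
    return H * C / (level_energy(n_hi) - level_energy(n_lo))

# The n = 3 -> 2 drop is the visible red H-alpha line (~656 nm):
print(emission_wavelength(3, 2) * 1e9)
```

As the text notes, this is exactly the calculation that underlies emission and absorption spectroscopy: measure the wavelength, recover the spacing of the levels.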
Higher temperature causes fluid atoms and molecules to move faster, increasing their translational energy, and can thermally excite (nonradiatively) polyatomic molecules to a higher average distribution of vibrational and rotational energy levels. This means that as temperature rises, translational, vibrational, and rotational contributions to molecular heat capacity let molecules absorb heat and hold more internal energy. Conduction of heat typically occurs as molecules or atoms collide, transferring the heat between each other. At even higher temperatures, electrons can be thermally excited to higher-energy orbitals in atoms or molecules. A subsequent drop of an electron to a lower energy level can release a photon, causing a possible colored glow. An electron farther from the nucleus has higher potential energy than an electron closer to the nucleus, and is thus less tightly bound to the nucleus, since its potential energy is negative and inversely dependent on its distance from the nucleus.[1]

## Crystalline materials

Crystalline solids are found to have energy bands, instead of or in addition to energy levels. Electrons can take on any energy within an unfilled band. At first this appears to be an exception to the requirement for energy levels. However, as shown in band theory, energy bands are actually made up of many discrete energy levels which are too close together to resolve. Within a band the number of levels is of the order of the number of atoms in the crystal, so although electrons are actually restricted to these energies, they appear to be able to take on a continuum of values. The important energy levels in a crystal are the top of the valence band, the bottom of the conduction band, the Fermi energy, the vacuum level, and the energy levels of any defect states in the crystal.
http://nrich.maths.org/48/note
# Pebbles

## Pebbles

Imagine that you're walking along the beach, a rather nice sandy beach with just a few small pebbles in little groups here and there. You start off by collecting just four pebbles and you place them on the sand in the form of a square. The area inside is of course just $1$ square something, maybe $1$ square metre, $1$ square foot, $1$ square finger ... whatever. By adding another $2$ pebbles in line you double the area to $2$, like this: The rule that's developing is that you keep the pebbles that are down already (not moving them to any new positions) and add as FEW pebbles as necessary to DOUBLE the PREVIOUS area, using RECTANGLES ONLY! So, to continue, we add another three pebbles to get an area of $4$: You could have doubled the area by doing: But this would not obey the rule that you must add as FEW pebbles as possible each time. So this one is not allowed. Number 6 would look like this: So remember:-

#### The rule is that you keep the pebbles that are down already (not moving them to any new positions) and add as FEW pebbles as necessary to DOUBLE the PREVIOUS area.

Well, now it's time for you to have a go. "It's easy,'' I hear you say. Well, that's good. But what questions can we ask about the arrangements that we are getting? We could make a start by saying "Stand back and look at the shapes you are getting. What do you see?'' I guess you may see quite a lot of different things. It would be good for you to do some more of this pattern. See how far you can go.
You may run out of pebbles, paper or whatever you may be using. (Multilink, pegboard, elastic bands with a nail board, etc.) Well now, what about some questions to explore? Here are some I've thought of that look interesting:

1. How many extra pebbles are added each time? This starts off $2$, $3$, $6$ ...
2. How many are there around the edges? This starts off $4$, $6$, $8$ ...
3. How big is the area? This starts off $1$, $2$, $4$ ...
4. How many are there inside? This starts off $0$, $0$, $1$, $3$, $9$ ...

Try to answer these, and any other questions you come up with, and perhaps put them in a kind of table/graph/spreadsheet etc. Do let me see what you get - I'll be most interested. Don't forget the all-important question to ask - "I wonder what would happen if I ...?''

### Why do this problem?

Use this activity to introduce the youngsters to an investigation that mixes both shape and space work with number work. You could also introduce learners to this extended piece of work to help you look at perseverance and persistence.

### Possible approach

A good introduction can be had with the whole class by making the first two or three arrangements all together. It is useful to have squared and dotted (squares) paper available, whilst some pupils may benefit from using blocks (such as multilink) to represent the pebbles. You may also find it helpful to use the Virtual Geoboard for sharing ideas amongst the whole group. After children have worked in pairs for a time, investigating subsequent arrangements, you can pose some of the suggested questions (for example looking at the number of pebbles added each time) and invite them to ask and explore their own questions. Encourage record keeping in whatever form the pupils feel is appropriate.

### Key questions

What are you counting? (Sometimes there is confusion about the counting of the pebbles and the counting of the spaces in between them - particularly along the lengths of sides.)
Is this rectangle double the size of the last one? How are you recording what you have done? ### Possible extension Some pupils may produce a table or a spreadsheet of their results which would enable them to explore further. Here is an example of many results that lead to the consideration of the digital roots (d.r.): ### For the exceptionally mathematically able Go to More Pebbles here. ### Possible support Children may benefit from adult support in keeping track of where they are in their exploration. They could be helped to proceed as if it were a game. The NRICH Project aims to enrich the mathematical experiences of all learners. To support this aim, members of the NRICH team work in a wide range of capacities, including providing professional development for teachers wishing to embed rich mathematical tasks into everyday classroom practice. More information on many of our other activities can be found here.
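For anyone who wants to check the four sequences, the doubling rule can be simulated directly. A sketch, assuming (as the arrangements above suggest) that the cheapest way to double the area is always to stretch the rectangle along one side:

```python
# Simulate the pebble rule: start with a 2x2 grid of pebbles (area 1) and
# repeatedly double the enclosed area, adding as few pebbles as possible.
def pebble_sequences(steps=6):
    r, c = 2, 2          # pebbles along each side of the rectangle
    rows = []
    for _ in range(steps):
        total = r * c                  # pebbles used so far
        area = (r - 1) * (c - 1)       # enclosed area in unit squares
        inside = (r - 2) * (c - 2)     # pebbles strictly inside the edge
        rows.append((total, area, total - inside, inside))
        # Doubling along the rows costs (r-1)*c new pebbles; along the
        # columns it costs r*(c-1).  Take the cheaper stretch.
        if (r - 1) * c <= r * (c - 1):
            r = 2 * r - 1
        else:
            c = 2 * c - 1
    return rows

for total, area, edge, inside in pebble_sequences():
    print(total, area, edge, inside)
```

The columns reproduce the sequences in the questions: pebbles added each step ($2, 3, 6, 10, \ldots$), pebbles around the edge ($4, 6, 8, 12, \ldots$), areas ($1, 2, 4, 8, \ldots$) and pebbles inside ($0, 0, 1, 3, 9, \ldots$).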
http://mathoverflow.net/revisions/11190/list
The drawing of lines as you have explained gives a group law only in the case of genus $1$ curves. This does not work for any other genus. The reason is that the Riemann-Roch theorem gives the third point under the composition, and it works out only in the case of genus $1$. Riemann-Roch is the most important theorem in the study of Riemann surfaces, or algebraic curves. When you set up the situation in terms of divisors and apply Riemann-Roch, you kind of get associativity "for free". This seems the most natural explanation to me. It is also much the same as the Jacobian explanation given earlier. This is given in Silverman's AEC, but it is a bit algebraic. See the proof of the group law in J. W. S. Cassels, Lectures on Elliptic Curves: first the proof given by Harrison Brown is explained, and then this "conceptual" explanation using Riemann-Roch is given. However, since your approach is complex analytic, it will be very instructive to look into Rick Miranda's book on Riemann surfaces. Also, Raghavan Narasimhan's ETH lecture notes give the complex analytic construction of the Jacobian variety, referred to by other people in earlier answers. The more advanced (and definitive) volume on complex algebraic geometry is by Griffiths and Harris.
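As a toy companion to the discussion, one can verify an instance of the chord-and-tangent group law by exact rational arithmetic. The curve $y^2 = x^3 + 1$ and the points below are my own illustrative choices, not taken from any of the cited books, and this sketch only exercises the genus-$1$ construction; it is no substitute for the Riemann-Roch argument:

```python
# Chord-and-tangent addition on the affine part of y^2 = x^3 + a*x + b over Q.
from fractions import Fraction as F

a, b = F(0), F(1)  # the curve y^2 = x^3 + 1

def add(P, Q):
    """Add two affine points; None stands for the point at infinity."""
    if P is None:
        return Q
    if Q is None:
        return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and y1 == -y2:
        return None                       # P + (-P) = O
    if P == Q:
        s = (3 * x1 * x1 + a) / (2 * y1)  # slope of the tangent line
    else:
        s = (y2 - y1) / (x2 - x1)         # slope of the chord
    x3 = s * s - x1 - x2
    return (x3, s * (x1 - x3) - y1)

P, Q = (F(0), F(1)), (F(2), F(3))
assert add(add(P, P), Q) == add(P, add(P, Q))  # one instance of associativity
print(add(P, Q))  # the third collinear point, (-1, 0)
```

Of course, checking associativity at a few points proves nothing; it only makes it plausible, which is exactly why the Riemann-Roch (or Jacobian) explanation is valuable.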
http://quant.stackexchange.com/questions/7197/kelly-criterion-and-sharpe-ratio
# Kelly criterion and Sharpe ratio

What's the relationship between the Kelly criterion and the Sharpe ratio? $$f=\frac{p(b+1)-1}{b}$$ where $f$ is the fraction of capital to place on a bet, $p$ is the probability of success, and $b$ is the payout odds (e.g. 3 dollars for every 1 dollar bet). Is $b$ (the payout ratio) also the Sharpe ratio? I am having a hard time understanding what Ernie is referring to when he is connecting the two concepts. -

## 2 Answers

The Sharpe ratio $S_i$ of a strategy indexed by $i$ is given by the ratio of the mean excess return $m_i$ to the standard deviation of returns $\sigma_i$, that is, $S_i = m_i/\sigma_i$. The formula you have quoted is the discrete Kelly criterion. That's not so useful in trading, where the outcomes are continuous. The continuous Kelly criterion states that for every $i$th strategy with Sharpe ratio $S_i$ and standard deviation of returns $\sigma_i$, you should be leveraged $f_i = m_i/\sigma_i^2 = S_i/\sigma_i$. A note on the difference between the discrete and continuous criteria: the Kelly criterion is designed to protect your equity from "ruin", so it will never tell you to bet more than what you have in the discrete case - because when you "lose", you lose the complete bet you've placed. The leverage $f_i$ will always be $<1$ in the discrete case. On the other hand, in the continuous case, your leverage can be $>1$. Let us assume we have a portfolio with an overall Sharpe ratio $S$. What Ernie is talking about is that the maximum compounded growth rate $g$ is given by $g = r + S^2/2$. We usually drop the risk-free rate (unless we post treasuries for margin), so we have $g = S^2/2$. - Kristine, thank you very much for your response. Though I am still confused as to why there are two versions of the Kelly criterion, continuous and discrete. I have a tendency to look at Sharpe ratios from a trading perspective. Are you saying the Sharpe ratio of your trading model is constantly changing along with the volatility of returns?
I guess my question is where does the continuity aspect come from? Not to mention, why doesn't the continuous Kelly include the probability of the outcome? – jessica Feb 3 at 7:12 Yes, your Sharpe ratio changes with the volatility of returns. That's why a corollary of the Kelly criterion requires you to rebalance your portfolio constantly. The continuity comes from the continuity of prices - a "loss" can be perceived as a continuous range of outcomes where your exit price is {0 ticks + commissions, 1 tick + commissions, 2 ticks + commissions, ...} less than your entry price. The probability of the outcome is implicit in $S_i$ and $\sigma_i$, which specify the probability distribution. – kristine Feb 3 at 7:43 So the "g" reflects the growth rate in the asset? – jessica Feb 3 at 8:12 Maximum compounded growth rate of your portfolio. – kristine Feb 3 at 14:53 I would not put too much weight on any relationship between the Sharpe ratio and the Kelly criterion. The two are simply not logically related other than that they both share common inputs. Kelly relates to sizing your position, while Sharpe ratios relate your excess returns to the volatility of those returns. As long as you find common inputs you can always set up a mathematical relationship between two equations. Yes, both relate to risk, but that's as far as I would go in relating one concept to the other. - A few comments. (1) Excess returns, volatility and position sizing should not be looked at separately. (2) There is logical intuition that drives the relationship between the Sharpe ratio and the Kelly leverage; you want to increase your allocation to a strategy that you believe to have better risk-adjusted returns (Sharpe ratio). (3) In mathematics, when you derive a theorem or equation, then it holds. You cannot ignore the relationship between its variables on the basis of emotion, opinion or religion as suggested. You can, however, challenge the assumptions and steps of your derivation.
– kristine Feb 3 at 15:23 Nobody argues with emotions here. Nowhere in any literature is a bet size (Kelly is not even widely accepted as a standard financial asset position sizing tool) related to the Sharpe ratio. The Sharpe ratio represents risk-adjusted return regardless of position size. The position size and your capital base, two incredibly important aspects when deriving optimal bet size, are non-existent in deriving risk-adjusted return à la Sharpe ratios. – Freddy Feb 3 at 15:45 (1) I agree that the Kelly formula has no place in standard practice, but this is off-topic. (2) However, with regards to jessica's question, there is an intuitive relationship and it is well-defined in the original literature (edwardothorp.com/sitebuildercontent/sitebuilderfiles/…, equation 7.3) though glossed over because it is akin to redundant substitution. I should point out to you that the Sharpe ratio is defined as the excess return over the standard deviation of return. – kristine Feb 3 at 17:34 1 @kristine, you continue to miss the point here. – Freddy Feb 3 at 17:51 1 Thanks, Freddy. – kristine Feb 3 at 18:22
http://math.stackexchange.com/questions/115566/what-does-s-1-2-3-mean
# What does $S = \{1,2,3 \}^*$ mean?

Apparently, if one adds an asterisk to the right side of a set definition, it means the set to the left can be built out of elements in the set to the right. How is this so? What does the asterisk officially mean? -

## 3 Answers

This notation is most common in discrete mathematics. In that context the set $S$ is considered to be an alphabet and $S^*$ just means the set of all finite strings that can be formed with letters from the alphabet. Variants are $S^n$ for the set of all strings of length $n$, and $S^{\le n}$ for the set of all strings with length no more than $n$. These latter two variants are widely used in set theory as well. - 1 And it should probably be explicitly noted that this includes the empty string, usually denoted either by $\lambda$ or by $\varepsilon$. – Brian M. Scott Mar 2 '12 at 11:39 Yes, the string of length 0. – Patrick Mar 2 '12 at 14:11 It means whatever it says it means in the place where you found it. Without knowing that context, it could mean anything. One common meaning is something like what you've said. It could mean all finite strings of symbols made up from the symbols 1, 2, and 3. - +1 for the first line alone! – user21436 Mar 2 '12 at 8:49 It's called the "Kleene star": see http://en.wikipedia.org/wiki/Kleene_star -
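To make the first answer's description concrete, here is a small Python sketch enumerating $S^{\le n}$ for $S=\{1,2,3\}$ (the function name is my own):

```python
from itertools import product

def strings_up_to(alphabet, max_len):
    """All strings over `alphabet` of length <= max_len, i.e. S^{<= max_len}.
    The empty string (length 0) is included."""
    out = []
    for n in range(max_len + 1):
        for letters in product(sorted(alphabet), repeat=n):
            out.append(''.join(str(a) for a in letters))
    return out

words = strings_up_to({1, 2, 3}, 2)
print(words[:5])   # ['', '1', '2', '3', '11']
print(len(words))  # 1 + 3 + 9 = 13
```

$S^*$ itself is the infinite union over all $n$, so it can only be enumerated lazily, never listed in full.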
http://physics.stackexchange.com/questions/4805/effects-of-space-mining-on-earths-orbit?answertab=votes
Effects of space mining on Earth's orbit

I was reading a post about space mining, especially lunar mining. I was thinking about what would change in Earth's orbit if we started bringing tons of rocks to it - I mean, on a huge scale. So, would space mining change Earth's orbit in a way that produces any dangerous effects? If the system in question is not a planet/satellite like Earth/Moon, would that change anything? Would bringing tons of rocks from Mars be any different from the Moon? -

4 Answers

No, there would be no detectable - and surely no dangerous - changes to the Earth's orbit. Just for the sake of an argument, imagine that we double our coal reserves by bringing coal from another place - and even some of the precious metals are too expensive to be brought by spaceships at this moment. ;-) The Earth's coal reserves are something like 1 trillion tons which is $10^{15}$ kg. Let's bring this amount from another celestial body - it's about 10 orders of magnitude beyond what we can do now, but let's imagine we double our coal reserves in this way. The Earth's mass is $6\times 10^{24}$ kg, so the coal reserves are approximately $10^{-10}$ of the Earth's mass. Now, if all the extraterrestrial coal landed at a slow speed relative to the Earth, the Earth's velocity wouldn't change at all; only the mass would increase and the heavier Earth would continue along exactly the same trajectory as before. But now, imagine that the coal lands at some speed, e.g. the safe speed that the space shuttles used to take. It's about $350$ km/h. If you don't get approximately this low, there's a risk that your coal will burn in the atmosphere.
So if $10^{-10}$ of the Earth's mass has a relative velocity that is $350$ km/h, and let's imagine that all the momentum will go in the same direction - we could make the coal space shuttles land at different places if we wanted - then the velocity of the Earth will change by $350\times 10^{-10}$ km/h which is $3.5\times 10^{-5}$ m/h or $10^{-8}$ m/s. The speed of Earth around the Sun is approximately $3\times 10^{4}$ m/s, so we only change it by a trillionth. Correspondingly, the eccentricity of the Earth's orbit could change at most by one trillionth. We would have a hard time detecting this change. Obviously, you would need to increase the amount of resources you bring roughly by 10 orders of magnitude (which means by 20 orders of magnitude relative to what we can achieve today) to produce any threat to the Earth. But even if you brought the whole Moon here to Earth, $7\times 10^{22}$ kg (eight orders of magnitude heavier than the Earth's coal reserves), there would be no significant change of the orbit because the speed of the Moon is approximately the same as the speed of the Earth - they're bound together. Well, the shape of the Earth could change a bit if we tried to incorporate the Moon too quickly. ;-) You would have to bring a big fraction of Mars to the Earth (Mars is both heavier and has a substantially different speed) to change the eccentricity of the Earth's orbit by a significant amount, and I assure you that this will remain in the realm of science fiction for many, many centuries if not forever. If you brought 1/3 of Mars to the Earth, you would also have to build mountains that are 1000 kilometers high - more than 100 times Mount Everest - around the whole Earth. And it would still not be too dangerous as far as the orbital characteristics go. Of course, there could be a danger for the people who suddenly have 1000 kilometers of rock above their heads.
Your question clearly seems to be an artifact of the unscientific doomsday scenarios that have been presented as science in recent years - e.g. the doomsday caused by CO2 in the atmosphere. Even relative to the atmosphere, our additions of CO2 are negligible - we're changing the number of molecules in the atmosphere by 2 parts per million (0.0002%) per year. But the atmosphere is just one millionth of the Earth's mass, so our annual CO2 emissions only redistribute something like 2 parts per trillion of the Earth's mass every year. Clearly, all those changes are irrelevant from a "mechanical" viewpoint and they're arguably irrelevant from a climatic viewpoint, too. - 2 Most of this answer was excellent, but the last paragraph is completely off-topic considering the question. – Mark Eichenlaub Feb 8 '11 at 22:27 The Earth accumulates cosmic dust at about 40 tons per day (according to Wikipedia: Cosmic Dust). This seems to have no effect on gravity and the orbit of the Earth around the Sun. For space mining to have an effect, the mined amount would have to be significantly larger. - 1 40 tons per day is 2 grams per person per year. Anything economically significant would be several orders of magnitude more than 40 tons per day. – Mark Eichenlaub Feb 8 '11 at 13:10 And then there's an effect in the opposite direction: thermal evaporation of gases (He, O2?, N2?) into space reduces Earth's mass slightly. I don't have any numbers. Which of those effects might be the larger one? – Jens Sep 26 '12 at 13:57 The redistribution of mass due to space activities is very small. To the extent that anything like space mining happens, it is in the opposite direction: we mine materials on the Earth and, over the last several decades, have sent somewhere around 10 thousand tons of material into space. Given that the mass of the Earth is about $6\times 10^{21}$ metric tons, the mass of stuff we have sent into space is a very small percentage.
The perturbation is tiny. Any mining from space is also going to be a tiny perturbation. Commercial activities in space have so far involved massless forms, in particular information. The next possible step might be with solar power satellites. Again this is a massless form. Sending spacecraft into space to mine materials is not likely to work any time in the near future. The Saturn Apollo rocket weighed in at 3000 tons of highly processed materials, which brought back 50 kg of lunar rocks. In economic terms, or as a ratio of materials and energy in versus materials and energy out, this is clearly a loss. - The cost of the Apollo program in toto divided by total kilograms of Moon rock brought back would be a very disillusioning "price"! – Georg Feb 8 '11 at 13:39 The answers so far miss the key physical aspect: a planetary orbit does not depend on the mass of the planet (see Kepler's laws) because the planet's mass is much smaller than the mass of the Sun. This means that even if you double Earth's mass by mining Mars completely, it will stay on the same orbit. Update: this is assuming that Earth's velocity stays the same, but then, if people are able to move these enormous masses, they could change Earth's orbit at will. - My answer has surely not missed this point. I explicitly explain that if the reserves land at zero speed relative to the Earth, the orbit won't change at all. – Luboš Motl Feb 8 '11 at 13:45 @Luboš: well, we wrote our answers simultaneously. But, to be fair, you only briefly mention it in your detailed answer, whereas I think this is the only part worth answering. Speculations on details of space mining are off-topic for Physics.SE. – gigacyan Feb 8 '11 at 14:39
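The first answer's momentum estimate is easy to reproduce. A back-of-the-envelope sketch using the answer's own figures (the variable names and exact numbers follow that answer, not any external data set):

```python
m_coal = 1e15            # kg, "1 trillion tons" of coal brought to Earth
m_earth = 6e24           # kg, Earth's mass
v_landing = 350 / 3.6    # 350 km/h landing speed, converted to m/s

# Conservation of momentum, with all landings aligned in one direction.
dv = m_coal * v_landing / m_earth   # change in Earth's velocity, m/s

v_orbital = 3e4          # Earth's orbital speed around the Sun, m/s
print(dv, dv / v_orbital)
```

This reproduces the quoted order of magnitude: a velocity change of roughly $10^{-8}$ m/s, i.e. a fractional change of the orbital speed well below one part in a trillion.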
http://mathoverflow.net/questions/78696?sort=oldest
## Is there an intuitive reason for Zariski’s main theorem?

Zariski's main theorem has many guises, and so I will give you the freedom to pick the one that you find to be most intuitive. For the sake of completeness, I will put here one version: Zariski's main theorem: Let $f:X\rightarrow Y$ be a quasi-finite separated birational map of varieties, where $Y$ is normal. Then $f$ is an open immersion. There is no reason to pick this particular formulation. In fact, every formulation seems to me like a technical lemma rather than a theorem with geometrically intuitive content.

### Question

Is there a formulation of Zariski's main theorem that has an intuitive/pictorial "reason" for it? Or is Zariski's main theorem at its core a technical result with no geometric reason? - 7 I am looking forward to the answers very much. The result relates a very algebraic hypothesis to a very geometric conclusion, which suggests to me that there can't exactly be a "pictorial" reason why it's true. I am guessing that the fun will be more in seeing why the various geometric conclusions are really the same--and that when we have understood that then we can say that we really know the geometric meaning of "integrally closed". – Tom Goodwillie Oct 20 2011 at 19:52 1 I think the normality of $Y$ is not the real reason. This version of ZMT when $Y$ is not necessarily normal is that $X$ is open in some $X'$ which is finite birational to $Y$. – Qing Liu Oct 20 2011 at 22:47 1 Qing Liu: What exactly is that statement? Let $X'\to Y$ be an arbitrary finite birational morphism. For instance the normalization of a singular projective curve. Let $X=X'$. This seems to satisfy your condition. So, what is that statement without normality of $Y$?
– Sándor Kovács Oct 21 2011 at 3:11 @Sándor Kovács: If $f :X\to Y$ is a quasi-finite and separated morphism to a noetherian scheme and if $X, Y$ are integral, then $f$ factorizes into an open immersion $X\to X'$ and a finite morphism $X'\to Y$. This implies the following statement: if $f$ is further birational and $Y$ is normal, then $X'\to Y$ is finite birational hence an isomorphism and $f$ is an open immersion. – Qing Liu Oct 21 2011 at 7:29

## 4 Answers

This is a beautiful question, and I do not know whether one can give a satisfactory answer. Anyway, let me try and say something. My favourite version of Zariski Main Theorem is the one given in Hartshorne: Let $f \colon X \to Y$ be a birational projective morphism of noetherian integral schemes, with $Y$ normal. Then the fibre $f^{-1}(y)$ is connected for any $y \in Y$. The fact that normality is necessary can be easily understood by means of the following example: take a del Pezzo quintic surface $X \subset \mathbb{P}^5$ and project it from a general point to $\mathbb{P}^4$. It is possible to prove that the image surface $X' \subset \mathbb{P}^4$ has exactly one singular point, which is a non-normal double point, and this appears because there exists exactly one line $\ell$ in $\mathbb{P}^5$ which intersects $X$ at more than one point. In fact, $\ell \cap X$ contains exactly two points, so the preimage of the singular point of $X'$ via the birational map $\pi \colon X \to X'$ consists of those two points; in particular it is not connected. If $Y$ is normal, Zariski's main theorem tells you that situations like this cannot occur: any fibre of a birational map is either exactly one point, or it is a connected variety of dimension $\geq 1$. Why is normality the key condition? Well, the reason is that if $Y$ is normal and $f \colon X \to Y$ is birational then `$f_* \mathcal{O}_X = \mathcal{O}_Y$`. I follow Hartshorne again: assume $Y$ affine, i.e. $Y=\textrm{Spec}(A)$.
Then $f_* \mathcal{O}_X$ is a coherent sheaf of `$\mathcal{O}_Y$`-algebras, hence `$B=\Gamma(Y, f_* \mathcal{O}_X)$` is a finitely generated $A$-module. But $A$ and $B$ are integral domains with the same quotient field (birationality of $f$) and $A$ is integrally closed (normality of $Y$) hence we must have $A=B$ and we are done. This is really easy. Of course, I cheated a bit because I used a big weapon: $f_* \mathcal{O}_X=\mathcal{O}_Y$ implies "connected fibres". The proof of this statement uses in fact the deep Theorem of Formal Functions. At any rate, I hope that this answer sheds at least some light on the geometrical meaning of Zariski Main Theorem. - Dear Francesco: in your initial statement of ZMT, "finite" should be "connected". – Artie Prendergast-Smith Oct 20 2011 at 21:54 3 Also, it seems worth mentioning that to see that normality is necessary, there is of course a very simple example: let Y be a nodal curve, and $X \rightarrow Y$ its normalisation. – Artie Prendergast-Smith Oct 20 2011 at 21:57 2 @Artie: of course I intended "connected", thank you! Well, you are right about the example of the curve. But I preferred to give the one with the surface since it shows the first non-trivial case of non-normal isolated singularity (for curves, normality is equivalent to smoothness). The tangent cone at the singularity is given in that case by two planes in $\mathbb{P}^4$ intersecting at one point – Francesco Polizzi Oct 20 2011 at 22:02 1 Francesco: Of course, your example is more interesting than the one I mentioned; I just pointed it out for the benefit of "passers-by" who might happen to look at the question. – Artie Prendergast-Smith Oct 20 2011 at 22:10
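The nodal-curve example mentioned in the comments (the normalization $X \to Y$ of a nodal curve) can even be checked by direct computation. Below is a small dependency-free sketch using the standard parametrization of the nodal cubic $y^2 = x^2(x+1)$ (my choice of model, not taken from the thread): the normalization is finite and birational, yet its fibre over the node has two points, so the conclusion of ZMT fails precisely because the node is not a normal point.

```python
# Normalization of the nodal cubic y^2 = x^2*(x + 1): the affine line maps
# onto the curve by t |-> (t^2 - 1, t*(t^2 - 1)), where t is the slope of
# a line through the node.
def phi(t):
    x = t * t - 1
    return (x, t * x)

# The image really lies on the curve (exact integer check at sample parameters).
for t in [-3, -1, 0, 1, 2, 5]:
    x, y = phi(t)
    assert y * y == x * x * (x + 1)

# The fibre over the node (0, 0) consists of two points, t = 1 and t = -1:
# a finite birational morphism with a disconnected fibre.
assert phi(1) == phi(-1) == (0, 0)
print(phi(1), phi(-1))  # (0, 0) (0, 0)
```

The identity $y^2 = x^2(x+1)$ holds for every parameter $t$ because $x + 1 = t^2$, so the check above is a genuine algebraic verification, not just numerics.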
There are many formulations of Zariski's main theorem, and Mumford's Red Book gives a very nice description of some of them, and of their interrelations. It is worthwhile remembering that it was in fact Zariski who first proved a version of his main theorem. Zariski was a geometer first and foremost (although he was also one of the greatest ever commutative algebraists), and so it is reasonable to look for the geometric content in this result. In fact Zariski first proved his main theorem before he developed his theory of formal functions (which was his method for proving connectedness theorems, and in its modern cohomological reformulation by Grothendieck remains the basic method for proving connectedness statements, as in Francesco Polizzi's answer). Zariski's original version of his theorem stated that if the preimage of a point on a normal variety under a birational map contains an isolated point, then the birational map is in fact an isomorphism in a n.h. of that point. As Mumford explains, what this result and the later variations have in common is that a variety is unibranch at a normal point, i.e. there is only one branch of the variety passing through such a point. Thus, if we blow the variety up in some way, we might be able to increase the dimension at this point (in the sense that we might be able to replace $y$ by something higher dimensional), but we cannot break the variety apart there. Grothendieck's formulation is very natural: it states that if we have a quasi-finite morphism, we can always compactify it (compactification to be understood in a relative sense) to a finite morphism. To see how this implies Zariski's original result, just observe that if $f:X \to Y$ is birational with $Y$ normal, and $x$ is an isolated point in $f^{-1}(y)$, then we can choose a n.h. $U$ of $x$ over which $f$ is quasi-finite, which then compactifies to a finite morphism.
But since $Y$ is normal, any finite birational map to $Y$ must be an isomorphism, and so $U$ must embed into $Y$ via an open immersion. In short, $f$ is an isomorphism between a n.h. of $x$ and a n.h. of $y$. - My favorite version of ZMT is the same as Francesco's: Let $f \colon X \to Y$ be a birational projective morphism of noetherian integral schemes, with $Y$ normal. Then the fibre $f^{-1}(y)$ is connected for any $y \in Y$. Let me try to give a pedestrian naive${}^1$ answer: If $f^{-1}(y)$ is not connected, then $f$ "glues" two or more pieces together. In particular, and this is the key, it follows that the image point is not unibranched, or in other words its tangent cone is not irreducible. Therefore a local equation for $Y$ would look something like $$uv + p = 0,$$ where $p$ has a higher degree (in whatever sense locally at the point in question) than $uv$. Now for simplicity assume that $p$ is a polynomial in $u, v$. This is not entirely true, but I am not claiming to prove ZMT here. At least it is true for a nodal cubic. Anyway, once we have that, we're kind of done: if $uv + p(u,v)=0$, divide by $av^{d+N}$ where $av^du^N$ is the highest degree term of $p$ in $u$ and obtain a monic polynomial of degree $N$ in $\dfrac uv$. ${}^1$: pedestrian naive = heuristic, not trying to be precise - 4 I know you are not the one making up this particular piece of terminology but to me it seems a pedestrian should have ample time to look around and make everything precise whereas a driver swishes through at high speed and should not... – Torsten Ekedahl Oct 21 2011 at 4:24 2 Dear *red herring*: I don't understand your "false friend" comment. The English word comes from Latin with the exact meaning that you are saying. It is easy to imagine how "ordinary way of transport" became synonymous with walking. (cont'ed) – Sándor Kovács Oct 21 2011 at 16:07 2 Word Origin & History pedestrian 1716, "prosaic, dull" (of writing), from L. pedester (gen.
pedestris) "plain, prosaic" (sense contrasted with equester "on horseback"), from pedes "one who goes on foot," from pes (gen. pedis) "foot" (see foot). Meaning "going on foot" is first attested 1791 in Eng. (it was also a sense of L. pedester). The noun meaning "walker" is 1793, from the adj. – Sándor Kovács Oct 21 2011 at 16:07 2 Finally, in my language (Hungarian) a pedestrian is called gyalogos which literally means "walker". – Sándor Kovács Oct 21 2011 at 16:09 2 I think that in our enlightened age we should be allowed to disregard equestrian prejudice against the horseless so I stand by my comment. – Torsten Ekedahl Oct 21 2011 at 17:09 I think Grothendieck's proof of ZMT (in EGA IV-8) -- which I find fantastic -- is itself a great source of intuition, at least when presented in outline. (Here I mean the one that a quasi-finite separated map factors as an open immersion and a finite morphism.) Here's the strategy: let's say you want to show first that $f: X\to Y$, a quasi-finite separated map between nice (e.g. noetherian) schemes (or finite presentation otherwise) is quasi-projective (or even quasi-affine). This is the basic step, after which the version of ZMT in Hartshorne and a little more work gives you the full result (see e.g. EGA III.IV for this). OK, so this doesn't seem obvious: you have a map of schemes that could be very pathological, but still you're claiming that $X$ can be "compactified" to a projective $Y$-scheme. To see this, we need to get an ample line bundle on $X$; the claim is that $\mathcal{O}_X$ works, which again is to say that $X$ is quasi-affine. How do we see this? We work locally on $Y$.
This is part of a simple idea developed at length by Grothendieck: to show that a certain local property is true for a map $f: X \to Y$, you can just check at all the local rings after base-change (and this is itself a feature of the "noetherian descent" formalism: if a property is true on an appropriate inverse limit, it descends (or ascends?) to being true at some finite stage). Thus one reduces to the case where $Y$ is local. The next idea is to make $Y$ complete local---this is a consequence of faithfully flat descent. The point is, it's not hard to show that anything quasi-finite over a complete local ring is the sum of a finite morphism plus something smaller. So, the result for $Y$ complete local and for the $X$ finite over the closed point is easy commutative algebra. The rest of $Y$ follows by noetherian induction. In other words, the point is to a) reduce to the case of $Y$ local, by noetherian descent, b) reduce to the case of $Y$ complete local, by faithfully flat descent, and c) use a clever inductive trick based on puncturing $Y$ (which is very geometric---puncturing $Y$ at the closed point is not something you can do purely algebraically!). So, maybe the intuition I take from this is that something which is true over complete local rings has a good chance of being true in general. I'm afraid this isn't all that geometric -- maybe one way of saying it is that if something is true analytically locally, then it has a good chance of being true algebraically locally. (The proof of the full strength ZMT in EGA IV-8 is, I think, the same sort of idea, though with added use of harder commutative algebra -- properties of excellent rings.) Another illustration of this technique of reducing to complete local rings (and induction) is in SGA I, expose IX, Theorem 4.7: finite surjective morphisms of finite presentation are morphisms of effective descent for the etale site. In expose VIII, sec.
6 the above argument for half of ZMT is given (in a much more abbreviated form than that in which it appears in EGA). - To be honest, I cannot see any intuition at all in what you wrote :) I know: you like potayto and I like potahto... – Mariano Suárez-Alvarez Oct 21 2011 at 14:31 Dear Akhil, I think that you mean "*locally* on $Y$" in the third paragraph. Also, I found this description of Grothendieck's proof very helpful; thanks! Best wishes, Matthew – Emerton Oct 21 2011 at 14:53 Dear Matthew, thanks for the correction (and for the kind words!). – Akhil Mathew Oct 21 2011 at 22:57
http://math.stackexchange.com/questions/105017/integrating-int-limits-01-1xexeexdx-wolfram-alpha-doesnt-pro/105020
# Integrating: $\int \limits_{0}^{1} (1+xe^x)e^{e^x}dx$ (wolfram alpha doesn't provide the steps for this one)

How do I integrate this thing? $\int \limits_{0}^{1} (1+xe^x)e^{e^x}dx$ I've tried all the different integration methods and "by parts" combinations for $u$ and $dv$... -

## 3 Answers

Notice first that $(d/dx)(e^{e^x}) = e^x\cdot e^{e^x}$. Integrating by parts, we see that $$\int xe^xe^{e^x} dx = xe^{e^x} - \int e^{e^x} dx$$ Hence $$\int (1 + xe^x)e^{e^x} dx = xe^{e^x} + C$$ and the definite integral $$\int_0^1 (1 + xe^x)e^{e^x} dx = e^e$$ - that's like saying "notice the answer"... am I supposed to guess or recognize the derivative of $e^{e^x}$ ? – nofe Feb 2 '12 at 17:40 I'm sorry you don't like it. It's a very natural thing to notice because the derivative of $e^{f(x)}$ is $f'(x)e^{f(x)}$. – Simon S Feb 2 '12 at 17:43 @nofe: To be blunt, yes, you are supposed to recognize the derivative of $e^{e^x}$. – Brian M. Scott Feb 2 '12 at 17:50 If nothing else, now you've seen this method you'll be able to cope with integrals such as $$\int (\sec x \tan x + 1)e^{\sin x} dx$$ – Simon S Feb 2 '12 at 17:54 which parts did you use to plug in? I've already tried integrating by parts by setting $dv=(1+xe^x)dx$ and $u=e^{e^x}$ and that didn't work. – nofe Feb 2 '12 at 17:57

This might be a long or a short answer, depending on how you see it. Integration is much like learning kung fu. You learn all these "secret" cool tricks to be able to tear down huge scary badasses (also known as integrals). Knowing when to use these cool and secret tricks comes only with much practice: after solving hundreds of integrals you start to develop a gut feeling for what will work. But before that, it will require much blood, tears and sweat. One of these hidden techniques I have discovered is integration by cancelation, for lack of a better word for it.
The gist of it is breaking the integral into parts, then using integration by parts on one of the parts to obtain some kind of helpful cancelation. I give two examples below, first the easiest example (although a very cool one). Evaluate the integral below $$I = \int \sin(\ln x) + \cos(\ln x) \, \mathrm{d}x$$ The standard method of solving this integral is by either rewriting using complex numbers, or using the substitution $u=\ln x$ and $x = e^u$. I will spare you the details, but the problem is rather cumbersome (although an excellent exercise, try it!) The experienced kung fu integrator, however, will solve this problem by using integration by parts. We use parts on the first part of our integral. Let $$\begin{align} u &= \sin(\ln x) \qquad \quad v' = 1 \\ u'&=\frac{1}{x}\cos(\ln x) \qquad \, v = x \end{align}$$ This turns our integral into $$\begin{align} I & = \int \sin(\ln x) + \cos(\ln x) \, \mathrm{d}x \\ I & = \int \sin(\ln x)\, \mathrm{d}x + \int \cos(\ln x) \, \mathrm{d}x \\ I & = \left[ x \sin(\ln x) - \int x \cdot \frac{1}{x} \cos(\ln x) \, \mathrm{d}x \right] + \int \cos(\ln x) \, \mathrm{d}x \\ I & = \left[ x \sin(\ln x) - \int \cos(\ln x) \, \mathrm{d}x \right] + \int \cos(\ln x) \, \mathrm{d}x \\ I & = x \sin(\ln x) + \mathcal{C} \end{align}$$ And so we see this made the computations a cakewalk. Now actually spotting this can be challenging, and to me this is usually a method of last resort. Let us look at another example. $$I = \int \left( 1 + 2x^2 \right)e^{x^2} \, \mathrm{d}x$$ Many will look at this integral, and claim that it is unsolvable in terms of elementary functions. They say this because they spot an $e^{x^2}$ in the integrand. This is closely related to the Gaussian integral. However, the integral above is "solvable," for lack of a better word, though most standard approaches fail. I will leave it to you to show that the classical approaches all fail here: integration by parts, and substitutions.
The "trick" here is to split the integral into two parts like below $$I = \int e^{x^2} \, \mathrm{d}x + \int 2x^2 e^{x^2} \, \mathrm{d}x$$ Now, if one uses integration by parts on the first part (which might seem absurd, as we know it has no elementary antiderivative), we obtain, once again, some clever cancelations. I will leave it to you to fill in the details. Now, the last problem here is very closely related to your problem, and by applying the exact same approach, we can calculate your integral. Please note, we never once assumed we knew the answer. The only thing we did was some rearrangements, and some clever use of integration by parts. Again, these things are hard to spot. But you will get used to it, and you will eventually learn kung fu. Below is one more problem you might want to try to solve using this method. $$\int \frac{2}{\log x} + \log(\log x) - \frac{1}{\log^2x} \, \mathrm{d}x$$ - +1 for "Integration is much like learning kung fu." Hokuto Finger Shot of Emptiness! – Korgan Rivera Feb 9 '12 at 2:54 I pressed your integral ki points, your integral will be solved in three days. – N3buchadnezzar Feb 9 '12 at 7:32

Well, I was trying to come up with a more straightforward solution, but I think I just ended up lucking into it instead. I was hoping I could somehow eliminate one term with integration by parts. My first step eliminated the $x$ term by multiplying the first part by $e^{-x}$ and the second by $e^x$ to get $\int(e^{-x}+x)e^xe^{e^x}dx$ $u=e^{-x}+x,\ du=(1-e^{-x})\,dx$ $dv=e^xe^{e^x}dx,\ v=e^{e^x}$ $\int(e^{-x}+x)e^xe^{e^x}dx=e^{e^x}(e^{-x}+x)-\int(1-e^{-x})e^{e^x}dx$ Problem now is if I try the same multiplication as before, I won't be able to eliminate any terms. However, it is possible to proceed from here. Instead, we'll multiply the first part by $e^x$ and the second part by $e^{-x}$.
$\int(1-e^{-x})e^{e^x}dx=\int(e^x-1)e^{-x}e^{e^x}dx=\int(e^x-1)e^{e^x-x}dx$ $z=e^x-x,dz=(e^x-1)dx$ $\int(e^x-1)e^{e^x-x}dx=\int e^zdz=e^z=e^{e^x-x}$ Plugging this back in, we get $\int(1+xe^x)e^{e^x}dx=e^{e^x}(e^{-x}+x)-\int(1-e^{-x})e^{e^x}dx=$ $e^{e^x}(e^{-x}+x)-e^{e^x}e^{-x}+C=xe^{e^x}+C$ -
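As a numerical sanity check on the answers above, one can compare a quadrature of the integral against $e^e$. A minimal pure-Python sketch using composite Simpson's rule (the function names are ours, not from the thread):

```python
import math

def integrand(x):
    # the integrand (1 + x e^x) e^(e^x)
    return (1 + x * math.exp(x)) * math.exp(math.exp(x))

def simpson(f, a, b, n=2000):
    """Composite Simpson's rule with n (even) subintervals."""
    h = (b - a) / n
    odd = sum(f(a + (2 * k - 1) * h) for k in range(1, n // 2 + 1))
    even = sum(f(a + 2 * k * h) for k in range(1, n // 2))
    return (f(a) + f(b) + 4 * odd + 2 * even) * h / 3

approx = simpson(integrand, 0.0, 1.0)
exact = math.e ** math.e   # [x e^(e^x)] evaluated from 0 to 1
```

The two numbers agree to many decimal places, consistent with the antiderivative $xe^{e^x}$ found above.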
http://mathoverflow.net/questions/98130?sort=oldest
## Determining the maximum number of distance relationships that can be defined between points in Euclidean space ### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points) Let there be $m$ points in the Euclidean space $\mathbb R^n$. Randomly choose $k$ distinct pairs of these $m$ points, and assign a random (positive) value for the Euclidean distance between each of these $k$ pairs. Determine the maximum value of $k$ as a function of $n$ and $m$ such that, for any random choice of $k$ distinct pairs of points and the Euclidean distances between the points, either 1. There exists some configuration of $m$ points satisfying all the distance relationships, OR 2. There exists a triplet of points for which all three pairwise distances are defined, and these three distances do not satisfy the triangle inequality. The question above is similar to this one http://mathoverflow.net/questions/97611/reconstructing-an-euclidean-point-cloud-from-their-pairwise-distances (and others like it) but I believe the math involved is different. - see mathoverflow.net/questions/7794/… – Anton Petrunin May 28 at 20:42 Hi Anton, I'm not sure whether the solutions there can be extended to $\mathbb R ^n$, but thanks for the tip. – Vincent Tjeng May 29 at 15:09 ## 2 Answers The following 6 distances between 4 points $a,b,c,d$ can not be realized in a Euclidean space of any dimension: $d(a,b)=d(b,c)=d(a,c)=1$ and $d(a,d)=d(b,d)=d(c,d)=0.51$, although all triangle inequalities are satisfied and even strict. Adding any number of points and assigning any set of other distances to be equal to 1 does not change this fact. So no $k\ge 6$ is good if $m\ge 4$. As Will Sawin showed, $k=4$ is not good either. For $k=5$, add a point $E$ and the relation $d(A,E)=1$ to Will's example. Thus the answer to the question as stated is $k=3$ for all $n\ge 2$ and $m\ge 5$. If $m=4$ and $n\ge 2$, one can take $k=5$. 
If $n=1$ and $m\ge 3$, the answer is $k=2$, obviously. In the remaining cases ($n\ge 2$, $m\le 3$ and $n=1$, $m\le 2$) one can define all the $m(m-1)/2$ distances. - ### You can accept an answer to one of your own questions by clicking the check mark next to it. This awards 15 reputation points to the person who answered and 2 reputation points to you. I have no answer but three different comments. It seems unwieldy to post these all as one or more comments. 1: Let $k=4$, and form a quadrilateral otherwise disconnected to other vertices. There are no triangles to violate the triangle inequality, but you can still violate the quadrilateral inequality: $AB \leq BC+CD+AD$. This can cause a violation of monotonicity, where $k$ satisfies your condition and $k+1$ does not. Presumably you don't want this? 2: I think probability is a red herring, since there is no probability measure here. 3: An obvious upper bound is, if there are at least $n+2$ choose $2$ distances, you can make $n+2$ vertices into a regular $n+1$-simplex and do something with the other vertices and edges, to get a graph that embeds in some metric space but not $\mathbb R^n$. So $(n+2)(n+1)/2-1$ is an upper bound. - In comment 1, I assume you mean "quadrilateral" rather than "square" and in comment 3, "distances" rather than "questions"? In any case, thanks for pointing out the issue of the quadrilateral inequality. I'm looking at reformulating the question by replacing the condition on the triangle inequality with the following condition -- There exist $a$ points ${P_1, P_2, ... P_a}$ where all the distances $P_i P_{i+1}$ are defined and $P_1 P_a>P_1 P_2+P_2 P_3 +...+P_{a-1}P_a$. However, does it make the question trivial? – Vincent Tjeng May 28 at 14:37 No. Sergei's answer still works, it just changes the numbers slightly. You need a stronger condition, that probably involves the words "positive semidefinite", to beat Sergei's example. – Will Sawin May 28 at 16:56
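Sergei's four-point counterexample can be checked mechanically with a Cayley–Menger determinant: for four points with squared distances $d_{ij}^2$, the bordered determinant below equals $288V^2$, so a negative value means the six distances embed in no Euclidean space. A sketch in pure Python with exact rational arithmetic (the helper names are ours):

```python
from fractions import Fraction

def det(M):
    """Exact determinant via Laplace expansion along the first row."""
    if len(M) == 1:
        return M[0][0]
    total = Fraction(0)
    for j, entry in enumerate(M[0]):
        minor = [row[:j] + row[j + 1:] for row in M[1:]]
        total += (-1) ** j * entry * det(minor)
    return total

def cayley_menger(d2):
    """Bordered Cayley-Menger determinant; equals 288 V^2 for 4 points."""
    k = len(d2)
    top = [Fraction(0)] + [Fraction(1)] * k
    rows = [[Fraction(1)] + [Fraction(x) for x in d2[i]] for i in range(k)]
    return det([top] + rows)

r2 = Fraction(51, 100) ** 2   # 0.51^2, exactly
d2 = [[0, 1, 1, r2],
      [1, 0, 1, r2],
      [1, 1, 0, r2],
      [r2, r2, r2, 0]]
cm = cayley_menger(d2)        # negative => no Euclidean realization
```

Here `cm` works out negative, so $V^2<0$: no point can be at distance $0.51$ from all three vertices of a unit triangle, whose circumradius is $1/\sqrt{3}\approx 0.577$.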
http://en.wikipedia.org/wiki/Kendall_tau_rank_correlation_coefficient
Kendall tau rank correlation coefficient

In statistics, the Kendall rank correlation coefficient, commonly referred to as Kendall's tau (τ) coefficient, is a statistic used to measure the association between two measured quantities. A tau test is a non-parametric hypothesis test for statistical dependence based on the tau coefficient. Specifically, it is a measure of rank correlation, i.e., the similarity of the orderings of the data when ranked by each of the quantities. It is named after Maurice Kendall, who developed it in 1938,[1] though Gustav Fechner had proposed a similar measure in the context of time series in 1897.[2]

Definition

Let (x1, y1), (x2, y2), …, (xn, yn) be a set of observations of the joint random variables X and Y respectively, such that all the values of (xi) and (yi) are unique. Any pair of observations (xi, yi) and (xj, yj) are said to be concordant if the ranks for both elements agree: that is, if both xi > xj and yi > yj or if both xi < xj and yi < yj. They are said to be discordant if xi > xj and yi < yj or if xi < xj and yi > yj. If xi = xj or yi = yj, the pair is neither concordant nor discordant. The Kendall τ coefficient is defined as: $\tau = \frac{(\text{number of concordant pairs}) - (\text{number of discordant pairs})}{\frac{1}{2} n (n-1) } .$[3]

Properties

The denominator is the total number of pair combinations, so the coefficient must be in the range −1 ≤ τ ≤ 1.

• If the agreement between the two rankings is perfect (i.e., the two rankings are the same) the coefficient has value 1.
• If the disagreement between the two rankings is perfect (i.e., one ranking is the reverse of the other) the coefficient has value −1.
• If X and Y are independent, then we would expect the coefficient to be approximately zero.

Hypothesis test

The Kendall rank coefficient is often used as a test statistic in a statistical hypothesis test to establish whether two variables may be regarded as statistically dependent.
This test is non-parametric, as it does not rely on any assumptions on the distributions of X or Y or the distribution of (X,Y). Under the null hypothesis of independence of X and Y, the sampling distribution of τ has an expected value of zero. The precise distribution cannot be characterized in terms of common distributions, but may be calculated exactly for small samples; for larger samples, it is common to use an approximation to the normal distribution, with mean zero and variance $\frac{2(2n+5)}{9n (n-1)}$.[4]

Accounting for ties

A pair {(xi, yi), (xj, yj)} is said to be tied if xi = xj or yi = yj; a tied pair is neither concordant nor discordant. When tied pairs arise in the data, the coefficient may be modified in a number of ways to keep it in the range [-1, 1]:

Tau-a The Tau-a statistic tests the strength of association of the cross tabulations. Both variables have to be ordinal. Tau-a will not make any adjustment for ties.

Tau-b The Tau-b statistic, unlike Tau-a, makes adjustments for ties.[5] Values of Tau-b range from −1 (100% negative association, or perfect inversion) to +1 (100% positive association, or perfect agreement). A value of zero indicates the absence of association. The Kendall Tau-b coefficient is defined as: $\tau_B = \frac{n_c-n_d}{\sqrt{(n_0-n_1)(n_0-n_2)}}$ where $\begin{array}{ccl} n_0 & = & n(n-1)/2\\ n_1 & = & \sum_i t_i (t_i-1)/2 \\ n_2 & = & \sum_j u_j (u_j-1)/2 \\ n_c & = & \mbox{Number of concordant pairs} \\ n_d & = & \mbox{Number of discordant pairs} \\ t_i & = & \mbox{Number of tied values in the } i^{th} \mbox{ group of ties for the first quantity} \\ u_j & = & \mbox{Number of tied values in the } j^{th} \mbox{ group of ties for the second quantity} \end{array}$

Tau-c Tau-c differs from Tau-b in being more suitable for rectangular tables than for square tables.

Significance tests

When two quantities are statistically independent, the distribution of $\tau$ is not easily characterizable in terms of known distributions.
However, for $\tau_A$ the following statistic, $z_A$, is approximately distributed as a standard normal when the variables are statistically independent: $z_A = {3 (n_c - n_d) \over \sqrt{n(n-1)(2n+5)/2} }$ Thus, to test whether two variables are statistically dependent, one computes $z_A$, and finds the cumulative probability for a standard normal distribution at $-|z_A|$. For a 2-tailed test, multiply that number by two to obtain the p-value. If the p-value is below a given significance level, one rejects the null hypothesis (at that significance level) that the quantities are statistically independent. Numerous adjustments are needed for $z_A$ when accounting for ties. The following statistic, $z_B$, is the analogous quantity for $\tau_B$, and is again approximately distributed as a standard normal when the quantities are statistically independent: $z_B = {n_c - n_d \over \sqrt{ v } }$ where $\begin{array}{ccl} v & = & (v_0 - v_t - v_u)/18 + v_1 + v_2 \\ v_0 & = & n (n-1) (2n+5) \\ v_t & = & \sum_i t_i (t_i-1) (2 t_i+5)\\ v_u & = & \sum_j u_j (u_j-1)(2 u_j+5) \\ v_1 & = & \sum_i t_i (t_i-1) \sum_j u_j (u_j-1) / (2n(n-1)) \\ v_2 & = & \sum_i t_i (t_i-1) (t_i-2) \sum_j u_j (u_j-1) (u_j-2) / (9 n (n-1) (n-2)) \end{array}$

Algorithms

The direct computation of the numerator $n_c - n_d$ involves two nested iterations, as characterized by the following pseudo-code:

```
numer := 0
for i := 2..N do
    for j := 1..(i-1) do
        numer := numer + sgn(x[i] - x[j]) * sgn(y[i] - y[j])
return numer
```

Although quick to implement, this algorithm is $O(n^2)$ in complexity and becomes very slow on large samples. A more sophisticated algorithm[6] built upon the Merge Sort algorithm can be used to compute the numerator in $O(n \cdot \log{n})$ time. Begin by ordering your data points, sorting by the first quantity, $x$, and secondarily (among ties in $x$) by the second quantity, $y$.
With this initial ordering, $y$ is not sorted, and the core of the algorithm consists of computing how many steps a Bubble Sort would take to sort this initial $y$. An enhanced Merge Sort algorithm, with $O(n \log n)$ complexity, can be applied to compute the number of swaps, $S(y)$, that would be required by a Bubble Sort to sort $y_i$. Then the numerator for $\tau$ is computed as: $n_c-n_d = n_0 - n_1 - n_2 + n_3 - 2 S(y)$, where $n_3$ is computed like $n_1$ and $n_2$, but with respect to the joint ties in $x$ and $y$. A Merge Sort partitions the data to be sorted, $y$, into two roughly equal halves, $y_{left}$ and $y_{right}$, then sorts each half recursively, and then merges the two sorted halves into a fully sorted vector. The number of Bubble Sort swaps is equal to: $S(y) = S(y_{left}) + S(y_{right}) + M(Y_{left},Y_{right})$ where $Y_{left}$ and $Y_{right}$ are the sorted versions of $y_{left}$ and $y_{right}$, and $M(\cdot,\cdot)$ characterizes the Bubble Sort swap-equivalent for a merge operation. $M(\cdot,\cdot)$ is computed as depicted in the following pseudo-code:

```
function M(L[1..n], R[1..m])
    i := 1
    j := 1
    nSwaps := 0
    while i <= n and j <= m do
        if R[j] < L[i] then
            nSwaps := nSwaps + n - i + 1
            j := j + 1
        else
            i := i + 1
    return nSwaps
```

A side effect of the above steps is that you end up with both a sorted version of $x$ and a sorted version of $y$. With these, the factors $t_i$ and $u_j$ used to compute $\tau_B$ are easily obtained in a single linear-time pass through the sorted arrays. A second algorithm with $O(n \cdot \log{n})$ time complexity, based on AVL trees, was devised by David Christensen.[7]

References

1. Kendall, M. (1938). "A New Measure of Rank Correlation". Biometrika 30 (1–2): 81–89. doi:10.1093/biomet/30.1-2.81. JSTOR 2332226. 2. Kruskal, W.H. (1958). "Ordinal Measures of Association". 53 (284): 814–861. doi:10.2307/2281954. JSTOR 2281954. MR 100941. 3. Nelsen, R.B.
(2001), "Kendall tau metric", in Hazewinkel, Michiel, Springer, ISBN 978-1-55608-010-4 4. Prokhorov, A.V. (2001), "Kendall coefficient of rank correlation", in Hazewinkel, Michiel, Springer, ISBN 978-1-55608-010-4 5. Agresti, A. (2010). Analysis of Ordinal Categorical Data, Second Edition. New York, John Wiley & Sons. 6. Knight, W. (1966). "A Computer Method for Calculating Kendall's Tau with Ungrouped Data". 61 (314): 436–439. doi:10.2307/2282833. JSTOR 2282833. 7. Christensen, David (2005). "Fast algorithms for the calculation of Kendall's τ". 20 (1): 51–62. doi:10.1007/BF02736122. • Abdi, H. (2007). "Kendall rank correlation". In Salkind, N.J. Encyclopedia of Measurement and Statistics. Thousand Oaks (CA): Sage. • Kendall, M. (1948) Rank Correlation Methods, Charles Griffin & Company Limited
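To make the two algorithms described above concrete, here is a sketch in Python for tie-free data: a direct $O(n^2)$ numerator and an $O(n\log n)$ version that counts Bubble-Sort-equivalent swaps with a merge sort. This is an illustration following the pseudo-code's variable names, not a reference implementation:

```python
def sgn(v):
    return (v > 0) - (v < 0)

def naive_numerator(x, y):
    """O(n^2): concordant minus discordant pairs."""
    n = len(x)
    return sum(sgn(x[i] - x[j]) * sgn(y[i] - y[j])
               for i in range(1, n) for j in range(i))

def sort_and_count(a):
    """Merge sort returning (sorted list, Bubble-Sort swap count S)."""
    if len(a) <= 1:
        return a, 0
    mid = len(a) // 2
    left, sl = sort_and_count(a[:mid])
    right, sr = sort_and_count(a[mid:])
    merged, swaps, i, j = [], sl + sr, 0, 0
    while i < len(left) and j < len(right):
        if right[j] < left[i]:
            swaps += len(left) - i   # R[j] leapfrogs the rest of L
            merged.append(right[j]); j += 1
        else:
            merged.append(left[i]); i += 1
    merged += left[i:] + right[j:]
    return merged, swaps

def fast_numerator(x, y):
    """O(n log n), assuming no ties: n_c - n_d = n0 - 2*S(y)."""
    n = len(x)
    order = sorted(range(n), key=lambda k: (x[k], y[k]))
    _, s = sort_and_count([y[k] for k in order])
    return n * (n - 1) // 2 - 2 * s
```

For example, `fast_numerator([1, 2, 3, 4], [1, 2, 4, 3])` returns 4 (five concordant pairs, one discordant), giving $\tau = 4/6 = 2/3$.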
http://physics.stackexchange.com/questions/32386/superspace-uncertainty-principle?answertab=active
# Superspace Uncertainty Principle

Do the "operator for translations in superspace" and the "position in superspace operator" follow an uncertainty principle? How "real" is superspace? Aside from being weird (and possibly just a "mathematical trick") is superspace really just an extension of space on the same footing as real space (with off-shell degrees of freedom)? -

## 1 Answer

1) Well, no measuring device is going to measure a Grassmann-odd$^1$ number. A measuring device can only produce real outputs $\subseteq\mathbb{R}$. However, one can measure a fermionic condensate, i.e. an expectation value $\subseteq\mathbb{R}$ of a composite bosonic Hermitian operator, where each term is built from an even number of elementary fermionic operators, typically two. 2) It also makes sense to e.g. consider canonical anticommutation relations (CAR) for fermionic operators in complete analogy with canonical commutation relations (CCR) for bosonic operators. As is well-known, CCRs can be viewed as the source/origin of Heisenberg's uncertainty principle (HUP). It is natural to ponder whether there exists a fermionic HUP. It turns out that we can only see a HUP effect of CARs indirectly. We first have to construct composite bosonic Hermitian operator observables, say $A$ and $B$, built out of some elementary fermionic (and bosonic) operators that obey CARs (and CCRs). Again, the composite bosonic operators $A$ and $B$ must necessarily both consist of an even number of elementary fermionic operators in each of their terms. The uncertainty $$\langle ( \Delta A )^{2} \rangle \langle ( \Delta B )^{2} \rangle ~\geq~ \frac{1}{4} | \langle [ A,B ] \rangle |^{2}$$ of $A$ and $B$ is in principle measurable, and can be linked to the pertinent CARs (and CCRs). 3) An Example.
Consider Hermitian elementary fermions $c^i$, $i=1, \ldots, n$, with CAR $$\tag{1} \{c^i, c^j\}~=~ \hbar g^{ij}{\bf 1} , \qquad g^{ji}~=~g^{ij}~\in~\mathbb{R},$$ and a fermionic condensate $$\chi^{ij}~:=~\frac{\mathrm{i}}{2}\langle [c^i, c^j] \rangle, \qquad \chi^{ji}~=~-\chi^{ij}~\in~\mathbb{R}.$$ Let the two composite bosonic operators be $$A~=~ \frac{\mathrm{i}}{4}\alpha_{ij} [c^i, c^j] , \qquad \alpha_{ji}=-\alpha_{ij}~\in~\mathbb{R},$$ and $$B~=~ \frac{\mathrm{i}}{4}\beta_{ij} [c^i, c^j] , \qquad \beta_{ji}=-\beta_{ij}~\in~\mathbb{R}.$$ Then one may calculate that $$\tag{2} \langle [ A,B ] \rangle~=~-\mathrm{i}\hbar\alpha_{ij}g^{jk}\beta_{k\ell}\chi^{\ell i} .$$ The presence of the metric $g^{jk}$ on the right-hand side of (2) is due to the CAR (1), which leads to uncertainty in measuring the observables $A$ and $B$ simultaneously. -- $^1$ In this answer, bosonic (fermionic) will mean Grassmann-even (Grassmann-odd), respectively. - Ah, this is interesting (both the question and the answer); I never brooded about an uncertainty principle involving fermionic operators. Dear Qmechanic, could you write an additional line such that I can explicitly see how the anticommutators of the fermionic constituents of A and B are linked to the uncertainty? – Dilaton Jul 19 '12 at 16:32 I updated the answer. – Qmechanic♦ Jul 19 '12 at 19:51 Hey thanks Qmechanic, if I could I would +1 once again :-) – Dilaton Jul 19 '12 at 20:03 Thanks for the detailed answer! I only have time to skim it over at the moment so I'll comment if I have further questions about your answer. – tachyonicbrane Aug 9 '12 at 14:31
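Formula (2) can be checked in a concrete finite-dimensional realization of the CAR (1). The pure-Python sketch below is our own construction, not from the answer: it takes $g^{ij}=\delta^{ij}$, $n=4$, represents $c^i=\sqrt{\hbar/2}\,\gamma^i$ with Euclidean gamma matrices, and compares $\langle[A,B]\rangle$ in a random state with the right-hand side of (2):

```python
import random

# 2x2 building blocks
I2 = [[1, 0], [0, 1]]
sx = [[0, 1], [1, 0]]
sy = [[0, -1j], [1j, 0]]
sz = [[1, 0], [0, -1]]

def mmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y))) for j in range(len(Y[0]))]
            for i in range(len(X))]

def madd(X, Y): return [[a + b for a, b in zip(rx, ry)] for rx, ry in zip(X, Y)]
def msub(X, Y): return [[a - b for a, b in zip(rx, ry)] for rx, ry in zip(X, Y)]
def scal(c0, X): return [[c0 * a for a in row] for row in X]

def kron(A, B):
    return [[A[i][j] * B[k], None] for _ in ()] if False else \
           [[A[i][j] * B[k][l] for j in range(2) for l in range(2)]
            for i in range(2) for k in range(2)]

def comm(X, Y):  return msub(mmul(X, Y), mmul(Y, X))
def acomm(X, Y): return madd(mmul(X, Y), mmul(Y, X))

hbar = 1.0
# Four Hermitian 4x4 matrices with {G^i, G^j} = 2 delta^{ij} (Euclidean gammas)
G = [kron(sx, sx), kron(sx, sy), kron(sx, sz), kron(sy, I2)]
# c^i = sqrt(hbar/2) G^i realizes the CAR (1) with g^{ij} = delta^{ij}
c = [scal((hbar / 2) ** 0.5, g) for g in G]

random.seed(1)
n = 4
alpha = [[0.0] * n for _ in range(n)]
beta = [[0.0] * n for _ in range(n)]
for i in range(n):
    for j in range(i + 1, n):
        alpha[i][j] = random.uniform(-1, 1); alpha[j][i] = -alpha[i][j]
        beta[i][j] = random.uniform(-1, 1); beta[j][i] = -beta[i][j]

def observable(coef):
    """A = (i/4) coef_{ij} [c^i, c^j]."""
    A = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            A = madd(A, scal(0.25j * coef[i][j], comm(c[i], c[j])))
    return A

A, B = observable(alpha), observable(beta)

# a random normalized state
psi = [complex(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(n)]
nrm = sum(abs(z) ** 2 for z in psi) ** 0.5
psi = [z / nrm for z in psi]

def expval(M):
    return sum(psi[a].conjugate() * M[a][b] * psi[b]
               for a in range(n) for b in range(n))

chi = [[0.5j * expval(comm(c[i], c[j])) for j in range(n)] for i in range(n)]

lhs = expval(comm(A, B))
rhs = -1j * hbar * sum(alpha[i][j] * beta[j][l] * chi[l][i]
                       for i in range(n) for j in range(n) for l in range(n))
```

Any other Hermitian matrices obeying the CAR would serve equally well; the agreement of `lhs` and `rhs` holds for every choice of state and antisymmetric coefficients.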
http://nrich.maths.org/1404
# Proof Sorter - the Square Root of 2 Is Irrational

##### Stage: 5

This method of proof can easily be generalised to prove that $\sqrt n$ is irrational when $n$ is not a square number. What is the length of the diagonal of a square with sides of length 2? How do we find the value of $\sqrt 2$? What number has 2 as its square? What is the side of a square which has area 2? Now $(1.4)^2=1.96$, so the number $\sqrt 2$ is roughly $1.4$. To get a better approximation divide $2$ by $1.4$ giving about $1.428$, and take the average of $1.4$ and $1.428$ to get $1.414$. Repeating this process, $2\div 1.414 \approx 1.41443$ so $2\approx 1.414 \times 1.41443$, and the average of these gives the next approximation $1.414215$. We can continue this process indefinitely getting better approximations but never finding the square root exactly. If $\sqrt 2$ were a rational number, that is if it could be written as a fraction $p/q$ where $p$ and $q$ are integers, then we could find the exact value. The proof sorter shows that this number is IRRATIONAL so we cannot find an exact value.
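The divide-and-average process described above is the Babylonian (Heron's) method for square roots; a few lines of Python show how quickly it converges:

```python
import math

x = 1.4                     # the starting guess used in the text
for _ in range(5):
    x = (x + 2 / x) / 2     # average x with 2/x, as described above
# successive iterates: 1.4142857..., 1.41421356237..., then machine precision
```

Each step roughly doubles the number of correct digits, yet every iterate is a rational number, and the process never terminates exactly: that is precisely what the proof sorter establishes, since no fraction $p/q$ squares to 2.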
http://physics.aps.org/articles/v2/88
# Trend: Nearly perfect fluidity

Department of Physics, North Carolina State University, Raleigh, NC 27695, USA. Published October 26, 2009 | Physics 2, 88 (2009) | DOI: 10.1103/Physics.2.88

Is there a fundamental lower bound on viscosity? To answer this question, we can look at the coldest and hottest fluids that laboratories are able to produce.

## Introduction

From everyday experience, we have an intuitive feel for how a "good" fluid behaves. A good fluid, such as water, supports complicated flow patterns that decay slowly over time. In contrast, in a "poor" fluid, like honey or tar, we cannot observe waves or eddies, and flow processes decay quickly. Far beyond the realm of the everyday, experiments on ultracold gases and the extremely hot quark-gluon plasma are now allowing us to explore fundamental aspects of fluid mechanics. This article explores how results from studying such seemingly different systems are helping us address the question: Can there be such a thing as a perfect fluid? The physical quantity that distinguishes good from poor fluids is the shear viscosity $\eta$ (Fig. 1), which is a measure of the friction force $F$ per unit area $A$ created by a shear flow with transverse flow gradient $\nabla_y v_x$, $$\frac{F}{A}~=~\eta\,\nabla_y v_x. \qquad (1)$$ The SI unit for viscosity is a pascal second ($\mathrm{Pa\cdot s}$). Given that we define good fluids as having a low shear viscosity, it may be disconcerting to think that the experimental values of $\eta$ for water, liquid helium, cold atomic gases, and hot quark-gluon plasmas—all described as "good" fluids—vary by some 24 orders of magnitude. For example, the shear viscosity of a cold atomic Fermi gas is $\sim 2\times 10^{-15}\,\mathrm{Pa\cdot s}$, while the shear viscosity of the quark-gluon plasma produced recently at the Relativistic Heavy Ion Collider (RHIC)—and dubbed a "perfect fluid"—is $\sim 5\times 10^{11}\,\mathrm{Pa\cdot s}$. Fluid flows are described by a differential equation called the Navier-Stokes equation.
The Reynolds number, which is the ratio of inertial to viscous forces in the Navier-Stokes equation, determines the physical behavior of the solutions to this equation. Specifically, the kinds of flows that we associate with good fluidity are characterized by large Reynolds numbers. The Reynolds number is given by $$Re~=~\frac{mn}{\eta}\,vL, \qquad (2)$$ where $v$ and $L$ are the characteristic velocity and length scale of the flow, respectively, and $mn$ is the fluid's mass density. We note that the first term, ($mn/\eta$), is solely a property of the fluid. Since $mvL$ has units of angular momentum, $\eta/n$ can be measured in units of $\hbar$. The ratio $\eta/(\hbar n)$ is a useful measure of fluidity, but it cannot be directly applied to relativistic fluids for which the particle number is not conserved. Again, we can look to the Reynolds number for guidance. In a relativistic fluid, the Reynolds number is defined in terms of $\eta/s$, where $s$ is the entropy density. Since for many fluids the entropy per particle in units of Boltzmann's constant $k_B$ is of order one, the ratio $\eta/s$ in units of $\hbar/k_B$ can be used to compare both relativistic and nonrelativistic fluids. Another way to think about the ratio $\eta/s$ is to realize that shear viscosity determines the amount of entropy produced by time-irreversible, dissipative, effects, so this ratio measures the relative change in entropy over a characteristic time $L/v$. The ratio $\eta/s$ in units of $\hbar/k_B$ for many good fluids is of order one. Water near the triple point reaches $\eta/s\simeq 2$ and measurements in liquid helium give ratios as low as $\eta/s\simeq 0.7$, see Ref. [1] for an overview. This leads us to the question: Can we observe fluids for which $\eta/s$ is arbitrarily small, or is there a fundamental limit to fluidity?

## Estimating the fundamental lower bound on viscosity

A first hint that a lower limit on viscosity may exist comes from the molecular theory of transport phenomena in dilute gases.
This theory goes back to Maxwell, who realized that shear viscosity is related to momentum transport by individual molecules. A simple estimate of the shear viscosity of a dilute gas is

$\eta = \tfrac{1}{3}\, npl, \qquad (3)$

where $n$ is the density, $p$ is the average momentum of the molecules, and $l$ is the mean free path. Since the mean free path varies inversely with the density $n$, the shear viscosity of a dilute gas is, to a good approximation, independent of density. The fact that the viscosity of a dilute gas does not depend on its density has some interesting implications. For example, it means that the damping of a pendulum caused by the surrounding air is independent of atmospheric pressure. This counterintuitive result is confirmed by experiment, going back to measurements carried out by Maxwell himself [2].

At fixed density and temperature the shear viscosity is proportional to the mean free path, which becomes shorter as the particles in the fluid start to interact more strongly. We expect, however, that there is a limit to how small $\eta$ can become. Shear viscosity is a measure of the ability of a fluid to transport momentum from one point to another, and quantum mechanics limits the accuracy with which momentum and position can be simultaneously determined. Based on Heisenberg's uncertainty principle we expect that $pl \gtrsim \hbar$. Using $s \sim k_B n$, this relation implies a lower bound of $\eta/s \gtrsim \hbar/k_B$ [3]. There are many possible objections to this simple argument. First of all, kinetic theory is not applicable in the regime $pl \sim \hbar$, because there are no well-defined quasiparticles. Also, there are many systems for which the entropy per particle can become much larger than $k_B$. Is it possible to make the lower bound on viscosity more precise?

## Enter string theory

Perhaps surprisingly, a precise value of the bound on viscosity comes not from transport theory, but from a calculation in string theory.
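Before turning to that calculation, the scales in the kinetic-theory argument are easy to put into numbers. A quick check (CODATA/SI values of $\hbar$ and $k_B$ hardcoded; nothing here is specific to any one fluid) of the heuristic bound $\hbar/k_B$, and of the combination $\hbar/(4\pi k_B)$ that appears in the string-theory result below:

```python
import math

# CODATA/SI values, hardcoded:
hbar = 1.054571817e-34   # J*s
k_B  = 1.380649e-23      # J/K (exact in the SI)

heuristic = hbar / k_B                  # kinetic-theory estimate of the minimum eta/s
kss_like  = hbar / (4 * math.pi * k_B)  # the hbar/(4 pi k_B) combination

print(f"hbar/k_B        = {heuristic:.3e} K*s")
print(f"hbar/(4 pi k_B) = {kss_like:.3e} K*s")
# The two differ only by 1/(4*pi) ~ 0.08, the numerical factor quoted in Eq. (4).
```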
The now famous AdS/CFT (anti-de Sitter/Conformal Field Theory) conjecture says there is a correspondence between certain field theories in four dimensions and string theory on curved, higher-dimensional spaces (see Ref. [4] for a recent pedagogical overview). What is remarkable about the correspondence is that in the limit that the field theory becomes very strongly coupled—meaning the particles of the theory are strongly interacting—the corresponding string theory reduces to classical Einstein gravity. The most studied example of the correspondence is the equivalence between the large $N_c$ limit of a supersymmetric extension of QCD, $N=4$ superconformal QCD (the CFT of AdS/CFT), and string theory on $AdS_5 \times S^5$. Here, $N_c$ is the number of colors in the field theory, $AdS_5$ is five-dimensional anti-de Sitter space—a solution of the Einstein equations with a negative cosmological constant—and $S^5$ is a five-sphere. The boundary of $AdS_5 \times S^5$ is four-dimensional Minkowski space.

In order to study the boundary theory at finite temperature one has to consider solutions of Einstein's equations that contain a black hole in the $AdS_5$ space. The temperature of the field theory is equal to the Hawking temperature of the black hole. The shear viscosity of a plasma that sits on the boundary of this space is determined by the absorptive part of the stress tensor correlation function. The stress tensor is the source of a gravitational field in the five-dimensional geometry, and the shear viscosity of the plasma can be related to the absorption of gravitational waves by the black hole. Policastro et al. computed $\eta/s$ in the strong coupling limit of superconformal QCD and found [5] that it was equal to

$\frac{\hbar}{4\pi k_B}\ \left(\simeq 0.08\, \frac{\hbar}{k_B}\right). \qquad (4)$

This result is universal—it applies to all theories that have a classical gravitational dual—and any corrections to account for the fact that the coupling in the CFT is not infinite will only increase the ratio $\eta/s$.
Kovtun, Son, and Starinets (KSS) conjectured that $\eta/s \geq \hbar/(4\pi k_B)$ is a universal lower bound, valid for all fluids [6]. We now know that this conjecture is not strictly true—there are theories in which corrections lower the ratio $\eta/s$—but these violations are themselves bounded [7]. However, the precise value of the universal lower bound is not known at present. The shear viscosity to entropy density ratio of helium exceeds the KSS bound $\hbar/(4\pi k_B)$ by almost an order of magnitude. But are there fluids in nature that approach this lower bound? Remarkably, the two best candidates are the coldest and hottest fluids that can be produced in the laboratory.

## From the coldest fluid on earth …

The coldest fluid is made from optically trapped $^6\mathrm{Li}$ atoms. $^6\mathrm{Li}$ is a fermion, and it can be trapped in two different hyperfine states. We can view these states as the “up” and “down” states of a nonrelativistic spin-$1/2$ particle. The interaction between spin-up and spin-down particles can be tuned via a Feshbach resonance, where the two particles form a bound state with zero binding energy. On resonance the two-body scattering length diverges, and at this so-called “unitarity limit” the only dimensionful quantity describing the atomic gas is the particle density. As a consequence, the viscosity in units of $\hbar$ must be a multiple of the density $n$.

Nearly perfect fluidity is observed in experiments on cold atom gases like the one shown in Fig. 2 (from Ref. [8]). The top row shows the expansion of a Fermi gas out of a deformed trap after the trapping potential is removed. In hydrostatic equilibrium the pressure gradient in the short (transverse) direction is larger than the one in the long (axial) direction. After the potential is removed this pressure gradient accelerates the cloud in the transverse direction, and the cloud eventually becomes elongated along the initially short direction.
Shear viscosity tends to counteract this expansion, and at unitarity the expansion is consistent with almost ideal hydrodynamics. Additional information is obtained if the gas is released from a slowly rotating trap, as in the second and third rows of Fig. 2. As the cloud becomes spherical, the rotational motion speeds up, which implies that the moment of inertia is suppressed. This is the hallmark of irrotational flow, which is usually observed in superfluids, but at unitarity the moment of inertia is also suppressed in the normal phase of the fluid. A quantitative analysis of the data shown in Fig. 2 was reported by John Thomas from Duke University in Ref. [9]. He finds $\eta/s \simeq (0.1-0.5)$ in units of $\hbar/k_B$. This value is even smaller than an earlier estimate, $\eta/s \lesssim 0.5$, based on an analysis of the damping of collective oscillations [10].

## …to the hottest

Almost ideal hydrodynamic flow was also observed in a completely different physical system, the quark-gluon plasma created in heavy-ion collisions at RHIC at Brookhaven National Laboratory [11, 12, 13]. The conditions for creating the quark-gluon plasma could not be more different from the optically trapped cold atoms: The energy in gold-gold collisions at RHIC is $100\,\mathrm{GeV}$ per nucleon, and the nuclei are Lorentz contracted by a factor of $\gamma \simeq 100$. The transverse radius of a gold nucleus is approximately $6\,\mathrm{fm}$, and on the order of $7000$ particles are produced overall. The motion of the particles is relativistic, and the duration of a heavy-ion event is $\sim 6\,\mathrm{fm}/c$—about $10^{-23}\,\mathrm{s}$. In order for hydrodynamic theory to apply to the quark-gluon plasma, this time has to be large compared to the time it takes for the plasma to equilibrate. The most dramatic evidence for hydrodynamic behavior in the quark-gluon plasma is the observation of elliptic flow in noncentral heavy-ion collisions (see Fig. 3 for a schematic illustration of the geometry).
Elliptic flow occurs when the plasma collectively responds to pressure gradients in the initial state, just as in the case of cold atomic gases. Hydrodynamic evolution converts the initial pressure gradients to velocity gradients in the final state. In a heavy-ion collision we cannot control the deformation of the initial state, as we can by using specially designed traps for cold atom gases. Instead, the deformation of the plasma is determined by the shape of the overlap region of the colliding nuclei. This shape is governed by the impact parameter $b$, the transverse separation of the two nuclei (Fig. 3). The impact parameter can be measured on an event-by-event basis using the azimuthal dependence of the spectra of produced particles. Once the impact parameter direction is known, the particle distribution can be expanded in Fourier components of the azimuthal angle $\phi$:

$p^0 \left.\frac{dN}{d^3p}\right|_{p_z=0} = v_0(p_T)\bigl(1 + 2v_2(p_T)\cos(2\phi) + 2v_4(p_T)\cos(4\phi) + \ldots\bigr), \qquad (5)$

where $N$ is the number of particles, $p^0$ is the energy, and $p_T = (p_x^2 + p_y^2)^{1/2}$ is the transverse momentum. The Fourier coefficients $v_2, v_4, \ldots$ carry information about the deformation of the final state and, in particular, a positive $v_2$ harmonic implies that particles are preferentially emitted in the short direction, i.e., elliptic flow, just as in the cold atomic gases. Figure 4 shows that the experimentally measured elliptic flow coefficient can become as large as $15\%$, which corresponds to a significant anisotropy of the particle distribution, $(1+2v_2)/(1-2v_2) \simeq 1.85$. Shear viscosity slows down the transverse expansion of the system and reduces $v_2$. A quantitative analysis of the RHIC data, taken from the work of Paul Romatschke and Ulrike Romatschke, is shown in Fig. 4 [14]. Similar analyses can be found in [15, 16]. We observe that the best fit to the data is obtained for $\eta/s \simeq 0.03\,\hbar/k_B$.
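Equation (5) can be made concrete with a small Monte Carlo sketch (illustrative; $v_2 = 0.15$ is assumed to match the size quoted above, and the sampler and names are mine): draw azimuthal angles from a distribution proportional to $1 + 2v_2\cos(2\phi)$ and recover $v_2$ as the mean of $\cos(2\phi)$, which is how the harmonic is a simple average over produced particles.

```python
import math
import random

def sample_phi(v2, n, rng):
    """Accept-reject sampling from f(phi) proportional to 1 + 2*v2*cos(2*phi)."""
    fmax = 1.0 + 2.0 * v2
    angles = []
    while len(angles) < n:
        phi = rng.uniform(0.0, 2.0 * math.pi)
        if rng.uniform(0.0, fmax) < 1.0 + 2.0 * v2 * math.cos(2.0 * phi):
            angles.append(phi)
    return angles

rng = random.Random(1)
phis = sample_phi(0.15, 200_000, rng)
# For this distribution, <cos(2*phi)> = v2.
v2_est = sum(math.cos(2.0 * p) for p in phis) / len(phis)
print(f"v2 estimate: {v2_est:.3f}")   # statistically close to the input 0.15
```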
(Note that the hydrodynamic fit is not expected to describe the spectra for $p_T > 2\,\mathrm{GeV}$, because particles in this range of momenta are rare and have large mean free paths, and therefore don't reach equilibrium on such short time scales.) The best fit value of $\eta/s$ is smaller than $\hbar/(4\pi k_B)$, but there are significant uncertainties associated with the anisotropy of the initial state, which cannot be directly measured, and hydrodynamics breaks down in the late stages of the evolution, when quarks and gluons hadronize into the observed particles, such as pions, kaons, and nucleons. A conservative bound from the RHIC experiments is $\eta/s \lesssim 0.4\,\hbar/k_B$. Current work on both the cold atomic gases and the quark-gluon plasma is devoted to sharpening these numerical estimates, and to correlating the shear viscosity with other transport properties, like the energy loss of energetic probes.

## Future experiments

Ultimately, we would like to understand what nearly perfect fluids are like: Is momentum transport governed by quasiparticles, or are there no quasiparticles at all, as suggested by the AdS/CFT correspondence? There are several ways to address this question. One approach is to use quantum Monte Carlo calculations to compute the spectral function of the energy-momentum tensor. Kinetic theory predicts that the spectral function contains a peak associated with the contribution of quasiparticles, whereas the AdS/CFT correspondence leads to a completely smooth spectral function. Experimentally, we can study the way hydrodynamics breaks down as one goes to smaller or even more deformed systems, and how shear viscosity is correlated with other transport properties like the diffusion constant. In heavy-ion collisions, diffusion can be studied by measuring the extent to which heavy charm and bottom quarks follow the flow of light quarks. Heavy quarks are rare, and in the experiment their trajectories have to be reconstructed from their decay products.
As a consequence, flow data for heavy quarks have not yet reached the accuracy that has been achieved for light quarks. Finally, both theorists and experimentalists are eagerly awaiting data from the Large Hadron Collider (LHC) at CERN. The LHC will be able to produce a higher temperature quark-gluon plasma than RHIC can currently reach. Data on elliptic flow will tell us whether hydrodynamics continues to be applicable under these conditions, and how the “perfection” of the hot plasma at the LHC compares to what was seen at RHIC.

### References

1. T. Schäfer and D. Teaney, Rep. Prog. Phys. (to be published); arXiv:0904.3107.
2. S. G. Brush, The Kind of Motion We Call Heat (North Holland, Amsterdam, 1986).
3. P. Danielewicz and M. Gyulassy, Phys. Rev. D 31, 53 (1985).
4. I. R. Klebanov and J. M. Maldacena, Phys. Today 62, No. 1, 28 (2009).
5. G. Policastro, D. T. Son, and A. O. Starinets, Phys. Rev. Lett. 87, 081601 (2001).
6. P. Kovtun, D. T. Son, and A. O. Starinets, Phys. Rev. Lett. 94, 111601 (2005).
7. A. Buchel, R. C. Myers, and A. Sinha, arXiv:0812.2521.
8. B. Clancy, L. Luo, and J. E. Thomas, Phys. Rev. Lett. 99, 140401 (2007).
9. J. E. Thomas, arXiv:0907.0140.
10. T. Schäfer, Phys. Rev. A 76, 063618 (2007).
11. S. S. Adler et al. (PHENIX Collaboration), Phys. Rev. Lett. 91, 182301 (2003).
12. B. B. Back et al. (PHOBOS Collaboration), Phys. Rev. C 72, 051901 (2005).
13. J. Adams et al. (STAR Collaboration), Phys. Rev. C 72, 014904 (2005).
14. P. Romatschke and U. Romatschke, Phys. Rev. Lett. 99, 172301 (2007).
15. K. Dusling and D. Teaney, Phys. Rev. C 77, 034905 (2008).
16. H. Song and U. W. Heinz, Phys. Rev. C 77, 064901 (2008).

### About the Author: Thomas Schäfer

Thomas Schäfer received his Ph.D. at the University of Regensburg, Germany, in 1992.
He was a postdoc at Stony Brook University, NY (1992–1994), the Institute for Nuclear Theory in Seattle, WA (1995–1998), and the Institute for Advanced Study in Princeton, NJ (1998–1999). From 2000 to 2002 he was a professor at Stony Brook University, and in 2003 he moved to North Carolina State University. His research interests include QCD, many-body physics, and transport theory.
# Thread: Express in real and imaginary parts

1. ## Express in real and imaginary parts

I am having trouble with this question. I just don't know where to start; I'm not looking for a straight-up answer, just hints to help me on my way to solving it myself. Thanks in advance.

Express the following complex number in real and imaginary parts:

z = 1 / cos^2(0.5*j*ln(j))

2. ## Re: Express in real and imaginary parts

$\ln z=\ln |z|+i\arg z\Rightarrow \ln i=\cdots$

Should be straightforward from there.
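For what it's worth, the hint can be checked numerically with Python's cmath (principal branch of the logarithm assumed): ln(j) = iπ/2, so 0.5·j·ln(j) = -π/4 is purely real, cos²(-π/4) = 1/2, and z = 2 with zero imaginary part.

```python
import cmath, math

# Principal branch: log(1j) = i*pi/2, so 0.5*1j*log(1j) = -pi/4, which is real.
w = 0.5 * 1j * cmath.log(1j)
z = 1.0 / cmath.cos(w) ** 2

print(w)   # -pi/4, up to rounding
print(z)   # (2+0j): real part 2, imaginary part 0
```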
Physics Forums

## [SOLVED] Renormalization

Arnold Neumaier wrote:

> Matthew Nobes wrote:
> > On 2005-04-26, Arnold Neumaier <[email protected]> wrote: [lattice QCD]
> > > But you can't go to the limit,
> >
> > Yes you can, in perturbation theory you can take the a->0 limit.
>
> But not on a small or even a big computer.

I think we may be talking past each other. In perturbation theory, I can compute some quantity to n loops. I will have some counterterms, which I can then use to get rid of infinities. If I used dim-reg I get poles in \epsilon; if I use a lattice I get things like \log(a) and 1/a. Are you claiming I can't adjust counterterms to get rid of the divergences in the lattice case?

> > The renormalization program works the same way, order by order, you tune counterterms, etc. It is more technically involved (there are more diagrams, and more, and more complicated, Feynman rules) but is possible.
> >
> > There are some theorems, which prove this, due to Reisz IIRC.
>
> Interesting; I didn't know this.

[snip]

> but the abstract says it is for SU(N) gauge theory. In particular, it does not apply to QED.

Okay, but for SU(N) the point stands. As it does for massive QED.

[snip]

> > > while you can in dimensional regularization at fixed loop order. With lattice calculations, you'd never come close in accuracy to the experimental value of the Lamb shift for hydrogen.
> >
> > I don't know about the Lamb shift calculations, but it's worth pointing out that Tom Kinoshita doesn't use dim-reg for his four- and five-loop computations of the magnetic moment anomaly.
>
> Yes. He uses NRQED, which is, however, specially adapted to QED.

Huh? For the g-2 he uses standard QED, not NRQED. See hep-ph/0402206, he uses standard (relativistic) fermion propagators (A3).

Matthew

[email protected] wrote:

> Please consider the example of my last post, which I will repeat below.
> Let
>
> F1(n) = \int_1^{\infty} x^n dx (integration over the interval [1, oo]).
>
> An integral is defined as an infinite sum. For example at n=0 we would have
>
> F1(0) = 1 + 1 + 1 + 1 + 1 + etc. (1)
>
> Now define another function,
>
> F2(n) = -1/(n+1). (2)
>
> Now F1(n) = F2(n) for n < -1. If we replace n by the complex variable z we have
>
> F2(z) = -1/(z+1). (3)
>
> We would say that F2(z) is the analytic continuation of F1(z) to the entire complex plane (except z = -1). Now at z = 0 we have F2(0) = -1. If F1(z) and F2(z) are equal at z = 0 then we would have
>
> 1 + 1 + 1 + 1 + etc. = -1, (4)
>
> which doesn't make sense.
>
> Now suppose I have a mathematical model of a physical process. My model predicts that under certain conditions I will get the result R1, which is given by
>
> R1 = 1/(F1(0) + 1). (5)
>
> Now when I examine my expression for F1(0) I conclude it is infinite. If I use this fact in the above expression for R1 I get
>
> R1 = 1/(oo + 1) = 0. (6)
>
> This is a perfectly reasonable result for a physical process. However, by your reasoning I should replace F1(0) by F2(0). In this case,
>
> R1 = 1/(-1 + 1) = 1/0 = oo. (7)
>
> Therefore I get a nonsensical result. This example shows that you cannot arbitrarily replace a function with its analytic continuation. They are not equal everywhere.

They are equal where both make sense, and the analytic continuation is the unique way to make sense of them elsewhere. It is like (x^2-1)/(x-1) = x+1. Equality holds only for x not equal to 1, but no one doubts that the right hand side is the right way to extend the function where it wasn't defined before.

Arnold Neumaier

Dan Piponi wrote:

> A Bergman wrote:
> > Sure it does (up to monodromies).
>
> Dan Solomon's criticisms are entirely justified. There is no reason to think f1(z) = f2(z) outside of C1 because he made no assumption that f1 was analytic outside of C1.
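An aside on the example quoted above: the claim that F1 and F2 agree only for n < -1 is easy to verify numerically. A minimal sketch (pure Python; the function names are mine, not the poster's). The truncated integral F(n, X) = (X^(n+1) - 1)/(n+1) converges to -1/(n+1) as X grows only when n < -1, and diverges for n >= -1:

```python
# F(n, X) = \int_1^X x^n dx in closed form (valid for n != -1).
def truncated_integral(n, X):
    return (X ** (n + 1) - 1.0) / (n + 1)

# F2(n) = -1/(n+1), the analytic continuation of the n < -1 values.
def continuation(n):
    return -1.0 / (n + 1)

# For n < -1 the truncated integral approaches the continuation as X grows:
print(truncated_integral(-2.0, 1e9), continuation(-2.0))   # both ~1.0
# For n = 0 the integral diverges with X, while F2(0) = -1:
print(truncated_integral(0.0, 1e9), continuation(0.0))
```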
> And even if f1 is analytic everywhere in its domain it might not actually be defined outside of C1 even though f2 is.

Of course this is true from a fully rigorous point of view, since there is no rigorous definition of a covariant 4D quantum field theory. Thus one needs to make some additional reasonable assumptions somewhere, and analyticity is _the_ reasonable assumption, since it is known to hold in the nonrelativistic case.

> This replacement of functions with their analytic continuations is just one of those things that physicists do, but rarely explicitly state they are doing, that makes life difficult for mathematicians who try to follow their work.

Yes, one needs to learn to read between the lines. Most of physics is written for physicists, not for mathematicians.

Arnold Neumaier

C. M. Heard wrote:

> I remember puzzling over dimensional regularization as a graduate student, and the issue I had was this: to the extent that a d-dimensional version of a quantum field theory is defined at all, it is for non-negative integral values of d only, and if the theory is non-trivial, it is usually for some subset of the integral values less than four. That is a finite set of points, and you cannot make an analytic continuation from that. Sure, the integrals that we get as terms in the perturbation series have a form that admits definition for a continuum of values, and that will admit a continuation; but surely this must be seen as a fortuitous accident, since the underlying QFT is itself undefined except for a finite number of points in that continuum.

For the traditional physicist's setting of dimensional regularization, this is indeed true. But there are variants that are better justified.

> Does anyone who does rigorous work in constructive QFT use dimensional regularization? Can it actually be made rigorous?
> (The only regularization I've seen discussed in the context of constructive QFT is lattice regularization, but that is probably because of my ignorance.)

See the section "Dimensional regularization" in my theoretical physics FAQ at http://www.mat.univie.ac.at/~neum/physics-faq.txt

Arnold Neumaier

Matthew Nobes wrote:

> On 2005-04-27, Arnold Neumaier <[email protected]> wrote:
>
> > [email protected] wrote:
> > [snip]
> > > Arnold Neumaier
> > >
> > > So Dr. Neumaier says yes and you say no, no, no. Maybe you two should talk to each other.
> >
> > It was a matter of emphasis. We both agree that _some_ regularization procedure must be part of the basic postulates. The question is which one.
> >
> > I recommended dimensional regularization, because it is most flexible, but allowed in my qualifying statement for less flexible alternatives. Matthew Nobes http://www.lns.cornell.edu/~nobes/ works with lattice gauge theories and hence emphasizes the freedom to use lattice regulators instead.
>
> Not to start an argument, but it should be pointed out that "most flexible" strongly depends on the problem you want to solve. For a numerical evaluation of the path integral, a lattice regulator is the "most flexible".
>
> As I understand it (and I'm willing to be corrected) dim-reg is defined only at the level of perturbation theory.

But perturbation theory in the widest sense, including renormalization group improved computations and effective 1PI and 2PI theories, which go far beyond simple perturbation theory. (For example, one can even consider phase transitions.)

> For non-abelian gauge theories, dim-reg is certainly the most flexible scheme for perturbative calculations.
>
> [snip]
>
> > The proof of absence of infinities for all 4D renormalizable theories in the limit of removed cutoff has been given only for dimensional regularization, I believe.
> > Thus dimensional regularization is safe, while the situation is less clear for lattice regularization.
>
> The theorems of Reisz apply here. For certain classes of lattice actions (including some, but not all, of the popular actions used in simulations) you can show that the a -> 0 limit in perturbation theory gives you the same results as the eps -> 0 limit in dim-reg.

I believe this statement is at best true for asymptotically free theories. Phi^4 is nontrivial in perturbation theory, but appears to be trivial in lattice regularization, which means that one does _not_ get the same results.

> This is certainly true for the Wilson gauge and quark actions, so a lattice regulated theory is equivalent to using dim-reg, from a perturbative standpoint.

I'd like to see references substantiating this.

Arnold Neumaier

On 2005-05-02, Arnold Neumaier <[email protected]> wrote:

> drnobes wrote:
> [snip]
> > I think we may be talking past each other.
>
> Yes, it seems so.
>
> > In perturbation theory, I can compute some quantity to n-loops. I will have some counterterms, which I can then use to get rid of infinities. If I used dim-reg I get poles in \epsilon, if I use a lattice I get things like \log(a) and 1/a. Are you claiming I can't adjust counterterms to get rid of the divergences in the lattice case?
>
> No; only that you can't do the limit on a computer. (Though one perhaps can, with today's symbolic packages?)

Okay, we were talking past each other. I was referring to ordinary PT. And no, you can't take the limit as a -> 0. You can run at several spacings and fit, which is good enough for me, but is not taking the limit.

[snip]

> > > but the abstract says it is for SU(N) gauge theory. In particular, it does not apply to QED.
> >
> > Okay, but for SU(N) the point stands. As it does for massive QED.
>
> The latter is not clear to me, since QED contains fermions.
> Is there any work on proving that lattice regularization works with fermions? I thought there were problems with fermion doubling!?

In another post I mentioned that the Reisz theorems apply to Wilson quarks, which don't have doublers. Far better are the "new" formulations of lattice fermions, which go by the names "domain wall" or "overlap". They have an exact chiral symmetry (they're also equivalent). Unfortunately they're very expensive to simulate with.

> > > > out that Tom Kinoshita doesn't use dim-reg for his four- and five-loop computations of the magnetic moment anomaly.
>
> Yes, because, as he remarks in reference [62] of the paper quoted below, it is not known how to implement it in a numerically stable way at high orders. (But using lattice regularization would be even more inaccurate...)

Yes, which was my point: regulators should be tuned to match the problem at hand. I'm still not sure what you mean that the lattice regulator would be more inaccurate. If the calculation was done to four loops with any regulator, it would be the same. (Again, I'm *not* talking about Monte Carlo simulation, just using a lattice regulator.)

> > > Yes. He uses NRQED, which is, however, specially adapted to QED.
> >
> > Huh? For the g-2 he uses standard QED, not NRQED. See hep-ph/0402206, he uses standard (relativistic) fermion propagators (A3).
>
> I didn't know this recent paper. Earlier work was done with NRQED. (Perhaps by Lepage and not Kinoshita?)

Peter (Lepage, my boss) did the Lamb shift with NRQED, and lots of other bound state problems. I don't think he did the MMA though. The MMA in NRQED would require you to match the \sigma \cdot B operator anyway, which I suspect would amount to doing the relativistic calculation. I certainly can't see how you'd get an accurate MMA without an accurate matching for that operator (and probably at least one more order in v/c matched to at least one loop).
--
Matthew Nobes | email: [email protected]
Newman Lab, Cornell University | web: http://lepp.cornell.edu/~nobes/
Ithaca NY 14853, USA

Aaron Bergman points out:

> Weinberg explicitly states above (11.A.2)

Yes, Weinberg states he is making an analytic continuation, but he doesn't justify it mathematically. He's simply taking something he can't compute and replacing it with something he can compute. The unstated hidden assumption in physics publications is that you can make this substitution whenever you feel like it. Occasionally physicists do make these assumptions explicit: e.g., I seem to remember one school of thought explicitly stating, as an axiom of physics, that the S-matrix is analytic, but this is the exception. This isn't meant to be a criticism of what physicists do; I'm just trying to give one reason why physicists and mathematicians can sometimes speak at cross-purposes, something that has been happening here.

Arnold Neumaier says:

> Yes, one needs to learn to read between the lines.

But physicists may also sometimes need to recognise when they're doing that - especially when talking to mathematicians.

> > > At present I believe that there is no such thing as an interacting QFT based on Feynman-Dyson perturbation theory in 3+1 dimensions. Dr. Neumaier tells me that if I sacrifice Lorentz invariance then there is a way of developing the theory from first principles without sacrificing consistency. The lack of Lorentz invariance makes the theory ugly compared to what I worked out myself, but if it does indeed work, I would be forced to accept it, and I would qualify my criticisms of QFT (as currently practised) accordingly.
> >
> > QFT without Lorentz invariance is not ugly at all. It is simply a more general approach, from which the Lorentz invariant theories are a special case.
>
> > Any situation where a fudge factor is introduced to conceal some basic failure is ugly,
>
> You are shifting grounds.
> Above you said that the lack of Lorentz invariance makes the theory ugly, and I showed that this is not the case, just as differential geometry is not ugly though it does not assume any symmetries.

I did not say that every generalisation of a theory is ugly. But the generalisation of QFT to the case where it is only Lorentz invariant with a particular choice of the generalising parameter certainly is. Especially as it flies in the face of some of the most copious experimental evidence we have (i.e., that which supports Special Relativity). If one is going to generalise, it should be to align with experimental evidence rather than to intentionally contradict it (Newton's generalisation of the terrestrial force that makes apples fall to understand planetary motions is a nice example).

> > whether it is a non-Lorentz-invariant cutoff or (meaningless) complex number of spacetime dimensions in RQFT, or an extra fictitious amount introduced into a company's accounts to make the books balance. These are all examples of the same phenomenon - dishonesty, and dishonesty is ugly.
>
> There is nothing dishonest in renormalization since nothing is concealed. Everything happens completely open, for anyone to check. And many understand what is there to understand.

I have seen upright, respectable members of the HEP community write equations on blackboards where they equate the difference of two divergent integrals to the number they want to get. I do not believe that such things would happen had not the floodgates of mathematical impropriety been opened by advocates of renormalization.

> > I have more important things to do than to write papers specifically for someone like you who needs every detail spelled out and gives up at the slightest inconvenience. My students can do much better; they give up only when they are really lost, and then come with the part they completed successfully to show that they deserve further supervision.
> > > > If we follow up your earlier suggestion of working with the action in terms > > of Fourier transform fields we just get the same inconsistency as before. > > > > The \phi^4 action in position space is > > > > I = \int d^4x 1/2 (\partial \phi(x))^2 - m^2/2 \phi(x)^2 - \gamma/4! > > \phi(x)^4 > > > > If we write \phi(x) = \int d^4p e^{ip.x} \phi(p) then we get an action in > > terms of the Fourier transform fields thus: > > > > I = (2\pi)^4 ( 1/2 \int d^4p (p^2-m^2) \phi(p)\phi(-p) - > > \gamma/4! \int d^4p_1 d^4p_2 d^4p_3 \phi(p_1) \phi(p_2) \phi(p_3) > > \phi(-p_1 - p_2 - p_3) ) > > > > I think that it is reasonably clear that the variational equations are in > > agreement with those obtained by varying the position space action. > > Yes. But this only gives the field equations for the unrenormalized > theory. > > > > Now let > > us make \phi(p) vanish outside some given momentum space region, which using > > the terminology I developed earlier is > > > > \phi(p) = f(p, \Lambda) \chi(p) (*) > > > > where \chi(p) is operator-valued and f(p, \Lambda) is a c-number function > > that vanishes outside a region whose bounds are defined by \Lambda. > > You need to substitute (*) into the above expression for I and get > a modified action I(Lambda,chi). Then you need to vary chi in this > action to get the modified field equations. These are consistent with > this action, and do not include any \chi(p) at momenta beyond the > cutoff. > > But they are different from what you'd get from inserting (8) > in the unregularized field equations. Which gave you the seeming > inconsistency. Why does this make a difference? One just applies the Chain Rule when varying the action and ends up with the same equations. Chris Oakley wrote: >>>If we follow up your earlier suggestion of working with the action in terms >>>of Fourier transform fields we just get the same inconsistency as before.
>>>The \phi^4 action in position space is >>> >>>I = \int d^4x 1/2 (\partial \phi(x))^2 - m^2/2 \phi(x)^2 - \gamma/4! >>>\phi(x)^4 >>> >>>If we write \phi(x) = \int d^4p e^{ip.x} \phi(p) then we get an action >>>in terms of the Fourier transform fields thus: >>> >>>I = (2\pi)^4 ( 1/2 \int d^4p (p^2-m^2) \phi(p)\phi(-p) - >>> \gamma/4! \int d^4p_1 d^4p_2 d^4p_3 \phi(p_1) \phi(p_2) \phi(p_3) > >>>\phi(-p_1 - p_2 - p_3) ) >>> >>>I think that it is reasonably clear that the variational equations are >>>in agreement with those obtained by varying the position space action. >> >>Yes. But this only gives the field equations for the unrenormalized >>theory. >> >> >> >>>Now let >>>us make \phi(p) vanish outside some given momentum space region, which >>> using the terminology I developed earlier is >>> >>>\phi(p) = f(p, \Lambda) \chi(p) (*) >>> >>>where \chi(p) is operator-valued and f(p, \Lambda) is a c-number function >>>that vanishes outside a region whose bounds are defined by \Lambda. >> >>You need to substitute (*) into the above expression for I and get >>a modified action I(Lambda,chi). Then you need to vary chi in this >>action to get the modified field equations. These are consistent with >>this action, and do not include any \chi(p) at momenta beyond the >>cutoff. >> >>But they are different from what you'd get from inserting (8) >>in the unregularized field equations. Which gave you the seeming >>inconsistency. > > Why does this make a difference? One just applies the Chain Rule when > varying the action and ends up with the same equations. No. It makes a difference because the variation is over a different set of allowed changes. If you vary a field which has no fast Fourier modes you cannot get equations involving the latter! Do the calculations and you'll see. Talking without doing is not enough to understand.
Arnold Neumaier Aaron Bergman wrote: > In article <[email protected]>, > "Dan Piponi" <[email protected]> wrote: > > > Aaron Bergman points out: > > > Weinberg explicitly states above (11.A.2) > > > > Yes, Weinberg states he is making an analytic continuation, but he > > doesn't justify it mathematically. > > I wouldn't know what it would mean to justify it mathematically. > I am not sure from your response if you are saying that it is not required to justify this procedure mathematically or you simply don't know how to do so. Regardless, I think that Weinberg's explanation of dimensional regularization on page 477 of his book cannot be justified mathematically. Suppose we have the following relationship, P = A + B (1) If we have another relationship, A = C (2) we can rewrite (1) as, P = C + B (3) Now what does the equal sign mean in equation 2? It means two quantities, A and C, are numerically equal. That is, we have some process of turning A and C into a number, and when that process is used A and C turn into the same number. Now what Weinberg does is this. He starts with an expression for the polarization tensor which is of the form of (1). (See page 477 of his book). He rewrites (1) in terms of a parameter "d" so he has, P(d) = A(d) + B(d) (4) The original expression is recovered when d=4. He then discovers that for sufficiently small d (I think d<2) he has the relationships given by 11.2.11 and 11.2.12 of his book. Consider 11.2.12. We write this as, A(d) = A1(d) (5) where A(d) is the expression on the left side of 11.2.12 and A1(d) is the expression on the right side. However, the equality is only valid for d<2, yet he assumes it is valid for values of d near 4. This is not correct. You cannot turn A(d) and A1(d) into the same number for d>2. They are not numerically equal. An integral is a shorthand way of writing an infinite sum. For values of d near 4, A(d) is a divergent sum of positive numbers.
A1(d), on the other hand, is finite (except at d=4) and, in fact, can be negative. So we can find values of d near 4 where we end up with something like this, 1 + 2 + 3 + 5 + 10 + etc. = -3 (6) (Note - this is for illustrative purposes only) Yet what Weinberg does is to substitute A1(d) for A(d) in his expression for the polarization tensor and then evaluates the expression for values of d near 4 where the two quantities, A(d) and A1(d), are certainly not numerically equal. This is not a mathematically correct step. Now it has been argued that this step is justified by analytic continuation. However this justifies nothing since the analytic continuation of a function does not necessarily equal the original function in the entire complex plane. This is what bothers me about this procedure. It seems to seriously lack mathematical rigor. I know that this procedure is required to make QFT work and agrees with experiment. However, aren't physicists interested in mathematical rigor also? In a separate post Dr. Neumaier recommended some books that have a more rigorous approach. I have ordered the one by G. Scharf and I am interested in reading how he handles the problem of regularization. Dan Solomon In article <[email protected]>, Chris Oakley <[email protected]> wrote: > > What you really do > > is to compute everything regularized, with only finite quantities, and > > then limit away the regulator, keeping the physical couplings constant. > > This is not how I would put it. What I think you do is entirely dependent on > what you want the theory to look like after it has been renormalized. If you > can tie this down precisely enough then I agree that the renormalization > scheme is irrelevant. But it is these requirements that in reality define > the theory - not the quantum field theory you started with. The quantum theory we started with was nonsense. The only hope is that it can be derived as an effective theory for some theory which is not nonsense.
That's the trick we pull off with renormalization; we can compute things that don't depend on the UV completion, just assuming that such a thing exists. Aaron Arnold Neumaier wrote: > C. M. Heard wrote: > > Does anyone who does rigorous work in constructive QFT use dimensional > > regularization? Can it actually be made rigorous? (The only > > regularization I've seen discussed in the context of constructive QFT > > is lattice regularization, but that is probably because of my ignorance.) > > See the section ''Dimensional regularization'' in my > theoretical physics FAQ at > http://www.mat.univie.ac.at/~neum/physics-faq.txt This is a nice exposition on how to regularize integral expressions that arise in a perturbative computation of the S matrix, but it does not answer my question of whether it is possible to use dimensional regularization to actually construct a field theory in a non-perturbative way. //cmh > This is what bothers me about this procedure. It seems to seriously > lack mathematical rigor. I know that this procedure is required to make > QFT work and agrees with experiment. However, aren't physicists > interested in mathematical rigor also? In a separate post Dr. Neumaier > recommended some books that have a more rigorous approach. I have > ordered the one by G. Scharf and I am interested in reading how he handles > the problem of regularization. > > Dan Solomon I too purchased Scharf's book. The title "Finite Quantum Electrodynamics" certainly sounds promising, but I was disappointed. Superficially it all looks similar to the good old meaningless renormalized perturbation theory that we inherited from Feynman, et al, except that it seems to take much more work to do a lot less. A little less superficially, it seems to be about introducing extra functions to multiply the fields in time-ordered products for which certain limits can be taken to extract finite amplitudes.
The principle, though, is the same as every other renormalization scheme: generalize the theory in some way, set up some kind of differencing with the parameters and then take a limit in a prescribed way to get finite amplitudes. It may be finite, but it is still contrived. Their argument is, however, significantly more involved than any I have seen, so authoritative comments (from me, at least) will have to wait until I can find time to study it properly. Their methods are based on a paper by Epstein & Glaser, Ann. Inst. Poincare A 19 (1973) 211. In article <[email protected]>, [email protected] wrote: > Aaron Bergman wrote: > > In article <[email protected]>, > > "Dan Piponi" <[email protected]> wrote: > > > > > Aaron Bergman points out: > > > > Weinberg explicitly states above (11.A.2) > > > > > > Yes, Weinberg states he is making an analytic continuation, but he > > > doesn't justify it mathematically. > > > > I wouldn't know what it would mean to justify it mathematically. > > > > I am not sure from your response if you are saying that it is not > required to justify this procedure mathematically or you simply don't > know how to do so. Neither. I'm saying that I'm not sure what it would mean. We don't have an axiomatic definition of QFT in any useful sense. What we have is a recipe that's well defined and works. What more do you want? I've explained, I think, a number of times, why we use analytic continuation to get sensible answers. You can complain all you want that the integral is not equal to its analytic continuation everywhere, but that doesn't mean that the procedure of replacing an integral with its analytic continuation is ill-defined in any way. It's perfectly rigorous. Its justification is more complicated, but the basic answer is that it works. Other cutoffs can be more easily justified, but since they give the same answer, we're happy with them all. Aaron C. M. Heard wrote: > Arnold Neumaier wrote: > >>C. M.
Heard wrote: >> >>>Does anyone who does rigorous work in constructive QFT use dimensional >>>regularization? Can it actually be made rigorous? (The only >>>regularization I've seen discussed in the context of constructive QFT >>>is lattice regularization, but that is probably because of my ignorance.) >> >>See the section ''Dimensional regularization'' in my >>theoretical physics FAQ at >> http://www.mat.univie.ac.at/~neum/physics-faq.txt > > This is a nice exposition on how to regularize integral expressions that > arise in a perturbative computation of the S matrix, but it does not answer > my question of whether it is possible to use dimensional regularization to > actually construct a field theory in a non-perturbative way. I think this is an open research problem. Arnold Neumaier > It's perfectly rigorous. said Aaron Bergman I think rigorous isn't quite the right word here. It's a well defined procedure (well, I'm not 100% sure if it is but I'll give it the benefit of the doubt for now) that tells you how to transform one expression into another one. That transformation isn't one of the usual ones of mathematics such as substituting A for B in C when we know A=B. So it's not a mathematical derivation in the usual sense. The situation is probably similar to Newton working with infinitesimals or Heaviside with differential operators before either of these things were fully formalised. In article <[email protected]>, "Dan Piponi" <[email protected]> wrote: > > It's perfectly rigorous. > said Aaron Bergman > > I think rigorous isn't quite the right word here. It's a well defined > procedure (well, I'm not 100% sure if it is but I'll give it the > benefit of the doubt for now) that tells you how to transform one > expression into another one. That transformation isn't one of the usual > ones of mathematics such as substituting A for B in C when we know A=B. > So it's not a mathematical derivation in the usual sense.
The situation > is probably similar to Newton working with infinitesimals or Heaviside > with differential operators before either of these things were fully > formalised. If you want me to fully define a quantum field theory for you, then I'm obviously not going to be able to do it. But, we do understand, physically, what's going on. This gives us a well-defined procedure to get a formal power series from the QFT, but that's far from a definition. Aaron
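[An editorial aside, not part of the thread: the phenomenon Dan Solomon objects to above — a divergent integral being assigned a finite, even negative, value by analytic continuation in the dimension — is easy to reproduce in a toy case. The integral I(d) = ∫₀^∞ x^(d-1)/(1+x²) dx converges only for 0 < d < 2, where a standard Beta-function evaluation gives π/(2 sin(πd/2)); that closed form, however, continues analytically to all non-even d. The specific integral is my choice for illustration, not one from Weinberg's book.]

```python
import math

def closed_form(d):
    # Analytic continuation of I(d) = integral_0^inf x^(d-1)/(1+x^2) dx.
    # The integral itself converges only for 0 < d < 2; this Beta-function
    # evaluation is defined for every d that is not an even integer.
    return math.pi / (2 * math.sin(math.pi * d / 2))

# Inside the strip of convergence the formula matches the integral.
# Crude midpoint-rule check at d = 1, where I(1) = arctan(infinity) = pi/2:
n, xmax = 200_000, 2_000.0
h = xmax / n
numeric = sum(h / (1 + ((i + 0.5) * h) ** 2) for i in range(n))
assert abs(numeric - closed_form(1.0)) < 1e-2

# Outside the strip the integrand x^2/(1+x^2) tends to 1 and the integral
# diverges, yet the continuation assigns it a finite negative value:
print(closed_form(3.0))  # -pi/2, approximately -1.5708
```

Note that the continuation itself has a pole at d = 4 (sin(2π) = 0) — the analogue of the 1/(d-4) poles that dimensional regularization isolates and subtracts.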
http://projecteuclid.org/DPubS?service=UI&version=1.0&verb=Display&handle=euclid.aoms/1177692643
### Discrete Dynamic Programming with Unbounded Rewards

J. Michael Harrison

Source: Ann. Math. Statist. Volume 43, Number 2 (1972), 636-644.

#### Abstract

Countable state and action Markov decision processes are investigated, the objective being to maximize expected discounted reward. Well-known results of Maitra and Blackwell are generalized, their assumption of bounded rewards being replaced by weaker conditions, the most important of which is as follows. The expected reward to be received at time $n + 1$ minus the actual reward received at time $n$, viewed as a function of the state at time $n$, the action at time $n$ and the decision rule to be followed at time $n + 1$, can be bounded. It is shown that there exists an $\varepsilon$-optimal stationary policy for every $\varepsilon > 0$ and that there exists an optimal stationary policy in the finite action case.

Full-text: Open access

Permanent link to this document: http://projecteuclid.org/euclid.aoms/1177692643
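[Editorial illustration, not from the paper: in the simplest setting the abstract describes — finite states and actions, bounded rewards, so the Maitra–Blackwell results apply directly — value iteration converges to the optimal discounted value function, and the greedy stationary policy with respect to it is optimal. The two-state transition and reward data below are invented for the sketch.]

```python
# Value iteration for a tiny discounted Markov decision process.
beta = 0.9          # discount factor
states = [0, 1]
actions = [0, 1]

# P[s][a] = list of (next_state, probability); R[s][a] = immediate reward.
P = {0: {0: [(0, 1.0)], 1: [(1, 1.0)]},
     1: {0: [(0, 0.5), (1, 0.5)], 1: [(1, 1.0)]}}
R = {0: {0: 0.0, 1: 1.0},
     1: {0: 2.0, 1: 0.5}}

# Repeatedly apply the Bellman optimality operator (a beta-contraction).
V = {s: 0.0 for s in states}
for _ in range(500):
    V = {s: max(R[s][a] + beta * sum(p * V[t] for t, p in P[s][a])
                for a in actions)
         for s in states}

# The stationary policy that is greedy with respect to V is optimal here.
policy = {s: max(actions,
                 key=lambda a: R[s][a] + beta * sum(p * V[t] for t, p in P[s][a]))
          for s in states}
print(policy, {s: round(v, 3) for s, v in V.items()})
# -> {0: 1, 1: 0} {0: 16.207, 1: 16.897}
```

The paper's point is that the bounded-rewards hypothesis behind this convergence argument can be weakened considerably.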
http://unapologetic.wordpress.com/2011/12/06/drmathochist/?like=1&source=post_flair&_wpnonce=ae4d6bb289
# The Unapologetic Mathematician ## Compactly Supported De Rham Cohomology Since we’ve seen that all contractible spaces have trivial de Rham cohomology, we can’t use that tool to tell them apart. Instead, we introduce de Rham cohomology with compact support. This is just like the regular version, except we only use differential forms with compact support. The space of compactly supported $k$-forms on $M$ is $\Omega_c^k(M)$; closed and exact forms are denoted by $Z_c^k(M)$ and $B_c^k(M)$, respectively. And the cohomology groups themselves are $H_c^k(M)$. To see that these are useful, we’ll start slowly and compute $H_c^n(\mathbb{R}^n)$. Obviously, if $\omega$ is an $n$-form on $\mathbb{R}^n$ its exterior derivative must vanish, so $Z_c^n(\mathbb{R}^n)=\Omega_c^n(\mathbb{R}^n)$. If $\omega\in B_c^n(\mathbb{R}^n)$, then we write $\omega=d\eta$ for some compactly-supported $n-1$-form $\eta$. The support of both $\omega$ and $\eta$ is contained in some large $n$-dimensional parallelepiped $R$, so we can use Stokes’ theorem to write $\displaystyle\int\limits_{\mathbb{R}^n}\omega=\int\limits_Rd\eta=\int\limits_{\partial R}\eta=0$ I say that the converse is also true: if $\omega$ integrates to zero over all of $\mathbb{R}^n$ — the integral is defined because $\omega$ is compactly supported — then $\omega=d\eta$ for some compactly-supported $\eta$. We’ll actually prove an equivalent statement; if $U$ is a connected open subset of $\mathbb{R}^n$ containing the support of $\omega$ we pick some parallelepiped $Q_0\subseteq U$ and an $n$-form $\omega_0$ supported in $Q_0$ with integral $1$. If $\omega$ is any compactly supported $n$-form with support in $U$ and integral $c$, then $\omega-c\omega_0=d\eta$ for some compactly-supported $\eta$. It should be clear that our assertion is a special case of this one. To prove this, let $Q_i\subseteq U$ be a sequence of parallelepipeds covering the support of $\omega$. 
Another partition of unity argument tells us that it suffices to prove this statement within each of the $Q_i$, so we can assume that $\omega$ is supported within some parallelepiped $Q$. I say that we can connect $Q$ to $Q_0$ by a sequence of $N$ parallelepipeds contained in $U$, each of which overlaps the next. This follows because the set of points in $U$ we can reach with such a sequence of parallelepipeds is open, as is the set of points we can't; since $U$ is connected, only one of these can be nonempty, and since we can surely reach any point in $Q_0$, the set of points we can't reach must be empty. So now for each $i$ we can pick $\nu_i$ supported in the intersection of the $i$th and $i+1$st parallelepipeds and with integral $1$. The difference $\nu_i-\nu_{i-1}$ is supported in the $i$th parallelepiped and has integral $0$; since the parallelepiped is contractible, we can conclude that $\nu_{i-1}$ and $\nu_i$ differ by an exact form. Similarly, $\omega_0-\nu_1$ has integral $0$, as does $\omega-c\nu_N$, so these also give us exact forms. And thus putting them all together we find that

$\displaystyle(\omega-c\nu_N)+c(\nu_N-\nu_{N-1})+\dots+c(\nu_2-\nu_1)+c(\nu_1-\omega_0)$

is a finite linear combination of a bunch of exact $n$-forms, and so it's exact as well. The upshot is that the map sending an $n$-form $\omega$ to its integral over $\mathbb{R}^n$ is a linear surjection whose kernel is exactly $B_c^n(\mathbb{R}^n)$. This means that $H_c^n(\mathbb{R}^n)=Z_c^n(\mathbb{R}^n)/B_c^n(\mathbb{R}^n)\cong\mathbb{R}$.
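A sanity check in the lowest dimension (an editorial addition, not from the original post): for $n=1$ the converse direction can be written out in one line. If $\omega=f\,dx$ is compactly supported with $\int_{\mathbb{R}}f\,dx=0$, set

$\displaystyle\eta(x)=\int\limits_{-\infty}^xf(t)\,dt$

Then $d\eta=\omega$, and $\eta$ is compactly supported: it vanishes to the left of the support of $f$, while to the right of it $\eta$ is constantly equal to the total integral of $f$, which is zero. This exhibits integration as an isomorphism $H_c^1(\mathbb{R})\cong\mathbb{R}$ in the simplest case.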
http://nrich.maths.org/5459/index
### Frieze Patterns in Cast Iron

A gallery of beautiful photos of cast ironwork friezes in Australia with a mathematical discussion of the classification of frieze patterns.

### The Frieze Tree

Patterns that repeat in a line are strangely interesting. How many types are there and how do you tell one type from another?

### Friezes

Some local pupils lost a geometric opportunity recently as they surveyed the cars in the car park. Did you know that car tyres, and the wheels that they are mounted on, are a rich source of geometry?

# ...on the Wall

##### Stage: 3 Challenge Level:

This problem follows on from Mirror, Mirror... You might find it helpful to copy this diagram onto squared paper. Reflect the flag in one of the lines. Reflect the resulting image in the other line. Can you describe the single transformation you would need to get from the first flag to the last flag? Does it matter in which line you reflect first? Try this with the flag in other positions. Now try it with lines that meet at $45^{\circ}$ and at $60^{\circ}$ (you might find it helpful to use isometric paper for the $60^{\circ}$ case). Again, try it with the flag in different positions. Can you predict what single transformation you would need to get from the first flag to the last flag if the lines meet at $\theta^{\circ}$? If you have enjoyed this problem, you may like to have a go at Who is the fairest of them all?

The NRICH Project aims to enrich the mathematical experiences of all learners. To support this aim, members of the NRICH team work in a wide range of capacities, including providing professional development for teachers wishing to embed rich mathematical tasks into everyday classroom practice. More information on many of our other activities can be found here.
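[An editorial check, not part of the NRICH page — try the problem on paper first! Once you have a conjecture, it can be tested numerically by composing the two reflections as matrices. The mirror angles and the flag point below are arbitrary choices.]

```python
import math

def reflect(p, theta):
    # Reflect point p = (x, y) in the line through the origin
    # making angle theta with the x-axis.
    x, y = p
    c, s = math.cos(2 * theta), math.sin(2 * theta)
    return (c * x + s * y, s * x - c * y)

def rotate(p, phi):
    # Rotate p about the origin by angle phi.
    x, y = p
    c, s = math.cos(phi), math.sin(phi)
    return (c * x - s * y, s * x + c * y)

# Two mirror lines meeting at 60 degrees:
a, b = math.radians(15), math.radians(75)
flag = (2.0, 1.0)   # any point of the flag

image = reflect(reflect(flag, a), b)     # reflect in the first line, then the second
predicted = rotate(flag, 2 * (b - a))    # a single rotation by twice the angle
assert all(abs(u - v) < 1e-12 for u, v in zip(image, predicted))

# The order matters: reversing it rotates by the opposite angle.
reversed_image = reflect(reflect(flag, b), a)
assert all(abs(u - v) < 1e-12
           for u, v in zip(reversed_image, rotate(flag, -2 * (b - a))))
```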
http://mathoverflow.net/questions/118619?sort=newest
## Robin-Laplacian in unbounded domains

Let $\Omega\subset \mathbb R^n$ be an open domain and $\tau>0$. Consider the following boundary value problem: $-\Delta v=f$ in $\Omega$, $\partial_\nu v+\tau v=g$ on $\partial\Omega$. If $\Omega$ is a bounded domain with sufficiently smooth boundary, it is known that we have "maximal elliptic $L_p$-regularity", i.e. for $f\in H^k_p(\Omega)$ and $g\in W^{k+1-1/p}_p(\partial\Omega)$ there is a unique solution $v\in H^{k+2}_p(\Omega)$, where $k\in \mathbb N$ and $p\in(1,\infty)$; see e.g. Triebel, Interpolation Theory, Function Spaces, Differential Operators. Is anybody aware of a corresponding result for the case when $\Omega$ is unbounded, e.g. a half space or an infinite layer?

## 1 Answer

For the case $p=2$, I would have a look at the paper by Wolfgang Arendt and Mahamadi Warma (Potential Analysis 19 (2003), 341–363) and its follow-up papers.
http://mathoverflow.net/questions/65340?sort=newest
Paracompact Hausdorff but not compactly generated?

I'm sorry to be asking a (possibly) elementary question, but I've run into a problem in point-set topology; I've just read that there exist paracompact Hausdorff spaces which are not compactly generated. I ask the following:

Question: If $X$ is paracompact Hausdorff, is its compactly generated replacement, $k\left(X\right)$, paracompact Hausdorff?

Recall: The inclusion $i:CGH \to Haus$ of compactly generated Hausdorff spaces into Hausdorff spaces has a right adjoint $k$, which replaces the topology of $X$ with the following topology: $U \subset X$ is open in $k\left(X\right)$ if and only if for all compact subsets $K \subset X$, $U \cap K$ is open in $K$. Another way of describing this topology is that it is the final topology with respect to all maps into $X$ with compact Hausdorff domain. (For the experts, $CGH$ is the mono-coreflective hull of the category of compact Hausdorff spaces in the category of Hausdorff spaces.)

- Touche, but I don't actually think point-set topology is ugly. It was more meant as a joke. Anyway, I'll change it. – David Carchedi May 19 2011 at 5:39

- Every compactly generated space is a quotient of a locally compact Hausdorff space. That may help, but not in the naive way. You definitely can't conclude $k(X)$ is paracompact just because it's a quotient of a paracompact space. – David White May 22 2011 at 19:13

- Thanks, I'm aware of this result, but I'm not sure how to use it. In fact, this is an if and only if, i.e. it characterizes compactly generated spaces.
Moreover, for compactly generated Hausdorff spaces, they are the obvious quotient of the disjoint union of all their compact subsets, and if $X$ is not compactly generated, this quotient is $k\left(X\right)$. This means when $X$ is paracompact Hausdorff, $k\left(X\right)$ is a quotient of a space which is both locally compact and paracompact Hausdorff. I'm not sure where to go from here. – David Carchedi May 23 2011 at 1:31

1 Answer

(This should be a comment, but my rep is too low.) It seems that it's certainly Hausdorff, as the topology of $k(X)$ is finer (if $U$ is open in $X$ then $U\cap K$ is open in $K$ for all compacta $K$, by definition of the subspace topology). So the two separating sets that worked for $X$ still work for $k(X)$.

- Yes, it is indeed Hausdorff; I know that $k$ is a functor $$k:Haus \to CGH,$$ the question is whether or not it is paracompact. – David Carchedi May 18 2011 at 23:47
http://mathhelpforum.com/discrete-math/16209-logic-proof.html
# Thread:

1. ## logic and proof

What is wrong with this argument? Let S(x,y) be "x is shorter than y". Given the premise $\left( {\exists x} \right)$ S(x,Max), it follows that S(Max,Max). Then by existential generalization it follows that $\left( {\exists x} \right)$S(x,x), so that someone is shorter than himself.

2. Originally Posted by TheRekz
What is wrong with this argument? Let S(x,y) be "x is shorter than y". Given the premise $\left( {\exists x} \right)$ S(x,Max), it follows that S(Max,Max). Then by existential generalization it follows that $\left( {\exists x} \right)$S(x,x), so that someone is shorter than himself.

It only follows that S(x, Max) if there is a y such that x < y for all x. This value of y need not exist. As an example, let the universe of discourse be the set $\{ 1 - n^{-1}| n \in \mathbb{Z}^+ \}$. No Max exists such that S(x, Max) for all x. -Dan

3. I don't really get your explanation; could you try to explain it a bit more, or maybe give more examples? Sorry! I don't get the example that you gave me, but I understand that in order for S(x,y) to hold, x has to be less than y, and here x = y, so it can't happen, right? Am I explaining it the right way? I also don't understand the domain that you gave me. Is that for all x and y? Is this a type of fallacy?

4. Originally Posted by TheRekz
What is wrong with this argument? Let S(x,y) be "x is shorter than y". Given the premise $\left( {\exists x} \right)$ S(x,Max), it follows that S(Max,Max). Then by existential generalization it follows that $\left( {\exists x} \right)$S(x,x), so that someone is shorter than himself.

RonL

5. Originally Posted by TheRekz
Is this a type of fallacy?

Yes, the fallacy occurred in your first use of existential instantiation. The basic rule for EI is: $\frac{{\left( {\exists x} \right)\phi (x)}}{{\phi (v)}}$ where v is an individual constant having no prior occurrence in the context.

6. So what type of fallacy is this? Is it affirming the conclusion?

7.
Originally Posted by TheRekz so what type of fallacy is this? it it the affirming the conclusion? I have no idea what name your instructor/textbook would give to that fallacy. It is just a violation of the instantiation rules.
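Dan's counterexample can be sketched numerically. This is my own illustration, not code from the thread; it checks that in the universe $\{ 1 - n^{-1}| n \in \mathbb{Z}^+ \}$ every element is strictly exceeded by another element of the set, so no "Max" making the premise true for all x exists:

```python
# Illustration (not from the thread): in the universe {1 - 1/n : n >= 1},
# every element is strictly exceeded by the next one, so there is no
# maximal element "Max" making S(x, Max) true for all x.
def element(n):
    """The n-th element of the universe of discourse, 1 - 1/n."""
    return 1 - 1 / n

# each element has a strictly larger successor in the set
for n in range(1, 10_000):
    assert element(n) < element(n + 1)

# yet every element stays below the bound 1, which is not itself in the set
assert all(element(n) < 1 for n in range(1, 10_000))
```

The set is bounded above by 1 but contains no largest element, which is exactly why the premise "there exists a Max everyone is shorter than" can fail.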
http://avatarsearch.com/Emission_(electromagnetic_radiation)
Emission (electromagnetic Radiation)

The electromagnetic waves that compose electromagnetic radiation can be imagined as a self-propagating transverse oscillating wave of electric and magnetic fields. This diagram shows a plane linearly polarized EMR wave propagating from left to right. The electric field is in a vertical plane and the magnetic field in a horizontal plane. The two types of fields in EMR waves are always in phase with each other with a fixed ratio of electric to magnetic field intensity.

Electromagnetic radiation (EM radiation or EMR) is a form of energy emitted and absorbed by charged particles which exhibits wave-like behavior as it travels through space. EMR has both electric and magnetic field components, which stand in a fixed ratio of intensity to each other, and which oscillate in phase perpendicular to each other and perpendicular to the direction of energy and wave propagation. In a vacuum, electromagnetic radiation propagates at a characteristic speed, the speed of light. Electromagnetic radiation is a particular form of the more general electromagnetic field (EM field), which is produced by moving charges. Electromagnetic radiation is associated with EM fields that are far enough away from the moving charges that produced them that absorption of the EM radiation no longer affects the behavior of these moving charges. These two types or behaviors of EM field are sometimes referred to as the near and far field. In this language, EMR is merely another name for the far-field. Charges and currents directly produce the near-field.
However, charges and currents produce EMR only indirectly—rather, in EMR, both the magnetic and electric fields are associated with changes in the other type of field, not directly by charges and currents. This close relationship ensures that the electric and magnetic fields in EMR exist in a constant ratio of strengths to each other and are found in phase, with maxima and nodes of each at the same places in space. EMR carries energy—sometimes called radiant energy—through space continuously away from the source (this is not true of the near-field part of the EM field). EMR also carries both momentum and angular momentum. These properties may all be imparted to matter with which it interacts. EMR is produced from other types of energy when created, and it is converted to other types of energy when it is destroyed. The photon is the quantum of the electromagnetic interaction, and is the basic "unit" or constituent of all forms of EMR. The quantum nature of light becomes more apparent at high frequencies (or high photon energy). Such photons behave more like particles than lower-frequency photons do. In classical physics, EMR is considered to be produced when charged particles are accelerated by forces acting on them. Electrons are responsible for emission of most EMR because they have low mass, and therefore are easily accelerated by a variety of mechanisms. Rapidly moving electrons are most sharply accelerated when they encounter a region of force, so they are responsible for producing much of the highest frequency electromagnetic radiation observed in nature. Quantum processes can also produce EMR, such as when atomic nuclei undergo gamma decay, and processes such as neutral pion decay. EMR is classified according to the frequency of its wave. The electromagnetic spectrum, in order of increasing frequency and decreasing wavelength, consists of radio waves, microwaves, infrared radiation, visible light, ultraviolet radiation, X-rays and gamma rays.
The eyes of various organisms sense a somewhat variable but relatively small range of frequencies of EMR called the visible spectrum or light. The effects of EMR upon biological systems (and upon many other chemical systems, under standard conditions) depend both upon the radiation's power and its frequency. For lower frequencies of EMR up to those of visible light (i.e., radio, microwave, infrared), the damage done to cells and also to many ordinary materials under such conditions is determined mainly by heating effects, and thus by the radiation power. By contrast, for higher frequency radiations at ultraviolet frequencies and above (i.e., X-rays and gamma rays) the damage to chemical materials and living cells by EMR is far larger than that done by simple heating, due to the ability of single photons in such high frequency EMR to damage individual molecules chemically.

Physics

Theory

(Figure: the relative wavelengths of the electromagnetic waves of three different colors of light (blue, green, and red), with a distance scale in micrometers along the x-axis.)

Main articles: Maxwell's equations and Near and far field

Maxwell’s equations for EM fields far from sources

James Clerk Maxwell first formally postulated electromagnetic waves. These were subsequently confirmed by Heinrich Hertz. Maxwell derived a wave form of the electric and magnetic equations, thus uncovering the wave-like nature of electric and magnetic fields, and their symmetry. Because the speed of EM waves predicted by the wave equation coincided with the measured speed of light, Maxwell concluded that light itself is an EM wave. According to Maxwell's equations, a spatially varying electric field is always associated with a magnetic field that changes over time. Likewise, a spatially varying magnetic field is associated with specific changes over time in the electric field.
In an electromagnetic wave, the changes in the electric field are always accompanied by a wave in the magnetic field in one direction, and vice versa. This relationship between the two occurs without either type of field causing the other; rather, they occur together in the same way that time and space changes occur together and are interlinked in special relativity. (In fact, magnetic fields may be viewed as relativistic distortions of electric fields, so the close relationship between space and time changes here is more than an analogy.) Together, these fields form a propagating electromagnetic wave, which moves out into space and need never again affect the source. The distant EM field formed in this way by the acceleration of a charge carries energy with it that "radiates" away through space, hence the term for it.

Near and far fields

Main article: Liénard–Wiechert potential

In electromagnetic radiation (such as microwaves from an antenna) the term applies only to the parts of the electromagnetic field that radiate into infinite space and decrease in intensity by an inverse-square law of power, so that the total radiation energy that crosses through an imaginary spherical surface is the same, no matter how far away from the antenna the spherical surface is drawn. Electromagnetic radiation thus includes the far field part of the electromagnetic field around a transmitter. A part of the "near-field" close to the transmitter forms part of the changing electromagnetic field, but does not count as electromagnetic radiation. Maxwell's equations established that some charges and currents ("sources") produce a local type of electromagnetic field near them that does not have the behavior of EMR. In particular, according to Maxwell, currents directly produce a magnetic field, but it is of a magnetic dipole type which dies out rapidly with distance from the current.
In a similar manner, moving charges being separated from each other in a conductor by a changing electrical potential (such as in an antenna) produce an electric dipole type electrical field, but this also dies away very quickly with distance. Both of these fields make up the near-field near the EMR source. Neither of these behaviors is responsible for EM radiation. Instead, they cause electromagnetic field behavior that only efficiently transfers power to a receiver very close to the source, such as the magnetic induction inside an electrical transformer, or the feedback behavior that happens close to the coil of a metal detector. Typically, near-fields have a powerful effect on their own sources, causing an increased “load” (decreased electrical reactance) in the source or transmitter, whenever energy is withdrawn from the EM field by a receiver. Otherwise, these fields do not “propagate” freely out into space, carrying their energy away without distance-limit, but rather oscillate back and forth, returning their energy to the transmitter if it is not received by a receiver. By contrast, the EM far-field is composed of radiation that is free of the transmitter in the sense that (unlike the case in an electrical transformer) the transmitter requires the same power to send these changes in the fields out whether the signal is immediately picked up or not. This distant part of the electromagnetic field is "electromagnetic radiation" (also called the far-field). The far-fields propagate without ability for the transmitter to affect them, and this causes them to be independent in the sense that their existence and their energy, after they have left the transmitter, are completely independent of both transmitter and receiver.
Because such waves conserve the amount of energy they transmit through any spherical boundary surface drawn around their source, and because such surfaces have an area that is defined by the square of the distance from the source, the power of EM radiation always varies according to an inverse-square law. This is in contrast to dipole parts of the EM field close to the source (the near-field), which varies in power according to an inverse cube power law, and thus does not transport a conserved amount of energy over distances, but instead dies away rapidly with distance, with its energy (as noted) either rapidly returning to the transmitter, or else absorbed by a nearby receiver (such as a transformer secondary coil). The far-field (EMR) depends on a different mechanism for its production than the near-field, and upon different terms in Maxwell’s equations. Whereas the magnetic part of the near-field is due to currents in the source, the magnetic field in EMR is due only to the local change in the electric field. In a similar way, while the electric field in the near-field is due directly to the charges and charge-separation in the source, the electric field in EMR is due to a change in the local magnetic field. Both of these processes for producing electric and magnetic EMR fields have a different dependence on distance than do near-field dipole electric and magnetic fields, and that is why the EMR type of EM field becomes dominant in power “far” from sources. The term “far from sources” refers to how far from the source (moving at the speed of light) any portion of the outward-moving EM field is located, by the time that source currents are changed by the varying source potential, and the source has therefore begun to generate an outwardly moving EM field of a different phase. 
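The inverse-square versus inverse-cube contrast described above can be sketched numerically. This is an illustrative toy model (the 100 W source power and the bare 1/r³ near-field term are my assumptions, not values from the article):

```python
import math

P = 100.0  # assumed radiated power in watts (illustrative value)

def far_field_density(r):
    """Far-field power density falls as 1/r^2, so the total power
    crossing any sphere around the source is conserved."""
    return P / (4 * math.pi * r ** 2)

def power_through_sphere(r):
    """Total power crossing a sphere of radius r (density * area)."""
    return far_field_density(r) * 4 * math.pi * r ** 2

# the same total power crosses every sphere, near or far
assert abs(power_through_sphere(1.0) - power_through_sphere(1000.0)) < 1e-9

# a near-field term falling as 1/r^3 does NOT conserve power through
# spheres: the flux through a larger sphere is strictly smaller
def near_field_density(r):
    return P / (4 * math.pi * r ** 3)

flux_near = near_field_density(1.0) * 4 * math.pi * 1.0 ** 2
flux_far = near_field_density(2.0) * 4 * math.pi * 2.0 ** 2
assert flux_far < flux_near
```

Because the sphere's area grows as r², only a 1/r² field carries a distance-independent total power, which is the defining property of radiation.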
A more compact view of EMR is that the far-field that composes EMR is generally that part of the EM field that has traveled sufficient distance from the source that it has become completely disconnected from any feedback to the charges and currents that were originally responsible for it. Now independent of the source charges, the EM field, as it moves farther away, is dependent only upon the accelerations of the charges that produced it. It no longer has a strong connection to the direct fields of the charges, or to the velocity of those charges (currents). In the Liénard–Wiechert potential formulation of the electric and magnetic fields due to motion of a single particle (according to Maxwell's equations), the terms associated with acceleration of the particle are those that are responsible for the part of the field that is regarded as electromagnetic radiation. By contrast, the term associated with the changing static electric field of the particle and the magnetic term that results from the particle's uniform velocity are both seen to be associated with the electromagnetic near-field, and do not comprise EM radiation.

Properties

Electromagnetic waves can be imagined as a self-propagating transverse oscillating wave of electric and magnetic fields. This 3D diagram shows a plane linearly polarized wave propagating from left to right. Note that the electric and magnetic fields in such a wave are in-phase with each other, reaching minima and maxima together. The physics of electromagnetic radiation is electrodynamics. Electromagnetism is the physical phenomenon associated with the theory of electrodynamics. Electric and magnetic fields obey the properties of superposition. Thus, a field due to any particular particle or time-varying electric or magnetic field contributes to the fields present in the same space due to other causes.
Further, as they are vector fields, all magnetic and electric field vectors add together according to vector addition. For example, in optics two or more coherent lightwaves may interact and by constructive or destructive interference yield a resultant irradiance deviating from the sum of the component irradiances of the individual lightwaves. Since light is an oscillation it is not affected by travelling through static electric or magnetic fields in a linear medium such as a vacuum. However in nonlinear media, such as some crystals, interactions can occur between light and static electric and magnetic fields — these interactions include the Faraday effect and the Kerr effect. In refraction, a wave crossing from one medium to another of different density alters its speed and direction upon entering the new medium. The ratio of the refractive indices of the media determines the degree of refraction, and is summarized by Snell's law. Light of composite wavelengths (natural sunlight) disperses into a visible spectrum passing through a prism, because of the wavelength dependent refractive index of the prism material (dispersion); that is, each component wave within the composite light is bent a different amount. EM radiation exhibits both wave properties and particle properties at the same time (see wave-particle duality). Both wave and particle characteristics have been confirmed in a large number of experiments. Wave characteristics are more apparent when EM radiation is measured over relatively large timescales and over large distances while particle characteristics are more evident when measuring small timescales and distances. For example, when electromagnetic radiation is absorbed by matter, particle-like properties will be more obvious when the average number of photons in the cube of the relevant wavelength is much smaller than 1. 
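Snell's law, mentioned above, can be checked with a short numeric sketch. The refractive indices 1.0 and 1.5 are illustrative values for air and glass of my choosing, not figures from the article:

```python
import math

def refraction_angle(n1, n2, theta1_deg):
    """Snell's law n1*sin(theta1) = n2*sin(theta2): returns the refracted
    angle in degrees, or None when total internal reflection occurs."""
    s = n1 * math.sin(math.radians(theta1_deg)) / n2
    if abs(s) > 1:
        return None  # no refracted ray: total internal reflection
    return math.degrees(math.asin(s))

# entering a denser medium (air -> glass), light bends toward the normal
assert refraction_angle(1.0, 1.5, 30.0) < 30.0

# leaving a denser medium at a steep angle, no refracted ray exists
assert refraction_angle(1.5, 1.0, 80.0) is None
```

The wavelength dependence of the refractive index (dispersion) is what spreads these angles into a spectrum when sunlight passes through a prism.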
It is not too difficult to experimentally observe non-uniform deposition of energy when light is absorbed; however, this alone is not evidence of "particulate" behavior of light. Rather, it reflects the quantum nature of matter.[1] Demonstrating that the light itself is quantized, not merely its interaction with matter, is a more subtle problem. There are experiments in which the wave and particle natures of electromagnetic waves appear in the same experiment, such as the self-interference of a single photon. True single-photon experiments (in a quantum optical sense) can be done today in undergraduate-level labs.[2] When a single photon is sent through an interferometer, it passes through both paths, interfering with itself, as waves do, yet is detected by a photomultiplier or other sensitive detector only once. A quantum theory of the interaction between electromagnetic radiation and matter such as electrons is described by the theory of quantum electrodynamics.

Wave model

Electromagnetic radiation is a transverse wave, meaning that the oscillations of the waves are perpendicular to the direction of energy transfer and travel. The electric and magnetic parts of the field stand in a fixed ratio of strengths in order to satisfy the two Maxwell equations that specify how one is produced from the other. These E and B fields are also in phase, with both reaching maxima and minima at the same points in space (see illustrations). A common misconception is that the E and B fields in electromagnetic radiation are out of phase because a change in one produces the other, and this would produce a phase difference between them as sinusoidal functions (as indeed happens in electromagnetic induction, and in the near-field close to antennas). However, in the far-field EM radiation, which is described by the two source-free Maxwell curl operator equations, a more correct description is that a time-change in one type of field is proportional to a space-change in the other.
These derivatives require that the E and B fields in EMR are in-phase (see math section below). An important aspect of the nature of light is frequency. The frequency of a wave is its rate of oscillation and is measured in hertz, the SI unit of frequency, where one hertz is equal to one oscillation per second. Light usually has a spectrum of frequencies that sum to form the resultant wave. Different frequencies undergo different angles of refraction. A wave consists of successive troughs and crests, and the distance between two adjacent crests or troughs is called the wavelength. Waves of the electromagnetic spectrum vary in size, from very long radio waves the size of buildings to very short gamma rays smaller than atomic nuclei. Frequency is inversely proportional to wavelength, according to the equation: $\displaystyle v=f\lambda$ where v is the speed of the wave (c in a vacuum, or less in other media), f is the frequency and λ is the wavelength. As waves cross boundaries between different media, their speeds change but their frequencies remain constant. Interference is the superposition of two or more waves resulting in a new wave pattern. If the fields have components in the same direction, they constructively interfere, while opposite directions cause destructive interference. The energy in electromagnetic waves is sometimes called radiant energy.

Particle model and quantum theory

See also: Quantization (physics) and Quantum optics

An anomaly arose in the late 19th century involving a contradiction between the wave theory of light on the one hand, and on the other, observers' actual measurements of the electromagnetic spectrum that was being emitted by thermal radiators known as black bodies. Physicists struggled with this problem, which later became known as the ultraviolet catastrophe, unsuccessfully for many years. In 1900, Max Planck developed a new theory of black-body radiation that explained the observed spectrum.
Planck's theory was based on the idea that black bodies emit light (and other electromagnetic radiation) only as discrete bundles or packets of energy. These packets were called quanta. Later, Albert Einstein proposed that the quanta of light might be regarded as real particles, and (still later) the particle of light was given the name photon, to correspond with other particles being described around this time, such as the electron and proton. A photon has an energy, E, proportional to its frequency, f, by $E = hf = \frac{hc}{\lambda} \,\!$ where h is Planck's constant, $\lambda$ is the wavelength and c is the speed of light. This is sometimes known as the Planck–Einstein equation.[3] In quantum theory (see first quantization) the energy of the photons is thus directly proportional to the frequency of the EMR wave.[4] Likewise, the momentum p of a photon is also proportional to its frequency and inversely proportional to its wavelength: $p = { E \over c } = { hf \over c } = { h \over \lambda }.$ The source of Einstein's proposal that light was composed of particles (or could act as particles in some circumstances) was an experimental anomaly not explained by the wave theory: the photoelectric effect, in which light striking a metal surface ejected electrons from the surface, causing an electric current to flow across an applied voltage. Experimental measurements demonstrated that the energy of individual ejected electrons was proportional to the frequency, rather than the intensity, of the light. Furthermore, below a certain minimum frequency, which depended on the particular metal, no current would flow regardless of the intensity. These observations appeared to contradict the wave theory, and for years physicists tried in vain to find an explanation. In 1905, Einstein explained this puzzle by resurrecting the particle theory of light to explain the observed effect. 
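The Planck–Einstein relation and the photon-momentum formula above can be evaluated directly. This sketch uses the standard CODATA values for h and c; the 500 nm test wavelength is an arbitrary choice of mine:

```python
# Planck-Einstein relation E = hf = hc/lambda and momentum p = h/lambda.
h = 6.62607015e-34  # Planck constant, J*s (CODATA)
c = 2.99792458e8    # speed of light in vacuum, m/s (CODATA)

def photon_energy(wavelength_m):
    """Energy of one photon of the given vacuum wavelength, in joules."""
    return h * c / wavelength_m

def photon_momentum(wavelength_m):
    """Momentum of one photon of the given wavelength, in kg*m/s."""
    return h / wavelength_m

# a green-light photon at 500 nm carries roughly 4e-19 J
assert 3.9e-19 < photon_energy(500e-9) < 4.0e-19
```

Halving the wavelength doubles both the energy and the momentum, reflecting the inverse proportionality in both formulas.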
Because of the preponderance of evidence in favor of the wave theory, however, Einstein's ideas were met initially with great skepticism among established physicists. Eventually Einstein's explanation was accepted as new particle-like behavior of light was observed, such as the Compton effect. As a photon is absorbed by an atom, it excites the atom, elevating an electron to a higher energy level (on average, one that is farther from the nucleus). When an electron in an excited molecule or atom descends to a lower energy level, it emits a photon with energy equal to the energy difference. Since the energy levels of electrons in atoms are discrete, each element and each molecule emits and absorbs its own characteristic frequencies. When the emission of the photon is immediate, this phenomenon is called fluorescence, a type of photoluminescence. An example is visible light emitted from fluorescent paints in response to ultraviolet light (blacklight). Many other fluorescent emissions are known in spectral bands other than visible light. When the emission of the photon is delayed, the phenomenon is called phosphorescence.

Wave–particle duality

The modern theory that explains the nature of light includes the notion of wave–particle duality. More generally, the theory states that everything has both a particle nature and a wave nature, and various experiments can be done to bring out one or the other. The particle nature is more easily discerned if an object has a large mass, and it was not until a bold proposition by Louis de Broglie in 1924 that the scientific community realised that electrons also exhibited wave–particle duality.

Wave and particle effects of electromagnetic radiation

Together, wave and particle effects explain the emission and absorption spectra of EM radiation, wherever it is seen. The matter-composition of the medium through which the light travels determines the nature of the absorption and emission spectrum.
These bands correspond to the allowed energy levels in the atoms. Dark bands in the absorption spectrum are due to the atoms in an intervening medium between source and observer, absorbing certain frequencies of the light between emitter and detector/eye, then emitting them in all directions, so that a dark band appears to the detector, due to the radiation scattered out of the beam. For instance, dark bands in the light emitted by a distant star are due to the atoms in the star's atmosphere. A similar phenomenon occurs for emission, which is seen when the emitting gas is glowing due to excitation of the atoms from any mechanism, including heat. As electrons descend to lower energy levels, a spectrum is emitted that represents the jumps between the energy levels of the electrons, but lines are seen because again emission happens only at particular energies after excitation. An example is the emission spectrum of nebulae. Today, scientists use these phenomena to perform various chemical determinations for the composition of gases lit from behind (absorption spectra) and for glowing gases (emission spectra). Spectroscopy (for example) determines what chemical elements a star is composed of. Spectroscopy is also used in the determination of the distance of a star, using the red shift.

Speed of propagation

Main article: Speed of light

Any electric charge that accelerates, or any changing magnetic field, produces electromagnetic radiation. Electromagnetic information about the charge travels at the speed of light. Accurate treatment thus incorporates a concept known as retarded time (as opposed to advanced time, which is not physically possible in light of causality), which adds to the expressions for the electrodynamic electric field and magnetic field. These extra terms are responsible for electromagnetic radiation.
When any wire (or other conducting object such as an antenna) conducts alternating current, electromagnetic radiation is propagated at the same frequency as the electric current. In many such situations it is possible to identify an electrical dipole moment that arises from separation of charges due to the exciting electrical potential, and this dipole moment oscillates in time, as the charges move back and forth. This oscillation at a given frequency gives rise to changing electric and magnetic fields, which then set the electromagnetic radiation in motion. At the quantum level, electromagnetic radiation is produced when the wavepacket of a charged particle oscillates or otherwise accelerates. Charged particles in a stationary state do not move, but a superposition of such states may result in a transition state which has an electric dipole moment that oscillates in time. This oscillating dipole moment is responsible for the phenomenon of radiative transition between quantum states of a charged particle. Such states occur (for example) in atoms when photons are radiated as the atom shifts from one stationary state to another. Depending on the circumstances, electromagnetic radiation may behave as a wave or as particles. As a wave, it is characterized by a velocity (the speed of light), wavelength, and frequency. When considered as particles, they are known as photons, and each has an energy related to the frequency of the wave given by Planck's relation E = hν, where E is the energy of the photon, h = 6.626 × 10^−34 J·s is Planck's constant, and ν is the frequency of the wave. One rule is always obeyed regardless of the circumstances: EM radiation in a vacuum always travels at the speed of light, relative to the observer, regardless of the observer's velocity. (This observation led to Albert Einstein's development of the theory of special relativity.)
In a medium (other than vacuum), a velocity factor or refractive index is considered, depending on frequency and application. Both of these are ratios of the speed in a medium to the speed in a vacuum.

Special theory of relativity

Main article: Special theory of relativity

By the late nineteenth century, a handful of experimental anomalies remained that could not be explained by the simple wave theory. One of these anomalies involved a controversy over the speed of light. The speed of light and other EMR predicted by Maxwell's equations did not appear unless the equations were modified in a way first suggested by FitzGerald and Lorentz (see history of special relativity), or else it would depend on the speed of the observer relative to the "medium" (called luminiferous aether) which supposedly "carried" the electromagnetic wave (in a manner analogous to the way air carries sound waves). Experiments failed to find any observer effect, however. In 1905, Albert Einstein proposed that space and time appeared to be velocity-changeable entities, not only for light propagation, but for all other processes and laws as well. These changes then automatically accounted for the constancy of the speed of light and all electromagnetic radiation, from the viewpoints of all observers—even those in relative motion.

History of discovery

Electromagnetic radiation of wavelengths other than those of visible light was discovered in the early 19th century. The discovery of infrared radiation is ascribed to William Herschel, the astronomer. Herschel published his results in 1800 before the Royal Society of London. Herschel used a glass prism to refract light from the Sun and detected invisible rays that caused heating beyond the red part of the spectrum, through an increase in the temperature recorded with a thermometer. These "calorific rays" were later termed infrared.
In 1801, the German physicist Johann Wilhelm Ritter made the discovery of ultraviolet in an experiment similar to Herschel's, using sunlight and a glass prism. Ritter noted that invisible rays near the violet edge of a solar spectrum dispersed by a triangular prism darkened silver chloride preparations more quickly than did the nearby violet light. Ritter's experiments were an early precursor to what would become photography. Ritter noted that the rays (which at first were called "chemical rays") were capable of causing chemical reactions. In 1862–64 James Clerk Maxwell developed equations for the electromagnetic field which suggested that waves in the field would travel with a speed that was very close to the known speed of light. Maxwell therefore suggested that visible light (as well as invisible infrared and ultraviolet rays, by inference) all consisted of propagating disturbances (or radiation) in the electromagnetic field. Radio waves were not first detected from a natural source, but were rather produced deliberately and artificially by the German scientist Heinrich Hertz in 1887, using electrical circuits calculated to produce oscillations at a much lower frequency than that of visible light, following recipes for producing oscillating charges and currents suggested by Maxwell's equations. Hertz also developed ways to detect these waves, and produced and characterized what were later termed radio waves and microwaves. Wilhelm Röntgen discovered and named X-rays. After experimenting with high voltages applied to an evacuated tube on 8 November 1895, he noticed a fluorescence on a nearby plate of coated glass. In one month, he discovered the main properties of X-rays that we understand to this day. The last portion of the EM spectrum was discovered associated with radioactivity.
Henri Becquerel found that uranium salts caused fogging of an unexposed photographic plate through a covering paper in a manner similar to X-rays, and Marie Curie discovered that only certain elements gave off these rays of energy, soon discovering the intense radiation of radium. The radiation from pitchblende was differentiated into alpha rays (alpha particles) and beta rays (beta particles) by Ernest Rutherford through simple experimentation in 1899, but these proved to be charged particulate types of radiation. However, in 1900 the French scientist Paul Villard discovered a third neutrally charged and especially penetrating type of radiation from radium, and after he described it, Rutherford realized it must be yet a third type of radiation, which in 1903 Rutherford named gamma rays. In 1910 British physicist William Henry Bragg demonstrated that gamma rays are electromagnetic radiation, not particles, and in 1914 Rutherford and Edward Andrade measured their wavelengths, and found that they were similar to X-rays but with shorter wavelengths and higher frequency. 
Electromagnetic spectrum
Main article: Electromagnetic spectrum
[Figure: the electromagnetic spectrum, with visible light highlighted.]

Legend:
γ = Gamma rays
HX = Hard X-rays
SX = Soft X-rays
EUV = Extreme ultraviolet
NUV = Near ultraviolet
Visible light (colored bands)
NIR = Near-infrared
MIR = Mid-infrared
FIR = Far-infrared
EHF = Extremely high frequency (microwaves)
SHF = Super-high frequency (microwaves)
UHF = Ultra-high frequency (radio waves)
VHF = Very high frequency (radio)
HF = High frequency (radio)
MF = Medium frequency (radio)
LF = Low frequency (radio)
VLF = Very low frequency (radio)
VF = Voice frequency
ULF = Ultra-low frequency (radio)
SLF = Super-low frequency (radio)
ELF = Extremely low frequency (radio)

In general, EM radiation (the designation 'radiation' excludes static electric, magnetic, and near fields) is classified by wavelength into radio, microwave, infrared, the visible spectrum that we perceive as light, ultraviolet, X-rays, and gamma rays. Arbitrary electromagnetic waves can always be expressed by Fourier analysis in terms of sinusoidal monochromatic waves, which in turn can each be classified into these regions of the EMR spectrum.

The behavior of EM radiation depends on its frequency. Lower frequencies have longer wavelengths; higher frequencies have shorter wavelengths and are associated with photons of higher energy. There is no known fundamental limit to these wavelengths or energies at either end of the spectrum, although photons with energies near the Planck energy or exceeding it (far too high to have ever been observed) will require new physical theories to describe.

Sound waves are not electromagnetic radiation. At the lower end of the electromagnetic spectrum, about 20 Hz to about 20 kHz, are frequencies that might be considered in the audio range. However, electromagnetic waves cannot be directly perceived by human ears. Sound waves are the oscillating compression of molecules.
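The wavelength-based classification described above can be sketched as a small lookup. This is a minimal illustration only: the named cutoff values are approximate conventions chosen for the example, since real band edges overlap and vary by source.

```python
# Rough classifier for the EMR spectrum regions described above.
# Band boundaries are approximate conventions, not sharp physical limits.

BANDS = [  # (name, approximate minimum wavelength in metres)
    ("gamma rays", 0.0),
    ("X-rays", 1e-11),
    ("ultraviolet", 1e-8),
    ("visible light", 4e-7),
    ("infrared", 7e-7),
    ("microwave", 1e-3),
    ("radio", 1e-1),
]

def classify(wavelength_m):
    """Return the spectral band for a wavelength given in metres."""
    name = BANDS[0][0]
    for band, lower in BANDS:
        if wavelength_m >= lower:
            name = band
    return name

print(classify(550e-9))  # green light -> visible light
print(classify(0.21))    # 21 cm hydrogen line -> radio
```

Because the table is ordered by increasing wavelength, the loop simply keeps the last band whose lower bound the wavelength reaches.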
To be heard, electromagnetic radiation must be converted to pressure waves of the fluid in which the ear is located (whether the fluid is air, water or something else).

Radio and microwave heating and currents, and infrared heating

When EM radiation interacts with matter, its behavior changes qualitatively as its frequency changes. At radio and microwave frequencies, EMR interacts with matter largely as a bulk collection of charges spread out over large numbers of affected atoms. In electrical conductors, such induced bulk movement of charges (electric currents) results in absorption of the EMR, or else in separations of charges that cause generation of new EMR (effective reflection of the EMR). An example is absorption or emission of radio waves by antennas, or absorption of microwaves by water or other molecules with an electric dipole moment, as for example inside a microwave oven. These interactions produce either electric currents or heat, or both.

Infrared EMR interacts with dipoles present in single molecules, which change as atoms vibrate at the ends of a single chemical bond. For this reason, infrared is reflected by metals (as is most EMR into the ultraviolet) but is absorbed by a wide range of substances, causing them to increase in temperature as the vibrations dissipate as heat. In the same process, bulk substances radiate in the infrared spontaneously (see the thermal radiation section below).

Reversible and nonreversible molecular changes from visible light

As frequency increases into the visible range, photons of EMR have enough energy to change the bond structure of some individual molecules. It is not a coincidence that this happens in the "visible range", as the mechanism of vision involves the change in bonding of a single molecule (retinal), which absorbs light in the rhodopsin in the retina of the human eye.
Photosynthesis becomes possible in this range as well, for similar reasons, as a single molecule of chlorophyll is excited by a single photon. Animals that detect infrared do not use such single-molecule processes, but are forced to make use of small packets of water that change temperature, in an essentially thermal process that involves many photons (see infrared sensing in snakes). For this reason, infrared, microwaves, and radio waves are thought to damage molecules and biological tissue only by bulk heating, not by excitation from single photons of the radiation (however, there does remain controversy about possible non-thermal biological damage from low-frequency EM radiation; see below).

Visible light is able to affect a few molecules with single photons, but usually not in a permanent or damaging way, in the absence of power high enough to increase temperature to damaging levels. However, in plant tissues that carry on photosynthesis, carotenoids act to quench electronically excited chlorophyll produced by visible light in a process called non-photochemical quenching, in order to prevent reactions that would otherwise interfere with photosynthesis at high light levels. There is also some limited evidence that some reactive oxygen species are created by visible light in skin, and that these may have some role in photoaging, in the same manner as ultraviolet A does.[5]

Molecular damage from ultraviolet

As a photon interacts with single atoms and molecules, the effect depends on the amount of energy the photon carries. As frequency increases beyond visible into the ultraviolet, photons carry enough energy (about three electron volts or more) to excite certain doubly bonded molecules into permanent chemical rearrangement. If these are biological molecules such as DNA, this causes lasting damage. DNA is also indirectly damaged by reactive oxygen species produced by ultraviolet A, whose energy is too low to damage DNA directly.
This is why ultraviolet at all wavelengths can damage DNA and is capable of causing cancer and (for UVB) skin burns (sunburn) that are far worse than would be produced by simple heating (temperature-increase) effects. This property of causing molecular damage far out of proportion to any heating effect is characteristic of all EMR with frequencies at the visible light range and above. These properties of high-frequency EMR are due to quantum effects that cause permanent damage to materials and tissues at the single-molecule level.

Ionization and extreme types of molecular damage from X-rays and gamma rays

At the higher end of the ultraviolet range, the energy of photons becomes large enough to impart enough energy to electrons to liberate them from the atom, in a process called photoionisation. The energy required for this is always larger than about 10 electron volts (eV), corresponding to wavelengths shorter than 124 nm (some sources suggest a more realistic cutoff of 33 eV, the energy required to ionize water). This high end of the ultraviolet spectrum, with energies in the approximate ionization range, is sometimes called "extreme UV" (most of it is filtered by the Earth's atmosphere). Electromagnetic radiation composed of photons that carry the minimum ionization energy or more (which includes the entire spectrum with shorter wavelengths) is therefore termed ionizing radiation. (There are also many other kinds of ionizing radiation made of non-EM particles.) Electromagnetic-type ionizing radiation extends from the extreme ultraviolet to all higher frequencies and shorter wavelengths, which means that all X-rays and gamma rays are ionizing radiation.
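The energy-wavelength correspondence used for the ionization threshold (about 10 eV at 124 nm) follows from E = hc/λ. A quick sketch using the standard conversion hc ≈ 1239.84 eV·nm; the 10 eV default threshold is the figure quoted above, with the stricter 33 eV water cutoff mentioned as an alternative:

```python
# Photon energy E = h*c / wavelength. With hc ≈ 1239.84 eV·nm this
# reproduces the ~10 eV ionization threshold at 124 nm quoted above.

HC_EV_NM = 1239.84  # Planck constant times speed of light, in eV·nm

def photon_energy_ev(wavelength_nm):
    """Photon energy in electron volts for a wavelength in nanometres."""
    return HC_EV_NM / wavelength_nm

def is_ionizing(wavelength_nm, threshold_ev=10.0):
    # 10 eV is the commonly quoted minimum; some sources use 33 eV (water).
    return photon_energy_ev(wavelength_nm) >= threshold_ev

print(photon_energy_ev(124))   # ~10 eV
print(is_ionizing(120))        # extreme UV: True
print(is_ionizing(400))        # visible violet (~3.1 eV): False
```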
These are capable of the most severe types of molecular damage, which can happen in biology to any type of biomolecule, including mutation and cancer, and often at great depths below the skin, since the higher end of the X-ray spectrum, and all of the gamma-ray spectrum, penetrate matter. It is this type of damage that causes these types of radiation to be especially carefully monitored, due to their hazard, even at comparatively low energies, to all living organisms.

Propagation and absorption in the Earth's atmosphere
Main articles: ozone layer, shortwave radio, skywave, and ionosphere
[Figure: rough plot of Earth's atmospheric transmittance (or opacity) to various wavelengths of electromagnetic radiation.]

Most electromagnetic waves of higher frequency than visible light (UV and X-rays) are blocked by absorption from electronic excitation in ozone and dioxygen (for UV), and by ionization of air for energies in the extreme UV and above. Visible light is well transmitted in air, as it is not energetic enough to excite oxygen but too energetic to excite molecular vibrational frequencies of molecules in air. Below visible light, a number of absorption bands in the infrared are due to modes of vibrational excitation in water vapor. However, at energies too low to excite water vapor the atmosphere becomes transparent again, allowing free transmission of most microwave and radio waves.

Finally, at radio wavelengths longer than 10 meters or so (below about 30 MHz), the air in the lower atmosphere remains transparent to radio, but plasma in certain layers of the ionosphere in the upper atmosphere begins to interact with radio waves (see skywave). This property allows some longer wavelengths (100 meters, or 3 MHz) to be reflected, and results in shortwave radio reaching beyond the line of sight. However, certain ionospheric effects begin to block incoming radio waves from space when their frequency is less than about 10 MHz (wavelength longer than about 30 meters).
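The wavelength-frequency pairings quoted in this section (10 m ↔ about 30 MHz, 100 m ↔ 3 MHz) are just λ = c/f. A minimal converter for checking such figures:

```python
# Wavelength-frequency conversion λ = c/f, as used for the figures above
# (10 m corresponds to about 30 MHz, 100 m to 3 MHz).

C = 2.99792458e8  # speed of light in vacuum, m/s

def freq_hz(wavelength_m):
    """Frequency in Hz for a free-space wavelength in metres."""
    return C / wavelength_m

def wavelength_m(frequency_hz):
    """Free-space wavelength in metres for a frequency in Hz."""
    return C / frequency_hz

print(freq_hz(10.0) / 1e6)   # ≈ 30 MHz
print(wavelength_m(3e6))     # ≈ 100 m
```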
Types and sources, classed by spectral band (frequency)
See electromagnetic spectrum for detail.

Radio waves
Main article: Radio waves

When EM radiation at the frequencies for which it is referred to as "radio waves" impinges upon a conductor, it couples to the conductor, travels along it, and induces an electric current on the surface of the conductor by moving the electrons of the conducting material in correlated bunches of charge. Such effects can cover macroscopic distances in conductors (including radio antennas), since the wavelength of radio waves is long by human scales. Radio waves thus have the most overtly "wave-like" characteristics of all the types of EMR, since their waves are so long.

Microwaves
Main article: Microwaves

Infrared
Main article: Infrared

Visible light
Main article: Light

Natural sources produce EM radiation across the spectrum. EM radiation with a wavelength between approximately 400 nm and 700 nm is directly detected by the human eye and perceived as visible light. Other wavelengths, especially nearby infrared (longer than 700 nm) and ultraviolet (shorter than 400 nm), are also sometimes referred to as light, especially when visibility to humans is not relevant.

Ultraviolet
Main article: Ultraviolet

X-rays
Main article: X-rays

Gamma rays
Main article: Gamma rays

Thermal radiation and electromagnetic radiation as a form of heat
Main articles: Thermal radiation and Planck's law

The basic structure of matter involves charged particles bound together in many different ways. When electromagnetic radiation is incident on matter, it causes the charged particles to oscillate and gain energy. The ultimate fate of this energy depends on the situation. It could be immediately re-radiated and appear as scattered, reflected, or transmitted radiation.
It may also be dissipated into other microscopic motions within the matter, coming to thermal equilibrium and manifesting itself as thermal energy in the material. With a few exceptions related to high-energy photons (such as fluorescence, harmonic generation, photochemical reactions, and the photovoltaic effect for ionizing radiation in the far ultraviolet, X-ray, and gamma ranges), absorbed electromagnetic radiation simply deposits its energy by heating the material. This happens for infrared, microwave, and radio-wave radiation alike. Intense radio waves can thermally burn living tissue and can cook food. In addition to infrared lasers, sufficiently intense visible and ultraviolet lasers can easily set paper afire.

Ionizing electromagnetic radiation creates high-speed electrons in a material and breaks chemical bonds, but after these electrons collide many times with other atoms in the material, most of the energy is eventually downgraded to thermal energy; this whole process happens in a tiny fraction of a second. This process makes ionizing radiation far more dangerous per unit of energy than non-ionizing radiation. This caveat also applies to the ultraviolet (UV) spectrum, even though almost all of it is not ionizing, because UV can damage molecules by electronic excitation, which is far greater per unit energy than any heating effect.

Infrared radiation in the spectral distribution of a black body is usually considered a form of heat, since it has an equivalent temperature and is associated with an entropy change per unit of thermal energy. However, the word "heat" is a highly technical term in physics and thermodynamics, and is often confused with thermal energy. Any type of electromagnetic energy can be transformed into thermal energy in interaction with matter. Thus, any electromagnetic radiation can "heat" a material (in the sense of increasing its temperature) when it is absorbed.
The inverse, or time-reversed, process of absorption is responsible for thermal radiation. Much of the thermal energy in matter consists of random motion of charged particles, and this energy can be radiated away from the matter. The resulting radiation may subsequently be absorbed by another piece of matter, with the deposited energy heating the material. Thermal radiation is an important mechanism of heat transfer. The electromagnetic radiation in an opaque cavity at thermal equilibrium is effectively a form of thermal energy, having maximum radiation entropy.

Biological effects

The effects of electromagnetic radiation upon living cells, including those in humans, depend upon the radiation's power and frequency. For low-frequency radiation (radio waves to visible light) the best-understood effects are those due to radiation power alone, acting through simple heating when the radiation is absorbed by the cell. For these thermal effects, frequency is important only insofar as it affects penetration into the organism (for example, microwaves penetrate better than infrared). Initially, it was believed that low-frequency fields too weak to cause significant heating could not possibly have any biological effect.[6] Despite this opinion among researchers, evidence has accumulated that supports the existence of complex biological effects of weaker non-thermal electromagnetic fields (including weak ELF magnetic fields, although the latter do not strictly qualify as EM radiation[6][7][8]) and of modulated RF and microwave fields.[9][10][11] The fundamental mechanisms of the interaction between biological material and electromagnetic fields at non-thermal levels are not fully understood.[6] Bioelectromagnetics is the study of these interactions and effects.
The World Health Organization has classified radiofrequency electromagnetic radiation as a possible Group 2B carcinogen.[12][13] This group contains possible carcinogens with weaker evidence, at the same level as coffee and automobile exhaust. For example, there have been a number of epidemiological studies looking for a relationship between cell phone use and brain cancer development, which have been largely inconclusive, save to demonstrate that the effect, if it exists, cannot be a large one.

At higher frequencies (visible and beyond), the effects of individual photons begin to become important, as these now have enough energy, individually, to directly or indirectly damage biological molecules.[14] All frequencies of UV radiation have been classed as Group 1 carcinogens by the World Health Organization. Ultraviolet radiation from sun exposure is the primary cause of skin cancer.[15][16]

Thus, at UV frequencies and higher (and probably somewhat also in the visible range),[17] electromagnetic radiation does far more damage to biological systems than simple heating predicts. This is most obvious in the "far" (or "extreme") ultraviolet, as well as in X-ray and gamma radiation, which are referred to as ionizing radiation because photons of this radiation can produce ions and free radicals in materials (including living tissue). Since such radiation can severely damage life at power levels that produce very little heating, it is considered far more dangerous (in terms of damage produced per unit of energy, or power) than the rest of the electromagnetic spectrum.

Derivation from electromagnetic theory
Main article: electromagnetic wave equation

Electromagnetic waves as a general phenomenon were predicted by the classical laws of electricity and magnetism, known as Maxwell's equations.
Inspection of Maxwell's equations without sources (charges or currents) results in, along with the possibility of nothing happening, nontrivial solutions of changing electric and magnetic fields. Beginning with Maxwell's equations in free space: $\nabla \cdot \mathbf{E} = 0 \qquad \qquad \qquad \ \ (1)$ $\nabla \times \mathbf{E} = -\frac{\partial \mathbf{B}}{\partial t} \qquad \qquad \ (2)$ $\nabla \cdot \mathbf{B} = 0 \qquad \qquad \qquad \ \ (3)$ $\nabla \times \mathbf{B} = \mu_0 \epsilon_0 \frac{\partial \mathbf{E}}{\partial t} \qquad \quad \ (4)$ where $\nabla$ is a vector differential operator (see Del). One solution, $\mathbf{E}=\mathbf{B}=\mathbf{0},$ is trivial. For a more useful solution, we utilize vector identities, which work for any vector, as follows: $\nabla \times \left( \nabla \times \mathbf{A} \right) = \nabla \left( \nabla \cdot \mathbf{A} \right) - \nabla^2 \mathbf{A}$ To see how we can use this, take the curl of equation (2): $\nabla \times \left(\nabla \times \mathbf{E} \right) = \nabla \times \left(-\frac{\partial \mathbf{B}}{\partial t} \right) \qquad \qquad \qquad \quad \ \ \ (5) \,$ Evaluating the left hand side: $\nabla \times \left(\nabla \times \mathbf{E} \right) = \nabla\left(\nabla \cdot \mathbf{E} \right) - \nabla^2 \mathbf{E} = - \nabla^2 \mathbf{E} \qquad \ \ (6) \,$ where we simplified the above by using equation (1). 
Evaluating the right hand side:
$\nabla \times \left(-\frac{\partial \mathbf{B}}{\partial t} \right) = -\frac{\partial}{\partial t} \left( \nabla \times \mathbf{B} \right) = -\mu_0 \epsilon_0 \frac{\partial^2 \mathbf{E}}{\partial t^2} \quad \ \ \ \ (7)$
Equations (6) and (7) are equal, so this results in a vector-valued differential equation for the electric field, namely
$\nabla^2 \mathbf{E} = \mu_0 \epsilon_0 \frac{\partial^2 \mathbf{E}}{\partial t^2}$
Applying a similar pattern results in a similar differential equation for the magnetic field:
$\nabla^2 \mathbf{B} = \mu_0 \epsilon_0 \frac{\partial^2 \mathbf{B}}{\partial t^2}.$
These differential equations are equivalent to the wave equation:
$\nabla^2 f = \frac{1}{{c_0}^2} \frac{\partial^2 f}{\partial t^2}$
where c0 is the speed of the wave in free space and f describes a displacement. Or, more simply:
$\Box f = 0$
where $\Box$ is the d'Alembertian:
$\Box = \nabla^2 - \frac{1}{{c_0}^2} \frac{\partial^2}{\partial t^2} = \frac{\partial^2}{\partial x^2} + \frac{\partial^2}{\partial y^2} + \frac{\partial^2}{\partial z^2} - \frac{1}{{c_0}^2} \frac{\partial^2}{\partial t^2}$
Notice that, in the case of the electric and magnetic fields, the speed is:
$c_0 = \frac{1}{\sqrt{\mu_0 \epsilon_0}}$
This is the speed of light in vacuum. Maxwell's equations unified the vacuum permittivity $\epsilon_0$, the vacuum permeability $\mu_0$, and the speed of light itself, $c_0$. Before this derivation it was not known that there was such a strong relationship between light, electricity, and magnetism.

But these are only two equations, and we started with four, so there is still more information pertaining to these waves hidden within Maxwell's equations. Let us consider a generic vector wave for the electric field:
$\mathbf{E} = \mathbf{E}_0 f\left( \hat{\mathbf{k}} \cdot \mathbf{x} - c_0 t \right)$
Here, $\mathbf{E}_0$ is a constant amplitude, $f$ is any twice-differentiable function, $\hat{\mathbf{k}}$ is a unit vector in the direction of propagation, and $\mathbf{x}$ is a position vector. We observe that $f\left( \hat{\mathbf{k}} \cdot \mathbf{x} - c_0 t \right)$ is a generic solution to the wave equation. In other words,
$\nabla^2 f\left( \hat{\mathbf{k}} \cdot \mathbf{x} - c_0 t \right) = \frac{1}{{c_0}^2} \frac{\partial^2}{\partial t^2} f\left( \hat{\mathbf{k}} \cdot \mathbf{x} - c_0 t \right),$
for a generic wave traveling in the $\hat{\mathbf{k}}$ direction. This form will satisfy the wave equation, but will it satisfy all of Maxwell's equations, and with what corresponding magnetic field?
$\nabla \cdot \mathbf{E} = \hat{\mathbf{k}} \cdot \mathbf{E}_0 f'\left( \hat{\mathbf{k}} \cdot \mathbf{x} - c_0 t \right) = 0$
$\mathbf{E} \cdot \hat{\mathbf{k}} = 0$
The first of Maxwell's equations implies that the electric field is orthogonal to the direction the wave propagates.
$\nabla \times \mathbf{E} = \hat{\mathbf{k}} \times \mathbf{E}_0 f'\left( \hat{\mathbf{k}} \cdot \mathbf{x} - c_0 t \right) = -\frac{\partial \mathbf{B}}{\partial t}$
$\mathbf{B} = \frac{1}{c_0} \hat{\mathbf{k}} \times \mathbf{E}$
The second of Maxwell's equations yields the magnetic field. The remaining equations will be satisfied by this choice of $\mathbf{E},\mathbf{B}$.
Not only are the electric and magnetic field waves in the far-field traveling at the speed of light, but they always have a special restricted orientation and proportional magnitudes, $E_0 = c_0 B_0$, which can be seen immediately from the Poynting vector. The electric field, magnetic field, and direction of wave propagation are all orthogonal, and the wave propagates in the same direction as $\mathbf{E} \times \mathbf{B}$.
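The relations just derived can be checked numerically: that 1/sqrt(μ0 ε0) reproduces the speed of light, and that a sample plane wave satisfies E ⟂ k, B = (1/c0) k × E, and E × B along k. A minimal sketch in plain Python; the particular unit vectors and field amplitude are arbitrary choices for the check:

```python
import math

# Check the results above: c0 = 1/sqrt(mu0*eps0) gives the speed of light,
# and a sample plane wave obeys E ⟂ k, B = (1/c0) k × E, |E0| = c0 |B0|,
# with E × B pointing along k. Field values are arbitrary test numbers.

mu0 = 4e-7 * math.pi        # vacuum permeability, H/m
eps0 = 8.8541878128e-12     # vacuum permittivity, F/m
c0 = 1.0 / math.sqrt(mu0 * eps0)   # ≈ 2.998e8 m/s

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def dot(a, b):
    return sum(x*y for x, y in zip(a, b))

k_hat = (0.0, 0.0, 1.0)   # propagation along z
E0 = (1.0, 0.0, 0.0)      # E along x, so E·k = 0 as the first equation demands
B0 = tuple(x / c0 for x in cross(k_hat, E0))   # B = (1/c0) k × E

assert dot(E0, k_hat) == 0.0                   # E orthogonal to k
assert dot(B0, k_hat) == 0.0                   # B orthogonal to k
assert abs(dot(E0, E0)**0.5 - c0 * dot(B0, B0)**0.5) < 1e-9   # E0 = c0*B0
S = cross(E0, B0)                              # Poynting direction, E × B
assert S[0] == 0.0 and S[1] == 0.0 and S[2] > 0.0   # energy flows along +z
```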
Also, E and B far-fields in free space, which as wave solutions depend primarily on these two Maxwell equations, are always in phase with each other. This is guaranteed since the generic wave solution is first order in both space and time, and the curl operator on one side of these equations results in first-order spatial derivatives of the wave solution, while the time derivative on the other side of the equations, which gives the other field, is first order in time, resulting in the same phase shift for both fields in each mathematical operation.

From the viewpoint of an electromagnetic wave traveling forward, the electric field might be oscillating up and down, while the magnetic field oscillates right and left; but this picture can be rotated, with the electric field oscillating right and left and the magnetic field oscillating down and up. This is a different solution that is traveling in the same direction. This arbitrariness in the orientation with respect to propagation direction is known as polarization. On a quantum level, it is described as photon polarization. The direction of the polarization is defined as the direction of the electric field.

More general forms of the second-order wave equations given above are available, allowing for both non-vacuum propagation media and sources. A great many competing derivations exist, all with varying levels of approximation and intended applications. One very general example is a form of the electric field equation,[18] which was factorized into a pair of explicitly directional wave equations, and then efficiently reduced into a single uni-directional wave equation by means of a simple slow-evolution approximation.

References
1. Carmichael, H. J. "Einstein and the Photoelectric Effect". Quantum Optics Theory Group, University of Auckland. Retrieved 22 December 2009.
2. Paul M. S. Monk (2004). Physical Chemistry. John Wiley and Sons. p. 435. ISBN 978-0-471-49180-4.
3. Weinberg, S. (1995).
The Quantum Theory of Fields 1. Cambridge University Press. pp. 15–17. ISBN 0-521-55001-7.
4. Binhi, Vladimir N; Repiev, A & Edelev, M (translators from Russian) (2002). Magnetobiology: Underlying Physical Problems. San Diego: Academic Press. pp. 1–16. ISBN 978-0-12-100071-4. OCLC 49700531.
5. Delgado JM, Leal J, Monteagudo JL, Gracia MG (May 1982). "Embryological changes induced by weak, extremely low frequency electromagnetic fields". Journal of Anatomy 134 (3): 533–51. PMC 1167891. PMID 7107514.
6. Harland JD, Liburdy RP (1997). "Environmental magnetic fields inhibit the antiproliferative action of tamoxifen and melatonin in a human breast cancer cell line". Bioelectromagnetics 18 (8): 555–62. doi:10.1002/(SICI)1521-186X(1997)18:8<555::AID-BEM4>3.0.CO;2-1. PMID 9383244.
7. Aalto S, Haarala C, Brück A, Sipilä H, Hämäläinen H, Rinne JO (July 2006). "Mobile phone affects cerebral blood flow in humans". Journal of Cerebral Blood Flow and Metabolism 26 (7): 885–90. doi:10.1038/sj.jcbfm.9600279. PMID 16495939.
8. Cleary SF, Liu LM, Merchant RE (1990). "In vitro lymphocyte proliferation induced by radio-frequency electromagnetic radiation under isothermal conditions". Bioelectromagnetics 11 (1): 47–56.
9. Czerska EM, Elson EC, Davis CC, Swicord ML, Czerski P (1992). "Effects of continuous and pulsed 2450-MHz radiation on spontaneous lymphoblastoid transformation of human lymphocytes in vitro". Bioelectromagnetics 13 (4): 247–59.
10.
11. See PMID 22318388 for evidence of quantum damage from visible light via reactive oxygen species generated in skin. This happens also with UVA. With UVB, the damage to DNA becomes direct, with photochemical formation of pyrimidine dimers.
12. Narayanan DL, Saladi RN, Fox JL (September 2010). "Ultraviolet radiation and skin cancer". International Journal of Dermatology 49 (9): 978–86. doi:10.1111/j.1365-4632.2010.04474.x. PMID 20883261.
13. Saladi RN, Persaud AN (January 2005). "The causes of skin cancer: a comprehensive review".
Drugs of Today 41 (1): 37–53. doi:10.1358/dot.2005.41.1.875777. PMID 15753968.
14. Kinsler, P. (2010). "Optical pulse propagation with minimal approximations". Phys. Rev. A 81: 013819. arXiv:0810.5689. Bibcode:2010PhRvA..81a3819K. doi:10.1103/PhysRevA.81.013819.

Further reading
• Hecht, Eugene (2001). Optics (4th ed.). Pearson Education. ISBN 0-8053-8566-5.
• Serway, Raymond A.; Jewett, John W. (2004). Physics for Scientists and Engineers (6th ed.). Brooks Cole. ISBN 0-534-40842-7.
• Tipler, Paul (2004). Physics for Scientists and Engineers: Electricity, Magnetism, Light, and Elementary Modern Physics (5th ed.). W. H. Freeman. ISBN 0-7167-0810-8.
• Reitz, John; Milford, Frederick; Christy, Robert (1992). Foundations of Electromagnetic Theory (4th ed.). Addison Wesley. ISBN 0-201-52624-7.
• Jackson, John David (1999). Classical Electrodynamics (3rd ed.). John Wiley & Sons. ISBN 0-471-30932-X.
• Taflove, Allen; Hagness, Susan C. (2005). Computational Electrodynamics: The Finite-Difference Time-Domain Method (3rd ed.). Artech House. ISBN 1-58053-832-0.
# cyclotron

[sahy-kluh-tron, sik-luh-] /ˈsaɪkləˌtrɒn, ˈsɪklə-/

cyclotron: see particle accelerator.

Particle accelerator that accelerates charged atomic or subatomic particles in a constant magnetic field. It consists of two hollow semicircular electrodes, called dees, in a large evacuated cylindrical box. An alternating electric field between the dees continuously accelerates the particles from one dee to the other, while the magnetic field guides them in a circular path. As the speed of the particles increases, so does the radius of their path, and the particles spiral outward. In this manner, a cyclotron can accelerate protons to energies of up to 25 million electron volts. Encyclopedia Britannica, 2008. Encyclopedia Britannica Online.

A cyclotron is a type of particle accelerator. Cyclotrons accelerate charged particles using a high-frequency, alternating voltage (potential difference). A perpendicular magnetic field causes the particles to spiral almost in a circle so that they re-encounter the accelerating voltage many times. Ernest Lawrence, of the University of California, Berkeley, is credited with the invention of the cyclotron in 1929. He used it in experiments that required particles with energies of up to 1 MeV.

## How the cyclotron works

The electrodes shown at the right would be in the vacuum chamber, which is flat, in a narrow gap between the two poles of a large magnet. In the cyclotron, a high-frequency alternating voltage applied across the "D" electrodes (also called "dees") alternately attracts and repels charged particles. The particles, injected near the center of the magnetic field, accelerate only when passing through the gap between the electrodes. The perpendicular magnetic field (passing vertically through the "D" electrodes), combined with the increasing energy of the particles, forces the particles to travel in a spiral path.
With no change in energy, charged particles in a magnetic field follow a circular path. In the cyclotron, energy is applied to the particles as they cross the gap between the dees, so they are accelerated (at the typical sub-relativistic speeds used), and their mass increases as they approach the speed of light. Either of these effects (increased velocity or increased mass) will increase the radius of the circle, so the path will be a spiral. (The particles move in a spiral because a current of electrons or ions, flowing perpendicular to a magnetic field, experiences a perpendicular force. The charged particles move freely in a vacuum, so the particles follow a spiral path.) The radius will increase until the particles hit a target at the perimeter of the vacuum chamber. Various materials may be used for the target, and the collisions will create secondary particles which may be guided out of the cyclotron and into instruments for analysis. The results enable the calculation of various properties, such as the mean spacing between atoms and the creation of various collision products. Subsequent chemical and particle analysis of the target material may give insight into nuclear transmutation of the elements used in the target.

## Uses of the cyclotron

For several decades, cyclotrons were the best source of high-energy beams for nuclear physics experiments; several cyclotrons are still in use for this type of research. Cyclotrons can be used to treat cancer. Ion beams from cyclotrons can be used, as in proton therapy, to penetrate the body and kill tumors by radiation damage, while minimizing damage to healthy tissue along their path. Cyclotron beams can also be used to bombard other atoms to produce short-lived positron-emitting isotopes suitable for PET imaging.

## Problems solved by the cyclotron

The cyclotron was an improvement over the linear accelerators that were available when it was invented.
A linear accelerator (also called a linac) accelerates particles in a straight line through an evacuated tube (or series of such tubes placed end to end). A set of electrodes shaped like flat donuts are arranged inside the length of the tube(s). These are driven by high-power radio waves that continuously switch between positive and negative voltage, causing particles traveling along the center of the tube to accelerate. In the 1920s, it was not possible to generate high-frequency radio waves at high power, so either the accelerating electrodes had to be far apart to accommodate the low frequency, or more stages were required to compensate for the low power at each stage. Either way, higher-energy particles required longer accelerators than scientists could afford. Modern linacs use high-power klystrons and other devices able to impart much more power at higher frequencies. But before these devices existed, cyclotrons were cheaper than linacs. Cyclotrons accelerate particles along a spiral path, so a compact accelerator can contain a much longer acceleration path than a linear accelerator of comparable size, with more opportunities to accelerate the particles. ## Advantages of the cyclotron • Cyclotrons have a single electrical driver, which saves both money and power, since more expense may be allocated to increasing efficiency. • Cyclotrons produce a continuous stream of particles at the target, so the average power is relatively high. • The compactness of the device reduces other costs, such as its foundations, radiation shielding, and the enclosing building. ## Limitations of the cyclotron The spiral path of the cyclotron beam can only "synch up" with klystron-type (constant frequency) voltage sources if the accelerated particles are approximately obeying Newton's laws of motion. If the particles become fast enough that relativistic effects become important, the beam gets out of phase with the oscillating electric field, and cannot receive any additional acceleration.
The cyclotron is therefore only capable of accelerating particles up to a few percent of the speed of light. To accommodate the increased mass, either the magnetic field may be modified by appropriately shaping the pole pieces, as in the isochronous cyclotron, or the machine may be operated in a pulsed mode, changing the frequency applied to the dees, as in the synchrocyclotron; either approach is limited by the diminishing cost-effectiveness of making larger machines. Cost limitations have been overcome by employing the more complex synchrotron or linear accelerator, both of which have the advantage of scalability, offering more power within an improved cost structure as the machines are made larger.

## Mathematics of the cyclotron

### Non-relativistic

The centripetal force is provided by the transverse magnetic field B, and the force on a particle travelling in a magnetic field (which causes it to be angularly displaced, i.e. to spiral) is equal to Bqv. So,

$$\frac{mv^2}{r} = Bqv$$

(where m is the mass of the particle, q is its charge, v is its velocity and r is the radius of its path). The speed at which the particles enter the cyclotron, due to an accelerating potential difference V, is

$$v = \sqrt{\frac{2Vq}{m}}.$$

Therefore,

$$\frac{v}{r} = \frac{Bq}{m}.$$

v/r is equal to the angular velocity ω, so

$$\omega = \frac{Bq}{m},$$

and since the angular frequency is

$$\omega = 2\pi f_c,$$

we get

$$f_c = \frac{Bq}{2\pi m}.$$

This shows that for a particle of constant mass, the frequency does not depend upon the radius of the particle's orbit. As the beam spirals out, its frequency does not decrease, and it must continue to accelerate, as it is travelling more distance in the same time. As particles approach the speed of light, they acquire additional mass, requiring modifications to the frequency or the magnetic field during the acceleration.
This is accomplished in the synchrocyclotron.

### Relativistic

The radius of curvature for a particle moving relativistically in a static magnetic field is

$$r = \frac{\gamma m v}{q B},$$

where

$$\gamma = \frac{1}{\sqrt{1-\left(\frac{v}{c}\right)^2}}$$

is the Lorentz factor. Note that in high-energy experiments energy, E, and momentum, p, are used rather than velocity, and both are measured in units of energy. In that case one should use the substitution

$$\frac{p}{E} = v,$$

where this is in natural units. The relativistic cyclotron frequency is

$$f = f_c \sqrt{1-\left(\frac{v}{c}\right)^2},$$

where $f_c$ is the classical frequency, given above, of a charged particle with velocity $v$ circling in a magnetic field. The rest mass of an electron is 511 keV, so the frequency correction is 1% for a magnetic vacuum tube with a 5.11 kV direct current accelerating voltage. The proton mass is nearly two thousand times the electron mass, so the 1% correction energy is about 9 MeV, which is sufficient to induce nuclear reactions. An alternative to the synchrocyclotron is the isochronous cyclotron, which has a magnetic field that increases with radius, rather than with time. The de-focusing effect of this radial field gradient is compensated by ridges on the magnet faces which vary the field azimuthally as well. This allows particles to be accelerated continuously, on every period of the radio frequency, rather than in bursts as in most other accelerator types. This principle that alternating field gradients have a net focusing effect is called strong focusing. It was obscurely known theoretically long before it was put into practice.
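Both the classical frequency and the relativistic correction are easy to evaluate numerically. The sketch below is an illustration of my own (the 1.5 T field and the beam energies are arbitrary choices, not from the article; the particle constants are standard values): it computes $f_c = Bq/2\pi m$ for a proton and checks the two "1%" figures quoted above via $f/f_c = 1/\gamma$ with $\gamma = 1 + \mathrm{KE}/mc^2$.

```python
import math

q = 1.602176634e-19   # elementary charge, C
m = 1.67262192e-27    # proton mass, kg
B = 1.5               # magnetic field, T (assumed value)

# Non-relativistic cyclotron frequency -- independent of orbit radius:
f_c = q * B / (2 * math.pi * m)
print(f"f_c = {f_c / 1e6:.1f} MHz")   # ~22.9 MHz for a proton at 1.5 T

# Relativistic correction: f = f_c / gamma, with gamma = 1 + KE/(m c^2).
def freq_ratio(kinetic_kev, rest_kev):
    return 1.0 / (1.0 + kinetic_kev / rest_kev)

print(freq_ratio(5.11, 511.0))        # electron at 5.11 keV -> ~0.990 (1% slow-down)
print(freq_ratio(9383.0, 938272.0))   # proton at ~9.4 MeV  -> ~0.990
```

A kinetic energy of 1% of the rest energy gives $\gamma = 1.01$, hence the roughly 1% frequency drop in both cases.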
## Related technologies • The spiraling of electrons in a cylindrical vacuum chamber within a transverse magnetic field is also employed in the magnetron, a device for producing high frequency radio waves (microwaves). • The Synchrotron moves the particles through a path of constant radius, allowing it to be made as a pipe and so of much larger radius than is practical with the cyclotron and synchrocyclotron. The larger radius allows the use of numerous magnets, each of which imparts angular momentum and so allows particles of higher velocity (mass) to be kept within the bounds of the evacuated pipe. The magnetic field strength of each of the bending magnets is increased as the particles gain energy in order to keep the bending angle constant. ## See also • Beamline • cyclotron radiation, synchrotron light or its close relative, bremsstrahlung radiation. • electron cyclotron resonance • Gyrotron • ion cyclotron resonance • Linear accelerator • Particle accelerator • Storage ring • synchrocyclotron • synchrotron • TRIUMF - largest cyclotron in the world • Radiation reaction ## External links • -- Method and apparatus for the acceleration of ions • Indiana University Cyclotron Facility MPRI treats first patient using robotic gantry system. • "The 88-Inch Cyclotron at LBNL" • "The NSCL at Michigan State University" Home of coupled K500 and K1200 cyclotrons; the K500, being the first superconducting cyclotron and the K1200 currently the most powerful in the world. • Rutgers Cyclotron and "Building a Cyclotron on a Shoestring" Tim Koeth, now a graduate student at Rutgers University, built a 12-inch 1 MeV cyclotron as an undergraduate project, which is now used for a senior-level undergraduate and a graduate lab course. • "Cyclotron java applet" • "Resonance Spectral Analysis with a Homebuilt Cyclotron" an experiment done by Fred M. Niell, III his senior year of high school (1994-95) with which he won the overall grand prize in the ISEF. 
• Relativistic accelerator physics PDF • web site of company IBA • The Cyclotron Kids A group of high school students attempting to build their own 1 MeV cyclotron for experimentation. Last updated on Thursday September 18, 2008 at 19:08:30 PDT (GMT -0700)
http://users.telenet.be/vdmoortel/dirk/Physics/Acceleration.html
Home Is Where The Wind Blows SR treatment of arbitrarily accelerated motion I posted this a while ago on Usenet in group sci.physics.relativity. See the thread: a3M1b.82977\[email protected] ( V1.7 - Most recent additions and modifications: 6-May-2013 ) This is the MathJax version. FWIW, I did this as a little exercise a few weeks ago on a vacation day when it was too hot to do anything else. Since there are a few active threads on accelerated motion and whether it can or cannot be treated by special relativity, I decided to post this for those who might be interested. Everything is pretty straightforward, so I have not attached numbers to the equations. Feel free to point out the typos and errors. See also http://hermes.physics.adelaide.edu.au/~dkoks/Faq/Relativity/SR/acceleration.html http://groups.google.co.uk/groups?&[email protected] Added later: Time rate of arbitrarily accelerated clock observed by arbitrarily accelerated observer: Once we know the acceleration that is "felt" by an observer as a function of his own proper time (called the proper acceleration), which can be very easily measured, this function can be used to calculate every aspect of his motion as seen by any interested inertial observer. Suppose I is an inertial observer and A is an arbitrarily accelerated observer that constantly monitors the acceleration he feels as a function $a(\tau)$ of his own proper time $\tau$ (this is "tau"). At each time $\tau$, observer A can write the amount with which his velocity has changed during a small (infinitesimal) proper time interval $d \tau$ during which the acceleration does not change as $dV(\tau) = a(\tau)\, d \tau$. This change of velocity is to be regarded with respect to the instantaneously comoving inertial frame at time $\tau$.
Observer I can parametrize the worldline of A with the same proper time $\tau$, so he will see infinitesimally consecutive velocities of A as $v(\tau)$ and $v(\tau+d \tau)$. Since $v(\tau+d \tau)$ is the standard SR composition ('addition') of the velocities $v(\tau)$ and $dV(\tau)$, we can write (using $c=1$): $$v(\tau+d \tau) = \frac{ v(\tau) + dV(\tau) }{ 1 + v(\tau) dV(\tau) }$$ This part was added for the benefit of Mr. "Mike" aka "Bill Smith" aka "Undeniable" aka "Eleatis". See also the threads "An interesting SR puzzle" and "The Bell Tolls for Dinky-Donkey". You know that SR says that     "When an object P has a relative velocity u w.r.t.     an inertial frame Q , of which the observer itself     has a relative velocity v w.r.t. an inertial observer R ,     then the object P has a relative velocity         w = (u+v)/(1+u v)     w.r.t. the observer R." Now replace     P => observer A at proper time T+dT     Q => instantaneously comoving inertial frame at proper time T of observer A     R => initial rest frame I     u => dV(T)     v => v(T)     w => v(T+dT) We must do this because it must be valid for *all* values of T, and v(T) becomes quite large, so using the Galilean transformation here would be totally wrong. Together with the fact that dV(T) can be written as         dV(T) = a(T) dT because dT is infinitesimal, that is the very essence of the derivation.
which, using the expression for $dV(\tau)$, becomes $$v(\tau+d \tau) = \frac{ v(\tau) + a(\tau) d \tau }{ 1 + v(\tau) a(\tau) d \tau }.$$ To calculate $\frac{dv}{d \tau}(\tau)$ we write $$\frac{ v(\tau+d \tau) - v(\tau) }{d \tau} = a(\tau) \frac{1-v^2(\tau)}{ 1 + v(\tau) a(\tau) d \tau }$$ and take the limit $d \tau \rightarrow 0$, giving $$\frac{dv}{d \tau}(\tau) = a(\tau) ( 1 - v^2(\tau) ).$$ Rearranging to $$\frac{dv(\tau)}{ 1 - v^2(\tau) } = a(\tau) d \tau$$ and integrating between $\tau_0$ and $\tau$, writing $v_0 = v(\tau_0)$, this becomes $$\text{artanh}(v(\tau)) - \text{artanh}(v_0) = \int_{\tau_0}^\tau a(\tau') d \tau',$$ which, using the abbreviation $$A(\tau) = \int_{\tau_0}^\tau a(\tau') d \tau',$$ produces $$\text{artanh}(v(\tau)) - \text{artanh}(v_0) = A(\tau).$$ Inverting to $$\frac{ v(\tau) - v_0 }{ 1 - v(\tau) v_0 } = \tanh(A(\tau))$$ and isolating $v(\tau)$ gives $$v(\tau) = \frac{ v_0 + \tanh(A(\tau)) }{ 1 + v_0 \tanh(A(\tau)) },$$ or, written more tersely: $$v(\tau) = \tanh( A(\tau) + \text{artanh}(v_0) ),$$ which, with $v_0 = 0$, reduces to $$v(\tau) = \tanh(A(\tau)).$$ Use the standard SR Lorentz transformation (with $c=1$) between the frame I and the instantaneously comoving inertial frame of A at time $\tau$: $$dt = \gamma(\tau) ( d \tau + v(\tau) d \xi )$$ $$dx = \gamma(\tau) ( d \xi + v(\tau) d \tau ).$$ Since we are working on the worldline of I where $\xi=0$ and thus $d \xi=0$, we get: $$\begin{align} \frac{dt}{d \tau} &= \gamma(\tau) \\ &= \frac{1} { \sqrt{1-v^2(\tau) } } \\ &= \frac{1} { \sqrt{1-\tanh^2( A(\tau)+\text{artanh}(v_0) ) } } \\ &= \cosh( A(\tau)+\text{artanh}(v_0) ) \end{align}$$ and $$\begin{align} \frac{dx}{d \tau} &= \gamma(\tau) v(\tau) \\ &= \frac{v(\tau)}{ \sqrt{ 1-v^2(\tau) } } \\ &= \frac{\tanh( A(\tau)+\text{artanh}(v_0) )}{ \sqrt{ 1-\tanh^2( A(\tau)+\text{artanh}(v_0) ) } } \\ &= \sinh( A(\tau)+\text{artanh}(v_0) ) \end{align}$$ Integrated between $\tau_0$ and $\tau$, and with $x_0 = x(\tau_0)$ and $t_0 = 
t(\tau_0)$, this results in $$x(\tau) = x_0 + \int_{\tau_0}^\tau \sinh( A(\tau')+\text{artanh}(v_0) ) d \tau'$$ $$t(\tau) = t_0 + \int_{\tau_0}^\tau \cosh( A(\tau')+\text{artanh}(v_0) ) d \tau'.$$ If possible, eliminate $\tau$ to find the equation of the worldline $x(t)$ of the accelerated observer A in the frame of the inertial observer I: $$x(t) = \dots$$ If possible, invert the expression for $t(\tau)$ to find the proper time of A as a function of the coordinate time $t$ of I: $$\tau(t) = \dots$$ and use it to find the velocity $v(t)$ as a function of coordinate time $t$: $$v(t) = \tanh( A(\tau(t))+\text{artanh}(v_0) ).$$ Summary: $x$ and $t$: coordinates of object as seen in inertial frame $a(\tau)$: felt proper acceleration as function of proper time $\tau$ $$A(\tau) = \int_{\tau_0}^\tau a(\tau') d \tau'$$ $$v(\tau) = \tanh( A(\tau)+\text{artanh}(v_0) )$$ $$\frac{dt}{d \tau} = \cosh( A(\tau)+\text{artanh}(v_0) )$$ $$\frac{dx}{d \tau} = \sinh( A(\tau)+\text{artanh}(v_0) )$$ $$t(\tau) = t_0 + \int_{\tau_0}^\tau \cosh( A(\tau')+\text{artanh}(v_0) ) d \tau'$$ $$x(\tau) = x_0 + \int_{\tau_0}^\tau \sinh( A(\tau')+\text{artanh}(v_0) ) d \tau'$$         Eliminate $\tau$ to find the worldline equation $x(t)$. Note: These equations are derived in a different way in the article http://arxiv.org/PS_cache/physics/pdf/0411/0411233v1.pdf See equations (3), (4), (5) on page 3. 
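The closed form $v(\tau) = \tanh( A(\tau)+\text{artanh}(v_0) )$ from the summary can be cross-checked numerically by composing the infinitesimal boosts $dV = a\, d\tau$ step by step, exactly as in the derivation. A quick sketch (constant $a$, and all the numeric values, are arbitrary choices of mine):

```python
import math

# Iterate the relativistic velocity composition with small boosts dV = a*dtau
# and compare against the closed-form v(tau) = tanh(A(tau) + artanh(v0)).
a = 1.0          # constant proper acceleration (units with c = 1)
v0 = 0.3         # initial velocity
dtau = 1e-5
steps = 100_000  # total proper time tau = 1.0, so A(tau) = a*tau = 1.0

v = v0
for _ in range(steps):
    dV = a * dtau
    v = (v + dV) / (1.0 + v * dV)   # SR velocity composition per step

closed = math.tanh(a * steps * dtau + math.atanh(v0))
print(v, closed)   # the two values agree very closely
```

The agreement reflects the fact that rapidities (inverse hyperbolic tangents of velocities) add linearly under composition.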
Special case When $\tau_0 = t_0 = x_0 = v_0 = 0$, $$v(\tau) = \tanh(A(\tau))$$ $$\frac{dt}{d \tau} = \cosh(A(\tau))$$ $$\frac{dx}{d \tau} = \sinh(A(\tau))$$ $$t(\tau) = \int_0^\tau \cosh(A(\tau')) d \tau'$$ $$x(\tau) = \int_0^\tau \sinh(A(\tau')) d \tau'$$ Example: The rocket with constant acceleration of the FAQ: http://hermes.physics.adelaide.edu.au/~dkoks/Faq/Relativity/SR/rocket.html Take $$\tau_0 = t_0 = x_0 = v_0 = 0$$ and $$a(\tau) = a = \text{constant}.$$ So $$A(\tau) = \int_0^\tau a(\tau') d \tau' = a \tau$$ and $$v(\tau) = \tanh(A(\tau)) = \tanh(a \tau)$$ $$t(\tau) = \int_0^\tau \cosh(a \tau') d \tau' = \frac{1}{a} \sinh(a \tau).$$ $$x(\tau) = \int_0^\tau \sinh(a \tau') d \tau' = \frac{1}{a} ( \cosh(a \tau) - 1 )$$ Eliminate $\tau$: $$\left(x+\frac{1}{a}\right)^2 - t^2 = \frac{1}{a^2}$$ giving the hyperbola $$x(t) = \frac{1}{a} \left( \sqrt{ 1 + (a t)^2 } - 1 \right).$$ Proper time as a function of coordinate time: $$\tau(t) = \frac{1}{a} \text{arsinh}(a t)$$ so $$\begin{align} v(t) &= \tanh( a \tau(t) ) \\ &= \tanh( \text{arsinh}(a t) ) \\ &= \frac{a t}{ \sqrt{ 1 + (a t)^2 } }. \end{align}$$ Re-introduce c and we find as functions of proper time $\tau$: $$v(\tau) = c \tanh\left(\frac{a \tau}{c}\right)$$ $$\gamma(\tau) = \cosh\left(\frac{a \tau}{c}\right)$$ $$t(\tau) = \frac{c}{a} \sinh\left(\frac{a \tau}{c}\right)$$ $$x(\tau) = \frac{c^2}{a} \left( \cosh\left(\frac{a \tau}{c}\right) - 1 \right)$$ and as functions of coordinate time t: $$v(t) = \frac{a t}{ \sqrt{ 1 + \left(\frac{a t}{c}\right)^2 } }$$ $$\gamma(t) = \sqrt{ 1 + \left(\frac{a t}{c}\right)^2 }$$ $$\tau(t) = \frac{c}{a} \text{arsinh}\left(\frac{a t}{c}\right)$$ $$x(t) = \frac{c^2}{a} \left( \sqrt{ 1 + \left(\frac{a t}{c}\right)^2 } -1 \right) \qquad (\text{the hyperbola})$$ which are the equations of the FAQ entry. 
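As a worked example of these formulas (the 1 g acceleration and the one-year duration are illustrative choices of mine, not from the text):

```python
import math

c = 2.998e8        # speed of light, m/s
g = 9.81           # proper acceleration, m/s^2 (assumed: 1 g)
year = 3.156e7     # seconds per year
ly = 9.461e15      # metres per light-year

def rocket(tau):
    """Coordinate time, distance and speed after proper time tau at constant g."""
    phi = g * tau / c                          # rapidity a*tau/c
    t = (c / g) * math.sinh(phi)
    x = (c ** 2 / g) * (math.cosh(phi) - 1.0)
    v = c * math.tanh(phi)
    return t, x, v

t, x, v = rocket(1.0 * year)
print(f"t = {t / year:.2f} yr, x = {x / ly:.2f} ly, v = {v / c:.3f} c")
```

This reproduces the familiar figures from the relativistic rocket FAQ: after one year of proper time at 1 g, roughly 1.19 years of coordinate time have elapsed, about 0.56 light-years have been covered, and the speed is about 0.78 c.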
[ added on 9-Jul-2009 ] Furthermore we have the momentum given by $$p(t) = m \gamma(t) v(t) = m a t \qquad (\text{linear})$$ and the force $$F(t) = \frac{dp}{dt}(t) = m a \qquad (\text{constant})$$ Dirk Vdm Much more straightforward and concise, using 4-vector formalism Posted on Usenet: Putting $c = 1$ and using accent notation for the derivative w.r.t. proper time $\frac{d}{d \tau}$: Velocity 4-vector $$V = ( t', x', 0, 0 ) = ( \frac{dt}{d \tau}, \frac{dx}{d \tau}, 0, 0 ).$$ Acceleration 4-vector $$V' = ( t'', x'', 0, 0 ) = ( \frac{d^2t}{d \tau^2}, \frac{d^2x}{d \tau^2}, 0, 0 )$$ with unknown functions $t(\tau)$ and $x(\tau)$. You have $V.V = 1$, giving $$\left(\frac{dt}{d \tau}\right)^2 - \left(\frac{dx}{d \tau}\right)^2 = 1.$$ Since $\frac{dV}{d \tau}.\frac{dV}{d \tau}$ is invariant, it must have the value it has in the comoving inertial frame, giving $$\left(\frac{d^2t}{d \tau^2}\right)^2 - \left(\frac{d^2x}{d \tau^2}\right)^2 = 0 - a^2.$$ These 2 differential equations are pretty common and easily solved. With the boundary conditions $t(0) = 0$ and $x(0) = 0$: $$t(\tau) = \frac{1}{a} \sinh( a \tau )$$ $$x(\tau) = \frac{1}{a} \cosh( a \tau ) - \frac{1}{a}$$ from which you can eliminate the proper time $\tau$ and find the hyperbola $$x(t) = \frac{1}{a} \left( \sqrt{ 1 + (a t)^2 } -1 \right)$$ For an "interesting" comment, see also [email protected] (-: Dirk Van de moortel ;-)
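The quoted solution can be sanity-checked with finite differences: along the worldline the two invariants $t'^2 - x'^2 = 1$ and $t''^2 - x''^2 = -a^2$ should hold. A small sketch of my own, with arbitrarily chosen $a$ and $\tau$ (units with $c = 1$):

```python
import math

# Check that t(tau) = sinh(a tau)/a and x(tau) = (cosh(a tau) - 1)/a satisfy
# t'^2 - x'^2 = 1 and t''^2 - x''^2 = -a^2, using central finite differences.
a = 2.0
t = lambda tau: math.sinh(a * tau) / a
x = lambda tau: (math.cosh(a * tau) - 1.0) / a

h, tau = 1e-4, 0.7
tp  = (t(tau + h) - t(tau - h)) / (2 * h)             # dt/dtau
xp  = (x(tau + h) - x(tau - h)) / (2 * h)             # dx/dtau
tpp = (t(tau + h) - 2 * t(tau) + t(tau - h)) / h**2   # d^2 t/dtau^2
xpp = (x(tau + h) - 2 * x(tau) + x(tau - h)) / h**2   # d^2 x/dtau^2

print(tp**2 - xp**2)    # close to  1
print(tpp**2 - xpp**2)  # close to -4 = -a^2
```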
http://nrich.maths.org/1785/solution
# Largest Product ##### Stage: 3 Challenge Level: Thank you to all those who tried this problem, there were a large number of solutions received. Luke and Alex from Aqueduct Primary sent in their solution: The largest product that we could find was $2.5 \times 2.5 \times 2.5 \times 2.5 = 39.06$. We spent hours looking at other numbers but realised that larger numbers often made smaller products. This was a really fun challenge! Matthew from Bydales realised that we have to exclude negatives, otherwise solutions like $99\times - 44\times - 45 = 196 020$ are possible. He wrote: Using negative numbers you can make the number as big as you like as long as the other two numbers are negative and will leave you with 10 when they are added. You can therefore get close to infinity if negatives are allowed! Thank you to all of the children at St George's CE Primary School who had a go at this problem. Nathan, Otis, Hannah and Leon all sent in the correct answer. Saif from Durston School, Sion and Daniel from TES and Cameron from Tokoroa Intermediate in New Zealand all sent in correct solutions as well. Andy from Garden International School sent in a very clearly explained solution showing his working: First we divided ten by $2$, then ten by $3$, then ten by $4$ and so on until we reach $9$. Then we multiplied the divided number. In the case of $2$ is: $5\times 5$ In the case of $3$ is: $3\frac{1}{3}\times3\frac{1}{3}\times3\frac{1}{3}$. We divided ten because the same number multiplied together would give the largest product.
We started with two because there's no point using one and we end with $9$ because there's no point multiplying $1$ by $1$ nine times. In the end $2 \frac{1}{2} \times 2 \frac{1}{2} \times 2 \frac{1}{2} \times 2 \frac{1}{2}$ gives the largest product and the product is $39.0625$. Kang Yun Seok, also from Garden International School, sent in a complete solution and discussed how similar numbers give a larger product due to the properties of shapes like squares and circles. This was also discussed in other solutions including the one sent in by Mikey of Tadcaster School. The largest product of any two numbers is from numbers that are as similar as possible. That is why a $20 \times 20$ square has bigger area than a rectangle of the same perimeter - say, $39 \times 1$ ... (that is because the biggest shape you can make with a length of string is a circle or square instead of a long thin rectangle). So the largest product from ten is... with $2$ numbers that add up to ten: $5 \times 5$; with $3$ numbers that add up to ten: $3.333 \times 3.333 \times 3.333$; 4... 5... and the largest of these products is: $2 \frac{1}{2} \times 2 \frac{1}{2} \times 2 \frac{1}{2} \times 2 \frac{1}{2} = 39.0625$. It soon becomes apparent that when $10$ is split into $n$ parts (where $n$ is a whole number), the largest product is given by $(\frac{10}{n})^n$. So the problem is in fact to find the largest value of $(\frac{10}{n})^n$. The largest value of $(\frac{10}{n})^n$ is when $n=4$, that is $39.0625$. This is the largest product. Thomas Hu from A Y Jackson school used his knowledge of the Euler number ($e=2.71828...$) to find the maximal solution for all values of $x$. Given the product $(\frac{x}{n})^n$ (where $x$ is the number in question and $n$ is the number of parts it is being divided into), it is already clear that repeated multiplication of the same number $(2.5^4)$ is greater than that of two different numbers $(4\times 6)$ due to maximization and difference of squares: $(x-a)(x+a) = x^2-a^2$.
Although the optimal $n$ for $10$ was stated to be $4$, that is only true if one assumes that $n$ must be an integer. Otherwise, $n = 3.7, 3.68, 3.679, 3.6788$ each provides increasingly larger products. But these cannot work as the numbers must still add to the value $x$. But to maximise $(\frac{x}{n})^n$, where $n$ is an integer, then $n$ must be chosen to make $\frac{x}{n}$ as close as possible to $e$. Or in technical notation, to minimise the absolute value of $\frac{x}{n}-e$ subject to $n$ an integer for given $x$. Well done Thomas, you've really got the hang of this problem. For those who don't understand his notation, you have to find an integer $n$ to make $\frac{x}{n}$ as close as possible to $2.7$. You then use this integer to find the answer. For $x=10$, $n=4$ and so $\frac{x}{n}$ is $2.5$. So the sum is $10$, and the product is $(\frac{x}{n})^n$, which is $39.0625$ in this case. Well done everyone on a very tough problem!
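Thomas's observation is easy to confirm by brute force (a short sketch; the variable names are mine):

```python
from math import e

# Split x = 10 into n equal parts and compare the products (10/n)^n.
x = 10
products = {n: (x / n) ** n for n in range(1, 10)}
best = max(products, key=products.get)
print(best, products[best])   # n = 4 wins, with product 39.0625

# The winning n is also the one that brings x/n closest to e = 2.718...
closest_to_e = min(range(1, 10), key=lambda n: abs(x / n - e))
print(closest_to_e)
```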
http://mathoverflow.net/questions/73163?sort=oldest
## Lifting infinitesimal deformations for coverings

Let $f:X \rightarrow Y$ be an (unramified) holomorphic covering map between two (maybe non-compact) complex manifolds.

Q: Does every infinitesimal deformation of $Y$ lift faithfully to an infinitesimal deformation of $X$, i.e. is there a canonical injective map $l:H^1(Y, \Theta_Y) \rightarrow H^1(X,\Theta_X)$? If not, do you know a counterexample? Thanks!

- I think that if you look at the universal covering of a compact Riemann surface of genus at least two, i.e. covering by the disk, there are no infinitesimal deformations of the disk, so the lifting is not faithful. – Ben McKay Aug 18 2011 at 16:24

## 1 Answer

Since $f$ is unramified, $f^*\Theta_Y = \Theta_X$, so the map you want is just the map induced by pullback. The map will be injective if $f$ is a finite covering, since then the natural inclusion $\Theta_X \to f_*f^*\Theta_X$ splits by using the trace. In general it will not be injective. For an example you can consider an elliptic curve $Y = \mathbb{C}/\Lambda$ with $\Lambda$ a lattice (and $X = \mathbb{C}$). $H^1(Y,\Theta_Y)$ is $1$-dimensional but the corresponding space for $X$ is $0$.
http://unapologetic.wordpress.com/2008/09/24/inverses-of-power-series/?like=1&source=post_flair&_wpnonce=966197e62c
# The Unapologetic Mathematician

## Inverses of Power Series

Now that we know how to compose power series, we can invert them. But against expectations I’m talking about multiplicative inverses instead of compositional ones. More specifically, say we have a power series expansion

$\displaystyle p(z)=\sum\limits_{n=0}^\infty p_nz^n$

within the radius $r$, and such that $p(0)=p_0\neq0$. Then there is some radius $\delta$ within which the reciprocal has a power series expansion

$\displaystyle\frac{1}{p(z)}=\sum\limits_{n=0}^\infty q_nz^n$

In particular, we have $q_0=\frac{1}{p_0}$. In the proof we may assume that $p_0=1$ — we can just divide the series through by $p_0$ — and so $p(0)=1$. We can set

$\displaystyle P(z)=1+\sum\limits_{n=1}^\infty\left|p_nz^n\right|$

within the radius $r$. Since we know that $P(0)=1$, continuity tells us that there’s $\delta$ so that $|z|<\delta$ implies $|P(z)-1|<1$. Now we set

$\displaystyle f(z)=\frac{1}{1-z}=\sum\limits_{n=0}^\infty z^n$

$\displaystyle g(z)=1-p(z)=\sum\limits_{n=1}^\infty -p_nz^n$

And then we can find a power series expansion of $f\left(g(z)\right)=\frac{1}{p(z)}$. It’s interesting to note that you might expect a reciprocal formula to follow from the multiplication formula. Set the product of $p(z)$ and an undetermined $q(z)$ to the power series $1+0z+0z^2+...$, and get an infinite sequence of algebraic conditions determining $q_n$ in terms of the $p_i$. Showing that these can all be solved is possible, but it’s easier to come around the side like this.

Posted by John Armstrong | Analysis, Calculus, Power Series
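The coefficient-matching route mentioned in the last paragraph is easy to carry out in code: equating $p(z)q(z) = 1+0z+0z^2+...$ term by term gives $q_0 = 1/p_0$ and, for $n \geq 1$, $q_n = -\frac{1}{p_0}\sum_{k=1}^{n} p_k q_{n-k}$. A small sketch (the function name is mine):

```python
from fractions import Fraction

def reciprocal_coeffs(p, N):
    """First N+1 coefficients of 1/p(z), given coefficients p with p[0] != 0."""
    q = [Fraction(1) / p[0]]
    for n in range(1, N + 1):
        # q_n = -(1/p_0) * sum_{k=1}^{n} p_k q_{n-k}  (missing p_k count as 0)
        s = sum(p[k] * q[n - k] for k in range(1, min(n, len(p) - 1) + 1))
        q.append(-s / p[0])
    return q

# Sanity check: p(z) = 1 + z + z^2 + ... = 1/(1-z), whose reciprocal is 1 - z.
p = [Fraction(1)] * 6
print([float(c) for c in reciprocal_coeffs(p, 5)])   # [1.0, -1.0, 0.0, 0.0, 0.0, 0.0]
```

Each $q_n$ is determined by the earlier ones, which is exactly the "infinite sequence of algebraic conditions" the post refers to.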
http://www.reference.com/browse/Koszul_complex
# Koszul complex

In mathematics, the Koszul complex was first introduced to define a cohomology theory for Lie algebras, by Jean-Louis Koszul (see Lie algebra cohomology). It turned out to be a useful general construction in homological algebra.

## Introduction

In commutative algebra, if x is an element of the ring R, multiplication by x is R-linear and so represents an R-module homomorphism x: R → R from R to itself. It is useful to throw in zeroes on each end and make this a (free) R-complex:

$$0 \to R \xrightarrow{x} R \to 0.$$

Call this chain complex K•(x). Counting the right-hand copy of R as the zeroth degree and the left-hand copy as the first degree, this chain complex neatly captures the most important facts about multiplication by x because its zeroth homology is exactly the homomorphic image of R modulo the multiples of x, H0(K•(x)) = R/xR, and its first homology is exactly the annihilator of x, H1(K•(x)) = AnnR(x). This chain complex K•(x) is called the Koszul complex of R with respect to x. Now, if x1, x2, ..., xn are elements of R, the Koszul complex of R with respect to x1, x2, ..., xn, usually denoted K•(x1, x2, ..., xn), is the tensor product in the category of R-complexes of the Koszul complexes defined above individually for each i. The Koszul complex is a free chain complex. There are exactly (n choose j) copies of the ring R in the jth degree in the complex (0 ≤ j ≤ n). The matrices involved in the maps can be written down precisely. Letting $e_{i_1 \dots i_p}$ denote a free-basis generator in $K_p$, the differential $d: K_p \to K_{p-1}$ is defined by:

$$d(e_{i_1 \dots i_p}) := \sum_{j=1}^{p} (-1)^{j-1} x_{i_j} e_{i_1 \dots \widehat{i_j} \dots i_p}.$$

For the case of two elements x and y, the Koszul complex can then be written down quite succinctly as

$$0 \to R \xrightarrow{d_2} R^2 \xrightarrow{d_1} R \to 0,$$

with the matrices $d_1$ and $d_2$ given by

$$d_1 = \begin{bmatrix} x & y \end{bmatrix}$$

and

$$d_2 = \begin{bmatrix} -y \\ x \end{bmatrix}.$$
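The differential above satisfies d ∘ d = 0, which is what makes K• a complex. As a quick sanity check (not a proof), the sketch below implements the formula on sorted index tuples and evaluates it at arbitrary sample integer values of the x_i; the cancellation x_i x_j − x_j x_i = 0 is visible directly:

```python
# Sanity check that the Koszul differential squares to zero.
# A basis element e_{i_1...i_p} is a sorted tuple of indices; a chain is a
# dict mapping basis tuples to coefficients. Ring elements x_1, x_2, x_3
# are modelled by arbitrary sample integers.
xs = {1: 3, 2: -5, 3: 7}

def d(chain):
    """Apply d(e_{i_1...i_p}) = sum_j (-1)^(j-1) x_{i_j} e_{i_1...^i_j...i_p}."""
    out = {}
    for basis, coeff in chain.items():
        for j, i in enumerate(basis):          # j starts at 0, matching j = 1 in the formula
            face = basis[:j] + basis[j + 1:]
            out[face] = out.get(face, 0) + (-1) ** j * xs[i] * coeff
    return {b: c for b, c in out.items() if c != 0}

print(d({(1, 2): 1}))        # x_1 e_2 - x_2 e_1, i.e. {(2,): 3, (1,): 5}
print(d(d({(1, 2, 3): 1})))  # empty: d o d = 0
```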
Note that $d_i$ is applied on the left. The cycles in degree 1 are then exactly the linear relations on the elements x and y, while the boundaries are the trivial relations. The first Koszul homology H1(K•(x, y)) therefore measures exactly the relations mod the trivial relations. With more elements the higher-dimensional Koszul homologies measure the higher-level versions of this.

In the case that the elements x1, x2, ..., xn form a regular sequence, the higher homology modules of the Koszul complex are all zero, so K•(x1, x2, ..., xn) forms a free resolution of the R-module R/(x1, x2, ..., xn)R.

## Example

If k is a field and X1, X2, ..., Xd are indeterminates and R is the polynomial ring k[X1, X2, ..., Xd], the Koszul complex K•(X1, ..., Xd) on the Xi forms a concrete free R-resolution of k.

## Theorem

If (R, m) is a local ring and M is a finitely-generated R-module with x1, x2, ..., xn in m, then the following are equivalent:

1. The (xi) form a regular sequence on M,
2. H1(K•(xi)) = 0,
3. Hj(K•(xi)) = 0 for all j ≥ 1.

## Applications

The Koszul complex is essential in defining the joint spectrum of a tuple of commuting bounded linear operators on a Banach space.

## References

• David Eisenbud, Commutative Algebra. With a View toward Algebraic Geometry, Graduate Texts in Mathematics, vol. 150, Springer-Verlag, New York, 1995. ISBN 0-387-94268-8

Last updated on Sunday August 19, 2007 at 22:38:19 PDT (GMT -0700)
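The two-element complex above is small enough to check by hand, but here is a quick symbolic sanity check (a sketch using sympy) that the composite $d_1 d_2$ vanishes, so the maps really do form a chain complex:

```python
import sympy as sp

x, y = sp.symbols('x y')

# The two differentials of the Koszul complex K.(x, y):
#   0 -> R --d2--> R^2 --d1--> R -> 0
d1 = sp.Matrix([[x, y]])         # 1 x 2 matrix
d2 = sp.Matrix([[-y], [x]])      # 2 x 1 matrix

# d1 * d2 must be zero for this to be a chain complex:
# the single entry is x*(-y) + y*x = 0.
composite = (d1 * d2).expand()
print(composite)  # Matrix([[0]])
```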
http://mathhelpforum.com/discrete-math/106593-how-do-you-determine-total-amount-positive-factors-number.html
# Thread:

1. ## How do you determine the total amount of positive factors of a number

Part of a homework assignment I'm stuck on from my discrete math class. I suspect this may belong in another forum, but I wasn't sure which one.

Part a.) Find the prime factorization of 7056. okay.. no problem: $7056 = 2 * 3528 = 2^2 * 1764 = 2^3 * 882 = 2^4 * 441 = 2^4 * 3 * 147 = 2^4 * 3^2 * 49 = 2^4 * 3^2 * 7^2$

Part b.) How many positive factors are there of 7056? I'm stuck on this. I used an example from a previous question on the homework, where I discovered that $48 = 2^4 * 3$ has 10 positive factors (namely 1, 2, 3, 4, 6, 8, 12, 16, 24, and 48) and 10 identical negative factors. I can't seem to derive the method to determine the total number of factors based on the powers of each individual prime factor, and I certainly don't want to write them all out. My best guess is 36, which I figured by adding each exponent, then adding each exponent multiplied by the other exponents in turn, then adding all three exponents multiplied together: (1+4+2+2+1)+(4*2+4*2+2*2)+(4*2*2)=10+20+16=36. Something seems fishy about my methodology though.. Thanks for the help!

2. Just a counting argument: any factor of that number is going to share some or all of the prime factors. Let $n=p_1^{e_1}\cdot... \cdot p_n^{e_n}$. Then any factor of $n$ will have 0 factors of $p_1$, 1 factor of $p_1$, ..., or $e_1$ factors of $p_1$, and similarly for each prime. Thus the total number of possible combinations of factors will be $(e_1+1)\cdot ... \cdot (e_n+1)$. So in your example you have $n=2^4\cdot 3^1$, so it has $(4+1)(1+1)=5\cdot2=10$ factors.

3. Hello, TravisH82!

Part (a) Find the prime factorization of 7056. okay.. no problem: $7056 \:=\: 2^4\cdot3^2\cdot7^2$

Part (b) How many positive factors are there of 7056?

Let's construct a factor of 7056. We have 5 choices for twos: $\begin{Bmatrix}\text{0 twos} \\ \text{1 two} \\ \text{2 twos} \\ \text{3 twos} \\ \text{4 twos} \end{Bmatrix}$ We have 3 choices for threes:
$\begin{Bmatrix}\text{0 threes} \\ \text{1 three} \\ \text{2 threes} \end{Bmatrix}$

We have 3 choices for sevens: $\begin{Bmatrix}\text{0 sevens} \\ \text{1 seven} \\ \text{2 sevens} \end{Bmatrix}$

Hence, there are $5\cdot3\cdot3 \:=\:45$ choices for a factor of 7056. Therefore, 7056 has $\boxed{45}$ factors. This includes 1 and 7056 itself.

Now you can see why Gamma's formula works. Find the prime factorization: $7056 \;=\;2^4\cdot3^2\cdot7^2$. Add 1 to each exponent and multiply: $(4+1)(2+1)(2+1) \;=\;5\cdot3\cdot3 \;=\;45$ factors . . . see?

4. Exactly. Wow, very nice looking solution Soroban, exactly what I was going for, but I suck at TeX, lol. Thanks for taking the time to clean it up.

5. Ahh.. I see where I was going wrong.. I was neglecting to consider 2^0 as a choice. Thank you all for your help.. this makes it much clearer to me!
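The formula from the replies above is easy to mechanize. A small sketch (the function names are mine, not from the thread) that recovers both counts:

```python
def prime_factorization(n):
    """Return {prime: exponent} for n >= 2, by trial division."""
    factors = {}
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors[d] = factors.get(d, 0) + 1
            n //= d
        d += 1
    if n > 1:  # leftover prime factor
        factors[n] = factors.get(n, 0) + 1
    return factors

def num_divisors(n):
    """Number of positive divisors of n: the product of (exponent + 1)."""
    count = 1
    for e in prime_factorization(n).values():
        count *= e + 1
    return count

print(prime_factorization(7056))  # {2: 4, 3: 2, 7: 2}
print(num_divisors(7056))         # (4+1)(2+1)(2+1) = 45
print(num_divisors(48))           # (4+1)(1+1) = 10
```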
http://nrich.maths.org/1956/clue
# Origami Shapes

##### Stage: 4 Challenge Level

The first thing to notice when solving this problem is that the sides of a sheet of A4 paper are in the ratio $1$ to $\sqrt{2}$. (Check this by taking a sheet and measuring it.) It simplifies the working a great deal if we treat the sheet of A4 paper as a rectangle with sides $1$ and $\sqrt{2}$, and keep the working exact by leaving the square roots in our expressions rather than typing them into a calculator.
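Since the hint turns on the $1:\sqrt{2}$ ratio, here is a tiny numeric sketch of why that ratio is special: halving a $1 \times \sqrt{2}$ sheet across its long side yields a sheet with the same ratio, which is the defining property of ISO A-series paper (the function name is mine):

```python
import math

def fold_in_half(short_side, long_side):
    """Fold a sheet across its long side; return (short, long) of the half."""
    return tuple(sorted((short_side, long_side / 2)))

sheet = (1.0, math.sqrt(2))     # idealized A4: sides in ratio 1 : sqrt(2)
half = fold_in_half(*sheet)     # idealized A5

# Both ratios are sqrt(2), approximately 1.41421 -- folding preserves
# the shape, so enlarging/reducing between sizes never distorts.
print(sheet[1] / sheet[0])
print(half[1] / half[0])
```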
http://math.stackexchange.com/questions/129207/division-of-a-matrix-polynomial
# Division of a matrix polynomial

I'm trying to show: Let $C(t)=C_rt^r+C_{r-1}t^{r-1}+\cdots+C_1t^1+C_0\in \mathcal{M}_n(\mathbb{F}[t])$ be a polynomial with coefficients $C_{i}$ in $\mathcal{M}_n(\mathbb{F})$. Show that there exists a matrix polynomial $Q(t)$ such that: $$C(t)=Q(t)(tI_n-A)+C(A)$$ That means the remainder of the division 'to the right' of $C(t)$ by $(tI_n-A)$ is the matrix: $$C(A)=C_rA^r+C_{r-1}A^{r-1}+\cdots+C_1A^1+C_0$$ Actually, I'm a little confused by this exercise. It says nothing about $(tI_n-A)$. - Compute $C(t)-C(A)$ using identities like $t^kI-A^k=(tI-A)\ldots$. – Davide Giraudo Apr 8 '12 at 8:50 ## 1 Answer You can do this by thinking of matrices with polynomials as entries as polynomials with matrices as coefficients (your expression for $C(t)$ already suggests this point of view), by taking the coefficient of $t^i$ to be the matrix of the coefficients of $t^i$ of your polynomial entries. Be careful: those are polynomials with non-commutative coefficients, and not everything that works for commutative polynomials works for them; notably one cannot just evaluate them (in a matrix), which means your expression $C(A)$ does not have any sense a priori. For now I'll call it $R$ (for remainder) instead and just assume it to be some constant matrix; I'll show that it is given by your expression later. Euclidean right-division by a monic polynomial such as $I_nt-A$ turns out to work as usual (here monic means the leading coefficient is the identity matrix $I_n$). (Left-division works too, but the answers may differ.) Here's how it works in detail. Since $R$ is a constant (no $t$ is present), the leading term $C_rt^r$ (where I'm assuming $r>0$) can only come from the product $Q(t)(I_nt-A)$; this forces $Q(t)$ to have leading term $C_rt^{r-1}$.
Denoting by $Q_1(t)$ the remaining terms of $Q(t)$, it is easy to see that they have to satisfy $$C(t)-C_rt^r+C_rAt^{r-1}=Q_1(t)(I_nt-A)+R$$ (note that I rewrote $C_rt^{r-1}A$ as $C_rAt^{r-1}$; this commutation of powers of $t$ with constant matrices is obvious when interpreting these as matrices with polynomial entries). The left hand side is a polynomial of degree at most $r-1$, so we may assume by induction on $r$ that we know how to handle the Euclidean division for it. The base case of the induction which I skipped over is easy: if the left hand side $C(t)$ is constant, then obviously $Q(t)=0$ and $R=C(t)$ (which we may write as $C(A)$ if we like since $C(t)$ does not contain $t$ anyway). So if we put $C_1(t)=C(t)-C_rt^r+C_rAt^{r-1}$ and assume by induction that there are unique $Q_1(t)$ and constant $R$ such that $C_1(t)=Q_1(t)(I_nt-A)+R$, then we have shown that there also exists a unique pair for $C(t)$, namely $Q(t)=C_rt^{r-1}+Q_1(t)$ and the same $R$ as for $C_1(t)$. All that remains is showing that $R=C(A)$ as given by the expression in the question (which is called the right-evaluation of $C(t)$ at $A$). We've seen that this is trivially true if $C(t)$ is constant, so we may assume it holds for $C_1(t)$, in other words $C_1(t)=Q_1(t)(I_nt-A)+C_1(A)$. Now $C_1(t)$ was obtained from $C(t)$ by removing the leading term $C_rt^r$ and adding the term $C_rAt^{r-1}$ in its place; it is immediate that the contribution of those two terms to the value of $C(A)$ is the same, and therefore $C(A)=C_1(A)$. Thus one finds $R=C(A)$ as promised. - Thank you very much! – Hiperion Apr 8 '12 at 15:58
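The inductive argument above is exactly synthetic division, and it can be turned into a few lines of code. A sketch (my own function names, using numpy) that computes $Q(t)$ and the remainder and confirms the remainder is the right evaluation $C(A)$:

```python
import numpy as np

def right_divide(C, A):
    """Right-divide the matrix polynomial C(t) = sum_i C[i] t^i
    (coefficients listed lowest degree first) by (t I - A).

    Returns (Q, R) with C(t) = Q(t)(t I - A) + R, where
    R = C_r A^r + ... + C_1 A + C_0 is the right evaluation C(A).
    """
    r = len(C) - 1
    q = [None] * r
    carry = C[r]
    for i in range(r - 1, -1, -1):   # synthetic division, top degree down
        q[i] = carry                 # coefficient of t^i in Q(t)
        carry = C[i] + carry @ A     # fold A in on the right
    return q, carry                  # carry is the remainder

# Sanity check on random small integer matrices (exact arithmetic).
rng = np.random.default_rng(0)
A = rng.integers(-3, 4, size=(2, 2))
C = [rng.integers(-3, 4, size=(2, 2)) for _ in range(4)]  # degree-3 C(t)
Q, R = right_divide(C, A)

# The remainder equals the right evaluation C(A) = sum_i C_i A^i.
expected = sum(C[i] @ np.linalg.matrix_power(A, i) for i in range(len(C)))
print(np.array_equal(R, expected))  # True
```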
http://mathoverflow.net/questions/88539/sums-of-rational-squares/89948
## sums of rational squares

It is a well-known fact that if an integer is a sum of two rational squares then it is a sum of two integer squares. For example, Cohen vol. 1, page 314, prop. 5.4.9. Cohen gives a short proof that relies on Hasse-Minkowski, but he attributes the theorem (without reference) to Fermat, who didn't have Hasse-Minkowski available. So my question is, how did Fermat prove this theorem? And part 2 of the question is, what is the simplest direct proof? I googled for this result and found a manuscript with a proof that doesn't use Hasse-Minkowski, but it's not very short. - ## 5 Answers This result is pretty shy of needing the full Hasse-Minkowski Theorem. Indeed, since Fermat already knew which integers were a sum of two integer squares, it would suffice for him to show that those that weren't (i.e., those with an odd power of some prime congruent to 3 mod 4 showing up in their prime factorization) could also not be written as a sum of two rational squares. But this is the easy direction of Hasse-Minkowski: to show that (let's say) a prime $p\equiv 3\pmod{4}$ can't be written as a sum of two rational squares, it suffices to check that it can't be a sum of two $\ell$-adic rational squares for some $\ell$. Of course, Fermat did not have the language of the $\ell$-adics, so this would have been replaced with mod-$q^k$ conditions for various $k$. Specifically, the modern Hasse-Minkowski proof boils down to the statement that a prime which is 3 mod 4 can't be written as a sum of two rational squares because it can't be done so 2-adically. Indeed, one can just compute the single Hilbert symbol $$p\equiv 3\pmod{4}\Rightarrow (p,-1)_2=(-1)^{(p-1)/2}=-1,$$ showing that $x^2=pz^2-y^2$ has no nontrivial $2$-adic, and hence no rational, solutions, which after the substitutions $a=x/z$ and $b=y/z$ implies one cannot write $p=a^2+b^2$ with $a,b\in\mathbb{Q}$.
Of course (again), Fermat did not have Hilbert symbols, but this is just a change of language away from Fermat's approach (I imagine). It would not be hard to unwind the above calculation into a single (probably lengthy) mod-8 calculation, since that's all that goes into deciding which elements of $\mathbb{Q}_2$ are squares, which in turn is essentially all that lives behind the Hilbert symbols. - 16 Can't one just say that if a=(b/c)^2+(d/e)^2, then ac^2e^2=(be)^2+(cd)^2? So this reduces to the statement that if a number can't be written as the sum of two integer squares, then that number times a square can't be written as the sum of two integer squares. But this is obvious given the theorem on which numbers are the sum of two integer squares. – Will Sawin Feb 15 2012 at 20:19 Ah, the ol' "clearing the denominator trick." Great! – Cam McLeman Feb 15 2012 at 20:23 The fact that $p \equiv 3 \pmod{4}$ can't be written as a sum of two rational squares can also be proved $p$-adically: if $p d^2 = a^2+b^2$ then $-1$ would be a square mod $p$, which is impossible. In modern language the equation $p=x^2+y^2$ has obstruction precisely at $2$ and at $p$. – François Brunault Feb 16 2012 at 7:36 In his early days, Fermat realized that a natural number that can be written as a sum of two rational squares actually is a sum of two integral squares, but he did not come back to this claim when eventually he discovered the proof of the Two-Squares Theorem. The result in question can be proved with the methods available to Fermat, as I will show here. Theorem 1 If $n$ is a sum of two rational squares, then every prime $q = 4n+3$ divides $n$ an even number of times. Theorem 2 Every prime number $p = 4n+1$ is the sum of two integral squares.
Now we invoke the product formula for sums of two squares $$(a^2 + b^2)(c^2 + d^2) = (ac-bd)^2 + (ad+bc)^2.$$ It implies that every product of prime numbers $4n+1$ and some power of $2$ can be written as a sum of two integral squares, and multiplying through by squares of primes $q = 4n+3$, the claim follows. Proof of Theorem 1 We will show that primes $p = 4n+3$ do not divide a sum of two coprime squares. Assume that $p \mid x^2 + y^2$ with $\gcd(x,y) = 1$. Reducing $x$ and $y$ modulo $p$ we may assume that $-p/2 < x, y < p/2$; cancelling possible common divisors we then have $p \mid x^2 + y^2$ with $\gcd(x,y) = 1$ and $x^2 + y^2 < \frac12 p^2$. If $x$ and $y$ are both odd, then the identity $$\Big(\frac{x+y}2\Big)^2 + \Big(\frac{x-y}2\Big)^2 = \frac{x^2+y^2}2$$ allows us to remove any remaining factor of $2$ from the sum, and we may therefore assume that $x^2 + y^2 = pr$ for some odd number $r < p$. Since $x$ and $y$ then have different parity, $x^2 + y^2$ must have the form $4n+1$, and therefore the number $r$ must have the form $4n+3$. But then $r$ must have at least one prime factor $q \equiv 3 \bmod 4$, and since $x^2 + y^2 < p^2$, we must have $q \le r < p$. Thus if $p \equiv 3 \bmod 4$ divides a sum of two coprime squares, then there must be some prime $q \equiv 3 \bmod 4$ less than $p$ with the same property. Applying descent we get a contradiction. Proof of Theorem 2 1. The prime $p$ divides a sum of two squares. For every prime $p = 4n+1$ there is an integer $x$ such that $p$ divides $x^2+1$. By Fermat's Theorem, the prime $p$ divides $$a^{p-1}-1 = a^{4n}-1 = (a^{2n}-1)(a^{2n}+1).$$ If we can show that there is an integer $a$ for which $p$ does not divide the first factor we are done, because then $p \mid (a^n)^2+1$. By Euler's criterion it is sufficient to choose $a$ as a quadratic nonresidue modulo $p$. Equivalently we may observe that the polynomial $x^{2n}-1$ has at most $2n$ roots modulo $p$. 2. The descent.
The basic idea is the following: Assume that the prime number $p = 4n+1$ is not a sum of two squares. Let $x$ be an integer such that $p \mid x^2 + 1$. Reducing $x$ modulo $p$ shows that we may assume that $p \mid x^2+1$ for some even integer $x$ with $0 < x < p$. This implies that $x^2+1 = pm$ for some integer $m < p$. Fermat then shows that there must be some prime divisor $q$ of $m$ such that $q$ is not a sum of two squares; since prime divisors of sums of two coprime squares have the form $4n+1$, Fermat now has found a prime number $q = 4n+1$ strictly less than $p$ that is not a sum of two squares. Repeating this step eventually shows that $p = 5$ is not a sum of two squares, which is nonsense since $5 = 1^2 + 2^2$. Assume that $p = 4n+1$ is not a sum of two squares. We know that $pm = x^2 + 1$ is a sum of two squares for some odd integer $m < p$. By Theorem 1, $m$ is only divisible by primes of the form $4k+1$. We now use the following Lemma Assume that $pm = x^2 + y^2$ for coprime integers $x, y$, and let $q$ be a prime dividing $m$, say $m = qm_1$. If $q = a^2 + b^2$ is a sum of two squares, then $pm_1 = x_1^2 + y_1^2$ is also a sum of two squares. Applying this lemma repeatedly we find that if every $q \mid m$ is a sum of two squares, then so is $p$ in contradiction to our assumption. Thus there is a prime $q = 4k+1$, strictly smaller than $p$, which is not a sum of two squares, and now descent takes over. It remains to prove the Lemma. To this end, we have to perform the division $\frac{x^2+y^2}{a^2+b^2}$. By the product formula $$(a^2+b^2)(c^2+d^2) = (ac-bd)^2 + (bc+ad)^2$$ we have to find integers $c, d$ such that $x = ac-bd$ and $y = bc+ad$. Since $q \mid (a^2 + b^2)$ and $q \mid (x^2 + y^2)$ we also have $$q \mid (a^2+b^2)x^2 - (x^2+y^2)b^2 = a^2x^2 - b^2y^2 = (ax-by)(ax+by).$$ Since $q$ is prime, it must divide one of the factors. Replacing $b$ by $-b$ if necessary we may assume that $q$ divides $ax-by$.
We know that $q$ divides $a^2 + b^2$ as well as $pm = x^2 + y^2$, hence $q^2$ divides $(ax - by)^2 + (ay+bx)^2$ by the product formula. Since $q^2$ divides the first summand (because $q$ divides $ax-by$), it must divide the second summand as well, so $q$ divides $ay+bx$, and we have $pm_1 = (\frac{ax-by}q)^2 + (\frac{ay+bx}q)^2$. -
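Both identities used in the proof above expand to zero mechanically; a quick symbolic check (a sketch using sympy):

```python
import sympy as sp

a, b, c, d, x, y = sp.symbols('a b c d x y')

# Product formula for sums of two squares (Brahmagupta-Fibonacci identity):
product = sp.expand((a**2 + b**2) * (c**2 + d**2)
                    - ((a*c - b*d)**2 + (a*d + b*c)**2))

# Halving identity used to strip factors of 2 from x^2 + y^2:
halving = sp.expand(((x + y) / 2)**2 + ((x - y) / 2)**2 - (x**2 + y**2) / 2)

print(product, halving)  # 0 0
```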
The fact that $p$ remains prime in $\mathbb{Z}[i]$ was (at least implicitly) known to Fermat. For if $p = (r +si)(t+ui)$ for integers $r,s,t$ and $u,$ then $p^2 = (r^2 +s^2)(t^2 + u^2).$ We can't have $p = r^2 + s^2$ since a sum of two integer squares is congruent to $0,1$ or $2$ (mod 4). Hence we must conclude that one of $r + si$ or $t +ui$ is a unit in $\mathbb{Z}[i]$, so $p$ does remain prime in $\mathbb{Z}[i].$ - EDIT: there is an elementary trick to do this due to Aubry, it is Theorem 4 on page 5 of PETE. The positive primitive binary forms which give the easy Aubry trick showing rational implies integral are: $$x^2 + y^2, x^2 + 2 y^2, x^2 + 3 y^2, x^2 + 5 y^2,$$ $$x^2 + x y + y^2, x^2 + x y + 2 y^2, x^2 + x y + 3 y^2,$$ $$2 x^2 + 3 y^2, 2 x^2 + x y +2 y^2, 2 x^2 + 2 x y + 3 y^2.$$ Note that these are all "ambiguous," that is, equivalent to their "opposites." This is not an accident. Probably needs mention, for the property mentioned by the OP there is no difference between the sum of two squares and the sum of three squares... Right, I don't know about Fermat, but this phenomenon happens often enough. The first mention on MO is http://mathoverflow.net/questions/3269/intuition-for-the-last-step-in-serres-proof-of-the-three-squares-theorem and the technique, due to Aubry, Cassels, and Davenport, is mentioned in Serre A Course in Arithmetic, pages 45-47, and Weil Number Theory: An approach through history from Hammurapi to Legendre, pages 59 and 292ff in which Fermat's possible thinking is discussed. About my use of the word "phenomenon," it is necessary for the Aubry-Cassels-Davenport trick to work that we have Pete's "Euclidean" condition, http://mathoverflow.net/questions/39510/must-a-ring-which-admits-a-euclidean-quadratic-form-be-euclidean which is usually, for positive quadratic forms, referred to as a bound on the "covering radius" of the integral lattice under consideration. 
It took me a year or so to prove that Pete's condition implied that there could only be one class in that genus, http://mathoverflow.net/questions/69444/a-priori-proof-that-covering-radius-strictly-less-than-sqrt-2-implies-class-nu A complete list of positive forms that satisfy Pete's condition is at NEBE. A mild generalization of the condition, due to Richard Borcherds and his student, Daniel Allcock, applies to such forms as the sum of five squares. -
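Combining the clearing-denominators remark from the comments with Fermat's criterion, "sum of two rational squares" reduces to a purely integer condition that is easy to test by brute force over a small range (a sketch; the function names are mine):

```python
from math import isqrt

def is_sum_of_two_squares(n):
    """Brute force: is n = a^2 + b^2 for some integers a, b >= 0?"""
    for a in range(isqrt(n) + 1):
        b = isqrt(n - a * a)
        if a * a + b * b == n:
            return True
    return False

def fermat_criterion(n):
    """Every prime p ≡ 3 (mod 4) divides n to an even power."""
    p = 2
    while p * p <= n:
        e = 0
        while n % p == 0:
            n //= p
            e += 1
        if p % 4 == 3 and e % 2:
            return False
        p += 1
    # any leftover n > 1 is a prime appearing to the first (odd) power
    return not (n > 1 and n % 4 == 3)

# The criterion is unchanged by multiplying n by a perfect square, so by
# clearing denominators it also decides "sum of two *rational* squares".
print(all(is_sum_of_two_squares(n) == fermat_criterion(n)
          for n in range(1, 2000)))  # True
```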
http://mathoverflow.net/questions/23478/examples-of-common-false-beliefs-in-mathematics/42835
## Examples of common false beliefs in mathematics. [closed]

The first thing to say is that this is not the same as the question about interesting mathematical mistakes. I am interested about the type of false beliefs that many intelligent people have while they are learning mathematics, but quickly abandon when their mistake is pointed out -- and also in why they have these beliefs. So in a sense I am interested in commonplace mathematical mistakes. Let me give a couple of examples to show the kind of thing I mean. When teaching complex analysis, I often come across people who do not realize that they have four incompatible beliefs in their heads simultaneously. These are (i) a bounded entire function is constant; (ii) sin(z) is a bounded function; (iii) sin(z) is defined and analytic everywhere on C; (iv) sin(z) is not a constant function. Obviously, it is (ii) that is false. I think probably many people visualize the extension of sin(z) to the complex plane as a doubly periodic function, until someone points out that that is complete nonsense. A second example is the statement that an open dense subset U of R must be the whole of R. The "proof" of this statement is that every point x is arbitrarily close to a point u in U, so when you put a small neighbourhood about u it must contain x. Since I'm asking for a good list of examples, and since it's more like a psychological question than a mathematical one, I think I'd better make it community wiki. The properties I'd most like from examples are that they are from reasonably advanced mathematics (so I'm less interested in very elementary false statements like $(x+y)^2=x^2+y^2$, even if they are widely believed) and that the reasons they are found plausible are quite varied. - 46 I have to say this is proving to be one of the more useful CW big-list questions on the site...
– Qiaochu Yuan May 6 2010 at 0:55 12 The answers below are truly informative. Big thanks for your question. I have always loved your post here in MO and wordpress. – To be cont'd May 22 2010 at 9:04 13 wouldn't it be great to compile all the nice examples (and some of the most relevant discussion / comments) presented below into a little writeup? that would make for a highly educative and entertaining read. – S. Sra Sep 20 2010 at 12:39 14 It's a thought -- I might consider it. – gowers Oct 4 2010 at 20:13 21 Meta created meta.mathoverflow.net/discussion/1165/… – quid Oct 8 2011 at 14:27 ## 169 Answers I don't think I've seen it in here: Every vector space has a non-trivial dual space ($L^p$ for $0 < p < 1$ was a counter-example only mentioned during one of the classes in measure theory) And of course there's the common false belief of people outside of mathematics that "mathematicians work with numbers and formulae all day long" :) - 9 Well, it is true that every vector space has a dual space, even $L^{1/2}$... and it is even true that every topological vector space has a continuous dual space... What you mean is that it is not true that every topological vector space has a non-trivial continuous dual space (or, that the continuous dual of a topological vector space does not necessarily separate points) – Mariano Suárez-Alvarez Jul 7 2010 at 18:54 "It cannot be shown without some form of AC that the union (or disjoint union) of countably many countable sets is countable. I have a countably infinite set X of countably infinite sets. Therefore, the union of X cannot be shown to be countable without Choice." The fallacy is that in many cases of interest, it is possible to exhibit an explicit counting of every element of X.
In such a case a counting of X by antidiagonals is easily constructed. The usual counting of the rationals is an example of this. I think this may even be an example of a more general phenomenon of "people think AC is necessary for a certain construction, but in fact it turns out not to be necessary for the example they have in mind". For example, AC is necessary to find a maximal ideal in an arbitrary ring ... but it isn't if you're prepared to assume the ring is Noetherian. - 3 If "Noetherian" is defined by the ascending chain condition or by requiring all ideals to be finitely generated, then in order to deduce the existence of maximal ideals, you still need a weak form of the axiom of choice. The usual argument uses the axiom of dependent choice. (Of course, if you define "Noetherian" to mean that every set of ideals has a maximal element, then deducing the existence of maximal ideals is a choiceless triviality.) A good reference is "Six impossible rings" by Wilfrid Hodges (J. Algebra 31 (1974) 218-244). – Andreas Blass Oct 22 2010 at 15:29 The assumption that a cubic surface expressed as a foliation of Weierstrass curves cannot be rational, because a general Weierstrass curve is not rational. I've seen this false assumption more than once on sci.math over the years. But there are simple counterexamples, such as: $(x + y) (x^2 + y^2) = z^2$ On defining $u = x/y$ and $v = z/y$ one obtains $y (u + 1) (u^2 + 1) = v^2$, and hence x, y, z as rational functions of u, v. I'd love to have a reference to a procedure for calculating the geometric genus and algebraic genus of surfaces like this, because they are rational if and only if both these quantities are zero, and for other cubic surfaces that interest me it would save a lot of fruitless hacking around trying to find a rational solution that probably doesn't exist! Are there any symbolic algebra packages that can do this? I mean for example is $x y (x y + 1) (x + y) = z^2$ rational?
I'm almost sure it isn't; but how can one be sure? - Something I was sure about until earlier today: Suppose $\kappa$ is an $\aleph$ number, then $AC_\kappa$ is equivalent to $W_\kappa$, namely the universe holds that the product of $\kappa$ many sets is non-empty if and only if every cardinality is either of size less than $\kappa$ or has a subset of cardinality $\kappa$. In fact this is only true if you assume full $AC$, and $(\forall \kappa) AC_\kappa$ doesn't even imply $W_{\aleph_1}$, I was truly shocked. Furthermore, $W_\kappa$ doesn't even imply $AC_\kappa$ in most cases. The strongest psychological implication is that most people actually think of the well-ordering principle as a the "correct form" of choice, when it is actually Dependent Choice (limited to $\kappa$, or unbounded) which is the "proper" form, that is $DC_\kappa$ implies both $AC_\kappa$ and $W_\kappa$. - 6 How common is this misconception? – Thierry Zell Apr 17 2011 at 3:08 1 @Thierry: For the past couple of weeks I spent a lot time considering models without choice, not only I held that misconception but not once anyone corrected me about it - grad students and professors alike. – Asaf Karagila Apr 17 2011 at 6:09 Hopefully this isn't a repeat answer. False belief: a matrix is positive definite if its determinant is positive. - 3 Is this really a common(!) false belief? – Martin Brandenburg Oct 3 2011 at 7:23 Fans: (related to the one of polytopes written above) all convex cones are rational, i.e. one would expect that a line would eventually hit a point in the lattice. It is obviously not true, just take the one-dimensional cone generated by $(1,\sqrt{2})$. A similar one was thinking that if I rotate the cone a bit, I can always make it rational. - 2 reminds me of the curious fact that some circles in the plane, too, have no points in $\mathbb Q^2$. (proven simply by cardinality!) 
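For the positive-definiteness belief a little further up, the smallest counterexample is worth seeing numerically (a sketch): $-I$ has determinant $1 > 0$ but is negative definite. A positive determinant only says the product of the eigenvalues is positive; for a symmetric matrix, positive definiteness needs all eigenvalues (equivalently, all leading principal minors) positive.

```python
import numpy as np

# det(M) = 1 > 0, yet x^T M x = -|x|^2 < 0 for every nonzero x,
# so M is negative definite, not positive definite.
M = -np.eye(2)

det = np.linalg.det(M)
eigenvalues = np.linalg.eigvalsh(M)   # real eigenvalues (M is symmetric)
print(det > 0)                        # True
print(np.all(eigenvalues > 0))        # False
```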
– AndrewLMarshall Oct 4 2010 at 19:21 I'm not sure how common it is, but I've certainly been able to trick a few people into answering the following question wrong: Given $n$ independent and identically distributed random variables, $X_k$, what is the limiting distribution of their sum, $S_n = \sum_{k=0}^{n-1} X_k$, as $n \to \infty$? Most (?) people's answer is the Normal distribution, when in actuality the sum is drawn from a Levy-stable distribution. I've cheated a little by making some extra assumptions on the random variables, but I think the question is still valid. - This might not be common, but I once believed the following. Let $A, B$ be integers, and define a sequence by the linear recurrence $s_n = A s_{n-1} + B s_{n-2}$ with the base case $s_0 = 0$, $s_1 = 1$. Two important special cases are the Fibonacci sequence ($A = B = 1$) and the sequence $s_n = 2^n - 1$ (where $A = 3$, $B = -2$). Then, for any integers $n$ and $k$, $\gcd(s_n, s_k) = s_{\gcd(n,k)}$. This is true in the two mentioned special cases, so it's tempting to believe it's true in general. But there's a counterexample: $A = B = k = 2$, $n = 3$. Update: corrected the powers of two minus one example from $B = 2$ to $B = -2$. Thanks to Harry Altman. - ## From Keith Devlin "Multiplication is not the same as repeated addition", as put forward in Devlin's MAA column. I'm not really sure how I feel about this one; I might be one of the unfortunate souls who are still prey to that delusion. ## Caution In case you missed it, the column ended up spilling a lot of electronic ink (as evidenced in this follow-up column), so I don't believe it would be wise to start yet another one on MO. Thanks in advance! - 13 The more I think about this "error", the less I am convinced. It's like saying that you cannot say that $\binom n k$ is the number of $k$-element subsets of an $n$-element set because then you will be unable to generalize to complex values of $n$.
Or you cannot define the chromatic polynomial as the function counting the colourings and then plug in $-1$ to get the acyclic orientations of the graph. Also, I think it is perfectly understandable what it means to add something halfway. – thei Apr 10 2011 at 20:50 1 It's not a "false belief". It's a false heuristic. And it's actually here: mathoverflow.net/questions/2358/… – darij grinberg Apr 10 2011 at 21:17 2 When I taught elementary teachers the course on arithmetic, they all had been taught that multiplication is repeated addition, but I myself thought it was the cardinality of the Cartesian product. We enjoyed discussing this difference in point of view. – roy smith May 9 2011 at 3:06 1 The "repeated addition" characterization has an advantage over the "cardinality of the Cartesian product" characterization (which possibly in some contexts could be considered a disadvantage). That is that it's not self-evident that it's commutative, and so one has a useful exercise for certain kinds of students: figure out why it's commutative. – Michael Hardy May 20 2011 at 2:28 show 5 more comments Way late to the party... "$\mathrm{polymod}\ p$ and $\mathrm{mod}\ p$ are the same thing." And its cousin: "$\forall{x}, f(x) \cong g(x) \pmod{q} \implies f(x) = g(x)$" - 4 What does polymod mean? – darij grinberg Oct 20 2010 at 11:47 1 Probably I understand what this means: if $f(x)=0\pmod 2$ for all $x$, then $f=0$ over $\mathbb F_2$. This is similar to my second example: mathoverflow.net/questions/23478/… – zhoraster Oct 20 2010 at 18:33 1 $\mathrm{polymod}$ is "polynomial mod". Two polynomials are congruent $\mathrm{polymod}\ p$ iff the coefficients of each power of the variable are congruent $\pmod{p}$. The equivalence classes are sets of polynomials where each coefficient ranges over an equivalence class $\pmod{p}$. For the cousin, there are many local/globals but they all seem to require additional conditions (q.v. Hensel lifting).
I think the set from which $x$ was chosen was left unspecified because this "imprecise mental abbreviation" pops up at various levels of sophistication, each with a different such set. – Anonymous Oct 23 2010 at 15:22 show 2 more comments If every collection of disjoint open sets in a topological space is at most countable, then the space is separable - Related to this answer: $$\pi=\left(\frac{1}{10^5}\sum_{-\infty}^{+\infty}e^{-n^2/10^{10}}\right)^2.$$ Proof: With a computer one can verify that the first 42 billion digits of the two numbers are the same; see J. Borwein and P. Borwein, Strange series and high precision fraud, in The American Mathematical Monthly, 1992, pages 622-640. - 10 I voted this down because I don't think it's a statement that anyone actually believes, and therefore doesn't fit the spirit of this question, but I have to say it's pretty clever. – Nate Eldredge Oct 19 2010 at 21:11 8 I must admit I'm a little bit surprised just how quickly $f(a) = (1/a \sum e^{-n^2/a^2})^2$ converges to $\pi$ as $a \to \infty$. (According to the identity given in the article, $\lim a^{-2} \log (f(a)-\pi) = -\pi^2$.) This feels much faster than we have any right to expect. – Michael Lugo Oct 26 2010 at 4:40 show 1 more comment $\exp(\pi\sqrt{163})$ is an integer. Proof: it has a mathematically interesting definition, and the first $12$ digits after the decimal point are zeros. No integral power of $2$ has $7$ as its first digit. Proof: compute successive powers $2^n$ by hand for an hour. You can't find one beginning with $7$. Well, if you ask a computer, it is a different tale. - 1 This is also discussed in this MO question mathoverflow.net/questions/4775/… – Andrey Rekalo Oct 19 2010 at 16:07 6 You underestimate our ability to double. 64 * (1024)^4 leads to such a power, and it does not take an hour to compute by hand. Now, if you want such a number to start with 77, well, I will suggest using powers of 6 instead.
Gerhard "Numbers Doubled While You Wait" Paseman, 2010.10.19 – Gerhard Paseman Oct 19 2010 at 16:25 1 @Gerhard. You're right. Computing $2^{46}$, a number with $14$ digits, step by step requires calculating approximately $\frac12(46\times14)=322$ digits. At one digit per second, this requires less than $6$ minutes. I apologize. – Denis Serre Oct 20 2010 at 5:26 1 @Gerhard. What is more, your computation there gives a very easy proof by estimation rather than calculation. The existence of 1024 as an easy power of 2 means you keep adding 2.4%, so you will eventually get to any initial digit, including 7. Starting with 64 makes it easy. – Mark Bennet Feb 6 2011 at 11:10 2 I can't see how the second could possibly be a common belief among mathematicians. Since $\log_{10} 2$ is irrational and all... – Todd Trimble Mar 31 2011 at 13:59 show 2 more comments The sigma function $$\sigma_{1}({p_i}^{\alpha_i}) = \displaystyle\sum_{j=0}^{\alpha_i}{{p_i}^j}$$ satisfies the inequalities $$\sigma_{1}({p_i}^{\alpha_i}) \gt (\alpha_i + 1)(\sqrt{p_i})^{\alpha_i}$$ $$\sigma_{1}({p_i}^{\alpha_i}) \gt 1 + \alpha_i(\sqrt{p_i})^{1 + \alpha_i}$$ for prime $p_i$ and $\alpha_i \ge 1$. The "proof" uses the Arithmetic Mean-Geometric Mean Inequality. As a particular application of this result, Sorli's Conjecture implies the OPN Conjecture. - 7 Is this really a common belief? – Mariano Suárez-Alvarez Jan 28 2011 at 18:21 1 @Arnie Your first equation makes no sense. Presumably you want to define $\sigma_1(n)$ for every positive integer $n$, hence a product is missing on the LHS. And the sum on the RHS cannot end at $\alpha_i$. Also, I wonder what is the use of the subscript $i$ in the two inequalities. – Didier Piau Feb 6 2011 at 10:36 show 3 more comments When I was a kid (8th grade), I solved a bunch of math problems in an exam using the well-known "identity" that $(x+y)^2=x^2+y^2$, which I was sure I had been taught the year before.
It was of course way before I heard about characteristic two, and I didn't get a good grade that day! - 11 Quoth the question, "The properties I'd most like from examples are that they are from reasonably advanced mathematics (so I'm less interested in very elementary false statements like $(x+y)^2=x^2+y^2$, even if they are widely believed)". – JBL Dec 1 2010 at 23:39 2 Also, this is of course just a special case of the more general "law of universal linearity", which iirc was mentioned in earlier answers… – Peter LeFanu Lumsdaine Dec 2 2010 at 0:40 I don't know if this is what you are looking for, but I keep hearing that "a differentiable function is one that is locally linear", not one whose local variation can be approximated linearly. No one stops to think about e.g. $x^2$, and the fact that its graph does not look like a line at any value of $x$. - 4 I would say this is more a heuristic than a false statement; as such, it would be more appropriate as an answer to mathoverflow.net/questions/2358/… (although I do not think anyone interprets it the way you apparently do). – Qiaochu Yuan May 5 2010 at 4:53 show 2 more comments I had a false belief in linear algebra, that a basis of a vector space could have infinitely many elements (like an orthonormal basis in Fourier analysis). That tripped me up trying to understand the definition of tensor products, and even after someone explained the issue to me I didn't believe it at first. - 3 I don't understand. A basis can have infinitely many elements. That's no false belief, that's correct. – Johannes Hahn Aug 22 2010 at 12:07 14 The false belief would be that when you define basis, you allow infinite linear combinations. If some confusion is possible, say "Hamel basis" ... Even if there is no topology defined, it still will emphasize that only finite linear combinations are considered. – Gerald Edgar Aug 22 2010 at 12:30 show 1 more comment I had the false belief that recursive functions are always decidable in ZFC.
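The power-of-2 claim a couple of answers back ("no integral power of $2$ has $7$ as its first digit") is easy to refute by machine; a minimal search, not from the thread, finds the first counterexample:

```python
# Find the smallest n such that 2**n starts with the digit 7.
n = 1
while not str(2 ** n).startswith("7"):
    n += 1
print(n, 2 ** n)  # 46 70368744177664
```

As Todd Trimble's comment notes, $\log_{10} 2$ is irrational, so the fractional parts of $n\log_{10}2$ are equidistributed and every leading-digit pattern eventually occurs.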
- $n!$ is the product of all positive integers less than or equal to $n$. In fact it should be defined in combinatorial terms. Many assume the fact that parallel lines in Euclidean geometry do not cross is an axiom, while it can easily be proved in terms of vector space. Many lecturers do not stress the difference between inner product and scalar product and most students think that these are different names for the same thing. In complex numbers $i = \sqrt{-1}$. Obviously it is not correct as well. - 3 I wouldn't call any of these false mathematical statements except for possibly the last. You can axiomatize Euclidean geometry in any number of ways. – Qiaochu Yuan May 5 2010 at 20:52 2 Why should $n!$ necessarily be defined in combinatorial terms? I mean, why is defining it as $n!=\prod\limits_{k=1}^n k$ a mistake? -- As for axiomatization of Euclidean geometry, there is no definite mistake here, but the modern affine-plane-with-inner-product construction is superior to the traditional in many ways (including working over arbitrary fields of characteristic $\neq 2$), so the fifth axiom is indeed not the important thing to fuss about that it was before. Though, in hindsight, it has helped develop some highly interesting differential geometry. – darij grinberg May 5 2010 at 21:42 19 @Neil It's not a special case. Everyone knows the product of the empty set is 1. :-) – Dan Piponi May 6 2010 at 0:39 12 "The product of the first 0 positive integers" is the product of the elements of an empty set. – JBL May 6 2010 at 2:38 10 Am I the only one confused over what #3 (inner vs scalar) is driving at? – Thierry Zell Nov 27 2010 at 16:27 show 3 more comments
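The linear-recurrence divisibility claim from a few answers back, $\gcd(s_n, s_k) = s_{\gcd(n,k)}$, and its counterexample $A = B = 2$ are likewise quick to check by machine (a sketch, not part of the thread):

```python
from math import gcd

def s(A, B, n):
    """Terms of the recurrence s_n = A*s_{n-1} + B*s_{n-2}, s_0 = 0, s_1 = 1."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, A * b + B * a
    return a

# The identity holds in the two special cases mentioned: Fibonacci (A = B = 1)
# and 2^n - 1 (A = 3, B = -2)...
assert all(gcd(s(1, 1, n), s(1, 1, k)) == s(1, 1, gcd(n, k))
           for n in range(1, 15) for k in range(1, 15))
assert s(3, -2, 5) == 2 ** 5 - 1

# ...but fails for A = B = 2 at n = 3, k = 2: s_2 = 2, s_3 = 6.
print(gcd(s(2, 2, 3), s(2, 2, 2)), s(2, 2, gcd(3, 2)))  # 2 1
```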
http://www.physicsforums.com/showthread.php?t=190332
Physics Forums ## tension in string A rock of mass 200g is attached to a 0.75m long string and swung in a vertical plane. A) what is the speed that the rock must travel so that its weight at the top of the swing is 0? B) what is the tension in the string at the bottom of the swing? C) if the rock is now moved to a horizontal swinging position, what is the angle of the string to the horizontal? Here is what i could do: I drew a vertical circle with 2 positions a and b. A is at the top and B is at the bottom. so at A the F_g is down and T is up. at B, T is down and F_g is up. Right? then what? Quote by pinkyjoshi65 i had another question.. A rock of mass 200g is attached to a 0.75m long string and swung in a vertical plane. A) what is the speed that the rock must travel so that its weight at the top of the swing is 0? B) what is the tension in the string at the bottom of the swing? C) if the rock is now moved to a horizontal swinging position, what is the angle of the string to the horizontal? Here is what i could do: I drew a vertical circle with 2 positions a and b. A is at the top and B is at the bottom. so at A the F_g is down and T is up. at B, T is down and F_g is up. Right? then what? This question requires you to use 2 concepts: i) Newton's laws of motion, and ii) the conservation of energy. Firstly, if you've drawn a Free Body Diagram (FBD) correctly then you should get the tension $$T$$ upwards at the lowermost point in the trajectory (of the rock), and downwards at the topmost point.
That's obvious because the string is inelastic (its length is always constant), so every little part of the string pulls the adjacent little part of the string with a force of the same magnitude (say $$dT$$) so that the string doesn't change in its physical dimension (to keep its length constant). Hence the direction of tension is always opposite to the force that is applied at its end, so that the length of the string remains the same. Now moving specifically to the problem. Let the length of the string be $$l$$. We could assume the lowermost point to have a zero gravitational potential energy. Also let the velocity at the bottom of the loop be $$v_0$$ and at the top be $$v$$. First writing the energy conservation equation: $$\frac{1}{2} mv_0^{2} = \frac{1}{2}mv^{2} + 2mgl .... (1)$$ where $$m$$ is the mass of the rock and $$g$$ is the acceleration due to gravity in the downward direction. This equates the initial kinetic energy of the rock at the bottom to the final mechanical energy of the rock (i.e. the kinetic energy plus the gained potential energy) at the topmost position. The second equation is the force equation at the top of the loop. Here we have: $$mg + T = \frac {mv^{2}}{l} .... (2)$$ Also, since the weight goes to zero at the topmost point, the tension should be minimum as the string is just about going to go slack, such that the centripetal force is equal to the weight. That is, we put $$T=0$$ in equation $$(2)$$; we also substitute the value of $$v$$ from the equation of conservation of energy to get the required answer ($$v_0$$). The answer should come to be $$v_0 = \sqrt{5gl}$$ That answers part a). For part b), balance the forces at the lowermost point. Here, the tension is upwards, the weight is downwards. $$T = mg + \frac {mv_{0}^{2} }{l}$$ Use the value obtained in a) to get the answer. For part c), the angle would obviously depend on the initial velocity of the rock when set into motion. It should behave like a conical pendulum.
Balance the vertical and horizontal components of the tension in the string (I hope you get the direction right this time around) with the weight and the required centripetal force respectively. Use some trig. to get your answer. Part a) also gives the minimal velocity that has to be imparted to the rock at its lowermost position for it to complete one full loop. Hope that helps!
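A quick numerical check of parts a) and b) with the given values — $m = 0.2$ kg, $l = 0.75$ m — and taking $g = 9.8\ \mathrm{m/s^2}$ (the thread leaves $g$ symbolic):

```python
import math

m, l, g = 0.2, 0.75, 9.8  # kg, m, m/s^2

# Part a): T = 0 at the top gives v^2 = g*l; energy conservation (1) then
# gives v0 = sqrt(5*g*l) at the bottom.
v0 = math.sqrt(5 * g * l)

# Part b): at the bottom, T - m*g = m*v0^2 / l, which works out to T = 6*m*g.
T = m * g + m * v0 ** 2 / l

print(f"v0 = {v0:.2f} m/s, T = {T:.2f} N")  # v0 = 6.06 m/s, T = 11.76 N
```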
http://math.stackexchange.com/questions/148032/what-is-the-larger-of-the-two-numbers/148035
# What is the larger of the two numbers? What is the larger of the two numbers? $$\sqrt{2}^{\sqrt{3}} \mbox{ or } \sqrt{3}^{\sqrt{2}}\, \, \; ?$$ I solved this, and I think that is an interesting elementary problem. I want different points of view and solutions. Thanks! - 1 This again $e^\pi$ or $\pi^e$ – checkmath May 22 '12 at 3:14 2 @Gerry, although I liked your comment, but I might disagree that all we want is to get our answer accepted. First, we can still get likes form others if the answer is correct (see for example Robert's answer). Second, we need to get those questions answered for the benefits of others anyway. – Rafid May 22 '12 at 5:33 1 @Rafid: Actually, it is not about people getting their answer accepted, but about askers such as this one acknowledging the help they've received. It is the polite thing to do! – Steve D May 22 '12 at 8:16 ## 6 Answers $$\sqrt2^{\sqrt 3}<^?\sqrt3^{\sqrt 2}$$ Raise both sides to the power $2\sqrt 2$, and get an equivalent problem: $$2^{\sqrt 6}<^?9$$ Since $\sqrt 6<3$, we have: $$2^{\sqrt 6}< 2^3 = 8 <9$$ So ${\sqrt 2}^{\sqrt 3}$ is smaller than $\sqrt3^{\sqrt 2}$. - Hint: If $a$ and $b$ are positive numbers, $a^b < b^a$ if and only if $\dfrac{\ln a}{a} < \dfrac{\ln b}{b}$. Find intervals on which $\dfrac{\ln x}{x}$ is increasing or decreasing. - Yet 2<e<3, so how? – wok May 22 '12 at 9:03 It answers the question for $e^\pi$ or $\pi^e$. – wok May 22 '12 at 9:08 2 @wok: try $\sqrt2$ and $\sqrt3$ (not $2$ and $3$). – Did May 22 '12 at 11:54 1 @Didier Ok, my mistake. – wok May 22 '12 at 12:17 We have $\sqrt{2}>1$ and $\sqrt{3}>1$, so raising either of these to powers $>1$ makes them larger. Call $x=\sqrt{2}^\sqrt{3}$ and $y=\sqrt{3}^\sqrt{2}$. We have $x^{2\sqrt{3}}$=8 and $y^{2\sqrt{2}}=9.$ Since $2\sqrt{2} < 2\sqrt{3}$, we conclude $y>x$. - Hint: Use the Logarithm function. - 12 I downvoted as it was not useful. Might as well say "potato" unless one assumes one already knows how to do it, defeating the purpose. 
– Mahmud May 22 '12 at 3:27 1 You can add a similar idea to Robert's, if that is what you had in mind. – Peter Tamaroff May 22 '12 at 4:56 1 This is a fine hint. The asker may not have thought at all about the logarithm function. – Potato Jul 21 '12 at 5:44 You're just saying that because your name is Potato. – The Chaz 2.0 Oct 18 '12 at 4:31 In general, we can state two pertinent results: (1) If $a$ and $b$ are positive real numbers such that $b > a \ge e$, then $a ^ {b} > b ^ {a}$; (2) If $a$ and $b$ satisfy $e \ge b > a > 0$, then $b ^ {a} > a ^ {b}.$ - $$\sqrt{2}^{\sqrt{3}} \approx 1.414^{1.732} \approx 1.822$$ $$\sqrt{3}^{\sqrt{2}} \approx 1.732^{1.414} \approx 2.174$$ $$\text{The rest is clear.}$$ - 1 I assume the problem is meant to be solved without a calculator, which may not be easy from your solution. – Joe May 23 '12 at 3:30 1 hahah nice one! – student May 23 '12 at 4:42
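The inequality chain in the accepted approach — raise both numbers to the power $2\sqrt2$ and compare $2^{\sqrt6}$ with $9$ — is easy to verify numerically (a sketch):

```python
import math

x = math.sqrt(2) ** math.sqrt(3)   # ≈ 1.822
y = math.sqrt(3) ** math.sqrt(2)   # ≈ 2.174
assert x < y

# The exact argument: x**(2*sqrt(2)) = 2**sqrt(6) and y**(2*sqrt(2)) = 9,
# and sqrt(6) < 3 gives 2**sqrt(6) < 2**3 = 8 < 9.
assert abs(x ** (2 * math.sqrt(2)) - 2 ** math.sqrt(6)) < 1e-9
assert 2 ** math.sqrt(6) < 8 < 9

print(f"{x:.2f} < {y:.2f}")  # 1.82 < 2.17
```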
http://mathoverflow.net/questions/106931?sort=votes
## Existence of solution of a Non-linear PDE via Fixed point theorem Hi all, I have the following non-linear PDE $-\Delta Y + Y^3 =U$ on $\Omega \subset \mathbb{R}^n$, an open, bounded domain with Lipschitz boundary, $Y=0$ on $\partial\Omega$ 1. Let $Y\in H_0^1$ and as $H_0^1 \hookrightarrow \hookrightarrow L^5$ can we define a compact operator $T:L^5 \times [0,1] \rightarrow L^5$ and use the Leray-Schauder Fixed Point Theorem to prove the existence of a solution of the above PDE for a general $U\in L^2$? 2. Or if not, then how can we apply the Leray-Schauder Fixed Point Theorem to prove existence of a solution $Y\in H_0^1$? - Can you say a few words about your motivation for this question? Is this an exercise in a PDE book? – András Bátkai Sep 12 at 14:21 ## 1 Answer The Leray-Schauder theorem is a fixed point theorem in the spirit of Brouwer's. When it gives an existence result, it says nothing about uniqueness. Your problem is much better than that, because it does have a unique solution in $H^1_0(\Omega)\cap L^4(\Omega)$, whenever $U$ belongs to the dual space $X=H^{-1}(\Omega)+L^{4/3}(\Omega)$. The reason is that $Y$ is a critical point of the functional $$E[Y]=\int_\Omega(\frac12|\nabla Y|^2+\frac14Y^4-UY)dx.$$ This functional turns out to be continuous and coercive over $H^1_0(\Omega)\cap L^4(\Omega)$, and strictly convex. Therefore the standard arguments of the so-called direct method of the calculus of variations yield existence and uniqueness of a critical point, which is the point $Y$ at which $E$ achieves its minimum. -
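As a concrete illustration of the unique solvability, here is a one-dimensional finite-difference sketch that finds the critical point of the discretized energy via Newton's method on its gradient $-\Delta Y + Y^3 - U$. The grid size, the sample right-hand side $U \equiv 10$, and the choice of Newton's method are all my own choices, not from the thread:

```python
import numpy as np

# Finite-difference discretization of -Y'' + Y^3 = U on (0, 1), Y(0) = Y(1) = 0.
N = 50                          # interior grid points (my choice)
h = 1.0 / (N + 1)
A = (np.diag(2.0 * np.ones(N))
     - np.diag(np.ones(N - 1), 1)
     - np.diag(np.ones(N - 1), -1)) / h**2   # discrete -d^2/dx^2
U = 10.0 * np.ones(N)           # a sample right-hand side (my choice)

# Newton's method on F(Y) = A Y + Y^3 - U, the gradient (up to quadrature
# weights) of E[Y] = 1/2 Y.A Y + 1/4 sum(Y^4) - U.Y. The Jacobian
# A + 3 diag(Y^2) is symmetric positive definite, matching the strict
# convexity of E, so each Newton system is well posed.
Y = np.zeros(N)
for _ in range(30):
    F = A @ Y + Y**3 - U
    J = A + 3.0 * np.diag(Y**2)
    Y -= np.linalg.solve(J, F)

print(np.max(np.abs(A @ Y + Y**3 - U)))  # tiny residual: Newton has converged
```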
http://math.stackexchange.com/questions/tagged/products+calculus
# Tagged Questions

### Prove this product
How to prove this product? $$\prod\limits_{k=2}^n {\frac{k^2+k+1}{k^2-k+1}}=\frac{n^2+n+1}{3}$$

### How to find finite trigonometric products
I wonder how to prove $$\prod_{k=1}^{n}\left(1+2\cos\frac{2\pi 3^k}{3^n+1} \right)=1$$ give me a tip

### Evaluate $\prod_{x=2}^\infty\frac{x^4-1}{x^4+1}$
Difficult question from some test somewhere (I forget). $$\prod_{x=2}^\infty\frac{x^4-1}{x^4+1}$$ $x$ is, of course, an integer.

### Infinite Product is converges
I am adding this problem since it is interesting and valuable to be verified here: Prove that the infinite product $\prod_{k=1}^{\infty}(1+u_k)$, wherein $u_k>0$, converges if ...

### Limit of an n-ary product
Since a definite integral is defined as $$\lim_{n\to\infty} \sum_{i=0}^n f(x_i^*)\,\Delta x = \int_a^b f(x)\,dx$$ and the integral is much easier to calculate than a sum, if we change the sum to a ...

### Calculate $\lim_{n\to\infty}[(1+x)(1+x^2)(1+x^4)\cdots(1+x^{2n})]$, $|x|<1$
Please help me solve $\lim_{n\to\infty}[(1+x)(1+x^2)(1+x^4)\cdots(1+x^{2n})]$, $|x|<1$

### Is there a relationship between products and integrals?
Is there a way to convert a product into an integral? I know that the Euler-Maclaurin formula establishes a relationship between sums and integrals, but is there some sort of formula that establishes a relationship between products and integrals? I don't ...

### Infinite product
How do I solve the infinite product of $$\prod_{n=2}^\infty\frac{n^3-1}{n^3+1}?$$ I know that I have to factorise to $$\frac{(n-1)(n^2+n+1)}{(n+1)(n^2-n+1)},$$ but how do I do the partial product? ...

### Compute $\lim\limits_{n\to\infty} \prod\limits_2^n \left(1-\frac1{k^3}\right)$
I've just worked out the limit $\lim\limits_{n\to\infty} \prod\limits_{2}^{n} \left(1-\frac{1}{k^2}\right)$ that is simply solved, and the result is $\frac{1}{2}$. After that, I thought of calculating ...

### Is there a "continuous product"?
Is there a "continuous product" which is the limit of the discrete product $\Pi$, just like the integral $\int$ is the limit of summation $\sum$. Thanks!

### Infinite product of recursive sequence
Let $a_{n+1}=\sqrt {(a_n+a_{n-1})/2}$ and $a_0=a_1=2$, how to prove convergence of the product $a_0 a_1 a_2 a_3...a_\infty$, and possibly find its value?

### The derivative of a product of more than two functions
I'm trying to generalize the product rule to more than the product of two functions using the fact that I can treat the product of $n-1$ functions as a single one. Here is an example of what I mean: ...

### interval for a product to infinity
I was wondering - how would I specify the interval (the amount that n increases each time) between terms? Is that possible? What if I want it to increase by, say, ...
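The first identity in the list telescopes, since $k^2+k+1=(k+1)^2-(k+1)+1$, so each denominator cancels the previous numerator. An exact-arithmetic check (a sketch):

```python
from fractions import Fraction

def product(n):
    """Compute prod_{k=2}^{n} (k^2 + k + 1) / (k^2 - k + 1) exactly."""
    p = Fraction(1)
    for k in range(2, n + 1):
        p *= Fraction(k * k + k + 1, k * k - k + 1)
    return p

# Telescoping leaves (n^2 + n + 1) / (2^2 - 2 + 1) = (n^2 + n + 1) / 3.
assert all(product(n) == Fraction(n * n + n + 1, 3) for n in range(2, 50))
print(product(10))  # 111/3 reduces to 37
```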
http://mathoverflow.net/revisions/59642/list
## Return to Answer

2 Added 'convex' for clarification.

If $s$ is a non-zero section whose image lies in $\mathcal U_{n,d}$, then it has constant sign on $V^\ast:=\mathbb R^{n+1}\setminus\{0\}$, and after possibly multiplying by $-1$ we may assume that $s$ is strictly positive on $V^\ast$. The strictly positive $s$ form an open convex cone $C$ (we do not assume that $0$ belongs to a cone), which is hence contractible when non-empty, and it is non-empty when $d$ is even. As $C\to\mathcal U_{n,d}$ is a fibration with fibres $\mathbb R_+$, $\mathcal U_{n,d}$ is contractible as well.
http://mathoverflow.net/questions/102580?sort=newest
## How do you compute the dual norm of an induced norm on a subspace of a finite-dimensional Lp-normed vector space? Say you have a finite-dimensional vector space $V$ with an $L^p$ norm on it. In general, the norm induced on a subspace $V_s$ of $V$ doesn't have to be another $L^p$ norm, so the unit sphere in $V_s$ under this induced norm can be some strange shape. Given $V_s$ and a norm $||·||$ induced this way on it, how can one compute an expression for the dual norm $||·||_*$ on $V_s^*$, the dual space of linear functionals on $V_s$? I understand that this norm must satisfy the relationship $||w||_* = \sup \frac{w(v)}{||v||}$ for $v$ in $V_s$ and $w$ in $V_s^*$, and that this means I need to find the intersection of the unit sphere in $V_s$ with the direction specified by $w$. However, I'm not sure what a good strategy might be to actually find an expression for the dual norm in this way. I thought that some implication of Hahn-Banach might help to pave the way forward, but after some research I still haven't seen anything obvious. I do have a hunch that for the case where the norm on $V$ is $L^1$ or $L^\infty$, and hence where the unit sphere for the induced norm on $V_s$ is some sort of polytope, the unit sphere in $V_s^*$ will be the dual polytope, exchanging faces and vertices. - Just find the critical values of $v \mapsto w(v)/\|v\|$. – Deane Yang Jul 18 at 21:23 This would probably work as well, but I found an exact solution here: math.unl.edu/~s-bbockel1/928/node25.html Basically, $V_S^*$ is isometrically isomorphic to $V^*/S^\circ$, where $S^\circ$ is the subspace in $V^*$ for which $s(t) = 0$ for $s$ in $S^\circ$ and $t$ in $S$.
– Mike Battaglia Jul 21 at 4:06 ## 2 Answers An exact solution can be found here using the Hahn-Banach Theorem: http://math.unl.edu/~s-bbockel1/928/node25.htm Using this, you can show that $V_S^*$ is isometrically isomorphic to $V^*/S^\circ$, where $S^\circ$ is the subspace in $V^*$ for which $s(t)=0$ for $s$ in $S^\circ$ and $t$ in $S$. - This is a fairly general case of the Legendre transformation, I guess. I don't really see that it should be that much simpler than the general case (but I'm not an expert). - Thanks. I did a bit of research into the Legendre transformation and this looks like another good way to go. I ended up using Hahn-Banach, which I've commented on above. – Mike Battaglia Jul 20 at 21:51
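To make the quotient description concrete: for a subspace $S = \operatorname{range}(B)$ of $(\mathbb R^3, \|\cdot\|_1)$, the induced dual norm $\sup_{v\in S}\, a\cdot v/\|v\|_1$ should equal $\min_{z\in S^\circ}\|a-z\|_\infty$, since the dual of the $L^1$ norm is the $L^\infty$ norm. A brute-force check — the specific $B$, $a$, and grid sizes are my own choices:

```python
import numpy as np

B = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 2.0]])  # basis of a 2-dim subspace S
a = np.array([1.0, -2.0, 0.5])                      # functional v -> a.v, restricted to S

# Dual norm on S directly: sup over directions in S (the ratio is homogeneous,
# so scanning the unit circle of coefficients suffices).
theta = np.linspace(0.0, 2.0 * np.pi, 200001)
V = B @ np.vstack([np.cos(theta), np.sin(theta)])   # shape (3, npts)
dual_direct = np.max(a @ V / np.abs(V).sum(axis=0))

# Quotient formula: S° = null(B^T) is spanned here by z = (1, 2, -1)
# (check: B^T z = 0); minimize the L-infinity distance from a to the line t*z.
z = np.array([1.0, 2.0, -1.0])
t = np.linspace(-5.0, 5.0, 200001)
dual_quotient = np.min(np.abs(a[:, None] - t[None, :] * z[:, None]).max(axis=0))

print(dual_direct, dual_quotient)  # the two values agree (both ≈ 4/3)
```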