http://physics.stackexchange.com/questions/40786/how-felix-baumgartner-has-reached-the-speed-of-sound-quickly/41549
# How did Felix Baumgartner reach the speed of sound so quickly?

I have watched Felix Baumgartner's freefall, but I wonder how Felix reached the speed of sound so quickly, in a matter of seconds, when we had no idea of his speed beforehand?

-

2 I too wonder about this wikipedia quote "Baumgartner also attempted to break three other world records—the highest manned balloon flight, the highest altitude jump, and the longest time in free fall. The free fall was initially expected to last between five and six minutes; it ended after 4:22." How did they miscalculate that so badly? (Also as a side note, I think longest free fall time is not a particularly interesting thing to measure. I imagine people in 50 years just sitting around in empty space, far from masses, breaking records.) – Nick Kidman Oct 14 '12 at 20:19

3 – Nick Kidman Oct 14 '12 at 20:31

Has anybody posted his velocity profile during the fall? I always wondered about this. – ja72 Jan 5 at 22:56

## 4 Answers

The goal of the Red Bull Stratos project, involving the 43-year-old Austrian Felix Baumgartner, was to break the sound barrier. Within the first 15,000 feet of his jump he was traveling well over the cruising speed of a commercial jetliner, reaching some 625 mph. The maximum velocity reached by Felix was about 380 m/s. How did he do that?

During a free fall, there are two forces acting on the object: air resistance (drag) and gravity ($mg$). So two parts come into play.

The acceleration due to gravity $(g=\frac{GM}{(R+h)^2})$ is almost constant at 9.8. Yes, it varies between roughly 9.6 and 9.8 within that 39 km range. For example, at a height of 39,000 m it is about 9.68, and at a height of 10 km it's about 9.7. At sea level it is 9.8, as you know. But this $g$ value differs by no more than about 0.2 (even at 39 km).

Now, the major part: the air resistance acting on a free-falling object depends on its velocity. As the velocity increases, the drag also increases, and eventually the body falls with a constant average velocity called the terminal velocity, probably around 50 m/s for a skydiver in dense air near the ground. At terminal velocity, the force due to gravity equals the air resistance.

But his free fall started from an altitude of about 120,000 ft (39 km) above the ground. That's in the stratosphere (8-50 km). In the stratosphere the pressure is very low, which means the density of air is very small, because the weight of the air column above is much smaller at high altitude. This is where drag gets weak, since air resistance also depends on the density of the medium (air). Hence, essentially only gravity accelerates him down, and he reaches the sound barrier quickly. Once he enters the troposphere (up to 8 km from the ground), he slows down due to drag and reaches about 170 mph, which is suitable for parachuting, touchdown and landing at last! Compared to the effect caused by the variation of $g$, I'd say that air resistance plays the major role.

In addition, the speed of sound is also lower in the stratosphere (a supportive factor), because the pressure wave depends on the properties of the medium. The Laplace correction gives $v=\sqrt{\frac{\gamma P}{\rho}}$, which by the ideal gas law amounts to $v\propto\sqrt{T}$; in the cold stratosphere the $v$ value is lower. In other words, it's not a constant 330-343 m/s: at about 30 km, it's 305 m/s. This further reduces the significance of the $g$-variation effect!

Edit: It seems that my approximation of $g$ and $v$ has become such an issue.
So, I've updated to a newer version. Some calculators I've used: Variation of $g$ with altitude, and Variation of the speed of sound. If we use the Mach value 1.2, we can get the output from that calculator.

-

4 Note that the speed of sound is not constant, but depends on the density, temperature and pressure of the fluid, hence why mission control needed to look at data to determine whether the sound barrier was actually broken during the fall. (Baumgartner himself said that he was too busy tumbling head over feet while falling to notice any sonic boom effects.) – Jerry Schirmer Oct 15 '12 at 18:00

@JerrySchirmer: Yeah, I agree with that, Jerry... It's because sound is just a longitudinal pressure wave of vibrations (it requires a medium) and is nowhere close to an EM wave. Hence, it depends on these factors... I went through Wiki already... – Ϛѓăʑɏ βµԂԃϔ Oct 15 '12 at 18:38

And $g$ is not constant, but varies with altitude. There is also a centrifugal component, but it is negligible I think. – ja72 Oct 23 '12 at 20:36

@ja72: Hello ja72. I think the variation of $g$ could be approximated to be around 9.8. Ok, as the issue has become somewhat bigger, I've inserted everything... What about now, guys? – Ϛѓăʑɏ βµԂԃϔ Oct 24 '12 at 4:16

1 @CrazyBuddy: I assure you, touchdown against the ground at 170 miles/hour moving straight down definitely takes the form of a splashdown in the sense mentioned in my prior comment. It may be a good speed for opening the parachute, but not for landing. – SF. Oct 24 '12 at 20:07

show 3 more comments

From Newton's law of universal gravitation, `g = go(r^2/(r+z)^2)`, where `r` is the radius of the Earth and `z` is the altitude with respect to the Earth's surface; so the gravity is not constant during his fall, and it doesn't depend on the mass of the body. As for the high increase in speed: the small effect of air resistance at z = 39 km allows a rapid increase in speed due to the gravitational pull, but after about a minute this increase slows and stops once he reaches his maximum speed. The temperature, pressure and density of air also affect his fall.

-

The value $g$ is not a constant $9.8$ $ms^{-2}$ in this case. Given that his free fall is from a height of $39,045$ $m$, the mean radius of Earth is $6,371,000$ $m$ and the mass of Earth is $5.9736\times10^{24}$ $kg$, the value is $g=9.7026$ $ms^{-2}$ at the start of his dive and $g=9.8219$ $ms^{-2}$ at the end of his dive. These figures may vary based on the elevation of his landing spot, but it's clear nonetheless that $g$ is a variable in this case..!

-

I think you meant for this to be a comment, rather than an answer... – Warrick Oct 15 '12 at 18:47

I think that they made the astronaut suit heavier just to break the record quickly and to reach the max speed before reaching the dense air. – geogeek Oct 15 '12 at 19:33

3 @geogeek the suit being heavier or lighter makes no difference at all. The acceleration g is equal for any mass, as demonstrated by Galileo centuries ago. – Helder Velez Oct 16 '12 at 1:57

1 Of course, I would expect that this would cause less than a 1% error in calculating fall times. – Jerry Schirmer Oct 23 '12 at 22:31

In the differential equation it's not enough to put in a constant value for the density of air. If you do so, the limit velocity turns out to be too slow, even using reasonable values for the cross-section area and for the drag coefficient. Instead, if you use the experimental data (included in Mathematica), the density of air changes as a function of the altitude.
In this case, you see that the solution (obtained via numerical integration) shows a peak in the velocity during the first minute which is in rough agreement with the actual value. See my post, I explain everything here: http://disipio.wordpress.com/2012/10/21/the-felix-baumgartner-equation/ -
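The point made in the answers above (negligible drag at high altitude, so nearly free-fall acceleration past the local speed of sound, then strong deceleration in the denser air below) is easy to reproduce numerically. Here is a minimal R sketch; it is not the code from the linked post, the jumper's mass, drag coefficient and frontal area are rough guesses, and a simple exponential atmosphere is assumed, so only the shape of the velocity profile should be trusted.

```
# Free fall from 39 km with altitude-dependent air density and g (Euler steps).
# Cd, A and m are rough guesses, not Red Bull Stratos data.
GM <- 3.986e14              # Earth's gravitational parameter, m^3/s^2
Re <- 6.371e6               # mean Earth radius, m
rho0 <- 1.225; Hs <- 8500   # sea-level density (kg/m^3) and scale height (m)
Cd <- 1.0; A <- 0.9; m <- 118   # drag coefficient, frontal area (m^2), jumper + suit mass (kg)

h <- 39045; v <- 0; t <- 0; dt <- 0.1; vmax <- 0
while (h > 1500) {                        # stop near a typical parachute altitude
  g   <- GM / (Re + h)^2                  # gravity weakens slightly with altitude
  rho <- rho0 * exp(-h / Hs)              # air density grows as he descends
  a   <- g - 0.5 * rho * Cd * A * v^2 / m # gravity minus quadratic drag
  v   <- v + a * dt
  h   <- h - v * dt
  t   <- t + dt
  vmax <- max(vmax, v)
}
cat(sprintf("peak speed ~ %.0f m/s, fall time to 1.5 km ~ %.0f s\n", vmax, t))
```

With these made-up parameters the peak comes out at a few hundred m/s roughly a minute into the fall, which is the right ballpark; matching the real flight data would require the actual suit mass and drag profile.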
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 27, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9478801488876343, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/131556/linearity-of-a-function
Linearity Of A Function

I understand that the linearity of a function is determined by the degree of the polynomial, but I was unsure whether the modulus operator changes this. Is $f(x) = N \bmod x$ a linear function if $N$ and $x$ are integers? As in: f(x) = 17 mod x

-

That would not be a well defined function. What is $17 \mod 1.23$? – Peter Tamaroff Apr 14 '12 at 2:07

1 @PeterT.off Surely OP wants the domain $\Bbb Z$ or $\Bbb N$? – anon Apr 14 '12 at 2:09

@anon I think it is normal for $x$ to denote a real number. Anyway, it would not be a function like the polynomials and linear functions the OP mentions, which are usually $\mathbb R \to \mathbb R$. – Peter Tamaroff Apr 14 '12 at 2:11

Sorry, I didn't think the clarification would change the answer. Both N and x are integers. N is just another variable. Is it linear? – Char Apr 14 '12 at 2:15

2 No. $f(4) = 17 \mod 4 = 1$ but $f(2) + f(2) = 17 \mod 2 + 17 \mod 2 = 2$. – Neal Apr 14 '12 at 2:17

show 2 more comments

1 Answer

You have to decide what you mean by linear before you can answer this question. The function $f(x)=mx+b$, which you call "definitively linear", satisfies $$f(r-s)-2f(r)+f(r+s)=0$$ for all $r,s$. The function $f(x)=17$ reduced modulo $x$ doesn't: $$f(2)-2f(3)+f(4)=1-4+1=-2\ne0$$ If you want to call it linear, go ahead, but beware that it won't do most of the things that you might expect linear functions to do.

-
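For anyone who wants to see the failure of linearity numerically, here is a quick R check of the two counterexamples above (R's `%%` is the integer mod operator used in the question):

```
f <- function(x) 17 %% x     # f(x) = 17 mod x, for positive integer x

f(4)                         # 1
f(2) + f(2)                  # 2, so f(2 + 2) != f(2) + f(2): additivity fails
f(2) - 2*f(3) + f(4)         # 1 - 4 + 1 = -2, not 0: the second-difference identity fails
```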
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 14, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.932050883769989, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/84091/non-isomorphic-graphs-with-the-same-numbers-of-closed-walks/84153
## Non-isomorphic graphs with the same numbers of closed walks

Can somebody help me to construct two families of finite simple connected graphs $G_i$ and $H_i$, $i=1, 2, \cdots,n$ ($n$ possibly large), such that:

$1)$ $G_i\ncong H_i$ for $i=1, 2, \cdots, n$

$2)$ $|V(G_i)|=|V(H_i)|, |E(G_i)|=|E(H_i)|$

$3)$ If $C_k(G)$ denotes the number of closed walks of length $k$ in a graph $G$, we have $C_k(G_i)=C_k(H_i)$ for $i=1, 2, \cdots, n$

$4)$ Preferably, I need these graphs to be $a)$ minimal and $b)$ highly irregular (or to satisfy at least one of the two conditions $(a)$ or $(b)$).

$Definition 1:$ A graph $G$ is highly irregular if every vertex $v$ of $G$ is adjacent only to vertices with distinct degrees.

$Definition 2:$ The sequence of graphs $G_i$, $i=1,2,\cdots,n$, is minimal if the number of vertices of every $G_i$ is minimum. For example, the two trees $T_1$ and $T_2$ with degree sequences $4,4,1,1,1,1,1,1$ and $5,2,2,1,1,1,1,1$ respectively are minimal, because they are cospectral trees with the minimum number of vertices.

I will appreciate any help and guidance.

-

As Igor notes, you have already received an answer to this, from McKay, in response to your question shahrooz.Janbaz (mathoverflow.net/users/19885), Operation on Isospectral graphs, mathoverflow.net/questions/83817 (version: 2011-12-18) – Chris Godsil Dec 22 2011 at 17:22

1 Your question is too vague for this forum. Now that you know your condition is the same as being cospectral, do a google search for "cospectral graph" and you will find large amounts of information on it. – Brendan McKay Dec 23 2011 at 0:10

1 Dear Shahrooz: MathOverflow is not meant to be a substitute for PhD supervision, nor for the necessary process of solving one's own problems (or at least finding simpler versions that one can solve) – Yemon Choi Dec 23 2011 at 12:09

Dear Choi, I know and understand your note. But I am working on quite a difficult problem, and the answer to this question would be very helpful. I don't want some professors to solve this problem for me, but I would appreciate it if they shared their experiences with me. Three months ago, I solved a problem about the bandwidth of graphs after 6 months of effort and thinking; when I introduced the problem here, some very helpful guidance showed that it is another version of a problem in design theory. I think of this site as a place for sharing experience and ideas. But thank you for your note. – Shahrooz Dec 23 2011 at 12:22

## 3 Answers

This is true if and only if the adjacency matrices of your families are (pairwise) isospectral. Since you already know how to construct regular isospectral graphs, you know how to answer your question.

-

There is a nice paper about this kind of question: Waiting for a bat to fly by (in polynomial time), by Itai Benjamini, Gady Kozma, Laszlo Lovasz, Dan Romik and Gabor Tardos (arXiv:math/0310435). They address exactly this question, phrased a bit differently: launch a simple random walk in a finite graph, and observe only its successive return times to a marked vertex; what can you tell about the shape of the graph from this information? They exhibit an example of two graphs with the same return time distribution, and from there by adding pieces you should be able to produce examples of all sizes.
As you notice, there are things like the number of vertices that are easy to compute, and there are graphs that are indistinguishable that way; the main question in the paper is, replacing the SRW by something else that is observed only at a given vertex (say some Glauber dynamics), can you do better than a single SRW? (Plus, I like the title of the paper very much ;->)

-

Look around. The term cospectral is also used. Some of the people who have been answering you showed that almost all trees are cospectral. Of course trees can be quite irregular. An early paper, with some nice pictures, is Frank Harary, Clarence King, Abbe Mowshowitz, and Ronald C. Read, "Cospectral Graphs and Digraphs", Bull. London Math. Soc. (1971) 3(3): 321-328. It might be that almost all graphs share their spectrum with another non-isomorphic graph (in some precise sense). It might also be the case that almost all graphs are determined up to automorphism by their spectra (in some precise sense of almost all). But once you have some cospectral graphs you can make lots more by combining them in various ways. It might not be that satisfying to explicitly describe examples (by a picture or adjacency matrix). The thing about the Latin square graphs is that with very little work one can describe how to get thousands of mutually non-isomorphic graphs all with the same spectrum.

-

Dear Meyerowitz, we say a graph $G$ is highly irregular if every vertex $v$ of $G$ is adjacent only to vertices with distinct degrees. I know that we can find a lot of cospectral trees whose vertices have very different degrees. But the conditions of minimality and high irregularity are very important. I will add the definition of high irregularity to the question. Thank you – Shahrooz Dec 23 2011 at 11:56

Unfortunately, many trees are not highly irregular. The definition of "highly irregular graphs" is from the paper of that name by Yousef Alavi, Gary Chartrand, F.R.K. Chung, Paul Erdos, R.L. Graham and Ortrud R. Oellermann, Journal of Graph Theory, Vol 11, No 2, 235-249 (1987). – Shahrooz Dec 23 2011 at 12:06

1 I think that nobody here (including me) remembered that "highly irregular" is a defined property. It isn't very common. Please add a definition of "minimal" too, since it can mean many things. – Brendan McKay Dec 23 2011 at 12:48

Sorry, you are right. I will edit the question and add the definition of the word "minimal". Thank you – Shahrooz Dec 23 2011 at 15:15
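Since $C_k(G) = \mathrm{tr}(A^k)$ for the adjacency matrix $A$, the closed-walk condition can be checked mechanically. Below is a small R sketch using the two 8-vertex trees named in the question; the edge lists are my own reconstruction of those degree sequences (the double star, and the spider with one leg of length 3), so treat the labelling as illustrative rather than canonical.

```
adj <- function(n, edges) {                  # build an adjacency matrix from an edge list
  A <- matrix(0, n, n)
  for (e in edges) { A[e[1], e[2]] <- 1; A[e[2], e[1]] <- 1 }
  A
}
T1 <- adj(8, list(c(1,2), c(1,3), c(1,4), c(1,5), c(2,6), c(2,7), c(2,8)))  # degrees 4,4,1,1,1,1,1,1
T2 <- adj(8, list(c(1,2), c(2,3), c(3,4), c(1,5), c(1,6), c(1,7), c(1,8)))  # degrees 5,2,2,1,1,1,1,1

closed_walks <- function(A, k) {             # C_k(G) = tr(A^k)
  M <- diag(nrow(A))
  for (i in seq_len(k)) M <- M %*% A
  sum(diag(M))
}
sapply(2:8, function(k) c(T1 = closed_walks(T1, k), T2 = closed_walks(T2, k)))

round(sort(eigen(T1)$values), 6)             # identical spectra, hence equal tr(A^k) for all k
round(sort(eigen(T2)$values), 6)
```

Both trees give the spectrum $\{\pm 2.3028, \pm 1.3028, 0, 0, 0, 0\}$ (to four decimals), so every closed-walk count agrees even though the trees are obviously non-isomorphic.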
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 35, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9394609928131104, "perplexity_flag": "middle"}
http://unapologetic.wordpress.com/2010/04/19/
The Unapologetic Mathematician

Lebesgue Measure

So we've identified a measure on the ring $\mathcal{R}$ of finite disjoint unions of semiclosed intervals. Now we want to apply our extension and completion theorems.

The smallest $\sigma$-ring $\mathcal{S}$ containing $\mathcal{R}$ is also the smallest one containing the collection $\mathcal{P}$ of semiclosed intervals. As it turns out, it's also a $\sigma$-algebra. Indeed, we can write the whole real line $\mathbb{R}$ as the countable disjoint union of elements of $\mathcal{P}$:

$\displaystyle\mathbb{R}=\bigcup\limits_{i=-\infty}^\infty\left[i,i+1\right)$

and so $\mathbb{R}$ itself must be in $\mathcal{S}$. We call $\mathcal{S}$ the $\sigma$-algebra of "Borel sets" of the real line.

Our measure $\mu$ — defined on elements of $\mathcal{P}$ by $\mu(\left[a,b\right))=b-a$ — is not just $\sigma$-finite, but actually finite on $\mathcal{R}$. And thus its extension to $\mathcal{S}$ will still be $\sigma$-finite. The above decomposition of $\mathbb{R}$ into a countable collection of sets of finite $\mu$-measure shows us that the extended measure is, in fact, totally $\sigma$-finite.

But our measure might not be complete. As the smallest $\sigma$-algebra containing $\mathcal{P}$, $\mathcal{S}$ might not contain all subsets of sets of $\mu$-measure zero. And thus we form the completions $\overline{\mathcal{S}}$ of our $\sigma$-algebra and $\bar{\mu}$ of our measure. We call $\overline{\mathcal{S}}$ the $\sigma$-algebra of "Lebesgue measurable sets", and $\bar{\mu}$ is "Lebesgue measure" (remember, it's pronounced "luh-BAYG"). In fact, the incomplete measure $\mu$ on Borel sets is also often called Lebesgue measure.

Posted by John Armstrong | Analysis, Measure Theory | 9 Comments
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 34, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9147940278053284, "perplexity_flag": "head"}
http://cs.stackexchange.com/questions/1296/solve-a-recurrence-using-the-master-theorem
Solve a recurrence using the master theorem

This is the recursive formula for which I'm trying to find an asymptotic closed form using the master theorem: $$T(n)=9T(n/27)+(n \cdot \lg(n))^{1/2}$$ I started with $a=9,b=27$ and $f(n)=(n\cdot \lg n)^{1/2}$, comparing against $n^{\log_b(a)}$, and $n^{\log_{27}(9)}=n^{2/3}$, but I don't understand how to handle the $(n\cdot \lg n)^{1/2}$. I think that $(n\cdot \lg n)^{1/2}$ is bigger than $n^{2/3}$, so that it fits the third case of the master theorem, but I'm sure I'm missing something here.

-

2 No. $(n\lg n)^{1/2} = o(n^{2/3})$. – JeffE Apr 16 '12 at 8:38

2 There is no question here. – Raphael♦ Apr 16 '12 at 19:00

1 Answer

$f(n) = (n\cdot \lg n)^{1/2}$ and $n^{\log_b a}=n^{2/3}$, thus $f(n) = O(n^{\log_b a})$ and even $f(n) = O(n^{\log_b a - \epsilon})$ for $\epsilon < 1/6$. Why? Because $$\lim_{n\to\infty} \frac{f(n)}{n^{\log_b a - \epsilon}} = \lim_{n\to\infty}\frac{n^{1/2}\lg^{1/2}n}{n^{2/3-\epsilon}} = \lim_{n\to\infty} \frac{\lg^{1/2}n}{n^{1/6-\epsilon}} =0 \quad\text{for }\epsilon< 1/6$$ Thus case 1 of the Master theorem should apply, and $T(n) = \Theta(n^{2/3})$.

-
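A quick numerical sanity check of the $\Theta(n^{2/3})$ conclusion, as an R sketch (the base case $T(n)=1$ for $n<27$ is an arbitrary assumption; any constant base case gives the same asymptotics):

```
T_rec <- function(n) {                 # T(n) = 9 T(n/27) + sqrt(n lg n)
  if (n < 27) return(1)
  9 * T_rec(n / 27) + sqrt(n * log2(n))
}
for (k in 1:8) {
  n <- 27^k
  cat(sprintf("n = 27^%d   T(n) / n^(2/3) = %.4f\n", k, T_rec(n) / n^(2/3)))
}
```

The ratio $T(n)/n^{2/3}$ settles towards a constant as $n$ grows, which is what case 1 predicts.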
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 14, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9714753031730652, "perplexity_flag": "head"}
http://stats.stackexchange.com/questions/4174/estimating-the-probability-that-a-software-change-fixed-a-problem?answertab=votes
# Estimating the probability that a software change fixed a problem At work we have a hardware device that is failing for some yet to be determined reason. I have been tasked to see if I can make this device not fail by making changes to its software driver. I have constructed a software test bench which iterates over the driver functions which I feel are most likely to cause the device to fail. So far I have forced 7 such failures and the iterations that the device failed on are as follows: 100 22 36 44 89 24 74 Mean = 55.57 Stdev = 31.81 Next, I made some software changes to the device driver and was able to run the device for 223 iterations without failure before I manually stopped the test. I want to be able to go back to my boss and say "The fact that we were able to run the device for 223 iterations without failure means that my software change has a X% probability of fixing the problem." I would also be satisfied with the converse probability that the device will still fail with this fix. If we were to assume that the iteration the device fails on is normally distributed, we can say that going 223 iterations without failure is 5.26 standard deviations from the mean which roughly has a 1-in-14 million chance of happening. However, because we only have a sample size of 7 (not including the 223), I'm fairly certain it would be unwise to assume normality. This is where I think the Student's t-test comes into play. Using the t-test with 6 degrees of freedom, I've calculated that the actual population mean has a 99% probability of being less than 94. So now my question to you guys is whether or not I am allowed to say with 99% certainty that hitting 223 iterations without failure is a 4.05 sigma event, i.e. $\frac{(223 - 94)}{31.81} = 4.05$ ? Am I allowed to use the 31.81 sample standard deviation in that calculation or is there some other test I should do to get a 99% confidence on what the maximum standard deviation is and then use that in my calculation for how many sigmas 223 really is away from the mean at the 99% confidence level? Thanks! UPDATE The answers I received here are beyond any expectation I had. I truly appreciate the time and thought many of you have put into your answers. There is much for me to think about. In response to whuber's concern that the data does not seem to follow an exponential distribution, I believe I have the answer for as to why. Some of these trials were run with what I thought would be a software fix but ultimately ended in failure. I would not be surprised if those trials were the 74 89 100 grouping that we see. Although I wasn't able to fix the problem it certainly seems like I was able to skew the data. I will check my notes to see if this is the case and my apologies for not remembering to include that piece of information earlier. Lets assume the above is true and we were to remove 74 89 100 from the data set. If I were to re-run the device with the original driver and get additional failure data points with values 15 20 23, how would you then compute the exponentially distributed parametric prediction limit at the 95% confidence level? Would you feel that this prediction limit is still a better statistic than assuming independent Bernoulli trials to find the probability of no failure at 223 iterations? 
Looking more closely at the wikipedia page on Prediction Limits, I calculated the parametric prediction limits at the 99% confidence level, assuming unknown population mean and unknown stdev, in Excel as follows:

$\bar{X_n} = 55.57$

$S_n = 31.81$

$T_a = \mathrm{T.INV}\Bigl(\frac{1+.99}{2},6\Bigr) = 3.707$

$\text{Lower Limit} = 55.57 - 3.707*31.81*\sqrt{1+\frac{1}{7}} = -70.51$

$\text{Upper Limit} = 55.57 + 3.707*31.81*\sqrt{1+\frac{1}{7}} = 181.65$

Since my trial of 223 is outside the 99% confidence interval of [-70.51, 181.65], can I assume with 99% probability that this is fixed, assuming that the underlying distribution is the t-distribution? I wanted to make sure my understanding was correct even though the underlying distribution is most likely exponential, not normal. I have no clue in the slightest how to adjust the equation for an underlying exponential distribution.

UPDATE 2

So I'm really intrigued with this 'R' software; I've never seen it before. Back when I took my stats class (several years ago) we used SAS. Anyway, with the cursory knowledge I gathered from Owe Jessen's example and a bit of help from Google, I think I came up with the following R code to produce the prediction limits with the hypothetical dataset, assuming an exponential distribution. Let me know if I got this right:

````
fails <- c(22, 24, 36, 44, 15, 20, 23)
fails_xfm <- fails^(1/3)            # cube-root transform to approximate normality
Y_bar <- mean(fails_xfm)
Sy <- sd(fails_xfm)
df <- length(fails_xfm) - 1
no_fail <- 223
percentile <- c(.9000, .9500, .9750, .9900, .9950, .9990, .9995, .9999)
quantile <- qt(percentile, df)
UPL <- (Y_bar + quantile*Sy*sqrt(1+(1/length(fails_xfm))))^3   # back-transform the upper prediction limits
plot(percentile, UPL)
abline(h=no_fail, col="red")
text(percentile, UPL, percentile, cex=0.9, pos=2, col="red")
````

-

The iteration numbers are surely just identifiers, and should not be used as numbers in computations. How many iterations did you do in total in your original trial? – James Nov 3 '10 at 16:11

@James I could be wrong, but if the iterations are simply identifiers then most people would naturally report them in increasing order, like so: 22 24 36 44 ... The fact that the numbers are reported in an apparently random sequence suggests that each number represents the failed iteration number of 7 separate tests. – user28 Nov 3 '10 at 16:26

I take your point, but I'm still a little unsure of the value of the mean of these numbers. – James Nov 3 '10 at 16:57

@Srikant: Yes, your interpretation is spot on. To James' question, I think it would be fair to say that with this given device and software test bench, the device fails on average at the 55th iteration. Does that not sit well with you? – SiegeX Nov 3 '10 at 18:01

@SiegeX Your last approach has the right flavor, but please don't use the formulae on the Wikipedia page: they are only for normally distributed data. It is rare for failure time data to be normal. They typically are positively skewed. This implies the normal theory upper prediction limits can be (way) too low. BTW, a lower prediction limit is irrelevant here--although the fact that it is hugely negative is a clear indicator of how bad the normal theory methods are for these data. To adjust the equation, see the reference I provided in my answer. – whuber♦ Nov 3 '10 at 19:35

show 1 more comment

## 4 Answers

This question asks for a prediction limit. This tests whether a future statistic is "consistent" with previous data. (In this case, the future statistic is the post-fix value of 223.) It accounts for a chance mechanism or uncertainty in three ways:

1. The data themselves can vary by chance.
2. Because of this, any estimates made from the data are uncertain.

3. The future statistic can also vary by chance.

Estimating a probability distribution from the data handles (1). But if you simply compare the future value to predictions from that distribution you are ignoring (2) and (3). This will exaggerate the significance of any difference that you note. This is why it can be important to use a prediction limit method rather than some ad hoc method.

Failure times are often taken to be exponentially distributed (which is essentially a continuous version of a geometric distribution). The exponential is a special case of the Gamma distribution with "shape parameter" 1. Approximate prediction limit methods for gamma distributions have been worked out, as published by Krishnamoorthy, Mathew, and Mukherjee in a 2008 Technometrics article. The calculations are relatively simple. I won't discuss them here because there are more important issues to attend to first.

Before applying any parametric procedure you should check that the data at least approximately conform to the procedure's assumptions. In this case we can check whether the data look exponential (or geometric) by making an exponential probability plot. This procedure matches the sorted data values $k_1, k_2, \ldots, k_7$ = $22, 24, 36, 44, 74, 89, 100$ to percentage points of (any) exponential distribution, which can be computed as the negative logarithms of $1 - (1 - 1/2)/7, 1 - (2 - 1/2)/7, \ldots, 1 - (7 - 1/2)/7$. When I do that the plot looks decidedly curved, suggesting that these data are not drawn from an exponential (or geometric) distribution. With either of those distributions you should see a cluster of shorter failure times and a straggling tail of longer failure times. Here, the initial clustering is apparent at $22, 24, 36, 44$, but after a relatively long gap from $44$ to $74$ there is another cluster at $74, 89, 100$. This should cause us to mistrust the results of our parametric models.

One approach in this situation is to use a nonparametric prediction limit. That's a dead simple procedure in this case: if the post-fix value is the largest of all the values, that should be evidence that the fix actually lengthened the failure times. If all eight values (the seven pre-fix data and the one post-fix value) come from the same distribution and are independent, there is only a $1/8$ chance that the eighth value will be the largest. Therefore, we can say with $1 - 1/8 = 87.5$% confidence that the fix has improved the failure times. This procedure also correctly handles the censoring in the last value, which really records a failure time of some unknown value greater than 223. (If a parametric prediction limit happens to exceed 223--and I suspect [based on experience and on the result of @Owe Jessen's bootstrap] it would be close if we were to calculate it with 95% confidence--we would determine that the number 223 is not inconsistent with the other data, but that would leave unanswered the question concerning the true time to failure, for which 223 is only an underestimate.)

Based on @csgillespie's calculations, which--as I argued above--likely overestimate the confidence as $98.3$%, we nevertheless have found a window in which the actual confidence is likely to lie: it's at least $87.5$% and somewhat less than $98.3$% (assuming we have any faith in the geometric distribution model).
I will conclude by sharing my greatest concern: the question as stated could easily be misinterpreted as an appeal to use statistics to make an impression or sanctify a conclusion, rather than provide genuinely useful information about uncertainty. If there are additional reasons to suppose that the fix has worked, then the best course is to invoke them and don't bother with statistics. Make the case on its technical merits. If, on the other hand, there is little assurance that the fix was effective--we just don't know for sure--and the objective here is to decide whether the data warrant proceeding as if it did work, then a prudent decision maker will likely prefer the conservative confidence level afforded by the non-parametric procedure. Edit For (hypothetical) data {22, 24, 36, 44, 15, 20, 23} the exponential probability plot is not terrifically non-linear: (If this looks non-linear to you, generate probability plots for a few hundred realizations of seven draws from an Exponential[25] distribution to see how much they will wiggle by chance alone.) Therefore with this modified dataset you can feel more comfortable using the equations in Krishnamoorthy et al. (op. cit.) to compute a prediction limit. However, the harmonic mean of 25.08 and relatively small SD (around 10) indicate the prediction limit for any typical confidence level (e.g., 95% or 99%) will be much less than 223. The principle in play here is that one uses statistics for insight and to make difficult decisions. Statistical procedures are of little (additional) help when the results are obvious. - +1 very nice answer. – csgillespie Nov 3 '10 at 16:49 thank you so much for this thoughtful answer. I have included an Update section in my question that attempts to address your concern as well as ask an additional question. – SiegeX Nov 3 '10 at 17:54 I've added an UPDATE 2 section that attempts to use the equations in the Normal Based Methods in a Gamma Distribution paper by Krishnamoorthy, Mathew, and Mukherjee. Do you agree with my results? If so, I intend to get more failure data by removing the fix then testing the new data set against an exponential probability plot to ensure they fit the exponential distribution. I'll then update my R code with the new dataset to get my new upper limits. – SiegeX Nov 4 '10 at 1:51 @SiegeX Good job! Using equation (2) of the paper with your data and your confidence levels, but using a different computing platform as a check, I obtain UPLs of 42.5763,50.1065,58.4814,71.5132,83.4132,121.271,143.914,220.331. That looks very close to what you have plotted. – whuber♦ Nov 4 '10 at 2:13 There are a few ways of doing this problem. The way I would tackle this problem is as follows. The data you have comes from a geometric distribution. That is, the number of Bernoulli trials before a failure. The geometric distribution has one parameter p, which is the probability of failure at each point. For your data set, we estimate p as follows: \begin{equation} \hat p^{-1} = \frac{100 + 22 + 36 + 44 + 89 + 24 + 74}{7} = 55.57 \end{equation} So $\hat p = 1/55.57 = 0.018$. From the CDF, the probability of having a run of 223 iterations and observing a failure is: \begin{equation} 1-(1-\hat p)^{223} = 0.983 \end{equation} So the probability of running 223 iterations and not having a failure is \begin{equation} 1- 0.983 = 0.017 \end{equation} So it seems likely (but not overwhelming so) that you have fixed the problem. 
If you have a run of about 300 iterations then the probability goes down to 0.004.

Some notes:

1. A Bernoulli trial is just tossing a coin, i.e. there are only two outcomes.

2. The geometric distribution is usually phrased in terms of success (rather than failure). For you, a success is when the machine breaks!

-

Surely p^ is 7/100 (or however many trials were used in the original test)? – James Nov 3 '10 at 16:09

No I don't think so. I took the question to mean that he ran the device until he reached a failure. So it failed on 100 iterations, 22 iterations, ... – csgillespie Nov 3 '10 at 16:14

Then shouldn't each test have its own failure probability? – James Nov 3 '10 at 16:56

Each test does have its own failure probability - it's 1. We've got the data to prove it ;) Anyway I'm assuming that each test is a replicate from the Geometric distribution. – csgillespie Nov 3 '10 at 16:59

3 Assuming a geometric distribution is assuming that the failures are independent. In other words, on any trial the device fails with probability p, independent of the other trials (like tossing a coin). This is a reasonable starting point, but if the failure is due to something like a memory leak, then the longer the device runs, the more likely it is to fail, and the assumption of independent failures would be invalid. – PeterR Nov 3 '10 at 19:06

show 4 more comments

I think you could torture your data a bit with bootstrapping. Following csgillespie's calculations with the geometric distribution, I played around a bit and came up with the following R-code - any corrections greatly appreciated:

````
fails <- c(100, 22, 36, 44, 89, 24, 74)  # Observed data
N <- 100000                              # Number of replications
Ncol <- length(fails)                    # Number of columns in the data-matrix
boot.m <- matrix(sample(fails, N*Ncol, replace=TRUE), ncol=Ncol)
                       # The bootstrap data matrix: it draws a vector of Ncol results
                       # from the original data, and replicates this N times
p.hat <- function(x){p.hat = 1/(sum(x)/length(x))}
                       # Function to calculate the probability of failure
p.vec <- apply(boot.m, 1, p.hat)         # calculates the probabilities for each of the replications
quant.p <- quantile(p.vec, probs=0.01)   # calculates the 1%-quantile of the probs.
hist(p.vec)                              # draws a histogram of the probabilities
abline(v=quant.p, col="red")             # adds a line where quant.p is
no.fail <- 223                           # Repetitions without a fail after the repair
(prob.fail <- 1 - pgeom(no.fail, prob=quant.p))
                       # Prob of no fail after 223 reps with failure prob quant.p
````

The idea was to get a worst-case value for the probability, and then use it to calculate the probability of observing no fail after 223 iterations, given the prior failure probability. The worst case of course being a low failure probability to begin with, which would raise the likelihood of observing no failure after 223 iterations without fixing the problem. The result was 6.37% - as I understand it, you would have had a 6% probability of not observing a failure after 223 trials if the problem still exists.

Of course, you could generate samples of trials and calculate the probability from that:

````
boot.fails <- rbinom(N, size=no.fail, prob=quant.p)  # repeats draws with success-rate quant.p, N times
mean(boot.fails == 0)                                # Ratio of no successes
````

with the result of 6.51%.

-

What is this doing exactly? I'm not familiar with R-code, but if I had to guess, it is using the original 7 figures to extrapolate out the exponential series with 100,000 data points?
– SiegeX Nov 3 '10 at 18:26 I edited the code with comments (which in R are after a #) – Owe Jessen Nov 3 '10 at 18:59 PS: Of course, this is not really the worst case (which would be a draw of 7x100), but then you wouldn't have to use bootstrap, because you would ignore the other observations. – Owe Jessen Nov 3 '10 at 19:15 Thank you for the comments, I understand it much better now. However, could you put into layman's term what "quantile(p.vec,probs=0.01) #calculates the 1%-quantile of the probs" is essentially telling us? I see that we get a value of 0.0122 for the 1% quantile. How do I interpret that? – SiegeX Nov 3 '10 at 22:46 1 The quantile is a rank-based value. One you probably know is the median: You sort your observations according to size, and the median is the observation in the middle (say, you have the income of 100 people, then the median income is the 50th-highest income). Now generalizing, the 1%-quantile is the probability at 1% of the observations. The advantage of a quantile is that it is non-parametric, you don't have to make assumptions about the underlying distribution, and that it is less biased than the quantile-function of a symmetrical distribution when you have skewed data. – Owe Jessen Nov 4 '10 at 9:16 I faced this problem myself and decided to try Fisher's exact test. This has the advantage that the arithmetic boils down to something you can do with JavaScript. I put this on a web page - http://www.mcdowella.demon.co.uk/FlakyPrograms.html - this should work either from there or if you download it to your computer (which you are welcome to do). I think you have a total of 382 successes and 7 failures in the old version, and 223 successes and 0 failures in the new one, and that you could get this at random with probability about 4% even if the new version was no better. I suggest that you run it a bit more. You can play about with the web page to see how the probability changes if you survive longer - I would go for something over 1000 - in fact I'd try hard to turn it into something I could run automatically and then let it overnight to really blitz the problem. - That's a clever approach, but it seems biased to me, for two reasons. The first is that the occurrence of a failure in any experiment precludes the possibility of any more successes, calling into question the independence of failure and success which is assumed by Fisher's Exact Test. In other words, how can you model these results as if they were like drawing balls out of an urn? The second is the censoring of the last value at 223 has to be handled a little more carefully. (This problem can be surmounted.) – whuber♦ Nov 3 '10 at 19:11 Your website's abstract seems spot on to the problem I'm facing here. Thank you for taking the time to put together this page. Prior to the 223 continuous run, I also had two additional continuous runs of 102 and 60 which I stopped early for time consideration. There has been no failure between the 102, 60 and 223 runs so I think I can add those up for the figure in your "New Version" Successes cell. – SiegeX Nov 3 '10 at 21:34 @whuber: Actually, I think your first concern doesn't apply in this situation because once the device fails there can no longer be any more successes for that test because the device hard locks and requires me to cycle the power to bring it back to a normal state. There is still the concern about me censoring the data, however. – SiegeX Nov 3 '10 at 21:38 I agree that my approach is not completely correct for the data you actually have. 
I suspect that the error is small, especially in comparison to worries about non-independence of successive trials if the hardware is e.g. failing partly due to the build up of heat. I don't know this for sure, though. I personally would be happier if I could run enough trials to come up with a tail probability of 1E-6 or smaller, or if I could run experiments to find a way to provoke a hardware failure 100% of the time and then file a bug report the hardware guys could work on. – mcdowella Nov 4 '10 at 6:29
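For readers who want to re-run the two headline numbers from these answers, here is a short R sketch. The 382/7 and 223/0 success/failure counts are taken from this answer, and the geometric failure probability from csgillespie's answer, so nothing new is being estimated here:

```
# Geometric model: chance of surviving 223 iterations with the estimated failure rate
p <- 1 / mean(c(100, 22, 36, 44, 89, 24, 74))        # about 0.018 per iteration
(1 - p)^223                                          # about 0.017

# Fisher's exact test on the old-vs-new 2x2 table
counts <- matrix(c(382, 7,                           # old driver: successes, failures
                   223, 0),                          # new driver: successes, failures
                 nrow = 2, byrow = TRUE)
fisher.test(counts, alternative = "less")$p.value    # about 0.04, matching the ~4% quoted above
```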
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 19, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9412674903869629, "perplexity_flag": "middle"}
http://quant.stackexchange.com/questions/4338/minimum-variance-hedge-with-more-than-one-asset
# Minimum variance hedge with more than one asset

My portfolio comprises 3 assets A, B, C that are correlated, and the variance-covariance structure is known. At any given point in time, my position in asset A, say, is given to me. I need to construct a variance-minimizing hedge using both B and C, given this position in A.

Basically I am approaching the problem by directly constructing the variance as a function of, say, x and y, the positions in B and C, and minimizing the function by setting the partials w.r.t. x and y to 0. This gives me 2 equations in x and y and I can solve them for x and y.

My question is: is this approach sound? What notion of risk is this really minimizing? I did not explicitly construct a risk model here.

-

## 1 Answer

If the variances are known to be $\sigma_0$, $\sigma_1$ and $\sigma_2$ and the correlations are $\rho_{01}$, $\rho_{02}$ and $\rho_{12}$ then you can do exactly as you suggest - write down the variance of the total portfolio as a function of your holdings $x_0$, $x_1$ and $x_2$ and set the partial derivatives with respect to $x_1$ and $x_2$ to zero. You end up with the following matrix equation $$\left[ \begin{matrix} \sigma_1^2 && \rho_{12}\sigma_1\sigma_2 \\ \rho_{12}\sigma_1\sigma_2 && \sigma_2^2 \end{matrix} \right] \left[ \begin{matrix} x_1\\x_2 \end{matrix} \right] = -\left[ \begin{matrix} \rho_{01}\sigma_1 \\ \rho_{02}\sigma_2 \end{matrix} \right] \sigma_0 x_0$$ which you can solve by inverting the matrix, as long as $\rho_{12}\neq \pm 1$ (i.e. your hedging assets aren't perfectly correlated or anticorrelated).

As to whether this is sound - well, it depends what you mean by "sound". You are minimizing the total variance of your portfolio, conditional on you knowing the covariance matrix. That's about all you can say. Your "risk model" is a single dimension - you are saying that the only notion of risk that you care about is the total variance. A full risk evaluation would need a procedure for determining the covariance matrix, and some level of backtesting to determine if your forecast risk after the hedge is a reflection of the true risk you would see if you were to hold this portfolio.

-

And then there's the risk that the past is 100% correlated with the future; i.e. that the past covariance holds for the future. I suppose this is the source of rebalancing. – Phil H Oct 15 '12 at 14:33
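A small R sketch of this calculation, with made-up volatilities, correlations and position (none of these numbers come from the question), just to show the mechanics and to confirm that the solved hedge reduces the total variance. The right-hand side of the system carries a minus sign when $x_1, x_2$ are interpreted as holdings, which is why the hedge positions come out with the opposite sign to the correlations.

```
s0 <- 0.20; s1 <- 0.15; s2 <- 0.25        # volatilities of A, B, C (assumed)
r01 <- 0.6; r02 <- 0.4; r12 <- 0.3        # pairwise correlations (assumed)
x0 <- 100                                  # given position in asset A (assumed)

Sigma_bc <- matrix(c(s1^2,      r12*s1*s2,
                     r12*s1*s2, s2^2), nrow = 2, byrow = TRUE)
rhs     <- -c(r01*s1, r02*s2) * s0 * x0    # first-order conditions for the holdings
x_hedge <- solve(Sigma_bc, rhs)            # minimum-variance positions in B and C
x_hedge

# Check: total portfolio variance with and without the hedge
Sigma <- matrix(c(s0^2,      r01*s0*s1, r02*s0*s2,
                  r01*s0*s1, s1^2,      r12*s1*s2,
                  r02*s0*s2, r12*s1*s2, s2^2), nrow = 3, byrow = TRUE)
w_unhedged <- c(x0, 0, 0)
w_hedged   <- c(x0, x_hedge)
c(unhedged = t(w_unhedged) %*% Sigma %*% w_unhedged,
  hedged   = t(w_hedged)   %*% Sigma %*% w_hedged)
```

With positive correlations, the solved positions in B and C come out negative, i.e. short hedges against the long position in A.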
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 12, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9553101658821106, "perplexity_flag": "head"}
http://www.haskell.org/haskellwiki/index.php?title=User:Michiexile/MATH198/Lecture_7&diff=prev&oldid=31429
User:Michiexile/MATH198/Lecture 7

From HaskellWiki

Current revision (19:16, 4 November 2009)

Last week we saw what an adjunction was. Here's one thing we can do with adjunctions.

Now, let F be a left adjoint to U. We set T = UF. Then we have natural transformations

$\mu: UFUF \to UF$, with $\mu_X = U\epsilon_{FX}$

$\iota: 1 \to UF$, with $\iota_X = \eta_X$

such that $\mu$ is associative and $\iota$ is the unit of $\mu$. These requirements remind us of the definition of a monoid - and this is not that much of a surprise. To see the exact connection, we need to garner a wider spread of definitions.

1 Algebraic objects in categories

We recall the definition of a monoid:

Definition A monoid is a set $M$ equipped with an operation $\mu: M\times M\to M$ that we call composition and an operation $e: 1\to M$ that we call the identity, such that

• $\mu \circ (1_M \times \mu) = \mu \circ (\mu \times 1_M)$ (associativity)
• $\mu \circ (1_M \times e) = \mu \circ (e \times 1_M) = 1_M$ (unity)

Suppose we have a monoidal category - a category C with a bifunctor $\otimes: C\times C\to C$ called the tensor product, which is associative (up to natural isomorphisms) and has an object I acting as a unit (up to natural isomorphisms) for the tensor product. The product in a category certainly works as a tensor product, with a terminal object acting as a unit; this makes the category a cartesian monoidal category. However, there is often reason to have a non-cartesian tensor product for the monoidal structure of a category. For, say, abelian groups, or for vector spaces, we have the tensor product forming a non-cartesian monoidal category structure. And it is important that we do. And for the category of endofunctors on a category, we have a monoidal structure induced by composition of endofunctors: $F\otimes G = F\circ G$. The unit is the identity functor.

Now, we can move the definition of a monoid out of the category of sets, and define a generic monoid object in a monoidal category:

Definition A monoid object in a monoidal category C is an object M equipped with morphisms $\mu: M\otimes M\to M$ and $e: I\to M$ such that

• $\mu \circ (1_M \otimes \mu) = \mu \circ (\mu \otimes 1_M)$ (associativity)
• $\mu \circ (1_M \otimes e) = \mu \circ (e \otimes 1_M) = 1_M$ (unity)

As an example, a monoid object in the cartesian monoidal category Set is just a monoid. A monoid object in the category of abelian groups, with the tensor product for the monoidal structure, is a ring. And the composition UF for an adjoint pair is a monoid object in the category of endofunctors on the category.

The same kind of construction can be made translating familiar algebraic definitions into categorical constructions with many different groups of definitions.
For groups, the corresponding definition introduces a diagonal map $\Delta: G\to G\times G$ and an inversion map $i: G\to G$ to codify the entire definition.

One framework that formalizes the whole thing, in such a way that the definitions themselves form a category, is the theory of sketches by Charles Wells. In one formulation we get the following definition:

Definition A sketch S = (G,D,L,K) consists of a graph G, a set of diagrams D, a set L of cones in G and a set K of cocones in G. A model of a sketch S in a category C is a graph homomorphism $G\to C$ such that the image of each diagram in D is commutative, each of the cones is a limit cone and each of the cocones is a colimit cocone. A homomorphism of models is just a natural transformation between the models.

We thus define a monad in a category C to be a monoid object in the category of endofunctors on that category. Specifically, this means:

Definition A monad in a category C is an endofunctor $T: C\to C$ equipped with natural transformations $\mu: T^2\to T$ and $\eta: 1\to T$ such that the associativity and unit diagrams commute, i.e. $\mu\circ T\mu = \mu\circ \mu T$ and $\mu\circ T\eta = \mu\circ \eta T = 1_T$.

We can take this definition and write it out in Haskell code, as:

```
class Functor m => MathematicalMonad m where
  return :: a -> m a
  join   :: m (m a) -> m a

-- such that
--   join . fmap return = id :: m a -> m a
--   join . return      = id :: m a -> m a
--   join . join        = join . fmap join :: m (m (m a)) -> m a
```

Those of you used to Haskell will notice that this is not the same as the Monad typeclass. That type class calls for a natural transformation (>>=) :: m a -> (a -> m b) -> m b (or bind). The secret of the connection between the two lies in the Kleisli category, and a way to build adjunctions out of monads as well as monads out of adjunctions.

2 Kleisli category

We know that an adjoint pair will give us a monad. But what about getting an adjoint pair out of a monad? Can we reverse the process that got us the monad in the first place?

There are several different ways to do this. Awodey uses the Eilenberg-Moore category, which has as objects the algebras of the monad T: morphisms $h: TA \to A$. A morphism $f: (\alpha: TA\to A)\to (\beta: TB\to B)$ is just some morphism $f:A\to B$ in the category C such that $f\circ\alpha = \beta\circ T(f)$. We require of T-algebras two additional conditions:

• $1_A = h\circ\eta_A$ (unity)
• $h\circ\mu_A = h \circ Th$ (associativity)

There is a forgetful functor U that takes an algebra to its underlying object: $U(h:TA\to A) = A$, and U(f) = f.

We shall construct a left adjoint F to this from the data of the monad T by setting $FC = (\mu_C: T^2C\to TC)$, making TC the corresponding object. And plugging the corresponding data into the equations, we get:

• $1_{TC} = \mu_C\circ\eta_{TC}$
• $\mu_C\circ\mu_{TC} = \mu_C\circ T\mu_C$

which we recognize as the axioms of unity and associativity for the monad. By working through the details of proving this to be an adjunction, and examining the resulting composition, it becomes clear that this is in fact the original monad T.

However - while the Eilenberg-Moore construction is highly enlightening for constructing formal systems for algebraic theories, and even for the fixpoint definitions of data types, it is less enlightening for understanding Haskell's monad definition.
To come to terms with the Haskell approach, we instead look to a different construction aiming to fulfill the same aim: the Kleisli category.

Given a monad T over a category C, equipped with unit η and concatenation μ, we shall construct a new category K(T), and an adjoint pair of functors U, F factorizing the monad into T = UF.

We first define $K(T)_0 = C_0$, keeping the objects from the original category. Then, we set $K(T)_1$ to be the collection of arrows, in C, of the form $A\to TB$. The composition of $f: A\to TB$ with $g: B\to TC$ is given by the sequence $A\to^f TB\to^{Tg} T^2C\to^{\mu_C} TC$. The identity is the arrow $\eta_A: A\to TA$. The identity property follows directly from the unity axiom for the monad, since composing $\eta_A$ with $\mu_A$ gives the identity.

Given this category, we next define the functors:

• U(A) = TA
• $U(f: A\to TB) = TA\to^{Tf} T^2B\to^{\mu_B} TB$
• F(A) = A
• $F(g: A\to B) = A\to^{\eta_A} TA\to^{Tg} TB$

This definition makes U, F an adjoint pair. Furthermore, we get

• UF(A) = U(A) = TA
• $UF(g: A\to B) = U(Tg\circ\eta_A) = \mu_B\circ T(Tg\circ\eta_A) =\mu_B\circ T^2g \circ T\eta_A$, and by naturality of μ, we can rewrite this as $Tg \circ \mu_A\circ T\eta_A = Tg\circ 1_{TA} = Tg$ by unitality of η.

We've really just chased through the corresponding commutative diagram. Hence, the composite UF really is just the original monad functor T.

But what's the big deal with this? you may ask. The big deal is that we now have a monad specification with a different signature. Indeed, the Kleisli arrow for an arrow f :: a -> b and a monad Monad m is something of the shape fk :: a -> m b. And the Kleisli factorization tells us that the Haskell monad specification and the Haskell monad laws are equivalent to their categorical counterparts. And the composition of Kleisli arrows is easy to write in Haskell:

```
f :: a -> m b
g :: b -> m c

(>>=) :: m a -> (a -> m b) -> m b   -- Monadic bind, the Haskell definition

-- kleisliCompose f g :: a -> m c
kleisliCompose f g = (>>= g) . f
```

3 Examples

3.1 The List monad

Lists form a monad, with the following (redundant) definition:

```
instance Monad [] where
  return x = [x]

  []     >>= _ = []
  (x:xs) >>= f = f x ++ (xs >>= f)

join []     = []
join (l:ls) = l ++ join ls
```

As it turns out, the list monad can be found by considering the free and forgetful functors between sets and monoids. Indeed, the lists are what we get from the Kleene star operation, which is the monad we acquire by composing the free monoid functor with the forgetful functor.

3.2 Error handling

We can put a monadic structure on a coproduct A + B so that the monadic bind operation performs computations $A+B\to A'+B$ until some computation fails, returning an error, typed B, after which we bypass any further computations, just carrying the error out of the entire computation. The endofunctor here is $(-) + B$. So the monad is given from a way to go from $A+B+B\to A+B$. Doing this is easy: in Haskell terms, we just remove the constructor differences between the two copies of B floating around. Mathematically, this is just using the functoriality of the coproduct construction on the inclusion maps into A + B.
For our example, we shall return the first value of B ever to occur, thus making our join operator look like this:

```
join :: Either b (Either b a) -> Either b a
join (Left y)          = Left y
join (Right (Left y))  = Left y
join (Right (Right x)) = Right x
```

This gives us a Haskell monad defined by:

```
instance Monad (Either b) where
  return x = Right x

  Left y  >>= _ = Left y
  Right x >>= f = f x
```

4 Additional reading

• http://blog.sigfpe.com/2006/08/you-could-have-invented-monads-and.html (one of the least dramatic monad tutorials out there)
• http://www.disi.unige.it/person/MoggiE/ftp/lc88.ps.gz (Moggi: Computational lambda-calculus and monads, one of the papers that started the interest in monads. Logic, dense reading.)

5 Homework

Full marks will be given for 4 out of the 7 questions.

1. Prove that the Kleisli category adjunction is an adjunction.
2. Prove that the Eilenberg-Moore category adjunction is an adjunction.
3. Given monad structures on S and T,
4. The writer monad W is defined by
   • data Monoid m => W m x = W (x, m)
   • fmap f (W (x, m)) = W (f x, m)
   • return x = W (x, mempty)
   • join (W (W (x, m), n)) = W (x, m `mappend` n)
   1. (2pt) Prove that this yields a monad.
   2. (2pt) Give the Kleisli factorization of the writer monad.
   3. (2pt) Give the Eilenberg-Moore factorization of the writer monad.
   4. (2pt) Is there a nice, 'natural' adjunction factorizing the writer monad?
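As a small usage illustration of the Kleisli composition from section 2 (an added sketch, not part of the original notes; safeRecip and safeSqrt are made-up example names), two partial computations in the Maybe monad compose like this:

```
-- Added sketch: composing two Maybe-Kleisli arrows.  kleisliCompose is
-- repeated here from section 2 so the snippet is self-contained.
kleisliCompose :: Monad m => (a -> m b) -> (b -> m c) -> a -> m c
kleisliCompose f g = (>>= g) . f

safeRecip :: Double -> Maybe Double
safeRecip 0 = Nothing
safeRecip x = Just (1 / x)

safeSqrt :: Double -> Maybe Double
safeSqrt x
  | x < 0     = Nothing
  | otherwise = Just (sqrt x)

-- recipThenSqrt 4 == Just 0.5, recipThenSqrt 0 == Nothing
recipThenSqrt :: Double -> Maybe Double
recipThenSqrt = kleisliCompose safeRecip safeSqrt
```

The failure (Nothing) propagates through the composite exactly as the categorical composition $A\to TB\to T^2C\to TC$ predicts.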
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 38, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8791743516921997, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/41942/inhomogeneous-effective-mass-in-a-2d-lattice
# Inhomogeneous Effective Mass in a 2D Lattice Consider a tight-binding square lattice in 2D. This lattice has two different nearest neighbor tunneling rates along the x and y directions; call them $J_{x}$ and $J_{y}$. All longer range tunneling rates are zero. How can I compute the response of the expected momentum of a very low momentum particle to a force applied along a vector n? Is the rate of change of momentum parallel to n? Where is the excess momentum going? I think the excess momentum is being lost to heating the environment. I think the rate of change of momentum IS parallel to the force applied along the vector n. - ## 1 Answer It is not parallel, the excess momentum can't go into heat, because heat is energy not momentum, but it does go into deflecting the atomic crystal as a whole a negligible amount. The description of this is by the discrete Schrodinger equation with a different value for the x and y mass: $$H= - {\partial_x^2\over 2m_x} - {\partial_y^2\over 2m_y} - F_x x - F_y y$$ The deflections can be found by linearly rescaling x and y to turn it into the ordinary rotationally invariant Schrodinger equation, and using the ordinary Newtonian limit (or you can use the short-wavelength geometric optics approximation directly, or you can note that it factorizes into independent Schrodinger equations in x and y of a standard form that you can take a classical limit on). The form of Newton's laws here is: $$a_x = {F_x \over m_x}$$ $$a_y = {F_y \over m_y}$$ This means that the force is not in the direction of the acceleration when the two hopping parameters are unequal. This doesn't violate any conservation law, the extra momentum is absorbed by the lattice--- you must remember that the effective theory is not translationally invariant, so doesn't conserve momentum. - I'll have to think about this, but by inspection you seem to have hit it right on the head. Have class today, might ask some Q's later. Thanks Ron! – Dylan Sabulsky Oct 29 '12 at 13:21
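A tiny numerical sketch of the point made in the answer (added here for illustration; the mass and force values are arbitrary, not taken from the thread): with $m_x \neq m_y$ the acceleration $(F_x/m_x,\,F_y/m_y)$ is generally not parallel to the applied force.

```
-- Added sketch: anisotropic effective-mass Newton's law, a = (Fx/mx, Fy/my).
-- The numbers are made up, chosen only to show the direction mismatch
-- between force and acceleration.
accel :: (Double, Double) -> (Double, Double) -> (Double, Double)
accel (mx, my) (fx, fy) = (fx / mx, fy / my)

main :: IO ()
main = do
  let force = (1.0, 1.0)              -- force applied along the (1,1) direction
      a     = accel (1.0, 2.0) force  -- hypothetical masses mx = 1, my = 2
  print a                             -- (1.0,0.5): not parallel to (1,1)
```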
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 3, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9306694269180298, "perplexity_flag": "head"}
http://openwetware.org/index.php?title=User:Pranav_Rathi/Notebook/OT/2010/12/10/Olympus_Water_Immersion_Specs&diff=481997&oldid=481996
User:Pranav Rathi/Notebook/OT/2010/12/10/Olympus Water Immersion Specs

From OpenWetWare
Water immersion objective details

We are using an Olympus UPLANSAPO (UIS 2) water immersion IR objective for DNA stretching and unzipping. The detailed specifications of the objective can be found in the link: [1]. Other specifications are as follows:

• Magnification: 60X
• Wavelength: 1064 nm
• NA: 1.2
• Medium: water
• Max ray angle: 64.5 degrees
• f#: 26.5
• Effective focal length in water: 1.5 to 1.6 mm (distance between the focal spot and the exit aperture surface)
• Entrance aperture diameter: 8.5 mm
• Exit aperture diameter: 6.6 mm
• Working distance: 0.28 mm
• Cover glass correction: 0.13 to 0.21 (we use 0.15)

Resolution and achievable spot size

The resolution and the spot size (beam waist) presented here are theoretical limits; we cannot achieve better than this. Resolution and spot size are diffraction limited, and to reach these limits our optics would have to be perfect, with no aberrations or other artifacts. Since our optics is neither perfect nor perfectly clean, we can hardly reach these limits in real life; the real resolution and spot size are certainly worse than the numbers presented here. A good way to do a quick estimation of the resolution (diameter of the Airy disk) is that it is 1/3 of a wavelength: λ = 0.580 μm, so λ/(3n) = 145 nm. Since we do all our experiments in water, we have to take the index of water into account (n = 1.33). I am ignoring the NA of the condenser in the calculations.

• Wavelength of the visible light: λv = 0.590 μm.
• Wavelength of the IR: λIR = 1.064 μm.
• Diameter of the incident beam at the exit pupil: D = 2ω'o = 6500 μm (ω'o is the incident beam waist).
• Focal length of the objective: f = 1500 μm.
• Angular resolution inside water: $\theta = \sin^{-1}\frac{1.22\lambda_v}{nD}= 8.1\times10^{-5}\,\mathrm{rad}$
• Spatial resolution in water: $\Delta l = \frac{1.22f\lambda_v}{nD}= 122\,\mathrm{nm}$

Since we are not too sure of the focal length of the objective, I derived the resolution formula in terms of the numerical aperture NA (the math can be seen through this link [2]).
• Resolution in terms of NA: $\Delta l = \frac{1.22\lambda_v}{2n}\sqrt{\left(\frac{n}{NA}\right)^{2}-1}= 127\,\mathrm{nm}$
• The minimum spot size (beam waist ωo) can now be calculated using the same formula, where Δl = 2ωo, but this time for the infrared wavelength.
• Minimum beam waist: $\omega_o = \frac{1.22f\lambda_{IR}}{2nD}= 112\,\mathrm{nm}$, giving a beam diameter of 224 nm.
• In terms of NA: $\omega_o = \frac{1.22\lambda_{IR}}{4n}\sqrt{\left(\frac{n}{NA}\right)^{2}-1}= 116\,\mathrm{nm}$, giving a beam diameter of 232 nm.

I also used a Gaussian approach to calculate the spot size, and the results are not much different, which suggests that either approach is reasonable.

• With the Gaussian approach: $\omega_o = \frac{\lambda_{IR}}{\pi n}\sqrt{\left(\frac{n}{NA}\right)^{2}-1}= 121\,\mathrm{nm}$, giving a beam diameter of 242 nm.

The results are not much different, and either approach is right. BUT the zeroth-order (paraxial) Gaussian approximation is not correct for a high-NA objective lens, i.e. for highly convergent beams. All the approaches above are based on the paraxial approximation (valid when the diffraction angle is less than 30°). For a highly convergent beam like ours this approximation is no longer valid, which is why the spot size calculated here comes out considerably smaller than the real one. For a better approximation one has to use the spot-size expression given by electromagnetic field theory with higher-order Gaussian corrections [3][4][5][6]. As we include higher-order corrections we get better and better theoretical results, but they still remain in the range of the spot sizes calculated above.

The best theoretical value for the spot size needs to be multiplied by pi (121 × π = 380 nm; I do not know why), and this value seems to be closer to the experimental value. Also, a quick way to estimate the spot size: ωo = λ0/2n = 400 nm (if we also divide it by π we get 127 nm).

The results for the beam waist can be verified experimentally by directly measuring the spot size, but the process is rather cumbersome and hardly interesting [7][8]. The results given here are the best in terms of the theoretical limits of aberration-free optics. Experimentally we suffer on a number of counts. One of them is the experimental setup itself, because of aberrations introduced by the index mismatch among the water, oil and glass interfaces, which also introduces multiple reflections. We should also not forget that the focal plane of the objective is not infinitely thin in the z-direction, which means that out-of-focus rays degrade the overall image, reducing the resolution and degrading the spot size.
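As a quick numerical cross-check of the NA-based formulas above (an added sketch, not part of the original notebook page; the inputs are the values quoted in the text):

```
-- Added sketch: evaluate the diffraction-limited resolution and the
-- Gaussian beam-waist formulas quoted above.  Wavelengths in micrometres.
resolutionNA :: Double -> Double -> Double -> Double
resolutionNA lam n na = 1.22 * lam / (2 * n) * sqrt ((n / na) ** 2 - 1)

gaussianWaist :: Double -> Double -> Double -> Double
gaussianWaist lam n na = lam / (pi * n) * sqrt ((n / na) ** 2 - 1)

main :: IO ()
main = do
  print (resolutionNA 0.590 1.33 1.2)   -- ~0.129 um, close to the 127 nm above
  print (gaussianWaist 1.064 1.33 1.2)  -- ~0.122 um, close to the 121 nm above
```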
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 6, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8745138645172119, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/57725/strassen-algorithm-7-multiplications
## Strassen Algorithm 7 multiplications

Strassen's algorithm is a well-known divide-and-conquer matrix multiplication algorithm. The trick of the algorithm is reducing the number of multiplications to 7 instead of 8. I was wondering, can we reduce any further? Can we do only 6 multiplications? Also, what happens if we divide the NxN arrays into 9 arrays each of (N/3)x(N/3) instead of 4 arrays of (N/2)x(N/2)? Can we then do fewer multiplications?

- 1 In view of your comment on Igor Rivin's answer: first reading your question, it seemed to me (and I guess to him) that you are interested in fast matrix multiplication in a sort of general way. In particular, the second paragraph caused this impression. – quid Mar 7 2011 at 21:35

## 5 Answers

You may be interested to know that there's a way to multiply $3\times3$ matrices using only 23 multiplications (where the naive method uses $27$). See Julian D. Laderman, A noncommutative algorithm for multiplying $3\times3$ matrices using $23$ multiplications, Bull. Amer. Math. Soc. 82 (1976) 126–128, MR0395320 (52 #16117).

As for doing $2\times2$ with fewer than $7$ multiplications, this was proved impossible just a few years ago. See J M Landsberg, The border rank of the multiplication of $2\times2$ matrices is seven, J. Amer. Math. Soc. 19 (2006), 447–459, MR2188132 (2006j:68034).

EDIT: As Mariano points out, Landsberg acknowledged a gap in the proof. But don't panic. The review, and my preceding paragraph, were based on the electronic version of Landsberg's paper. The print version (which is freely available on the AMS website) is different. It says, "Hopcroft and Kerr [12] and Winograd [22] proved independently that there is no algorithm for multiplying $2\times2$ matrices using only six multiplications." Those references are

J. E. Hopcroft and L. R. Kerr, On minimizing the number of multiplications necessary for matrix multiplication, SIAM J. Appl. Math. 20 (1971), 30–36, MR0274293 (43:58).

S. Winograd, On multiplication of $2\times2$ matrices, Linear Algebra and Appl. 4 (1971), 381–388, MR0297115 (45:6173).

- There is a gap in that proof, according to the review; the latter mentions an erratum, but I cannot find it using MathSciNet. – Mariano Suárez-Alvarez Mar 8 2011 at 0:01
1 Wait... log_3(23) > log_2(7), so what is the point?... – Michal R. Przybylek Mar 8 2011 at 1:04
2 Michal, I was just trying to answer the question about dividing things in threes instead of twos, I didn't claim it was any better. But as far as I know (and I hope someone will catch me up, if my information is out of date), it has not been proved that $23$ is best possible for $3\times3$ matrices, all that's known is that it's at least $19$. – Gerry Myerson Mar 8 2011 at 2:09
What Landsberg claims is stronger than what Hopcroft-Kerr and Winograd show. Landsberg claims that the border rank of $2 \times 2$ matrix multiplication is 7; this is stronger than proving that the rank of matrix multiplication is 7. Nevertheless, it looks like Landsberg patched his proof at arxiv.org/abs/math/0407224 – Ryan Williams Mar 12 2011 at 2:31
1 @unknown, try Marcus Blaser, On the complexity of the multiplication of matrices of small formats, J. Complexity 19 (2003) 43-60, MR 2003k:68040. – Gerry Myerson Nov 22 2011 at 5:36
Everything you ever wanted to know can be gleaned from: http://en.wikipedia.org/wiki/Matrix_multiplication

- I don't see where my questions are answered in the link you gave. It just repeats that it "is based on a clever way of multiplying two 2 × 2 matrices which requires only 7 multiplications (instead of the usual 8)". – Johny Mar 7 2011 at 20:57
There are references to the relevant papers in the wikipedia article. Since the methods are completely elementary, just read the papers. – Igor Rivin Mar 8 2011 at 4:36

Of possible relevance to your question (though it might go into more geometric considerations than you care to read): Generalizations of Strassen's equations for secant varieties of Segre varieties (J. Landsberg and L. Manivel, Comm. Algebra 2008)

- I am not an expert in this field, but vaguely recall that: Strassen's algorithm is optimal for divisions into 4 submatrices, and there are algorithms using a much larger number of submatrices that behave slightly better. If you are really interested in these results I can search for references, but be aware that the algorithms are of very little practical use --- they have big constant factors (and it is not the usual case that we are given a large dense matrix), and are not that robust (in the sense of numerical stability).

- 2 Actually, Strassen's algorithm IS of practical use (the matrices have to be of the order of 200x200, although the precise numbers change with the hardware you use). There has been a lot of work done on numerical stability, and at least for Strassen this is well understood. – Igor Rivin Mar 7 2011 at 21:45
@Igor: I assume Michal refers to the practicality of the other algorithms rather than that of the Strassen algorithm. – Charles Mar 8 2011 at 1:30
@Charles: you might be right, I could not tell. In any case, I have recently been dealing with 10000x10000 matrices -- a size almost unimaginable 15 years ago, which means that methods which were once of academic interest only are (at least in principle) becoming practical. – Igor Rivin Mar 8 2011 at 4:39

The best solution is on page 481, lines -1, -2, -3 and -4 of The Art of Computer Programming, Volume 2, Second Edition, by Donald E. Knuth. It is also interesting to take a look at page 482. He also references the original paper of Strassen as ```Numer. Math. 13 (1969), 354-356``` in lines -5, -6, -7, -8 of the same page (481).
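To make the "7 multiplications" concrete, here is a sketch of Strassen's scheme for one 2×2 step (added for illustration; it is not taken from any of the answers above). In the recursive algorithm the entries would be (N/2)×(N/2) blocks combined by these same formulas.

```
-- Added sketch: Strassen's 2x2 scheme with 7 multiplications.
-- Scalar entries are used here; recursively, each entry is a sub-block.
strassen2x2 :: Num a => (a,a,a,a) -> (a,a,a,a) -> (a,a,a,a)
strassen2x2 (a11,a12,a21,a22) (b11,b12,b21,b22) =
  ( m1 + m4 - m5 + m7   -- c11
  , m3 + m5             -- c12
  , m2 + m4             -- c21
  , m1 - m2 + m3 + m6 ) -- c22
  where
    m1 = (a11 + a22) * (b11 + b22)
    m2 = (a21 + a22) * b11
    m3 = a11 * (b12 - b22)
    m4 = a22 * (b21 - b11)
    m5 = (a11 + a12) * b22
    m6 = (a21 - a11) * (b11 + b12)
    m7 = (a12 - a22) * (b21 + b22)

-- strassen2x2 (1,2,3,4) (5,6,7,8) == (19,22,43,50), the usual matrix product
```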
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 13, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9150145053863525, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/6583/which-changes-of-metric-fix-all-open-balls-of-a-metric-space
Which changes of metric fix all open balls of a metric space?

In an earlier question, I was interested in counting the number of metric spaces on N points, where I considered two metric spaces to be the same if they had the same collection of open balls. Two questions:

1. What are the usual notions of metric space equivalence? Are any of them nontrivial for finite metric spaces? (For instance, the obvious one, that two metric spaces are equivalent if their topologies are the same, is trivial for finite metric spaces.)

2. If we say that two (labeled) metric spaces are equivalent if they have the same collection of open balls, in what ways can we operate on the metric d such that we get the same collection of open balls?

For example, any two metric spaces on 2 points are equivalent in this way, so any allowable operation on metrics yields an equivalent metric space. Clearly we can always scale d by any positive real without changing the equivalence class. Now consider the two metric spaces

````
x ---3--- y        x ---3--- y
 \       /          \       /
  3     4            4     5
   \   /              \   /
     z                  z
````

In both cases, the nontrivial open balls are {x, z} and {x, y}, so these metric spaces are equivalent. How can I describe the operation I'm performing on d in general terms? What other operations on d will yield metric spaces that are equivalent in this sense?

- You might be interested in the discussion here: mathoverflow.net/questions/5957/… – Qiaochu Yuan Nov 23 2009 at 17:25

1 Answer

If you take the set of distances between $n$ points, then you can define a function on the real numbers that, when applied to the distances, preserves the open balls on the $n$ points and defines a new metric, as follows. Let $m$ be the number of distinct distances $d_1,\dots,d_m$, and let $d_k'$ be the desired new distances. First note that zero must be fixed; second, the set of new distances must preserve the original open balls, must preserve the triangle inequality, and must be positive. We can then take the sum of polynomial terms

$$g(x)=\sum_{k=1}^{m}\frac{d_k'}{d_k\,f_k(d_k)\prod_{j\neq k}(d_k-d_j)}\;x\prod_{j\neq k}(x-d_j)\,f_k(x),$$

where the $f_k$ are arbitrary smooth functions not vanishing at $d_k$: each summand vanishes at $0$ and at every distance $d_j$ with $j\neq k$, and takes the value $d_k'$ at $d_k$. We can also add a term $x(x-d_1)\cdots(x-d_m)f(x)$, with $f$ an arbitrary smooth function, since it vanishes at zero and at all of the distances. In this way we fix zero and send all of the old distances to new distances that satisfy the triangle inequality, are positive, and preserve the open balls on the $n$ points. Symmetry is also preserved, since if $x,y$ and $y,x$ have the same distance it is sent to a single new distance, which is the distance between $x$ and $y$ as well as between $y$ and $x$. Thus the new set of distances is a metric on the $n$ points preserving the open balls.

This provides a real function operating on the reals that maps one metric space to another by transforming the distances. It does not send metrics with two equal distances to metrics with two different distances, so for the two equivalent metrics given it sends one to the other, but that transformation is not reversible.

For any set of inequalities on distances there is a realization as a metric space: just let the distances be close enough to 1 to satisfy the triangle inequality, set the distances from points to themselves to zero, and the triangle inequality will be satisfied.

Let us look at what is happening in the case with three points mentioned in the problem. Suppose one distance is the maximum.
It will determine the whole structure. Say it is the distance from $A$ to $B$. Then at the point $A$ it is greater than the distance from $A$ to $C$, so we get the set containing $A$ and $C$ in the topology; similarly we get the set containing $B$ and $C$ in the topology. The set containing $A$ and $B$ will not be in the topology. In all cases the set of all elements and the sets of individual elements are in the topology, so at this point we have determined the entire topology. Note that we have done this without specifying the distances $AC$ and $BC$; they can be set arbitrarily as long as they are less than $AB$.

In fact we can determine the entire set of topologies for three points by looking at the number of maxima. If two distances are equal and larger than the third, then the topology contains the entire set, the single points, and the two-point set consisting of the endpoints of the third edge. If all three distances are equal, the topology consists of the three single points and the entire set.

Something similar happens in three dimensions if one distance $AB$ is greater than all the others: it forbids the three-point sets containing the two points at this distance, it forces the two three-point sets not containing it to be in the topology, and that determines the entire structure of three-point sets in the topology.

- 2 I like this answer, but if you have the time, would you mind cutting the explanation into more sentences and writing the summation formula for the function? I find it somewhat hard to follow as is. – Elizabeth S. Q. Goodman Nov 24 2009 at 8:06
Seconded; I'm having trouble following exactly what your construction is, and why it works. Could you clarify? – Gabe Cunningham Nov 25 2009 at 1:32
I made some changes including adding a summation function and adding material. – Kristal Cantwell Nov 25 2009 at 18:25
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 35, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9226632118225098, "perplexity_flag": "head"}
http://mathhelpforum.com/calculus/96364-derivative-1-1-function.html
# Thread:

1. ## Derivative of 1-1 function

I'm having trouble with this problem: Let f(x) = 1/4 x^2 + x - 1. This is a 1-1 function and hence its inverse is a 1-1 function. Find the following: a. f'(2) b. f'(3)

I got this so far: 3/4 x^2 + 1, then 3/4 (2)^2 + 1 = 3 + 1 = 4. My first question is, am I right in getting the derivative and subbing the 2 in? My second question is, if I am right, then what do I do afterward?

2. Originally Posted by goldenroll
Let f(x)= 1/4 x^2 +x-1. This is a 1-1 function
Are you sure?
Originally Posted by goldenroll
Find the following a. f'(2) b. f'(3)
$f'(x) = \frac{x}{2}+1$, $f'(2) = \frac{2}{2}+1 = \dots$, $f'(3) = \frac{3}{2}+1 = \dots$

3. Originally Posted by goldenroll
I'm having trouble on this problem Let f(x)= 1/4 x^2 +x-1. This is a 1-1 function and hence its inverse is a 1-1 function. Find the following a. f'(2) b. f'(3) I got this so far 3/4 x^2 +1 then 3/4 (2)^2 +1 = 3+1 = 4 My first question is am I right by getting the derivative and sub the 2 in? my second question is if I am right then what do i do afterward?
The way you've written this question confuses me a little bit. The derivative of a function is not dependent on whether or not it has an inverse. But if you want $f'(2)\text{ and }f'(3)$, then $f'(x)=\frac{1}{2}x+1\Rightarrow{f'(2)}=2\text{ and }f'(3)=\frac{5}{2}$. A function $f$ is said to have an inverse function $f^{-1}$ if for every element in the range of $f$ there exists one, and only one, corresponding element in the domain of $f$. Is the definition satisfied in your case?

4. Originally Posted by goldenroll
I'm having trouble on this problem Let f(x)= 1/4 x^2 +x-1. This is a 1-1 function and hence its inverse is a 1-1 function. Find the following a. f'(2) b. f'(3)
No, it's not 1-1 and does not have an inverse. Or are you restricting f to a given interval?
Originally Posted by goldenroll
I got this so far 3/4 x^2 +1
What is this equal to? Certainly not the derivative! f'(x) = (1/2)x + 1.
Originally Posted by goldenroll
then 3/4 (2)^2 +1 = 3+1 = 4 My first question is am I right by getting the derivative and sub the 2 in? my second question is if I am right then what do i do afterward?
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 9, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8915892839431763, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/40401/finding-a-vector-in-euclidian-space-that-minimizes-a-loss-function-subject-to-so
# Finding a vector in Euclidean space that minimizes a loss function subject to some constraints

I'm trying to solve the following minimization problem, and I'm sure there must be a standard methodology that I could use, but so far I couldn't find any good references. Please let me know if you have anything in mind that could help or any references that you think would be useful for tackling this problem.

Suppose you are given $K$ points, $p_i \in R^n$, for $i \in \{1,\ldots,K\}$. Assume also that we are given $K$ constants $\delta_i$, for $i \in \{1,\ldots,K\}$. We want to find the vector $x$ that minimizes:

$\min_{x \in R^n} \sum_{i=1,\ldots,K} || x - p_i ||^2$

subject to the following $K$ constraints:

$\frac{ || x - p_i ||^2 } { \sum_{j=1,\ldots,K} ||x - p_j||^2} = \delta_i$ for all $i \in \{1,\ldots,K\}$.

Any help is extremely welcome! Bruno

edit: also, we know that $\sum_{i=1,\ldots,K} \delta_i = 1$.

- @Bruno: The standard methodology is to use the method of linear least squares (the linear algebra way). Check out tutorial.math.lamar.edu/Classes/LinAlg/LeastSquares.aspx and en.wikipedia.org/wiki/… – InterestedGuest May 21 '11 at 0:02
@InterestedGuest: Yes, I was trying to solve this using least squares, but it's not clear to me how to deal with the constraints. I thought it could be a variation of the typical setting but with a slightly different loss function, but I'm not sure how to model it. In a way it looks like the opposite of LLE (Locally-Linear Embedding): here we are given an embedding (the space where the $p_i$'s live) and we want to find a point in it that is at some given relative distance of its K neighbors. However, while in LLE the embedding is being constructed, here the topology and geometry are fixed – Bruno May 21 '11 at 6:07
@Will: What do you mean, the function cannot change? For each $x$ that we could pick, it will be at a given distance of each of the $p_i$'s; we want the $x$ that minimizes the sum of those distances. However, at the same time we are constraining it in a way that the ratio of the distance of $x$ to a given $p_i$, relative to the total distances involved, is fixed. This is because we might want $x$ to be, for instance, 2 times closer to $p_1$ than to $p_2$. Could you please elaborate on your answer? – Bruno May 21 '11 at 6:10
@Will: you are right; I actually forgot to include the constraint that the given $\delta_i$'s for sure sum up to 1. The basic idea is this: we have K points $v_i$ in a given space, and are given some other point Y. Y might be 2x closer to $v_1$ than to $v_2$; so $\delta_1 = 1/3$ and $\delta_2 = 2/3$. Now we are given K points $p_i$ in some other space, of different dimensionality, such that their coordinates are basically $v_i$'s coordinates, scaled. Thus, all relative distances are preserved. We want to find the point $x$ in the 2nd space that would correspond to $Y$. We do so (...) – Bruno May 21 '11 at 7:00
(...) by looking for a point $x$ whose relative distance to each of its K neighbors in the new space is consistent with the relative distance of $Y$ to its K neighbors in the original space. If $v_1$ was mapped to $p_1$ and $v_2$ to $p_2$, $x$ should be 2x closer to $p_1$ than to $p_2$.
Note that if the dimensionality of the 2nd space is much larger than the dimensionality of the 1st space, there could be several $x$'s that respect those constraints; we define that we want to find the one that is closest to the actual points that are given (the $p_i$'s), so we minimize the total sum of distances – Bruno May 21 '11 at 7:03

## 2 Answers

You can try using the Lagrange multiplier method - see wikipedia.

- Yes, this works. If $n \leq K-2,$ you have no guarantee of any legal solution, even when the $\delta_i$ sum to 1, as required. It may be that the sample points, your $v_j$ and $Y,$ were in a Euclidean space of much lower dimension; however, that does not guarantee you can repeat that piece of luck if the new $n$ in $\mathbf R^n$ is too small.

If $n = K -1,$ there should be a single feasible point, "near" the simplex with the $K$ points as vertices. No need (or ability) to minimize anything. Actually, unless the $\delta$'s are all equal, it appears there is a second feasible point far away. If all angles in the simplex are acute, there is a feasible point in its interior. So, my advice is, figure out how to find a feasible point when $n=K-1.$ If circumstance forces $n \geq K,$ rotate so the hyperplane containing all the $p_i$ becomes the hyperplane $x_1, x_2, \ldots, x_{K-1}, 0,0,\ldots,0,$ solve the problem there, then rotate back. Meanwhile, I see nothing wrong with a numerical method for finding the single feasible point near the simplex when $n=K-1.$ Easier than finding the intersection of a large number of spheres and planes.

Note that, when $n=K,$ the full set of all feasible points is either a straight line (if all $\delta_i$ are equal) or, in fact, an actual circle. Go figure. In either case, it meets the hyperplane that contains the $p_i$'s orthogonally. For that matter, your easiest program is just to solve the problem in the original $v_i, Y$ location, that is, a numerical method that finds the point $Z$ near the $v_i$ simplex with the correct $\delta$'s. Then you can just map $Z$ along with the $v_i.$

- Thanks, Will. Let me think about your suggestions. In my case, $n >> K$ and the 2nd space is much larger than the 1st one. I am considering using a variation of how LLE constructs its embedding: assume that points lie on a manifold and reconstruct each one via a linear combination of its neighbors. Then, transfer "linear patches" of points from one space to the other, preserving the weights in that combination. If we constrain the best weights to be between 0 and 1, the mapping is invariant to rotations, scalings and translations, which makes the problem much easier. Thanks, once again! – Bruno May 22 '11 at 17:15
– Will Jagy May 22 '11 at 17:49
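For what it is worth, one concrete way to write down the Lagrange-multiplier formulation suggested in the first answer (an added sketch, using the constraints exactly as stated in the question) is

$$\mathcal{L}(x,\lambda_1,\dots,\lambda_K)=\sum_{i=1}^{K}\|x-p_i\|^2+\sum_{i=1}^{K}\lambda_i\left(\|x-p_i\|^2-\delta_i\sum_{j=1}^{K}\|x-p_j\|^2\right),$$

and one then looks for stationary points in $x$ and the $\lambda_i$ jointly; since every $\|x-p_i\|^2$ is quadratic in $x$, the condition $\nabla_x\mathcal{L}=0$ is linear in $x$ for fixed multipliers.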
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 66, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9540127515792847, "perplexity_flag": "head"}
http://mathhelpforum.com/discrete-math/103898-closed-form.html
Thread:

1. Closed Form

How could one write a closed-form function for the following?

F(0) = 0, F(1) = 1, F(2) = 4
F(n) = 3 * (F(n-1) – F(n-2)) + F(n-3) + 3

Computing: F(3) = 3 * (4-1) + 0 + 3 = 12, F(4) = 3 * (12-4) + 1 + 3 = 28, F(5) = 3 * (28-12) + 4 + 3 = 55.

I am not seeing a pattern here; any help would be appreciated.

2. Consider $G(x)=F_1x+F_2x^2+...$. You have:

$G(x)/x=F_1+F_2x+F_3x^2+...=1+F_2x+F_3x^2+...$
$(G(x)/x-1)/x = F_2+F_3x+F_4x^2... = 4+F_3x+F_4x^2...$
$((G(x)/x-1)/x-4)/x = F_3+F_4x + F_5x^2 +...$

Now $((G(x)/x-1)/x-4)/x = \sum_{j=0}^\infty F(j+3)x^j = \sum_{j=0}^\infty (3F(j+2)-3F(j+1)+F(j)+3)x^j = 3(G(x)/x-1)/x-3G(x)/x+G(x)+\frac{3}{1-x}$.

Solve for $G(x)$ and express it as partial fractions; the coefficient of $x^n$ will be $F_n$.
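As an added sanity check (not part of the original thread), the sequence can be generated directly from the recurrence, so any closed form obtained from the generating function above can be compared against it:

```
-- Added sketch: generate F(0), F(1), F(2), ... from the recurrence
-- F(n) = 3*(F(n-1) - F(n-2)) + F(n-3) + 3 with F(0)=0, F(1)=1, F(2)=4.
fs :: [Integer]
fs = 0 : 1 : 4 : zipWith3 step (drop 2 fs) (drop 1 fs) fs
  where step fn1 fn2 fn3 = 3 * (fn1 - fn2) + fn3 + 3

-- take 6 fs == [0,1,4,12,28,55], matching the values computed above
```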
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 9, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8414899110794067, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/114220/intuitive-explanation-for-a-polynomial-expansion?answertab=votes
# Intuitive explanation for a polynomial expansion?

Is there an intuitive explanation for the formula:

$$\frac{1}{\left(1-x\right)^{k+1}}=\sum_{n=0}^{\infty}\binom{n+k}{n}x^{n}$$ ?

Taylor expansion around x=0:

$$\frac{1}{1-x}=1+x+x^{2}+x^{3}+...$$

Differentiating this k times will prove this formula, but is there an easy explanation for this? Anything similar to the binomial law to show that the coefficient of $x^{n}$ is $\binom{n+k}{n}$. Thanks in advance.

- Do you not understand the differentiation-based proof? (it is very simple). Or, are you seeking alternative proofs? – Gone Mar 2 '12 at 17:34

## 3 Answers

Consider the number of solutions (say $\displaystyle a_n$) to the equation: $$x_1 + x_2 + \dots + x_{k+1} = n$$ where $x_i$ are non-negative integers, and $n$ is a non-negative integer.

The Stars and Bars approach: choosing where to place $\displaystyle k$ bars, out of a possible $\displaystyle n+k$ spots, gives us that the number of solutions is exactly $a_n = \displaystyle \binom{n+k}{n}$

But, if you look at this using the Generating Functions approach, we see that $$(1+x + x^2 + \dots)^{k+1} = \sum_{n=0}^{\infty} a_n x^n$$ i.e. $$\frac{1}{(1-x)^{k+1}} = \sum_{n=0}^{\infty} a_n x^n = \sum_{n=0}^{\infty} \binom{n+k}{n} x^n$$

- Nice combinatorial derivation! (+1) – robjohn♦ Feb 28 '12 at 1:24
Oh, i see, it is because the coefficients of $x^i$ are one, the number of solution for that equation equals to the coefficient of the term $x^n$ in the RHS. Thanks a lot. very nice proof. – johnniac Feb 28 '12 at 4:55

If by the binomial law you mean $$(1+x)^n=\sum_k\binom{n}{k}x^k\tag{1}$$ then yes. Note that $$\binom{n}{k}=\frac{n(n-1)(n-2)\dots(n-k+1)}{k!}\tag{2}$$ Consider what $(2)$ looks like for a negative exponent, $-n$: $$\begin{align} \binom{-n}{k} &=\frac{-n(-n-1)(-n-2)\dots(-n-k+1)}{k!}\\ &=(-1)^k\frac{(n+k-1)(n+k-2)(n+k-3)\dots n}{k!}\\ &=(-1)^k\binom{n+k-1}{k}\tag{3} \end{align}$$ Plug $(3)$ into $(1)$ and we get $$\begin{align} \frac{1}{(1-x)^{n+1}} &=(1-x)^{-(n+1)}\\ &=\sum_k\binom{-(n+1)}{k}(-x)^k\\ &=\sum_k(-1)^k\binom{n+k}{k}(-x)^k\\ &=\sum_k\binom{n+k}{k}x^k\tag{4} \end{align}$$

- This derivation is creative. Thanks. I will never forget this proof!!! – johnniac Feb 28 '12 at 4:56
I got a question here, In the equation 1, the range of k is from 0 to n, in equation 4, the range of k is from 0 to infinite. Throughout the proof, where did you expand the range of k? – johnniac Mar 8 '12 at 18:04
@wenhoujx: Actually, in the binomial expansion $(1)$, $k$ ranges over all non-negative integers. It just happens that when $n$ is a non-negative integer and $k>n$, $\binom{n}{k}=0$. – robjohn♦ Mar 8 '12 at 19:15

Here’s one way of looking at it. Suppose that you have $$f(x)=\sum_{n\ge 0}a_nx^n\;,$$ and you multiply both sides by $\frac1{1-x}$: $$\begin{align*} \left(\frac1{1-x}\right)f(x)&=\left(\sum_{n\ge 0}x^n\right)\left(\sum_{n\ge 0}a_nx^n\right)\\ &=(1+x+x^2+\dots)(a_0+a_1x+a_2x^2+\dots)\\ &=a_0 +(a_0+a_1)x+(a_0+a_1+a_2)x^2+\dots\\ &=\sum_{n\ge 0}\left(\sum_{k=0}^na_k\right)x^n\;. \end{align*}$$ In other words, the coefficient of $x^n$ in the product is just $a_0+a_1+\dots+a_n$. Now think about the construction of Pascal’s triangle: $$\begin{array}{ccccccc} 1\\ 1&1\\ 1&2&1\\ 1&3&3&1\\ 1&4&6&4&1\\ 1&5&10&10&5&1\\ \vdots&\vdots&\vdots&\vdots&\vdots&\vdots&\ddots \end{array}$$ The binomial coefficient $\binom{n}k$ is the entry in row $n$, column $k$ (numbered from $0$).
Moreover, since $$\binom{n}k=\binom{n-1}k+\binom{n-1}{k-1}\;,$$ each entry is the sum of the numbers above it in the column immediately to the left: this is the identity $$\sum_{i=0}^n\binom{i}k=\binom{n+1}{k+1}\;.\tag{1}$$ In the first ($k=0$) column of Pascal’s triangle you have the coefficients in the power series expansion of $\frac1{1-x}$. We saw above that the coefficients in the power series expansion of $\frac1{(1-x)^2}$ are just the cumulative sums of these coefficients, $1,2,3,\dots$, but these are just the entries in the second ($k=1$) column of Pascal’s triangle. Similarly, the coefficients in the power series expansion of $\frac1{(1-x)^3}$ are the cumulative sums of $1,2,3\dots$, or $1,3,6,\dots$, the numbers in the third ($k=2$) column of Pascal’s triangle.

In general, the coefficients in the power series expansion of $\frac1{(1-x)^{k+1}}$ must be the binomial coefficients in the $k$ column of Pascal’s triangle, those of the form $\binom{n}k$. All that remains is to get the row indexing right: we want the $1$ that is the first non-zero entry in column $k$ to be the constant term. It’s in row $k$, so the coefficient of $x^n$ must in general be the binomial coefficient in row $n+k$, and we get $$\frac1{(1-x)^{k+1}}=\sum_{n\ge 0}\binom{n+k}kx^n=\sum_{n\ge 0}\binom{n+k}nx^n\;.$$

- You probably meant to sum on the upper index in $(1)$ since the sum you give is $2^n$. – robjohn♦ Feb 28 '12 at 1:29
Yeah your sum $(1)$ is $2^n$, what you probably meant to write was $$\sum_{j=k}^{n} \begin{pmatrix} j \\ k \end{pmatrix} = \begin{pmatrix} n+1 \\ k+1 \end{pmatrix}$$ or something like that. – Patrick Da Silva Feb 28 '12 at 1:37
A nice way to look at $(1+x+x^2+x^3\dots)^{k+1}$. (+1) – robjohn♦ Feb 28 '12 at 1:44
1 @Patrick: that's what I was trying to say :-) – robjohn♦ Feb 28 '12 at 1:46
I wanted to give more credibility to your comment =P The "Yeah" at the beginning suggested I was doing a follow-up. Giving the explicit description of the sum probably will make Brian change his answer more willingly, but nevertheless it's a very nice answer =) +1 from me too – Patrick Da Silva Feb 28 '12 at 2:37
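The cumulative-sum description in the last answer is easy to check numerically (an added sketch, not from the thread itself): taking running sums of $1,1,1,\dots$ exactly $k$ times reproduces the coefficients $\binom{n+k}{n}$.

```
-- Added sketch: coefficients of 1/(1-x)^(k+1) via k repeated cumulative
-- sums of the all-ones sequence, compared with binomial(n+k, n).
coeffs :: Int -> [Integer]
coeffs k = iterate (scanl1 (+)) (repeat 1) !! k

binom :: Integer -> Integer -> Integer
binom n r = product [n - r + 1 .. n] `div` product [1 .. r]

-- take 6 (coeffs 2)                 == [1,3,6,10,15,21]
-- [binom (n + 2) n | n <- [0 .. 5]] == [1,3,6,10,15,21]
```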
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 48, "mathjax_display_tex": 16, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.926452100276947, "perplexity_flag": "head"}
http://mathoverflow.net/questions/tagged/string-diagrams
## Tagged Questions

### Diagram calculus for weak 2-functors between bicategories
I have to do some messy calculations with weak 2-functors between bicategories, and I know the most efficient way to do it would be via some sort of string diagram methods. Also, i …

### String diagrams for (weak) monoidal categories
Hi, In a strict monoidal category, where the associator, left and right unitor are identity morphisms we have the following relations between (string) diagrams: where $i_x$ and …
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8825125694274902, "perplexity_flag": "middle"}
http://www.reference.com/browse/energy
# energy [en-er-jee] /ˈɛnərdʒi/

energy, in physics, the ability or capacity to do work or to produce change. Forms of energy include heat, light, sound, electricity, and chemical energy. Energy and work are measured in the same units—foot-pounds, joules, ergs, or some other unit, depending on the system of measurement being used. When a force acts on a body, the work performed (and the energy expended) is the product of the force and the distance over which it is exerted.

## Potential and Kinetic Energy

Potential energy is the capacity for doing work that a body possesses because of its position or condition. For example, a stone resting on the edge of a cliff has potential energy due to its position in the earth's gravitational field. If it falls, the force of gravity (which is equal to the stone's weight; see gravitation) will act on it until it strikes the ground; the stone's potential energy is equal to its weight times the distance it can fall. A charge in an electric field also has potential energy because of its position; a stretched spring has potential energy because of its condition. Chemical energy is a special kind of potential energy; it is the form of energy involved in chemical reactions. The chemical energy of a substance is due to the condition of the atoms of which it is made; it resides in the chemical bonds that join the atoms in compound substances (see chemical bond).

Kinetic energy is energy a body possesses because it is in motion. The kinetic energy of a body with mass m moving at a velocity v is one half the product of the mass of the body and the square of its velocity, i.e., KE = (1/2)mv². Even when a body appears to be at rest, its atoms and molecules are in constant motion and thus have kinetic energy. The average kinetic energy of the atoms or molecules is measured by the temperature of the body.

The difference between kinetic energy and potential energy, and the conversion of one to the other, is demonstrated by the falling of a rock from a cliff, when its energy of position is changed to energy of motion. Another example is provided in the movements of a simple pendulum (see harmonic motion). As the suspended body moves upward in its swing, its kinetic energy is continuously being changed into potential energy; the higher it goes the greater becomes the energy that it owes to its position. At the top of the swing the change from kinetic to potential energy is complete, and in the course of the downward motion that follows the potential energy is in turn converted to kinetic energy.

## Conversion and Conservation of Energy

It is common for energy to be converted from one form to another; however, the law of conservation of energy, a fundamental law of physics, states that although energy can be changed in form it can be neither created nor destroyed (see conservation laws). The theory of relativity shows, however, that mass and energy are equivalent and thus that one can be converted into the other. As a result, the law of conservation of energy includes both mass and energy. Many transformations of energy are of practical importance. Combustion of fuels results in the conversion of chemical energy into heat and light. In the electric storage battery chemical energy is converted to electrical energy and conversely. In the photosynthesis of starch, green plants convert light energy from the sun into chemical energy.
Hydroelectric facilities convert the kinetic energy of falling water into electrical energy, which can be conveniently carried by wires to its place of use (see power, electric). The force of a nuclear explosion results from the partial conversion of matter to energy (see nuclear energy). energy, sources of, origins of the power used for transportation, for heat and light in dwelling and working areas, and for the manufacture of goods of all kinds, among other applications. The development of science and civilization is closely linked to the availability of energy in useful forms. Modern society consumes vast amounts of energy in all forms: light, heat, electrical, mechanical, chemical, and nuclear. The rate at which energy is produced or consumed is called power, although this term is sometimes used in common speech synonymously with energy. ## Types of Energy Chemical and Mechanical Energy An early source of energy, or prime mover, used by humans was animal power, i.e., the energy obtained from domesticated animals. Later, as civilization developed, wind power was harnessed to drive ships and turn windmills, and streams and rivers were diverted to turn water wheels (see water power). The rotating shaft of a windmill or water wheel could then be used to crush grain, to raise water from a well, or to serve any number of other uses. The motion of the wind and water, as well as the motion of the wheel or shaft, represents a form of mechanical energy. The source of animal power is ultimately the chemical energy contained in foods and released when digested by humans and animals. The chemical energy contained in wood and other combustible fuels has served since the beginning of history as a source of heat for cooking and warmth. At the start of the Industrial Revolution, water power was used to provide energy for factories through systems of belts and pulleys that transmitted the energy to many different machines. Heat Energy The invention of the steam engine, which converts the chemical energy of fuels into heat energy and the heat into mechanical energy, provided another source of energy. The steam engine is called an external-combustion engine, since fuel is burned outside the engine to create the steam used inside it. During the 19th cent. the internal-combustion engine was developed; a variety of fuels, depending on the type of internal-combustion engine, are burned directly in the engine's chambers to provide a source of mechanical energy. Both steam engines and internal-combustion engines found application as stationary sources of power for different purposes and as mobile sources for transportation, as in the steamship, the railroad locomotive (both steam and diesel), and the automobile. All these sources of energy ultimately depend on the combustion of fuels for their operation. Electrical Energy Early in the 19th cent. another source of energy was developed that did not necessarily need the combustion of fuels—the electric generator, or dynamo. The generator converts the mechanical energy of a conductor moving in a magnetic field into electrical energy, using the principle of electromagnetic induction. The great advantage of electrical energy, or electric power, as it is commonly called, is that it can be transmitted easily over great distances (see power, electric). As a result, it is the most widely used form of energy in modern civilization; it is readily converted to light, to heat, or, through the electric motor, to mechanical energy again. 
The large-scale production of electrical energy was made possible by the invention of the turbine, which efficiently converts the straight-line motion of falling water or expanding steam into the rotary motion needed to turn the rotor of a large generator. Nuclear Energy The development of nuclear energy made available another source of energy. The heat of a nuclear reactor can be used to produce steam, which then can be directed through a turbine to drive an electric generator, the propellers of a large ship, or some other machine. In 1999, 23% of the electricity generated in the United States derived from nuclear reactors; however, since the 1980s, the construction and application of nuclear reactors in the United States has slowed because of concern about the dangers of the resulting radioactive waste and the possibility of a disastrous nuclear meltdown (see Three Mile Island; Chernobyl). ## Environmental Considerations The demand for energy has increased steadily, not only because of the growing population but also because of the greater number of technological goods available and the increased affluence that has brought these goods within the reach of a larger proportion of the population. For example, despite the introduction of more fuel-efficient motor vehicles (average miles per gallon increased by 34% between 1975 and 1990), the consumption of fuel by vehicles in America increased by 20% between 1975 and 1990. The rise in gasoline consumption is attributable to an increase in the number of miles the average vehicle traveled and to a 40% increase in the same period in the number of vehicles on the road. Since 1990 average fuel efficiency has changed relatively little, while the number of vehicles, the number of miles they travel, and the total amount of fuel consumed has continued to increase. As a result of the increase in the consumption of energy, concern has risen about the depletion of natural resources, both those used directly to produce energy and those damaged during the exploitation of the fuels or as a result of contamination by energy waste products (see under conservation of natural resources). Most of the energy consumed is ultimately generated by the combustion of fossil fuels, such as coal, petroleum, and natural gas, and the world has only a finite supply of these fuels, which are in danger of being used up. Also, the combustion of these fuels releases various pollutants (see pollution), such as carbon monoxide and sulfur dioxide, which pose health risks and may contribute to acid rain and global warming. In addition, environmentalists have become increasingly alarmed at the widespread destruction imposed on sensitive wildlands (e.g., the tropical rain forests, the arctic tundra, and coastal marshes) during the exploitation of their resources. ## The Search for New Sources of Energy The environmental consequences of energy production have led many nations in the world to impose stricter guidelines on the production and consumption of energy. Further, the search for new sources of energy and more efficient means of employing energy has accelerated. The development of a viable nuclear fusion reactor is often cited as a possible solution to our energy problems. Presently, nuclear-energy plants use nuclear fission, which requires scarce and expensive fuels and produces potentially dangerous wastes. 
The fuel problem has been partly helped by the development of breeder reactors, which produce more nuclear fuel than they consume, but the long-term hopes for nuclear energy rest on the development of controlled sources using nuclear fusion rather than fission. The basic fuels for fusion are extremely plentiful (e.g., hydrogen, from water) and the end products are relatively safe. The basic problem, which is expected to take decades to solve, is in containing the fuels at the extremely high temperatures necessary to initiate and sustain nuclear fusion. Another source of energy is solar energy. The earth receives huge amounts of energy every day from the sun, but the problem has been harnessing this energy so that it is available at the appropriate time and in the appropriate form. For example, solar energy is received only during the daylight hours, but more heat and electricity for lighting are needed at night. Despite technological advances in photovoltaic cells, solar energy has not yet become a financially competitive source of energy. Although several solar thermal power plants are now in operation in California, they are not yet able to compete with conventional power plants on an economic basis. Some scientists have suggested using the earth's internal heat as a source of energy. Geothermal energy is released naturally in geysers and volcanoes. In California, some of the state's electricity is generated by the geothermal plant complex known as the Geysers, which has been in production since 1960, and in Iceland, which is geologically very active, roughly 90% of the homes are heated by geothermal energy. Still another possible energy source is tidal energy. A few systems have been set up to harness the energy released in the twice-daily ebb and flow of the ocean's tides, but they have not been widely used, because they cannot operate turbines continuously and because they must be built specifically for each site. Another direction of research and experimentation is the search for alternatives to gasoline. Possibilities include methanol, which can be produced from wood, coal, or natural gas; ethanol, an alcohol produced from grain, sugarcane, and other agricultural plants and currently used in some types of U.S. motor fuel (e.g., gasohol and E85, a mixture of 85% ethanol and 15% gasoline); compressed natural gas, which is much less polluting than gasoline and is currently used by some 1.5 million vehicles around the world; and electricity, which, if ever practicable, would be cheaper and less polluting than gasoline, especially if derived from solar energy.
## Bibliography
See G. R. Harrison, The Conquest of Energy (1968); F. Barnaby, Man and the Atom: The Uses of Nuclear Energy (1971); W. G. Steltz and A. M. Donaldson, Aero-Thermodynamics of Steam Turbines (1981); T. N. Veziroglu, ed., Alternative Sources of Energy (1983 and 1985) and Renewable Energy Sources (Vol. 4, 1984); G. L. Johnson, Wind Energy Systems (1985).
Energy, United States Department of, executive department of the federal government responsible for coordinating national activities relating to the production, regulation, marketing, and conservation of energy. The department is also responsible for the federal nuclear weapons program and the high-risk research and development of energy technology.
In the wake of the energy crisis of the mid-1970s, when the price of oil rapidly increased, concerns that the United States had no energy policy led President Carter to create (1977) the cabinet-level department. Former Secretary of Defense James Schlesinger was named the first secretary. The department consolidated the functions previously handled by the Federal Energy Administration, the Energy Research and Development Administration, and the Federal Power Commission, as well as certain energy-related tasks previously managed by other federal agencies. The Dept. of Energy emphasized energy conservation by encouraging voluntary energy curbs and through coordinated federal policy. Although Ronald Reagan criticized the department during his 1980 election campaign as an example of government wastefulness and unwarranted governmental control of private enterprise, he did not abolish the department once in office. The department's chief subdivisions direct programs in energy, environmental quality, national security, and science. The Federal Energy Regulatory Commission is an independent organization within the department. Vibrational energy retained by molecules even at a temperature of absolute zero. Since temperature is a measure of the intensity of molecular motion, molecules would be expected to come to rest at absolute zero. However, if molecular motion were to cease altogether, the atoms would each have a precisely known location and velocity (zero), and the uncertainty principle states that this cannot occur, since precise values of both position and velocity of an object cannot be known simultaneously. Thus, even molecules at absolute zero must have some zero-point energy. Encyclopedia Britannica, 2008. Encyclopedia Britannica Online. or atomic energy Energy released from atomic nuclei in significant amounts. In 1919 Ernest Rutherford discovered that alpha rays could split the nucleus of an atom. This led ultimately to the discovery of the neutron and the release of huge amounts of energy by the process of nuclear fission. Nuclear energy is also released as a result of nuclear fusion. The release of nuclear energy can be controlled or uncontrolled. Nuclear reactors carefully control the release of energy, whereas the energy release of a nuclear weapon or resulting from a core meltdown in a nuclear reactor is uncontrolled. Seealso chain reaction, nuclear power, radioactivity. Encyclopedia Britannica, 2008. Encyclopedia Britannica Online. Internal energy of a system in thermodynamic equilibrium (see thermodynamics) by virtue of its temperature. A hot body has more thermal energy than a similar cold body, but a large tub of cold water may have more thermal energy than a cup of boiling water. Thermal energy can be transferred from one body, usually hotter, to a second body, usually colder, in three ways: conduction (see thermal conduction), convection, and radiation. Encyclopedia Britannica, 2008. Encyclopedia Britannica Online. Radiation from the Sun that can produce heat, generate electricity, or cause chemical reactions. Solar collectors collect solar radiation and transfer it as heat to a carrier fluid. It can then be used for heating. Solar cells convert solar radiation directly into electricity by means of the photovoltaic effect. 
Solar energy is inexhaustible and nonpolluting, but converting solar radiation to electricity is not yet commercially competitive, because of the high cost of producing large-scale solar cell arrays and the inherent inefficiency in converting light to electricity. Encyclopedia Britannica, 2008. Encyclopedia Britannica Online. Energy stored by an object by virtue of its position. For example, an object raised above the ground acquires potential energy equal to the work done against the force of gravity; the energy is released as kinetic energy when it falls back to the ground. Similarly, a stretched spring has stored potential energy that is released when the spring is returned to its unstretched state. Other forms of potential energy include electrical potential energy, chemical energy, and nuclear energy. Encyclopedia Britannica, 2008. Encyclopedia Britannica Online. Sum of a system's kinetic energy (KE) and potential energy (PE). Mechanical energy is constant in a system that experiences no dissipative forces such as friction or air resistance. For example, a swinging pendulum that experiences only gravitation has greatest KE and least PE at the lowest point on the path of its swing, where its speed is greatest and its height least. It has least KE and greatest PE at the extremities of its swing, where its speed is zero and its height is greatest. As it moves, energy is continuously passing back and forth between the two forms. Neglecting friction and air resistance, the pendulum's mechanical energy is constant. Encyclopedia Britannica, 2008. Encyclopedia Britannica Online. Relationship between mass (m) and energy (E) in Albert Einstein's special theory of relativity, expressed as E = mc², where c equals 186,000 mi/second (300,000 km/second), the speed of light. Whereas mass and energy were viewed as distinct in earlier physical theories, in special relativity a body's mass can be converted into energy in accordance with Einstein's formula. Such a release of energy decreases the body's mass (see conservation law). Encyclopedia Britannica, 2008. Encyclopedia Britannica Online. Form of energy that an object has by reason of its motion. The kind of motion may be translation (motion along a path from one place to another), rotation about an axis, vibration, or any combination of motions. The total kinetic energy of a body or system is equal to the sum of the kinetic energies resulting from each type of motion. The kinetic energy of an object depends on its mass and velocity. For instance, the amount of kinetic energy KE of an object in translational motion is equal to one-half the product of its mass m and the square of its velocity v, or KE = ½mv², provided the speed is low relative to the speed of light. At higher speeds, relativity changes the relationship. Encyclopedia Britannica, 2008. Encyclopedia Britannica Online. or ionization energy Amount of energy required to remove an electron from an isolated atom or molecule. There is an ionization potential for each successive electron removed, though that associated with removing the first (most loosely held) electron is most commonly used. The ionization potential of an element is a measure of its ability to enter into chemical reactions requiring ion formation or donation of electrons and is related to the nature of the chemical bonding in the compounds formed by elements. See also binding energy, ionization. Encyclopedia Britannica, 2008. Encyclopedia Britannica Online.
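As a rough numerical illustration of the kinetic-energy and mass-energy relations quoted in the entries above, here is a minimal sketch; the car mass, the speed, and the one-gram sample are hypothetical values chosen only for the arithmetic, not figures from the source.

```python
# Worked example (hypothetical values) for KE = (1/2) m v^2 and E = m c^2.
c = 3.0e8   # speed of light in m/s (approximate)

# Kinetic energy of a 1500 kg car moving at 30 m/s (about 108 km/h).
m_car, v_car = 1500.0, 30.0
ke = 0.5 * m_car * v_car**2
print(f"Kinetic energy of the car: {ke:.0f} J")               # 675000 J

# Rest energy of one gram of matter.
m_sample = 1.0e-3   # kg
print(f"Rest energy of 1 g:        {m_sample * c**2:.1e} J")  # ~9e13 J
```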
Ratio of the quantity of heat required to raise the temperature of a body one degree to that required to raise the temperature of an equal mass of water one degree. The term is also used to mean the amount of heat, in calories, required to raise the temperature of one gram of a substance by one Celsius degree. Encyclopedia Britannica, 2008. Encyclopedia Britannica Online. Amount of heat that must be added or removed during a chemical reaction to keep all substances involved at the same temperature. If it is positive (heat must be added), the reaction is endothermic; if it is negative (heat is given off), the reaction is exothermic. Accurate heat of reaction values are needed for proper design of equipment used in chemical processes; they are usually estimated from compiled tables of thermodynamics data (heats of formation and heats of combustion of many known materials). The activation energy is unrelated to the heat of reaction. Encyclopedia Britannica, 2008. Encyclopedia Britannica Online. Characteristic amount of energy absorbed or released by a substance during a change in physical state that occurs without a change in temperature. Heat of fusion is the latent heat associated with melting a solid or freezing a liquid. Heat of vaporization is the latent heat associated with vapourizing a liquid or condensing (see condensation) a vapour. For example, when water reaches its boiling point and is kept boiling, it remains at that temperature until it has all evaporated; all the heat added to the water is absorbed as latent heat of vaporization and is carried away by the escaping vapour molecules. Encyclopedia Britannica, 2008. Encyclopedia Britannica Online. Device for transferring heat from a substance or space at one temperature to another at a higher temperature. It consists of a compressor, a condenser, a throttle or expansion valve, an evaporator, and a working fluid (refrigerant). The compressor delivers vapourized refrigerant to the condenser in the space to be heated. There, cooler air condenses the refrigerant and becomes heated during the process. The liquid refrigerant then enters the throttle valve and expands, coming out as a liquid-vapour mixture at a lower temperature and pressure. It then enters the evaporator, where the liquid is evaporated by contact with the warmer space. The vapour then passes to the compressor and the cycle is repeated. A heat pump is a reversible system and is commonly used both to heat and to cool buildings. It operates on the same thermodynamic principles as refrigeration. Encyclopedia Britannica, 2008. Encyclopedia Britannica Online. Any of several devices that transfer heat from a hot to a cold fluid. In many engineering applications, one fluid needs to be heated and another cooled, a requirement economically accomplished by a heat exchanger. In double-pipe exchangers, one fluid flows inside the inner pipe, and the other in the annular space between the two pipes. In shell-and-tube exchangers, many tubes are mounted inside a shell; one fluid flows in the tubes and the other flows in the shell, outside the tubes. Special-purpose devices such as boilers, evaporators, superheaters, condensers, and coolers are all heat exchangers. Heat exchangers are used extensively in fossil-fuel and nuclear power plants, gas turbines, heating and air conditioning, refrigeration, and the chemical industry. Seealso cooling system. Encyclopedia Britannica, 2008. Encyclopedia Britannica Online. Ratio of heat absorbed by a material to the change in temperature. 
It is usually expressed as calories per degree in terms of the amount of the material being considered. Heat capacity and its temperature variation depend on differences in energy levels for atoms. Heat capacities are measured with a calorimeter and are important as a means of determining the entropies of materials. See also specific heat. Encyclopedia Britannica, 2008. Encyclopedia Britannica Online. Energy transferred from one body to another as the result of a difference in temperature. Heat flows from a hotter body to a colder body when the two bodies are brought together. This transfer of energy usually results in an increase in the temperature of the colder body and a decrease in that of the hotter body. A substance may absorb heat without an increase in temperature as it changes from one phase to another—that is, when it melts or boils. The distinction between heat (a form of energy) and temperature (a measure of the intensity of heat) was clarified in the 19th century by such scientists as J.-B. Fourier, Gustav Kirchhoff, and Ludwig Boltzmann. Encyclopedia Britannica, 2008. Encyclopedia Britannica Online. Power obtained by using heat from the Earth's interior. Most geothermal resources are in regions of active volcanism. Hot springs, geysers, pools of boiling mud, and fumaroles are the most easily exploited sources. The ancient Romans used hot springs to heat baths and homes, and similar uses are still found in Iceland, Turkey, and Japan. Geothermal energy's greatest potential lies in the generation of electricity. It was first used to produce electric power in Italy in 1904. Today geothermal power plants are in operation in New Zealand, Japan, Iceland, Mexico, the U.S., and elsewhere. Encyclopedia Britannica, 2008. Encyclopedia Britannica Online. Measure of the total combined energies within a system, derived from heats of transformation, disorder, and other forms of internal energy (e.g., electrostatic charges). A system will change spontaneously to achieve a lower total free energy. Thus, free energy is the driving force toward equilibrium conditions. The change in free energy between an initial and a final state is useful in evaluating certain thermodynamic processes and can be used to judge whether transformations will occur spontaneously. There are two forms of free energy, with different definitions and applications: the Helmholtz (see Hermann von Helmholtz) free energy, sometimes called the work function, and the Gibbs (see J. Willard Gibbs) free energy. Encyclopedia Britannica, 2008. Encyclopedia Britannica Online. Law of statistical mechanics stating that, in a system in thermal equilibrium, on average, an equal amount of energy is associated with each independent energy state. It states specifically that a system of particles in equilibrium at absolute temperature T will have an average energy of (1/2)kT, where k is the Boltzmann constant, associated with each degree of freedom. For example, an atom of a gas has three degrees of freedom (its three position coordinates); therefore, it will have an average total energy of (3/2)kT. Encyclopedia Britannica, 2008. Encyclopedia Britannica Online. Principle of physics according to which the energy of interacting bodies or particles in a closed system remains constant, though it may take different forms (e.g., kinetic energy, potential energy, thermal energy, energy in an electric current, or energy stored in an electric field, in a magnetic field, or in chemical bonds [see bonding]).
With the advent of relativity physics in 1905, mass was recognized as equivalent to energy. When accounting for a system of high-speed particles whose mass increases as a consequence of their speed, the laws of conservation of energy and conservation of mass become one conservation law. Seealso Hermann von Helmholtz. Encyclopedia Britannica, 2008. Encyclopedia Britannica Online. In chemistry and physics, a theoretical model describing the states of electrons in solid materials, which can have energy values only within certain specific ranges, called bands. Ranges of energy between two allowed bands are called forbidden bands. As electrons in an atom move from one energy level to another, so can electrons in a solid move from an energy level in one band to another in the same band or in another band. The band theory accounts for many of the electrical and thermal properties of solids and forms the basis of the technology of devices such as semiconductors, heating elements, and capacitors (see capacitance). Encyclopedia Britannica, 2008. Encyclopedia Britannica Online. Energy required to separate a particle from a system of particles or to disperse all the particles of a system. Nuclear binding energy is the energy required to separate an atomic nucleus into its constituent protons and neutrons. It is also the energy that would be released by combining individual protons and neutrons into a single nucleus. Electron binding energy, or ionization potential, is the energy required to remove an electron from an atom, molecule, or ion, and also the energy released when an electron joins an atom, molecule, or ion. The binding energy of a single proton or neutron in a nucleus is about a million times greater than that of a single electron in an atom. Encyclopedia Britannica, 2008. Encyclopedia Britannica Online. Minimum amount of energy (heat, electromagnetic radiation, or electrical energy) required to activate atoms or molecules to a condition in which it is equally likely that they will undergo chemical reaction or transport as it is that they will return to their original state. Chemists posit a transition state between the initial conditions and the product conditions and theorize that the activation energy is the amount of energy required to boost the initial materials “uphill” to the transition state; the reaction then proceeds “downhill” to form the product materials. Catalysts (including enzymes) lower the activation energy by altering the transition state. Activation energies are determined by experiments that measure them as the constant of proportionality in the equation describing the dependence of reaction rate on temperature, proposed by Svante Arrhenius. Seealso entropy, heat of reaction. Encyclopedia Britannica, 2008. Encyclopedia Britannica Online. International organization officially founded in 1957 to promote the peaceful use of nuclear energy. Based in Vienna, its activities include research on the applicability of nuclear energy to medicine, agriculture, water resources, and industry; provision of technical assistance; development of radiation safeguards; and public relations programs. Following the Persian Gulf War, IAEA inspectors were called on to certify that Iraq was not manufacturing nuclear weapons. The IAEA and its director general, Mohamed ElBaradei, were awarded the Nobel Prize for Peace in 2005. Encyclopedia Britannica, 2008. Encyclopedia Britannica Online. 
International organization established in 1958 to form a common market for developing peaceful uses of atomic energy. It originally had six members; it now includes all members of the European Union. Among its aims were to facilitate the establishment of a nuclear energy industry on a European rather than a national scale, coordinate research, encourage construction of power plants, establish safety regulations, and establish a common market for trade in nuclear equipment and materials. In 1967 its governing bodies were merged into the European Community. Encyclopedia Britannica, 2008. Encyclopedia Britannica Online. In physics and other sciences, energy (from the Greek ἐνέργεια, energeia, "activity, operation", from ἐνεργός, energos, "active, working") is a scalar physical quantity, an attribute of objects and systems that is conserved in nature. In physics textbooks energy is often defined as the ability to do work. Several different forms of energy, including, but not limited to, kinetic, potential, thermal, gravitational, sound, light, elastic, electromagnetic, chemical, nuclear, and mass energy, have been defined to explain all known natural phenomena. While one form of energy may be transformed to another, the total energy remains the same. This principle, the conservation of energy, was first postulated in the early 19th century, and applies to any isolated system. According to Noether's theorem, the conservation of energy is a consequence of the fact that the laws of physics do not change over time. Although the total energy of a system does not change with time, its value may depend on the frame of reference. For example, a seated passenger in a moving airplane has zero kinetic energy relative to the airplane, but non-zero kinetic energy relative to the earth.
## History
The word "energy" derives from Greek ἐνέργεια (energeia), which appears for the first time in the work Nicomachean Ethics of Aristotle in the 4th century BC. In 1021 AD, the Arab physicist Alhazen, in the Book of Optics, held light rays to be streams of minute energy particles, stating that "the smallest parts of light" retain "only properties that can be treated by geometry and verified by experiment" and that "they lack all sensible qualities except energy." In 1121, Al-Khazini, in The Book of the Balance of Wisdom, proposed that the gravitational potential energy of a body varies depending on its distance from the centre of the Earth. The concept of energy emerged out of the idea of vis viva, which Leibniz defined as the product of the mass of an object and its velocity squared; he believed that total vis viva was conserved. To account for slowing due to friction, Leibniz claimed that heat consisted of the random motion of the constituent parts of matter — a view shared by Isaac Newton, although it would be more than a century until this was generally accepted. In 1807, Thomas Young was the first to use the term "energy" instead of vis viva, in its modern sense. Gustave-Gaspard Coriolis described "kinetic energy" in 1829 in its modern sense, and in 1853, William Rankine coined the term "potential energy." It was argued for some years whether energy was a substance (the caloric) or merely a physical quantity, such as momentum. These and related results were later amalgamated into the laws of thermodynamics, which aided in the rapid development of explanations of chemical processes using the concept of energy by Rudolf Clausius, Josiah Willard Gibbs, and Walther Nernst.
It also led to a mathematical formulation of the concept of entropy by Clausius and to the introduction of laws of radiant energy by Jožef Stefan. During a 1961 lecture for undergraduate students at the California Institute of Technology, Richard Feynman, a celebrated physics teacher and Nobel Laureate, said this about the concept of energy: There is a fact, or if you wish, a law, governing natural phenomena that are known to date. There is no known exception to this law; it is exact, so far as we know. The law is called conservation of energy; it states that there is a certain quantity, which we call energy, that does not change in the manifold changes which nature undergoes. That is a most abstract idea, because it is a mathematical principle; it says that there is a numerical quantity, which does not change when something happens. It is not a description of a mechanism, or anything concrete; it is just a strange fact that we can calculate some number, and when we finish watching nature go through her tricks and calculate the number again, it is the same. (The Feynman Lectures on Physics)
Since 1918 it has been known that the law of conservation of energy is the direct mathematical consequence of the translational symmetry of the quantity conjugate to energy, namely time. That is, energy is conserved because the laws of physics do not distinguish between different moments of time (see Noether's theorem).
## Energy in various contexts since the beginning of the universe
The concept of energy and its transformations is useful in explaining and predicting most natural phenomena. The direction of transformations in energy (what kind of energy is transformed to what other kind) is often described by entropy (equal energy spread among all available degrees of freedom) considerations, since in practice all energy transformations are permitted on a small scale, but certain larger transformations are not permitted because it is statistically unlikely that energy or matter will randomly move into more concentrated forms or smaller spaces. The concept of energy is widespread in all sciences.
• In biology, energy is an attribute of the biological structures that is responsible for growth and development of a biological cell or an organelle of a biological organism. Energy is thus often said to be stored by cells in the structures of molecules of substances such as carbohydrates (including sugars) and lipids, which release energy when reacted with oxygen.
• In chemistry, energy is an attribute of a substance as a consequence of its atomic, molecular or aggregate structure. Since a chemical transformation is accompanied by a change in one or more of these kinds of structure, it is invariably accompanied by an increase or decrease of energy of the substances involved.
• In geology and meteorology, continental drift, mountain ranges, volcanoes, and earthquakes are phenomena that can be explained in terms of energy transformations in the Earth's interior, while meteorological phenomena like wind, rain, hail, snow, lightning, tornadoes, and hurricanes are all a result of energy transformations brought about by solar energy on the planet Earth.
• In cosmology and astronomy the phenomena of stars, novae, supernovae, quasars, and gamma-ray bursts are the universe's highest-output energy transformations of matter. All stellar phenomena (including solar activity) are driven by various kinds of energy transformations.
Energy in such transformations is either from gravitational collapse of matter (usually molecular hydrogen) into various classes of astronomical objects (stars, black holes, etc.), or from nuclear fusion (of lighter elements, primarily hydrogen). Energy transformations in the universe over time are characterized by various kinds of potential energy which has been available since the Big Bang, later being "released" (transformed to more active types of energy such as kinetic or radiant energy), when a triggering mechanism is available. Familiar examples of such processes include nuclear decay, in which energy is released which was originally "stored" in heavy isotopes (such as uranium and thorium), by nucleosynthesis, a process which ultimately uses the gravitational potential energy released from the gravitational collapse of supernovae, to store energy in the creation of these heavy elements before they were incorporated into the solar system and the Earth. This energy is triggered and released in nuclear fission bombs. In a slower process, heat from nuclear decay of these atoms in the core of the Earth releases heat, which in turn may lift mountains, via orogenesis. This slow lifting represents a kind of gravitational potential energy storage of the heat energy, which may be released to active kinetic energy in landslides, after a triggering event. Earthquakes also release stored elastic potential energy in rocks, a store which has been produced ultimately from the same radioactive heat sources. Thus, according to present understanding, familiar events such as landslides and earthquakes release energy which has been stored as potential energy in the Earth's gravitational field or elastic strain (mechanical potential energy) in rocks; but prior to this, represents energy that has been stored in heavy atoms since the collapse of long-destroyed stars created these atoms. In another similar chain of transformations beginning at the dawn of the universe, nuclear fusion of hydrogen in the Sun releases another store of potential energy which was created at the time of the Big Bang. At that time, according to theory, space expanded and the universe cooled too rapidly for hydrogen to completely fuse into heavier elements. This meant that hydrogen represents a store of potential energy which can be released by fusion. Such a fusion process is triggered by heat and pressure generated from gravitational collapse of hydrogen clouds when they produce stars, and some of the fusion energy is then transformed into sunlight. Such sunlight from our Sun may again be stored as gravitational potential energy after it strikes the Earth, as (for example) water evaporates from oceans and is deposited upon mountains (where, after being released at a hydroelectric dam, it can be used to drive turbine/generators to produce electricity). Sunlight also drives many weather phenomena, save those generated by volcanic events. An example of a solar-mediated weather event is a hurricane, which occurs when large unstable areas of warm ocean, heated over months, give up some of their thermal energy suddenly to power a few days of violent air movement. Sunlight is also captured by plants as chemical potential energy, when carbon dioxide and water are converted into a combustible combination of carbohydrates, lipids, and oxygen. 
Release of this energy as heat and light may be triggered suddenly by a spark, in a forest fire; or it may be available more slowly for animal or human metabolism, when these molecules are ingested, and catabolism is triggered by enzyme action. Through all of these transformation chains, potential energy stored at the time of the Big Bang is later released by intermediate events, sometimes being stored in a number of ways over time between releases, as more active energy. In all these events, one kind of energy is converted to other types of energy, including heat.
## Regarding applications of the concept of energy
Energy is subject to a strict global conservation law; that is, whenever one measures (or calculates) the total energy of a system of particles whose interactions do not depend explicitly on time, it is found that the total energy of the system always remains constant.
• The total energy of a system can be subdivided and classified in various ways. For example, it is sometimes convenient to distinguish potential energy (which is a function of coordinates only) from kinetic energy (which is a function of coordinate time derivatives only). It may also be convenient to distinguish gravitational energy, electric energy, thermal energy, and other forms. These classifications overlap; for instance thermal energy usually consists partly of kinetic and partly of potential energy.
• The transfer of energy can take various forms; familiar examples include work, heat flow, and advection, as discussed below.
• The word "energy" is also used outside of physics in many ways, which can lead to ambiguity and inconsistency. The vernacular terminology is not consistent with technical terminology. For example, the important public-service announcement, "Please conserve energy" uses vernacular notions of "conservation" and "energy" which make sense in their own context but are utterly incompatible with the technical notions of "conservation" and "energy" (such as are used in the law of conservation of energy).
In classical physics energy is considered a scalar quantity, the canonical conjugate to time. In special relativity energy is also a scalar (although not a Lorentz scalar but a time component of the energy-momentum 4-vector). In other words, energy is invariant with respect to rotations of space, but not invariant with respect to rotations of space-time (= boosts).
### Energy transfer
Because energy is strictly conserved and is also locally conserved (wherever it can be defined), it is important to remember that by definition of energy the transfer of energy between the "system" and adjacent regions is work. A familiar example is mechanical work. In simple cases this is written as: $\Delta E = W$ (1) if there are no other energy-transfer processes involved. Here $\Delta E$ is the amount of energy transferred, and $W$ represents the work done on the system. More generally, the energy transfer can be split into two categories: $\Delta E = W + Q$ (2) where $Q$ represents the heat flow into the system. There are other ways in which an open system can gain or lose energy. In chemical systems, energy can be added to a system by means of adding substances with different chemical potentials, which potentials are then extracted (both of these processes are illustrated by fueling an auto, a system which gains in energy thereby, without addition of either work or heat). Winding a clock would be adding energy to a mechanical system.
These terms may be added to the above equation, or they can generally be subsumed into a quantity called "energy addition term $E$" which refers to any type of energy carried over the surface of a control volume or system volume. Examples may be seen above, and many others can be imagined (for example, the kinetic energy of a stream of particles entering a system, or energy from a laser beam adds to system energy, without being either work done or heat added, in the classic senses). $\Delta E = W + Q + E$ (3) where E in this general equation represents other additional advected energy terms not covered by work done on a system, or heat added to it. Energy is also transferred from potential energy ($E_p$) to kinetic energy ($E_k$) and then back to potential energy constantly. This is referred to as conservation of energy. In this closed system, energy cannot be created or destroyed, so the initial energy and the final energy will be equal to each other. This can be demonstrated by the following: $E_{pi} + E_{ki} = E_{pF} + E_{kF}$. The equation can then be simplified further since $E_p = mgh$ (mass times acceleration due to gravity times the height) and $E_k = \frac{1}{2} mv^2$ (half times mass times velocity squared). Then the total amount of energy can be found by adding $E_p + E_k = E_{total}$ (see the numerical sketch below).
### Energy and the laws of motion
In classical mechanics, energy is a conceptually and mathematically useful property since it is a conserved quantity.
### The Hamiltonian
The total energy of a system is sometimes called the Hamiltonian, after William Rowan Hamilton. The classical equations of motion can be written in terms of the Hamiltonian, even for highly complex or abstract systems. These classical equations have remarkably direct analogs in nonrelativistic quantum mechanics.
### The Lagrangian
Another energy-related concept is called the Lagrangian, after Joseph Louis Lagrange. This is even more fundamental than the Hamiltonian, and can be used to derive the equations of motion. In non-relativistic physics, the Lagrangian is the kinetic energy minus the potential energy. Usually, the Lagrange formalism is mathematically more convenient than the Hamiltonian for non-conservative systems (like systems with friction).
### Energy and thermodynamics
#### Internal energy
Internal energy is the sum of all microscopic forms of energy of a system. It is related to the molecular structure and the degree of molecular activity and may be viewed as the sum of kinetic and potential energies of the molecules; it comprises the following types of energy:
• Sensible energy: the portion of the internal energy of a system associated with kinetic energies (molecular translation, rotation, and vibration; electron translation and spin; and nuclear spin) of the molecules.
• Latent energy: the internal energy associated with the phase of a system.
• Chemical energy: the internal energy associated with the different kinds of aggregation of atoms in matter.
• Nuclear energy: the tremendous amount of energy associated with the strong bonds within the nucleus of the atom itself.
• Energy interactions: those types of energies not stored in the system (e.g. heat transfer, mass transfer, and work), but which are recognized at the system boundary as they cross it, representing gains or losses by a system during a process.
• Thermal energy: the sum of sensible and latent forms of internal energy.
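Returning to the mechanical-energy relation $E_{pi} + E_{ki} = E_{pF} + E_{kF}$ quoted above, the following minimal sketch checks numerically that $mgh + \frac{1}{2}mv^2$ stays constant during free fall; the mass and drop height are hypothetical, and air resistance is neglected by assumption.

```python
# Conservation of mechanical energy for an object in free fall (no air resistance).
g = 9.81      # m/s^2, acceleration due to gravity
m = 2.0       # kg, hypothetical mass
h0 = 10.0     # m, hypothetical drop height

for h in (10.0, 7.5, 5.0, 2.5, 0.0):
    v = (2.0 * g * (h0 - h)) ** 0.5     # speed after falling a distance (h0 - h)
    E_p = m * g * h                     # potential energy m g h
    E_k = 0.5 * m * v**2                # kinetic energy (1/2) m v^2
    print(f"h = {h:4.1f} m   E_p = {E_p:6.1f} J   E_k = {E_k:6.1f} J   total = {E_p + E_k:6.1f} J")
# The total is m*g*h0 = 196.2 J at every height, as conservation of energy requires.
```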
#### The laws of thermodynamics
According to the second law of thermodynamics, work can be totally converted into heat, but not vice versa. This is a mathematical consequence of statistical mechanics. The first law of thermodynamics simply asserts that energy is conserved, and that heat is included as a form of energy transfer. A commonly-used corollary of the first law is that for a "system" subject only to pressure forces and heat transfer (e.g. a cylinder-full of gas), the differential change in energy of the system (with a gain in energy signified by a positive quantity) is given by: $\mathrm{d}E = T\,\mathrm{d}S - P\,\mathrm{d}V$, where the first term on the right is the heat transfer into the system, defined in terms of temperature T and entropy S (in which entropy increases and the change dS is positive when the system is heated); and the last term on the right hand side is identified as "work" done on the system, where pressure is P and volume V (the negative sign results since compression of the system requires work to be done on it and so the volume change, dV, is negative when work is done on the system). Although this equation is the standard text-book example of energy conservation in classical thermodynamics, it is highly specific: it ignores all chemical, electric, nuclear, and gravitational forces, as well as effects such as advection of any form of energy other than heat, and it contains a term that depends on temperature. The most general statement of the first law (i.e., conservation of energy) is valid even in situations in which temperature is undefinable. Energy is sometimes expressed as: $\mathrm{d}E = \delta Q + \delta W$, which is unsatisfactory because there cannot exist any thermodynamic state functions W or Q that are meaningful on the right hand side of this equation, except perhaps in trivial cases.
### Equipartition of energy
The energy of a mechanical harmonic oscillator (a mass on a spring) is alternatively kinetic and potential. At two points in the oscillation cycle it is entirely kinetic, and at two other points it is entirely potential. Over the whole cycle, or over many cycles, the net energy is thus equally split between kinetic and potential. This is called the equipartition principle: the total energy of a system with many degrees of freedom is equally split among all available degrees of freedom. This principle is vitally important to understanding the behavior of a quantity closely related to energy, called entropy. Entropy is a measure of the evenness of the distribution of energy between parts of a system. When an isolated system is given more degrees of freedom (= is given new available energy states which are the same as existing states), then total energy spreads over all available degrees equally without distinction between "new" and "old" degrees. This mathematical result is called the second law of thermodynamics.
### Oscillators, phonons, and photons
In an ensemble (connected collection) of unsynchronized oscillators, the average energy is spread equally between kinetic and potential types. In a solid, thermal energy (often referred to loosely as heat content) can be accurately described by an ensemble of thermal phonons that act as mechanical oscillators. In this model, thermal energy is equally kinetic and potential. In an ideal gas, the interaction potential between particles is essentially the delta function which stores no energy: thus, all of the thermal energy is kinetic.
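As a numeric aside on the equipartition idea discussed above, the sketch below evaluates the standard result, quoted earlier in the equipartition entry, that each available degree of freedom carries (1/2)kT on average; the temperature is an arbitrary room-temperature choice, not a value from the source.

```python
# Equipartition sketch: average thermal energy per degree of freedom is (1/2) k T.
k_B = 1.380649e-23    # Boltzmann constant, J/K
T = 300.0             # kelvin, arbitrary room-temperature choice

per_dof = 0.5 * k_B * T          # energy per degree of freedom
monatomic_atom = 3 * per_dof     # a monatomic ideal-gas atom: three translational degrees
print(f"(1/2) k T per degree of freedom: {per_dof:.3e} J")   # ~2.07e-21 J
print(f"(3/2) k T per monatomic atom:    {monatomic_atom:.3e} J")
```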
Because an electric oscillator (LC circuit) is analogous to a mechanical oscillator, its energy must be, on average, equally kinetic and potential. It is entirely arbitrary whether the magnetic energy is considered kinetic and the electric energy considered potential, or vice versa. That is, either the inductor is analogous to the mass while the capacitor is analogous to the spring, or vice versa.
1. By extension of the previous line of thought, in free space the electromagnetic field can be considered an ensemble of oscillators, meaning that its energy can be considered equally potential and kinetic. This model is useful, for example, when the electromagnetic Lagrangian is of primary interest and is interpreted in terms of potential and kinetic energy.
2. On the other hand, in the key equation $m^2 c^4 = E^2 - p^2 c^2$, the contribution $mc^2$ is called the rest energy, and all other contributions to the energy are called kinetic energy. For a particle that has mass, this implies that the kinetic energy is $p^2/(2m)$ at speeds much smaller than c, as can be proved by writing $E = mc^2\sqrt{1 + p^2 m^{-2}c^{-2}}$ and expanding the square root to lowest order. By this line of reasoning, the energy of a photon is entirely kinetic, because the photon is massless and has no rest energy. This expression is useful, for example, when the energy-versus-momentum relationship is of primary interest.
The two analyses are entirely consistent. The electric and magnetic degrees of freedom in item 1 are transverse to the direction of motion, while the speed in item 2 is along the direction of motion. For non-relativistic particles these two notions of potential versus kinetic energy are numerically equal, so the ambiguity is harmless, but not so for relativistic particles.
### Work and virtual work
Work is roughly force times distance. But more precisely, it is $W = \int \mathbf{F} \cdot \mathrm{d}\mathbf{s}$. This says that the work ($W$) is equal to the integral (along a certain path) of the force; for details see the mechanical work article. Work and thus energy is frame dependent. For example, consider a ball being hit by a bat. In the center-of-mass reference frame, the bat does no work on the ball. But, in the reference frame of the person swinging the bat, considerable work is done on the ball.
### Quantum mechanics
In quantum mechanics energy is defined in terms of the energy operator as a time derivative of the wave function. The Schrödinger equation equates the energy operator to the full energy of a particle or a system. It thus can be considered as a definition of measurement of energy in quantum mechanics. The Schrödinger equation describes the space- and time-dependence of the slowly changing (non-relativistic) wave function of quantum systems. The solution of this equation for a bound system is discrete (a set of permitted states, each characterized by an energy level), which results in the concept of quanta. In the solution of the Schrödinger equation for any oscillator (vibrator) and for an electromagnetic wave in vacuum, the resulting energy states are related to the frequency by the Planck equation $E = h\nu$ (where $h$ is Planck's constant and $\nu$ the frequency). In the case of an electromagnetic wave, these energy states are called quanta of light or photons.
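A quick numerical check of the expansion described in item 2 above: for momenta much smaller than mc, the relativistic energy from $m^2c^4 = E^2 - p^2c^2$ reduces to the rest energy plus $p^2/(2m)$. The electron mass is a standard constant and the chosen momentum is an illustrative value, neither taken from the source.

```python
# Check that sqrt((m c^2)^2 + (p c)^2) - m c^2 is close to p^2 / (2 m) when p << m c.
c = 2.998e8          # m/s
m = 9.109e-31        # kg, electron mass
p = 1.0e-25          # kg m/s, well below m*c (~2.7e-22 kg m/s)

E_total = ((m * c**2) ** 2 + (p * c) ** 2) ** 0.5
ke_exact = E_total - m * c**2        # relativistic kinetic energy
ke_approx = p**2 / (2.0 * m)         # non-relativistic approximation

print(f"exact  kinetic energy: {ke_exact:.6e} J")
print(f"approx kinetic energy: {ke_approx:.6e} J")   # agrees to many significant figures
```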
### Relativity
When calculating kinetic energy (= work to accelerate a mass from zero speed to some finite speed) relativistically, using Lorentz transformations instead of Newtonian mechanics, Einstein discovered that an unexpected by-product of these calculations is an energy term which does not vanish at zero speed. He called it rest mass energy: energy which every mass must possess even when at rest. The amount of energy is directly proportional to the mass of the body: $E = m c^2$, where m is the mass, c is the speed of light in vacuum, and E is the rest mass energy. For example, consider electron-positron annihilation, in which the rest mass of individual particles is destroyed, but the inertia equivalent of the system of the two particles (its invariant mass) remains (since all energy is associated with mass), and this inertia and invariant mass is carried off by photons which individually are massless, but as a system retain their mass. This is a reversible process; the inverse process, called pair creation, is one in which the rest mass of particles is created from the energy of two (or more) annihilating photons. In general relativity, the stress-energy tensor serves as the source term for the gravitational field, in rough analogy to the way mass serves as the source term in the non-relativistic Newtonian approximation. It is not uncommon to hear that energy is "equivalent" to mass. It would be more accurate to state that every form of energy has an inertia and gravity equivalent, and because mass is a form of energy, mass too has inertia and gravity associated with it.
## Measurement
There is no absolute measure of energy, because energy is defined as the work that one system does (or can do) on another. Thus, only the energy of the transition of a system from one state into another can be defined and thus measured.
### Methods
The methods for the measurement of energy often rely on methods for the measurement of still more fundamental concepts of science, namely mass, distance, radiation, temperature, time, electric charge and electric current. Conventionally the technique most often employed is calorimetry, a thermodynamic technique that relies on the measurement of temperature using a thermometer or of the intensity of radiation using a bolometer.
### Units
Throughout the history of science, energy has been expressed in several different units such as ergs and calories. At present, the accepted unit of measurement for energy is the SI unit of energy, the joule.
## Forms of energy
Classical mechanics distinguishes between potential energy, which is a function of the position of an object, and kinetic energy, which is a function of its movement. Both position and movement are relative to a frame of reference, which must be specified: this is often (and originally) an arbitrary fixed point on the surface of the Earth, the terrestrial frame of reference. It has been attempted to categorize all forms of energy as either kinetic or potential: this is not incorrect, but neither is it clear that it is a real simplification, as Feynman points out. (Examples of the interconversion of energy: mechanical energy is converted into other forms by devices such as the lever, brakes, the dynamo, the synchrotron, matches, and the particle accelerator.)
### Potential energy
Potential energy, symbols Ep, V or Φ, is defined as the work done against a given force (= work of given force with minus sign) in changing the position of an object with respect to a reference position (often taken to be infinite separation).
If F is the force and s is the displacement, $E_{\rm p} = -\int \mathbf{F}\cdot\mathrm{d}\mathbf{s}$, with the dot representing the scalar product of the two vectors. The name "potential" energy originally signified the idea that the energy could readily be transferred as work—at least in an idealized system (reversible process, see below). This is not completely true for any real system, but is often a reasonable first approximation in classical mechanics. The general equation above can be simplified in a number of common cases, notably when dealing with gravity or with elastic forces.
#### Gravitational potential energy
The gravitational force near the Earth's surface varies very little with the height, h, and is equal to the mass, m, multiplied by the gravitational acceleration, g = 9.81 m/s². In these cases, the gravitational potential energy is given by $E_{\rm p,g} = mgh$. A more general expression for the potential energy due to Newtonian gravitation between two bodies of masses m1 and m2, useful in astronomy, is $E_{\rm p,g} = -G\frac{m_1 m_2}{r}$, where r is the separation between the two bodies and G is the gravitational constant, 6.6742(10)×10⁻¹¹ m³ kg⁻¹ s⁻². In this case, the reference point is the infinite separation of the two bodies.
#### Elastic potential energy
Elastic potential energy is defined as the work needed to compress (or expand) a spring. The force, F, in a spring or any other system which obeys Hooke's law is proportional to the extension or compression, x: $F = -kx$, where k is the force constant of the particular spring (or system). In this case, the calculated work becomes $E_{\rm p,e} = \frac{1}{2}kx^2$. Hooke's law is a good approximation for the behaviour of chemical bonds under normal conditions, i.e. when they are not being broken or formed.
### Kinetic energy
Kinetic energy, symbols Ek, T or K, is the work required to accelerate an object to a given speed. Indeed, calculating this work one easily obtains the following: $E_{\rm k} = \int \mathbf{F} \cdot \mathrm{d}\mathbf{x} = \int \mathbf{v} \cdot \mathrm{d}\mathbf{p} = \frac{1}{2}mv^2$. At speeds approaching the speed of light, c, this work must be calculated using Lorentz transformations, which results in the following: $E_{\rm k} = mc^2\left(\frac{1}{\sqrt{1 - (v/c)^2}} - 1\right)$. This equation reduces to the one above it at speeds small compared to c. A mathematical by-product of this work (which is immediately seen in the last equation) is that even at rest a mass has an amount of energy equal to: $E_{\rm rest} = mc^2$. This energy is thus called rest mass energy.
### Thermal energy
(Examples of the interconversion of energy: thermal energy is converted into other forms by devices and processes such as the steam turbine, the heat exchanger, the thermocouple, hot objects, the blast furnace, and supernovae.) Thermal energy (of some medium: gas, plasma, solid, etc.) is the energy associated with the microscopic random motion of the particles constituting the medium. For example, in the case of a monatomic gas it is just the kinetic energy of motion of the gas atoms as measured in the reference frame of the center of mass of the gas. In the case of a polyatomic gas, rotational and vibrational energy is also involved. In the case of liquids and solids there is also potential energy (of interaction of atoms) involved, and so on.
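Before moving on to heat, here is a brief numeric sketch of the potential- and kinetic-energy expressions above; the masses, distances, and the spring constant are made-up inputs chosen only for illustration (the Earth-Moon figures are round approximations, not values from the source).

```python
# Evaluating the potential- and kinetic-energy formulas with illustrative inputs.
G = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2
g = 9.81         # m/s^2
c = 2.998e8      # m/s

m, h = 5.0, 20.0
print("near-surface gravity, m g h          :", m * g * h, "J")

m1, m2, r = 5.97e24, 7.35e22, 3.84e8        # Earth, Moon, separation (rough values)
print("Newtonian potential, -G m1 m2 / r    :", -G * m1 * m2 / r, "J")

k, x = 200.0, 0.05                           # spring constant (N/m) and extension (m)
print("elastic potential, (1/2) k x^2       :", 0.5 * k * x**2, "J")

mass, v = 1.0, 0.5 * c                       # 1 kg moving at half the speed of light
gamma = 1.0 / (1.0 - (v / c) ** 2) ** 0.5
print("relativistic kinetic, m c^2 (gamma-1):", mass * c**2 * (gamma - 1.0), "J")
```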
Heat is defined as a transfer (flow) of thermal energy across a certain boundary (for example, from a hot body to a cold one via the area of their contact). A practical definition for small transfers of heat is $\Delta q = \int C_{\rm v}\,\mathrm{d}T$, where Cv is the heat capacity of the system. This definition will fail if the system undergoes a phase transition—e.g. if ice is melting to water—as in these cases the system can absorb heat without increasing its temperature. In more complex systems, it is preferable to use the concept of internal energy rather than that of thermal energy (see Chemical energy below). Despite the theoretical problems, the above definition is useful in the experimental measurement of energy changes. In a wide variety of situations, it is possible to use the energy released by a system to raise the temperature of another object, e.g. a bath of water. It is also possible to measure the amount of electric energy required to raise the temperature of the object by the same amount. The calorie was originally defined as the amount of energy required to raise the temperature of one gram of water by 1 °C (approximately 4.1855 J, although the definition later changed), and the British thermal unit was defined as the energy required to heat one pound of water by 1 °F (later fixed as 1055.06 J).
### Electric energy
(Examples of the interconversion of energy: electric energy is converted into other forms by devices and processes such as the electric motor, the resistor, the transformer, the light-emitting diode, electrolysis, and the synchrotron.) The electric potential energy of a given configuration of charges is defined as the work which must be done against the Coulomb force to rearrange charges from infinite separation to this configuration (or the work done by the Coulomb force separating the charges from this configuration to infinity). For two point-like charges Q1 and Q2 at a distance r this work, and hence electric potential energy, is equal to: $E_{\rm p,e} = \frac{1}{4\pi\epsilon_0}\frac{Q_1 Q_2}{r}$, where ε0 is the electric constant of a vacuum, 10⁷/4πc₀² or 8.854188…×10⁻¹² F/m. If the charge is accumulated in a capacitor (of capacitance C), the reference configuration is usually selected not to be infinite separation of charges, but the opposite: charges in extremely close proximity to each other (so there is zero net charge on each plate of the capacitor). The justification for this choice is purely practical: it is easier to measure both the voltage difference and the magnitude of charge on the capacitor plates relative to a discharged capacitor (where the charges return to close proximity to each other and the electrons and ions recombine, making the plates neutral) than relative to infinite separation of charges. In this case the work, and thus the electric potential energy, becomes $E_{\rm p,e} = \frac{Q^2}{2C}$. If an electric current passes through a resistor, electric energy is converted to heat; if the current passes through an electric appliance, some of the electric energy will be converted into other forms of energy (although some will always be lost as heat).
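Tying the preceding sentence to numbers, the following sketch estimates the electric energy dissipated as heat when a current flows through a resistor; the voltage, resistance, and duration are hypothetical, and the energy is computed as voltage × current × time, one of the equivalent expressions given in the next paragraph.

```python
# Electric energy dissipated as heat in a resistor (hypothetical values).
U = 12.0    # potential difference, volts
R = 6.0     # resistance, ohms
t = 60.0    # duration, seconds

I = U / R                    # current, amperes (Ohm's law)
E = U * I * t                # energy in joules; equivalently I**2 * R * t or U**2 * t / R
print(f"Current: {I:.1f} A, energy dissipated: {E:.0f} J")   # 2.0 A, 1440 J
```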
The amount of electric energy due to an electric current can be expressed in a number of different ways: $E = UQ = UIt = Pt = U^2t/R = I^2Rt$, where U is the electric potential difference (in volts), Q is the charge (in coulombs), I is the current (in amperes), t is the time for which the current flows (in seconds), P is the power (in watts) and R is the electric resistance (in ohms). The last of these expressions is important in the practical measurement of energy, as potential difference, resistance and time can all be measured with considerable accuracy.
#### Magnetic energy
There is no fundamental difference between magnetic energy and electric energy: the two phenomena are related by Maxwell's equations. The potential energy of a magnet of magnetic moment m in a magnetic field B is defined as the work of the magnetic force (actually of the magnetic torque) on re-alignment of the vector of the magnetic dipole moment, and is equal to: $E_{\rm p,m} = -\mathbf{m}\cdot\mathbf{B}$, while the energy stored in an inductor (of inductance L) when a current I passes through it is $E_{\rm p,m} = \frac{1}{2}LI^2$. This second expression forms the basis for superconducting magnetic energy storage.
#### Electromagnetic fields
(Examples of the interconversion of energy: electromagnetic radiation is converted into other forms by the solar sail, the solar collector, the solar cell, non-linear optics, photosynthesis, and Mössbauer spectroscopy.) Calculating the work needed to create an electric or magnetic field in unit volume (say, in a capacitor or an inductor) results in the electric and magnetic field energy densities: $u_e=\frac{\epsilon_0}{2} E^2$ and $u_m=\frac{1}{2\mu_0} B^2$, in SI units. Electromagnetic radiation, such as microwaves, visible light or gamma rays, represents a flow of electromagnetic energy. Applying the above expressions to the magnetic and electric components of an electromagnetic field, both the volumetric density and the flow of energy in the field can be calculated. The resulting Poynting vector, which is expressed as $\mathbf{S} = \frac{1}{\mu} \mathbf{E} \times \mathbf{B}$ in SI units, gives the density of the flow of energy and its direction. The energy of electromagnetic radiation is quantized (has discrete energy levels). The spacing between these levels is equal to $E = h\nu$, where h is the Planck constant, 6.6260693(11)×10⁻³⁴ J·s, and ν is the frequency of the radiation. This quantity of electromagnetic energy is usually called a photon. The photons which make up visible light have energies of 270–520 zJ, equivalent to 160–310 kJ/mol, the strength of weaker chemical bonds.
### Chemical energy
(Examples of the interconversion of energy: chemical energy is converted into other forms by muscle, fire, the fuel cell, glowworms, and chemical reactions.) Chemical energy is the energy due to associations of atoms in molecules and various other kinds of aggregates of matter. It may be defined as the work done by electric forces during the rearrangement of electric charges (electrons and protons) in the process of aggregation. If the chemical energy of a system decreases during a chemical reaction, the difference is transferred to the surroundings in some form (often heat or light); on the other hand, if the chemical energy of a system increases as a result of a chemical reaction, the difference is supplied by the surroundings (usually again in the form of heat or light).
As an example of these changes: when two hydrogen atoms react to form a dihydrogen molecule, the chemical energy decreases by 724 zJ (the bond energy of the H–H bond); when the electron is completely removed from a hydrogen atom, forming a hydrogen ion (in the gas phase), the chemical energy increases by 2.18 aJ (the ionization energy of hydrogen). It is common to quote the changes in chemical energy for one mole of the substance in question: typical values for the change in molar chemical energy during a chemical reaction range from tens to hundreds of kJ/mol.

The chemical energy as defined above is also referred to by chemists as the internal energy, U: technically, this is measured by keeping the volume of the system constant. However, most practical chemistry is performed at constant pressure and, if the volume changes during the reaction (e.g. a gas is given off), a correction must be applied to take account of the work done by or on the atmosphere to obtain the enthalpy, H:

ΔH = ΔU + pΔV

A second correction, for the change in entropy, S, must also be performed to determine whether a chemical reaction will take place or not, giving the Gibbs free energy, G:

ΔG = ΔH − TΔS

These corrections are sometimes negligible, but often not (especially in reactions involving gases).

Since the industrial revolution, the burning of coal, oil, natural gas or products derived from them has been a socially significant transformation of chemical energy into other forms of energy. The energy "consumption" (one should really speak of "energy transformation") of a society or country is often quoted in reference to the average energy released by the combustion of these fossil fuels:

1 tonne of coal equivalent (TCE) = 29 GJ
1 tonne of oil equivalent (TOE) = 41.87 GJ

On the same basis, a tank-full of gasoline (45 litres, 12 gallons) is equivalent to about 1.6 GJ of chemical energy. Another chemically-based unit of measurement for energy is the "tonne of TNT", taken as 4.184 GJ. Hence, burning a tonne of oil releases about ten times as much energy as the explosion of one tonne of TNT: fortunately, the energy is usually released in a slower, more controlled manner. Simple examples of chemical energy are batteries and food. When you eat, the food is digested and turned into chemical energy, which can then be transformed into kinetic energy.

### Nuclear energy

Examples of the interconversion of energy: nuclear binding energy is converted into other forms in alpha radiation, the Sun, beta radiation, gamma radiation, radioactive decay and nuclear isomerism.

Nuclear binding energy, along with electric potential energy, provides the energy released from nuclear fission and nuclear fusion processes. The result of both of these processes is nuclei in which the strong nuclear forces bind the nuclear particles more strongly and closely. Weak nuclear forces (different from the strong forces) provide the potential energy for certain kinds of radioactive decay, such as beta decay. The energy released in nuclear processes is so large that the relativistic change in mass (after the energy has been removed) can be as much as several parts per thousand. Nuclear particles (nucleons) like protons and neutrons are not destroyed (law of conservation of baryon number) in fission and fusion processes. A few lighter particles may be created or destroyed (example: beta minus and beta plus decay, or electron capture decay), but these minor processes are not important to the immediate energy release in fission and fusion.
Rather, fission and fusion release energy when collections of baryons become more tightly bound, and it is the energy associated with a fraction of the mass of the nucleons (but not the whole particles) which appears as the heat and electromagnetic radiation generated by nuclear reactions. This heat and radiation retains the "missing" mass, but the mass is missing only because it escapes in the form of heat and light, which carry the mass out of the system, where it is not measured.

The energy from the Sun, also called solar energy, is an example of this form of energy conversion. In the Sun, the process of hydrogen fusion converts about 4 million metric tons of solar matter per second into light, which is radiated into space, but during this process, the total number of protons and neutrons in the Sun does not change. In this system, the light itself retains the inertial equivalent of this mass, and indeed the mass itself (as a system), which represents 4 million tons per second of electromagnetic radiation moving into space. Each of the helium nuclei formed in the process is less massive than the four protons from which it was formed, but (to a good approximation) no particles or atoms are destroyed in the process of turning the Sun's nuclear potential energy into light.

### Surface energy

If there is any kind of tension in a surface, such as a stretched sheet of rubber or material interfaces, it is possible to define surface energy. In particular, any meeting of dissimilar materials that don't mix will result in some kind of surface tension; if there is freedom for the surfaces to move then, as seen in capillary surfaces for example, the minimum energy will as usual be sought. A minimal surface, for example, represents the smallest possible energy that a surface can have if its energy is proportional to the area of the surface. For this reason, (open) soap films of small size are minimal surfaces (small size reduces gravity effects, and openness prevents pressure from building up; note that a bubble is a minimum energy surface but not a minimal surface by definition).

## Transformations of energy

One form of energy can often be readily transformed into another with the help of a device: for instance, a battery converts chemical energy to electric energy; a dam converts gravitational potential energy to kinetic energy of moving water (and the blades of a turbine) and ultimately to electric energy through an electric generator. Similarly, in the case of a chemical explosion, chemical potential energy is transformed to kinetic energy and thermal energy in a very short time. Yet another example is that of a pendulum. At its highest points the kinetic energy is zero and the gravitational potential energy is at a maximum. At its lowest point the kinetic energy is at a maximum and is equal to the decrease of potential energy. If one (unrealistically) assumes that there is no friction, the conversion of energy between these processes is perfect, and the pendulum will continue swinging forever.

Energy can be converted into matter and vice versa. The mass-energy equivalence formula E = mc², derived by several authors (Olinto de Pretto, Albert Einstein, Friedrich Hasenöhrl, Max Planck and Henri Poincaré), quantifies the relationship between mass and rest energy.
Since $c^2$ is extremely large relative to ordinary human scales, the conversion of an ordinary amount of mass (say, 1 kg) to other forms of energy can liberate tremendous amounts of energy ($\sim 9\times 10^{16}$ joules), as can be seen in nuclear reactors and nuclear weapons. Conversely, the mass equivalent of a unit of energy is minuscule, which is why a loss of energy from most systems is difficult to measure by weight, unless the energy loss is very large. Examples of energy transformation into matter (particles) are found in high energy nuclear physics.

In nature, transformations of energy can be fundamentally classed into two kinds: those that are thermodynamically reversible, and those that are thermodynamically irreversible. An irreversible process is one in which energy is dissipated (spread) into empty energy states available in a volume, from which it cannot be recovered into more concentrated forms (fewer quantum states) without degradation of even more energy. A reversible process is one in which this sort of dissipation does not happen. For example, conversion of energy from one type of potential field to another is reversible, as in the pendulum system described above. In processes where heat is generated, however, quantum states of lower energy, present as possible excitations in fields between atoms, act as a reservoir for part of the energy, from which it cannot be recovered in order to be converted with 100% efficiency into other forms of energy. In this case, the energy must partly stay as heat, and cannot be completely recovered as usable energy, except at the price of some other heat-like increase in disorder in quantum states in the universe (such as an expansion of matter, or a randomization in a crystal).

As the universe evolves in time, more and more of its energy becomes trapped in irreversible states (i.e., as heat or other kinds of increases in disorder). This has been referred to as the inevitable thermodynamic heat death of the universe. In this heat death the energy of the universe does not change, but the fraction of energy which is available to produce work through a heat engine, or to be transformed to other usable forms of energy (through the use of generators attached to heat engines), grows less and less.

## Law of conservation of energy

Energy is subject to the law of conservation of energy. According to this law, energy can neither be created (produced) nor destroyed by itself. It can only be transformed. Most kinds of energy (with gravitational energy being a notable exception) are also subject to strict local conservation laws. In this case, energy can only be exchanged between adjacent regions of space, and all observers agree as to the volumetric density of energy in any given space. There is also a global law of conservation of energy, stating that the total energy of the universe cannot change; this is a corollary of the local law, but not vice versa. Conservation of energy is the mathematical consequence of the translational symmetry of time (that is, the indistinguishability of time intervals taken at different times); see Noether's theorem. According to the law of energy conservation, the total inflow of energy into a system must equal the total outflow of energy from the system, plus the change in the energy contained within the system. This law is a fundamental principle of physics.
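A one-line check of the figure quoted earlier in this section for converting 1 kg of mass entirely into energy:

```python
C = 2.99792458e8                 # speed of light, m/s
print(1.0 * C**2)                # ~8.99e16 J, i.e. the "~9e16 joules" quoted above
```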
It follows from the translational symmetry of time, a property of most phenomena below the cosmic scale that makes them independent of their locations on the time coordinate. Put differently, yesterday, today, and tomorrow are physically indistinguishable. This is because energy is the quantity which is canonically conjugate to time. This mathematical entanglement of energy and time also results in the uncertainty principle: it is impossible to define the exact amount of energy during any definite time interval. The uncertainty principle should not be confused with energy conservation; rather, it provides mathematical limits to which energy can in principle be defined and measured.

In quantum mechanics energy is expressed using the Hamiltonian operator. On any time scale, the uncertainty in the energy is given by

$\Delta E \, \Delta t \ge \frac{\hbar}{2}$

which is similar in form to the Heisenberg uncertainty principle (but not really mathematically equivalent thereto, since H and t are not dynamically conjugate variables, neither in classical nor in quantum mechanics). In particle physics, this inequality permits a qualitative understanding of virtual particles, which carry momentum and whose exchange with and between real particles is responsible for the creation of all known fundamental forces (more accurately known as fundamental interactions). Virtual photons (which are simply the lowest quantum mechanical energy state of photons) are also responsible for the electrostatic interaction between electric charges (which results in Coulomb's law), for spontaneous radiative decay of excited atomic and nuclear states, for the Casimir force, for van der Waals bond forces and some other observable phenomena.

## Energy and life

Any living organism relies on an external source of energy (radiation from the Sun in the case of green plants; chemical energy in some form in the case of animals) to be able to grow and reproduce. The daily 1500–2000 Calories (6–8 MJ) recommended for a human adult are taken as a combination of oxygen and food molecules, the latter mostly carbohydrates and fats, of which glucose (C6H12O6) and stearin (C57H110O6) are convenient examples. The food molecules are oxidised to carbon dioxide and water in the mitochondria

C6H12O6 + 6O2 → 6CO2 + 6H2O
C57H110O6 + 81.5O2 → 57CO2 + 55H2O

and some of the energy is used to convert ADP into ATP

ADP + HPO42− → ATP + H2O

The rest of the chemical energy in the carbohydrate or fat is converted into heat: the ATP is used as a sort of "energy currency", and some of the chemical energy it contains is used, when it is split and reacted with water, to drive other metabolism (at each stage of a metabolic pathway, some chemical energy is converted into heat). Only a tiny fraction of the original chemical energy is used for work:

gain in kinetic energy of a sprinter during a 100 m race: 4 kJ
gain in gravitational potential energy of a 150 kg weight lifted through 2 metres: 3 kJ
daily food intake of a normal adult: 6–8 MJ

It would appear that living organisms are remarkably inefficient (in the physical sense) in their use of the energy they receive (chemical energy or radiation), and it is true that most real machines manage higher efficiencies. However, in growing organisms the energy that is converted to heat serves a vital purpose, as it allows the organism's tissue to be highly ordered with regard to the molecules it is built from.
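The two mechanical figures in the list above are easy to verify; the sketch below assumes a runner of roughly 80 kg reaching about 10 m/s, and takes g as 9.81 m/s².

```python
g = 9.81                         # gravitational acceleration, m/s^2

# 150 kg weight lifted through 2 m: potential energy gain m*g*h
print(150 * g * 2)               # ~2.9e3 J, i.e. about 3 kJ as listed above

# sprinter: kinetic energy 0.5*m*v^2 for an assumed 80 kg runner at ~10 m/s
print(0.5 * 80 * 10**2)          # 4.0e3 J, i.e. about 4 kJ as listed above
```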
The second law of thermodynamics states that energy (and matter) tends to become more evenly spread out across the universe: to concentrate energy (or matter) in one specific place, it is necessary to spread out a greater amount of energy (as heat) across the remainder of the universe ("the surroundings"). Simpler organisms can achieve higher energy efficiencies than more complex ones, but the complex organisms can occupy ecological niches that are not available to their simpler brethren. The conversion of a portion of the chemical energy to heat at each step in a metabolic pathway is the physical reason behind the pyramid of biomass observed in ecology: to take just the first step in the food chain, of the estimated 124.7 Pg/a of carbon that is fixed by photosynthesis, 64.3 Pg/a (52%) are used for the metabolism of green plants, i.e. reconverted into carbon dioxide and heat.

## Further reading

• Alekseev, G. N. (1986). Energy and Entropy. Moscow: Mir Publishers.
• Walding, Richard; Rapkins, Greg; Rossiter, Glenn. New Century Senior Physics. Melbourne, Australia: Oxford University Press. ISBN 0-19-551084-4.
• Smil, Vaclav. Energy in Nature and Society: General Energetics of Complex Systems. Cambridge, USA: MIT Press. ISBN 978-0-262-19565-2.

## External links

• Compact description of various energy sources: Energy sources and ecology.
• Conservation of Energy
• Energy Information Administration: official energy statistics from the U.S. government
• Europe's Energy Portal: official energy news and statistics from the European Union
• EnergyWiki
• Glossary of Energy Terms
• Middle East Energy & Power News
• What does energy really mean? From Physics World
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 47, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9337592720985413, "perplexity_flag": "middle"}
http://mathhelpforum.com/calculus/165777-general-solution-quasilinear-equations.html
# Thread:

1. ## General Solution of Quasilinear Equations

Hi all, I understand everything in the following example up until the words "Now, consider..." Could somebody help me to understand how we get the $\frac{d}{ds}\left(\frac{x+u}{y}\right)$ part?

2. It's simply the product rule. You are differentiating $\frac{x+u}{y}= (x+u)\frac{1}{y}$ with respect to s. Presumably, x and y are functions of s but u, here, is independent of s, so we can treat it as a constant. The derivative of $(x+u)\frac{1}{y}$ is $\frac{d(x+u)}{ds}\frac{1}{y}+ (x+u)\frac{d}{ds}\left(\frac{1}{y}\right)= \frac{dx}{ds}\frac{1}{y}+ (x+u)\left(-\frac{1}{y^2}\frac{dy}{ds}\right)$.

3. Sorry, I should have mentioned that all 3 functions, x, y and u, are functions of s here. I understand the product rule, but I don't have a clue where the (x+u)/y sprang from. My understanding of this problem is that we're trying to find two functions

$\Phi(x,y,u)$ = constant
$\Psi(x,y,u)$ = constant

as the solution of the equation satisfies $F(\Phi,\Psi)$ = 0. Since $\frac{d}{ds}\left(\frac{x+u}{y}\right) = 0$, (x+u)/y is a constant, and so there's $\Phi(x,y,u)$. I just don't understand how we got there.

In the notes I have, there is an alternative way of solving these quasilinear equations, which involves integrating the characteristic equations, then parametrising the initial line and setting s = 0. Does this sound familiar to anyone?
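A quick symbolic check of the product-rule expansion discussed above (using SymPy; x, y and u are all taken as functions of s, so setting du/ds = 0 recovers the case where u is treated as a constant):

```python
import sympy as sp

s = sp.symbols('s')
x, y, u = (sp.Function(name)(s) for name in ('x', 'y', 'u'))

expr = (x + u) / y
print(sp.simplify(sp.diff(expr, s)))
# -> (dx/ds + du/ds)/y - (x + u)*(dy/ds)/y**2,
#    i.e. the product rule applied to (x + u) * (1/y)
```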
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 9, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9408727288246155, "perplexity_flag": "head"}
http://quant.stackexchange.com/questions/3110/how-to-choose-a-data-center-for-deploying-high-frequency-trading-strategies?answertab=active
# How to choose a data center for deploying high frequency trading strategies?

We are in the process of selecting the data center for deploying our high frequency strategies. Does anyone have a questionnaire that can be used to figure out what type of infrastructure (hardware, processor, cores, RAM, network interface card, switch, OS, etc.) will be the best for our strategies? I know this is a big question but any useful input based on your past experience will be very helpful...

3 Sorry, I can't take this question seriously. Surely if you developed true HFT models you've considered proximity and latency to matching engines and associated venue fee structures very carefully. Right? – Louis Marascio Mar 22 '12 at 13:58
1 Only you can assess what your needs are. – chrisaycock♦ Mar 22 '12 at 14:05

## 3 Answers

Be careful: if you want to implement a smart routing strategy (addressing more than one trading venue), you should at least have one component of your strategy located at the same distance (i.e. same time to each matching engine) from all the orderbooks you plan to interact with.

Say that you detect at $\tau_0$ that the bid price in matching engine $B$ is higher than the ask price in matching engine $A$. The time to reach $B$ is $\tau_B$ and to reach $A$ is $\tau_A$. Your two orders will reach $A$ at $\tau_0+\tau_A$ and $B$ at $\tau_0+\tau_B$. The larger $|\tau_A-\tau_B|$, the higher the probability that one of the two quotes will have been removed one way or the other (cancelled or traded), and you will have to pay the full bid-ask spread to unwind your position.

In detail, if you have some tactics that can be run at one order-book only, it is good to be as close as possible to it; but for a simple bid-ask crossing arbitrage:

• say that you see a quantity of $Q_a$ at the best ask (at price $P_a$) on a trading venue $A$ and a quantity $Q_b$ at the best bid (at price $P_b=P_a+u$) on another venue $B$.
• You want to buy $Q=\min(Q_a,Q_b)$ on $A$ and sell it on $B$ (you will earn $Q \times u$).
• You send your two orders simultaneously to $A$ and $B$ at $\tau_0$; say that $\tau_A \ll \tau_B$.

1. Your order hits $A$ at $\tau_0+\tau_A$; assuming that $\tau_A$ is small, you bought $Q$ as you expected.
2. In between, the market participant who owns the order on $B$ realizes that his quote is crossing the spread without being executed: he sends a cancel.
3. His cancel has $\tau_B-\tau_A$ milliseconds to reach $B$ before the order you sent at $\tau_0$, and your pair would be ruined.

More generally, any observer of the price formation process has $\tau_B-\tau_A - 2\tau_{ex}$ (where $\tau_{ex}$ is its average latency to $A$ and $B$) to remove $Q_b$ one way or the other and ruin your pair.

The lower $|\tau_A-\tau_B|$, the better; it is even sometimes better to slow down one of your messages to ensure a small $|\tau_A-\tau_B|$. Of course, the larger $(\tau_A+\tau_B)/2$, the worse. You have to find the best balance between $\tau_A$ and $\tau_B$ such that $|\tau_A-\tau_B|$ and $(\tau_A+\tau_B)/2$ are as small as possible.

lehalle, can you elaborate on why "same distance" matters? – Ryogi Apr 17 '12 at 1:53
@RYogi it seems lehalle is stating that if you're trying to do arbitrage (think forex and two trading venues with different prices for the same coin) it will make more sense to have a similar latency for both trading venues (say 200ms) than to have 10ms for one and 390ms for the other. – Frankie Aug 6 '12 at 0:07
yes: it is a well-known effect. I tried to modify my answer, @RYogi.
– lehalle Aug 6 '12 at 6:02

OK, firstly, while I am only an amateur quant, I just happen to be a professional data center designer in the UK. I design data centers from small N-resilient rooms to multi-megawatt co-location facilities. Ultimately the question of whether to move your equipment into a co-location facility depends entirely on your business needs. To properly understand this you need to assess the capabilities of your existing facilities with respect to the connectivity of the site, security, power availability, average external ambient conditions and the cost of floor space vs office space.

Simply put, you should invest in a consultancy for an expert to perform a needs analysis and site assessment; with this information they can write a report with some indicative costing of building a new data centre and any site limitations. Using this information you can then quantify any decision on whether to co-locate or build a new facility. If you are serious about owning a data center then I urge you to contact a professional as soon as possible; the earlier the engagement, the more effective the design and the more informed the business decisions. Retrospective designing is painful for everyone!! I hope this helps.

1 Even though your input as a data center (DC) expert is very valued, and I loved your answer, I think it has never crossed the mind of the OP to build its own DC! ;) – Frankie Aug 6 '12 at 0:04

• It depends on whether your trading strategy requires co-location, based on speed of execution and latency: i.e. if your strategy is market-data event driven, then yes, you do need to be co-located.
• Co-location vs close proximity.
• Cost. Co-location is usually more expensive than close proximity.
• Services provided: i.e. if your host provider has a market data API that you understand, it might be the only choice.
• HFT strategies rely heavily on market data. If you are trading on multiple exchanges, the chances are that you are colocated close to one exchange but not the others!

At the end of the day, it is all about cost. If colocating will make you 50% more money, then it is worth it. If you are going to make only 10% extra, it may not be worth it.
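A small back-of-the-envelope sketch of the latency-balance argument in the first answer: the window during which a faster observer can remove the quote you are racing for scales like |τ_B − τ_A| − 2τ_ex, so balancing the two legs shrinks it. All numbers below are made up for illustration.

```python
def exposure_window_ms(tau_a, tau_b, tau_ex):
    """Time (ms) a faster observer has to remove the far quote before your order lands."""
    return abs(tau_a - tau_b) - 2 * tau_ex

# Illustrative latencies in milliseconds (made-up values).
print(exposure_window_ms(tau_a=0.5, tau_b=4.0, tau_ex=0.8))  # 1.9 ms of exposure
print(exposure_window_ms(tau_a=2.0, tau_b=2.5, tau_ex=0.8))  # -1.1 ms: no window left
```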
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 46, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9461966753005981, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/177397/trace-of-product-of-matrices?answertab=active
# Trace of product of matrices Problem: Let $\{A_r\},\{B_r\},\{C_r\}$ each be families of (determinant 1) 2x2 matrices in $SL(2,\mathbb{R})$ such that each family is continuously indexed by a parameter $0<r<1$. Is there a known formula for calculating $tr(W)$, where $W$ is $(A_rB_rC_r)^nA_r$, $(A_rB_rC_r)^nA_rB_r$, or $(A_rB_rC_r)^n$? Progress thus far: The indexing is sufficiently complicated that simply multiplying and looking for a pattern has not been successful. Each entry of each family is a ratio of quadratics in $r$, and I do not remember the right computational linear algebra to attack this problem. Motivation: the families of matrices represent isometries of the hyperbolic plane, and I am interested in knowing when such isometries change from elliptic to parabolic to hyperbolic - I've adjusted the question to say that I'm looking at matrices with determinant 1. I only care about the absolute value of the trace, but I'm looking to plot it as a function of the indexing variable $r$, so a polynomial output would be nice. – KReiser Aug 1 '12 at 2:07 2 If you can solve for the eigenvalues and eigenvectors of $A_r B_r C_r$ as a function of $r$ (this is something of a big if), then you can work over $\mathbb{C}$ and change basis so that $A_r B_r C_r$ is in Jordan normal form. Things are not so bad from here. – Qiaochu Yuan Aug 1 '12 at 2:22 Instead of thinking of them as families of matrices in $SL(2,\mathbb R)$, you can instead think of them as single matrices $A,B,C \in SL\big(2,{\mathbb R}(r)\big)$. – Greg Martin Aug 1 '12 at 5:53 ## 1 Answer $\def\Tr{\mathrm{Tr}}$Let $M(r) = A(r) B(r) C(r)$. Let the characteristic polynomial of $M(r)$ be $$\lambda^3 + x(r) \lambda^2 + y(r) \lambda + z(r)$$ Each of $x(r)$, $y(r)$ and $z(r)$ is a rational function, computable by a computer algebra system. If you can solve this cubic then proceed as Qiaochu says. (And it is definitely worth putting this cubic into a computer algebra system to see whether it simplifies in some surprising way.) But solving cubics is usually painful, so here is an alternative: The Cayley-Hamilton theorem tells you that $$M(r)^3 + x(r) M(r)^2 + y(r) M(r) + z(r)=0$$ so $$\Tr M(r)^{n+3} = - x(r) \Tr M(r)^{n+2} - y(r) \Tr M(r)^{n+1} - z(r) \Tr M(r)^n$$ So this gives a linear recursion for $\Tr M(r)^n$ which you can use to compute $\Tr M(r)^n$ pretty easily, and similarly for your other traces. -
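A minimal numeric sketch of the recursion idea suggested in the answer (note that for the 2×2 matrices of the question, Cayley–Hamilton gives a quadratic rather than cubic characteristic polynomial, so the recursion has only two terms; the code below works for any square matrix and simply uses however many coefficients the characteristic polynomial has):

```python
import numpy as np

def trace_powers(M, N):
    """Traces of M^1 ... M^N using the Cayley-Hamilton linear recursion."""
    d = M.shape[0]
    coeffs = np.poly(M)  # characteristic polynomial coefficients, leading coefficient 1
    traces = [np.trace(np.linalg.matrix_power(M, k)) for k in range(1, d + 1)]
    while len(traces) < N:
        m = len(traces)
        traces.append(-sum(coeffs[i] * traces[m - i] for i in range(1, d + 1)))
    return traces[:N]

M = np.array([[2.0, 1.0], [1.0, 1.0]])  # an SL(2,R) example, det = 1
print(trace_powers(M, 5))
print([np.trace(np.linalg.matrix_power(M, k)) for k in range(1, 6)])  # direct check
```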
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 24, "mathjax_display_tex": 3, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9400636553764343, "perplexity_flag": "head"}
http://en.wikipedia.org/wiki/Courant_number
# Courant–Friedrichs–Lewy condition

In mathematics, the Courant–Friedrichs–Lewy condition (CFL condition) is a necessary condition for convergence while solving certain partial differential equations (usually hyperbolic PDEs) numerically by the method of finite differences.[1] It arises in the numerical analysis of explicit time-marching schemes, when these are used for the numerical solution. As a consequence, the time step must be less than a certain time in many explicit time-marching computer simulations, otherwise the simulation will produce incorrect results. The condition is named after Richard Courant, Kurt Friedrichs, and Hans Lewy, who described it in their 1928 paper.[2]

## Heuristic description

The principle behind the condition is that, for example, if a wave is moving across a discrete spatial grid and we want to compute its amplitude at discrete time steps of equal length,[3] then this length must be less than the time for the wave to travel to adjacent grid points. As a corollary, when the grid point separation is reduced, the upper limit for the time step also decreases. In essence, the numerical domain of dependence of any point in space and time (the set of data values in the initial conditions which affect the numerically computed value at that point) must include the analytical domain of dependence (the set of points in the initial conditions which have an effect on the exact value of the solution at that point) in order to assure that the scheme can access the information required to form the solution.

## The CFL condition

In order to make a reasonably formally precise statement of the condition, it is necessary to define the following quantities:

• Spatial coordinate: one of the coordinates of the physical space in which the problem is posed.
• Spatial dimension of the problem: the number $n$ of spatial dimensions, i.e. the number of spatial coordinates of the physical space where the problem is posed. Typical values are $n=1$, $n=2$ and $n=3$.
• Time: the coordinate, acting as a parameter, which describes the evolution of the system, distinct from the spatial coordinates.

The spatial coordinates and the time are supposed to be discrete-valued independent variables, whose minimal steps are called respectively the interval length[4] and the time step: the CFL condition relates the length of the time step to a function of the interval lengths of each spatial variable. Operatively, the CFL condition is commonly prescribed for those terms of the finite-difference approximation of general partial differential equations which model the advection phenomenon.[5]

### The one-dimensional case

For the one-dimensional case, the CFL condition has the following form:

$C = \frac {u\,\Delta t} {\Delta x} \leq C_{max}$

where the dimensionless number $C$ is called the Courant number, and

• $u$ is the velocity (whose dimension is length/time)
• $\Delta t$ is the time step (whose dimension is time)
• $\Delta x$ is the length interval (whose dimension is length).

The value of $C_{max}$ changes with the method used to solve the discretised equation. If an explicit (time-marching) solver is used then typically $C_{max} = 1$. Implicit (matrix) solvers are usually less sensitive to numerical instability and so larger values of $C_{max}$ may be tolerated.

### The two and general n-dimensional case

In the two-dimensional case, the CFL condition becomes

$C = \frac {u_x\,\Delta t}{\Delta x} + \frac {u_y\,\Delta t}{\Delta y} \leq C_{max}$

with obvious meaning of the symbols involved.
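A minimal sketch of how the one-dimensional condition is typically used in practice to pick a stable explicit time step; the wave speed and grid spacing below are illustrative values only.

```python
def max_stable_dt(u, dx, c_max=1.0):
    """Largest time step allowed by the 1-D CFL condition C = u*dt/dx <= C_max."""
    return c_max * dx / u

u = 340.0     # advection/wave speed, m/s (illustrative)
dx = 0.01     # grid spacing, m (illustrative)
print(max_stable_dt(u, dx))   # ~2.9e-5 s: a larger explicit time step violates the CFL condition
```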
By analogy with the two-dimensional case, the general CFL condition for the $n$-dimensional case is the following one:

$C = \Delta t \sum_{i=1}^n\frac{u_{x_i}}{\Delta x_i} \leq C_{max}.$

The interval length is not required to be the same for each spatial variable $\Delta x_i, i = 1, ..., n$. This "degree of freedom" can be used in order to somewhat optimize the value of the time step for a particular problem, by varying the values of the different intervals in order to keep the time step from becoming too small.

## Implications of the CFL condition

### The CFL condition is only a necessary one

The CFL condition is a necessary condition, but may not be sufficient for the convergence of the finite-difference approximation of a given numerical problem. Thus, in order to establish the convergence of the finite-difference approximation, it is necessary to use other methods, which in turn could imply further limitations on the length of the time step and/or the lengths of the spatial intervals.

### The CFL condition can be a very strong requirement

The CFL condition can be a very limiting constraint on the time step $\Delta t$: for example, in the finite-difference approximation of certain fourth-order nonlinear partial differential equations, it can have the following form

$\frac{\Delta t}{(\Delta x)^4} < C u$

meaning that a decrease in the length interval $\Delta x$ requires a fourth-order decrease in the time step $\Delta t$ for the condition to be fulfilled. Therefore, when solving particularly stiff problems, efforts are often made to avoid the CFL condition, for example by using implicit methods.

## Notes

1. See reference Courant, Friedrichs & Lewy 1928. There exists also an English translation of the 1928 German original: see references Courant, Friedrichs & Lewy 1956 and Courant, Friedrichs & Lewy 1967.
2. This situation commonly occurs when a hyperbolic partial differential operator has been approximated by a finite difference equation, which is then solved by numerical linear algebra methods.
3. Precisely, this is the hyperbolic part of the PDE under analysis.

## References

• Courant, R.; Friedrichs, K.; Lewy, H. (1928), "Über die partiellen Differenzengleichungen der mathematischen Physik", Mathematische Annalen, 100 (1): 32–74.
• Courant, R.; Friedrichs, K.; Lewy, H. (September 1956) [1928], On the partial difference equations of mathematical physics, AEC Research and Development Report, NYO-7689, New York: AEC Computing and Applied Mathematics Centre – Courant Institute of Mathematical Sciences, pp. V + 76, archived from the original on October 23, 2008; translated from the German by Phyllis Fox. This is an earlier version of the paper Courant, Friedrichs & Lewy 1967, circulated as a research report.
• Courant, R.; Friedrichs, K.; Lewy, H. (March 1967) [1928], "On the partial difference equations of mathematical physics", IBM Journal of Research and Development, 11 (2): 215–234, MR 0213764, Zbl 0145.40402.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 19, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8939867615699768, "perplexity_flag": "head"}
http://www.cfd-online.com/W/index.php?title=Combustion&diff=12724&oldid=9779
# Combustion
The power of Fire, or Flame, for instance, which we designate by some trivial chemical name, thereby hiding from ourselves the essential character of wonder that dwells in it as in all things, is with these old Northmen, Loke, a most swift subtle Demon of the brood of the Jötuns... From us too no Chemistry, if it had not Stupidity to help it, would hide that Flame is a wonder. What is Flame?

Carlyle, On Heroes, Odin and Scandinavian Mythology [1].

## What is combustion -- physics versus modelling

Combustion phenomena consist of many physical and chemical processes which exhibit a broad range of time and length scales. A mathematical description of combustion is not always trivial, although some analytical solutions exist for simple situations of laminar flame. Such analytical models are usually restricted to problems in zero or one-dimensional space.

# Nomenclature

In a general manner, we shall try to stick to the official standards, the nomenclature for heat transfer, that may be found in an issue of the Journal of Heat Transfer [2].

$u'$ : Turbulent integral RMS velocity

# Fundamental Aspects

## Main Specificities of Combustion Chemistry

Combustion can be split into two processes interacting with each other: thermal and chemical. The chemistry is highly exothermic (this is the reason for its use) but also highly temperature dependent, and thus highly self-accelerating.
In a simplified form, combustion can be represented by a single irreversible reaction involving 'a' fuel and 'an' oxidizer:

$\frac{\nu_F}{\bar M_F}Y_F + \frac{\nu_O}{\bar M_O} Y_O \rightarrow Product + Heat$

Although very simplified compared to real chemistry involving hundreds of species (and their individual transport properties) and elementary reactions, this rudimentary chemistry has been the cornerstone of combustion analysis and modelling. The most widely used form for the rate of the above reaction is the Arrhenius law:

$\dot\omega = \rho^2 A \left ( \frac{Y_F}{\bar M_F} \right )^{n_F} \left (\frac{Y_O}{\bar M_O}\right )^{n_O} \exp^{-T_a/T}$

$T_a$ is the activation temperature, which is high in combustion, consistent with the strong temperature dependence. This is where the high non-linearity in temperature is modelled. A is the pre-exponential constant. One interpretation of the Arrhenius law comes from gas kinetic theory: the number of molecules whose kinetic energy is larger than the minimum value allowing a collision to be energetic enough to trigger a reaction is proportional to the exponential term introduced above divided by the square root of the temperature. This interpretation allows one to think that the temperature dependence of A is very weak compared to the exponential term, and A is therefore considered as constant. The reaction rate is also naturally proportional to the molecular density of each of the reactants. Nonetheless, the orders of reaction $n_i$ are different from the stoichiometric coefficients, as the single-step reaction is global and not governed by a single collision, for it represents hundreds of elementary reactions.

If one goes into the details, combustion chemistry is based on chain reactions, decomposed into three main steps: (i) generation (where radicals are created from the fresh mixture), (ii) branching (where products and new radicals appear from the interaction of radicals with reactants), and (iii) termination (where radicals collide and turn into products). The branching step tends to accelerate the production of active radicals (autocatalytic). The impact is nevertheless small compared to the high non-linearity in temperature. This explains why single-step chemistry has been sufficient for most of the combustion modelling work up to now.

The fact that a flame is a very thin reaction zone separating, and making the transition between, a frozen mixture and an equilibrium is explained by the high temperature dependence of the reaction term, modelled by a large activation temperature, and a large heat release (the ratio of the burned and fresh gas temperatures is about 7 for typical hydrocarbon flames), leading to a sharp self-acceleration in a very narrow area. To evaluate the order of magnitude of the quantities, the terms in the exponential argument are normalized:

$\beta=\alpha\frac{T_a}{T_s} \qquad \alpha=\frac{T_s-T_f}{T_s}$

$\beta$ is named the Zeldovitch number and $\alpha$ the heat release factor. Here, $T_s$ has been used instead of $T_b$, the conventional notation for the burned gas temperature (at final equilibrium). $T_s$ is actually $T_b$ for a mixture at stoichiometry and when the system is adiabatic, i.e. this is the reference highest temperature that can be obtained in the system. $T_f$ is the ambient temperature of the fresh gases. That said, typical values for $\beta$ and $\alpha$ are 10 and 0.9, giving a good taste of the level of non-linearity of the combustion process with respect to temperature.
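To make these orders of magnitude concrete, here is a small sketch evaluating β and α for representative, assumed temperatures: a fresh-gas temperature of 300 K, an adiabatic stoichiometric temperature around 2100 K, and an activation temperature of a few tens of thousands of kelvin (all three values are illustrative, not taken from the text above).

```python
T_f = 300.0       # fresh gas temperature, K (assumed)
T_s = 2100.0      # adiabatic stoichiometric temperature, K (assumed)
T_a = 25000.0     # activation temperature, K (assumed, typical of a global scheme)

alpha = (T_s - T_f) / T_s          # heat release factor
beta = alpha * T_a / T_s           # Zeldovitch number
print(alpha, beta)                 # ~0.86 and ~10.2, close to the 0.9 and 10 quoted above
```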
Actually, the reaction rate is rewritten as:

$\dot\omega = \rho^2 A \left ( \frac{Y_F}{\bar M_F} \right )^{n_F} \left (\frac{Y_O}{\bar M_O}\right )^{n_O} \exp^{-\frac{\beta}{\alpha}}\exp^{-\beta\frac{1-\theta}{1-\alpha(1-\theta)}}$

where the non-dimensionalized temperature is:

$\theta=\frac{T-T_f}{T_s-T_f}$

The non-linearity of the reaction rate is seen from the exponential term:

• ${\mathcal O}(\exp^{-\beta})$ for $\theta$ far from unity (in the fresh gas)
• ${\mathcal O}(1)$ for $\theta$ close to unity (in the reaction zone close to the burned gas, whose temperature must be close to the adiabatic one $T_s$), more exactly for $1-\theta \sim {\mathcal O}(\beta^{-1})$

Figure: Temperature Non-Linearity of the Source Term: the Temperature-Dependent Factor of the Reaction Term for Some Values of the Zeldovitch and Heat Release Parameters

Note that for an infinitely high activation energy, the reaction rate is governed by a $\delta(\theta)$ function. The figure beside illustrates how common values of $\beta$ around 10 tend to make the reaction rate singular around $\theta$ of unity. Two sets of values are presented: $\beta = 10$ and $\beta = 8$. The first magnitude is the representative value while the second one is a smoother one usually used to ease numerical simulations. In the same way, two values for the heat release $\alpha$, 0.9 and 0.75, are explored. The heat release is seen to have a minor impact on the temperature non-linearity.

## Transport Equations

In addition to the Navier-Stokes equations, at least with variable density, the transport equations for a reacting flow are the energy and species transport equations with their appropriate boundary conditions, which may be of Dirichlet or Neumann type. In usual notation, the transport equation for species i is written as:

$\frac{D \rho Y_i}{D t} = \nabla\cdot \rho D_i\vec\nabla Y_i - \nu_i\bar M_i\dot\omega$

and the temperature transport equation:

$\frac{D\rho C_p T}{Dt} = \nabla\cdot \lambda\vec\nabla T + Q\nu_F \bar M_F \dot\omega$

The diffusion is modelled with Fick's law, which is a (usually good) approximation to the rigorous diffusion velocity calculation. The temperature transport equation is derived from the energy transport equation under the assumption of a low-Mach number flow (compressibility and viscous heating neglected). The low-Mach number approximation is suitable for the deflagration regime (as will be demonstrated below), which is the main focus of combustion modelling. Hence, the transport equation for temperature, as a simplified version of the energy transport equation, is usually retained for the study of combustion and its modelling.

Note: The species and temperature equations are not closed, as the fields of velocity and density also need to be computed. Through intense heat release in a very small area (the jump in temperature across typical hydrocarbon flames is a factor of about seven, and so is the drop in density in this isobaric process, while the thickness of a flame is of the order of a millimetre), combustion influences the flow field. Nevertheless, the vast majority of combustion modelling has been developed based on the species and temperature equations, assuming simple flow fields, sometimes including hydrodynamic perturbations.
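The temperature non-linearity quantified at the top of this section is easy to see numerically; the sketch below evaluates the temperature-dependent factor $\exp(-\beta(1-\theta)/(1-\alpha(1-\theta)))$ for a few values of the reduced temperature, with β = 10 and α = 0.9 as in the text.

```python
import math

beta, alpha = 10.0, 0.9

def temperature_factor(theta):
    """Exponential, temperature-dependent factor of the reduced reaction rate."""
    return math.exp(-beta * (1.0 - theta) / (1.0 - alpha * (1.0 - theta)))

for theta in (0.0, 0.5, 0.8, 0.9, 0.95, 1.0):
    print(theta, temperature_factor(theta))
# the factor is exp(-100) ~ 4e-44 in the fresh gas (theta = 0) and only becomes O(1)
# when 1 - theta is of order 1/beta, i.e. in a thin layer next to the burned gas
```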
### Low-Mach Number Equations In compressible flows, when the motion of the fluid is not negligible compared to the speed of sound (which is the speed at which the molecules can reorganize themselves), the heap of molecules results in a local increase of pressure and temperature moving as an acoustic wave. It means that, in such a system, a proper reference velocity is the speed of sound and a proper pressure reference is the kinetic pressure. A contrario, in low-Mach number flows, the reference speed is the natural representative speed of the flow and the reference pressure is the thermodynamic pressure. Hence, the set of reference quantities to characterize a low-Mach number flow is given in the table below: Density $\rho_o$ A reference density (upstream, average, etc.) Velocity $U_o$ A reference velocity (inlet average, etc.) Temperature $T_o$ A reference temperature (upstream, average, etc.) Pressure (static) $P_o=\rho_o \bar r T_o$ From Boyle-Mariotte Length $L_o$ A reference length (representative of the domain) Time $L_o/U_o$ Energy $C_p T_o$ Internal energy at constant reference pressure. $C_p$ must also be chosen in a reference thermodynamical state. The equations for fluid mechanics properly adimensionalized can be written: Mass conservation: $\frac{D\rho}{Dt} =0$ Momentum: $\frac{D\rho\vec U}{Dt}=-\frac{1}{\gamma M}\vec\nabla P+\nabla\cdot\frac{1}{Re}\bar\bar\Sigma$ Total energy: $\frac{D\rho e_T}{Dt}-\frac{1}{RePr}\nabla\cdot\lambda\vec\nabla T=-\frac{\gamma-1}{\gamma}\nabla\cdot P\vec U +\frac{1}{Re}M^2(\gamma-1)\nabla\cdot \vec U\bar\bar\Sigma + \frac{\rho}{C_pT_o(U_o/L_o)}Q\nu_F \bar M_F\dot\omega$ Specie: $\frac{D\rho Y}{Dt}-\frac{1}{ScRe}\nabla\cdot\rho D\vec\nabla Y=-\frac{\rho}{U_o/L_o}\nu\bar M\dot\omega$ State law: $P=\rho T$ The low-Mach number equations are obtained considering that $M^2$ is small. 0.1 is usually taken as the limit, which recovers the value of a Mach number of 0.3 to characterize the incompressible regime. Considering the energy equation, in addition to the terms with $M^2$ in factor in the equation, the total energy reduces to internal energy as: $e_T = T/\gamma+M^2(\gamma-1)\vec U^2/2$. Moreover, the work of pressure is considered as negligible because the gradient of pressure is negligible (low-Mach number approximation is indeed also named isobaric approximation) and the flow is assumed close to a divergence-free state. For the same reason, volumic energy and enthalpy variations are assumed equal as they only differ through the addition of pressure. Hence, redimensionalized, the low-Mach number energy equation leads to the temperature equation as used in combustion analysis: $\frac{D\rho C_p T}{Dt} = \nabla\cdot \lambda\vec\nabla T + Q\nu_F \bar M_F \dot\omega$ ### The Damköhler Number A flame is a reaction zone. From this simple point of view, two aspects have to be considered: (i) the rate at which it is fed by reactants, let call $\tau_d$ the characteristic time, and the strength of the chemistry to consume them, let call the characteristic chemical time $\tau_c$. In combustion, the Damköhler number, Da, compares these both time scales and, for that reason, it is one of the most integral non-dimensional groups: $Da=\frac{\tau_d}{\tau_c}$. If Da is large, it means that the chemistry has always the time to fully consume the fresh mixture and turn it into equilibrium. Real flames are usually close to this state. The characteristic reaction time, $(Ae^{-T_a/T_s})^{-1}$, is estimated of the order of the tenth of a ms. 
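As a rough numerical illustration of the Damköhler number introduced above, the sketch below compares an assumed mechanical feeding time with the characteristic chemical time of the order of 0.1 ms quoted in the text; the flow times are illustrative only.

```python
# Minimal sketch (illustrative numbers): forming Da from a feeding time and
# the characteristic chemical time quoted above (~0.1 ms).
tau_c = 1e-4                               # chemical time [s]
for tau_d in (1e-2, 1e-3, 1e-4, 1e-5):     # assumed flow/diffusion times [s]
    Da = tau_d / tau_c
    regime = "fast chemistry (near equilibrium)" if Da > 1.0 else "weak chemistry / frozen"
    print(f"tau_d = {tau_d:.0e} s  ->  Da = {Da:7.2f}  ({regime})")
```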
When Da is low, the fresh mixture cannot be converted because the chemistry is too weak: the flow remains frozen. This situation occurs during ignition or misfire, for instance. The picture of a deflagration lends itself to a description based on the Damköhler number. A reacting wave progresses towards the fresh mixture by preheating the closest upstream layer. The rise in temperature strengthens the chemistry and reduces its characteristic time, such that the mixture changes from a low-Da region (far upstream, frozen) to a high-Da region in the flame (intense reaction towards equilibrium).

## Conservation Laws

The process of combustion transforms chemical enthalpy into sensible enthalpy (i.e. it raises the temperature of the gases thanks to the heat released). Simple relations can be drawn between species and temperature by studying the source terms appearing in the above equations: $\frac{Y_F}{\nu_F\bar M_F} - \frac{Y_O}{\nu_O\bar M_O} = \frac{Y_{F,u}}{\nu_F\bar M_F} - \frac{Y_{O,u}}{\nu_O\bar M_O}$ $Y_F + \frac{Cp T}{Q} = Y_{F,u} + \frac{CpT_u}{Q}$ These coupling functions are named Shvab-Zeldovich variables. Hence $T_b = T_u + \frac{Q Y_{F,u}}{Cp}$, $Y_{O,b} = Y_{O,u} - \frac{\nu_O \bar M_O}{\nu_F \bar M_F} Y_{F,u} = Y_{O,u} -sY_{F,u}$ and $Y_{F,b} = 0$. Here, the example has been taken for a lean case. As mentioned in Sec. Main Specificities, the stoichiometric state is used to non-dimensionalize the conservation equations: $Y_i^* = Y_{i,u}^* - \theta\frac{Y_{F,s}}{Y_{i,s}}s^{\delta_{i,O}} \qquad ; \qquad Y_i^*=Y_i/Y_{i,s}$. A comprehensive form of the reaction rate can be reconstituted to understand the difficulty of numerically resolving the reaction zone: $\dot\omega = B \prod_{i=O,F}(Y_{i,u}^*-\theta\frac{Y_{F,s}}{Y_{i,s}}s^{\delta_{i,O}} )^{n_i} \exp{\left ( -\beta\frac{1-\theta}{1-\alpha(1-\theta)}\right)}$ where $B$ stands for all the constant terms present in this reaction rate, plus density.

Source Term versus Temperature

For the stoichiometric case and a global order of two, the reaction rate is graphed versus the reduced temperature for different values of the heat release and Zeldovich parameters. A high value of $\beta$ makes the reaction rate very sharp versus temperature: the reaction is significant only beyond a temperature level (sometimes called the ignition temperature) that is close to one (the exponential term above is non-negligible for $1-\theta \sim \beta^{-1}$). The heat release has qualitatively the same impact, but less pronounced. Transposed to the case of a flame sheet, this shows that the reaction exists only in a fraction of the thermal thickness of the flame (the region close to the flame that the latter preheats, hence where the reduced temperature rises from 0 to 1) where the temperature deviates little from its maximum (density can be assumed constant and equal to its burned-gas value). Numerically capturing such a sharp reaction zone can be costly, and the lower values of $\beta$ and $\alpha$ presented here are usually preferred whenever possible.

Most problems in combustion involve turbulent flows, gas and liquid fuels, and pollution transport issues (products of combustion as well as, for example, noise pollution). These problems require not only extensive experimental work but also numerical modelling. All combustion models must be validated against experiments, as each one has its own drawbacks and limits. In this article, we will address the modelling fundamentals only.
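Before turning to the combustion regimes, a quick numerical illustration of the conservation relations above may be useful. All numbers below (heat of reaction, heat capacity, mass fractions, stoichiometric mass ratio) are assumed, roughly methane-air-like, and are not taken from the text.

```python
# Minimal sketch of the burned-gas state for a lean mixture, using the
# Shvab-Zeldovich relations above. All values are assumed / illustrative.
Q_over_Cp = 42.0e6 / 1300.0   # heat of reaction / heat capacity [K]
Y_F_u = 0.05                   # fuel mass fraction in the fresh mixture (lean)
Y_O_u = 0.226                  # oxidizer mass fraction in the fresh mixture
T_u = 300.0                    # fresh gas temperature [K]
s = 4.0                        # mass stoichiometric ratio nu_O*M_O / (nu_F*M_F)

T_b = T_u + Q_over_Cp * Y_F_u   # burned gas temperature (lean case)
Y_O_b = Y_O_u - s * Y_F_u       # leftover oxidizer in the burned gas
print(f"T_b ~ {T_b:.0f} K, Y_O_b ~ {Y_O_b:.3f}, Y_F_b = 0 (lean case)")
```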
The combustion models are often classified according to their capability to deal with the different combustion regimes.

# Three Combustion Regimes

Depending on how fuel and oxidizer are brought into contact in the combustion system, different combustion modes or regimes are identified. Traditionally, two regimes have been recognized: the premixed regime and the non-premixed regime. Over the last two decades, a third regime, sometimes considered, to a certain extent, as a hybrid of the two former ones, has emerged. It has been named the partially-premixed regime.

## The Non-Premixed Regime

Sketch of a diffusion flame

This regime is certainly the easiest to understand. Everybody has already seen a lighter, a candle or a gas-powered stove. Basically, the fuel issues from a nozzle or a simple duct into the atmosphere. The combustion reaction is the oxidization of the fuel. Because fuel and oxidizer are in contact only in a limited region but are separated elsewhere (especially in the feeding system), this configuration is the safest. Diffusion of oxidizer and fuel has to occur simultaneously with reaction to sustain combustion, the flame being a surface of separation of the fuel and oxidizer streams. The non-premixed flame has some other advantages. By controlling the flows of both reactants, it is (theoretically) possible to locate the stoichiometric interface, and thus the location of the flame sheet. Moreover, the strength of the flame can also be controlled through the same process. Depending on the width of the transition region from the oxidizer to the fuel side, the species (fuel and oxidizer) feed the flame at different rates. This is because the diffusion of the species is directly driven by the imbalance (gradient) of their distribution. A sharp transition from fuel to oxidizer creates intense diffusion of those species towards the flame, increasing its burning rate. This control of the burning rate through the diffusion process is certainly one of the reasons for the alternate names of such a flame and combustion mode: diffusion flame and diffusion regime.

Because a diffusion flame is fully determined by the inter-penetration of the fuel and oxidizer streams, it is convenient to introduce a tracer of the state of the mixture. This is the role of the mixture fraction, usually called Z or f. Z is usually taken as unity in the fuel stream and null in the oxidizer stream. It varies linearly between these two bounds, such that at any point of a frozen flow the fuel mass fraction is given by $Y_F=ZY_{F,o}$ and the oxidizer mass fraction by $Y_O = (1-Z)Y_{O,o}$. $Y_{F,o}$ and $Y_{O,o}$ are the fuel and oxidizer mass fractions in the fuel and oxidizer streams, respectively. The mixture fraction possesses a transport equation that should have no source term, as the tracer of a mixture must be a conserved scalar. First, the fuel and oxidizer mass fraction transport equations are written in usual notations: $\frac{D \rho Y_F}{D t} = \nabla\cdot \rho D\vec\nabla Y_F - \nu_F\bar M_F\dot\omega$ $\frac{D \rho Y_O}{D t} = \nabla\cdot \rho D\vec\nabla Y_O - \nu_O\bar M_O\dot\omega$ The two equations above are linearly combined into a single one such that the source term disappears: $\frac{D \rho (\nu_O\bar M_O Y_F-\nu_F\bar M_F Y_O)}{D t} = \nabla\cdot \rho D\vec\nabla (\nu_O\bar M_O Y_F-\nu_F\bar M_F Y_O)$ The quantity $(\nu_O\bar M_O Y_F-\nu_F\bar M_F Y_O)$ is thus a conserved scalar, in which one recognizes one of the Shvab-Zeldovich quantities introduced in Sec. Conservation Laws.
Note that this combination of transport equations relies on the equidiffusional approximation, i.e. all the scalars, including temperature, diffuse at the same rate. The last step is to normalize it such that it equals unity in the pure fuel stream ($Y_F=Y_{F,o}$ and $Y_O=0$) and is null in the pure oxidizer stream ($Y_F=0$ and $Y_O=Y_{O,o}$). The resulting normalized passive scalar is the mixture fraction: $Z=\frac{\Phi(Y_F/Y_{F,o})-(Y_O/Y_{O,o})+1}{\Phi+1}\qquad \Phi=\frac{\nu_O\bar M_O Y_{F,o}}{\nu_F\bar M_F Y_{O,o}}$ governed by the transport equation: $\frac{D \rho Z}{D t} = \nabla\cdot \rho D\vec\nabla Z$ The stoichiometric interface location (and thus the approximate location of the flame if the flow is reacting) is where $\Phi(Y_F/Y_{F,o})-(Y_O/Y_{O,o})$ vanishes (or where $Y_F$ and $Y_O$ are both null in the reacting case). This leads to the definition of the stoichiometric mixture fraction: $Z_s=\frac{1}{1+\Phi}$

As the mixture fraction characterizes the degree of inter-penetration of fuel and oxidizer, the elements originally present in these molecules are conserved and can be directly traced back to the mixture fraction. This has led to an alternate definition of the mixture fraction, based on element conservation. First, the elemental mass fraction $X_{j}$ of element j is linked to the species mass fractions $Y_i$: $X_j = \sum_{i=1}^{n} \frac{a_{i,j} \bar M_j}{\bar M_i} Y_i$ where $a_{i,j}$ is a matrix counting the number of atoms of element j in a molecule of species i, and n is the number of species in the mixture. The summation above is a linear combination of the $Y_i$. Because the species mass fraction transport equations, written a few lines earlier, are also linear, a transport equation for the elemental mass fraction can be written: $\frac{D \rho X_j}{D t} = \nabla\cdot \sum_{i=1}^n\frac{a_{i,j}\bar M_j}{\bar M_i}\rho D_i\vec\nabla Y_i$ Because mass is conserved, the linear combination of the source terms vanishes. Furthermore, by taking the same diffusion coefficient $D_i$ for all the species, the elemental mass fraction transport equation has exactly the same form as the species transport equation (except for the source term). Notice that the assumption of equal diffusion coefficients was also made in the previous definition of the mixture fraction; it is justified in turbulent combustion modelling by the turbulence diffusivity flattening the diffusion process in high Reynolds number flows. Hence, the elemental mass fraction transport equation has the same structure as the mixture fraction transport equation seen above. Properly renormalized to reach unity in the fuel stream and zero in the oxidizer stream, the elemental mass fraction is a convenient way of determining the mixture fraction field in a flow. Indeed, it is widely used in practice for this purpose.

#### Simplified Diffusion Flame Solution

Because the Shvab-Zeldovich variables are free of any source term, they allow a first approximation of the flame description that is independent of the kinetics. From the definition of $Z$, the Shvab-Zeldovich variables have the following expressions: $\left \{ \begin{array}{lll} Y_O-sY_F & = & Y_{O,o}-Z(sY_{F,o}+Y_{O,o}) \\ \frac{Q Y_F}{C_p} + T & = & T_{O,o} + Z(T_{F,o}-T_{O,o}+\frac{Q Y_{F,o}}{C_p}) \end{array} \right.$ Two simplified limit cases for the diffusion flame equation can be built from those relations: • In the frozen flow, the chemical source term is null everywhere. Hence, $Y_F$ and $Z$ follow strictly the same linear transport equation. Then, $Y_F = Y_{F,o} Z$, as anticipated above.
It follows that $Y_O=Y_{O,o}(1-Z)$ and $T=T_{O,o}+Z(T_{F,o}-T_{O,o})$. • The equilibrium solution, also with zero source term, is the 'historical' diffusion flame description laid out by Burke and Schumann in 1928: • Rich side: $Y_O = 0$, $Y_F = -Y_{O,o}/s + Z(Y_{F,o}+Y_{O,o}/s)$, $T=T_{F,o}+(1-Z)(T_{O,o}-T_{F,o}+Y_{O,o}\frac{Q}{sC_p})$ • Lean side: $Y_F = 0$, $Y_O = Y_{O,o} - Z(sY_{F,o}+Y_{O,o})$, $T=T_{O,o}+Z(T_{F,o}-T_{O,o}+Y_{F,o}\frac{Q}{C_p})$ From the value of $Z$ at stoichiometry given above, the adiabatic flame temperature is obtained: $T_s=T_{O,o}+Z_s(T_{F,o}-T_{O,o}+Y_{F,o} \frac{Q}{C_p})$

Remark 1: The Burke-Schumann solution for a diffusion flame, despite its age, is still one of the most widely used models, with some updates such as Mixed-Is-Burned or the Equilibrium combustion model, where the burned values of the quantities are obtained from Gibbs function extrema instead of full adiabatic consumption. The reason is that the concentrations of the species and the temperature can be calculated from the mixture fraction field, which is in turn described in terms of statistics in the turbulent flow.

Remark 2: In the Burke-Schumann solution, as the chemistry is infinitely fast, the flame reduces to an interface separating the rich and lean sides. Therefore, quantities must be continuous at the flame and their gradients observe the jump conditions, from Sec. Conservation Laws: $\left \{ \begin{array}{lll} s\rho D_F||\vec\nabla Y_F|| & = & \rho D_O||\vec\nabla Y_O|| \\ \frac{Q}{C_p}\rho D_F||\vec\nabla Y_F|| & = & \lambda ||\vec\nabla T|| \end{array} \right .$

Remark 3: From Remark 2, it is seen that the reactant fluxes must reach the flame in stoichiometric proportion (the fluxes, not the values themselves). Therefore, the relations developed in Remark 2 hold even without the equidiffusional approximation, whereas the relations based on $Z$ do not. On the other hand, the heat released by chemistry is conducted away from the flame thanks to the temperature gradients on both sides.

#### Dissipation Rate and Non-Equilibrium Effect

A very important quantity, derived from the mixture fraction concept, is the scalar dissipation rate, usually noted $\chi$. In the above introduction to non-premixed combustion, it has been said that a diffusion flame is fully controlled through: (i) the position of the stoichiometric line, dictating where the flame sheet lies; (ii) the gradients of fuel on one side and oxidizer on the other side, dictating the feeding rate of the reaction zone through diffusion and thus the strength of combustion. According to the mixture fraction definition, the location of the stoichiometric line is naturally tracked through the $Z_s$ iso-line, and it is seen here how the mixture fraction is a convenient tracer to locate the flame. In the same manner, the mixture fraction field should also be able to give information on the strength of the chemistry, as the gradients of reactants are directly linked to the mixture fraction distribution. The feeding rate of the reaction zone is characterized by the inverse of a time. Because it is done through diffusion, it must be obtained through a combination of mixture fraction gradient and diffusion coefficient (dimensional analysis): $\chi_s = \frac{(\rho D)_s}{\rho_s} ||\vec \nabla Z||^2_s \qquad (\rho D)_s = \left ( \frac{\lambda}{C_p} \right )_s$ where the subscript s refers to quantities taken where the reacting sheet is supposed to be, close to stoichiometry.
This deduction of the scalar dissipation rate, which scales the feeding rate of the flame, is obtained here through physical arguments; it is derived more mathematically from the equations below. Note that the transport coefficient for the mixture fraction is identified with the one for temperature. This notation is usually used in the literature to emphasize that the rate of temperature diffusion (which is commensurate with the rate of species diffusion) is the retained parameter (as introduced in the following approach highlighting the role of the scalar dissipation rate). Depending on whether the diffusion process is free (mixing layer, boundary layer) or controlled (stagnation), the dissipation rate, and thus the non-premixed combustion, may be of steady or unsteady nature.

##### Ignition / Burning / Extinction Curve

Consider a mixing layer between fuel and oxidizer whose strain (and thus the intensity of reactant inter-diffusion) is carefully controlled, and start from an already existing flame. Use the Damköhler number as defined above, with the inverse of the scalar dissipation rate as the characteristic mechanical time of the flow of reactants feeding the reaction zone. For high Da (low dissipation rate), the reactants diffuse slowly. The reaction is not very intense, but the chemistry is fully achieved such that the maximum temperature is reached. Now decrease the Da slowly (to avoid any unsteady effect on the chemistry). On one hand, the feeding rate of the flame through diffusion increases, and so does the reaction rate (that said, the reaction rate normalized by the dissipation rate decreases, as does Da). On the other hand, the chemistry may not have the time to `eat' every reactant molecule, and reactants begin to leak through the flame: the full conversion into sensible enthalpy is not achieved and a lower temperature results. As the dissipation rate keeps increasing, this mechanism lowers the temperature in the flame zone down to a level that cannot be sustained by combustion (which strongly depends on temperature). The diffusion flame leaves the diffusion-controlled burning regime and extinguishes suddenly: it is said to be quenched. This is experienced in real life when blowing on a small wood fire: blowing slightly increases the transfer between reactants and strengthens the reaction, but blowing too much extinguishes the fire.

S-Curve Diffusion Flame.

If the starting state is instead a frozen mixing layer between fuel and oxidizer, at low Da (high dissipation rate) the flow remains frozen. When increasing Da, there is a point where the chemistry self-accelerates and the flame lights up through a sudden increase in temperature. Because the starting temperature is low, the ignition Da is higher than the extinction Da above, exhibiting a hysteresis phenomenon. When looking at the trend of the maximum temperature versus Da, an `S'-shaped curve appears, named the S-curve for diffusion flames (see the figure beside).

##### Flamelet Equation

In turbulent combustion modelling, the flamelet regime is identified as a regime that preserves the integrity of the flame structure (Sec. The Wrinkled Regime). Turbulent eddies do not enter the structure and only contort the flame at large scale. Therefore, it is usual to call the equation that describes the diffusion flame structure when not perturbed by turbulence the flamelet equation. The model problem usually retained is the counterflow diffusion flame.
The strain imposed on the flame mimics the one due to inhomogeneities in a real turbulent flow.

Sketch of the Stretched Mixing Layer Created by the Counter-Flow Configuration.

A counter-flow diffusion flame is basically made of two ducts, front-to-front, one issuing fuel at a mass fraction $Y_{F,o}$ (possibly diluted by an inert) and temperature $T_{F,o}$, and the other issuing oxidizer at a mass fraction $Y_{O,o}$ (the inert is usually di-nitrogen in air, such that $Y_{O,o} \approx 0.233$ is a value commonly encountered) and temperature $T_{O,o}$. A stretched mixing layer develops where the flows oppose. This stretch can be modulated by the flow rate from the feeding ducts. The diffusion flame sits around the iso-surface corresponding to stoichiometry. The figure beside sketches this widely used configuration, with the coordinate frame origin taken at the stagnation location in the middle of the mixing layer.

The Howarth-Dorodnitzyn transform and the Chapman approximation are applied to the above mixture fraction transport equation. In the Chapman approximation, the thermal dependence of $(\rho D)$ is approximated as $\rho^{-1}$. The Howarth-Dorodnitzyn transform introduces $\rho$ in the space coordinate system: $\vec\nabla \rightarrow \rho\vec\nabla \quad ; \quad \nabla\cdot\rightarrow \rho\nabla\cdot$. The effect of these two mathematical operations is to `digest' the thermal variation of quantities such as density or transport coefficients. Hence, the mixture fraction equation takes a simpler mathematical form: $\rho\vec U \cdot \vec\nabla Z = \rho_s ( \rho D )_s \Delta Z$ Here the reference quantities are taken in the stoichiometric zone (s subscript). For the sake of simplicity, we shall use the potential flow result for this counter-flow problem with two identical jets, although the presence of the flame (and the associated density change) jeopardizes this theory, strictly speaking. The velocity solution is thus: $\left \{ \begin{array}{lll} u & = & -a\int\frac{dx}{\rho} \\ v & = & \frac{a}{2}\int\frac{dr}{\rho} \end{array} \right.$ $a$ is named the stretch of the mixing layer and is the inverse of a time. In such a flow, the transport equation for the mixture fraction may be written as a purely 1-D problem (assuming constant properties): $-ax\frac{dZ}{dx} = \rho_s (\rho D)_s \frac{d^2Z}{dx^2}$ from which a `length' scale emerges for the mixing layer: $\sqrt{\rho_s (\rho D)_s /a}$. Non-dimensionalized, with $\eta = x \sqrt{a/\rho_s (\rho D)_s}$: $\eta \frac{dZ}{d\eta} + \frac{d^2 Z}{d \eta^2} = 0$ The boundary conditions may be approximated as $Z\rightarrow 1$ for $\eta \rightarrow +\infty$ and $Z\rightarrow 0$ for $\eta \rightarrow -\infty$. Note that this approximation requires a layer that is thin compared to the counter-flow distance, and thus a relatively high strain rate. Then, the solution for $Z$ is: $Z=\frac{1}{2}\left(1+\mathrm{erf}\left(\frac{\eta}{\sqrt{2}}\right)\right)$ The scalar dissipation rate is deduced from this solution as: $\chi = \frac{a}{2\pi}\,\frac{\rho^2}{\rho_s^2}\,\exp\left( - 2\left[\mathrm{erf}^{-1} (2Z-1)\right]^2 \right)$ Because combustion is highly temperature-dependent, T is certainly the scalar to which attention must be paid. The temperature equation in the low-Mach number regime (Sec.
Low-Mach Number Equations) is written below in steady-state: $\rho\vec U \cdot \vec\nabla T = \nabla\cdot \frac{\lambda}{C_p}\vec\nabla T + \frac{Q}{C_p}\nu_F \bar M_F \dot\omega$ After the Chapman approximation and the Howarth-Dorodnitzyn transform: $\frac{\rho}{\rho_s}\vec U \cdot \vec\nabla T = \left ( \frac{\lambda}{C_p}\right )_s \nabla\cdot \vec\nabla T + \frac{Q}{\rho\rho_s C_p}\nu_F \bar M_F \dot\omega$ Here the reference quantities are taken in the flame, i.e. close to the stoichiometric line (s subscript). In a non-premixed system, strictly speaking, $\vec U$ is not really relevant, as the flame is fully controlled by the diffusion process. Notwithstanding, in practice, non-premixed flames must be stabilized by creating a strain in the direction of diffusion. This is the reason why the velocity is left in the equation. Because a diffusion flame is fully described by the mixture fraction field, a change of coordinates can be applied: $\left ( \begin{array}{c} r \\ x \end{array}\right ) \longrightarrow \left ( \begin{array}{c} r \\ Z \end{array}\right )$ The Jacobian of the transform is given as: $\left [ \begin{array}{cc} \frac{\partial r}{\partial r} & \frac{\partial Z}{\partial r} \\ \frac{\partial r}{\partial x} & \frac{\partial Z}{\partial x} \end{array}\right ] = \left [ \begin{array}{cc} 1 & 0 \\ 0 & (\rho l_d)^{-1} \end{array}\right ]$ Note that the diffusive layer of thickness $l_d$ is defined as the region of transition between fuel and oxidizer and is thus given by the gradient of Z along the x direction. This transform is applied to the vector operators: $\nabla\cdot = \nabla_r\cdot+\nabla_x\cdot=\nabla_r\cdot+\nabla_x\cdot Z\nabla_Z\cdot$ $\vec\nabla = \vec\nabla_r+\vec\nabla_x=\vec\nabla_r+\vec\nabla_x Z\nabla_Z\cdot$ With this transform, the above temperature equation becomes: $\frac{\rho}{\rho_s}\vec U \cdot (\vec\nabla_r T + \vec\nabla Z \nabla_Z\cdot T) = \left (\frac{\lambda}{C_p}\right )_s \left ( \nabla_r\cdot \vec\nabla_r T + \nabla_x\cdot Z \nabla_Z\cdot \vec\nabla_x Z \nabla_Z\cdot T + \nabla_x\cdot Z \nabla_Z\cdot \vec\nabla_r T + \nabla_r\cdot \vec\nabla_x Z \nabla_Z\cdot T \right ) + \frac{Q}{\rho\rho_s C_p}\nu_F \bar M_F \dot\omega$ As mentioned above, the velocity and the variation along the direction r, tangential to the main flame structure, are not supposed to play a major role. To be convinced, a variable scaling may be done, considering that the reaction zone extends over a small fraction $\varepsilon$ of the diffusion thickness $l_d$ around stoichiometry. Then the convective term of the above equation is ${\mathcal O} ( \varepsilon^{-1} )$ and the diffusive term is ${\mathcal O} ( \varepsilon^{-2} )$. By emphasizing the role of the gradient of Z along x as a key parameter defining the configuration, the following equation is obtained: $0 = \left (\frac{\lambda}{C_p}\right )_s ||\vec\nabla_x Z||^2 \Delta_Z T + \frac{Q}{\rho\rho_s C_p}\nu_F \bar M_F \dot\omega$ This equation (sometimes named the flamelet equation) serves as the basic framework to study the structure of diffusion flames. It highlights the role of the dissipation rate with respect to the strength of the source term and shows that the dissipation rate calibrates the combustion intensity.

##### Ignition

The flamelet equation may be used to explore the behaviour of the mixing layer for not-too-high strain rates. As mentioned earlier, the Damköhler number may be high enough to allow self-ignition.
In this regime, the following assumptions may be made: • the exponential argument in the reaction term, $T_a/T$, is linearized around the frozen temperature value (indeed, as the activation temperature is large, a small change in the temperature from its frozen value is magnified in the reaction rate magnitude): $T_a/T = T_a/T_o-T_a/T_o^2(T-T_o) = T_a/T_o - \phi$. It can be seen that a small change of temperature leading to $T_a/T_o^2(T-T_o)\sim {\mathcal O} (1)$ impacts the exponential term of the reaction rate. • the reactants may be approximated by their frozen flow values: $Y_F=Y_{F,o}Z-\frac{Cp T_o^2}{QT_a}\phi\approx Y_{F,o}Z \quad ; \quad Y_O=Y_{O,o}(1-Z)-s\frac{Cp T_o^2}{QT_a} \phi \approx Y_{O,o}(1-Z)$, under the hypothesis of large activation energy. The flamelet equation is recast as: $\Delta_Z\phi = - Z^{n_F}(1-Z)^{n_O} e^{\phi}\overbrace{\frac{Q\nu_F}{\rho_s\lambda_s}\frac{A}{\nabla^2_x Z \bar M_F^{n_F-1}\bar M_O^{n_O}} \frac{T_a}{T_o^2} e^{-\frac{T_a}{T_o}}Y_{F,o}^{n_F}Y_{O,o}^{n_O}}^{\mathcal D}$ In this expression $\mathcal D$ is a Damköhler number. This expression must be solved numerically but, with the help of simplifications, an analytical solution may be found. It must be noticed that this analytical solution has no quantitative prediction capability; however, it is sufficient to illustrate, for pedagogical purposes, the critical Damköhler number for ignition. First, the equation is integrated from $Z_s$ to a value of $Z$ 'somewhere on the rich side': $\int_{Z_s}^{Z} dZ \Delta_Z\phi = \left [\frac{d\phi}{dZ} \right]_{Z_s}^{Z} = - \int_{Z_s}^{Z} dZ Z^{n_F}(1-Z)^{n_O} e^{\phi}{\mathcal D}$ On its asymptote, $\phi$ may be approximated as $\hat{\phi}(Z-1)/(Z_s-1)$, with a maximum $\hat{\phi}$ on the stoichiometric line, where ignition is expected to be the strongest. Then, on the stoichiometric line, the derivative of $\phi$ vanishes and, sufficiently far from the stoichiometric line, it tends to $\hat{\phi}/(Z_s-1)$. To keep the solution simple, we choose the integration limit on the border of the region where $\phi$ remains within 10% of its maximum, such that it may be approximated as constant on the right-hand side. The value of $Z$ that fits this trade-off of being sufficiently far from $Z_s$ to allow the derivative of $\phi$ to reach its asymptote and sufficiently close to prevent $\phi$ from decreasing is estimated at 40% of $Z_s-Z$. This estimate, quite disputable, is based on the temperature profiles observed in diffusion flames at low Da, when the diffusive effect strongly smooths them.

Ignition Da curve in a cold diffusion layer.

The resulting equation has the form: $\hat{\phi}e^{-\hat{\phi}}=-{\mathcal D} \overbrace{(Z_s-1)\left (\frac{Z^2-Z_s^2}{2}-\frac{Z^3-Z_s^3}{3}\right )}^{a}$ This makes $\mathcal D$ a function of $\hat{\phi}$, the maximum temperature perturbation in the ignition zone at stoichiometry, which has a maximum for $\hat{\phi}=1$. This gives the bottom tipping point of the S-curve presented above. For the academic case $Z_s = .5$ and $n_F=n_O=1$, this gives $\mathcal D \approx 16$, which is in satisfactory agreement with the full numerical solution. It is interesting to note that the numerical solution follows the physical process of ignition of the diffusion layer by slowly increasing the Damköhler number. Therefore, it cannot properly capture the turning point and the double-valued relation. Instead, once the maximum Damköhler number corresponding to ignition is reached, the reduced temperature soars sharply. When $\mathcal D$ is increased further into the forbidden range, no solution can be found.
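The turning-point argument can be checked with a minimal numerical sketch. It evaluates ${\mathcal D}(\hat\phi) = -\hat\phi\, e^{-\hat\phi}/a$ and locates its maximum; the integration limit is read here as $Z = Z_s + 0.4\,(1-Z_s)$, which is only one possible interpretation of the 40% estimate above, so the numbers are indicative only.

```python
import math

# Sketch of the ignition turning point: D(phi_hat) = -phi_hat*exp(-phi_hat)/a,
# maximum at phi_hat = 1. Integration limit below is an assumed reading of "40%".
Z_s = 0.5
Z = Z_s + 0.4 * (1.0 - Z_s)
a = (Z_s - 1.0) * ((Z**2 - Z_s**2) / 2.0 - (Z**3 - Z_s**3) / 3.0)

phis = [0.25 * k for k in range(1, 13)]
D = [-p * math.exp(-p) / a for p in phis]
i_max = max(range(len(D)), key=lambda i: D[i])
print(f"maximum of D at phi_hat ~ {phis[i_max]:.2f}, D ~ {D[i_max]:.1f}")
# -> phi_hat ~ 1 and D of the order of the value quoted in the text (~16).
```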
Physically, for larger values of the Damköhler number, the configuration jumps from this nearly frozen situation to an equilibrium situation with a well developed diffusion flame; see the S-curve above. The simplified equation for ignition is thus no longer suited, and a specific solution for a developed diffusion flame is now presented.

##### Diffusion Burning Process

From Sec. Simplified Diffusion Flame Solution, the temperature jump at stoichiometry is identified as $Q Y_{F,o} Z_s/Cp$. Although chemistry in a diffusion flame is usually very fast, one expects a small residual amount of reactants in the reaction zone lying along the stoichiometric line. This means that equilibrium is not reached in practice and that the temperature is somewhat lower than the equilibrium temperature. Thermodynamics is moderated by kinetic effects or, in other words, the curves T(Z) and Y(Z) depend on the dissipation rate and are not simply made of straight lines meeting at stoichiometry as in the simplified Burke-Schumann solution for diffusion flames. To a certain extent, the flamelet equation (Sec. Flamelet Equation) may be used to present a solution for the non-premixed flame when the flame is already ignited and the chemistry strength is controlled by diffusion (it has been stressed above that an interesting aspect of diffusion flames is the ability to control the reaction rate through the level of mixing in the neighbourhood of the flame). As already emphasized, due to the strong non-linearity in temperature, the structure of a burning flame can never be far from the Burke-Schumann simplified solution. Non-dimensionalized, the temperature solution in the flamelet may be written as: $\theta=1+(Z-Z_s)\Delta T_o + \frac{Z_s - Z}{1-Z_s} -\varepsilon\Gamma$ on the rich side of the mixing layer, and: $\theta=1+(Z-Z_s)\Delta T_o + \frac{Z-Z_s}{Z_s} -\varepsilon\Gamma$ on the lean side. One recognizes the structure of the Burke-Schumann solution (Sec. Simplified Diffusion Flame Solution). $\Delta T_o$ is the non-dimensionalized temperature difference between the reactant feed streams: $\Delta T_o = \frac{T_{F,o}-T_{O,o}}{Q Y_{F,o} Z_s/C_p}$ Those solutions are amended by the term $\varepsilon\Gamma$, with $\Gamma\sim{\mathcal O}(1)$ in the reaction zone and vanishing on the sides. This amendment corresponds to the departure from the equilibrium solution due to kinetic effects. The quantity $\varepsilon$ is taken small for the physical reasons explained above. It is interesting to introduce $\gamma=2(Z_s-1)(Z_s\Delta T_o+1)+1$ (combining the temperature difference between the reactant feeding streams and the stoichiometry of the combustion) as a synthetic parameter describing the geometry of the flame structure in the (Z,T) (Burke-Schumann) diagram: $\theta=1-\frac{(Z-Z_s)}{2Z_s(1-Z_s)}(\gamma+1) -\varepsilon\Gamma$ on the rich side of the mixing layer, and: $\theta=1-\frac{(Z-Z_s)}{2Z_s(1-Z_s)}(\gamma-1) -\varepsilon\Gamma$ on the lean side. For $\gamma=0$ the slope of temperature vs Z is the same on both sides of the flame. For $\gamma > 0$ the rich side slope is the sharpest, and for $\gamma <0$ it is the lean side. An interesting case is when $|\gamma|>1$: one of the feeding streams enters the flame zone with a temperature higher than the flame temperature. In that case, whatever the dissipation rate, the flame is strained by a hot stream that sustains its temperature, such that it is never extinguished.
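A minimal sketch of the equilibrium limit ($\Gamma \rightarrow 0$) of these expressions helps visualise the role of $\gamma$; the values of $Z_s$ and $\gamma$ below are purely illustrative.

```python
# Equilibrium limit (Gamma -> 0) of the theta(Z) expressions above, evaluated
# for a few illustrative values of the asymmetry parameter gamma.
def theta_equilibrium(Z, Z_s, gamma):
    slope = (Z - Z_s) / (2.0 * Z_s * (1.0 - Z_s))
    if Z >= Z_s:                          # rich side
        return 1.0 - slope * (gamma + 1.0)
    return 1.0 - slope * (gamma - 1.0)    # lean side

Z_s = 0.5
for gamma in (0.0, 0.5, -0.5):
    profile = [(round(Z, 2), round(theta_equilibrium(Z, Z_s, gamma), 3))
               for Z in (0.0, 0.25, 0.5, 0.75, 1.0)]
    print(f"gamma = {gamma:+.1f}:", profile)
# gamma = 0 gives the same slope on both sides; gamma > 0 steepens the rich side.
```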
Structure of a diffusion flame in temperature and species mass fractions vs mixture fraction, burning under the diffusion-controlled regime.

The departure of the temperature from equilibrium in the reaction zone is reflected in the species concentrations, as it corresponds to still unburned reactants. With the help of the above Conservation Laws one finds: • Lean side: $Y_F=\varepsilon\Gamma Y_{F,o}Z_s$, $Y_O=Y_{O,o}(1-Z)-s(Y_{F,o}Z-\varepsilon\Gamma Y_{F,o}Z_s)$ • Rich side: $Y_F=Y_{F,o}Z+\frac{1}{s}(s Y_{F,o}Z_s\varepsilon\Gamma-Y_{O,o}(1-Z))$, $Y_O=sY_{F,o}Z_s \varepsilon\Gamma$ The picture beside illustrates the structure of a diffusion flame in its `normal' regime. Recasting the Flamelet Equation above in terms of the reduced temperature, we find: $\chi_s\frac{\partial^2 \theta}{\partial Z^2} + \dot\omega\frac{\nu_F\bar M_F}{\rho\rho_s^2 Y_{F,o} Z_s} = 0$ To describe the structure of the diffusion flame, the reduced mixture fraction is defined: $\xi = \frac{Z-Z_s}{2Z_s(1-Z_s)\varepsilon}$ The purpose of the reduced mixture fraction is to focus on the reaction zone. This reaction zone is supposed to be located on the stoichiometric line (this is why the reduced mixture fraction is centred on $Z_s$) and to be very thin (hence the introduction of the magnifying factor $\varepsilon$). One may convince oneself, by considering the above equation linking $\theta$ to $Z$, that one must stay close to the stoichiometric line if one wants the non-equilibrium effect to be noticeable compared to the adiabatic drop of the temperature. The equation is also written in terms of $\Gamma$, the non-equilibrium departure rescaled by $\varepsilon$, and the source term as presented in Main Specificities of Combustion Chemistry is made explicit: $\left \{ \begin{array}{llll} \frac{\chi_s}{4Z_s^2(1-Z_s)^2\varepsilon}\frac{\partial^2 \Gamma}{\partial \xi^2} & = & A \frac{\rho\nu_F}{\rho_s^2} \left ( \frac{Y_{F,o}}{\bar M_F} \right )^{n_F-1} \left ( \frac{Y_{O,o}}{\bar M_O} \right )^{n_O} e^{-\frac{\beta}{\alpha}} Z_s^{n_F+n_O-1} \varepsilon^{n_F+n_O} \Gamma^{n_F} \Phi^{n_O} (\Gamma-2\xi)^{n_O} e^{-\beta\varepsilon (\xi(\gamma-1)+\Gamma)} & Z<Z_s \\ \frac{\chi_s}{4Z_s^2(1-Z_s)^2\varepsilon}\frac{\partial^2 \Gamma}{\partial \xi^2} & = & A \frac{\rho\nu_F}{\rho_s^2} \left ( \frac{Y_{F,o}}{\bar M_F} \right )^{n_F-1} \left ( \frac{Y_{O,o}}{\bar M_O} \right )^{n_O} e^{-\frac{\beta}{\alpha}} Z_s^{n_F+n_O-1} \varepsilon^{n_F+n_O} (\Gamma+2\xi)^{n_F} \Phi^{n_O} \Gamma^{n_O} e^{-\beta\varepsilon (\xi(\gamma+1)+\Gamma)} & Z>Z_s \end{array} \right.$ It is seen that one can form a Damköhler number (see Sec. Damköhler Number for the general meaning of the Damköhler number): $Da=\frac{A\rho\nu_F}{\chi_s\rho_s^2}\left(\frac{Y_{F,o}}{\bar M_F}\right)^{n_F-1}\left(\frac{Y_{O,o}}{\bar M_O}\right )^{n_O}e^{-\frac{\beta}{\alpha}}\Phi^{n_O} 4(1-Z_s)^2 Z_s^{n_F+n_O+1}$ and that this number must scale as $\varepsilon^{-(n_F+n_O+1)}$, i.e. $Da\,\varepsilon^{n_F+n_O+1}\sim{\mathcal O}(1)$. So, to have a thin reaction zone, the Damköhler number must be high. Physically, this means that when fuel and oxidizer meet close to the stoichiometric line, they combine quickly, such that the reaction can be completed in a very narrow region. In the diffusion burning process, Da is sufficiently high to have $\beta\varepsilon \ll 1$, such that the departure from equilibrium of the temperature has no impact on the exponential term at the end of the RHS of the equations above. This exponential term is approximately unity (in the flame zone, so for small $\xi$).
Physically, this is the Burke-Schumann solution when the chemistry is sufficiently fast that the kinetic limitation (modelled by this exponential term in the reaction rate) is nonexistent. This gives more details on the structure of the diffusion flame for the simplified solution presented above in Sec. Simplified Diffusion Flame Solution. In particular, the relationship between Da and $\varepsilon$ allows an estimation of the quantities in the flame zone. As for the boundary conditions, in the diffusion regime the chemistry is fast enough to deplete the reactants diffusing into the reaction zone. Then, no presence of fuel on the oxidizer side, and vice versa, is expected. In a concise manner, we must have: $\lim_{\xi \rightarrow \infty} \Gamma = \lim_{\xi \rightarrow \infty} \Gamma\pm 2\xi = 0$

##### Extinction

When, for a given Damköhler number, the Zeldovich parameter is so high that its product with $\varepsilon$ is no longer negligible, the combustion regime enters an extinction process. Physically, the small departure from equilibrium is sufficient to impact the RHS exponential of the temperature equations above, such that the non-linear temperature dependence becomes noticeable. $Da \varepsilon^{n_F+n_O+1} \sim {\mathcal O}(1)$ and $\beta \varepsilon \sim {\mathcal O}(1)$ lead to the construction of a reduced Damköhler number $\delta = Da /\beta^{n_F+n_O+1}$. This reduced Damköhler number is expected to be around unity in the extinction regime, meaning that the Da is still high enough to sustain combustion, but the non-linearity of combustion due to a high value of $\beta$ makes it very dependent on the temperature, such that a weakening Damköhler number, further enhancing the departure (temperature drop) from equilibrium, will quench the flame. The fact that the chemistry is not fast enough to `eat' the reactants entering the flame and turn them into full equilibrium is synonymous with saying that, at extinction, we expect a leakage of one or both reactants across the flame, such that a concentration of the order of $\varepsilon$ may be found on the oxidizer side for the fuel and/or vice versa. This quantity is a function of $\delta$. When the leakage is too intense, the combustion is quenched. There is thus a quenching value $\delta=\delta_e$. Depending on the value of $\gamma$, as said above, the temperature slope on either side of the flame may be different. When reaching extinction, the leakage is expected to happen for the reactant coming from the side with the weaker temperature slope, through the side with the stronger temperature slope. The maximum temperature is lower than the adiabatic flame temperature, as previously mentioned, and also shifted towards the side of the flame with the weaker temperature slope. Hence, depending on the value of $\gamma$, the flame at extinction is described by one of the two equations above. Fortunately, due to the symmetrical role of $\gamma$, a synthetic correlation for the Da at extinction can be obtained from the numerical solution of the above set of equations (with $n_i = 1$): $\delta_e=e(1-|\gamma| - (1-|\gamma|)^2 + 0.26(1-|\gamma|)^3+0.055(1-|\gamma|)^4)$ This correlation, together with its equation, now in a generic form for each side of the flame: $\Gamma_{\xi\xi}=\delta\Gamma(\Gamma+2|\xi|)e^{-\Gamma}e^{-|\xi|(1-|\gamma|)} \qquad \mathrm{sign}(\xi)=-\mathrm{sign}(\gamma)$ holds a special place in combustion theory and modelling. It was first proposed in [3] in 1974 and has since been used as a canonical equation for problems that can be reduced to it.
It is also at the heart of the understanding of non-premixed combustion modelling, such as the flamelet model that will be introduced later. It is important to notice that, due to the leakage of reactants at extinction, the boundary conditions must be weaker than for the diffusion-controlled regime: $\lim_{\xi\rightarrow \infty} \Gamma_{\xi} = 0, 2\times \mathrm{sign}(\xi)$ 0 on the side where the flame lies at extinction (depending on $\gamma$, as seen above, and on the side where the equation is written) and 2 for the other. The idea is that, compared to the diffusion-controlled regime, the shape of the Burke-Schumann asymptotes is conserved but with a shift corresponding to the unburned leakage and the associated drop in sensible enthalpy. Then, compared to the diffusion-controlled regime, the asymptotic boundary gradients are the same, but not the actual values.

In the case where the flame is highly asymmetrical ($|\gamma| \rightarrow 1$), either because one of the reactant streams is preheated or because the stoichiometry is far from $Z_s = .5$, an analytical approach may be found for $\delta_e$. This approach allows us to illustrate qualitatively the mechanism of extinction described only with words so far. Take the case $\gamma < 0$, meaning that the slopes (temperature and species) are sharper on the oxidizer side. Then, the maximum temperature zone is shifted towards the fuel side (as may be seen, for instance, even for a moderate departure from symmetry in the figure above) and a leakage of fuel is expected on the oxidizer side. In a first approximation, the leakage amount is approximated as $b/(1+\gamma)$, with $b\sim{\mathcal O}(1)$. So, for highly asymmetrical flames ($\gamma\rightarrow -1$), this leakage becomes large before extinction may occur. One may convince oneself by considering that the asymmetry is due to preheating of the fuel stream. When $\gamma\rightarrow -1$, it means that the preheating is such that the fuel stream temperature is close to the adiabatic temperature of the system. One may imagine that the flame is then fairly robust and may bear a high level of leakage before being quenched. Actually, when $|\gamma|\approx 1-\varepsilon$, by looking at the equation linking $\theta$, $Z$, and $\Gamma$, it is seen that a special scaling occurs between the non-equilibrium departure from the B-S solution and the drop of the B-S temperature away from stoichiometry. The non-premixed flame thus has the freedom to switch to a `premixed' regime on the side with the weakest temperature slope of the B-S solution, at a location corresponding to a strong chemistry. Hence, the fuel leakage on the oxidizer side is quantified as $\lim_{\xi\rightarrow -\infty} (\Gamma+2\xi) = b/(1+\gamma)$ (remember that the maximum temperature location is expected on the fuel side, and the corresponding equation for the diffusion flame, as developed above, is used). The new asymptote intersects the $\xi$ axis at the location $b/2/(1+\gamma)$, which is taken as the flame position at extinction (in other words, the dissipation has the effect of shifting the flame with respect to the Burke-Schumann infinitely-fast-chemistry / equilibrium location). We use this shift to make a change of frame of reference, now centred around the flame position at extinction as just defined: $\zeta = \xi - b/2/(1+\gamma)$.
The equation for the flame above becomes: $\Gamma_{\zeta\zeta} = \delta_e e^{-\frac{b}{2}} \left (\frac{b}{1+\gamma}\right)^{n_F} e^{-\Gamma-\zeta(1+\gamma)} \Gamma^{n_O}\left ( 1 + \frac{1+\gamma}{b} (\Gamma +2\zeta) \right )^{n_F}$ With this shift, stronger boundary conditions may be devised: • for $\zeta \rightarrow -\infty$, we are on the oxidizer side of the flame in the new frame of reference and the fuel concentration must equal its leakage level, hence $\Gamma+2\zeta\rightarrow 0$; • for $\zeta \rightarrow +\infty$, given that we are in the limit case $\gamma\rightarrow -1$, the oxidizer is expected to be consumed instead of leaking through the fuel side: $\Gamma\; ; \; \Gamma_{\zeta}\rightarrow 0$. This equation may be simplified to first order in $1+\gamma$: $\Gamma_{\zeta\zeta} = \delta_e e^{-\frac{b}{2}}\left ( \frac{b}{1+\gamma} \right)^{n_F} e^{-\Gamma} \Gamma^{n_O}$ A first integral gives: $\left [\frac{\Gamma_{\zeta}^2}{2}\right ]_{-\infty}^{+\infty} = \delta_e e^{-\frac{b}{2}} \left [\frac{b}{1+\gamma}\right ]^{n_F} \int_{\Gamma(-\infty)}^{\Gamma(+\infty)}d\Gamma\, e^{-\Gamma} \Gamma^{n_O}$ With the boundary conditions: • $\Gamma,\;\Gamma_{\zeta}\rightarrow 0$ as $\zeta\rightarrow+\infty$ • $\Gamma \rightarrow +\infty$ as $\zeta\rightarrow-\infty$ • $\Gamma_{\zeta}\rightarrow -2$ as $\zeta\rightarrow-\infty$ one obtains: $2=\delta_e e^{-\frac{b}{2}}\left (\frac{b}{1+\gamma} \right )^{n_F} n_O!$ The RHS is a measure of the integrated reaction rate across the flame, which is a function of the leakage term b. One retrieves some of the arguments developed previously. This RHS is positive and increases with b up to a maximum corresponding to $b=2n_F$; beyond this value it decays, because the dissipation imposed on the flame is too strong for the chemistry. At this maximum, the minimum Damköhler number that the flame can withstand is found, which is identified as the Damköhler number at extinction: $\delta_e = \frac{2 e^{n_F}}{n_O!\left (\frac{2 n_F}{1+\gamma}\right )^{n_F} }$ For $n_F=n_O=1$, the first term of the correlation from [2] as provided above is recovered.

Effect of the Damköhler number on the temperature profile.

The figure beside illustrates the effect of the Damköhler number on the temperature profile across the flame (i.e. vs the mixture fraction). Those solutions have been obtained from the flamelet equation presented above with reactant orders and free-stream mass fractions equal to unity. The stoichiometry is $Z_s = .5$ and the reactant free-stream temperatures are the same. Two values of $\delta$ are chosen: 5, which is far from extinction, and 2, which is relatively closer to extinction for this set of combustion parameters, as seen below. It is seen that reducing the ratio of chemistry strength to dissipation strength by reducing $\delta$ leads to a larger departure from equilibrium in the flame zone, around $Z_s$. The temperature profile is smoothed out in the flame.

Effect of the Damköhler number on the flamelet maximum temperature.

The figure on the right tracks the change in the maximum temperature in the flame zone for decreasing $\delta$. This figure illustrates well the non-linearity of combustion vs temperature: once the extinction value is passed, the temperature drop is as sharp as a cliff, meaning a sudden end of chemistry. Physically, this is what is experienced when blowing too much on a wood fire: as explained earlier, blowing reduces the Damköhler number. Below a threshold value, one experiences a sudden blow-off / extinction of the fire.
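The following sketch compares the asymptotic result just derived (for $n_F=n_O=1$, where it reduces to $\delta_e = e(1+\gamma)$) with the polynomial correlation quoted in the Extinction section; the $\gamma$ values are illustrative only.

```python
import math

# Asymptotic extinction Damkohler number (n_F = n_O = 1) vs the polynomial
# correlation quoted earlier, for a few illustrative asymmetry factors gamma.
def delta_e_correlation(gamma):
    g = 1.0 - abs(gamma)
    return math.e * (g - g**2 + 0.26 * g**3 + 0.055 * g**4)

def delta_e_asymptotic(gamma, n_F=1, n_O=1):
    return 2.0 * math.exp(n_F) / (math.factorial(n_O) * (2.0 * n_F / (1.0 + gamma))**n_F)

for gamma in (-0.99, -0.9, -0.7, -0.5):
    print(f"gamma = {gamma:+.2f}: asymptotic = {delta_e_asymptotic(gamma):.3f}, "
          f"correlation = {delta_e_correlation(gamma):.3f}")
# The two agree as gamma -> -1 (highly asymmetric flame), where the expansion is valid,
# and diverge for moderate asymmetry, as expected from a leading-order result.
```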
Reduced Damköhler number at extinction vs diffusion flame asymmetry factor.

The Damköhler number at extinction is given as a function of $\gamma$ in the figure beside. It is seen that when the asymmetry becomes significant, the flame robustness improves considerably, as predicted.

Remark: To develop this extinction analysis, we have assumed that the Zeldovich parameter was asymptotically high. This is what is commonly named Activation Energy Asymptotics (AEA). It uses the fact that combustion, under normal conditions, is highly temperature dependent, by assuming that the activation temperature of the combustion chemistry tends to infinity. For the numerical simulation of the same extinction solutions, finite values of the Zeldovich parameter are used. When the combustion parameters are scaled to values representative of actual chemistry, it is seen that the discrepancies with the AEA may be substantial. In order to get close to the analytical development, the numerical simulations must use unreasonably high values of the Zeldovich parameter. It must also be noticed that pushing the perturbation analysis to second order brings a correction that is even worse, since it predicts an infinitely robust flame at $\gamma=0$.

## The Premixed Regime

Sketch of a premixed flame

In contrast to the non-premixed regime above, the reactants are here well mixed before entering the combustion chamber. Chemical reaction can occur everywhere, and this flame can propagate upstream into the feeding system as a subsonic (deflagration regime) chemical wave. This raises serious safety issues. Some strategies prevent it: (i) the mixture is made too rich (a lot of fuel compared to oxidizer) or too lean (too much oxidizer), such that the flame is close to its flammability limits (it cannot easily propagate); (ii) the feeding system and the regions where the flame is not wanted are designed to impose strong heat losses on the flame in order to quench it. For a given thermodynamical state of the mixture (composition, temperature, pressure), the flame has its own dynamics (speed, heat release, etc.) over which there is little control: the wave exchanges mass and energy with the fresh gases through diffusion processes. On the other hand, those well defined quantities are convenient to describe the flame characteristics. The mechanism of spontaneous propagation towards the fresh gas, through thermal transfer from the combustion zone to the immediately adjacent slice of fresh gas until the latter eventually reaches its ignition temperature, was highlighted as early as the end of the 19th century by Mallard and Le Chatelier.

The reason the chemical wave is contained in a narrow reaction region propagating upstream follows from the discussion on the non-linearity of combustion with temperature in Sec. Fundamental Aspects. It is of interest to compare the orders of magnitude of the temperature dependent term $\exp{(-\beta(1-\theta)/(1-\alpha(1-\theta)))}$ of the reaction source upstream in the fresh gas ($\theta\rightarrow 0$) and in the reaction zone close to the equilibrium temperature ($\theta\rightarrow 1$), for the set of representative values $\beta = 10$ and $\alpha=0.9$. It is found that the reaction is about $10^{43}$ times slower in the fresh gas than close to the burned gas. It is known that the chemical time scale is about 0.1 ms in the reaction zone of a typical flame, so the typical reaction time in the fresh gas under normal conditions is about $10^{39}$ s.
This is to be compared with the order of magnitude of the estimated age of the Universe: $10^{17}$ s. Non-negligible chemistry is thus confined to a thin reaction zone stuck to the hot burned gas at equilibrium temperature. In this zone, the Damköhler number is high, in contrast with the fresh mixture. It is natural and convenient to consider that the reaction rate is strictly zero everywhere except in this small reaction zone (one recovers the Dirac-like shape of the reaction profile, provided that one sees the upstream flow as a region of increasing temperature towards the combustion zone and the downstream flow as at full equilibrium).

As the premixed flame is a reaction wave propagating from burned to fresh gases, the basic parameter is known to be the progress variable. In the fresh gas, the progress variable is conventionally set to zero; in the burned gas, it equals unity. Across the flame, the intermediate values describe the progress of the reaction turning the fresh gas penetrating the flame sheet into burned gas. A progress variable can be built from any quantity, such as temperature or reactant mass fraction, provided it is bounded by a single value in the burned gas and another in the fresh gas. The progress variable is usually named c; in usual notations: $c=\frac{T-T_f}{T_b-T_f}$ It is seen that c is a normalization of a scalar quantity. As mentioned above, the scalar transport equations are assumed linear, such that the transport equation for c can be obtained directly. Actually, the transport equation for T (Sec. Transport Equations) is linear if a constant heat capacity is further assumed (combustion of hydrocarbons in air implies a large excess of nitrogen, whose heat capacity varies only slightly), and the progress variable equation is directly obtained (here for a deficit of fuel, i.e. lean combustion): $\frac{D\rho c}{Dt}=\nabla\cdot\rho D\vec\nabla c + \frac{\nu_F\bar M_F}{Y_{F,u}}\dot\omega \qquad \rho D = \frac{\lambda}{C_p}$

The mention of a deficit or excess of fuel leads to the introduction of another quantity: the equivalence ratio. The equivalence ratio, usually noted $\Phi$, is the ratio of two ratios. The first one is the ratio of the mass of fuel to the mass of oxidizer in the mixture. The second one is the same ratio for a mixture at stoichiometry. Hence, when the equivalence ratio equals unity, the mixture is at stoichiometry. If it is greater than unity, the mixture is named rich, as there is an excess of fuel. In contrast, when it is smaller than unity, the mixture is named lean. The equivalence ratio presented here for premixed flames is distinct from the equivalence ratio introduced earlier regarding the non-premixed regime. Basically, the equivalence ratio as defined for non-premixed flames gives the equivalence ratio of a premixed mixture with the same masses of fuel and oxidizer. Moreover, the equivalence ratio as defined for a premixed mixture can be obtained from the mixture fraction (it is then the local equivalence ratio at a point in the non-homogeneous mixture described by the mixture fraction). From the definitions given above: $\Phi=\frac{Z}{1-Z}\frac{1-Z_s}{Z_s}$

#### Premixed Flame Péclet Number

Earlier in this section, it has been said that a premixed flame possesses its own dynamics as a freely propagating surface, and thus has characteristic quantities. For this reason, a Péclet number may be defined, based on these quantities.
The Péclet number has the same structure as the Reynolds number but the dynamical viscosity is replaced by the ratio of the thermal conductivity and the heat capacity of the mixture. The thickness $\delta_L$ of a premixed flame is essentially thermal. It means that it corresponds to the distance of the temperature rise between fresh and burned gases. This thickness is below the millimetre for conventional flames. The width of the reaction zone inside this flame is even smaller, by about one order of magnitude. This reaction zone is stuck to the hot side of the flame due to the high thermal dependency of the combustion reactions, as seen above. Hence, the flame region is essentially governed by a convection-diffusion process, the source term being negligible in most of it. It is convenient to write the progress variable transport equation in a steady-state framework. The quantities at flame temperature ($_f$) are used to non-dimensionalize the equation: $\overbrace{(\rho ||\vec S_L||)}^{||\vec M||}\frac{\vec S_L}{S_L}\vec\nabla c = (\rho D)_f \nabla\cdot (\rho D)^* \vec\nabla c$ Note that the source term is neglected, consistently with what has been said above. This convection-diffusion equation makes appear a first approximation of a flame Péclet number: $Pe_f = \frac{||\vec M|| \delta_L}{(\rho D)_f} \approx 1$ From the Péclet number, it is possible to obtain an expression for the flame velocity (remembering that $\delta_L/S_{L,f} \approx \tau_c$, vid. inf. Sec. Three Turbulent-Flame Interaction Regimes): $S_{L,f}^2\approx \frac{(\rho D)_f}{\rho_f \tau_c}$ For typical hydrocarbon flames, the speed is some tens of centimetres per second and the diffusivity is some $10^{-5}$ square metres per second. The chemical time in the reaction zone of about one tenth of a millisecond is recovered. We anticipate here a result that will be obtained in the detailed analysis of the flame speed (vid. inf.), that is: the laminar flame speed is proportional to the square root of the diffusion coefficient and the reaction rate. This result is of interest to anticipate the trends of the premixed flame dynamics. #### Details of the Premixed Unstrained Planar Flame A plane combustion wave propagating in a homogeneous fresh mixture is the reference case to describe the premixed regime. At constant speed, it is convenient to see the flame at rest with a flowing upstream mixture. This is actually the way propagating flames are usually stabilized. In the frame of description, the physics is 1-D, steady with a uniform (the flame is said unstrained) mass flowing across the system. Two types of equations are thus sufficient to describe the problem, the temperature transport equation and the species transport equations, as in Sec. Transport Equations. The transport coefficients will be chosen as equal: $\rho D_i = \lambda / C_p$ (unity Lewis numbers). Suppose the 1-D domain is described thanks to a conventional (Ox) axis with a flame propagating towards negative x (this is the conventional usage), the boundary conditions are: • in the frozen mixture: • $Y_i \rightarrow Y_{i,u} \qquad ; \qquad x\rightarrow-\infty$ • $T \rightarrow T_u \qquad ; \qquad x\rightarrow-\infty$ • in the burned gas region supposed at equilibrium: • $Y_i \rightarrow Y_{i,b} \qquad ; \qquad x\rightarrow+\infty$ • $T \rightarrow T_b \qquad ; \qquad x\rightarrow+\infty$ $Y_{i,b}$ and $T_b$ are obtained from Sec. Conservation Laws. 
The quantities that have been mentioned just above (scalar and temperature profiles, mass flow rate through the system) are the solution to be sought. According to the discussions above, the temperature transport equation in its full normalized form may be written as (lean/stoichiometric case): $||\vec M|| \frac{\partial \theta}{\partial x} = \frac{\partial \ }{\partial x}\frac{\lambda}{Cp} \frac{\partial \theta}{\partial x} + \frac{\nu_F \bar M_F B}{Y_{F,s}} \prod_{i=O,F}(Y_{i,u}^*-\theta\frac{Y_{F,s}}{Y_{i,s}}s^{\delta_{i,O}} )^{n_i} \exp{-\beta\frac{1-\theta}{1-\alpha(1-\theta)}}$ This equation is further simplified by the variable change $d\xi=||\vec M||/(\lambda/Cp)dx$: $\frac{\partial \theta}{\partial \xi}=\frac{\partial^2 \theta}{\partial \xi^2} + \overbrace{\frac{\lambda/Cp}{||\vec M||^2}\frac{\nu_F \bar M_F B}{Y_{F,s}}}^{\Lambda} \prod_{i=O,F}(Y_{i,u}^*-\theta)^{n_i} \exp{-\beta\frac{1-\theta}{1-\alpha(1-\theta)}}$ Although somewhat out of scope, the existence and uniqueness of the solution of this type of equation are usually demonstrated with the help of the Schauder Theorem and Maximum Principle. From the point of view of physicists and engineers, the solution that is found analytically is de facto considered as the unique solution of the equation.

##### Scenarios of the Combustion Process in the Phase Portrait

In the frame moving with the flame, both phase variables are the reduced temperature and its gradient. To ease the reading with usual notations, it is written: $X_1 = \theta \quad ; \quad X_2 = \partial \theta / \partial \xi = \dot \theta$. The following system arises: $\left\{ \begin{array}{lll} \dot X_1 & = & X_2 \\ \dot X_2 & = & X_2 - \varpi(X_1) \end{array} \right.$ with $\varpi$ being the full source term in the above equation. In the frame moving with the flame, two singular nodes are found in the frozen flow $(X_1,X_2) = (0,0)$ and the equilibrium region $(\theta_b,0)$, i.e. when $\varpi(X_1)$ vanishes. $x_1,x_2$ are defined as small departures from the singular nodes such that the linearized system in their neighbourhood is: $\left\{ \begin{array}{lll} \dot x_1 & = & l_{1,1} x_1 + l_{1,2} x_2 \\ \dot x_2 & = & l_{2,1} x_1 + l_{2,2} x_2 \end{array} \right.$ provided: $l_{1,1} = 0 \quad l_{1,2} = 1 \quad l_{2,1} = -\varpi'_{X_1 = 0,\theta_b} \quad l_{2,2} = 1$. The characteristic polynomial is, in usual notations: $s^2 - s + \varpi'$ such that the eigenvalues are: $s^{\pm}=\frac{1\pm\sqrt{1-4\varpi'}}{2}$ A priori, those eigenvalues may be (i) real and distinct, (ii) real and identical, or (iii) complex conjugate. In the first case, the orbits in the phase diagram are organized, in the immediate neighbourhood of the singular node, with respect to the eigenvector directions associated with the eigenvalues. The following task is to identify the nature of those eigenvalues and of the corresponding nodes. Because $0 < X_1 < \theta_b$ is bounded, complex eigenvalues are excluded as they would lead to a spiral node. This remark is important for the node on the cold side as it imposes a bound: $\varpi'_{X_1=0} \le \frac{1}{4}$ As the mass flow rate through the flame is included into $\varpi$, it imposes a minimum value on the flame speed to tackle the cold boundary difficulty (rise of the chemical rate in the frozen flow). In this condition, it is an unstable node (improper in case of equality). On the other hand, because $\varpi'|_{X_1=\theta_b}$ is not positive, the node on the hot side is found to be a saddle point.
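This node classification can be checked directly on the eigenvalues; a minimal numerical sketch (the values of $\varpi'$ at the two nodes are assumed for illustration only):

```python
# Minimal sketch (assumed values of w' = d(varpi)/dX1 at the two nodes):
# eigenvalues s± = (1 ± sqrt(1 - 4 w')) / 2 of the linearized system.
import numpy as np

def eigenvalues(w_prime):
    # real as long as w' <= 1/4, otherwise a (forbidden) spiral node would appear
    root = np.sqrt(1.0 - 4.0 * w_prime)
    return (1.0 + root) / 2.0, (1.0 - root) / 2.0

print(eigenvalues(0.05))   # cold node, small positive w': two positive roots -> unstable node
print(eigenvalues(-2.0))   # hot node, negative w': roots of opposite signs   -> saddle point
```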
The overall scenario of combustion within the flame is thus an orbit leaving the cold node to join the hot node by branching on a trajectory compatible with the negative eigenvalue of the saddle.

Sketch of orbits for a combustion process across a premixed front. Dashed lines represent orbits forbidden by the physics. The red line describes the orbit expected in an idealized combustion process.

It must be noted that the associated eigenvectors are of the form: $\left ( \begin{array}{l} t \\ t s^{\pm} \end{array} \right ); \quad t\in \Re^*$ that is, on the cold node, a positive departure on $X_1$ following any of the two eigendirections leads to a consistent creation of a positive temperature gradient, while on the hot node, only the stable direction will allow a consistent creation of a positive temperature gradient for any departure of the temperature towards regions where it is lower than $\theta_b$. Another remark is the structure of the eigendirections. The leaving directions on the cold node have a slope larger than that of $\varpi$ while the stable direction of the hot node has a slope smaller than that of $\varpi$. It means that there is some point where the orbit must cross the profile of the chemical term versus temperature. For that temperature, the gradient equals the reaction rate by construction of the phase space. When looking back at the equation of the premixed flame, this happens in a region of inflection for the temperature (the second-order derivative must vanish). Furthermore, at this intersection, the orbit is horizontal (if the frame of reference for $(X_1, X_2)$ is Cartesian) due to the shape of the premixed flame equation above that can be recast into $X_{2,X_1}' = (X_2-\varpi)/X_2$. Close to the cold node, the orbits have the shape of a parabola whose axis is the direction with the largest eigenvalue magnitude. Close to the hot node, the orbits have a hyperbolic shape with the eigendirections as asymptotes. Now the ingredients are in place to draw a sketch of the scenario of the combustion in a premixed flame. It will be superimposed on the reaction rate graph studied in Sec. Fundamentals. Some typical orbits from the above analysis are drawn in the figure on the right. The basic geometrical arguments developed are reproduced. In particular, the dashed lines represent orbits forbidden by the physics (boundedness of $X_1$, irreversibility). Orbits must be travelled from left to right, corresponding to the increasing free parameter $\xi$. It is of prime importance to remember that, in combustion in conventional conditions, the source term is highly non-linear. Therefore, it is localized in a very thin sheet and, upstream, $\varpi$ and $\varpi' \rightarrow 0$. Only the most unstable eigendirection of the cold node is compatible as an orbit. The other trajectories, being parabolic, are tangent to the other direction, which is flat at the limit. It physically means an elevation of temperature in bulk, which is contradictory to what is expected from a highly non-linear combustion term. Hence, the additional orbit in red is the one expected in idealized combustion in conventional conditions.

Sketch of orbits for a combustion process across a premixed front, DNS simulations for the usual combinations of $\beta$ and $\alpha$. Note that the case with $\alpha = 0$ does not mean an athermal reaction but the mathematical simplification of the denominator of the exponential argument of the source term.
The following picture is the result of actual computations of the above 1-D flame equation (stoichiometry and $n_i = 1$) with the help of a high-order (6) code. The already presented combinations of $\beta$ and $\alpha$ are used and the orbits are retrieved. For most of the cases, as predicted above, the system selects a solution leaving the cold node with the most unstable direction (identifiable with its slope close to unity for vanishing $\varpi'_{X_1=0}$). There is also an additional curve, for $\beta = 10$ with the denominator of the exponential argument suppressed. This curve is remarkable as the solution selected by the system leaves the cold node in a manner fully controlled by $\varpi'_{X_1=0}$. This is a very singular solution, not expected in combustion in conventional conditions, as explained above. The purpose of this remark is to question the validity of simplifying the exponential argument for $\beta$ "sufficiently high", as is usually proposed for this type of modelling. As observed, the dynamical system analysis demonstrates a switch in the nature of the solution selected. At the physical level, when the orbit follows the most unstable direction with a slope close to unity, it means that $X_2$ "follows" $X_1$, which is a signature of a diffusion process. In other words, the preheating mechanism of premixed flame propagation, as proposed for more than a century, is at work. On the other hand, when $X_2$ is dependent on the evolution of $\varpi'$, it shows that "cold" chemistry drives the solution in the frozen flow and not the acknowledged mechanism of deflagration.

##### Flame Solution

As already mentioned, the flame system may be split into three zones. Upstream, the conventional mechanism of deflagration is supported by diffusion of heat. Downstream, the mixture is at equilibrium after combustion. In between, there exists the reaction layer. For large $\beta$ the reaction layer is very thin such that it can be seen as a discontinuity between the fresh and burned gases. It is this difference in scales that motivates the use of the asymptotic method to resolve some of the flame characteristics, such as speed, time, heat region thickness, or reaction zone thickness. The domain is partitioned according to this zoning defined by the scales driving the physics, with an outer domain driven by the large scales and an inner domain refining the description within the discontinuity. If the discontinuity (flame reaction zone) is at $\xi=0$, then everywhere but at 0 the equation simplifies to: $\frac{\partial^2 \theta}{\partial \xi^2} = \frac{\partial\theta}{\partial \xi}$
• For $\xi>0$, it is expected that the mixture has reached equilibrium chemistry, such that: $\forall \xi > 0, \quad \theta=\theta_b \quad ; \quad \partial \theta / \partial \xi =0$.
• For $\xi < 0$, this is the preheat zone and the solution is $\theta = \theta_b e^{\xi}$ with $\theta$ reaching $\theta_b$ at the discontinuity and vanishing, together with its gradient, far upstream in the frozen mixture.
The solution for the species and the value of $\theta_b$ are obtained from Conservation Equations above. The 'big picture' is thus an exponential variation in the thermal thickness matched with a plateau in the downstream region, the line of matching being the discontinuity (flame) that has no thickness at this scale of description. To refine the analysis in the discontinuity region, a magnifying factor $\varepsilon$ is used to stretch the coordinates: $\xi = \varepsilon \Xi$.
The inner solution is thus a slowly-varying function of $\Xi$. Hence, in this inner region, the equation for the premixed flame becomes: $\frac{1}{\varepsilon}\frac{\partial \theta}{\partial \Xi}=\frac{1}{\varepsilon^2}\frac{\partial^2 \theta}{\partial \Xi^2} +\varpi$ In order to stretch and 'look inside' a discontinuity, $\varepsilon$ is very small. It yields two remarks:
1. Convection is negligible compared to diffusion. The heat losses from the reaction zone are essentially diffusion driven.
2. The reaction zone is governed by a diffusion-reaction budget and the reaction term $\varpi$ must be strong to balance the intense heat loss due to the sharp diffusion (the zone is very thin, hence the gradients are sharp). The mechanism is thus different from the outer region that was convection-diffusion driven.
Each quantity is developed in a series of $\varepsilon$. At the leading order, for $\theta$, in the lean case, the conservation relations (Sec. Conservation Laws) yield: $\theta = 1 - \varepsilon \Gamma - (1 - Y^*_{F,u})$ where $\Gamma$ is the first-order development of the departure of $\theta$ from the maximum value due to the incomplete combustion, and $1-Y_{F,u}^*=1-\theta_b$ is the reduction of temperature for non-stoichiometric cases. Injected into the above equation: $\frac{1}{\varepsilon}\frac{\partial^2 \Gamma}{\partial \Xi^2} = \Lambda (\varepsilon \Gamma)^{n_F} (Y_{O,u}^*-Y_{F,u}^*\frac{Y_{F,s}}{Y_{O,s}}s +\varepsilon\Gamma\frac{Y_{F,s}}{Y_{O,s}}s )^{n_O}\exp{-\beta\frac{1-Y_{F,u}^*+\varepsilon\Gamma}{1-\alpha(1-Y_{F,u}^*+\varepsilon\Gamma)}}$ Although the full development is not carried out here, a number of scalings may be highlighted:
1. Because the temperature cannot be much below unity, $Y_{F,u}^*$ must be close to 1, within ${\mathcal O}(\varepsilon)$. For clarity, it is not expanded in an $\varepsilon$ series.
2. The denominator of the exponential argument simplifies to unity for small $\varepsilon$.
3. To get a finite rate in the reaction zone, $\varepsilon$ scales with $\beta^{-1}$.
The burning rate eigenvalue, $\Lambda$, is naturally expanded as: $\Lambda = \varepsilon^{-n_O-n_F-1}(\Lambda_0 + {\mathcal O}(\varepsilon))$. The leading-order equation to be solved is: $\frac{d^2 \Gamma}{d \Xi^2} = \frac{1}{2}\frac{d(\Gamma_{\Xi})^{'2}}{d\Gamma} = \Lambda_0\exp{-\beta (1-Y_{F,u}^*)} \Gamma^{n_F} (\frac{Y_{O,u}^*-Y_{F,u}^*\frac{Y_{F,s}}{Y_{O,s}}s}{\varepsilon}+\Gamma\frac{Y_{F,s}}{Y_{O,s}}s)^{n_O}\exp{-\Gamma}$ $\frac{d\Gamma}{d\Xi}(-\infty)=-\theta_b, \qquad \frac{d\Gamma}{d\Xi}(\infty)=0$ The boundary conditions are obtained from the matching of the outer solutions on the right and left sides of the flame as written above (the outer solutions are reached at infinity for a very small magnifying factor $\varepsilon$). Once integrated with respect to those boundary conditions, the burning-rate eigenvalue (from which $\dot M$ is extracted) is obtained as: $\Lambda_0 = \Bigg( 2\int_0^{\infty}d\Gamma\; \Gamma^{n_F} (\beta (Y_{O,u}^*-Y_{F,u}^*\frac{Y_{F,s}}{Y_{O,s}}s )+\Gamma\frac{Y_{F,s}}{Y_{O,s}}s )^{n_O}\exp{-\beta (1-Y_{F,u}^*)}\exp{-\Gamma} \Bigg )^{-1}$ For a flame at stoichiometry, the bracketed term on the RHS reduces to $2(n_F+n_O)!$.
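This stoichiometric value can be verified numerically. A minimal sketch (assuming, as at stoichiometry, that the constant part of the oxidizer bracket and the $\beta$-exponential prefactor drop out, and with assumed reaction orders $n_F = n_O = 1$):

```python
# Minimal check (assumed stoichiometric reduction): the bracketed term becomes
#   2 * int_0^inf Gamma^(nF + nO) * exp(-Gamma) dGamma = 2 * (nF + nO)!
from math import factorial, exp, inf
from scipy.integrate import quad

n_F, n_O = 1, 1   # assumed reaction orders of the global step
integral, _ = quad(lambda g: g**(n_F + n_O) * exp(-g), 0.0, inf)
print(2.0 * integral, 2.0 * factorial(n_F + n_O))   # both 4.0
```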
The comparison of this value of $\Lambda_0$ with the extinction of the diffusion flame from the same chemistry in the same conditions, presented in the section dedicated to diffusion flame extinction analysis, allows one to write the following relationship: $\frac{(n_F+n_O)!\lambda\rho_s}{Cp||\vec M||^2}=\frac{2Z_s^2(1-Z_s)^2}{\rho^2_s\chi_s}$ This provides a link between characteristic quantities of the diffusion flame (scalar dissipation at extinction) and of the premixed flame at stoichiometry (deflagration). This link is an illustration of the physics of the flame: in the premixed mode, the flame is in permanent imbalance and propagates at a speed that brings into the reaction zone as much reactant as the chemistry can burn. Recast in the framework of a diffusion flame near extinction, this maximum amount that the chemistry can consume is brought by the dissipation. So, it is natural that this type of relationship is found. Note that the unit scaling of $\delta_e$ has been used. The development has been carried out at the first order in $\varepsilon$. As soon as a second-order development is attempted, some expressions are no longer analytically tractable. On the other hand, a second-order development allows one to introduce the temperature-dependent trends of some terms in $\Lambda$. Physical results are retrieved, such as a slight decrease of the speed for a positive sensitivity of transport parameters to temperature around equilibrium conditions.

Unstrained planar premixed flame speed with respect to fuel mass fraction (lean case) for a single global irreversible Arrhenius term. Symbols are obtained from a high-order DNS code. Continuous line is the theory exposed here. The usual combinations of $\beta$ and $\alpha$ are used. Very high values of $\beta$ (and thus negligible effect of $\alpha$) are also presented to show the problem of slow convergence for finite values of the Zeldovitch parameter. Note that $\alpha = 0$ does not mean an athermal reaction but the mathematical simplification of the denominator of the exponential argument of the source term.

The image beside illustrates the response of the premixed flame speed with respect to fuel concentration (chosen as the limiting component here; equivalent findings are obtained for a deficit of oxidizer) for different values of the chemical parameters $\beta$ and $\alpha$ (as noted above, the pre-exponential constant $A$ impacts only the overall magnitude). The theoretical expression (continuous line) is tested against a 1-D high-accuracy code with the given chemistry implemented. It is seen that:
• the Zeldovitch parameter drives the drop for non-stoichiometric mixtures,
• the drop is relatively well modelled by the theoretical expression, and
• the absolute magnitude converges slowly towards the theoretical one when increasing $\beta$ in the code (effect of the finiteness of $\beta$ in the real case).

Typical profiles in a 1-D premixed flame at stoichiometry. Representative values of global chemistry parameters. Species profiles are simply complementary to the temperature profile for simple chemistry.

The picture exhibiting profiles for different Zeldovitch and heat release parameters shows their actual impact: upstream, the exact exponential profile is recovered and corresponds to the pre-heating region (thermal thickness). In the reaction zone (just upstream of the extremum), the departure from the exact solution is due to the kinetic effect.
This kinetic effect is more pronounced when $\beta$ is lower because the lower the Zeldovitch parameter, the lower the temperature of the reaction zone can be without leading to extinction. The flame takes this opportunity to maximize its transfers of heat and reactants with the cold zone. This is the physical understanding of an inverse dependence of the maximum flame speed on $\beta$.

#### Modification of the Flame Speed with Curvature

Flame seen as an interface between fresh and burned gases. Its curved profile towards the burned side increases the transfers with the fresh side.

When the thickness of the flame $(\lambda/Cp)_f/||\vec M||$ is considered small compared to inhomogeneities existing in the flow, the flame can be reduced to an interface between fresh and burned gases. This interface may not be strictly planar in the general case. For instance, when the interface is curved towards the burned gas, it offers a larger opportunity for transfer of mass and heat with the fresh gas. As the ability of the chemistry to burn the incoming matter is limited, the flame thus has to reduce its displacement speed. The figure beside provides a 2-D sketch of this situation, which happens in contorted turbulent fields. The curvature effect is thus an ingredient appearing in combustion models. To mathematically obtain the expression showing that the curvature influences the flame speed (for small curvature), the non-dimensionalized temperature equation across a planar premixed flame above is slightly recast: $||\vec M^c||||\vec\nabla_{\xi}\theta|| - \varpi= \nabla_{\xi}\cdot\vec\nabla_{\xi}\theta=-\nabla_{\xi}\cdot(||\vec\nabla_{\xi}\theta||\vec n) = - \vec\nabla_{\xi}||\vec\nabla_{\xi}\theta||\cdot\vec n - ||\vec\nabla_{\xi}\theta||\nabla_{\xi}\cdot\vec n$ In this expression, $||\vec M^c||$ is the flame mass flow rate when perturbed by curvature, normalized by the reference one developed above. $\vec n$ is the normal to the iso-temperature pointing towards the fresh gas, which is named the normal to the flame. From the expression developed above and for a slightly perturbed flame, the gradients, production and diffusion terms are very close to those of the unperturbed flame. Only the last term, the normal divergence, does not exist for a non-perturbed flame. Hence, the above equation may be simplified with the help of the one for the unperturbed flame presented earlier: $||\vec M^c|| = 1 - \nabla_{\xi}\cdot \vec n$ It is the divergence of the flame normal that contains the information on the geometrical perturbation (curvature) impacting the speed.

Local geometrical approximation of a flame surface

The key is thus to get an idea of the geometrical significance of the normal divergence. From differential geometry, the divergence of the normal is the sum of the principal curvatures of the flame surface at the location considered. To give a good mental picture, the simplest configuration, without loss of generality, is to consider the flame surface approximated by an osculating ellipsoid of revolution, as in the figure beside. The location of the approximation is the intersection of the Ox axis and the flame surface in red (where the ellipsoid is tangent to the flame). At this location, the normal divergence is the sum of the curvature of the basic ellipse (before its rotation around Oy) at its minor extremum, and the curvature of the circle corresponding to the rotation of this point around Oy when the 3-D shape is formed.
Giving a lecture on ellipsoidal coordinate systems is beyond the present purpose, but the interested reader may want to follow the corresponding steps to check this result: (i) write the divergence of a vector in the ellipsoidal coordinate system, (ii) consider that the vector is the normal, i.e. it has only one constant component perpendicular to the local ellipsoidal iso-coordinate, to simplify the divergence expression, (iii) split the resulting terms into two parts by identifying the second derivative of the basic 2-D ellipse as one principal curvature, and the inverse of the radius of the circle corresponding to the rotation of the ellipse around Oy at the point where the flame surface is approximated; in the figure this is simply the minor axis of the ellipse. The expression is readily written as: $||\vec M^c|| = 1 - (R_1^{-1}+R_2^{-1})$ where $R_1$ and $R_2$ are the two radii of curvature (non-dimensionalized by the reference flame thickness) local to the surface. For instance, they are the minor axis of the basic ellipse and its radius of curvature at its minor extremum when the surface is approximated by the osculating ellipsoid as above.

#### Natural Instabilities of Premixed Flames

Another aspect accounted for in turbulent combustion modelling of premixed flames is the creation of flame surface (corrugation) by hydrodynamic instabilities. These instabilities have been mentioned by Darrieus as early as 1938. The motivation for discussing this is also related to the framework of description used, which is widely employed for combustion model development. This framework is named the hydrodynamic limit, where the flame is isolated as a zero-thickness interface in the flow, and was first introduced just above. In this framework, any diffusive and energetic aspects disappear and the set of equations is limited to two incompressible Euler systems: one in the fresh gases (with the constant density of the cold mixture) and one in the burned gases (with the constant density of the mixture at equilibrium). To understand the basic properties of a premixed flame leading to the birth of instabilities, it is first important to realize that a premixed flame, in the hydrodynamic limit, behaves as a dioptre with a refractive index in the burned gases larger than in the fresh gases. For a flow with an angle of attack on the flame (i.e. a flame not strictly 1-D perpendicular to the flow), the tangential component of the flow speed relative to the flame surface is conserved while the normal component is accelerated by a factor corresponding to the ratio of the densities in order to conserve the mass flow across the interface. Hence, the streamlines are pushed towards the normal to the flame when crossing, which is similar to rays of light entering a refracting medium.

Basic mechanism explaining the unstable nature of a planar premixed flame.

If one considers a region of a premixed flame that bumps a little bit towards the fresh gas (see figure beside -- the same approach is symmetrically true for a bump towards the burned gas), the local stream tube slightly opens on the bumpy interface before being refracted and coming back to its original section at constant mass flow rate. Hence, just in front of the bump, the gas velocity in this stream tube decreases and does not oppose the flame motion, allowing the bump to increase in magnitude. This is the fundamental mechanism of such instabilities.
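Before writing the equations, the refraction analogy above can be quantified with a minimal sketch (the density values and the incidence angle are assumed for illustration only):

```python
# Minimal sketch (assumed values) of the "dioptre" behaviour of the flame:
# tangential velocity conserved, normal velocity amplified by the density
# ratio, hence the streamline bends towards the flame normal.
import numpy as np

rho_u, rho_b = 1.2, 0.2            # assumed fresh / burned gas densities
theta_u = np.radians(40.0)         # assumed incidence angle w.r.t. the flame normal
u_n, u_t = np.cos(theta_u), np.sin(theta_u)   # unit incoming velocity components

u_n_b = u_n * rho_u / rho_b        # normal component accelerated by mass conservation
u_t_b = u_t                        # tangential component conserved
theta_b = np.arctan2(u_t_b, u_n_b)

print(np.degrees(theta_u), np.degrees(theta_b))  # ~40 deg in, ~8 deg out: bent towards the normal
```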
The equation sets are, in usual notations: $\left \{ \begin{array}{lll} U_x + V_y & = & 0 \\ U_t + UU_x + VU_y & = & -\frac{P_x}{\rho} \\ V_t +UV_x+VV_y & = & -\frac{P_y}{\rho} \end{array} \right.$ $x$ is the coordinate used above along the flame path and $y$ is in the tangential plane. $U$ and $V$ are the respective velocities. Those equations may be normalized by steady-state reference quantities such as flame speed, flame length, gauge pressure and density with respect to fresh gas properties and are written on both sides of the interface. These sets of equations must match at the interface (flame surface) through conservation of mass flow rate and momentum: $\left \{ \begin{array}{lll} \rho_u (\vec U_u -\vec U_{f})\cdot \vec n & = & \rho_b (\vec U_b -\vec U_{f})\cdot \vec n\\ (\rho_u \vec U_u (\vec U_u -\vec U_{f})+P_u)\cdot \vec n & = & (\rho_b \vec U_b (\vec U_b -\vec U_{f})+P_b)\cdot \vec n \end{array} \right.$ The subscripts $u$ and $b$ stand for the unburned and burned sides, respectively. The subscript $f$ denotes the interface. In these equations, the flame motion $U_f$ is reintroduced because we want to track the instability movement over the mean position of the flame. This instability motion is described by the equation $x=F(y,t)$ such that the motion of the interface normal to itself is obtained as: $\vec U_f\cdot\vec n = - \frac{F_t}{\sqrt{1+F^2_y}}$ with a normal to the front: $\vec n = \left ( \begin{array}{l} -\frac{1}{\sqrt{1+F^2_y}} \\ \frac{F_y}{\sqrt{1+F^2_y}} \end{array} \right )$ As the instability motion is considered at its birth, i.e. when it is still small, the quantities are linearized with $\varepsilon$ as a small parameter: $F=\varepsilon f$ $\vec U = \vec {\mathcal U} + \varepsilon\vec u$ $P = {\mathcal P} + \varepsilon p$ The Eulerian system is reduced to (at the first order in $\varepsilon$, the order zero being trivial for a homogeneous flow): $\left \{ \begin{array}{lll} u_x + v_y & = & 0 \\ u_t + {\mathcal U}u_x & = & -\frac{p_x}{\rho} \\ v_t +{\mathcal U}v_x & = & -\frac{p_y}{\rho} \end{array} \right.$ The one-sided equations for the jump conditions become (terms up to the first order in $\varepsilon$ are kept): $\left \{ \begin{array}{lll} \rho (\vec U-\vec U_f)\cdot\vec n \times \sqrt{1+F^2_y} & = & -\rho {\mathcal U}-\rho\varepsilon u+\rho\varepsilon f_t \\ (\rho \vec U (\vec U -\vec U_{f})+P)\cdot \vec n \times \sqrt{1+F^2_y} & = & \left \{ \begin{array}{l} -\rho {\mathcal U}^2 -2\rho{\mathcal U}\varepsilon u+\rho{\mathcal U}\varepsilon f_t - {\mathcal P} -\varepsilon p \\ -\rho {\mathcal U}\varepsilon v + {\mathcal P}\varepsilon f_y \end{array} \right.
\end{array} \right.$ The following jump conditions are obtained (by recalling that unburned steady-state quantities are chosen as references):
• From the first equation at the order 0, $\rho_b U_b = \rho_u U_u \Leftrightarrow U_b=\rho_u/\rho_b$
• From the first equation at the order 1, $\rho_u f_t - \rho_u u_u = \rho_b f_t -\rho_b u_b$
• From the second equation at the order 0, $-\rho_u {\mathcal U}_u^2 - {\mathcal P}_u = -\rho_b {\mathcal U}_b^2 - {\mathcal P}_b \Leftrightarrow {\mathcal P}_b = 1 - \rho_u/\rho_b$
• From the second equation at the order 1, $-2\rho_u{\mathcal U}_u u_u +\rho_u {\mathcal U}_u f_t -p_u = -2\rho_b{\mathcal U}_b u_b +\rho_b {\mathcal U}_b f_t -p_b \Leftrightarrow p_b-p_u = 2 u_u-2 u_b$
• From the third equation at order 1 (no order 0), $-\rho_u {\mathcal U}_u v_u +{\mathcal P}_u f_y = -\rho_b {\mathcal U}_b v_b +{\mathcal P}_b f_y \Leftrightarrow v_b-v_u = f_y(1-\rho_u/\rho_b)$
The solution for the linearized, autonomous Euler system is: $\left ( \begin{array}{l} u \\ v \\ p \end{array} \right ) = \left ( \begin{array}{l} \bar u \\ \bar v \\ \bar p \end{array} \right ) \exp{(\sigma x)}\exp{(\alpha t -iky)}$ where one recognizes the spatial dependence of the perturbation in the $x$ direction ($\sigma$), the wave number $k$ of the instability along $y$, and the growth rate in time $\alpha$. The eigenvalues of the system are used to determine the $x$ dependence of the solution, $\sigma = - \alpha/U,\; k,\; -k$, with the positive ones applying on the fresh side and the negative ones on the burned side to have a vanishing perturbation far from the flame. The eigenmodes give the flow perturbations on either side of the flame: $x<0: \left ( \begin{array}{l} u \\ v \\ p \end{array} \right ) = a\left ( \begin{array}{l} 1 \\ -i \\ -1-\frac{\alpha}{k} \end{array} \right ) e^{kx+\alpha t - iky}$ $x>0: \left ( \begin{array}{l} u \\ v \\ p \end{array} \right ) = b\left ( \begin{array}{l} 1 \\ i \\ -1+\rho_b\frac{\alpha}{k} \end{array} \right ) e^{-kx+\alpha t - iky} + c\left ( \begin{array}{l} 1 \\ i\rho_b\frac{\alpha}{k} \\ 0 \end{array} \right ) e^{-\rho_b \alpha x+\alpha t - iky}$ The jump conditions above applied to this perturbation field yield the following system ($f$ has also been put into the harmonic form $f = \bar f e^{\alpha t-iky}$): $\left \{ \begin{array}{lll} \alpha \bar f -a & = & \rho_b \alpha \bar f -\rho_b (b+c) \\ a(1-\frac{\alpha}{k}) & = & b(1+\rho_b\frac{\alpha}{k}) +2c \\ a + b+ c\frac{\rho_b \alpha}{k} & = & k\bar f (\frac{1}{\rho_b}-1) \end{array} \right .$ Additionally, from the definition of the flame path $F$, we have the kinematic relation $u_u = a = \alpha \bar f$. The above set of equations forms a system of four unknowns $a,\; b,\; c,\; \alpha/k$ for a given (but unknown) shape information on the flame bump $k\bar f$.
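This system can also be solved numerically as a cross-check. A minimal sketch (assumed normalization $\rho_u = {\mathcal U}_u = 1$ and $k\bar f = 1$, with an illustrative burned-gas density), whose root should coincide with the unstable branch of the dispersion relation derived next:

```python
# Minimal sketch (assumed rho_b, normalization rho_u = U_u = 1, k*f_bar = 1):
# solve the jump-condition system for (alpha/k, b, c), using a = alpha*f_bar.
import numpy as np
from scipy.optimize import fsolve

rho_b = 0.2   # assumed burned-gas density normalized by the fresh-gas one

def residuals(x):
    alpha, b, c = x
    return [b + c - alpha,                                           # mass jump (order 1)
            alpha * (1 - alpha) - b * (1 + rho_b * alpha) - 2 * c,   # normal momentum jump
            alpha + b + c * rho_b * alpha - (1.0 / rho_b - 1.0)]     # tangential momentum jump

alpha, b, c = fsolve(residuals, x0=[1.0, 1.0, -1.0])
alpha_darrieus = (np.sqrt(1 + (1 - rho_b**2) / rho_b) - 1) / (rho_b + 1)
print(alpha, alpha_darrieus)   # both ~1.17 and positive: the planar flame is unstable
```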
Solving for the growth rate gives: $\left (\frac{\alpha}{k} - \frac{1}{\rho_b}\right )\left(\frac{\alpha}{k} + 2 + \rho_b\frac{\alpha}{k} -\left (\frac{1}{\rho_b}-1\right )\left (\frac{\alpha}{k}\right )^{-1} \right )=0$ This third-degree polynomial has three roots, namely the dispersion relation found by Darrieus, a stable mode, and the trivial solution (with no physical meaning), respectively: $\frac{\alpha}{k} = \left ( \frac{1}{\rho_b + 1}\left (\sqrt{1+\frac{1-\rho_b^2}{\rho_b}}-1 \right ),\qquad \frac{1}{\rho_b + 1}\left (-\sqrt{1+\frac{1-\rho_b^2}{\rho_b}}-1 \right ),\qquad \frac{1}{\rho_b}\right )$

Remark 1 By considering the shape of the eigenvectors giving the perturbation, one recovers that, upstream of the flame, the flow is potential (the curl of the velocity vector is null and we have chosen a constant-density flow, which is barotropic and divergence free), while downstream, in addition to another perturbation of the potential type, one finds a vorticity mode (the mode with the eigenvalue $-\rho_b\alpha$). The production of vorticity at the crossing of the flame front is a known property of flames.

Remark 2 In more conventional literature, the strange trivial solution for the instability growth rate is swept under the rug. Given the pedagogical nature of this electronic documentation, we can dig a little deeper, as the appearance of this trivial, spurious solution is a good example of a modelling issue. It is important to realize that a mathematical model and the physics it describes belong to different realities. Hence, the mathematical model will generate all the solutions that the mathematics can reach in its own space. Some of them are still connected to the physics. Some others, like this trivial solution, belong only to the mathematical solution space, an indirect way of pointing out a model limitation. The model under consideration here is the hydrodynamic limit. In this model, the domain is divided into two subdomains, one upstream of the flame interface, one downstream, that are put in relation with each other through a limited number of jump conditions. In reality, the physics connects both domains much more tightly. The curious reader will have observed that the trivial solution makes the equation system for $a,\; b,\; c$ (i.e. for a fixed growth rate) undetermined, and so are the jump conditions. By generating this trivial solution, the mathematics decouples both domains. Because information travels from upstream to downstream only (owing to the shape of the linearized Euler system), a solution is found only for the upstream domain; the downstream domain is not solved and remains undetermined for lack of information. This solution cannot happen in reality because the fresh and burned gases in a real system are connected in many ways, and not only by the jump conditions.

#### Stretch / Compression of Premixed Flame

The last of the `big' three turbulent ingredients (together with the aforementioned curvature and instabilities) impacting the flame at the local level is the compression or stretch of a premixed flame due to inhomogeneities in the flow, likely to happen with turbulence.

Interpretation of flow inhomogeneity stresses on a 1-D premixed flame in terms of compression or stretch.

The physics is pictured in the figure beside. In the flame area, the mass flow rate along the main direction may decrease (or increase) with distance.
In a frame of reference whose origin is the core of the flame, a stretch (compression) results from the difference between the mass flow rates entering and leaving the flame zone.

Important Remark Here, we are interested in the inhomogeneity of the mass flow rate along one direction only. We do NOT write that the flame volume is a region of mass source/sink.

In the same manner as above for the curvature effect, we introduce a (small) inhomogeneity of the flow as $\vec\nabla\vec M$ such that the following equation can be subtracted from the unperturbed flame equation. $||\vec M^{s/c}||||\vec\nabla_{\xi}\theta|| - \vec n\cdot \vec\nabla\vec M \cdot \vec n ||\vec\nabla_{\xi}\theta|| - \varpi= \nabla_{\xi}\cdot\vec\nabla_{\xi}\theta$ Hence, for a small perturbation, the linearized dependence of the flame on stretch/compression is: $||\vec M^{s/c}|| = 1 + \vec n \cdot \vec\nabla\vec M \cdot \vec n$

## The Partially-Premixed Regime

Ideal sketch of a partially-premixed flame

This regime is somewhat less academic and was recognized only about two decades ago. It is acknowledged as a hybrid of the premixed and the non-premixed regimes, but the degree of interaction of these two modes of combustion required to accurately describe a partially-premixed flame is still not well understood. It can be simply pictured by a lifted diffusion flame. Let us consider fuel issuing from a nozzle into the air. If the exit velocity is large enough, for some fuels, the flame lifts off the rim of the nozzle. It means that below the flame base, fuel and oxidizer have room to premix. Hence, the flame base propagates into a premixed mixture. However, it cannot be reduced to a premixed flame (although it is often simplified as such): (i) the mixing is not perfect and the different parts of the flame front constituting the flame base burn in mixtures of different thermodynamical states. This provides those parts with different deflagration capabilities such that the flame base has a complex shape. Indeed, it is convex, naturally led by the part burning at stoichiometry, unless `exotic' feeding temperatures are used. (ii) Because the mixture is not homogeneous, transfer of species and temperature driven by diffusion occurs in a direction perpendicular to the propagation of the flame base. Because the flame front is not flat, those transfers act as a connection vehicle across the different parts of the leading front. (iii) The unburned reactants left downstream by the sections of the leading front not burning at stoichiometry diffuse towards each other to form a diffusion flame as described above. The connection of the leading front with the trailing diffusion flame has been shown to be complicated and the seat of transfers of species and temperature. These last two items are the state-of-the-art difficulties in understanding those flames and do not appear in the models, although it has been demonstrated that they have a major impact and are certainly a fundamental characteristic of partially-premixed flames. The partially-premixed flame is usually described using c and Z as introduced earlier. Because the framework is essentially non-premixed, the mixture fraction is primarily used to describe the flame. Regarding the head of the flame where partial premixing has an impact, each part of the front is described with a local progress variable.
The need for defining a local progress variable is that each section of the partially-premixed front has a different equivalence ratio, leading to a different definition of c: $c=\frac{T-T_u}{T_b(Z)-T_u}$

# Three Turbulent-Flame Interaction Regimes

Combustion is a chemical reaction and turbulence is recognized as a powerful means to enhance chemical reaction by accelerating fuel and oxidizer mixing. In the case of combustion, the impact is more subtle. Through the strong heat release in a narrow zone, a flame is able to accelerate and distort the streamlines. It may lead to flame-generated turbulence, such as the vortices created behind a premixed flame subject to Darrieus instabilities, vid. sup. Sec. Hydrodynamics Instabilities, or, a contrario, kill the existing eddy motion. Turbulence may also be diminished through a flame because gas viscosity increases with temperature. On the other hand, turbulence may curve a flame and disturb its internal structure, increasing the reaction rate and/or stretching the flame to extinction. Nevertheless, the enhancement of mixing and chemistry strength by turbulence is well illustrated in the premixed regime through integral quantities such as the flame speed. The turbulent premixed flame speed is several times larger than the laminar premixed flame speed and may be approximately scaled as: $\frac{S_T}{S_L} = 1 + \frac{\sqrt{k}}{S_L}$ where k is the turbulence kinetic energy. It may appear odd to try to describe here what is the reason for combustion modelling research: the interaction of turbulence with chemistry. However, one of the first steps in building knowledge in turbulent combustion was the qualitative exploration of what might be the dynamics of a flame in a turbulent environment. This led to what is now known as combustion diagrams. As explained above, the premixed regime lends itself most easily to such an approach as it exhibits natural intrinsic quantities which are not as objectively identifiable in the other combustion regimes. Note that these quantities may depend on the geometry of the flame: for instance, turbulence can bend a flame sheet, leading to a change in its dynamics compared to the flat flame propagating in a medium at rest. In this section, the turbulence-flame interaction modes will be described for a premixed flame. Only remarks will be added regarding the non-premixed and partially-premixed regimes. An integral quantity to assess the interaction between a premixed flame sheet and the turbulence is the Karlovitz number Ka. It compares the characteristic time of flame displacement with the characteristic time of the smallest structures (which are also the fastest) of the turbulence. $Ka= \frac{\tau_c}{\tau_k}$ $\tau_c$ is the chemical time of the flame. To estimate it, it is necessary to come back to the above progress variable transport equation in a steady-state framework. $\overbrace{(\rho S_L)}^{\dot M}\frac{\vec S_L}{S_L}\vec\nabla c = (\rho D)_f\nabla\cdot (\rho D)^* \vec\nabla c + \dot\omega_c$ The premixed wave propagates at a speed $S_L$ because it is fed by reactants that diffuse into the combustion zone and are preheated by the temperature diffusing in the reverse direction. The speed at which the flame progresses is thus related to the rate at which species diffuse into the reaction zone, where they are then consumed.
As the premixed flame is a free propagating wave whose speed of propagation is only limited by the chemical strength, the characteristic chemical time is based only on the diffusion and the mass flow rate experienced by the flame: $\tau_c = \frac{\rho (\rho D)_f}{\dot M^2}$ The smallest eddies are the ones being dissipated by the viscous forces. Their characteristic time is estimated thanks to a combination of the viscosity and the flux of turbulent energy to be dissipated (also called turbulent dissipation, $\varepsilon=u'^3/l_t$): $\tau_k = \sqrt{\frac{l_t\nu}{u'^3}}$ Thanks to those definitions of the chemical and small-structure times, it is possible to give another definition of the Karlovitz number: $Ka=\left (\frac{\delta_L}{l_k} \right)^2$ which is the square of the ratio between the premixed flame thickness and the small-structure scale: Ka actually compares scales. To arrive at this latter result, the three following assumptions must be used: (i) the flame thickness is obtained thanks to the premixed flame Péclet number (vid. sup.); (ii) the turbulence small-structure (Kolmogorov eddies) scale is given by: $l_k=(\nu^3/\varepsilon)^{1/4}$ following the same dimensional argument as for the estimation of its time; and (iii) scalar diffusion scales with viscosity.

#### Remark Regarding the Diffusion Flame

From what has been presented above, a diffusion flame does not have characteristic scales. Setting a turbulence-combustion regime classification for non-premixed flames is a question that has still not been answered by research. Some laws of behaviour will only be drawn around the scalar dissipation rate, which is the parameter of integral importance for a diffusion flame. Indeed, the dynamics of a diffusion flame is determined by the strain rate imposed by the turbulence. As for the premixed flames, the smallest eddies (Kolmogorov) are the ones having the largest impact. The diffusive layer is thus given by the size of the Kolmogorov eddies: $l_d\approx l_k$ and the typical diffusion time scale (feeding rate of the reaction zone) is given by the characteristic time of the Kolmogorov eddies: $\tau_k^{-1}\approx \chi_s$ as the Reynolds number of the Kolmogorov structures is unity. Here, $\chi_s$ is the sample average of $\chi$ conditioned on stoichiometric conditions, where the flame is expected to be.

## The Wrinkled Regime

Wrinkled flamelet regime

This regime is also called the flamelet regime. Basically, it assumes that the flame structure is not affected by turbulence. The flame sheet is convoluted and wrinkled by eddies but none of them is small enough to enter it. Under local magnification, the laminar flame structure is maintained. This regime exists for a Karlovitz number below unity (vid. sup.), i.e. chemical time smaller than the small-structure time or flame thickness smaller than the small-structure scale. Notwithstanding, the laminar flame dynamics can be disrupted for $u'>S_L$. In that case, although the flame structure is not altered by the small structures, it can be convected by large structures such that areas at different locations in the front interact. It shows that, even with a small Karlovitz number, the turbulence effect is not always weak.

## The Corrugated Regime

Corrugated flamelet regime

The formal definition of a flame is the region of temperature rise. However, the volume where the reaction takes place is about one order of magnitude smaller, embedded inside the temperature rise region and close to its high temperature end.
Hence, there exist some levels of turbulence creating eddies able to enter the flame zone but still large enough not to affect the internal reaction sheet. In other words, the flame thermal region is thickened by turbulence but the reaction zone is still in the wrinkled regime. This situation is called the Corrugated Regime. Due to the structure of the Karlovitz number, once written in terms of length scales (vid. sup.), this situation arises for an increase of the Karlovitz number by two orders of magnitude compared to the value for the wrinkled regime. Hence, in the range $1 < Ka < 100$, the laminar structure of the reaction zone is still preserved but not the one of the preheat zone.

## The Thickened Regime

Thickened regime

In this last case, turbulence is intense enough to generate eddies able to affect the structure of the reaction zone as well. In practice, it is expected that those eddies are in the tail of the energy spectrum such that their lifetime is very short. Their impact on the reaction zone is thus limited. Obviously, Ka > 100. A topological description is of little relevance here and a well-stirred reactor model fits better.

# Description of Real Mixtures

As explained above, combustion has been analyzed and modelled with the help of simple irreversible chemistry involving `a' fuel and `an' oxidizer. However, combustion is a highly complicated chemical process, except maybe for ozone or hydrogen in oxygen. A chemical mechanism involving more than a thousand species and several hundred reactions between them is still insufficient to describe the combustion of a simple hydrocarbon molecule such as methane in air. This is due to the chemistry route from reaction to reaction that may depend on the conditions such as pressure and temperature. Also, species are unstable at high temperature, meaning that the combustion products are usually subject to dissociation. Therefore, the full conversion of the chemical enthalpy into sensible enthalpy is not realized, which results in a substantial drop in combustion temperature. Furthermore, the calorimetry and transport properties of the species in the mixture are not unique: the heat capacities are temperature dependent and species diffuse at various speeds within each other, potentially leading to exotic phenomena for light molecules such as hydrogen.

## Governing Equations for Chemically Reacting Flows

Together with the usual Navier-Stokes equations for compressible flow (See Governing equations), additional equations are needed in reacting flows for the species. In the earlier paragraphs dedicated to combustion fundamentals, we have seen that the description is reduced to simple transport equations. However, in real mixtures, numerous species with different transport properties may exist. Their transport equations are coupled together more or less directly by their rates of creation or disappearance (chemical reaction source/sink terms), and by their transport properties relative to each other. They are also coupled to the Navier-Stokes equations (mass, momentum, and energy) through density, velocity, pressure and temperature taking a role in their diffusion and convection fluxes and their source terms.
Due to these complicated behaviours, it is rather chosen to speak in terms of diffusion velocities $V^j_k$ such that the transport equation for the mass fraction $Y_k$ of the k-th species in a complex mixture takes the form: $\frac{\partial}{\partial t} \left( \rho Y_k \right) + \frac{\partial}{\partial x_j} \left( \rho u^j Y_k\right) = - \frac{\partial}{\partial x_j} \left( \rho V_k^j Y_k\right)+ \dot \omega_k$ $\dot \omega_k$ denotes the species reaction rate. A non-reactive (passive) scalar (like the mixture fraction $Z$) is governed by the following transport equation $\frac{\partial}{\partial t} \left( \rho Z \right) + \frac{\partial}{\partial x_j} \left( \rho u_j Z \right) = \frac{\partial}{\partial x_j} \left( \rho D \frac{\partial Z}{\partial x_j}\right)$ where $D$ is the diffusion coefficient of the passive scalar.

### RANS equations

In turbulent flows, Favre averaging is often used to reduce the range of scales to be resolved (see Reynolds averaging) and the mass fraction transport equation is transformed to $\frac{\partial \overline{\rho} \widetilde{Y}_k }{\partial t} + \frac{\partial \overline{\rho} \widetilde{u}_j \widetilde{Y}_k}{\partial x_j}= \frac{\partial} {\partial x_j} \left( \overline{\rho D_k \frac{\partial Y_k} {\partial x_j} } - \overline{\rho} \widetilde{u''_j Y''_k } \right) + \overline{\dot \omega_k}$ where the turbulent fluxes $\widetilde{u''_j Y''_k}$ and reaction terms $\overline{\dot \omega_k}$ need to be closed. The passive scalar turbulent transport equation is $\frac{\partial \overline{\rho} \widetilde{Z} }{\partial t} + \frac{\partial \overline{\rho} \widetilde{u}_j \widetilde{Z} }{\partial x_j}= \frac{\partial} {\partial x_j} \left( \overline{\rho D \frac{\partial Z} {\partial x_j} } - \overline{\rho} \widetilde{u''_j Z'' } \right)$ where $\widetilde{u''_j Z''}$ needs modelling. A common practice is to model the turbulent fluxes using the gradient diffusion hypothesis. For example, in the equation above the flux $\widetilde{u''_i Z''}$ is modelled as $\widetilde{u''_i Z''} = -D_t \frac{\partial \tilde Z}{\partial x_i}$ where $D_t$ is the turbulent diffusivity. Since $D_t \gg D$, the first term inside the parentheses on the right-hand side of the mixture fraction transport equation is often neglected (Peters (2000)). This assumption is also used below. In addition to the mean passive scalar equation, an equation for the Favre variance $\widetilde{Z''^2}$ is often employed $\frac{\partial \overline{\rho} \widetilde{Z''^2} }{\partial t} + \frac{\partial \overline{\rho} \widetilde{u}_j \widetilde{Z''^2} }{\partial x_j}= - \frac{\partial}{\partial x_j} \left( \overline{\rho} \widetilde{u''_j Z''^2} \right) - 2 \overline{\rho} \widetilde{u''_j Z'' } \frac{\partial \widetilde{Z}}{\partial x_j} - \overline{\rho} \widetilde{\chi}$ where $\widetilde{\chi}$ is the mean Scalar dissipation rate defined as $\widetilde{\chi} = 2 D \widetilde{\left| \frac{\partial Z''}{\partial x_j} \right|^2 }$ This term and the variance diffusion fluxes need to be modelled.

### LES equations

The Large eddy simulation (LES) approach for reactive flows introduces equations for the filtered species mass fractions within the compressible flow field.
Similar to the #RANS equations, but using Favre filtering instead of Favre averaging, the filtered mass fraction transport equation is $\frac{\partial \overline{\rho} \widetilde{Y}_k }{\partial t} + \frac{\partial \overline{\rho} \widetilde{u}_j \widetilde{Y}_k}{\partial x_j}= \frac{\partial} {\partial x_j} \left( \overline{\rho D_k \frac{\partial Y_k} {\partial x_j} } - J_j \right) + \overline{\dot \omega_k}$ where $J_j$ is the transport of subgrid fluctuations of mass fraction $J_j = \widetilde{u_jY_k} - \widetilde{u}_j \widetilde{Y}_k$ and has to be modelled. Fluctuations of the diffusion coefficients are often ignored and their contributions are assumed to be much smaller than the apparent turbulent diffusion due to transport of subgrid fluctuations. The first term on the right hand side is then $\frac{\partial} {\partial x_j} \left( \overline{ \rho D_k \frac{\partial Y_k} {\partial x_j} } \right) \approx \frac{\partial} {\partial x_j} \left( \overline{\rho} D_k \frac{\partial \widetilde{Y}_k} {\partial x_j} \right)$

## Reaction mechanisms

Combustion is mainly a chemical process. Although we can, to some extent, describe a flame without any chemistry information, modelling of the flame propagation requires the knowledge of speeds of reactions, product concentrations, temperature, and other parameters. Therefore, fundamental information about reaction kinetics is essential for any combustion model. A fuel-oxidizer mixture will generally combust if the reaction is fast enough to prevail until all of the mixture is burned into products. If the reaction is too slow, the flame will extinguish. If too fast, explosion or even detonation will occur. The reaction rate of a typical combustion reaction is influenced mainly by the concentration of the reactants, temperature, and pressure. A stoichiometric equation for an arbitrary reaction can be written as: $\sum_{j=1}^{n}\nu'_j (M_j) = \sum_{j=1}^{n}\nu''_j (M_j),$ where $\nu$ denotes the stoichiometric coefficient, and $M_j$ stands for an arbitrary species. A single prime holds for the reactants while a double prime holds for the products of the reaction. The reaction rate, expressing the rate of disappearance of reactant i, is defined as: $RR_i = k \, \prod_{j=1}^{n}(M_j)^{\nu'_j},$ in which k is the specific reaction rate constant. Arrhenius found that this constant is a function of temperature only and is defined as: $k= A T^{\beta} \, \exp \left( \frac{-E}{RT}\right)$ where A is the pre-exponential factor, E is the activation energy, and $\beta$ is a temperature exponent. The constants vary from one reaction to another and can be found in the literature. Reaction mechanisms can be deduced from experiments (for every resolved reaction); they can also be constructed numerically by the automatic generation method (see [Griffiths (1994)] for a review on reaction mechanisms). For simple hydrocarbons, tens to hundreds of reactions are involved. By analysis and systematic reduction of reaction mechanisms, global reactions (from one to five step reactions) can be found (see [Westbrook (1984)]).

# Models Classes

In a previous section dedicated to combustion / turbulence coupling, Sec. Three Turbulent-Flame Interaction Regimes, it has been anticipated that the topology of the flame (or its destruction) in the turbulent field provides insight on the type of impact the turbulence has on the combustion zone. Therefore, combustion models are sometimes identified within two classes: the topological class and the reactor class.
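Before describing the two classes, the Arrhenius rate form given in the reaction-mechanism section above can be illustrated with a minimal sketch (all parameter values are hypothetical and not taken from a published mechanism):

```python
# Minimal sketch (hypothetical parameters) of k = A * T^beta * exp(-E/(R*T)).
import numpy as np

R = 8.314                          # J/(mol K)
A, beta, E = 1.0e9, 0.0, 1.5e5     # assumed pre-exponential factor, exponent, activation energy (J/mol)

def rate_constant(T):
    return A * T**beta * np.exp(-E / (R * T))

for T in (300.0, 1000.0, 2000.0):
    print(T, rate_constant(T))     # the rate constant rises steeply with temperature
```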
## Topological Models

Topological models assume that the flame is a surface that can be tracked by the solver. The location of this surface may be associated with the resolution of a particular field (for instance, an iso-surface of a reaction progress variable in premixed combustion, or an iso-surface of the mixture fraction field in non-premixed combustion). In terms of flame-turbulence interaction, they may cover the whole spectrum and treat the flame surface considering that the laminar structure of the latter is conserved (this is the case for a flamelet model) or not (for instance, G-equation models with a turbulent flame speed containing correlations for flames at high Ka).

### Flamelet Model

The flamelet model is certainly the best-known topology-based combustion model. In this model, the integrity of the laminar flame structure is conserved. The impact of the turbulence on an element of flame is evaluated in terms of equivalent control parameters in a laminar flame experiment, for instance the imposed strain. The question of the combustion stricto sensu is not different from the laminar case and the key point of the turbulent combustion modelling is the route to translate the turbulent flow dynamics in the neighbourhood of the reaction zone into control parameters of a separate laminar flame configuration. The flamelet model takes advantage of the hierarchy problem appearing in turbulent combustion, namely, the fact that on one hand, the turbulence structures cover a spectrum of scales and, on the other hand, the typical thickness of a flame is on the low end of that spectrum. This hierarchy decouples the mixing of the species, which is operated by the turbulent cascade, from the chemistry, which happens exclusively in the flame. The above section on the diffusion flame solution has illustrated this through a typical relationship: $\dot\omega=\rho\chi\frac{d^2T}{dZ^2}$ This explains that the reaction rate is determined by contributions coming from different levels of the hierarchy: the turbulent mixing ($\rho\chi$), and the reaction occurring in the structure of the flame ($\frac{d^2T}{dZ^2}$, remember the profile of T versus Z is completely linked to the chemistry and its strength).

### Flame surface density model

The flame surface density model is a typical topological, finite-rate combustion model applied to premixed combustion. In this type of model, one considers that the heat release in a given volume is the product of the heat release per unit flame surface times the total flame surface present in this volume. The idea is to capture the increased exchange surface available for combustion where the flame is highly contorted by turbulence.

## Reactor Models

### Infinitely Fast Chemistry

All combustion models can be divided into two main groups according to the assumptions on the reaction kinetics. We can either assume the reactions to be infinitely fast - compared to e.g. mixing of the species - or comparable to the time scale of the mixing process. The simplest approach is to assume infinitely fast chemistry; historically, this mixing-limited view is the older approach, and it is still in wide use today. It is simpler to solve than #Finite rate chemistry models, at the expense of introducing errors to the solution.

### Premixed combustion

Premixed flames occur when the fuel and oxidiser are homogeneously mixed prior to ignition. These flames are not limited to gaseous fuels; they also occur with pre-vaporised fuels.
A typical example of a premixed laminar flame is the Bunsen burner, where air enters the fuel stream and the mixture burns in the wake of the riser tube walls, forming a nice stable flame. Another example of a premixed system is the solid rocket motor, where oxidizer and fuel are properly mixed in a gum-like matrix that is uniformly distributed on the periphery of the chamber. Premixed flames have many advantages in terms of control of temperature, products and pollutant concentrations. However, they introduce some dangers, like autoignition (in the supply system).

#### Eddy Break-Up model

The Eddy Break-Up model is the typical example of a mixed-is-burnt combustion model. It is based on the work of Magnussen, Hjertager, and Spalding and can be found in all commercial CFD packages. The model assumes that the reactions are completed at the moment of mixing, so that the reaction rate is completely controlled by turbulent mixing. Combustion is then described by a single-step global chemical reaction $F + \nu_s O \rightarrow (1+\nu_s) P$ in which F stands for fuel, O for oxidiser, and P for products of the reaction. Alternatively, we can have a multistep scheme, where each reaction has its own mean reaction rate. The mean reaction rate is given by $\bar{\dot\omega}_F=A_{EB} \frac{\epsilon}{k} min\left[\bar{C}_F,\frac{\bar{C}_O}{\nu}, B_{EB}\frac{\bar{C}_P}{(1+\nu)}\right]$ where $\bar{C}$ denotes the mean concentrations of fuel, oxidiser, and products respectively. A and B are model constants with typical values of 0.5 and 4.0 respectively. The values of these constants are fitted according to experimental results and they are suitable for most cases of general interest. It is important to note that these constants are only based on experimental fitting and they need not be suitable for all situations. Care must be taken especially in highly strained regions, where the mixing rate $\epsilon/k$ is large (flame-holder wakes, walls ...). In these regions, a positive reaction rate occurs and an artificial flame can be observed. CFD codes usually have some remedies to overcome this problem. This model largely over-predicts temperatures and concentrations of species like CO and other intermediate species. However, the Eddy Break-Up model enjoys popularity for its simplicity, steady convergence, and ease of implementation.

### Non-premixed combustion

Non-premixed combustion is a special class of combustion where fuel and oxidizer enter separately into the combustion chamber. The diffusion and mixing of the two streams must bring the reactants together for the reaction to occur. Mixing becomes the key characteristic of diffusion flames. Diffusion burners are easier and safer to operate than premixed burners. However, their efficiency is reduced compared to premixed burners. One of the major theoretical tools in non-premixed combustion is the passive scalar mixture fraction $Z$, which is the backbone of most of the numerical methods in non-premixed combustion.

#### Conserved scalar equilibrium models

The reactive problem is split into two parts. First, the problem of mixing, which consists of finding the location of the flame surface and is a non-reactive problem concerning the transport of a passive scalar; and second, the flame structure problem, which deals with the distribution of the reactive species inside the flamelet. To obtain the distribution inside the flame front we assume it is locally one-dimensional and depends only on time and the scalar coordinate.
We first make use of the following chain rules $\frac{\partial Y_k}{\partial t} = \frac{\partial Z}{\partial t}\frac{\partial Y_k}{\partial Z}$ and $\frac{\partial Y_k}{\partial x_j} = \frac{\partial Z}{\partial x_j}\frac{\partial Y_k}{\partial Z}$ together with the transformation $\frac{\partial }{\partial t}= \frac{\partial }{\partial \tau} + \frac{\partial Z}{\partial t} \frac{\partial }{\partial Z}$ (where $\tau$ denotes time in the flamelet coordinates $(\tau,Z)$). Upon substitution into the species transport equation (see #Governing Equations for Reacting Flows), we obtain $\rho \frac{\partial Y_k}{\partial t} + Y_k \left[ \frac{\partial \rho}{\partial t} + \frac{\partial \rho u_j}{\partial x_j} \right] + \frac{\partial Y_k}{\partial Z} \left[ \rho \frac{\partial Z}{\partial t} + \rho u_j \frac{\partial Z}{\partial x_j} - \frac{\partial}{\partial x_j}\left( \rho D \frac{\partial Z}{\partial x_j} \right) \right] = \rho D \left( \frac{\partial Z}{\partial x_j} \frac{\partial Z}{\partial x_j} \right) \frac{\partial^2 Y_k}{\partial Z^2} + \dot \omega_k$ The second and third terms on the LHS cancel out due to continuity and mixture fraction transport. The equation then boils down to $\frac{\partial Y_k}{\partial t} = \frac{\chi}{2} \frac{\partial ^2 Y_k}{\partial Z^2} + \dot \omega_k$ where $\chi = 2 D \left( \frac{\partial Z}{\partial x_j} \right)^2$ is called the scalar dissipation, which controls the mixing and provides the interaction between the flow and the chemistry. If the flame's dependence on time is dropped (even though the field $Z$ still depends on it), the balance reduces to $\dot \omega_k= -\frac{\chi}{2} \frac{\partial ^2 Y_k}{\partial Z^2}$ If the reaction is assumed to be infinitely fast, the resulting flame structure is in equilibrium and $\dot \omega_k= 0$. When the flame is in equilibrium, the flame configuration $Y_k(Z)$ is independent of strain.

##### Burke-Schumann flame structure

The Burke-Schumann solution is valid for irreversible, infinitely fast chemistry, with a reaction of the form $F + \nu_s O \rightarrow (1+\nu_s) P$. The flame is in equilibrium and therefore the reaction term is 0. Two possible solutions exist. The first is pure mixing (no reaction), with a linear dependence of the species mass fractions on $Z$: the fuel mass fraction is $Y_F=Y_F^0 Z$ and the oxidizer mass fraction is $Y_O=Y_O^0(1-Z)$, where $Y_F^0$ and $Y_O^0$ are the fuel and oxidizer mass fractions in the pure fuel and oxidizer streams respectively. The other solution has a discontinuous slope at the stoichiometric mixture fraction $Z_{st}$ and two linear profiles, one on the rich side and one on the lean side of it. Both reactant concentrations must be 0 at stoichiometric conditions, since the reactants become products infinitely fast. On the rich side the fuel mass fraction is $Y_F=Y_F^0 \frac{Z-Z_{st}}{1-Z_{st}}$ (with $Y_O=0$), and on the lean side the oxidizer mass fraction is $Y_O=Y_O^0 \frac{Z_{st}-Z}{Z_{st}}$ (with $Y_F=0$).

## Finite rate chemistry

### Non-premixed combustion

#### Flamelets based on conserved scalar

Peters (2000) defines flamelets as "thin diffusion layers embedded in a turbulent non-reactive flow field". If the chemistry is fast enough, it is active within a thin region where the conditions are at (or close to) stoichiometric, the "flame" surface. This thin region is assumed to be smaller than the Kolmogorov length scale and therefore the region is locally laminar. The flame surface is defined as an iso-surface of a certain scalar $Z$, the mixture fraction in #Non premixed combustion.
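The Burke-Schumann limits above are easy to evaluate numerically. Here is a minimal sketch; the pure-stream mass fractions and $Z_{st}$ below are example values only (roughly methane/air-like), not taken from any reference.

```python
import numpy as np

# Example inputs: pure-stream mass fractions and stoichiometric mixture fraction.
Y_F0, Y_O0, Z_st = 1.0, 0.233, 0.055
Z = np.linspace(0.0, 1.0, 201)

# Infinitely fast, irreversible chemistry: fuel and oxidizer cannot coexist.
Y_F = np.where(Z >= Z_st, Y_F0 * (Z - Z_st) / (1.0 - Z_st), 0.0)
Y_O = np.where(Z <= Z_st, Y_O0 * (Z_st - Z) / Z_st, 0.0)

# Pure-mixing (frozen) solution, for comparison.
Y_F_mix, Y_O_mix = Y_F0 * Z, Y_O0 * (1.0 - Z)

i_st = np.argmin(np.abs(Z - Z_st))
print(Y_F[i_st], Y_O[i_st])   # both ~0 at the stoichiometric mixture fraction
```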
The same equation used in #Conserved scalar equilibrium models is used here, but with a chemical source term different from 0: $\frac{\partial Y_k}{\partial t} = \frac{\chi}{2} \frac{\partial ^2 Y_k}{\partial Z^2} + \dot \omega_k.$ This approach is called the Stationary Laminar Flamelet Model (SLFM) and has the advantage that flamelet profiles $Y_k=f(Z,\chi)$ can be pre-computed and stored in a dataset or file, called a "flamelet library", with all the required complex chemistry. For the generation of such libraries, ready-to-use software is available, such as Softpredict's Combustion Simulation Laboratory COSILAB [1] with its relevant solver RUN1DL, which can be used for a variety of relevant geometries; see the various publications that are available for download. Other software tools are available, such as CHEMKIN [2] and CANTERA [3].

##### Flamelets in turbulent combustion

In turbulent flames the quantity of interest is $\widetilde{Y}_k$. In flamelets, the flame thickness is assumed to be much smaller than the Kolmogorov scale and obviously much smaller than the grid size. A distribution of the passive scalar within the cell is therefore needed. $\widetilde{Y}_k$ cannot be obtained directly from the flamelet library, $\widetilde{Y}_k \neq Y_F(\widetilde{Z},\widetilde{\chi})$, where $Y_F(Z,\chi)$ corresponds to the value obtained from the flamelet libraries. A generic solution can be expressed as $\widetilde{Y}_k= \int Y_F( Z,\chi) P(Z,\chi) \, dZ \, d\chi$ where $P(Z,\chi)$ is the joint Probability Density Function (PDF) of the mixture fraction and scalar dissipation, which accounts for the scalar distribution inside the cell and a priori depends on time and space. The simplest assumption is to use a constant value of the scalar dissipation within the cell, in which case the above equation reduces to $\widetilde{Y}_k= \int Y_F(Z,\widetilde{\chi}) P(Z) \, dZ$ $P(Z)$ is the PDF of the mixture fraction scalar, and simple models (such as a Gaussian or a beta PDF) can be built depending only on two moments of the scalar, the mean and the variance, $\widetilde{Z}, \widetilde{Z''^2}$. If the mixture fraction and the scalar dissipation are considered independent variables, $P(Z,\chi)$ can be written as $P(Z) P(\chi)$. The PDF of the scalar dissipation is assumed to be log-normal with variance unity: $\widetilde{Y}_k= \int Y_F(Z,\chi) P(Z) P(\chi) \, dZ \, d\chi$ In the Large Eddy Simulation (LES) context (see #LES equations for reacting flow), the probability density function is replaced by a subgrid PDF $\widetilde{P}$. The same equation holds, replacing averaged values with filtered values: $\widetilde{Y}_k= \int Y_F(Z,\chi) \widetilde{P}(Z) \widetilde{P}(\chi) \, dZ \, d\chi$ The assumptions made regarding the shapes of the PDFs are still justified. In LES combustion the subgrid variance is smaller than its RANS counterpart (part of the large-scale fluctuations are resolved) and therefore the modelled PDFs are thinner.

#### Intrinsic Low Dimensional Manifolds (ILDM)

Detailed mechanisms describing ignition, flame propagation and pollutant formation typically involve several hundred species and elementary reactions, prohibiting their use in practical three-dimensional engine simulations. Conventionally reduced mechanisms often fail to predict minor radicals and pollutant precursors. The ILDM method is an automatic reduction of a detailed mechanism, which assumes local equilibrium with respect to the fastest time scales, identified by a local eigenvector analysis.
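As a rough illustration of the time-scale separation that ILDM exploits, consider a made-up two-variable linear system (not a real kinetic mechanism): the eigenvalues of its Jacobian split into a fast and a slow mode, and the slow eigenvector plays the role of the low-dimensional manifold.

```python
import numpy as np

# Toy linear "mechanism" dy/dt = J y with one fast and one slow mode.
# The symmetric Jacobian below is invented purely for illustration (entries in 1/s).
J = np.array([[-1000.0,   10.0],
              [   10.0,   -1.0]])

eigvals, eigvecs = np.linalg.eigh(J)          # ascending order: fastest (most negative) first
print("time scales [s]:", -1.0 / eigvals)     # ~1e-3 s (fast) and ~1.1 s (slow)
print("slow (manifold) direction:", eigvecs[:, -1])
```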
In the reactive flow calculation, the species compositions are constrained to these manifolds. Conservation equations are solved for only a small number of reaction progress variables, thereby retaining the accuracy of detailed chemical mechanisms. This gives an effective way of coupling the details of complex chemistry with the time variations due to turbulence. The intrinsic low-dimensional manifold (ILDM) method (Maas 1992; Maas 1993) is a method for in-situ reduction of a detailed chemical mechanism based on a local time scale analysis. The method is based on the fact that trajectories starting from different points in the high-dimensional state space quickly relax onto lower-dimensional manifolds due to the fast reactions, while the movement along these lower-dimensional manifolds is governed by the slow reactions. It exploits this variety of time scales to systematically reduce the detailed mechanism. For a detailed chemical mechanism with N species, N different time scales govern the process. Assuming that all the time scales are relaxed amounts to assuming complete equilibrium, where the only variables required to describe the system are the mixture fraction, the temperature and the pressure; this results in a zero-dimensional manifold. Assuming that all but the slowest 'n' time scales are relaxed results in an 'n'-dimensional manifold, which requires the additional specification of 'n' parameters (called progress variables). In the ILDM method, the fast chemical reactions do not need to be identified a priori. An eigenvalue analysis of the detailed chemical mechanism is carried out, which identifies the fast processes in dynamic equilibrium with the slow processes. The computation of ILDM points can be expensive, and hence an in-situ tabulation procedure is used, which enables the calculation of only those points that are needed during the CFD calculation.

U. Maas, S.B. Pope. Simplifying chemical kinetics: Intrinsic low-dimensional manifolds in composition space. Comb. Flame 88, 239, 1992.

Ulrich Maas. Automatische Reduktion von Reaktionsmechanismen zur Simulation reaktiver Strömungen. Habilitationsschrift, Universität Stuttgart, 1993.

#### Conditional Moment Closure (CMC)

In Conditional Moment Closure (CMC) methods we assume that the species mass fractions are all correlated with the mixture fraction (in non-premixed combustion). From the probability density function we have $\overline{Y_k}= \int <Y_k|\eta> P(\eta) d\eta$ where $\eta$ is the sample space variable for $Z$. CMC consists of providing a set of transport equations for the conditional moments, which define the flame structure. Experimentally, it has been observed that temperature and chemical radicals are strongly non-linear functions of the mixture fraction. For a given species mass fraction we can decompose it into a mean and a fluctuation: $Y_k= \overline{Y_k} + Y'_k$ The fluctuations $Y_k'$ are usually very strong in time and space, which makes the closure of $\overline{\omega_k}$ very difficult. However, the alternative decomposition $Y_k= <Y_k|\eta> + y'_k$ can be used, where $y'_k$ is the fluctuation around the conditional mean, the "conditional fluctuation". Experimentally, it is observed that $y'_k \ll Y'_k$, which forms the basic assumption of the CMC method. Due to this property, better closure methods can be used, reducing the non-linearity of the mass fraction equations.
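To make the conditional decomposition concrete, here is a small sketch with synthetic data standing in for real samples (the fields below are invented for illustration only): it estimates $<Y_k|\eta>$ by binning samples in mixture fraction and confirms that the conditional fluctuations are far smaller than the unconditional ones.

```python
import numpy as np

rng = np.random.default_rng(0)
Z = rng.uniform(0.0, 1.0, 200_000)                             # synthetic mixture-fraction samples
Y = np.sin(np.pi * Z) ** 2 + 0.02 * rng.normal(size=Z.size)    # a scalar tightly correlated with Z

bins = np.linspace(0.0, 1.0, 51)
idx = np.clip(np.digitize(Z, bins) - 1, 0, 49)                 # bin index of each sample
Q = np.array([Y[idx == i].mean() for i in range(50)])          # <Y|eta> estimated per bin

y_cond = Y - Q[idx]          # conditional fluctuation y'
Y_unc = Y - Y.mean()         # unconditional fluctuation Y'
print(y_cond.std(), Y_unc.std())   # the conditional rms is far smaller
```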
The derivation of the CMC equations produces the following CMC transport equation, where $Q \equiv <Y_k|\eta>$ for simplicity: $\frac{ \partial Q}{\partial t} + <u_j|\eta> \frac{\partial Q}{\partial x_j} = \frac{<\chi|\eta> }{2} \frac{\partial ^2 Q}{\partial \eta^2} + \frac{ < \dot \omega_k|\eta> }{ <\rho| \eta >}$ In this equation, higher-order terms in Reynolds number have been neglected. (See Derivation of the CMC equations for the complete series of terms.) It is well known that closure of the unconditional source term $\overline {\dot \omega_k}$ as a function of the mean temperature and species ($\overline{Y}, \overline{T}$) will give rise to large errors. However, in CMC the conditionally averaged mass fractions contain more information and the fluctuations around the mean are much smaller. The first-order closure $< \dot \omega_k|\eta> \approx \dot \omega_k \left( Q, <T|\eta> \right)$ is a good approximation in zones which are not close to extinction.

##### Second order closure

A second-order closure can be obtained if conditional fluctuations are taken into account. For a chemical source term of the form $\dot \omega_k = k Y_A Y_B$ with the rate constant in Arrhenius form $k=A_0 T^\beta \exp [-T_a/T]$, the second-order closure is (Klimenko and Bilger 1999) $< \dot \omega_k|\eta> \approx < \dot \omega_k|\eta >^{FO} \left[1+ \frac{< Y''_A Y''_B |\eta>}{Q_A Q_B}+ \left( \beta + T_a/Q_T \right) \left( \frac{< Y''_A T'' |\eta>}{Q_AQ_T} + \frac{< Y''_B T'' |\eta>}{Q_BQ_T} \right) + ... \right]$ where $< \dot \omega_k|\eta >^{FO}$ is the first-order CMC closure and $Q_T \equiv <T|\eta>$. When the temperature exponent $\beta$ or $T_a/Q_T$ is large, the error of the first-order approximation increases. Improved predictions of minor pollutants can be obtained by using the above reaction rate for selected species like CO and NO.

##### Double conditioning

Close to extinction and reignition, the conditional fluctuations can be very large, and the primary CMC closure based on "small" fluctuations is no longer valid. A second variable $h$ can be chosen to define a doubly conditioned mass fraction $Q(x,t;\eta,\psi) \equiv <Y_i(x,t) |Z=\eta,h=\psi >$ Due to the strong dependence of chemical reactions on temperature, $h$ is advised to be a temperature-related variable (Kronenburg 2004). Scalar dissipation is not a good choice, due to its log-normal behaviour (the smallest scales give the highest dissipation). A much better choice is the sensible enthalpy or a progress variable. Doubly conditioned variables have much smaller conditional fluctuations and allow the existence of points with the same chemical composition which can be fully burning (high temperature) or just mixing (low temperature). The range of applicability is greatly increased and allows non-premixed and premixed problems to be treated without ad-hoc distinctions. The main problem is the closure of the new terms involving cross scalar transport. The double conditional CMC equation is obtained in a similar manner to the conventional CMC equations.

##### LES modelling

In an LES context a conditional filtering operator can be defined, and $Q$ therefore represents a conditionally filtered reactive scalar.

### Linear Eddy Model

The Linear Eddy Model (LEM) was first developed by Kerstein (1988). It is a one-dimensional model for representing the flame structure in turbulent flows.
In every computational cell a molecular diffusion and chemistry model is defined as $\frac{\partial}{\partial t} \left( \rho Y_k \right) = \frac{\partial}{\partial \eta} \left( \rho D_k \frac{\partial Y_k}{\partial \eta }\right)+ \dot \omega_k$ where $\eta$ is a spatial coordinate. The scalar distribution obtained can be seen as a one-dimensional reference field between the Kolmogorov scale and the grid scale. In a second stage, a series of stochastic rearrangement events takes place. These events represent the effect of a turbulent structure of size $l$, smaller than the grid size, at a location $\eta_0$ within the one-dimensional domain. This vortex distorts the field obtained from the one-dimensional equation, creating new maxima and minima in the interval $(\eta_0, \eta_0 + l)$. The vortex size $l$ is chosen randomly based on the inertial range of scales, while $\eta_0$ is obtained from a uniform distribution in $\eta$. The number of events is chosen to match the turbulent diffusivity of the flow.

### PDF transport models

Probability Density Function (PDF) methods are not exclusive to combustion, although they are particularly attractive for it. They provide more information than moment closures and are used to compute inhomogeneous turbulent flows; see the reviews by Dopazo (1993) and Pope (1994). PDF methods are based on the transport equation of the joint PDF of the scalars. Denote $P \equiv P(\underline{\psi}; x, t)$, where $\underline{\psi} = ( \psi_1,\psi_2 ... \psi_N)$ is the phase space for the reactive scalars $\underline{Y} = ( Y_1,Y_2 ... Y_N)$. The transport equation of the joint PDF is: $\frac{\partial <\rho | \underline{Y}=\underline{\psi}> P }{\partial t} + \frac{ \partial <\rho u_j | \underline{Y}=\underline{\psi}> P }{\partial x_j} = \sum^N_\alpha \frac{\partial}{\partial \psi_\alpha}\left[ \rho \dot{\omega}_\alpha P \right] - \sum^N_\alpha \sum^N_\beta \frac{\partial^2}{\partial \psi_\alpha \partial \psi_\beta} \left[ <D \frac{\partial Y_\alpha}{\partial x_i} \frac{\partial Y_\beta}{\partial x_i} | \underline{Y}=\underline{\psi}> \right] P$ where the chemical source term is closed. Another term appears on the right-hand side, which accounts for the effects of molecular mixing on the PDF; it is the so-called "micro-mixing" term. Equal diffusivities are used for simplicity, $D_k = D$. A more general approach is the velocity-composition joint PDF with $P \equiv P(\underline{V},\underline{\psi}; x, t)$, where $\underline{V}$ is the sample space of the velocity field $u,v,w$. This approach has the advantage of avoiding gradient-diffusion modelling. A similar equation to the above is obtained by combining the momentum and scalar transport equations. The PDF transport equation can be solved in two ways: through a Lagrangian approach using stochastic methods, or in an Eulerian way using stochastic fields.

#### Lagrangian

The main idea of Lagrangian methods is that the flow can be represented by an ensemble of fluid particles. Central to this approach are stochastic differential equations, in particular the Langevin equation.

#### Eulerian

Instead of stochastic particles, smooth stochastic fields can be used to represent the probability density function (PDF) of a scalar (or the joint PDF) involved in transport (convection), diffusion and chemical reaction (Valino 1998). A similar formulation was proposed by Sabelnikov and Soulard (2005), which removes part of the a-priori assumption of "smoothness" of the stochastic fields.
This approach is purely Eulerian and offers implementation advantages compared to Lagrangian or semi-Eulerian methods. Transport equations for scalars are often easy to programme and normal CFD algorithms can be used (see Discretisation of convective term). Although discretization errors are introduced by solving transport equations, this is partially compensated by the error introduced in Lagrangian approaches due to the numerical evaluation of means. A set of $N_s$ stochastic fields $\xi^j$ is used to represent the PDF: $P (\underline{\psi}; x,t) = \frac{1}{N_s} \sum^{N_s}_{j=1} \prod^{N}_{k=1} \delta \left[\psi_k -\xi_k^j(x,t) \right]$

### Other combustion models

#### MMC

Multiple Mapping Conditioning (MMC) (Klimenko and Pope 2003) is an extension of the #Conditional Moment Closure (CMC) approach combined with probability density function methods. MMC looks for the minimum set of variables that describes the particular turbulent combustion system.

#### Fractals

Derived from the #Eddy Dissipation Concept (EDC).

## References

[1] Bone, W.A., and Townend, D.T.A. (1927), Flame and Combustion in Gases, Longmans, Green, and Co., Ltd.

[2] (1999), "", Journal of Heat Transfer, Vol. 121, No. 4, pp. 770-73.

[3] Liñan, A. (1974), "The Asymptotic Structure of Counter Flow Diffusion Flames for Large Activation Energies", Acta Astronautica, Vol. 1, No. 7/8, pp. 1007-1039.

• Dopazo, C. (1993), "Recent developments in PDF methods", Turbulent Reacting Flows, ed. P. A. Libby and F. A. Williams.
• Fox, R.O. (2003), Computational Models for Turbulent Reacting Flows, ISBN 0-521-65049-6, Cambridge University Press.
• Kerstein, A. R. (1988), "A linear eddy model of turbulent scalar transport and mixing", Comb. Science and Technology, Vol. 60, pp. 391.
• Klimenko, A. Y., Bilger, R. W. (1999), "Conditional moment closure for turbulent combustion", Progress in Energy and Combustion Science, Vol. 25, pp. 595-687.
• Klimenko, A. Y., Pope, S. B. (2003), "The modeling of turbulent reactive flows based on multiple mapping conditioning", Physics of Fluids, Vol. 15, Num. 7, pp. 1907-1925.
• Kronenburg, A. (2004), "Double conditioning of reactive scalar transport equations in turbulent non-premixed flames", Physics of Fluids, Vol. 16, Num. 7, pp. 2640-2648.
• Griffiths, J. F. (1994), "Reduced Kinetic Models and Their Application to Practical Combustion Systems", Prog. in Energy and Combustion Science, Vol. 21, pp. 25-107.
• Peters, N. (2000), Turbulent Combustion, ISBN 0-521-66082-3, Cambridge University Press.
• Poinsot, T., Veynante, D. (2001), Theoretical and Numerical Combustion, ISBN 1-930217-05-6, R. T. Edwards.
• Pope, S. B. (1994), "Lagrangian PDF methods for turbulent flows", Annu. Rev. Fluid Mech, Vol. 26, pp. 23-63.
• Sabel'nikov, V., Soulard, O. (2005), "Rapidly decorrelating velocity-field model as a tool for solving one-point Fokker-Planck equations for probability density functions of turbulent reactive scalars", Physical Review E, Vol. 72, pp. 016301-1-22.
• Valino, L. (1998), "A field Monte Carlo formulation for calculating the probability density function of a single scalar in a turbulent flow", Flow, Turb. and Combustion, Vol. 60, pp. 157-172.
• Westbrook, Ch. K., Dryer, F. L. (1984), "Chemical Kinetic Modeling of Hydrocarbon Combustion", Prog. in Energy and Combustion Science, Vol. 10, pp. 1-57.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 578, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9043929576873779, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/20541/how-does-instant-charging-of-one-plate-affect-the-potential-of-the-other-plate-o?answertab=votes
# How does instant charging of one plate affect the potential of the other plate of a floating capacitor?

If I have an uncharged floating capacitor and I instantaneously connect one plate to some potential, then that plate will acquire some charge. In practice, the other floating plate will instantaneously rise to the same potential as the first one. But wouldn't this require the second plate having the same charge as the first?

- Your presumption is wrong. The other plate will in fact retain its potential and charge if disconnected. – C.R. Feb 5 '12 at 0:30

## 4 Answers

Everything you've probably learned about capacitors, especially including the statement that opposite plates of the capacitor carry opposite charges, applies only to a capacitor in a circuit. If your capacitor is floating, so that the plates are not connected to anything, the charge on the plates is not going to change. If you hook up only one plate to a battery or something that pumps charge into it, the other plate is not magically going to acquire any charge. All that does happen is that the electric field from the first plate will push the electrons of the second plate toward one side or the other. You'll get a separation of the charge along the thickness of the second plate, but the total amount of charge on the plate doesn't change.

- The charge will redistribute as follows, reading left to right across the four faces (outer and inner face of the charged plate, then inner and outer face of the free plate): $$\frac{+Q}{2}\,\Big|\,\frac{+Q}{2} \qquad\qquad \frac{-Q}{2}\,\Big|\,\frac{+Q}{2}$$ Note that the charges on the inner and outer sides of each plate are distinct. In a capacitor (even one not in a circuit), it is mandatory that the opposing faces have opposite charge, but the charge on the outer surfaces can have any value. This comes from the fact that the field inside a metal must be zero (as it cannot carry a current in electrostatic situations). The remaining charge distribution can be found through charge conservation. Whenever you have a set of n parallel plates, and you give some charges to each of the plates, the easiest way to calculate the distribution is to take the net charge, divide by two, and assign this charge to both of the outer faces of the outermost plates individually. Now use charge conservation per plate and the "opposing faces opposite charge" rule to calculate the charges on the remaining plates. For example, if I have three plates, and I give charges Q, 2Q, 3Q to the plates, the distribution is first written as ($\frac{Q+2Q+3Q}{2}=3Q$): $$3Q | \qquad|\qquad | 3Q$$ Applying charge conservation on the outermost plates (their net charges should be Q and 3Q respectively): $$3Q |-2Q \qquad|\qquad 0| 3Q$$ Using the opposite charges rule: $$3Q |-2Q \qquad 2Q|0\qquad 0| 3Q$$ Now, back to your original question. As you can see, the second plate acquired a charge distribution but not a net charge. Now, the potential of the second plate will be $V_0-\frac{Qd}{2\epsilon_0 A}$ with the usual meanings for symbols ($V_0$ is the potential of the first plate). This is easily obtained from the fact that the net field due to the first plate is $\frac{Q}{2\epsilon_0 A}$. Yes, the second plate has acquired a potential, but there's no harm in that. You do not need a wire to transmit potential; it is transmitted by the field (which in turn is transmitted by EM waves). For example, in an empty region of space, bring a charge q to a distance r from some point. That point just acquired a potential $\frac{q}{4\pi\epsilon_0 r}$, without any physical contact.
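To make the face-by-face bookkeeping described above concrete, here is a minimal sketch in plain Python (charges in units of Q; the helper name is just for illustration).

```python
def face_charges(net_charges):
    """Face charges for a stack of parallel plates, given each plate's net charge."""
    total = sum(net_charges)
    left = total / 2.0              # both outermost faces carry Q_total / 2
    faces = []                      # (left face, right face) for each plate
    for q in net_charges:
        right = q - left            # charge conservation on this plate
        faces.append((left, right))
        left = -right               # opposing faces carry opposite charges
    return faces

# Two plates: charged plate with Q (=1 here), free plate with 0.
print(face_charges([1.0, 0.0]))       # [(0.5, 0.5), (-0.5, 0.5)]
# Three plates with Q, 2Q, 3Q as in the example above:
print(face_charges([1.0, 2.0, 3.0]))  # outer faces 3Q; inner faces -2Q/+2Q and 0/0
```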
Summing up:

1. The second plate does NOT acquire the same potential as the first one.
2. The second plate will NOT acquire a net charge, but there will be a redistribution of charges.
3. Potential needs no medium to be transmitted.
4. The potential of a plate or a point can change without the charge of the plate/point changing.

- "This comes from the fact that field inside a metal must be zero." You may want to clarify that. I personally come to the conclusion with Gauss' law, but maybe there are other ways to look at it. – C.R. Feb 5 '12 at 14:48 Well, if it wasn't zero, then we would have a current. Field at any point inside a conducting material is zero (but not in cavities in the metal). This is pretty much assumed while dealing with electrostatics as we only deal with currentless situations. Even if we considered current, the conductor would have to polarize and stop the current at some point or the other. – Manishearth♦ Feb 5 '12 at 14:57 @KarsusRen How did you come to the conclusion via Gauss' law? If you used any formula for field and worked backwards, that's most probably circular reasoning, as most of the field formulae (as well as the fact that charges distribute themselves on the surface of conductors) come from the conjunction of this fact and Gauss' law. – Manishearth♦ Feb 5 '12 at 14:58 Added clarification.. – Manishearth♦ Feb 5 '12 at 15:02 Potential is just energy. The field of the first plate will polarize the second and therefore raise its potential. – John McVirgo Feb 6 '12 at 6:23

You have three conducting systems, and therefore three relevant potential levels. (A) The potential of the disconnected capacitor plate. (B) The potential of the connected capacitor plate and its wire lead. (C) "Ground" ... When you say that you change the potential of (B), you must be talking about "relative to ground", let's say earth ground. If you model this as an electronic circuit, you have a large capacitance between (A) and (B) and a negligible (stray) capacitance between (A) and ground. Therefore, it's a voltage divider, and the voltage of (A) is a weighted average between the voltages of (B) and ground. But actually ~100% of the weight is in (B), and ~0% in ground. So (A) will follow the voltage of (B) whatever it is. (A) will not particularly care about the voltage of ground. Why would it? Ground is nothing special, it's just the wiring in the walls. To sum up, the capacitor is uncharged (no charge on either side of either plate), and (A) follows the voltage of (B). Or if you want to be pedantic, the capacitor is almost uncharged, and just a tiny bit of charge moves off the inside of the plate in (A) and into the (stray) capacitor connecting (A) and ground (i.e. the displaced charge would be coating the outside surface of the plate and all the surfaces of the disconnected lead wire, and meanwhile an equal and opposite charge would be coating pipes and other metal objects in the room). One aspect of this answer is, changing the voltage of something does not have to be particularly difficult. Unless that something has a large capacitance to ground, it takes only the slightest "nudge" to make its voltage jump up or down by any amount.

- Nice analysis. A nit: in the second paragraph, I would say that (A) follows (B) instead of vice versa, since (A) is the floating plate. – Art Brown Jun 15 '12 at 22:31 thanks, i corrected it – Steve B Jun 15 '12 at 23:37
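A tiny numerical sketch of this voltage-divider picture; the capacitance values below are invented for illustration only.

```python
# Plate A floats; it sees C_ab to the driven plate B and a small stray C_ag to ground.
C_ab = 100e-12   # farads, floating plate A to driven plate B (made-up value)
C_ag = 0.1e-12   # farads, stray capacitance from A to ground (made-up value)
V_b = 10.0       # volts applied to plate B

# Plate A starts uncharged, so C_ab*(V_a - V_b) + C_ag*(V_a - 0) = 0.
V_a = C_ab * V_b / (C_ab + C_ag)
print(V_a)       # ~9.99 V: plate A follows plate B almost exactly
```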
Capacitor plates always have nearly the same potential, because the plates are at nearly the same position in space. Very simple :) If the capacitor plates have a large potential difference, then there must be HUGE opposite charges on the plates.

- This answer is simply wrong. Read Manishearth's. – C.R. Feb 5 '12 at 12:37
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 5, "mathjax_display_tex": 4, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9480341076850891, "perplexity_flag": "middle"}
http://mathoverflow.net/revisions/44161/list
1) $f$ is isomorphic in codimension $d$ if it is an isomorphism near any codimension $d$ point in either $X$ or $Y$. Equivalently, there exist closed subsets $Z\subseteq X$ and $W\subseteq Y$ such that ${\rm codim}_XZ\geq d+1$, ${\rm codim}_YW\geq d+1$, and $f:X\setminus Z\overset{\simeq}{\longrightarrow} Y\setminus W$ is an isomorphism.

2) By the Theorem of the Base of Néron–Severi, if $f$ is proper of finite type, then $N_1(X/Y)_{\mathbb Q}$ and $N^1(X/Y)_{\mathbb Q}$ are finite-dimensional vector spaces of the same dimension. This is actually more than you need, because even without the finite type assumption it is true that the intersection pairing $N_1(X/Y)_{\mathbb Q}\times N^1(X/Y)_{\mathbb Q}\to {\mathbb Q}$ is non-degenerate.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 24, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.924602210521698, "perplexity_flag": "head"}
http://physics.stackexchange.com/questions/17933/sinusoidal-vs-exponential-wave-functions-with-schrodingers-equation?answertab=oldest
# Sinusoidal vs exponential wave functions with Schrodinger's equation

When solving Schrodinger's equation, we end up with the following differential equation: $$\frac{{d}^{2}\psi}{dx^2} = -\frac{2m(E - V)}{\hbar^2}\psi$$ As I understand it, the next step is to guess the wave function, so let $\psi = {e}^{i\kappa x}$ or let $\psi = \sin(\kappa x)$, both of which I understand as they satisfy the differential equation, but when would you use one over the other? My textbook seems to use both in different scenarios, but I can't seem to figure out what the conditions are for using exponential over sinusoidal and vice versa.

- The condition is that the quantity multiplying $\psi$ on the right is positive or negative. If it is positive, then you have an exponential, if it is negative, a sinusoid. The sinusoids are complex exponentials. – Ron Maimon Dec 7 '11 at 3:35

## 2 Answers

This is really more of a mathematical question than a physical one, but in any case, here's a simple explanation: Hopefully you are familiar with the idea of a basis of a vector space. And hopefully you also know that for any given vector space, you can choose many different possible bases. For example, if the 2D plane is your vector space, you could choose the standard unit vectors $\hat{x}$ and $\hat{y}$, or you could choose $(\hat{x} \pm \hat{y})/\sqrt{2}$. Let's call these latter two $\hat{u}$ and $\hat{v}$. You can always express the members of one basis as linear combinations of the members of any other basis. In the 2D plane example: $$\begin{align}\hat{u} &= \frac{\hat{x} + \hat{y}}{\sqrt{2}} & \hat{x} &= \frac{\hat{u} + \hat{v}}{\sqrt{2}} \\ \hat{v} &= \frac{\hat{x} - \hat{y}}{\sqrt{2}} & \hat{y} &= \frac{\hat{u} - \hat{v}}{\sqrt{2}}\end{align}$$ Since every vector can be expressed as a linear combination of basis vectors, if you want to understand what happens to the whole vector space under some transformation (like rotation), then you can just look at the effect of the transformation on the basis vectors. You don't need to examine the effect of the transformation on every vector individually. That's why basis vectors are useful. As an example, consider what happens when you rotate the 2D plane by some angle. You wouldn't (and couldn't) draw the rotated form of every single vector in the plane; you just draw the rotated $x$ and $y$ axes, and that's enough for you to understand what happened to the plane and figure out the effect of the rotation on any other vector.

This is exactly what's going on with the solutions to the Schroedinger equation. All the possible solutions to the equation form a vector space, called a Hilbert space. And like any other vector space, you can choose any of an infinite number of possible bases for the space. The set of all complex exponential functions $$\{e^{ikx}|k\in\mathbb{R}\}$$ is one possible basis. You can construct any function by making a suitable linear combination of these exponential functions, $$\psi(x) = \int\underbrace{\frac{1}{\sqrt{2\pi}}\psi(k)}_{\text{coefficient}}\underbrace{\vphantom{\frac{1}{\sqrt{2\pi}}}e^{ikx}}_{\text{basis vector}}\mathrm{d}k$$ (You may recognize this as the inverse Fourier transform.)
Another possible basis is the set of all sine and cosine functions, $$\{\sin(kx),\cos(kx)|k\in\mathbb{R}_+\}$$ You can also express any function as a linear combination of sine and cosine functions: $$\psi(x) = \int\biggl(\underbrace{i\sqrt{\frac{2}{\pi}}\psi_s(k)}_{\text{coefficient}}\underbrace{\vphantom{\frac{1}{\sqrt{2\pi}}}\sin(kx)}_{\text{basis vector}} + \underbrace{\sqrt{\frac{2}{\pi}}\psi_c(k)}_{\text{coefficient}}\underbrace{\vphantom{\frac{1}{\sqrt{2\pi}}}\cos(kx)}_{\text{basis vector}}\biggr)\mathrm{d}k$$ Just as with the 2D plane vectors, you can express the elements of either basis in terms of the other basis: $$\begin{align}\sin(kx) &= \frac{e^{ikx} - e^{-ikx}}{2i} & e^{ikx} &= \cos(kx) + i\sin(kx) \\ \cos(kx) &= \frac{e^{ikx} + e^{-ikx}}{2} & e^{-ikx} &= \cos(kx) - i\sin(kx)\end{align}$$ Depending on the particular situation you're considering, it may be more convenient to use one basis or the other. For example, if you have a potential that goes to infinity at some point (e.g. infinite square well), you know that the wavefunction becomes zero there, and thus it's to your advantage to choose a basis where the basis functions actually do go to zero somewhere: the sine/cosine basis, rather than the exponential one. But in general, any problem can be done with either basis. It's simply a choice of convenience. - The most general solution is $Ae^{-ikx} + Be^{ikx}$ Depending on the coefficients $A$ and $B$ this can be equal to either of the example wave functions you gave, and these coefficients will be determined by boundary conditions. -
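As a quick symbolic check (assuming sympy is available) that the exponential and sinusoidal guesses all satisfy $\psi'' = -k^2\psi$, so that the choice between them is a matter of basis and boundary conditions rather than of the equation itself:

```python
import sympy as sp

x, k = sp.symbols("x k", positive=True)
candidates = (sp.exp(sp.I * k * x), sp.exp(-sp.I * k * x), sp.sin(k * x), sp.cos(k * x))
for psi in candidates:
    residual = sp.simplify(sp.diff(psi, x, 2) + k**2 * psi)
    print(psi, "->", residual)   # residual is 0 for every candidate
```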
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 13, "mathjax_display_tex": 7, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.934004008769989, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/129351/extremely-hard-geometric-problem
# Extremely hard geometric problem

Given a triangle ABC. BL is the bisector of angle ABC, H is the orthocenter and P is the mid-point of AC. PH intersects BL at Q. If $\angle ABC= \beta$, find the ratio $PQ:HQ$. If $QR\perp BC$ and $QS \perp AB$, prove that the orthocenter lies on $RS$.

- What is $S$? A diagram would be nice. – robjohn♦ Apr 8 '12 at 17:36 Unless there are further constraints on $R$ and $S$, the claim to be proven is not generally true. – Ilmari Karonen Apr 8 '12 at 17:48 Ah, I see... presumably $R$ is supposed to lie on $BC$ and $S$ on $AB$. – Ilmari Karonen Apr 8 '12 at 17:58 @robjohn: I added a picture I drew quickly in GeoGebra, using the assumptions stated in the previous comment. – Ilmari Karonen Apr 8 '12 at 18:10 The answer to the first question is: $HQ:PQ=\frac{2 \cos \beta}{1-\cos\beta}$ – Adam Apr 8 '12 at 20:15

## 1 Answer

In the figures below (not reproduced here), I have added the circumcenter, $U$, and the centroid, $E$. I have also placed $L$ on the circumcircle. Note that since both are perpendicular to $\overline{AC}$, we have $\overline{BH}\,||\,\overline{UP}$; furthermore, $|\overline{BH}|=2|\overline{UP}|$. The latter is because $\triangle PUE$ is similar to $\triangle BHE$ and $$P=\frac{A+C}{2}\text{ and }E=\frac{A+B+C}{3}\tag{1}$$ so that $$P-E=\frac{A-2B+C}{6}\text{ and }E-B=\frac{A-2B+C}{3}\tag{2}$$ Thus, $$|\overline{UP}|=R\cos(B)\text{ and }|\overline{BH}|=2R\cos(B)\tag{3}$$ where $R$ is the circumradius of $\triangle ABC$. Since the line containing $\overline{UP}$ is the perpendicular bisector of $\overline{AC}$, the point at which $\overrightarrow{UP}$ intersects the circumcircle of $\triangle ABC$ splits the arc between $A$ and $C$ in half. Of course, the bisector of $\angle ABC$ also splits the arc between $A$ and $C$ in half. Thus, the perpendicular bisector of $\overline{AC}$ and the bisector of $\angle ABC$ meet on the circumcircle at $L$.

Note that $\triangle BHQ$ is similar to $\triangle LPQ$. Equation $(3)$ gives that $|\overline{UP}|=R\cos(B)$ so that $$|\overline{PL}|=R(1-\cos(B))\tag{4}$$ Therefore, $(3)$ and $(4)$ yield $$\begin{align} |\overline{HQ}|/|\overline{PQ}| &=|\overline{BQ}|/|\overline{LQ}|\\ &=|\overline{HB}|/|\overline{PL}|\\ &=\frac{2\cos(B)}{1-\cos(B)}\tag{5} \end{align}$$ which answers the first part. Because $\triangle BUL$ is isosceles with central angle $2A+B=\pi-(C-A)$, we have $$|\overline{BL}|=2R\sin\left(A+\frac{B}{2}\right)=2R\cos\left(\frac{C-A}{2}\right)\tag{6}$$ Equation $(5)$ yields that $|\overline{BQ}|/|\overline{BL}|=\frac{2\cos(B)}{1+\cos(B)}$. Thus, $(6)$ gives $$|\overline{BQ}|=2R\cos\left(\frac{C-A}{2}\right)\frac{2\cos(B)}{1+\cos(B)}\tag{7}$$ Let $X$ be the intersection of $\overline{BQ}$ and $\overline{RS}$. Since $X$ is on the angle bisector of $\angle ABC$, $\overline{RS}$ is perpendicular to $\overline{BQ}$ and $|\overline{BR}|=|\overline{BS}|$. Thus, $|\overline{BR}|/|\overline{BQ}|=|\overline{BX}|/|\overline{BR}|=\cos(B/2)$. Therefore, $$\frac{|\overline{BX}|}{|\overline{BQ}|}=\cos^2(B/2)=\frac{1+\cos(B)}{2}\tag{8}$$ Equations $(7)$ and $(8)$ yield $$|\overline{BX}|=2R\cos\left(\frac{C-A}{2}\right)\cos(B)\tag{9}$$ Since $\angle HBC=\frac\pi2-C$ and $\angle QBC=\frac{B}{2}$ we get that $\angle HBQ=\frac{C-A}{2}$. Using $(3)$, the orthogonal projection of $\overline{BH}$ onto $\overline{BQ}$ has length $2R\cos(B)\cos\left(\frac{C-A}{2}\right)$. Thus, the orthogonal projection of $H$ onto $\overline{BQ}$ is $X$. Therefore, $H$ lies on $\overline{RS}$.
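A quick numerical check of the ratio in $(5)$, using a made-up triangle rather than anything from the original post:

```python
import numpy as np

# Any non-degenerate, non-isosceles triangle with an acute angle at B will do.
A, B, C = np.array([0.0, 0.0]), np.array([4.0, 0.0]), np.array([1.0, 3.0])

def circumcenter(A, B, C):
    # Solve |X-A|^2 = |X-B|^2 and |X-A|^2 = |X-C|^2, a 2x2 linear system.
    M = 2.0 * np.array([B - A, C - A])
    rhs = np.array([B @ B - A @ A, C @ C - A @ A])
    return np.linalg.solve(M, rhs)

U = circumcenter(A, B, C)
H = A + B + C - 2.0 * U                 # orthocenter, via the Euler-line identity
P = (A + C) / 2.0                       # midpoint of AC

# Q = intersection of line PH with the bisector of angle B.
d1 = H - P
d2 = (A - B) / np.linalg.norm(A - B) + (C - B) / np.linalg.norm(C - B)
s, _ = np.linalg.solve(np.column_stack([d1, -d2]), B - P)   # P + s*d1 = B + t*d2
Q = P + s * d1

cosB = (A - B) @ (C - B) / (np.linalg.norm(A - B) * np.linalg.norm(C - B))
print(np.linalg.norm(H - Q) / np.linalg.norm(P - Q), 2 * cosB / (1 - cosB))  # should agree
```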
- @AdamAndersson: This answer has been accepted and unaccepted twice. Is there something that you would like to see that is not here? – robjohn♦ May 8 '12 at 19:45 No, your answer is great. What do you use to draw graphics? – Adam Jun 24 '12 at 18:19 – robjohn♦ Jun 24 '12 at 18:26 – Adam Jun 24 '12 at 18:44
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 71, "mathjax_display_tex": 9, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9372857809066772, "perplexity_flag": "head"}
http://www.oilfieldwiki.com/wiki/Mineral_physics
# Mineral physics

From Oilfield Wiki - the Oilfield Encyclopedia

Mineral physics is the science of materials that compose the interior of planets, particularly the Earth. It overlaps with petrophysics, which focuses on whole-rock properties. It provides information that allows interpretation of surface measurements of seismic waves, gravity anomalies, geomagnetic fields and electromagnetic fields in terms of properties in the deep interior of the Earth. This information can be used to provide insights into plate tectonics, mantle convection, the geodynamo and related phenomena. Laboratory work in mineral physics requires high pressure measurements. The most common tool is a diamond anvil cell, which uses diamonds to put a small sample under pressure that can approach the conditions in the Earth's interior.

## Creating high pressures

The pace of progress in mineral physics has been determined, to a large extent, by the technology for reproducing the high pressures and temperatures in the Earth's interior. The most common tools for achieving this have been:

### Shock compression

Many of the pioneering studies in mineral physics involved explosions or projectiles that subjected a sample to a shock. For a brief time interval, the sample is under pressure as the shock wave passes through. Pressures as high as any in the Earth have been achieved by this method. However, the method has some disadvantages. The pressure is very non-uniform and the compression is not isentropic, so the shock wave heats the sample as it passes. The conditions of the experiment must be interpreted in terms of a set of pressure-density curves called the Hugoniot curves.[1]

### Multi-anvil press

Multi-anvil presses involve an arrangement of anvils to concentrate pressure from a press onto a sample. Unlike shock compression, the pressure exerted is steady, and the sample can be heated using a furnace. Pressures equivalent to depths of 700 km and temperatures of 1500°C can be attained. The apparatus is very bulky and cannot achieve pressures like those in the diamond anvil cell (below), but it can handle much larger samples that can be examined after the experiment.[2]

### Diamond anvil cell

Schematic of the core of a diamond anvil cell; the diamond size is a few millimeters at most.

The diamond anvil cell is a small table-top device for concentrating pressure. It can compress a small (sub-millimeter sized) piece of material to extreme pressures, which can exceed 3,000,000 atmospheres (300 gigapascals).[3] This is beyond the pressures at the center of the Earth. The concentration of pressure at the tip of the diamonds is possible because of their hardness, while their transparency and high thermal conductivity allow a variety of probes to be used to examine the state of the sample. The sample can be heated to thousands of degrees.

## Properties of materials

### Equations of state

To deduce the properties of minerals in the deep Earth, it is necessary to know how their density varies with pressure and temperature. Such a relation is called an equation of state (EOS). A simple example of an EOS that is predicted by the Debye model for harmonic lattice vibrations is the Mie–Grüneisen equation of state: $\left(\frac{\partial P}{\partial T} \right)_V = \frac{\gamma_D}{V}C_V,$ where $$C_V$$ is the heat capacity and $$\gamma_D$$ is the Debye gamma. The latter is one of many Grüneisen parameters that play an important role in high-pressure physics.
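As a rough numerical illustration of this relation, the sketch below integrates it at fixed volume to estimate a "thermal pressure"; all input values are invented, order-of-magnitude numbers, not data from the cited references.

```python
# Mie-Grüneisen sketch: (dP/dT)_V = gamma_D * C_V / V, integrated over a
# temperature rise at constant volume. All numbers are made up for illustration.
gamma_D = 1.5        # Debye-Grüneisen parameter (dimensionless)
V = 1.1e-5           # molar volume, m^3/mol
C_V = 120.0          # molar heat capacity, J/(mol K), taken constant here
dT = 1000.0          # temperature rise at fixed volume, K

dP_dT = gamma_D * C_V / V        # (dP/dT)_V in Pa/K
delta_P = dP_dT * dT             # resulting thermal pressure, Pa
print(dP_dT, delta_P / 1e9)      # ~1.6e7 Pa/K and ~16 GPa
```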
A more realistic EOS is the Birch–Murnaghan equation of state.[4]

### Interpreting seismic velocities

Inversion of seismic data gives profiles of seismic velocity as a function of depth. These must still be interpreted in terms of the properties of the minerals. A very useful heuristic was discovered by Francis Birch: plotting data for a large number of rocks, he found a linear relation between the compressional wave velocity $$v_p$$ and the density $$\rho$$ for rocks and minerals of a constant average atomic weight $$\overline{M}$$:[5][6] $v_p = a(\overline{M}) + b \rho$, where the intercept $a(\overline{M})$ depends on the mean atomic weight. This makes it possible to extrapolate known velocities for minerals at the surface to predict velocities deeper in the Earth.

### Other physical properties

• Viscosity
• Creep (deformation)
• Melting
• Electrical conduction and other transport properties

## References

• "Studying the Earth's formation: The multi-anvil press at work". Lawrence Livermore National Laboratory. Retrieved 29 September 2010.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 5, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.870037853717804, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/16788/solution-technique-to-optimize-sets-of-constraint-functions-with-objective-funct
# Solution Technique to Optimize Sets of Constraint Functions with Objective Function that is Heaviside Step Function

I have the following constraint inequalities and equalities: $$Ax \leq b$$ $$A_{eq}x = b_{eq}$$ The problem is that the objective function, which I am asked to minimize, is defined as $$f=\sum\limits_{i=1}^{m}u([\sum\limits_{j=1}^{n}k_{ij}x_j]-c_{i})$$ where $u(x)$ is the Heaviside step function. Strictly speaking, this is not a linear programming problem (although quite close!), so it can't be attacked by the standard linear programming techniques. I'm aware that I can approximate the step function by a smooth function, but this is not the route I plan to take now. What are the techniques that are available for this kind of problem?

- One choice is to replace the step function with a smooth approximation, and then to use some nonlinear solver. – user1709 Jan 8 '11 at 17:28 @Slowsolver, is there another choice? – Graviton Jan 9 '11 at 11:21 The objective should be to minimize $\sum_i u(t_i)$, not $\sum_i t_i$, right? I like Fanfan's approach though I'm not sure if it'll work in this case since if $\sum_j k_{ij} x_j - c_i \leq 0$, then $t_i$ is not necessarily 0 though it needs to be. – user32021 May 23 '12 at 10:52 Strike that. It would still work because of the minimization objective. If the objective were to maximize, then it would not. – user32021 May 23 '12 at 10:57

## 1 Answer

Your problem can be converted into a mixed-integer program in the following way: for each term in the objective function of the form $$u\Bigl(\sum_j k_{ij} x_j - c_i\Bigr) ,$$ introduce an auxiliary binary variable $t_i \in \{0,1\}$ and add the constraint $$\sum_j k_{ij} x_j - c_i \le M t_i$$ where $M$ is a sufficiently large constant (it has to be larger than all possible values for the argument of $u$). You can then replace the objective function by the sum of the binary variables, i.e. minimize $\sum_i t_i$, and check that the constraint ensures that the problem is equivalent to the original one (note that this assumes the convention $u(0)=0$). Mixed-integer programs are harder to solve than linear programs, but you are guaranteed to find a globally optimal solution exactly. -
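A minimal sketch of this big-M construction using the PuLP modelling library (assuming PuLP with its bundled CBC solver is installed; the arrays k, c, the variable bounds, and M below are made-up illustration data, and the original constraints A x <= b, A_eq x = b_eq are only indicated by a comment):

```python
import pulp

# Made-up data: 2 unknowns, 2 step terms. M must exceed any achievable value
# of sum_j k[i][j]*x[j] - c[i]; with the bounds below, 100 is safely large.
k = [[1.0, 2.0], [3.0, -1.0]]
c = [4.0, 1.0]
M = 100.0

prob = pulp.LpProblem("minimize_step_count", pulp.LpMinimize)
x = [pulp.LpVariable(f"x{j}", lowBound=-10, upBound=10) for j in range(2)]
t = [pulp.LpVariable(f"t{i}", cat=pulp.LpBinary) for i in range(len(c))]

prob += pulp.lpSum(t)                       # objective: number of "active" step terms
for i in range(len(c)):
    prob += (pulp.lpSum(k[i][j] * x[j] for j in range(2)) - c[i] <= M * t[i])
# The original constraints A x <= b and A_eq x = b_eq would be added here
# in exactly the same way.

prob.solve()
print([v.value() for v in x], [v.value() for v in t])
```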
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 10, "mathjax_display_tex": 5, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9304323792457581, "perplexity_flag": "head"}
http://mathhelpforum.com/advanced-algebra/209984-equation-planes.html
# Thread:

1. ## Equation of planes

Hi, can anyone explain to me why the equation of a plane (plane 3) which contains the intersection line of two other planes (plane 1 and plane 2) is the sum of the equations of plane 1 and plane 2? Note the planes all go through the origin. I.e., if plane 1 is ax+by+cz=0 and plane 2 is dx+ey+fz=0, then a plane containing the intersection line of plane 1 and plane 2 is: (a+d)x + (b+e)y + (c+f)z = 0. Why?

2. ## Re: Equation of planes

Originally Posted by darren86

Hi, can anyone explain to me why the equation of a plane (plane 3) which contains the intersection line of two other planes (plane 1 and plane 2) is the sum of the equations of plane 1 and plane 2?

Notation: $\mathcal{R}=<x,y,z>$ and $P$ is a point. Here are two planes, $N\cdot(\mathcal{R}-P)=0~\&~M\cdot(\mathcal{R}-P)=0,~N\not \parallel M,$ both containing $P$. The normals are $N~\&~M$. The equation of the line of their intersection is $P+t(N\times M)$. Because $(N+M)\cdot(N\times M)=0$, the plane $(N+M)\cdot(\mathcal{R}-P)=0$ contains the line of intersection and has normal $N+M$. NOTE that the answer is not unique. But it explains your question.
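A quick numerical check of the key identity $(N+M)\cdot(N\times M)=0$, with made-up normals for the two planes through the origin:

```python
import numpy as np

N = np.array([1.0, 2.0, -1.0])     # normal of plane 1 (ax+by+cz=0), made-up values
M = np.array([0.5, -1.0, 3.0])     # normal of plane 2 (dx+ey+fz=0), made-up values

direction = np.cross(N, M)         # direction of the intersection line (through the origin)
print(np.dot(N + M, direction))    # 0 up to rounding: the line lies in the "sum" plane
```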
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 9, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9369591474533081, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/74944/on-bounded-homogeneous-connected-domains-of-cn/74959
## On bounded homogeneous connected domains of C^n

So let $D\subseteq \mathbb{C}^n$ be a bounded connected open set with a transitive action of its group of biholomorphisms (which we denote by $Hol(D)$). Note that I'm not assuming that $D$ is symmetric. We thus have that $D$ is "homeomorphic" to $Hol(D)/K$ where $K=Stab(d_0)$ for some $d_0\in D$. In the special case where $Hol(D)$ is a real Lie group and $K$ is a maximal compact of $Hol(D)$, then by a theorem of Elie Cartan we have that $Hol(D)/K$ is homeomorphic to $\mathbb{R}^m$ and thus contractible. Under my assumptions: (1) Is $Hol(D)$ always a Lie group? (2) Is $K$ always a maximal compact? (3) In general, is $D$ always contractible (or simply connected)?

- Dear Hugo -- re "is Hol(D) always a Lie group": no, take $D=\mathbb{C}^2$; it has automorphisms of the form $(x,y)\mapsto (x,y+f(x))$ where $f$ is any holomorphic function $\mathbb{C}\to\mathbb{C}$. – algori Sep 8 2011 at 22:40 Maybe I am missing something: isn't $D=\mathbb{C}^\times$ an example where $D$ is not contractible? – MP Sep 8 2011 at 22:40 Yes you are right, $\mathbb{C}^{\times}$ is a counter-example, so I'll re-edit my question – Hugo Chapdelaine Sep 8 2011 at 23:09 I forgot to put that $D$ was bounded – Hugo Chapdelaine Sep 8 2011 at 23:21

## 2 Answers

Re question 3: a bounded homogeneous domain is biholomorphic to a Siegel domain, which is contractible. See e.g. Siegel domain and references therein (those references probably answer question 2 as well). Another useful link is Homogeneous bounded domain. upd: Another Google search gave the following references: "Homogeneous Bounded Domains and Siegel Domains" by Soji Kaneyuki, Springer LNM 241. "Theory of complex homogeneous bounded domains" by Yichao Xu, Mathematics and its Applications 569.

- Thanks a lot @algori for the reference – Hugo Chapdelaine Sep 9 2011 at 1:02 I fixed the links. – George Lowther Dec 2 2011 at 1:28

It is a theorem of H. Cartan that $Hol(D)$ for any bounded such $D\subset \mathbb C^n$ is a finite dimensional real Lie group. See for example chapter 9 of "Several complex variables" by R. Narasimhan.

- thanks @Gjergji – Hugo Chapdelaine Sep 9 2011 at 1:04
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 25, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8876864910125732, "perplexity_flag": "middle"}
http://stats.stackexchange.com/questions/tagged/survival?page=4&sort=newest&pagesize=15
# Tagged Questions Survival analysis is concerned with modelling the time before subjects change state, typically time until death or failure. One key feature of such data is that they can be censored, that is, some subjects will not have changed state before the study ends. 2answers 79 views ### Testing survival against frequency of some event I have the following data for 200 cases/subjects: time to death over a 15 year period, $t$. A datum with a value of 180 months means that the subject did not die over the 15 year period. the ... 1answer 354 views ### How to interpret the output of survival analysis? I am beginner of survival analysis, so I am confused about some basic ideas in this area. I think the basic assumption in survival analysis is that time of event for a subject can be estimated given ... 2answers 148 views ### Difference between survival analysis and classification? Since the time variable can be treated as a normal feature in classification, why not using more powerful classification methods (such as, C4.5, SVM) to predict the occurrence of an event? Why lots of ... 1answer 266 views ### How to run survival analysis on big dataset? I am recently involved in a project that needs to analyze the survival time of objects. Therefore, I plan to use the rms package to build a Cox model. The problem is, since the dataset I have is so ... 2answers 702 views ### Cumulative Incidence vs. Kaplan Meier to estimate probability of failure To estimate the probability failure in medical sciences, it is not atypical to use 1-KM. However this does not account for competing risks, such as death by natural causes or causes unrelated with the ... 1answer 272 views ### Oddity in simulating Weibull survival times Based on this guide to simulating survival times in SAS and R, I used the following code to generate two survival functions: ... 1answer 159 views ### cox proportional hazard model am doing my dissertation involving cox model and i would like to understand how you interpret the survival table at mean of covariates. how i do u i determine the survival function from the output ... 0answers 57 views ### Inferential methods on large panel data with sparse clusters and rare outcomes Using longitudinal survey data on children using psychotropic medications, we are interested in estimating associations with medication classes, their persistence and adherence (longitudinal ... 1answer 883 views ### Difference between survdiff log-rank and coxph log-rank I'm using the survival package in R to analyze clinical data. I am analyzing two different groups of patients, when I calculate survdiff in order to compare the curves, I got p= 0.135, but when I ... 0answers 205 views ### Comparing Hazard Rate Ratios for different outcomes on the same sample Suppose I have survival data coming from a cohort study. The outcome of interest is death and a subject can die either because of cancer or because of other causes. The two outcomes under study are ... 3answers 191 views ### Critical appraisal of survival paper I must make a critical appraisal of a cancer survival paper. I hope someone here can give some hints. The paper investigates person-level markers of socio-economic status as exposure and looks for ... 2answers 202 views ### What is the difference between a hazard ratio and the e^coef of a Cox equation? As I understand it, the definition of the hazard ratio is the ratio of two hazard rates. Often the exp(coef) from a Cox model is also used as an estimate of the hazard ratio. 
These methods give two ... 1answer 423 views ### Cox regression - Hazard and Survival Estimates? I have a couple questions about Cox survival regression: 1) Is it true that the hazard function h(t) is not available (even WITHOUT time dependent covariates)- and if not, is it because the baseline ... 1answer 498 views ### P-value of a survival ROC c-index It is possible to calculate the c-index for time dependent outcomes (such as disease) using the survivalROC package in R. My question is : is it possible to produce a p-value for the c-index that is ... 2answers 1k views ### How to calculate predicted hazard rates from a Cox PH model? I have the following Cox PH model (Time, Event) ~ X + Y + Z I would like to get the predicted hazard rates (i am talking about hazard rates NOT hazard ratios) ... 1answer 98 views ### Time-wise treatment effect / survival analysis Let's say I have some kind of survival data - i.e. I'm giving a drug that may cause mortality. So I have three patients: A, B and C. All are given the drug at Time t1. Let's say patient A dies at ... 0answers 242 views ### Multiple imputation of time variables — which step to impute? Lets assume I have a survival analysis study with an exposure, two covariates, and two time related variables. Say date of diagnosis and date of death. Combined, the two time related variables will be ... 1answer 344 views ### Floating Point Overflow while computing the Kaplan-Meier estimator in SAS I try to estimate survival curves based on a Kaplan-Meier estimator using proc lifetest. However, SAS outputs an error message which I do not manage to circumvent. ... 1answer 391 views ### Martingale Residuals and Schoenfeld Residuals If the Martingale Residuals indicate that a predictor $X$ violates the proportional hazards assumption but the Schoenfeld Residuals indicate that it does not....which one should you trust? 2answers 229 views ### Proportional hazards assumption meaning Suppose we have a Cox-PH model and death is the outcome of interest. Suppose we have the predictors $X_1, X_2$ and $X_3$. Then does the PH assumption mean that for any person (i.e. arbitrary values of ... 0answers 79 views ### Problem with lagged covariates Suppose we have a binary variable $X$ that indicates whether a person ate pizza during the week. This variable is recorded for an entire year. So we have $X_{1}, \dots, X_{52}$ values (1 or 0). ... 1answer 258 views ### How to generate a group summary categorical variable for survival analysis in Stata? I have registry data for which I have data for individuals for each hospital attendance over a 10 year period. I wish to determine the effect of chronic antibiotic use upon acquisition of specific ... 1answer 121 views ### Survival analysis - help with determining source of error I am trying to determine the effect of four interventions/exposures upon the acquisition of an infection (the interventions aimed at preventing the infection). I'm using registry data. My analysis ... 1answer 814 views ### How to perform a Wilcoxon signed rank test for survival data in R? Say you have survival data like this: ... 1answer 2k views ### Interpretation and validation of a Cox proportional hazards regression model using R in plain English Can someone explain my Cox model to me in plain English? I fitted the following Cox regression model to all of my data using the cph function. My data are saved in an object called "Data". The ... 
1answer 259 views ### Robust variance/covariance matrix in Poisson regression Suppose I have survival data with more than one row per subject, because I have splitted the follow-up time of each subject into pieces (maybe because I have one or more time-varying variables or ... 1answer 82 views ### Best terminology to describe a time-series problem I have a problem that I would like to investigate regarding time-series data, but as I'm inexperienced in the field of statistics, I am unsure of the best terminology to describe my problem (so I can ... 0answers 91 views ### Probability calculation in survival analysis I have 13 similar independent units and I need to calculate the probability at least 6 of them to survive for time less than 1.10. From a table with the Kaplan-Meir estimates I get the following; ... 1answer 320 views ### Stratified Cox model I am working on fitting a Cox model to predict. But several predictors violated the proportional hazards assumption. I am gonna to do stratified Cox model to adjust them. But the results of stratified ... 1answer 875 views ### How to do cross-validation with a Cox proportional hazards model? Suppose I have constructed a prediction model for the occurrence of a particular disease in one dataset (the model building dataset) and now want to check how well the model works in a new dataset ... 0answers 371 views ### Sample size and cross-validation methods for Cox regression predictive models I have a question I would like to pose to the community. I have recently been asked to provide statistical analysis for a tumor marker prognostic study. I have primarily used these two references to ... 1answer 1k views ### Different prediction plot from survival coxph and rms cph I've created my own slightly enhanced version of the termplot that I use in this example, you can find it here. I've previously posted on SO but the more I think about it I believe that this probably ... 0answers 253 views ### What data structure is necessary for survival analysis? I'm relatively new to survival analysis and try to get my data in the right shape. I have two tables both concerning the observed individuals. If I just would use one of the tables, I would have ... 1answer 175 views ### What statistic test apply to a cell count time line? I am a PhD student with some difficulties about what test to choose to verify an experiment. Hope you can give me some help! My experiment: I am counting the number of a subset of cells in Drosophila ... 3answers 217 views ### Survival analysis with categorical variable I have event time data for subjects with different categories (A, B, C etc.) yearly observed. To my understanding my data is both right and interval censored (?). Subjects' category can change from ... 1answer 293 views ### Predicting the time until an expected event occurs I have a situation where an event is supposed to occur every x minutes for a number of different sites (each site could be configured for a different time interval x). From time to time the event may ... 1answer 146 views ### Compare survival of one population to the general population I have the survival dataset of a population with a special disease. I´d like to compare this population with the general population to see whether this population has a decreased life-expectancy ... 1answer 97 views ### Inconsistency of the Breslow estimator The Breslow estimate is commonly used in the Cox proportional hazards model. However this paper by Deborah Burr Burr, D. (1994). 
On Inconsistency of Breslow's Estimator as an Estimator of the ... 0answers 133 views ### Variance of the Kaplan-Meier estimate for dependent observations Can someone help me find a way to estimate the variance of the Kaplan-Meier estimate with dependent observations? Specifically, I have failure time data from patients with several different ... 1answer 133 views ### Are survivor functions meaningful with proportional hazards models? Does the survivor function estimated after running a Proportional Hazards model give valid predicted probabilities of an event happening after a number of time periods? The only source I've found on ... 3answers 198 views ### Understanding survival at time function This question is related to a few others (Here,Here) on the topic as I have been searching for information. Hopefully this one is sufficient. 1) I am seeing differences in the relationship between ... 1answer 274 views ### How can I censor entries at failure or an exit age in STATA? I am attempting to construct a survival analysis in STATA whereby subjects fail if the failure condition is met (Staph) but are also removed from the model once they are over 1100 days old. My ... 1answer 255 views ### How can I display a survival curve with right censoring marks? I have the following survival data and have constructed a survival plot however cannot mark the right censored points (which makes me think my survival graph is also incorrect) - how can I manage this ... 2answers 224 views ### What happens if a survival curve doesn't reach 0.5? Does this mean you can't compute the median? 1answer 191 views ### How do you plot survival functions for specific values of covariates in SAS? Suppose I do the following: proc phreg data = new; model time*censor(0) = x y; run; Also suppose $x$ is a binary variable and $y$ is a ... 1answer 398 views ### SAS Code for Survival Analysis In PROC PHREG, how do you set a continuous variable at a certain value as the reference level? For example, suppose x = 3.5, 3.6, 4.3, 5.4 and we want x = 6 to be the reference level. How would you do ... 1answer 249 views ### Proportional hazards assumption The proportional hazards assumption basically says that the hazard rate does not vary with time. That is, $\text{HR}(t) \equiv \text{HR}$. When can we assume this? What if the hazard ratios at ... 3answers 917 views ### Finding median survival time from survival function Is the best way to find the median survival time from a survival plot just to draw a horizontal line from $p = 0.5$ to the curve and project down to the x-axis? 2answers 2k views ### Time dependent coefficients in R - how to do it? Update: Sorry for another update but I've found some possible solutions with fractional polynomials and the competing risk-package that I need some help with. The problem I can't find an easy way ... 1answer 196 views ### Sampling distribution is skewed in a fully Bayesian inference of MCMC in Cox PH models I used the MCMC method to estimate linear models with a fully Bayesian inference previously, and had no problem from estimated coefficients. Recently I use the same way in a semiparametric Cox PH ...
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 10, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9111330509185791, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/19655/does-a-moving-escalator-make-it-easier-to-walk-up-the-steps
# Does a moving escalator make it easier to walk up the steps?

I was discussing with my colleagues why it feels easier to walk up an escalator when it is moving. My natural assumption was that the movement of the escalator imparts some extra acceleration on the rider that helps to move up the stairs. But my colleagues insisted that this was nonsense, and that the effect is purely psychological (i.e. it just seems easier). We actually came up with three contradictory hypotheses, and I'm not sure which is right:

1. The escalator is constantly accelerating the rider, since without constant acceleration the body wouldn't be able to counteract the force of gravity (i.e. my theory).
2. The rider is not accelerating, since no acceleration is needed to maintain a constant velocity.
3. The acceleration of the escalator actually makes it harder to get to the next step, since it pushes the rider against the current step.

Which of these is correct?

-

Compare to What's the difference between running up a hill and running up an inclined treadmill?. Personally I doubt that it is easier (or harder, it's roughly an inertial frame), which would imply that the sense of motion is fooling you into feeling that it's easier. – dmckee♦ Jan 17 '12 at 18:39
It is possible that the act of bouncing against the stairs as you run up makes the escalator go backwards a little at each step, pulling the stairs down a little. Without such an effect, there would be no change. – Ron Maimon Jan 18 '12 at 5:19

## 2 Answers

Once you get yourself moving, the escalator does not accelerate you and does not assist your running uphill. The only advantage an escalator gives you is that you keep moving up even if you don't put any effort into it.

Next time you feel like running up an escalator (which is not entirely safe), you might consider repeating the experiment with your eyes closed. That will eliminate the visual effect, but you will still feel the air flow on your face.

This is basic Newtonian physics of inertial frames: you cannot detect steady motion. This is why you can throw dice in the gambling compartment of a luxury airliner that is moving at 400 miles per hour, or why you can exist on the earth's surface without realizing that it is moving (due to the earth's turning) at an even faster rate (depending on latitude).

-

Number two is correct:

2. The rider is not accelerating since no acceleration is needed to maintain a constant velocity.

The first option offered:

1. The escalator is constantly accelerating the rider since without constant acceleration the body wouldn't be able to counteract the force of gravity (i.e. my theory).

Here's why that's wrong. To apply Newton's second law, $F = ma$, all system forces and their interactions must be taken into account. If you hold an object in the palm of your hand, you are applying some force on that object, but that object is not accelerating and is not even moving. This is because the force you apply on that object is equal and opposite to the force of gravity.

In the case of an escalator, a motor provides some torque (rotational force) which acts against the forces of friction in the system and the gravity which weighs down the passengers. After all forces are taken into account, the velocity of the escalator is (approximately) constant, therefore the acceleration is zero and walking up those stairs is no easier than walking up a regular set of stairs.
Walking up the stairs of an escalator may even be more difficult than walking up a regular set of stairs because increased velocity relative to the surrounding atmosphere causes an increase in fluid friction acting on the walker. -
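A one-line force balance (taking $m$ for the rider's mass, $N$ for the upward force from the steps, and $g$ for gravitational acceleration) makes the second answer's point explicit:

$N - mg = ma, \qquad v=\text{const} \;\Longrightarrow\; a=0 \;\Longrightarrow\; N = mg,$

which is exactly the same balance a rider experiences on a stationary staircase.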
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9539216160774231, "perplexity_flag": "middle"}
http://en.wikipedia.org/wiki/Gr%C3%B6bner_basis
Gröbner basis

In mathematics, and more specifically in computer algebra, computational algebraic geometry, and computational commutative algebra, a Gröbner basis is a particular kind of generating set of an ideal in a polynomial ring over a field K[x1, ..., xn]. A Gröbner basis makes it easy to deduce many important properties of the ideal and the associated algebraic variety, such as the dimension and the number of zeros when it is finite. Gröbner basis computation is one of the main practical tools for solving systems of polynomial equations and for computing the image of an algebraic variety under a projection or a rational map.

One can view Gröbner basis computation as a multivariate, non-linear generalization of both the Euclidean algorithm for computing polynomial greatest common divisors and Gaussian elimination for linear systems.[1]

Gröbner bases were introduced in 1965, together with an algorithm to compute them (Buchberger's algorithm), by Bruno Buchberger in his Ph.D. thesis. He named them after his advisor Wolfgang Gröbner. The Association for Computing Machinery awarded him its 2007 Paris Kanellakis Theory and Practice Award for this work. An analogous concept for local rings was developed independently by Heisuke Hironaka in 1964, who named them standard bases.

Since then, the theory of Gröbner bases has been extended by many authors in various directions. It has been generalized to other structures, such as polynomial rings over principal ideal rings, and also to some classes of non-commutative rings and algebras.

Background

Polynomial ring

Main article: Polynomial ring

Gröbner bases are primarily defined for ideals in a polynomial ring $R=K[x_1,\ldots,x_n]$ over a field K. Although the theory works for any field, most Gröbner basis computations are done either when K is the field of rationals or the integers modulo a prime number.

A polynomial is a sum $c_1M_1 +\cdots + c_mM_m$ where the $c_i$ are nonzero elements of K and the $M_i$ are monomials, or power products, of the variables. This means that a monomial M is a product $M=x_1^{a_1}\cdots x_n^{a_n},$ where the $a_i$ are nonnegative integers. The vector $A=[a_1, \ldots ,a_n]$ is called the exponent vector of M. The notation is often abbreviated as $x_1^{a_1}\cdots x_n^{a_n}=X^A.$ Note that a monomial is uniquely determined by its exponent vector; this makes it possible to represent, in a computer, a monomial by its exponent vector, and a polynomial by a list of pairs, each consisting of a coefficient and an exponent vector. This gives a much more efficient implementation, because the representation of a polynomial then avoids storing many occurrences of the names of the variables and of the operators of addition, multiplication and exponentiation.

If $F=\{f_1,\ldots, f_k\}$ is a finite set of polynomials in such a polynomial ring R, the ideal generated by F, denoted $\langle f_1,\ldots, f_k\rangle,$ is the set of the linear combinations of the elements of F:

$\langle f_1,\ldots, f_k\rangle = \left\{\sum_{i=1}^k g_i f_i\;|\; g_1,\ldots, g_k\in K[x_1,\ldots,x_n]\right\}$
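As a small illustration of this representation (a sketch in plain Python; the names and the sample polynomial are chosen here only for the example), a polynomial can be stored as a dictionary mapping exponent vectors to coefficients, and multiplication then reduces to adding exponent vectors:

```python
# 3*x**2*y - x*z**3 + 5 in K[x, y, z], stored as {exponent vector: coefficient}
p = {
    (2, 1, 0): 3,    # 3 * x^2 * y
    (1, 0, 3): -1,   # -1 * x * z^3
    (0, 0, 0): 5,    # constant term 5
}

def multiply(p, q):
    """Product of two polynomials in the exponent-vector representation."""
    result = {}
    for ea, ca in p.items():
        for eb, cb in q.items():
            e = tuple(a + b for a, b in zip(ea, eb))   # monomials multiply by adding exponents
            result[e] = result.get(e, 0) + ca * cb
    return {e: c for e, c in result.items() if c != 0}  # drop cancelled terms

print(multiply(p, {(1, 0, 0): 2}))   # multiply by 2*x
```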
Monomial ordering

Main article: Monomial order

All operations related to Gröbner bases require the choice of a total order on the monomials that is compatible with multiplication in the following sense. For all monomials M, N, P:

1. $M<N \Longleftrightarrow MP<NP$
2. $M<MP$

A total order satisfying these conditions is sometimes called an admissible ordering. These conditions imply Noetherianity, which means that every strictly decreasing sequence of monomials is finite.

Although Gröbner basis theory does not depend on a particular choice of an admissible monomial ordering, three monomial orderings are especially important for the applications:

• the lexicographical ordering, commonly called lex or plex (for pure lexical ordering),
• the total degree reverse lexicographical ordering, commonly called degrevlex,
• the elimination ordering lexdeg.

Gröbner basis theory was initially introduced for the lexicographical ordering. It soon appeared that the Gröbner basis for degrevlex is almost always much easier to compute, and that, for computing a lex Gröbner basis, it is almost always easier to compute the degrevlex basis first and then to use a change-of-ordering algorithm. When elimination is needed, degrevlex is not convenient; both lex and lexdeg may be used, but, again, many computations are relatively easy with lexdeg and almost impossible with lex.

Once a monomial ordering is fixed, the terms of a polynomial (products of a monomial by a nonzero coefficient) are naturally ordered by decreasing monomials (for this order). This makes the representation of a polynomial as an ordered list of coefficient and exponent-vector pairs a canonical representation of the polynomial. The first (greatest) term, coefficient and monomial of a polynomial p for this ordering are respectively called the leading term, leading monomial and leading coefficient, and are denoted, in this article, lt(p), lm(p) and lc(p).

Reduction

The concept of reduction, also called multivariate division or normal form computation, is central to Gröbner basis theory. It is a multivariate generalization of the Euclidean division of univariate polynomials. In this section we suppose a fixed monomial ordering, which will not be referred to explicitly.

Given two polynomials f and g, one says that f is reducible by g if some monomial m in f is a multiple of the leading monomial lm(g) of g. If m is the leading monomial of f, one says that f is lead-reducible by g. If c is the coefficient of m in f and m = q lm(g), the one-step reduction of f by g is the operation that associates to f the polynomial

$\operatorname{red}_1(f,g)=f-\frac{c}{\operatorname{lc}(g)}\,q\, g.$

The main properties of this operation are that the resulting polynomial does not contain the monomial m and that the monomials greater than m (for the monomial ordering) remain unchanged. This operation is not, in general, uniquely defined; if several monomials in f are multiples of lm(g), one may arbitrarily choose which one is reduced. In practice, it is better to choose the greatest one for the monomial ordering, because otherwise subsequent reductions could reintroduce the monomial that has just been removed.

Given a finite set G of polynomials, one says that f is reducible or lead-reducible by G if it is reducible or lead-reducible, respectively, by an element of G. In that case, one defines $\operatorname{red}_1(f,G)=\operatorname{red}_1(f,g)$. Again, this operation is not uniquely defined, and it is this non-uniqueness which is the starting point of Gröbner basis theory.

The (complete) reduction of f by G consists in applying this operator $\operatorname{red}_1$ iteratively until obtaining a polynomial $\operatorname{red}(f,G)$ which is irreducible by G. It is called a normal form of f by G.
In general, this normal form is not uniquely defined (it is not a canonical form), and it is this non-uniqueness which is the starting point of Gröbner basis theory. For Gröbner basis computations, except at the very end, it is not necessary to reduce completely: a lead-reduction is sufficient, which saves a large amount of computation.

The definition of the reduction shows immediately that, if h is a normal form of f by G, then we have

$f=h+\sum_{g\in G} q_g\,g,$

where the $q_g$ are polynomials. In the case of univariate polynomials, if G is reduced to a single element g, then h is the remainder and $q_g$ the quotient of the Euclidean division of f by g, and the division algorithm is exactly the process of lead-reduction. It is for this reason that some authors use the term multivariate division instead of reduction.

Formal definition

A Gröbner basis G of an ideal I in a polynomial ring R over a field is characterized by any one of the following properties, stated relative to some monomial order:

• the ideal generated by the leading terms of the polynomials in I is itself generated by the leading terms of the basis G;
• the leading term of any polynomial in I is divisible by the leading term of some polynomial in the basis G;
• multivariate division of any polynomial in the polynomial ring R by G gives a unique remainder;
• multivariate division of any polynomial in the ideal I by G gives remainder 0.

All these properties are equivalent; different authors use different definitions depending on the topic they choose. The last two properties allow calculations in the factor ring R/I with the same facility as modular arithmetic. It is a significant fact of commutative algebra that Gröbner bases always exist, and can be effectively obtained for any ideal starting from a generating subset.

Multivariate division requires a monomial ordering; the basis depends on the monomial ordering chosen, and different orderings can give rise to radically different Gröbner bases. Two of the most commonly used orderings are lexicographic ordering and degree reverse lexicographic order (also called graded reverse lexicographic order or simply total degree order). Lexicographic order eliminates variables; however, the resulting Gröbner bases are often very large and expensive to compute. Degree reverse lexicographic order typically provides the fastest Gröbner basis computations. In this order monomials are compared first by total degree, with ties broken by taking the smallest monomial with respect to lexicographic ordering with the variables reversed.

In most cases (polynomials in finitely many variables with complex coefficients or, more generally, coefficients over any field, for example), Gröbner bases exist for any monomial ordering. Buchberger's algorithm is the oldest and best-known method for computing them. Other methods are Faugère's F4 and F5 algorithms, based on the same mathematics as the Buchberger algorithm, and involutive approaches, based on ideas from differential algebra.[2] There are also three algorithms for converting a Gröbner basis with respect to one monomial order into a Gröbner basis with respect to a different monomial order: the FGLM algorithm, the Hilbert driven algorithm and the Gröbner walk algorithm. These algorithms are often employed to compute (difficult) lexicographic Gröbner bases from (easier) total degree Gröbner bases.
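For illustration, assuming the SymPy Python library is available, a Gröbner basis and the zero-remainder membership test can be computed directly (the small system below is an arbitrary choice made for this sketch):

```python
from sympy import symbols, groebner

x, y = symbols('x y')
F = [x**2 + y**2 - 1, x - y]                    # an arbitrary small example system

G_lex = groebner(F, x, y, order='lex')          # basis for the lexicographic order
G_grevlex = groebner(F, x, y, order='grevlex')  # basis for degree reverse lexicographic order
print(G_lex)        # the two bases may look different, but generate the same ideal
print(G_grevlex)

# Membership test: a polynomial lies in the ideal iff its remainder on division by G is 0.
f = x**3 + x*y**2 - x             # equals x*(x**2 + y**2 - 1), hence lies in the ideal
print(G_lex.contains(f))          # True
print(G_lex.reduce(f)[1])         # remainder of the multivariate division: 0
print(G_lex.contains(x + 1))      # False: x + 1 is not in the ideal
```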
A Gröbner basis is termed reduced if the leading coefficient of each element of the basis is 1 and no monomial in any element of the basis is in the ideal generated by the leading terms of the other elements of the basis. In the worst case, computation of a Gröbner basis may require time that is exponential or even doubly exponential in the number of solutions of the polynomial system (for degree reverse lexicographic order and lexicographic order, respectively). Despite these complexity bounds, both standard and reduced Gröbner bases are often computable in practice, and most computer algebra systems contain routines to do so.

The concept and algorithms of Gröbner bases have been generalized to submodules of free modules over a polynomial ring. In fact, if L is a free module over a ring R, then one may consider R⊕L as a ring by defining the product of two elements of L to be 0. This ring may be identified with

$R[e_1, \ldots, e_l]/\left\langle \{e_ie_j \mid 1\le i\le j\le l\}\right\rangle,$

where $e_1, \ldots, e_l$ is a basis of L. This makes it possible to identify a submodule of L generated by $g_1, \ldots, g_k$ with the ideal of $R[e_1, \ldots, e_l]$ generated by $g_1, \ldots, g_k$ and $\{e_ie_j \mid 1\le i\le j\le l\}.$ If R is a polynomial ring, this reduces the theory and the algorithms of Gröbner bases of modules to the theory and the algorithms of Gröbner bases of ideals.

The concept and algorithms of Gröbner bases have also been generalized to ideals over various rings, commutative or not, such as polynomial rings over a principal ideal ring or Weyl algebras.

Properties and applications of Gröbner bases

Unless explicitly stated otherwise, all the results that follow[3] are true for any monomial ordering (see the article on monomial orders for the definitions of the different orders mentioned below). It is a common misconception to think that the lexicographical order is needed for some of these results. On the contrary, the lexicographical order is almost always the most difficult to compute, and using it makes impractical many computations that are relatively easy with graded reverse lexicographic order (grevlex) or, when elimination is needed, with the elimination order (lexdeg), which restricts to grevlex on each block of variables.

Equality of ideals

Reduced Gröbner bases are unique for any given ideal and any monomial ordering. Thus two ideals are equal if and only if they have the same (reduced) Gröbner basis (Gröbner basis software usually produces reduced Gröbner bases).

Membership and inclusion of ideals

The reduction of a polynomial f by the Gröbner basis G of an ideal I yields 0 if and only if f is in I. This allows testing whether an element belongs to an ideal. Another method consists in verifying that the Gröbner basis of G∪{f} is equal to G.

To test whether the ideal I generated by f1, ..., fk is contained in the ideal J, it suffices to test that every fi is in J. One may also test the equality of the reduced Gröbner bases of J and J∪{f1, ..., fk}.

Solutions of a system of algebraic equations

Main article: System of polynomial equations

Any set of polynomials may be viewed as a system of polynomial equations by equating the polynomials to zero. The set of the solutions of such a system depends only on the generated ideal, and, therefore, does not change when the given generating set is replaced by the Gröbner basis, for any ordering, of the generated ideal.
Such a solution, with coordinates in an algebraically closed field containing the coefficients of the polynomials, is called a zero of the ideal. In the usual case of rational coefficients, this algebraically closed field is chosen as the complex field.

An ideal does not have any zero (the system of equations is inconsistent) if and only if 1 belongs to the ideal (this is Hilbert's Nullstellensatz), or, equivalently, if its Gröbner basis (for any monomial ordering) contains 1, or, also, if the corresponding reduced Gröbner basis is [1].

Given the Gröbner basis G of an ideal I, the ideal has only a finite number of zeros if and only if, for each variable x, G contains a polynomial with a leading monomial that is a power of x (without any other variable appearing in the leading term). If this is the case, the number of zeros, counted with multiplicity, is equal to the number of monomials that are not multiples of any leading monomial of G. This number is called the degree of the ideal.

When the number of zeros is finite, the Gröbner basis for a lexicographical monomial ordering provides, theoretically, a solution: the first coordinate of a solution is a root of the greatest common divisor of the polynomials of the basis that depend only on the first variable. After substituting this root in the basis, the second coordinate of this solution is a root of the greatest common divisor of the resulting polynomials that depend only on the second variable, and so on. This solving process is only theoretical, because it requires GCD computation and root-finding of polynomials with approximate coefficients, which is not practical because of numerical instability. Therefore, other methods have been developed to solve polynomial systems through Gröbner bases (see System of polynomial equations for more details).

Dimension, degree and Hilbert series

The dimension of an ideal I in a polynomial ring R is the Krull dimension of the ring R/I and is equal to the dimension of the algebraic set of the zeros of I. It is also equal to the number of hyperplanes in general position which are needed so that their intersection with the algebraic set is a finite number of points. The degree of the ideal and of its associated algebraic set is the number of points of this finite intersection, counted with multiplicity. In particular, the degree of a hypersurface is equal to the degree of its defining polynomial.

Both degree and dimension depend only on the set of the leading monomials of the Gröbner basis of the ideal, for any monomial ordering. The dimension is the maximal size of a subset S of the variables such that there is no leading monomial depending only on the variables in S. Thus, if the ideal has dimension 0, then for each variable x there is a leading monomial in the Gröbner basis that is a power of x.

Both dimension and degree may be deduced from the Hilbert series of the ideal, which is the series $\sum_{i=0}^\infty d_it^i$, where $d_i$ is the number of monomials of degree i that are not multiples of any leading monomial in the Gröbner basis. The Hilbert series may be summed into a rational fraction

$\sum_{i=0}^\infty d_it^i=\frac{P(t)}{(1-t)^d},$

where d is the dimension of the ideal and $P(t)$ is a polynomial such that $P(1)$ is the degree of the ideal.

Although the dimension and the degree do not depend on the choice of the monomial ordering, the Hilbert series and the polynomial $P(t)$ change when the monomial ordering is changed. Most computer algebra systems that provide functions to compute Gröbner bases also provide functions for computing the Hilbert series, and thus also the dimension and the degree.
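Continuing with SymPy (again a sketch with an illustrative example, not taken from the article), the degree of a zero-dimensional ideal can be read off by counting the monomials that are not multiples of any leading monomial of a Gröbner basis:

```python
from itertools import product
from sympy import symbols, groebner, LM, Poly

x, y = symbols('x y')
F = [x**2 + y**2 - 4, x*y - 1]                   # a zero-dimensional example ideal
G = groebner(F, x, y, order='grevlex')

# Exponent vectors of the leading monomials of the basis elements.
lead_exps = [Poly(LM(g, x, y, order='grevlex'), x, y).monoms()[0] for g in G]

def divides(a, b):
    """True if the monomial with exponents a divides the one with exponents b."""
    return all(ai <= bi for ai, bi in zip(a, b))

# Count the "standard" monomials, i.e. those lying under the staircase of leading monomials.
BOUND = 8   # any bound larger than the staircase works for this small example
standard = [e for e in product(range(BOUND), repeat=2)
            if not any(divides(l, e) for l in lead_exps)]
print(len(standard))   # 4: the number of common zeros, counted with multiplicity
```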
Elimination

The computation of Gröbner bases for an elimination monomial ordering allows computational elimination theory. This is based on the following theorem.

Consider a polynomial ring $K[x_1,\ldots,x_n,y_1,\ldots,y_m]=K[X,Y],$ in which the variables are split into two subsets X and Y. Choose also an elimination monomial ordering "eliminating" X, that is, a monomial ordering for which two monomials are compared by first comparing their X-parts and, only in case of equality, considering their Y-parts. This implies that a monomial containing an X-variable is greater than every monomial independent of X. If G is a Gröbner basis of an ideal I for this monomial ordering, then $G\cap K[Y]$ is a Gröbner basis of $I\cap K[Y]$ (this ideal is often called the elimination ideal). Moreover, a polynomial of G belongs to $G\cap K[Y]$ if and only if its leading term belongs to $K[Y]$ (this makes the computation of $G\cap K[Y]$ very easy, as only the leading monomials need to be checked).

This elimination property has many applications, some of which are reported in the next sections. Another application, in algebraic geometry, is that elimination realizes the geometric operation of projection of an affine algebraic set onto a subspace of the ambient space: with the above notation, the (Zariski closure of the) projection of the algebraic set defined by the ideal I onto the Y-subspace is defined by the ideal $I\cap K[Y].$

The lexicographical ordering such that $x_1>\cdots >x_n$ is an elimination ordering for every partition $\{x_1, \ldots, x_k\},\{x_{k+1},\ldots,x_n\}.$ Thus a Gröbner basis for this ordering carries much more information than is usually necessary. This may explain why Gröbner bases for the lexicographical ordering are usually the most difficult to compute.

Intersecting ideals

If I and J are two ideals generated respectively by {f1, ..., fm} and {g1, ..., gk}, then a single Gröbner basis computation produces a Gröbner basis of their intersection I ∩ J. For this, one introduces a new indeterminate t, and one uses an elimination ordering such that the first block contains only t and the other block contains all the other variables (this means that a monomial containing t is greater than every monomial that does not contain t). With this monomial ordering, a Gröbner basis of I ∩ J consists of the polynomials that do not contain t in the Gröbner basis of the ideal

$K=\langle tf_1,\ldots, tf_m, (1-t)g_1, \ldots, (1-t)g_k\rangle.$

In other words, I ∩ J is obtained by eliminating t in K. This may be proven by remarking that the ideal K consists of the polynomials $(a-b)t+b$ such that $a\in I$ and $b\in J$. Such a polynomial is independent of t if and only if a=b, which means that $b\in I\cap J.$
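As a concrete sketch with SymPy (the two ideals below are chosen only for illustration), this construction recovers, for instance, ⟨x⟩ ∩ ⟨y⟩ = ⟨xy⟩:

```python
from sympy import symbols, groebner

t, x, y = symbols('t x y')
I_gens = [x]          # generators of I
J_gens = [y]          # generators of J

# Generators of K = t*I + (1-t)*J in K[t, x, y].
K_gens = [t*f for f in I_gens] + [(1 - t)*g for g in J_gens]

# A lex order with t listed first is an elimination order for t.
G = groebner(K_gens, t, x, y, order='lex')

intersection = [p for p in G if t not in p.free_symbols]
print(intersection)   # [x*y], i.e. <x> ∩ <y> = <x*y>
```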
Implicitization of a rational curve

A rational curve is an algebraic curve that has a parametric equation of the form

$\begin{align} x_1&=\frac{f_1(t)}{g_1(t)}\\ \vdots\\ x_n&=\frac{f_n(t)}{g_n(t)}, \end{align}$

where $f_i(t)$ and $g_i(t)$ are univariate polynomials for 1 ≤ i ≤ n. One may (and will) suppose that $f_i(t)$ and $g_i(t)$ are coprime (they have no non-constant common factors). Implicitization consists in computing the implicit equations of such a curve. In the case n = 2, that is, for plane curves, this may be computed with the resultant. The implicit equation is the following resultant:

$\text{Res}_t(g_1x_1-f_1,g_2x_2-f_2).$

Elimination with Gröbner bases allows one to implicitize for any value of n, simply by eliminating t in the ideal

$\langle g_1x_1-f_1,\ldots, g_nx_n-f_n\rangle.$

If n = 2, the result is the same as with the resultant, provided the map $t \mapsto (x_1,x_2)$ is injective for almost every t. In the other case, the resultant is a power of the result of the elimination.
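A short SymPy sketch of this elimination, using the classical rational parametrization of the unit circle as the example (any other parametrized curve would do):

```python
from sympy import symbols, groebner

t, x, y = symbols('t x y')
# Rational parametrization of the unit circle: x = (1 - t**2)/(1 + t**2), y = 2*t/(1 + t**2)
eqs = [(1 + t**2)*x - (1 - t**2), (1 + t**2)*y - 2*t]

# lex with t listed first eliminates t.
G = groebner(eqs, t, x, y, order='lex')
implicit = [p for p in G if t not in p.free_symbols]
print(implicit)   # a polynomial proportional to x**2 + y**2 - 1, the implicit equation of the circle
```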
Saturation

When modeling a problem by polynomial equations, it is very frequent that some quantities are supposed to be nonzero, because, if they are zero, the problem becomes very different. For example, when dealing with triangles, many properties become false if the triangle is degenerate, that is, if the length of one side is equal to the sum of the lengths of the other sides. In such situations, there is no hope of deducing relevant information from the polynomial system if the degenerate solutions are not dropped out. More precisely, the system of equations defines an algebraic set which may have several irreducible components, and one has to remove the components on which the degeneracy conditions are everywhere zero. This is done by saturating the equations by the degeneracy conditions, which may be done by using the elimination property of Gröbner bases.

Definition of the saturation

The localization of a ring consists in adjoining to it the formal inverses of some elements. This section concerns only the case of a single element, or equivalently a finite number of elements (adjoining the inverses of several elements is equivalent to adjoining the inverse of their product). The localization of a ring R by an element f is the ring $R_f=R[t]/(1-ft),$ where t is a new indeterminate representing the inverse of f. The localization of an ideal I of R is the ideal $R_fI$ of $R_f.$ When R is a polynomial ring, computing in $R_f$ is not efficient, because of the need to manage the denominators. Therefore, the operation of localization is usually replaced by the operation of saturation.

The saturation with respect to f of an ideal I in R is the inverse image of $R_fI$ under the canonical map from R to $R_f.$ It is the ideal

$I:f^\infty= \{g\in R \mid (\exists k\in \mathbb N)\; f^k g\in I\}$

consisting of all elements of R whose product by some power of f belongs to I.

If J is the ideal generated by I and 1−ft in R[t], then $I:f^\infty=J\cap R.$ It follows that, if R is a polynomial ring, a Gröbner basis computation eliminating t allows one to compute a Gröbner basis of the saturation of an ideal by a polynomial.

The important property of the saturation, which ensures that it removes from the algebraic set defined by the ideal I the irreducible components on which the polynomial f is zero, is the following: the primary decomposition of $I:f^\infty$ consists of the components of the primary decomposition of I that do not contain any power of f.

Computation of the saturation

A Gröbner basis of the saturation by f of a polynomial ideal generated by a finite set of polynomials F may be obtained by eliminating t in $F\cup\{1-tf\},$ that is, by keeping the polynomials independent of t in the Gröbner basis of $F\cup\{1-tf\}$ for an elimination ordering eliminating t.

Instead of using F, one may also start from a Gröbner basis of F. Which method is most efficient depends on the problem. However, if the saturation does not remove any component, that is, if the ideal is equal to its saturated ideal, computing the Gröbner basis of F first is usually faster. On the other hand, if the saturation removes some components, the direct computation may be dramatically faster.

If one wants to saturate with respect to several polynomials $f_1,\ldots, f_k$ or with respect to a single polynomial which is a product $f=f_1\cdots f_k,$ there are three ways to proceed that give the same result but may have very different computation times (which one is the most efficient depends on the problem):

• saturating by $f=f_1\cdots f_k$ in a single Gröbner basis computation;
• saturating by $f_1,$ then saturating the result by $f_2,$ and so on;
• adding to F or to its Gröbner basis the polynomials $1-t_1f_1, \ldots, 1-t_kf_k,$ and eliminating the $t_i$ in a single Gröbner basis computation.

Effective Nullstellensatz

Hilbert's Nullstellensatz has two versions. The first one asserts that a set of polynomials has an empty set of common zeros in an algebraic closure of the field of the coefficients if and only if 1 belongs to the generated ideal. This is easily tested with a Gröbner basis computation, because 1 belongs to an ideal if and only if 1 belongs to the Gröbner basis of the ideal, for any monomial ordering.

The second version asserts that the set of common zeros (in an algebraic closure of the field of the coefficients) of an ideal is contained in the hypersurface of the zeros of a polynomial f if and only if a power of f belongs to the ideal. This may be tested by saturating the ideal by f; in fact, a power of f belongs to the ideal if and only if the saturation by f provides a Gröbner basis containing 1.

Implicitization in higher dimension

By definition, an affine rational variety of dimension k may be described by parametric equations of the form

$\begin{align} x_1&=\frac{p_1}{p_0}\\ \vdots\\ x_n&=\frac{p_n}{p_0}, \end{align}$

where $p_0,\ldots,p_n$ are n+1 polynomials in the k variables (the parameters of the parameterization) $t_1,\ldots,t_k.$ Thus the parameters $t_1,\ldots,t_k$ and the coordinates $x_1,\ldots,x_n$ of the points of the variety are zeros of the ideal

$I=\left\langle p_0x_1-p_1, \ldots, p_0x_n-p_n\right\rangle.$

One could guess that, as in the case of curves (k = 1), it suffices to eliminate the parameters to obtain the implicit equations of the variety. Unfortunately this is not always the case. If the $p_i$ have a common zero (sometimes called a base point), every irreducible component of the non-empty algebraic set defined by the $p_i$ is an irreducible component of the algebraic set defined by I. It follows that, in this case, the direct elimination of the $t_i$ provides an empty set of polynomials. Therefore, if k>1, two Gröbner basis computations are needed to implicitize:

1. Saturate $I$ by $p_0$ to get a Gröbner basis $G$.
2. Eliminate the $t_i$ from $G$ to get a Gröbner basis of the ideal (of the implicit equations) of the variety.

See also

• Buchberger's algorithm
• Faugère's F4 and F5 algorithms
• Graver basis
• Gröbner–Shirshov basis
• Mathematica includes an implementation of the Buchberger algorithm, with performance-improving techniques such as the Gröbner walk, Gröbner trace, and an improvement for toric bases
• Maple has implementations of the Buchberger and Faugère F4 algorithms
• SINGULAR free software for computing Gröbner bases
• Macaulay 2 free software for doing polynomial computations, particularly Gröbner bases calculations
• CoCoA free computer algebra system for computing Gröbner bases
• GAP free computer algebra system that can perform Gröbner bases calculations
• Sage (mathematics software), a free software package that provides a unified interface to several computer algebra systems (including SINGULAR and Macaulay), and includes a few Gröbner basis algorithms of its own
• Magma has a very fast implementation of the Faugère F4 algorithm[4]
• Regular chains are an alternative way to represent polynomial ideals and algebraic sets, with the purpose of decomposing into equi-dimensional components. In dimension zero, a normalized regular chain is also a lexicographical Gröbner basis.

References

1. Lazard, D. (1983). "Gröbner bases, Gaussian elimination and resolution of systems of algebraic equations". Computer Algebra. Lecture Notes in Computer Science 162. pp. 146–156. doi:10.1007/3-540-12868-9_99. ISBN 978-3-540-12868-7.
2. Vladimir P. Gerdt, Yuri A. Blinkov (1998). Involutive Bases of Polynomial Ideals. Mathematics and Computers in Simulation, 45:519ff.
3. David Cox, John Little, and Donal O'Shea (1997). Ideals, Varieties, and Algorithms: An Introduction to Computational Algebraic Geometry and Commutative Algebra. Springer. ISBN 0-387-94680-2.

Further reading

• William W. Adams, Philippe Loustaunau (1994). An Introduction to Gröbner Bases. American Mathematical Society, Graduate Studies in Mathematics, Volume 3. ISBN 0-8218-3804-0
• Huishi Li (2011). Gröbner Bases in Ring Theory. World Scientific Publishing. ISBN 978-981-4365-13-0
• Thomas Becker, Volker Weispfenning (1998). Gröbner Bases. Springer Graduate Texts in Mathematics 141. ISBN 0-387-97971-7
• Bruno Buchberger (1965). An Algorithm for Finding the Basis Elements of the Residue Class Ring of a Zero Dimensional Polynomial Ideal. Ph.D. dissertation, University of Innsbruck. English translation by Michael Abramson in Journal of Symbolic Computation 41 (2006): 471–511. [This is Buchberger's thesis inventing Gröbner bases.]
• Bruno Buchberger (1970). An Algorithmic Criterion for the Solvability of a System of Algebraic Equations. Aequationes Mathematicae 4 (1970): 374–383. English translation by Michael Abramson and Robert Lumbert in Gröbner Bases and Applications (B. Buchberger, F. Winkler, eds.). London Mathematical Society Lecture Note Series 251, Cambridge University Press, 1998, 535–545. ISBN 0-521-63298-6. (This is the journal publication of Buchberger's thesis.)
• Buchberger, Bruno; Kauers, Manuel (2010). "Gröbner Bases". Scholarpedia 5: 7763. doi:10.4249/scholarpedia.7763.
• Ralf Fröberg (1997). An Introduction to Gröbner Bases. Wiley & Sons. ISBN 0-471-97442-0.
• Sturmfels, Bernd (November 2005). "What is . . . a Gröbner Basis?". Notices of the American Mathematical Society 52 (10): 1199–1200. A brief introduction.
• A. I. Shirshov (1999). "Certain algorithmic problems for Lie algebras". ACM SIGSAM Bulletin 33 (2): 3–6. (Translated from Sibirsk. Mat. Zh. (Siberian Mathematics Journal), 3 (1962), 292–296.)
• M. Aschenbrenner and C. Hillar, Finite generation of symmetric ideals, Trans. Amer. Math. Soc. 359 (2007), 5171–5192 (on infinite dimensional Gröbner bases for polynomial rings in infinitely many indeterminates).
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 85, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9147005081176758, "perplexity_flag": "head"}
http://mathhelpforum.com/advanced-statistics/204718-principal-components-analysis-print.html
# Principal components analysis

• October 5th 2012, 04:59 PM kungalo
Principal components analysis
For the covariance matrix S it is known that all its p $\times$ p elements are greater than zero. Prove that:
a) the coefficients of the first principal component are all of the same sign,
b) the coefficients of each other principal component cannot all be of the same sign.
Any help greatly appreciated. Thanks

• October 5th 2012, 08:58 PM chiro
Re: Principal components analysis
Hey kungalo.
You have your covariance matrix which is all positive, and for the first principal component you are going to solve an eigen-decomposition problem on your covariance matrix (i.e. find the eigenvectors and eigenvalues and retain the one with the highest eigenvalue, since this corresponds to the variance of that component).
You don't have orthogonality conditions on the first component, which means you are only interested in getting the one with the highest variance. So what this boils down to is proving that all elements in the eigenvector corresponding to the highest eigenvalue are positive (since the covariance matrix is positive definite, it will always have positive eigenvalues).
As for the next one, this rests on the argument dealing with orthogonality, since all later components will always be orthogonal to every other principal component. The easiest way to see it: if the first component is <a,b,c,d,....>, then for any later component, with X = PC1 and Y = PCN, we have <X,Y> = 0, and the only way for PCN to satisfy this relation is for at least one of the components of PCN to be negative (we also assume PCN is not the zero vector and has positive magnitude).

• October 6th 2012, 03:47 PM kungalo
Re: Principal components analysis
I actually worked this one out. It relies on expressing the variance of the principal component as a sum of the elements of the covariance matrix and the elements of the coefficient vector. From there it's pretty easy to see that all the coefficients need to have the same sign in order to maximize the variance of the first principal component, and that there need to be positive and negative coefficients in the second principal component in order for it to be uncorrelated with the first.
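A quick numerical check of both claims (a sketch using NumPy, with an arbitrary all-positive covariance matrix chosen for the example; the Perron-Frobenius theorem is what guarantees the sign pattern of the leading eigenvector):

```python
import numpy as np

# A small covariance matrix with all entries strictly positive (the hypothesis of the exercise).
S = np.array([[4.0, 1.5, 2.0],
              [1.5, 3.0, 1.2],
              [2.0, 1.2, 5.0]])

vals, vecs = np.linalg.eigh(S)       # eigh: eigenvalues ascending, orthonormal eigenvectors
first_pc = vecs[:, -1]               # eigenvector of the largest eigenvalue = first PC loadings
other_pcs = vecs[:, :-1]

# (a) all loadings of the first PC share a sign (up to the overall sign ambiguity of eigenvectors)
print(np.all(first_pc > 0) or np.all(first_pc < 0))    # True

# (b) every other PC has mixed signs, forced by orthogonality to the first PC
for k in range(other_pcs.shape[1]):
    v = other_pcs[:, k]
    print(np.any(v > 0) and np.any(v < 0))              # True
```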
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 1, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9318490028381348, "perplexity_flag": "head"}
http://en.m.wikipedia.org/wiki/On_shell_renormalization_scheme
On shell renormalization scheme

In quantum field theory, and especially in quantum electrodynamics, the interacting theory leads to infinite quantities that have to be absorbed in a renormalization procedure, in order to be able to predict measurable quantities. The renormalization scheme can depend on the type of particles that are being considered. For particles that can travel asymptotically large distances, or for low energy processes, the on-shell scheme, also known as the physical scheme, is appropriate. If these conditions are not fulfilled, one can turn to other schemes, like the minimal subtraction scheme.

Fermion propagator in the interacting theory

Knowing the different propagators is the basis for being able to calculate Feynman diagrams, which are useful tools to predict, for example, the result of scattering experiments. In a theory where the only field is the Dirac field, the Feynman propagator reads

$\langle 0 | T(\psi(x)\bar{\psi}(0))| 0 \rangle =iS_F(x) = \int \frac{d^4p}{(2\pi)^4}\frac{ie^{-ip\cdot x}}{p\!\!\!/-m+i\epsilon}$

where $T$ is the time-ordering operator, $|0\rangle$ the vacuum in the non-interacting theory, $\psi(x)$ and $\bar{\psi}(x)$ the Dirac field and its Dirac adjoint, and where the left hand side of the equation is the two-point correlation function of the Dirac field.

In a new theory, the Dirac field can interact with another field, for example with the electromagnetic field in quantum electrodynamics, and the strength of the interaction is measured by a parameter; in the case of QED it is the bare electron charge $e$. The general form of the propagator should remain unchanged, meaning that if $|\Omega\rangle$ now represents the vacuum in the interacting theory, the two-point correlation function would now read

$\langle \Omega | T(\psi(x)\bar{\psi}(0))| \Omega \rangle = \int \frac{d^4p}{(2\pi)^4}\frac{i Z_2 e^{-i p\cdot x}}{p\!\!\!/-m_r+i\epsilon}$

Two new quantities have been introduced. First, the renormalized mass $m_r$ has been defined as the pole in the Fourier transform of the Feynman propagator. This is the main prescription of the on-shell renormalization scheme (there is then no need to introduce other mass scales like in the minimal subtraction scheme). The quantity $Z_2$ represents the new field strength of the Dirac field. As the interaction is turned down to zero by letting $e\rightarrow 0$, these new parameters should tend to values that recover the propagator of the free fermion, namely $m_r\rightarrow m$ and $Z_2\rightarrow 1$. This means that $m_r$ and $Z_2$ can be defined as series in $e$ if this parameter is small enough (in the unit system where $\hbar=c=1$, $e=\sqrt{4\pi\alpha}\simeq 0.3$, where $\alpha$ is the fine-structure constant). Thus these parameters can be expressed as

$Z_2=1+\delta_2$
$m = m_r + \delta m$

On the other hand, the modification to the propagator can be calculated up to a certain order in $e$ using Feynman diagrams. These modifications are summed up in the fermion self-energy $\Sigma(p)$:

$\langle \Omega | T(\psi(x)\bar{\psi}(0))| \Omega \rangle = \int \frac{d^4p}{(2\pi)^4}\frac{ie^{-i p\cdot x}}{p\!\!\!/-m - \Sigma(p) +i\epsilon}$

These corrections are often divergent because they contain loops. By identifying the two expressions of the correlation function up to a certain order in $e$, the counterterms can be defined, and they absorb the divergent contributions of the corrections to the fermion propagator. Thus, the renormalized quantities, such as $m_r$, remain finite, and are the quantities measured in experiments.
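To make the identification explicit, here is a sketch (in the notation above, with $\Sigma$ denoting the self-energy before counterterms are included) of the two on-shell conditions that fix $\delta m$ and $Z_2$: the pole of the full propagator must sit at $p\!\!\!/=m_r$ and its residue must be $iZ_2$, so to the relevant order

$m_r - m - \Sigma(p\!\!\!/=m_r)=0 \;\Longrightarrow\; \delta m = -\Sigma(p\!\!\!/=m_r), \qquad Z_2 = \left(1-\left.\frac{d\Sigma}{dp\!\!\!/}\right|_{p\!\!\!/=m_r}\right)^{-1}.$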
Thus, the renormalized quantities, such as $m_r$, will remain finite, and will be the quantities measured in experiments.

Photon propagator

Just as was done with the fermion propagator, the form of the photon propagator inspired by the free photon field will be compared to the photon propagator calculated up to a certain order in $e$ in the interacting theory. The photon self-energy is denoted $\Pi(q^2)$ and the metric tensor $\eta^{\mu\nu}$ (here taking the +--- convention):

$\langle \Omega | T(A^{\mu}(x)A^{\nu}(0))| \Omega \rangle = \int \frac{d^4q}{(2\pi)^4}\frac{-i\eta^{\mu\nu}e^{-i q\cdot x}}{q^2(1 - \Pi(q^2)) +i\epsilon} = \int \frac{d^4q}{(2\pi)^4}\frac{-iZ_3 \eta^{\mu\nu}e^{-i q\cdot x}}{q^2 +i\epsilon}$

The behaviour of the counterterm $\delta_3=Z_3-1$ is independent of the momentum of the incoming photon $q$. To fix it, the behaviour of QED at large distances (which should help recover classical electrodynamics), i.e. when $q^2\rightarrow 0$, is used:

$\frac{-i\eta^{\mu\nu}e^{-i q\cdot x}}{q^2(1 - \Pi(q^2)) +i\epsilon}\sim\frac{-i\eta^{\mu\nu}e^{-i q\cdot x}}{q^2}$

Thus the counterterm $\delta_3$ is fixed by the value of $\Pi(0)$.

Vertex function

A similar reasoning using the vertex function leads to the renormalization of the electric charge $e_r$. This renormalization, and the fixing of the corresponding renormalization terms, is done using what is known from classical electrodynamics at large spatial scales. This leads to the value of the counterterm $\delta_1$, which is, in fact, equal to $\delta_2$ because of the Ward-Takahashi identity. It is this calculation that accounts for the anomalous magnetic dipole moment of fermions.

Rescaling of the QED Lagrangian

We have considered some proportionality factors (like the $Z_i$) that have been defined from the form of the propagator. However, they can also be defined from the QED Lagrangian, which will be done in this section, and these definitions are equivalent. The Lagrangian that describes the physics of quantum electrodynamics is

$\mathcal L = -\frac{1}{4} F_{\mu \nu} F^{\mu \nu} + \bar{\psi}(i \partial\!\!\!/ - m )\psi + e \bar{\psi} \gamma^\mu \psi A_{\mu}$

where $F_{\mu \nu}$ is the field strength tensor, $\psi$ is the Dirac spinor (the relativistic equivalent of the wavefunction), and $A$ the electromagnetic four-potential. The parameters of the theory are $\psi,\; A,\;m$ and $e$. These quantities happen to be infinite due to loop corrections (see below). One can define the renormalized quantities (which will be finite and observable):

$\psi = \sqrt{Z_2} \psi_r \;\;\;\;\; A = \sqrt{Z_3} A_r \;\;\;\;\; m = m_r + \delta m \;\;\;\;\; e = \frac{Z_1}{Z_2 \sqrt{Z_3}} e_r \;\;\;\;\; \text{with} \;\;\;\;\; Z_i = 1 + \delta_i$

The $\delta_i$ are called counterterms (some other definitions of them are possible). They are supposed to be small in the parameter $e$. The Lagrangian now reads in terms of renormalized quantities (to first order in the counterterms):

$\mathcal L = -\frac{1}{4} Z_3 F_{\mu \nu,r} F^{\mu \nu}_r + Z_2 \bar{\psi}_r(i \partial\!\!\!/ - m_r )\psi_r - \bar{\psi}_r\delta m \psi_r + Z_1 e_r \bar{\psi}_r \gamma^\mu \psi_r A_{\mu,r}$

A renormalization prescription is a set of rules that describes what part of the divergences should be in the renormalized quantities and what part should be in the counterterms.
The prescription is often based on the theory of free fields, that is, on the behaviour of $\psi$ and $A$ when they do not interact (which corresponds to removing the term $e \bar{\psi} \gamma^\mu \psi A_{\mu}$ from the Lagrangian).

References

• M. Peskin and D. Schroeder, An Introduction to Quantum Field Theory, Addison-Wesley, Reading, 1995
• M. Srednicki, Quantum Field Theory, http://www.physics.ucsb.edu/~mark/qft.html
• T. Gehrmann, Quantum Field Theory 1, http://www.theorie.physik.uzh.ch/~pfmonni/QFTI_HS10/QFT_Skript.pdf
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 50, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8974640965461731, "perplexity_flag": "head"}
http://physicspages.com/2012/04/02/average-field-over-a-sphere/
## Average field over a sphere

Required math: calculus

Required physics: electrostatics

Reference: Griffiths, David J. (2007) Introduction to Electrodynamics, 3rd Edition; Prentice Hall – Problem 3.41.

The average electric field inside a sphere of radius ${R}$ due to a point charge ${q}$ at position ${\mathbf{r}}$ inside the sphere is the field integrated over the volume of the sphere divided by the sphere's volume:

$\displaystyle \mathbf{E}_{av}=\frac{1}{4\pi\epsilon_{0}}\frac{3q}{4\pi R^{3}}\int\frac{\mathbf{r}'-\mathbf{r}}{\left|\mathbf{r}'-\mathbf{r}\right|^{3}}d^{3}\mathbf{r}'$

where the integral extends over the interior of the sphere. (Note that the statement of the problem in Griffiths' book has a typo: the unit vector in the integral in part (a) should be script 'r', not ${\mathbf{r}}$.)

Now suppose we have a uniformly charged sphere with charge density ${\rho}$ and wish to find the field at point ${\mathbf{r}}$ due to this charge. This time the field at ${\mathbf{r}}$ due to volume element ${d^{3}\mathbf{r}'}$ is ${\left(\mathbf{r}-\mathbf{r}'\right)\rho d^{3}\mathbf{r}'/4\pi\epsilon_{0}\left|\mathbf{r}'-\mathbf{r}\right|^{3}}$, so the overall field is

$\displaystyle \mathbf{E}_{\rho}=\frac{1}{4\pi\epsilon_{0}}\int\rho\frac{\mathbf{r}-\mathbf{r}'}{\left|\mathbf{r}-\mathbf{r}'\right|^{3}}d^{3}\mathbf{r}'$

The two fields are the same if we set

$\displaystyle \rho=-\frac{3q}{4\pi R^{3}}$

From Gauss's law, we can work out ${\mathbf{E}_{\rho}}$ by evaluating the integrals below over a spherical (Gaussian) surface of radius ${r<R}$:

$\displaystyle \int\mathbf{E}_{\rho}\cdot d\mathbf{a} = \frac{1}{\epsilon_{0}}\int\rho\, d^{3}\mathbf{r}'$

$\displaystyle 4\pi r^{2}E_{\rho} = -\frac{1}{\epsilon_{0}}\frac{3q}{4\pi R^{3}}\frac{4\pi r^{3}}{3}$

$\displaystyle E_{\rho} = -\frac{1}{4\pi\epsilon_{0}}\frac{qr}{R^{3}}$

Since ${\mathbf{E}_{\rho}}$ points in the radial direction due to symmetry,

$\displaystyle \mathbf{E}_{\rho}=-\frac{1}{4\pi\epsilon_{0}}\frac{q}{R^{3}}\mathbf{r}$

The dipole moment of a single point charge ${q}$ at position ${\mathbf{r}}$ is ${\mathbf{p}=q\mathbf{r}}$, so the field can be written as

$\displaystyle \mathbf{E}_{\rho}=\mathbf{E}_{av}=-\frac{1}{4\pi\epsilon_{0}}\frac{\mathbf{p}}{R^{3}}$

From the superposition principle, we can extend this result so that it applies to any distribution of charge within the sphere.

If ${\mathbf{r}}$ is outside the sphere, the formula for ${\mathbf{E}_{av}}$ is the same as before, with the integral still extending over the interior of the sphere.
The formula for ${\mathbf{E}_{\rho}}$ is also the same, but this time if we use Gauss's law and integrate over a spherical surface of radius ${r>R}$, we get

$\displaystyle \int\mathbf{E}_{\rho}\cdot d\mathbf{a} = \frac{1}{\epsilon_{0}}\int\rho\, d^{3}\mathbf{r}'$

$\displaystyle 4\pi r^{2}E_{\rho} = -\frac{1}{\epsilon_{0}}\frac{3q}{4\pi R^{3}}\frac{4\pi R^{3}}{3}$

$\displaystyle E_{\rho} = -\frac{1}{4\pi\epsilon_{0}}\frac{q}{r^{2}}$

$\displaystyle \mathbf{E}_{\rho} = -\frac{1}{4\pi\epsilon_{0}}\frac{q}{r^{2}}\hat{\mathbf{r}}$

The RHS of the second line arises from the fact that all of the charged sphere is now interior to the surface of integration, while on the LHS, we are still integrating over a spherical surface of radius ${r>R}$.

The average field over the sphere produced by a point charge outside it is thus the same as the field produced by this charge at the centre of the sphere. By superposition we can apply this argument to any distribution of charge external to the sphere.
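As a quick numerical aside (not part of the original post), the result ${\mathbf{E}_{av}=-\frac{1}{4\pi\epsilon_{0}}\frac{\mathbf{p}}{R^{3}}}$ for a charge inside the sphere is easy to spot-check by Monte Carlo integration. The sketch below works in units where ${1/4\pi\epsilon_{0}=1}$; samples that land in a small ball centred on the charge are zeroed out, which introduces no bias because the point-charge field averaged over such a ball vanishes by symmetry.

```python
import numpy as np

rng = np.random.default_rng(0)
R = 1.0                            # sphere radius
q = 1.0                            # point charge, units with 1/(4*pi*eps0) = 1
r_q = np.array([0.3, -0.2, 0.1])   # charge position inside the sphere

# sample points uniformly inside the sphere by rejection from the bounding cube
pts = rng.uniform(-R, R, size=(4_000_000, 3))
pts = pts[np.sum(pts**2, axis=1) < R**2]

d = pts - r_q
s = np.linalg.norm(d, axis=1, keepdims=True)
E = q * d / s**3                   # Coulomb field at each sample point

# the field averaged over a small ball centred on the charge is zero by symmetry,
# so zeroing these (numerically wild) samples does not bias the volume average
E[(s < 0.05 * R).ravel()] = 0.0

print("Monte Carlo average field:", E.mean(axis=0))
print("Predicted   -q*r/R^3     :", -q * r_q / R**3)
```

The two printed vectors should agree to within a few per cent; increasing the number of samples tightens the agreement.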
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 46, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8980895280838013, "perplexity_flag": "head"}
http://crypto.stackexchange.com/questions/609/keeloq-showing-that-decryption-is-indeed-the-inverse-of-encryption
# KeeLoq showing that decryption is indeed the inverse of encryption

In some text I am reading, there is an exercise asking to show that the KeeLoq decryption function is the inverse of the encryption function. Details about KeeLoq are given in the Wikipedia article. As I am no hardware guy, I have no clue how to approach this problem. Does someone happen to see how to show the requested property?

## 1 Answer

It is difficult to find a complete specification, so I'm going by the Wikipedia article you linked. These images (public domain, from Wikipedia) describe one round of the encryption/decryption algorithm, with 528 rounds in total. At the start, the plaintext is put into the 32-bit shift register, and at the end the ciphertext is taken out of it (for decryption, the other way around). In each round, only one bit of data is actually changed (the new bit with number 31 for encryption, 0 for decryption); the other bits are only shifted around.

If we number all the bits in the "data queue" over time by $L_0 \dots L_{559}$, with bit $j$ before round $i$ being $L_{i+j}$, then $L_{i+32}$ is the new bit 31 after round $i$, and we can describe the encryption algorithm as follows (from the text to Figure 1 (page 9) in Periodic ciphers with small blocks and cryptanalysis of KeeLoq, which features a similar image):

1. Initialize with the plaintext: $L_{31}, \dots, L_0 := P_{31}, \ldots, P_0$.
2. For $i = 0, \ldots, 528 − 1$ do $$L_{i+32} := k_{i\ \bmod\ 64} ⊕ L_i ⊕ L_{i+16} ⊕ NLF(L_{i+31}, L_{i+26}, L_{i+20}, L_{i+9} , L_{i+1})$$
3. The ciphertext is $C_{31}, \ldots, C_0 := L_{528+31}, \ldots, L_{528}$

The decryption is similar, but here we number the rounds from 528 down to 1, with bit $j$ before round $i$ being $L_{i+j}$; $L_{i-1}$ is the output of round $i$.

1. Initialize with the ciphertext: $L_{528+31}, \dots, L_{528} := C_{31}, \ldots, C_0$.
2. For $i = 528, \ldots, 1$ do $$L_{i-1} := k_{i-1\ \bmod\ 64} ⊕ L_{i+31} ⊕ L_{i+15} ⊕ NLF(L_{i+30}, L_{i+25}, L_{i+19}, L_{i+8} , L_{i+0})$$
3. The plaintext is $P_{31}, \ldots, P_0 := L_{31}, \ldots, L_{0}$

Now all that remains to show is that the two equations $$L_{i-1} = k_{i-1\ \bmod\ 64} ⊕ L_{i+31} ⊕ L_{i+15} ⊕ NLF(L_{i+30}, L_{i+25}, L_{i+19}, L_{i+8} , L_{i+0})$$ and $$L_{i+32} = k_{i\ \bmod\ 64} ⊕ L_i ⊕ L_{i+16} ⊕ NLF(L_{i+31}, L_{i+26}, L_{i+20}, L_{i+9} , L_{i+1})$$ are indeed equivalent. (This is a simple index shift, plus reordering some terms using the fact that XOR is its own inverse.)
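As a supplement (not part of the original answer): the two recurrences can be turned directly into a small Python sketch, which makes the round-trip property easy to test. The non-linear function is taken to be the 32-bit truth table 0x3A5C742E that is commonly quoted for KeeLoq; that constant and the test key below are assumptions for illustration, not an authoritative specification.

```python
NLF_TABLE = 0x3A5C742E  # assumed KeeLoq NLF truth table (value commonly quoted in the literature)

def nlf(a, b, c, d, e):
    # the five input bits select one bit of the 32-bit truth table
    return (NLF_TABLE >> ((a << 4) | (b << 3) | (c << 2) | (d << 1) | e)) & 1

def bit(x, n):
    return (x >> n) & 1

def encrypt(plain, key, rounds=528):
    x = plain & 0xFFFFFFFF
    for i in range(rounds):
        new = (bit(key, i % 64) ^ bit(x, 0) ^ bit(x, 16)
               ^ nlf(bit(x, 31), bit(x, 26), bit(x, 20), bit(x, 9), bit(x, 1)))
        x = (x >> 1) | (new << 31)          # new bit becomes bit 31
    return x

def decrypt(cipher, key, rounds=528):
    x = cipher & 0xFFFFFFFF
    for i in range(rounds, 0, -1):
        new = (bit(key, (i - 1) % 64) ^ bit(x, 31) ^ bit(x, 15)
               ^ nlf(bit(x, 30), bit(x, 25), bit(x, 19), bit(x, 8), bit(x, 0)))
        x = ((x << 1) & 0xFFFFFFFF) | new   # new bit becomes bit 0
    return x

key = 0x0123456789ABCDEF                    # arbitrary 64-bit test key
for p in (0x00000000, 0xDEADBEEF, 0x12345678):
    assert decrypt(encrypt(p, key), key) == p
print("decrypt(encrypt(p)) == p for all test plaintexts")
```

The assertions pass because the decryption round is exactly the encryption recurrence solved for the oldest bit, which is the index-shift argument made above.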
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 17, "mathjax_display_tex": 4, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9285826086997986, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/tagged/coulombs-law?sort=votes&pagesize=15
# Tagged Questions The coulombs-law tag has no wiki summary. learn more… | top users | synonyms 5answers 2k views ### Does Coulomb's Law, with Gauss's Law, imply the existence of only three spatial dimensions? Coulomb's Law states that the fall-off of the strength of the electrostatic force is inversely proportional to the distance squared of the charges. Gauss's law implies that a the total flux through a ... 4answers 501 views ### Why are so many forces explainable using inverse squares when space is three dimensional? It seems paradoxical that the strength of so many phenomena (Newtonian gravity, Coulomb force) are calculable by the inverse square of distance. However, since volume is determined by three ... 2answers 314 views ### Using photons to explain electrostatic force I am trying to understand the idea of a force carrier with the following example. Let's say there are two charges $A$ and $B$ that are a fixed distance from each other. What is causing the force on ... 1answer 150 views ### How the inverse square law in electrodynamics is related to photon mass? I have read somewhere that one of the tests of the inverse square law is to assume nonzero mass for photon and then, by finding a maximum limit for it , determine a maximum possible error in ... 2answers 971 views ### Coulomb potential in 2D I know that the Coulomb potential is logarithmic is two dimensions, and that (see for instance this paper: http://pil.phys.uniroma1.it/~satlongrange/abstracts/samaj.pdf) a length scale naturally ... 1answer 284 views ### Are the Maxwell's equations enough to derive the law of Coulomb? Are the 8 Maxwell's equations enough to derive the formula for the electromagnetic field created by a stationary point charge, which is the same as the law of Coulomb? If I am not mistaken, due to ... 3answers 238 views ### Change in attraction of charged bodies If I insert a piece of glass between two objects carrying different charges, would they still attract? If they attract, does the piece of glass affect the force of attraction and is there any formula ... 3answers 112 views ### Precision of Coulomb's law Up to which precision has the coulomb law proven to be true? I.e. if you have two electrons in a vacuum chamber, 5 meters appart, have the third order terms been ruled out? Are there any theoretical ... 3answers 808 views ### Coulomb's Law: why is $k = \dfrac{1}{4\pi\epsilon_0}$ This was supposed to be a long question but something went wrong and everything I typed was lost. Here goes. Why is $k = \dfrac{1}{4\pi\epsilon_0}$ in Coulomb's law? Is this an experimental fact? ... 1answer 55 views ### What are the limits of applicability of Coulomb's Law? Coulomb's law is formally parallel to Newton's Law of Universal Gravitation, which is known to give way to General Relativity for very large masses. Does Coulomb's Law have any similar limits of ... 1answer 78 views ### Units for physical constants Someone told me that units for $G$ and $\epsilon_0$ (gravitational constant and Coulomb's constant) are placed there simply to make equations work dimensionally and that there is no real physical ... 2answers 704 views ### Electric potential due to a point charge in Gaussian/CGS units I learned electrostatics in SI units. In SI, the electrostatic potential due to a point charge $q$ located at $\textbf{r}$ is given by $\Phi(\textbf{r}) = \frac{q}{4 \pi \epsilon_0 |\textbf{r}|}$. ... 
0answers 183 views ### Modified Coulomb potential I'm working through Byron and Fuller's "Mathematics of Classical and Quantum Physics" and came across this problem: If the electric potential of a point charge were \$\phi(r) = ... 3answers 242 views ### Similarity between the Coulomb force and Newton's gravitational force Coulomb force and gravitational force has the same governing equation. So they should be same in nature. A moving electric charge creates magnetic field, so a moving mass should create some force ... 2answers 106 views ### How to check units? I've got: $Q=\frac{Er^2}{k}$ how to check the units? I start with $\left[\frac{\text V}{\text m} \, \text m^2\right]$, tried replacing $[ \text V ]$ with $\left[ \frac{\text J}{\text C} \right]$, but ... 2answers 156 views ### In which cases is it better to use Gauss' law? I could, for example calculate the electric field near a charged rod of infinite length using the classic definition of the electric field, and integrating the: \overrightarrow{dE} = \frac{dq}{4 ... 3answers 397 views ### Electrostatic Potential Energy I have read many books on Mechanics and Electrodynamics and the one thing that has confused me about electrostatic potential energy is its derivation .One of the classical derivations is : ... 2answers 292 views ### Coulomb's law and Plasma Does Coulomb's law apply to Plasma? 1answer 121 views ### Finding the electric field on a point (x,y,z) using Coulomb's Law Using Gauss' Law, the answer is $$\frac{Q}{4 \pi \epsilon R^2}.$$ However if I were to do the integration using Coulomb's Law, I get \int_0^{2\pi} \int_{0}^{\pi}\int_r^a \frac{\rho \sin\theta dR ... 1answer 542 views ### How is Gauss' Law (integral form) arrived at from Coulomb's Law, and how is the differential form arrived at from that? On a similar note: when using Gauss' Law, do you even begin with Coulomb's law, or does one take it as given that flux is the surface integral of the Electric field in the direction of the normal to ... 1answer 205 views ### Gravity force strength in 1D, 2D, 3D and higher spatial dimensions Let's say that we want to measure the gravity force in 1D, 2D, 3D and higher spatial dimensions. Will we get the same force strength in the first 3 dimensions and then it will go up? How about if ... 2answers 99 views ### A particle of charge $-e$ orbits a particle of charge $Ze$, what is its orbital frequency? A point particle $P$ of charge $Ze$ is fixed at the origin in 3-dimensions, while a point particle $E$ of mass $m$ and charge $-e$ moves in the electric field of $P$. I have the Newtonian equation of ... 0answers 37 views ### Static electrical attraction [closed] Coulomb's law is used to calculate the electrical attraction between 2 charged particles, what formula do I use to calculate an electrical attraction magnitude between 2 plates? Let's assume the first ... 3answers 263 views ### Force inversely proportional to the squared distance Newton's law of universal gravitation: "Newton's law of universal gravitation states that every point mass in the universe attracts every other point mass with a force that is directly proportional to ... 3answers 152 views ### What was wrong with action a distance? It is usually said that the idea of fields was introduced (electric and magnetic fields) in electricity and magnetism after Coulomb's law to cure the conceptual problems of action at a distance. ... 
1answer 153 views ### If 2 charges have the same sign, the coulomb force is positive but repulsive, while with 2 masses the gravitational force is positive but attractive If you have two point objects both the same positive charge and both of the same mass at a distance $r$ from each other. The force between them due to gravity is $F_g=\frac{Gmm}{r^2}$ and $F_g$ is ... 1answer 99 views ### How does one come up with the Coulomb's law? My teacher mentioned that field line density = no. of lines / area and the total area of a sphere is $4\pi r^2$ and so an electric force is inversely proportional to $r^2$. Actually, why can the total ... 1answer 2k views ### Electric field calculator [closed] Where can I find an electric field calculator? I'm looking for something that can use "x" (or any varable) as a point charge. specifically, I'm looking for something that can I can imput the field ... 1answer 58 views ### A ring placed along $y^2 + z^2 = 4$, $x = 0$ carries a uniform charge of $5 \mu\ C/m$. Find $D$ at $P(3,0,0)$ [closed] A ring placed along $y^2 + z^2 = 4$, $x = 0$ carries a uniform charge of $5 \mu\ C/m$. Find $D$ at $P(3,0,0)$ How do I solve this using Coulomb's Law? I used \$dE=\dfrac{dQ}{4\pi\epsilon_0 ... 1answer 86 views ### Gaussian Unit of Charge and Force I just read that in the Gaussian Units of charge The Final equation in Coulomb's law is as simple as $$\boldsymbol{F}=\frac{q_1q_2}{r^2}$$ No $\epsilon_0$ no $4\pi$ like you have in the $\mbox{SI}$ ... 0answers 486 views ### Placing charges using Coulombs law [closed] A charge +Q is located at the origin and a second charge, +4Q is at a distance d on the x-axis. where should a third charge, q, be placed, and what should be its sign and magnitude, so that all three ... 4answers 712 views ### What is potential at a point? What does potential at a point exactly mean? My teacher tells me that current flows from higher potential to lower potential but when I ask him the reason, he fails to give me a convincing answer. ...
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 41, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9371271729469299, "perplexity_flag": "middle"}
http://mathhelpforum.com/differential-geometry/180924-when-exactly-do-we-use-cauchy-riemann-equations.html
# Thread:

1. ## When exactly do we use the Cauchy-Riemann equations?

I'm having a really hard time trying to figure out when to use the Cauchy-Riemann equations, u_x = v_y and u_y = -v_x. My textbook says that they are necessary, but not sufficient, conditions for differentiability. My textbook also says that they can be used to show that a function is not differentiable at a point, by showing that the CR equations do not hold. So is that all they can be used for, to show non-differentiability?

2. Originally Posted by Alexrey

So is that all they can be used for, to show non-differentiability?

We can also use the definition of differentiability. For example, for $f(z)=\bar{z}$ we have (writing $h=|h|e^{i\theta}$)

$\dfrac{\overline{z+h}-\bar{z}}{h}=\dfrac{\bar{h}}{h}=e^{-2i\theta}$

so the limit as $h\to 0$ depends on $\theta$; as a consequence, f is not differentiable at z.

3. I know that we can use the above to prove that a function is not differentiable, but let's say that I get a question that says, "Prove that the following function is NOT differentiable." Then, instead of using the above method that you mentioned, could I straight away use the Cauchy-Riemann equations to prove the statement?

Also, my textbook says that if a function has continuous first-order partial derivatives that satisfy the Cauchy-Riemann equations, then it is differentiable. So if I get a question that says, "The following function has continuous partial derivatives, prove that it is differentiable", then again can I use the CR equations instead of the usual method for proving differentiability that you used?

4. Originally Posted by Alexrey

I know that we can use the above to prove that a function is not differentiable, but let's say that I get a question that says, "Prove that the following function is NOT differentiable", then instead of using the above method that you mentioned, could I straight away use the Cauchy-Riemann equations to prove the statement?

Of course you can: $f(z)=\bar{z}=x-iy\Rightarrow u_x=1\neq -1=v_y$, so f is not differentiable at any $z=x+iy$.

Also my textbook says that if a function has continuous first order partial derivatives that satisfy the Cauchy-Riemann equations, then it is differentiable, so if I get a question that says, "The following function has continuous partial derivatives, prove that it is differentiable", then again can I use the CR equations instead of the usual method for proving differentiability that you used?

Yes, you can.

5. Awesome, thanks very much!
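A small aside that is not part of the original thread: for a function given by an explicit formula, the Cauchy-Riemann check discussed above can be automated. A minimal SymPy sketch, where the sample functions are just illustrative choices:

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
z = x + sp.I * y

f = z**2                            # candidate function; swap in any expression in z
u = sp.re(sp.expand(f))             # real part u(x, y)
v = sp.im(sp.expand(f))             # imaginary part v(x, y)

# Cauchy-Riemann equations: u_x = v_y and u_y = -v_x
print(sp.simplify(sp.diff(u, x) - sp.diff(v, y)))   # 0, so the first equation holds
print(sp.simplify(sp.diff(u, y) + sp.diff(v, x)))   # 0, so the second equation holds

g = sp.conjugate(z)                 # f(z) = conj(z), the example from the thread
u, v = sp.re(sp.expand(g)), sp.im(sp.expand(g))
print(sp.diff(u, x), sp.diff(v, y))                 # 1 and -1, so u_x != v_y and CR fails
```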
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 6, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9498515129089355, "perplexity_flag": "head"}
http://mathoverflow.net/revisions/57291/list
# Image of splitting of short exact sequence of algebraic fundamental groups

If we have a variety X over a field k, x is a geometric point of X, and $\bar x$ is a geometric point of $X_{k^s} := X \times_k k^s$ above x, then we have the following short exact sequence:

$1 \rightarrow \pi_1(X_{k^s}, \bar x) \rightarrow \pi_1(X,x) \rightarrow Gal(k) \rightarrow 1$

Implicit in this is a choice of $k^s$ (if you want, this is a choice of geometric point, z, on Spec(k); $\pi_1(Spec(k), z)=Gal(k)$).

Suppose f is a splitting of this short exact sequence, and consider the image of f, denoted Im(f); it is a subgroup of $\pi_1(X,x)$. Is there any understanding of the subgroup of $\pi_1(X,x)$ generated by {Im(f) | all splittings f}? And when will it be the full group $\pi_1(X,x)$?
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 16, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.90696781873703, "perplexity_flag": "head"}
http://mathoverflow.net/questions/80055/sums-of-binomials-with-even-coefficients/80068
Sums of binomials with even coefficients

While looking for a closed form of an expression I worked my way to a formula that resembles the Vandermonde convolution, but is summed over even binomial coefficients only:

$\sum_{k=0}^n\sum_{l=0}^n{{2k+2l}\choose{2l}}{{4n-2k-2l}\choose{2n-2l}}$

I'm at a loss as to what to do with it. I can re-write it in several ways, but the principal problem remains. Is there a known technique to attack such sums? Thanks.

1 Answer

We have $$\sum_l\binom{a+l}lx^l=\frac1{(1-x)^{a+1}},$$ hence the generating function for the even terms of the sequence is $$\sum_l\binom{a+2l}{2l}x^{2l}=\frac12\left(\frac1{(1-x)^{a+1}}+\frac1{(1+x)^{a+1}}\right).$$ Consequently, \begin{multline*}\sum_l\binom{a+2l}{2l}\binom{b+2(n-l)}{2(n-l)}=\\ [x^{2n}]\frac14\left(\frac1{(1-x)^{a+1}}+\frac1{(1+x)^{a+1}}\right)\left(\frac1{(1-x)^{b+1}}+\frac1{(1+x)^{b+1}}\right),\end{multline*} where $[x^n]f$ denotes the $n$th coefficient of the power series for $f$. Plugging in the actual values for $a$ and $b$ and summing over $k$ gives \begin{align*} \sum_k\sum_l&\binom{2(k+l)}{2l}\binom{2(n-k)+2(n-l)}{2(n-l)}\\ &=[x^{2n}]\sum_{k=0}^n\frac14\left(\frac1{(1-x)^{2k+1}}+\frac1{(1+x)^{2k+1}}\right)\left(\frac1{(1-x)^{2(n-k)+1}}+\frac1{(1+x)^{2(n-k)+1}}\right)\\ &=[x^{2n}]\left[\frac{n+1}4\left(\frac1{(1-x)^{2n+2}}+\frac1{(1+x)^{2n+2}}\right)+\frac{1-x^2}{8x}\left(\frac1{(1-x)^{2n+2}}-\frac1{(1+x)^{2n+2}}\right)\right]\\ &=\frac{2n^2+4n+1}{2n+1}\binom{4n}{2n}=(2n^2+4n+1)C_{2n}. \end{align*}

- For $n=1$, we have $\sum_k \sum_l \binom{2k+2l}{2l}\binom{4n-2k-2l}{2n-2l} = \sum_l \left[\binom{2l}{2l}\binom{4-2l}{2-2l} + \binom{2+2l}{2l}\binom{2-2l}{2-2l}\right] = \binom{4}{2} + \binom{2}{2} + \binom{2}{0} + \binom{4}{2} = 14$. – Zack Wolske Nov 5 2011 at 8:19
- Thanks! In retrospect I should have figured that out myself. After fixing numerical mistakes after the second-to-last $=$ it evaluates to $\frac{n+1}{2}{4n+1 \choose 2n+1}-\frac{1}{4}{4n \choose 2n+1}+\frac{1}{4}{4n+2 \choose 2n+1}$ – Rasto S. Nov 7 2011 at 15:41
- I am sorry for the errors, I hope it is correct now. – Emil Jeřábek Nov 8 2011 at 14:49
- Yep, it's fixed. – Zack Wolske Nov 8 2011 at 16:13
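Not part of the original exchange, but the closed form is easy to sanity-check numerically. A short Python snippet comparing the double sum with $(2n^2+4n+1)C_{2n}$ for small $n$:

```python
from math import comb

def lhs(n):
    # the original double sum over even binomial coefficients
    return sum(comb(2 * (k + l), 2 * l) * comb(2 * (n - k) + 2 * (n - l), 2 * (n - l))
               for k in range(n + 1) for l in range(n + 1))

def rhs(n):
    # closed form (2n^2 + 4n + 1) * C_{2n}, with C_m the m-th Catalan number
    catalan_2n = comb(4 * n, 2 * n) // (2 * n + 1)
    return (2 * n * n + 4 * n + 1) * catalan_2n

for n in range(8):
    assert lhs(n) == rhs(n)
print("identity verified for n = 0..7")
```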
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 23, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.7682862281799316, "perplexity_flag": "middle"}
http://mathforum.org/mathimages/index.php?title=The_Monty_Hall_Problem&diff=14665&oldid=14097
# The Monty Hall Problem

Field: Algebra
Image Created By: Grand Illusions

The Monty Hall problem is a probability puzzle based on the 1960's game show Let's Make a Deal. When the Monty Hall problem was published in Parade Magazine in 1990, approximately 10,000 readers, including nearly 1,000 with PhDs, wrote to the magazine claiming the published solution was wrong. It remains one of the most disputed mathematical puzzles of all time.

# Basic Description

### The Problem

The show's host, Monty Hall, asks a contestant to pick one of three doors. One door leads to a brand new car, but the other two lead to goats. Once the contestant has picked a door, Monty opens one of the two remaining doors. He is careful never to open the door hiding the car. After Monty has opened one of these other two doors, he offers the contestant the chance to switch doors. Is it to his advantage to stay with his original choice, switch to the other unopened door, or does it not matter?

### The Solution

If you answered that the contestant's decision doesn't matter, then you are among about 90% of respondents who were quickly able to determine that the two remaining doors must be equally likely to hide the car. You are also wrong.
The answer to the Monty Hall Problem is viewed by most people, including mathematicians, as extremely counter-intuitive. It is actually to the contestant's advantage to switch: the probability of winning if the contestant doesn't switch is 1/3, but if the contestant switches, the probability becomes 2/3. To see why this is true, we examine each possible scenario below.

We can first imagine the case where the car is behind door 1. In the diagram below, we can see what prize the contestant will win if he always stays with his initial pick after Monty opens a door. If the contestant uses the strategy of always staying, he will only win if he originally picked door 1.

If the contestant always switches doors when Monty shows him a goat, then he will win if he originally picked door 2 or door 3.

A player who stays with the initial choice wins in only one out of three of these equally likely possibilities, while a player who switches wins in two out of three. Since we know that the car is equally likely to be behind each of the three doors, we can generalize our strategy for the case where the car is behind door 1 to any placement of the car. The probability of winning by staying with the initial choice is 1/3, while the probability of winning by switching is 2/3. The contestant's best strategy is to always switch doors so he can drive home happy and goat-free.

# Aids to Comprehension

The Monty Hall problem has the distinction of being one of the rare math problems that has gained recognition on the front page of the Sunday New York Times. On July 21, 1991, the Times published a story that explained a heated argument between a Parade columnist, Marilyn vos Savant, and numerous angry readers. Many of these readers held distinguished degrees in mathematics, and the problem seemed far too elementary to warrant such difficulty in solving. Further explanation of the readers' debate with vos Savant can be found in the Why It's Interesting section. However, if you aren't completely convinced that switching doors is the best strategy, be aware that the Monty Hall problem has been called "math's most contentious brain teaser." The following explanations are alternative approaches to the problem that may help clarify that the best strategy is, in fact, switching doors.

#### Why the Probability is not 1/2

The most common misconception is that the odds of winning are 50-50 no matter which door a contestant chooses. Most people assume that each door is equally likely to contain the car since the probability was originally distributed evenly between the three doors. They believe that they have no reason to prefer one door, so it does not matter whether they switch or stick with their original choice.

This reasoning seems logical until we realize that the two doors cannot be equally likely to hide the car. The critical fact is that Monty's choice of which door to open is not random, so when he opens a door, it gives the contestant new information.

Marilyn defended her answer in a subsequent column addressing this point specifically. Suppose we pause after Monty has revealed a goat, and a UFO settles down onto the stage and a little green woman emerges. The host asks her to point to one of the two unopened doors. Then the chances that she'll randomly choose the one with the prize are 1/2. But that's because she lacks the advantage the original contestant had: the help of the host.
"When you first choose door #1 from three, there's a 1/3 chance that the prize is behind that one and a 2/3 chance that it's behind one of the others. But then the host steps in and gives you a clue. If the prize is behind #2, the host shows you #3, and if the prize is behind #3, the host shows you #2. So when you switch, you win if the prize is behind #2 or #3. You win either way! But if you don't switch, you win only if the prize is behind door #1," Marilyn explained. This is true because when Monty opens a door, he is reducing the probability that it contains a car to 0. When the contestant makes an initial pick, there is a 1/3 chance that he picked the car and a 2/3 chance that one of the other two doors has the car. When Monty shows him a goat behind one of those two doors, the 2/3 chance is only for the one unopened door because the probability must be 0 for the one that the host opened. #### An Extreme Case of the Problem Imagine that you are on Let's Make a Deal are there are now 1 million doors. You choose your door, then Monty opens all but one of the remaining doors, showing you that they hide goats. It’s clear that your first choice is unlikely to have been the right choice out of 1 million doors. Since you know that the car must be hidden behind one of the unopened doors and it is very unlikely to be behind your door, you know that it must be behind the other door. In fact, on average in 999,999 out of 1,000,000 times the other door will contain the prize because 999,999 out of 1,000,000 times the player first picked a door with a goat. Switching to the other door is the best strategy. #### Simulation Using a simulation is another useful way to show that the probability of winning by switching is 2/3. A simulation using playing cards allows us to perform multiple rounds of the game easily. One simulation proposed by vos Savant herself requires only two participants, a player and a host. Three cards are held by the host, one ace that represents the prize and two lower cards that represent the mules. The host holds up the three cards so only he can see their values. The contestant picks a card, and it is placed aside so that he still cannot see the value. Monty then reveals one of the remaining low cards which represents a mule. He must choose between the two lower cards if they both remain in his hand. If the card remaining in the host's hand is an ace, then this is recorded as a round where the player would have won by switching. Contrastingly, if the host is holding a low card, the round is recorded as one where staying would have won. Performing this simulation repeatedly will reveal that a player who switches will win the prize approximately 2/3 of the time. #### Play the Game http://www.nytimes.com/2008/04/08/science/08monty.html?_r=1 # A More Mathematical Explanation [Click to view A More Mathematical Explanation] The following explanation uses Bayes' Theorem to show how Monty revea [...] [Click to hide A More Mathematical Explanation] The following explanation uses Bayes' Theorem to show how Monty revealing a goat changes the game. Let the door picked by the contestant be called door a and the other two doors be called b and c. Also, Va, Vb, and Vc, are the events that the car is actually behind door a, b, and c respectively. We begin by looking at a scenario that leads to Monty opening door b, so let Ob be the event that Monty Hall opens curtain b. Then, the problem can be restated as follows: Is $P(V_a|O_b) = P(V_c|O_b)$? 
$P(V_a|O_b)$ is the probability that door a hides the car given that Monty opens door b. Similarly, $P(V_c|O_b)$ is the probability that door c hides the car given that monty opens door b. So, when $P(V_a|O_b) = P(V_c|O_b)$ the probability that the car is behind one unopened door the same as the probability that the car is behind the other unopened door. If this is the case, it won't matter if the contestant stays or switches. Using Bayes' Theorem, we know that $P(V_a|O_b)=\frac{P(V_a)*P(O_b|V_a)}{P(O_b)}$ $P(V_c|O_b)=\frac{P(V_c)*P(O_b|V_c)}{P(O_b)}$ Also, we can assume that the prize is randomly placed behind the curtains, so $P(V_a) = P(V_b) = P(V_c) = \frac{1}{3}$ Then we can calculate the conditional probabilities for the event Ob, which we can then use to calculate the probability of event Ob. First, we can calculate the conditional probability that Monty opens door b if the car is hidden behind door a. $P(O_b|V_a) = 1/2$ because if the prize is behind a, Monty can open either b or c. $P(O_b|V_b) = 0$ because if the prize is behind door b, Monty can't open door b. $P(O_b|V_c) = 1$ because if the prize is behind door c, Monty can only open door b. Each of these probabilities is conditional on the fact that the prize is hidden behind a specific door, but we are assuming that each of these probabilities is mutually exclusive since the car can only be hidden behind one door. As a result, we know that P(Ob) is equal to $P(O_b) = P(O_b \cap V_a) + P(O_b \cap V_b) + P(O_b \cap V_c)$ Using the equation for the probability of non-independent events, we can say $P(O_b)= P(V_a)P(O_b|V_a) + P(V_b)P(O_b|V_b) +P(V_c)P(O_b|V_c)$ $= \frac{1}{3} * \frac{1}{2} + \frac{1}{3} * 0 + \frac{1}{3} * 1$ $= \frac{1}{2}$ Then, we can use $P(O_b)$, $P(O_b|V_a)$, and $P(V_a)$ to calculated $P(V_a|O_b)$. $P(V_a|O_b) = \frac {P(V_a)*P(O_b|V_a)}{P(O_b)}$ $= \frac {\frac{1}{3} * \frac{1}{2}} {\frac{1}{2}}$ $= \frac {1}{3}$ Similarly, $P(V_c|O_b) = \frac {P(V_c)*P(O_b|V_c)}{P(O_b)}$ $= \frac {\frac{1}{3} * 1} {\frac{1}{2}}$ $= \frac {2}{3}$ The probability of Vc (the event that car is hidden behind door c) in this case is not equal to the probability of Va (the case where the car is hidden behind the door that Monty hasn't opened and the contestant hasn't selected). The contestant is offered an opportunity to switch to door c. We have calculated that the probability of winning when door c is selected is 2/3 and the probability of winning with the contestant's original choice, door a is 1/3. Since Monty is equally likely to open any of the three doors, we can generalize this strategy for any door that he opens. The probability that the car is hidden behind the contestant's original choice is 1/3, but the probability that the car is hidden behind the unopened and unselected door is 2/3. If the contestant switches, he doubles his chance of winning. # Why It's Interesting Variations of the problem have been popular game teasers since the 19th century, but the "Lets Make a Deal" version is most widely known. #### History of the Problem The earliest of several probability puzzles related to the Monty Hall problem is Bertrand's box paradox, posed by Joseph Bertrand in 1889. In Bertrand's puzzle there are three boxes: a box containing two gold coins, a box with two silver coins, and a box with one of each. The player chooses one random box and draws a coin without looking. The coin happens to be gold. What is the probability that the other coin is gold as well? 
As in the Monty Hall problem the intuitive answer is 1/2, but the actual probability 2/3. #### Ask Marilyn: A Story of Misguided Hatemail The question was originally proposed by a reader of “Ask Marilyn”, a column in Parade Magazine in 1990. Marilyn's correct solution, that switching doors was the best strategy, caused an uproar among mathematicians. While most people responded that switching should not matter, the contestant’s chances for winning in fact double if he switches doors. Part of the controversy, however, was caused by the lack of agreement on the statement of the problem itself. Most statements of the problem, including the one in Marilyn's column, do not match the rules of the actual game show. This was a source of great confusion when the problem was first presented. The main ambiguities in the problem arise from the fact that it does not fully specify the host's behavior. For example, imagine a host who wasn't required to always reveal a goat. The host's strategy could be to open a door only when the contestant has selected the correct door initially. This way, the host could try to tempt the contestant to switch and lose. When first presented with the Monty Hall problem, an overwhelming majority of people assume that switching does not change the probability of winning the car even when the problem was stated to remove all sources of ambiguity. An article by Burns and Wieth cited various studies on the Monty Hall problem that document difficulty solving the Monty Hall problem specifically. These previous articles reported 13 studies using standard versions of the Monty Hall dilemma, and reported that most people do not switch doors. Switch rates ranged from 9% to 23% with a mean of 14.5%, even when the problem was stated explicitely. This consistency is especially remarkable given that these studies include a range of different wordings, methods of presentations, languages, and cultures. Marilyn quotes cognitive psychologist Massimo Piattelli-Palmarini in her own book saying "... no other statistical puzzle comes so close to fooling all the people all the time" and "that even Nobel physicists systematically give the wrong answer, and that they insist on it, and they are ready to berate in print those who propose the right answer" (Bostonia July/August 1991). When the Monty Hall problem appeared in Parade, approximately 10,000 readers, including nearly 1,000 with PhDs, wrote to the magazine claiming the published solution was wrong. One letter written to vos Savant by Dr. E. Ray Bobo of Georgetown University was especially critical of Marilyn's solution: "You are utterly incorrect about the game show question, and I hope this controversy will call some public attention to the serious national crisis in mathematical education. If you can admit your error, you will have contributed constructively toward the solution to a deplorable situation. How many irate mathematicians are needed to get you to change your mind?" #### Monty and Monkeys A recent article published in The New York Times uncovered an interesting relationship between the Monty Hall problem and a study on cognitive dissonance using monkeys. If the calculations of Yale economist M. Keith Chen are correct, then some of the most famous experiments in psychology might be flawed. Chen believes the researchers drew conclusions based on natural inclination to incorrectly evaluate probability. 
The most famous experiment in question is the 1956 study "Postdecision changes in the desirability of alternatives" on rationalizing choices. The researchers studied which M&M colors were most preferred by monkeys. After identifying a few colors of M&Ms that were approximately equally favored by a monkey - say, red, blue, and yellow, - the researchers gave the monkey a choice between two of the colors. In one case, imagine that a monkey chose yellow over blue. Then, the monkey would be offered the choice between blue and red M&Ms. Researchers noted that about two-thirds of the time the monkey would choose red. The 1956 study claimed that their results reinforced the theory of rationalization: Once we reject something we are convinced that we never like it anyway. Dr. Chen reexamined the experimental procedure, and says that monkey's rejection of blue might be attributable to statistics alone. Chen says that although the three colors of M&M's are approximately equally favored, there must be some slight difference in preference between the original red, blue, and yellow. If this is the case, then the monkey's choice of yellow over blue wasn't arbitrary. Like Monty Hall's decision to open a door that hid a goat, the monkey's choice between yellow and blue discloses additional information. In fact, when a monkey favors yellow over blue, there's a two-thirds chance that it also started off with a preference for red over blue- which would explain why the monkeys chose red 2/3 of the time in the Yale experiment. To see why this is true, consider Chen's conjecture that monkeys must have some slight preference between the three colors they are being offered. The table below shows all the possible combinations of ways that a monkey could possibly rank its M&Ms. We can see that in the case where the monkey preferred yellow over blue, they monkey preferred red over blue in 2/3 of the rankings. Although Chen agrees that the study may have still discovered useful information about preferences, he doesn't believe it has been measured correctly yet. "The whole literature suffers from this basic problem of acting as if Monty's choice means nothing" (Tierney 2008). Monty Hall problem, the study of monkeys, and other problems involving unequal distributions of probability are notoriously difficult for people to solve correctly. Even academic studies may be littered with mistakes caused by difficulty interpreting statistics. #### 21 The 2008 movie 21 increased public awareness of the Monty Hall problem. 21 opens with an M.I.T. math professor using the Monty Hall Problem to explain theories to his students. The Monty Hall problem is included in the movie to show the intelligence of the main character because he is immediately able to solve such a notoriously difficult problem. # Teaching Materials There are currently no teaching materials for this page. Add teaching materials. # References http://www.nytimes.com/2008/04/08/science/08tier.html http://www.cs.dartmouth.edu/~afra/goodies/monty.pdf Heart of Mathematics. Edward B. Burger and Michael P. Starbird. http://en.wikipedia.org/wiki/Monty_Hall_problem The Monty Hall Problem: The Remarkable Story of Math's Most Contentious Brain Teaser. Jason Rosenhouse. Brehm, J. W. 
(1956) Postdecision changes in the desirability of alternatives, Journal of Abnormal and Social Psychology, 52, 384-9 http://www.math.jmu.edu/~lucassk/Papers/MHOverview2.pdf # Future Directions for this Page Helper Pages: • Bayes' Theorem • Probability Leave a message on the discussion page by clicking the 'discussion' tab at the top of this image page.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 24, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9513018131256104, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/39289/estimate-the-error-term-in-clt/39313
## estimate the error term in CLT ### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points) Let $X_m = \frac{1}{\sqrt{m}}\sum_{k=1}^m Z_k$ where $Z_k$ are iid equally likely on $\{\pm 1\}$. Then $X_m$ convergens to $X \sim \mathcal{N}(0,1)$ in distribution by CLT. Let $f$ be a smooth bounded function on $\mathbb{R}$. Then $\mathbb{E}[f(X_m)] \to \mathbb{E}[f(X)]$. I wonder if there is any general method to give sharp asymptotic estimate of the error term $\mathbb{E}[f(X_m)] - \mathbb{E}[f(X)]$, which I expect to be $\Theta(1/m)$. The scaling constant should depend on $f$ (as well as the distribution of $Z_k$ if they are not binary). For law of large number, this type of estimate can be done via the Delta method (e.g., to estimate $\mathbb{E}[f(\bar{Z})] - f(0)$). There must be a counterpart for CLT... I haven't found the Edgeworth expansion useful because it seems to work with distribution with densities. Edited: To be clear, I am only interested in some specific nice function (e.g., $f(x) = x^2 e^{-x^2/4}$) and finding a sharp expansion for the error term of the form, say, $c/m + o(1/m)$, where $c$ will depend n $f$. As pointed by Mark, the worst-case rate of all bounded smooth function $f$ is $1/\sqrt{m}$, which agrees with the upper bound given by Stein's method. - Do you perhaps mean to look at $f(\bar{X})$ instead of $f(X_m)$? For $f(x)=x^2$ your error term doesn't converge to 0. – Yaroslav Bulatov Sep 19 2010 at 19:02 No. I meant $f(X_m)$. For your $f$ the error term is zero. – mr.gondolier Sep 19 2010 at 19:12 1 oh...Z's are symmetric...so the error term is 0 regardless of m, right? – Yaroslav Bulatov Sep 19 2010 at 19:18 1 well if something converges to Gaussian weakly, then all its moments must converge. – mr.gondolier Sep 19 2010 at 19:25 Counterpart of Central Limit Theorem gives the distribution of $\sqrt{n}f(\bar{X})$. Distribution of $f(\sqrt{n}\bar{X})$ seems to have unusual behavior, for instance if $Z_i$'s are uniform on {0,1}, mean of $X_m$ goes to infinity, but because $f$ is bounded, distribution of $f(X_m)$ gets squished into a delta function – Yaroslav Bulatov Sep 19 2010 at 20:04 show 5 more comments ## 4 Answers Stein's method typically gives good Berry-Esseen type bounds for smooth test functions. See Chapter III of Stein's book (entirely viewable in Google Books). For example, specializing to your case of symmetric Bernoulli summands, equation (37) on p. 38 gives $$\vert \mathbb{E}f(X_m)-\mathbb{E}f(X)\vert \le \frac{2\Vert f' \Vert_\infty}{\sqrt{m}}.$$ For more general summands, there is some simple dependence on the third and fourth moments as well as $\Vert f \Vert_\infty$. Also, I'm pretty sure that $m^{-1/2}$ is the correct rate here even for Bernoullis, although I can't find a reference for a lower bound at the moment. Why do you expect better? - Mark, thanks for your reply. See my example below. Maybe I made a mistake? – mr.gondolier Sep 20 2010 at 15:42 ### You can accept an answer to one of your own questions by clicking the check mark next to it. This awards 15 reputation points to the person who answered and 2 reputation points to you. The Berry-Esseen theorem is a classical result of this sort. It predicts errors on the order of $m^{-1/2}$, however. - I didn't find Berry-Esseen directly applicable. It deals with CDF , that is, those $f$ which is indicator of a half-open interval. Maybe you can enlighten me how to do it. In fact I am more interested in lower bound. 
To give an example why $m^{-1/2}$ is not tight, consider $f(x) = x^4$. Then the error term equals $2/m$. – mr.gondolier Sep 19 2010 at 19:14 1 @mr.gondolier: Is $f(x) = x^4$ allowed? You state in the question that $f(x)$ is bounded. – Robby McKilliam Sep 19 2010 at 22:45 My argument is as follows: if an unbounded function like $x \mapsto x^4$ has an error term like $1/m$, I am positive that we can find a bounded function (e.g. by truncation) whose error term does not exceed $1/m$, which is strictly smaller than $1/\sqrt{m}$ Nate stated. – mr.gondolier Sep 19 2010 at 22:53 BTW, results giving error on the order of $m^{-1/2}$ assume non-zero derivatives. In your case, $f(\bar{X})$ converges to 0 at the rate of $m^{-2}$, this follows from Taylor expansion of $f(\bar{X})$ around 0 – Yaroslav Bulatov Sep 20 2010 at 19:03 If $X_m$ has cumulative distribution function $F_m$, and $X$ has cumulative distribution function $F$, then (at least formally) integration by parts gives you $$E(f(X_m))-E(f(X))=\int (F_m(x)-F(x)) df(x).$$ Now you can apply the Berry-Esseen bound. - 1 Also instead of Berry-Esseen it is better to apply non-uniform Berry-Esseen, i.e. something like (do not remember the constants exactly) $$|F_m(x) - F(x)| \le {\mathrm{const}\over\sqrt{m}} {1\over 1+x^3}$$ – Paul Yuryev Sep 20 2010 at 1:52 Sorry this is NOT an answer to my question... just some clarafications. The reason I think $m^{-1/2}$ is not tight is as follows. For example, take $f$ to be the characteristic function, we have $\mathbb{E}[e^{itX_m}] = (\mathbb{E}[e^{it Z/\sqrt{m}}])^m = (1 - t^2/(2m) + o(1/m))^m \to e^{-t^2/2} = \mathbb{E}[e^{itX}]$ at rate $1/m$, because $m\log(1-1/m) \to -1$ at rate $1/m$. Also, it seems all moments of $X_m$ converge to the moments of $X$ at rate $1/m$. Doing a Taylor expansion for those nice $f$ should also yield a rate of $1/m$? - 1 I'm not convinced by your heuristic for the characteristic function because you only get the $1/m$ rate for convergence of the log once $m \gg t^2$. So your argument shows that $\mathbb{E} e^{itX_m} \to \mathbb{E} e^{itX}$ at rate $1/m$, but with an implicit constant that may depend on $t$. Likewise, if $f$ is a polynomial, then $\mathbb{E} f(X_m) \to \mathbb{E} f(X)$ at rate $1/m$ with an implicit constant that may depend on the degree and coefficients of $f$. If, as you suggest above, you do a truncation and Taylor expansion you may lose some of that rate. – Mark Meckes Sep 20 2010 at 16:05 Just to clarify: You are correct about the $1/m$ rate for certain very nice functions, but I think that for the class of all bounded smooth functions the correct rate is $1/\sqrt{m}$, and that the loss is somehow due to the lack of uniformity in that $1/m$ rate for very nice functions. – Mark Meckes Sep 20 2010 at 16:09 I agree that there exists $f$ such that $1/sqrt{m}$ is tight. In fact I am not looking for uniform estimates but rather for a specific function $f(x) = x^2 e^(-x^2/4)$. In my OP, I said the scaling constant will depend on the function $f$. Based also on numerical result, I believe its rate is $1/m$. Do you think there is any method to produce a sharp expansion of the form $c/m + o(1/m)$? Any lower bound idea? – mr.gondolier Sep 20 2010 at 16:36 1 Okay, so you only expect $1/m$ for a specific $f$ you're interested in. You should edit the original question to make that clear, since it sounds like you're saying you expect $1/m$ for an arbitrary smooth bounded $f$. (I don't have an answer offhand but I'll think about it when I have some time.) 
– Mark Meckes Sep 20 2010 at 17:23
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 84, "mathjax_display_tex": 3, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9259660840034485, "perplexity_flag": "head"}
http://quant.stackexchange.com/questions/4721/should-i-use-garch-volatility-or-standard-deviation-in-cross-sectional-regressio/4722
# Should I use GARCH volatility or standard deviation in cross-sectional regression? I want to do a cross-sectional study where the historical, medium-long run volatility of some return series (call it $R_t$) is included as a regressor. Which of the following two estimates of volatility is superior in this context? $$\text{Option 1}$$ Of course, the simple standard deviation of historical returns over some window. $\boxed{\text{std.dev.}(R_t) = \sqrt{E[(R_t-E[R_t])^2]}}$ $$\text{Option 2}$$ Let's set up the GARCH(1,1) as an example of an alternative; • Mean equation: $R_t = \mu + \epsilon_t$ $\epsilon_t = z_t \sigma_t$ $z_t \sim N(0,1)$, $\epsilon_t \sim N(0,\sigma_t)$ • Variance equation: $\sigma_t^2 = \omega + k_1 \epsilon_{t-1}^2 + k_2 \sigma_{t-1}^2$ Then we have that $E[\sigma_t^2] = \omega + k_1 E[\epsilon_{t-1}^2] + k_2 E[\sigma_{t-1}^2]$ $\implies E[\sigma_t^2] = \omega + k_1 E[\sigma_t^2] + k_2 E[\sigma_t^2]$ $\implies \boxed{E[\sigma_t^2] = \frac{\omega}{1-k_1-k_2}}$ - ## 2 Answers I would recommend to use simple standard deviation (among the 2 options you offered). You are performing time series analysis of historical data points, you are not forecasting. Thus, why exposing yourself to a much more computationally intensive method? May I also point you to a related (not duplicate) thread: Why are GARCH models used to forecast volatility if residuals are often correlated? - @Jase, care to explain the unselecting of the answer and chosing a different answer which only adds that Garch models help in forecasting when you clearly stated that you do not look to forecast? I do not have an issue with the fact that you are at liberty to chose the best answer for you, I just do not follow your rational to chose an answer that agrees with mine and adds the forecasting point which you made clear you do not look to perform plus me pointing out that my answer does not pertain to forecasting volatility. Confused!!! – Freddy Dec 16 '12 at 9:08 add'l info: OP chose this answer then switched to a different one, was just curious of his/her thought process and was not trying to influence his final decision. – Freddy Dec 16 '12 at 10:25 Freddy I have been thinking about this for a while. I am still not sure. The reason I switched answers originally is because Bob made me consider EWMA when I had previously not considered it, and also because I don't view computational burden as an issue because I'm using low frequency data (meaning that, in my perception, your resulting recommendation lacked any substance when it came to my specific application). – Jase Dec 16 '12 at 11:28 ok, but then maybe you want to consider including a description of your specific application next time so people can give you a more targeted answer. Also, you do not need to rush marking an answer if you are unsure. Just my 2 cents. – Freddy Dec 16 '12 at 13:27 Yeah I should have. I think we've talked about it enough and we should get on with our lives :-). Thank you for your contribution (just because I changed correct answer doesn't mean I didn't find it helpful!) – Jase Dec 16 '12 at 13:47 Neither of the options is strictly superior over the other. I agree with Freddy about the disadvantages of GARCH. On the other hand, correcting for heteroskedasticity can help your model and forecasts* if it is present and persistent. Whether GARCH is your best choice is debatable. You could look at other sources to determine the volatility or, as an option 3, use EWMA on the data you already have to estimate volatility. 
• I assume you want to do forecasts at some point. -
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 10, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.951435923576355, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/171791/computing-a-laurent-series
# Computing a Laurent series Let $$f(z) = \frac{1}{(2z-1)(z-3)}$$. Compute the Laurent series about the point z = 1 in the annular domain $$\frac{1}{2} < |z-1| < 2$$ My attempt: I broke f(z) up into the partial fraction decomposition: $$-\frac{2}{5(2z-1)} + \frac{1}{5(z-3)} = -\frac{2}{5}*\frac{1}{(1-\frac{(z+\frac{1}{2})}{2})} +\frac{1}{5}*\frac{1}{1-(z-2)} =$$ $$-\frac{2}{5}\sum_{n=0}^\infty(-1)^{n}\frac{(z+1)^{n}}{2^n}-\frac{1}{5}\sum_{n=0}^\infty(z-2)^n$$ And that was my answer. But I was told I was wrong, and I'm not sure where I went wrong in there. So if someone could point out where I went wrong, it would be greatly appreciated! - 1 You have a Laurent series around $z=-1$ plus a Laurent series around $z=2$. – anon Jul 17 '12 at 3:40 A) You were asked for a series with powers of z-1. B) The geometric series formula requires the ratio to be less than 1 in absolute value. – user31373 Jul 17 '12 at 3:41 So for A) I should manipulate my partial fractions so that I have z-1, but what should I do for B)? I'm not sure how I could make the ratio less than 1 in absolute value. – Rand Jul 17 '12 at 3:50 ## 2 Answers Writing $w=z-1$, we have by partial fractions $$\begin{array}{c l}\frac{1}{(2z-1)(z-3)} & =\frac{1}{(2w+1)(w-2)} \\ & =\frac{1}{5}\left(-\frac{2}{2w+1}+\frac{1}{w-2}\right)\\ & =-\frac{2/5}{1+2w}-\frac{1/10}{1-w/2}.\end{array}$$ And $|-2w|,|w/2|<1\iff \frac{1}{2}<|w|<2$. Do you see how this works out? - I think so. So we would have: $$-\frac{2}{5}*\frac{1}{1-(-2(z-1))} - \frac{1}{10}*\frac{1}{1-\frac{z-1}{2}} = -\frac{2}{5}*\sum_{n=0}^\infty [(z-1)^{n}(-2)^{n}] - \frac{1}{10}*\sum_{n=0}^\infty[\frac{(z-1)^{n}}{2^{n}}]$$and then I just combine the two series. – Rand Jul 17 '12 at 4:05 Yes. | Edit: Now no. | Edit: Now yes. – anon Jul 17 '12 at 4:08 $$\frac{1}{(2z-1)(z-3)}=\frac{1}{5}\left(\frac{1}{z-3}-\frac{1}{z-\frac{1}{2}}\right)=-\frac{1}{10}\frac{1}{1-\frac{z-1}{2}}-\frac{1}{5(z-1)}\frac{1}{\left(1+\frac{1}{2(z-1)}\right)}$$ Now only check that $$\left|\frac{z-1}{2}\right|<1\,\,,\,\,|2(z-1)|^{-1}<1$$ and you'll be able to use the developments $$\frac{1}{1-z}=1+z+z^2+...=\sum_{n=0}^\infty z^n\,\,,\,\,\frac{1}{1+z}=1-z+z^2-...=\sum_{n=0}^\infty(-1)^{n}z^n$$ -
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 4, "mathjax_display_tex": 9, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9591939449310303, "perplexity_flag": "head"}
http://mathoverflow.net/revisions/92442/list
## Return to Answer 1 [made Community Wiki] Even the cyclicity of the groups of order 15, or the existence of a normal Sylow 5-subgroup in any group of order 100, is not merely a toy example. The fact that Sylow p-subgroups of a finite group are always conjugate is one way to prove that normal implies characteristic for a Sylow p-subgroup. (So if a group has simple subgroups of index 100 which generate it, and no normal subgroup of order 25, the group itself is simple. Hence, the Higman-Sims group is simple because the Mathieu group $M_{22}$ is simple. This is done in Wilson's "The Finite Simple Groups".) Two consequences of this are that if $P$ is a Sylow p-subgroup of a finite group $G$ and $K$ is a subgroup satisfying $N_{G}(P) \leq K \leq G$, then $[K:N_{G}(P)] \equiv [G:K] \equiv 1 \mod{p}$ and $K$ is self-normalizing in $G$. In particular, the maximal subgroups of $G$ containing $N_{G}(P)$ are constrained by these results. The cyclicity of groups of order 15 is more than just a toy example, since the cyclicity of groups of order 299 = 13*23 (which is provable the same way) is used in Thompson's original proof of the simplicity of the Conway group $Co_{1}$. (This proof also gives an example of the use of the Frattini argument.) If you want to prove the Burnside $p^{a}q^{b}$-Theorem, you need to exploit the existence of Sylow subgroups. This is one of the few commonalities of the character-theoretic and character-free proofs of the theorem. Via character theory, the basic group-theoretic result is that a finite group with a conjugacy class whose size is a power of a prime cannot be simple -- but you can only get a conjugacy class of size equal to a power of a prime in a group of order $p^{a}q^{b}$ by choosing a nontrivial central element of a Sylow subgroup (unless you made a bad choice and it's in the center of the whole group, in which case nonsimplicity of the group is immediate unless the group is cyclic of prime order). Eschewing character theory, Sylow subgroups are indispensable, whether you use the Glauberman $ZJ$-theorem or any other local-analytic tools to do the heavy lifting in the proof. They are also essential even for much lighter lifting which happens in these proofs. When using the transfer to prove a finite group satisfying certain conditions is not perfect, it's good to have a subgroup from which this fact is visible. It is good to have a subgroup $H$ such that one knows $\phi : G \to A$ is nontrivial because its restriction to $H$ is nontrivial. If p is a prime dividing $| \phi(G) |$, then any subgroup whose index is a nonmultiple of p will work. A Sylow p-subgroup of $G$ fits the bill perfectly, and often comes with a fair amount of information about its own structure, to boot. It is possible to build off of the Burnside $p^{a}q^{b}$-Theorem to prove the that the existence of Sylow systems characterizes finite solvable groups. Sylow system normalizers are all conjugate in a finite solvable group, and these facts form the starting point of the theory of finite solvable groups (which is substantial in its own right, as one can read in "Finite Soluble Groups", by Doerk and Hawkes).
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 20, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9359074831008911, "perplexity_flag": "head"}
http://mathoverflow.net/questions/37015?sort=votes
## Why is Beta the maximum entropy distribution over Bernoulli’s parameter? ### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points) Why is Beta(1,1) the maximum entropy distribution over the bias of a coin expressed as a probability given that: • If we express the bias as odds (which is over the support $[0, \infty)$), then Beta-prime(1,1) is the corresponding distribution to Beta(1,1). Isn't the maximum entropy distribution over the positive reals the exponential distribution (which is not Beta-prime(1,1))? • If we express the bias in log odds (which is over the support of the reals), then the logistic distribution (with mean 0 and scale 1) is the corresponding distribution to Beta(1,1). Beta(1,1) makes sense as maximum entropy because it's flat over its support. The other distributions are not flat. If we had chosen a different parametrization, we should clearly arrive at the corresponding distribution (not something else). How are the other two distributions the maximum entropy distributions over their support? There must be some other requirement that I'm missing. What is it? - You might want to cite some sources for the confusion. My guess is the underlying measure is not uniform over the real interval you are working with. – John Jiang Aug 29 2010 at 3:46 The exponential distribution is the one that maximizes entropy SUBJECT TO a constraint on the expected value. – Michael Hardy Aug 29 2010 at 5:00 Right, thanks. So, what's the constraint in this case? Surely, our expected odds is 1, but Beta-prime(1,1) is not the same distribution as Exponential(1). – Neil Aug 29 2010 at 18:44 John, what do you mean cite sources for the confusion? – Neil Aug 29 2010 at 18:46 ## 2 Answers I think there are two separate things going on here. One is the issue of a maximum entropy distribution. The other is of whether or not distributions are invariant under different parameterizations. Regarding the second matter, I think your statement "if we had chosen a different parameterization, we should clearly arrive at the corresponding distribution" is probably not quite right (I say probably because I may be interpreting you wrong). Only particular distributions have this property and sometimes are not probability distributions. See http://en.wikipedia.org/wiki/Jeffreys_prior if this is what you're interested in. ps I'd have preferred to leave this as a comment, but can't yet I guess. - Here's my thinking: If you know nothing about a coin, then we're told that the maximum entropy distribution over the coin's bias is Beta(1,1). So, your answer to the question, "what is the probability that the coin's bias is less than 20%?" is 20%. If someone asks, "what is the probability that the coin's bias is less than odds 1:4?", you should also answer 20%. This is true for every probability, and so it stands to reason that we should be allowed to choose whatever parametrization we want. – Neil Aug 29 2010 at 18:53 ...and we should get whatever pushforward probability measure would result given the mapping from the old sample space to the new one. (My wording might be off.) – Neil Aug 29 2010 at 18:54 I think you're right. I need to read about the Jeffreys prior. I don't understand why his 64-year-old paper "An Invariant Form for the Prior Probability in Estimation Problems" is not freely available. 
– Neil Aug 29 2010 at 19:04 2 This isn't an original source, but gives a clear look at the issue using your example: amstat.org/publications/jse/v12n2/zhu.pdf One basicaly requires Jacobians to cancel to get invariance. FWIW, the idea of reflecting "ignorance" with probability measures goes back all the way to Laplace and his principle of indifference. Max-ent and Jeffreys priors are two of the common approach. As you have discovered, they suggest different distributions. – R Hahn Aug 30 2010 at 13:05 The max ent approach in some sense amounts to picking a sufficient statistic and using it to derive the distribution. In the case of the exponential, the sample space is the positive real line and the sufficient statistic is the sample mean. More generally, you can think of specifying sufficient statistics and an invariance property (formally a transition kernel) and deriving your distributions this way. This falls under the very pretty theory of "extremal families"; see eg Schervish's Theory of Statistics. – R Hahn Aug 30 2010 at 13:08 ### You can accept an answer to one of your own questions by clicking the check mark next to it. This awards 15 reputation points to the person who answered and 2 reputation points to you. Here's the Jeffreys paper: http://ifile.it/niok3wy/jeffreys-an_invariant_form-1946.pdf -
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9211147427558899, "perplexity_flag": "middle"}
http://www.physicsforums.com/showthread.php?s=3b6c1fb78f765c518067b3e8c96205eb&p=4277883
Physics Forums ## Normalizing a wave function - how the integration is done? I have been searching for an anwser everywhere, but i can't seem to understand something. In this topic (you don't need to read it) i managed to find out that "we can calculate normalisation factor ##\Psi_0## of a wavefunction ##\Psi## if we integrate probability ##|\Psi|^2## over some volume and equate it to 1". Hence: [itex] \int\limits_{V} |\Psi|^2 \, \textrm{d}V= 1 [/itex] Now how exactly do we integrate this? Please be specific, because in the post i linked to i got an anwser that the result of integration is [itex] \int\limits_{V} |\Psi|^2 \, \textrm{d}V = |\psi_0|^2 V [/itex] and i don't know how is this possible. Maybee my interpretation of this is wrong and this is why below i am supplying you with my interpretation. My interpretation: For the sake of clarity i will just choose some wave function for example ##\Psi = \Psi_0 \sin(\omega t - kx)##. I chose this as it is similar to an already known wave function of a sinusoidal wave ##A = A_0 \sin(\omega t - kx)## which i have been using allover wave physics. I don't know if i am allowed to choose the ##\Psi## like that because for now i don't know enough to know what i am alowed/not allowed to do in QM. If i understand this ##\Psi_0## in a vave function ##\Psi = \Psi_0 \sin(\omega t - kx)## is the normalisation factor i am seeking? (Please confirm this). So now i take an integral of the wavefunction and equate it to 1: [itex] \begin{split} \int \limits^{}_{V} \left|\Psi \right|^2 \, \textrm{d} V &= 1\\ \int \limits^{}_{V} \big|\Psi_0 \sin (\omega t - kx) \big|^2 \, \textrm{d} V &= 1\\ &\dots \end{split} [/itex] I get lost at the spot where i wrote down "##\dots##". I really don't know how to get ##|\psi_0|^2 V## as a result of integration. PhysOrg.com physics news on PhysOrg.com >> Promising doped zirconia>> New X-ray method shows how frog embryos could help thwart disease>> Bringing life into focus Blog Entries: 1 Recognitions: Science Advisor Of course the answer you get depends on what wavefunction you start with. If, as in the thread you quoted, you choose ψ = ψ0 expi(kx - ωt), this is a traveling plane wave. Its probability density is |ψ|2 = ψ*ψ = ψ02 = const, so the integral gives you ψ02 V. If, on the other hand you choose ψ = ψ0 sin(kx - ωt), this is a standing wave. Its probability density is ψ02 sin2(kx - ωt) which is not constant, and you'll get a different integral. Quote by Bill_K Of course the answer you get depends on what wavefunction you start with. Yes i understand this. Quote by Bill_K |ψ|2 = ψ02 THIS is what i still am not certain of. If i try to calculate ##|\Psi| ^2## using ##\Psi = \Psi_0 e^{i(\omega t - kx)}## i get this: [itex] |\Psi|^2 = \left| \Psi_0 e^{i(\omega t - kx)} \right| ^2 = \overline{\Psi} \Psi = \underbrace{\Psi_0 e^{-i(\omega t - kx)}}_{conjugate} \Psi_0 e^{i(\omega t - kx)} = {\Psi_0}^2 \frac{\Psi_0 e^{i(\omega t - kx)}}{\Psi_0 e^{i(\omega t - kx)}} = \Psi_0^2 [/itex] Is my calculation legit? Please confirm. And please tell me how do i know that ##\Psi_0^2## is a constant and i should therefore integrate it as such? Blog Entries: 1 Recognitions: Science Advisor ## Normalizing a wave function - how the integration is done? Yes, that's correct. And a plane wave will have a constant amplitude, so ψ0 will be constant. But how can i calculate integral for wave function ##\Psi = \Psi_0 \sin(\omega t - kx)##. 
Could i simplify this by stating that the wave is travelling in ##x## direction and only integrate over ##x## or should i use a triple ##\iiint## and integrate over ##x##, ##y## and ##z##? I need some advice on how to calculate this integral: $\int\limits_V \left| \Psi_0 \sin(\omega t - kx) \right|^2 \, \textrm{d} V$ Mentor A plane wave that extends to infinity in all directions cannot be normalized in the usual sense, i.e. $$\int_{all space} {\Psi^* \Psi dx dy dz} = 1$$ (in three dimensions) Such waves are not physically realistic. The amplitude of a real-world wave function has to drop off to zero as we go "far enough" away from the center of the system. This leads to the concept of wave packets. Nevertheless, we often talk about plane waves as convenient idealizations or approximations over small regions of space. Quote by jtbell Such waves are not physically realistic. The amplitude of a real-world wave function has to drop off to zero as we go "far enough" away from the center of the system. How could i then modify the wave function ##\Psi = \Psi_0 \sin(\omega t - kx)## to be acceptable for QM? I think it would be best to go back and look at the fundamentals of QM, once you have, the answers to your questions will become obvious. The answer to your question is complicated and i don't know where to start explaining. Thread Tools | | | | |---------------------------------------------------------------------------------|---------------------------|---------| | Similar Threads for: Normalizing a wave function - how the integration is done? | | | | Thread | Forum | Replies | | | Advanced Physics Homework | 1 | | | Advanced Physics Homework | 1 | | | Advanced Physics Homework | 7 | | | Quantum Physics | 1 | | | Advanced Physics Homework | 5 |
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8458738923072815, "perplexity_flag": "middle"}
http://psychology.wikia.com/wiki/Autoregressive_moving_average_model
# Autoregressive moving average model Talk0 31,726pages on this wiki Assessment | Biopsychology | Comparative | Cognitive | Developmental | Language | Individual differences | Personality | Philosophy | Social | Methods | Statistics | Clinical | Educational | Industrial | Professional items | World psychology | Statistics: Scientific method · Research methods · Experimental design · Undergraduate statistics courses · Statistical tests · Game theory · Decision theory In statistics, autoregressive moving average (ARMA) models, sometimes called Box-Jenkins models after George Box and G. M. Jenkins, are typically applied to time series data. Given a time series of data Xt, the ARMA model is a tool for understanding and, perhaps, predicting future values in this series. The model consists of two parts, an autoregressive (AR) part and a moving average (MA) part. The model is usually then referred to as the ARMA(p,q) model where p is the order of the autoregressive part and q is the order of the moving average part (as defined below). ## Autoregressive model The notation AR(p) refers to the autoregressive model of order p. The AR(p) model is written $X_t = c + \sum_{i=1}^p \varphi_i X_{t-i}+ \varepsilon_t .\,$ where $\varphi_1, \ldots, \varphi_p$ are the parameters of the model, $c$ is a constant and $\varepsilon_t$ is an error term (see below). The constant term is omitted by many authors for simplicity. An autoregressive model is essentially an infinite impulse response filter with some additional interpretation placed on it. Some constraints are necessary on the values of the parameters of this model in order that the model remains stationary. For example, processes in the AR(1) model with |φ1| > 1 are not stationary. ### Example: An AR(1)-process An AR(1)-process is given by $X_t = c + \varphi X_{t-1}+\varepsilon_t,\,$ where $\varepsilon_t$ is a white noise process with zero mean and variance $\sigma^2$. (Note: The subscript on $\varphi_1$ has been dropped.) The process is covariance-stationary if $|\varphi|<1$. If $\varphi=1$ then $X_t$ exhibits a unit root and can also be considered as a random walk, which is not covariance-stationary. Otherwise, the calculation of the expectation of $X_t$ is straightforward. Assuming covariance-stationarity we get $\mbox{E}(X_t)=\mbox{E}(c)+\varphi\mbox{E}(X_{t-1})+\mbox{E}(\varepsilon_t)\Rightarrow \mu=c+\varphi\mu+0.$ thus: $\mu=\frac{c}{1-\varphi},$ where $\mu$ is the mean. For c = 0, then the mean = 0 and the variance is found to be: $\textrm{var}(X_t)=E(X_t^2)-\mu^2=\frac{\sigma^2}{1-\varphi^2}.$ The autocovariance is given by $B_n=E(X_{t+n}X_t)-\mu^2=\frac{\sigma^2}{1-\varphi^2}\,\,\varphi^{|n|}.$ It can be seen that the autocovariance function decays with a decay time of $\tau=-1/\ln(\varphi)$ [to see this, write $B_n=K\phi^{|n|}$ where $K$ is independent of $n$. Then note that $\phi^{|n|}=e^{|n|\ln\phi}$ and match this to the exponential decay law $e^{-n/\tau}$] The spectral density function is the inverse Fourier transform of the autocovariance function. In discrete terms this will be the discrete-time inverse Fourier transform: $\Phi(\omega)= \frac{1}{\sqrt{2\pi}}\,\sum_{n=-\infty}^\infty B_n e^{-i\omega n} =\frac{1}{\sqrt{2\pi}}\,\left(\frac{\sigma^2}{1+\varphi^2-2\varphi\cos(\omega)}\right).$ This expression contains aliasing due to the discrete nature of the $X_j$, which is manifested as the cosine term in the denominator. 
If we assume that the sampling time ($\Delta t=1$) is much smaller than the decay time ($\tau$), then we can use a continuum approximation to $B_n$: $B(t)\approx \frac{\sigma^2}{1-\varphi^2}\,\,\varphi^{|t|}$ which yields a Lorentzian profile for the spectral density: $\Phi(\omega)= =\frac{1}{\sqrt{2\pi}}\,\frac{\sigma^2}{1-\varphi^2}\,\frac{\gamma}{\pi(\gamma^2+\omega^2)}$ where $\gamma=1/\tau$ is the angular frequency associated with the decay time $\tau$. An alternative expression for $X_t$ can be derived by first substituting $c+\varphi X_{t-2}+\varepsilon_{t-1}$ for $X_{t-1}$ in the defining equation. Continuing this process N times yields $X_t=c\sum_{k=0}^{N-1}\varphi^k+\varphi^NX_{\varphi-N}+\sum_{k=0}^{N-1}\varphi^k\varepsilon_{t-k}.$ For N approaching infinity, $\varphi^N$ will approach zero and: $X_t=\frac{c}{1-\varphi}+\sum_{k=0}^\infty\varphi^k\varepsilon_{t-k}$ It is seen that $X_t$ is white noise convolved with the $\varphi^k$ kernel plus the constant mean. By the central limit theorem, the $X_t$ will be normally distributed as will any sample of $X_t$ which is much longer than the decay time of the autocorrelation function. ### Calculation of the AR parameters The AR(p) model is given by the equation $X_t = \sum_{i=1}^p \varphi_i X_{t-i}+ \varepsilon_t.\,$ It is based on parameters $\varphi_i$ where i = 1, ..., p. Those parameters may be calculated using Yule-Walker equations: $\gamma_m = \sum_{k=1}^p \varphi_k \gamma_{m-k} + \sigma_\varepsilon^2\delta_m$ where m = 0, ... , p, yielding p + 1 equations. $\gamma_m$ is the autocorrelation function of X, $\sigma_\varepsilon$ is the standard deviation of the input noise process, and δm is the Kronecker delta function. Because the last part of the equation is non-zero only if m = 0, the equation is usually solved by representing it as a matrix for m > 0, thus getting equation $\begin{bmatrix} \gamma_1 \\ \gamma_2 \\ \gamma_3 \\ \vdots \\ \end{bmatrix} = \begin{bmatrix} \gamma_0 & \gamma_{-1} & \gamma_{-2} & \dots \\ \gamma_1 & \gamma_0 & \gamma_{-1} & \dots \\ \gamma_2 & \gamma_{1} & \gamma_{0} & \dots \\ \dots & \dots & \dots & \dots \\ \end{bmatrix} \begin{bmatrix} \varphi_{1} \\ \varphi_{2} \\ \varphi_{3} \\ \vdots \\ \end{bmatrix}$ solving all $\varphi$. For m = 0 have $\gamma_0 = \sum_{k=1}^p \varphi_k \gamma_{-k} + \sigma_\varepsilon^2$ which allows us to solve $\sigma_\varepsilon^2$. #### Derivation The equation defining the AR process is $X_t = \sum_{i=1}^p \varphi_i\,X_{t-i}+ \varepsilon_t.\,$ Multiplying both sides by Xt-m and taking expected value yields $E[X_t X_{t-m}] = E\left[\sum_{i=1}^p \varphi_i\,X_{t-i} X_{t-m}\right]+ E[\varepsilon_t X_{t-m}].$ Now, $E[X_t X_{t-m}] =\gamma_m$ by definition of the autocorrelation function. The values of the noise function are independent of each other, and Xt − m is independent of εt where m is greater than zero. For m ≠ 0, $E[\varepsilon_t X_{t-m}] = 0$. 
For m = 0, $E[\varepsilon_t X_{t}] = E\left[\varepsilon_t (\sum_{i=1}^p \varphi_i\,X_{t-i}+ \varepsilon_t)\right] = \sum_{i=1}^p \varphi_i\, E[\varepsilon_t\,X_{t-i}] + E[\varepsilon_t^2] = 0 + \sigma_\varepsilon^2,$ Now we have $\gamma_m = E\left[\sum_{i=1}^p \varphi_i\,X_{t-i} X_{t-m}\right] + \sigma_\varepsilon^2 \delta_m.$ Furthermore, $E\left[\sum_{i=1}^p \varphi_i\,X_{t-i} X_{t-m}\right] = \sum_{i=1}^p \varphi_i\,E[X_{t} X_{t-m+i}] = \sum_{i=1}^p \varphi_i\,\gamma_{m-i},$ which yields the Yule-Walker equations: $\gamma_m = \sum_{i=1}^p \varphi_i \gamma_{m-i} + \sigma_\varepsilon^2 \delta_m.$ ## Moving average model The notation MA(q) refers to the moving average model of order q: $X_t = \varepsilon_t + \sum_{i=1}^q \theta_i \varepsilon_{t-i}\,$ where the θ1, ..., θq are the parameters of the model and the εt, εt-1,... are again, the error terms. The moving average model is essentially a finite impulse response filter with some additional interpretation placed on it. ## Autoregressive moving average model The notation ARMA(p, q) refers to the model with p autoregressive terms and q moving average terms. This model contains the AR(p) and MA(q) models, $X_t = \varepsilon_t + \sum_{i=1}^p \varphi_i X_{t-i} + \sum_{i=1}^q \theta_i \varepsilon_{t-i}.\,$ ## Note about the error terms The error terms εt are generally assumed to be independent identically-distributed random variables sampled from a normal distribution with zero mean: εt ~ N(0,σ2) where σ2 is the variance. These assumptions may be weakened but doing so will change the properties of the model. In particular, a change to the i.i.d. assumption would make a rather fundamental difference. ## Specification in terms of lag operator In some texts the models will be specified in terms of the lag operator L. In these terms then the AR(p) model is given by $\varepsilon_t = \left(1 - \sum_{i=1}^p \varphi_i L^i\right) X_t = \varphi X_t\,$ where φ represents polynomial $\varphi = 1 - \sum_{i=1}^p \varphi_i L^i.\,$ The MA(q) model is given by $X_t = \left(1 + \sum_{i=1}^q \theta_i L^i\right) \varepsilon_t = \theta \varepsilon_t\,$ where θ represents the polynomial $\theta= 1 + \sum_{i=1}^q \theta_i L^i.\,$ Finally, the combined ARMA(p, q) model is given by $\left(1 - \sum_{i=1}^p \varphi_i L^i\right) X_t = \left(1 + \sum_{i=1}^q \theta_i L^i\right) \varepsilon_t\,$ or more concisely, $\varphi X_t = \theta \varepsilon_t.\,$ ## Fitting models ARMA models in general can, after choosing p and q, be fitted by least squares regression to find the values of the parameters which minimize the error term. It is generally considered good practice to find the smallest values of p and q which provide an acceptable fit to the data. For a pure AR model then the Yule-Walker equations may be used to provide a fit. ## Generalizations The dependence of Xt on past values and the error terms εt is assumed to be linear unless specified otherwise. If the dependence is nonlinear, the model is specifically called a nonlinear moving average (NMA), nonlinear autoregressive (NAR), or nonlinear autoregressive moving average (NARMA) model. Autoregressive moving average models can be generalized in other ways. See also autoregressive conditional heteroskedasticity (ARCH) models and autoregressive integrated moving average (ARIMA) models. If multiple time series are to be fitted then a vectored ARIMA (or VARIMA) model may be fitted. If the time-series in question exhibits long memory then fractional ARIMA (FARIMA, sometimes called ARFIMA) modelling is appropriate. 
If the data is thought to contain seasonal effects, it may be modeled by a SARIMA (seasonal ARIMA) model. Another generalization is the multiscale autoregressive (MAR) model. A MAR model is indexed by the nodes of a tree, whereas a standard (discrete time) autoregressive model is indexed by integers. See multiscale autoregressive model for a list of references. ## References • George Box and F.M. Jenkins. Time Series Analysis: Forecasting and Control, second edition. Oakland, CA: Holden-Day, 1976. • Mills, Terence C. Time Series Techniques for Economists. Cambridge University Press, 1990. • Percival, Donald B. and Andrew T. Walden. Spectral Analysis for Physical Applications. Cambridge University Press, 1993. ## Read more • Assessment | Biopsychology | Comparative | Cognitive | Developmental | Language | Individual... Autoregressive moving average • Assessment | Biopsychology | Comparative | Cognitive | Developmental | Language | Individual... Independent component analysis • Heteroskedasticity # Photos Add a Photo 6,465photos on this wiki • by Dr9855 2013-05-14T02:10:22Z • by PARANOiA 12 2013-05-11T19:25:04Z Posted in more... • by Addyrocker 2013-04-04T18:59:14Z • by Psymba 2013-03-24T20:27:47Z Posted in Mike Abrams • by Omaspiter 2013-03-14T09:55:55Z • by Omaspiter 2013-03-14T09:28:22Z • by Bigkellyna 2013-03-14T04:00:48Z Posted in User talk:Bigkellyna • by Preggo 2013-02-15T05:10:37Z • by Preggo 2013-02-15T05:10:17Z • by Preggo 2013-02-15T05:09:48Z • by Preggo 2013-02-15T05:09:35Z • See all photos See all photos >
http://math.stackexchange.com/questions/50140/direct-sum-of-an-algebra-and-its-opposite
# Direct sum of an algebra and its opposite I hate to do this, but I cannot seem to remember/find a particular result that I thought was true. Forgive me if I have some points wrong, since this is the point of my asking. I thought I remembered that the direct sum of an algebra and its opposite algebra was the universal enveloping algebra. For a Lie algebra, the opposite Lie algebra is just that with negative bracket. But I don't see why the sum of these should be the universal enveloping algebra of the Lie algebra. Edit: Qiaochu makes a great point below on the dimension in his comment. I also don't think this should be true of some associative algebras, i.e. consider an associative algebra and its opposite(assuming the algebra is noncommutative, the opposite algebra is that with reversed multiplication, i.e. $a*b:=ba$), then considering the direct sum of these, it shouldn't be isomorphic to the enveloping algebra of the underlying Lie algebra of $A$, which is isomorphic to $A$. I thought that I read the result in Dixmier, but I can't seem to find it. :/ - 2 AFAIK this is a different use of the term "universal enveloping algebra," unrelated to its use in Lie algebras, and it's the tensor product, not the direct sum. It appears for example in discussions of Hochschild (co)homology. In general the universal enveloping algebra of a Lie algebra is infinite-dimensional, so it can't be described by such a construction. – Qiaochu Yuan Jul 7 '11 at 17:47 Yes, that is a good point about dimensionality. Can you point me to a statement of this so I can at least remember what I was thinking of? – BBischof Jul 7 '11 at 17:51 ## 2 Answers One can find in Weibel (for example) the following definition: the enveloping algebra of an associative algebra $A$ is the algebra $A \otimes A^{op}$. The significance of this construction is that an $A\text{-}A$ bimodule is the same thing as a left $A \otimes A^{op}$-module, and one can use this to define Hochschild (co)homology in terms of Tor and Ext. This term is, as far as I know, unrelated to the universal enveloping algebra of a Lie algebra. It is also false that the universal enveloping algebra of the underlying Lie algebra of an associative algebra $A$ is isomorphic to $A$, for the same reason as I pointed out in the comments: for $A$ finite-dimensional you can get infinite-dimensional universal enveloping algebras. In other words, not every representation of $A$ as a Lie algebra extends to a representation of $A$ as an algebra. - ok great. Thanks. – BBischof Jul 7 '11 at 18:16 I recently finished teaching a summer half course in which I debated whether to mention the enveloping algebra $A^e$ of an algebra $A$ explicitly (I didn't, as it turned out). One of the references I was using -- Pierce's great book on associative algebras -- really likes to phrase things in terms of $A^e$, which was giving me some grief. I mentioned this recently to a colleague, and it came out that I have no idea why $A^e$ is called the enveloping algebra of $A$...other than the possible guess that it "envelops" $A$ by coming at it from both left and right. Does anybody know? – Pete L. Clark Jul 7 '11 at 21:36 1 In particular I wonder whether $A^e$ has anything to do with universal enveloping algebras...and presume that it does not. – Pete L. Clark Jul 7 '11 at 21:37 @Pete L. Clark, yes, this is definitely the thrust of my question, why the heck is it the universal enveloping algebra of anything? – BBischof Jul 8 '11 at 13:31 I am pretty sure the two terms are unrelated, as I said. 
– Qiaochu Yuan Jul 8 '11 at 14:02 Here is an amusing situation where the two are related. This is not an "answer" to my question, just an auxiliary fact that I found interesting(hence the CW). Consider $\mathfrak{g}$ a Lie algebra over a field $\mathbb{K}$ and $M$ a $\mathfrak{g}$-module. We define Cartan-Eilenberg Cohomology as \begin{equation} H^n_{CE}(\mathfrak{g},M):= Ext^n_{U(\mathfrak{g})}(\mathbb{K},M) \end{equation} for $U(\mathfrak{g})$ the universal enveloping algebra of $\mathfrak{g}$, and $Ext^n_{U(\mathfrak{g})}(\mathbb{K},-)$ the $n$'th derived functor of $Hom_{U(\mathfrak{g})}(\mathbb{K},-)$. Similarly, as Qiaochu mentions, for $A$ an associative unital algebra with $B$ an $A$-bimodule, we define Hochschild Cohomology as, \begin{equation} HH^n(A,B):= Ext^n_{A^e}(A,B) \end{equation} for $A^e$ the enveloping algebra of $A$. So the amusing fact is first, that the two "enveloping algebras" appear in the downstairs of the Extension functor for defining respective Cohomology theories. More amusingly, for $M$ a $U(\mathfrak{g})$-bimodule, we have \begin{equation} HH^n(U(\mathfrak{g}),M)\simeq H^n_{CE}(\mathfrak{g},M_{ad}) \end{equation} i.e. \begin{equation} Ext^n_{U(\mathfrak{g})}(\mathbb{K},M_{ad})\simeq Ext^n_{U(\mathfrak{g})^e}(U(\mathfrak{g}),M). \end{equation} I claim this is amusing. Tee hee. - This is true more generally for Hopf algebras, and you can even keep the Mad joke as there is a notion of adjoint action for a Hopf algebra. It's one way to prove that the ordinary cohomology ring $\operatorname{Ext}(k,k)$ is graded commutative for a Hopf algebra: you use the machinery above to inject the ordinary cohomology into the Hochschild cohomology which is always graded commutative. – mt_ Jul 27 '11 at 15:18
http://math.stackexchange.com/questions/322057/are-there-any-heron-like-formulas-for-convex-polygons
# Are there any Heron-like formulas for convex polygons?

Are there any Heron-like formulas for convex polygons? By Heron-like I mean formulas without angles as arguments, which take as arguments only the lengths of the sides of the polygon - that is, we know no lengths of diagonals. Do such formulas exist? I don't think so, because we could make the area of a quadrilateral with all sides equal go to zero, but how to prove it for a convex k-polygon, or maybe in other cases it isn't true?

-

## 3 Answers

There are such formulas for cyclic polygons (i.e., those inscribed in a circle). I draw your attention to the work of Robbins in this respect.

-

No. Consider a quadrilateral with all sides equal. You can vary the angles and change the area without changing the side lengths. Consider a rhombus and a square with equal side lengths, for example. Similar examples work in the case of a general regular $n$-gon.

-

I've written that. – Qbik Mar 6 at 2:21

1 @Qbik What exactly do you want then? No such formula exists. – Potato Mar 6 at 2:49

Your intuition that such a formula cannot exist is correct. For general quadrilaterals or other polygons you may not be able to get the area to zero, but all you need is that it can vary depending on one angle.

-
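To make the rhombus counterexample quantitative (an illustration added here, not part of the original answers): a rhombus with side length $s$ and interior angle $\theta$ has

$$\text{Area} = s^2 \sin\theta, \qquad 0 < \theta < \pi,$$

so with all four side lengths fixed the area takes every value in $(0, s^2]$, reaching $s^2$ only for the square. No function of the side lengths alone can therefore single out the area.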
http://xianblog.wordpress.com/tag/harmonic-mean-estimator/
# Xi'an's Og an attempt at bloggin, from scratch… ## summary statistics for ABC model choice Posted in Statistics with tags ABC, Bayesian model choice, England, harmonic mean estimator, HPD region, Kenilworth, New Zealand, Read paper, semi-automatic ABC, Series B on March 11, 2013 by xi'an A few days ago, Dennis Prangle, Paul Fernhead, and their co-authors from New Zealand have posted on arXiv their (long-awaited) study of the selection of summary statistics for ABC model choice. And I read it during my trip to England, in trains and planes, if not when strolling in the beautiful English countryside as above. As posted several times on this ‘Og, the crux of the analysis is that the Bayes factor is a good type of summary when comparing two models, this result extending to more model by considering instead the vector of evidences. As in the initial Read Paper by Fearnhead and Prangle, there is no true optimality in using the Bayes factor or vector of evidences, strictly speaking, besides the fact that the vector of evidences is minimal sufficient for the marginal models (integrating out the parameters). (This was a point made in my discussion.) The implementation of the principle is similar to this Read Paper setting as well: run a pilot ABC simulation, estimate the vector of evidences, and re-run the main ABC simulation using this estimate as the summary statistic. The paper contains a simulation study using some of our examples (in Marin et al., 2012), as well as an application to genetic bacterial data. Read more » 1 Comment » ## estimating a constant Posted in Books, Statistics with tags All of Statistics, Bayesian Analysis, Bernoulli factory, Chris Sims, cross validated, harmonic mean estimator, Larry Wasserman, numerical analysis, StackExchange, statistical inference on October 3, 2012 by xi'an Paulo (a.k.a., Zen) posted a comment in StackExchange on Larry Wasserman‘s paradox about Bayesians and likelihoodists (or likelihood-wallahs, to quote Basu!) being unable to solve the problem of estimating the normalising constant c of the sample density, f, known up to a constant $f(x) = c g(x)$ (Example 11.10, page 188, of All of Statistics) My own comment is that, with all due respect to Larry!, I do not see much appeal in this example, esp. as a potential criticism of Bayesians and likelihood-wallahs…. The constant c is known, being equal to $1/\int_\mathcal{X} g(x)\text{d}x$ If c is the only “unknown” in the picture, given a sample x1,…,xn, then there is no statistical issue whatsoever about the “problem” and I do not agree with the postulate that there exist estimators of c. Nor priors on c (other than the Dirac mass on the above value). This is not in the least a statistical problem but rather a numerical issue.That the sample x1,…,xn can be (re)used through a (frequentist) density estimate to provide a numerical approximation of c $\hat c = \hat f(x_0) \big/ g(x_0)$ is a mere curiosity. Not a criticism of alternative statistical approaches: e.g., I could also use a Bayesian density estimate… Furthermore, the estimate provided by the sample x1,…,xn is not of particular interest since its precision is imposed by the sample size n (and converging at non-parametric rates, which is not a particularly relevant issue!), while I could use importance sampling (or even numerical integration) if I was truly interested in c. I however find the discussion interesting for many reasons 1. it somehow relates to the infamous harmonic mean estimator issue, often discussed on the’Og!; 2. 
it brings more light on the paradoxical differences between statistics and Monte Carlo methods, in that statistics is usually constrained by the sample while Monte Carlo methods have more freedom in generating samples (up to some budget limits). It does not make sense to speak of estimators in Monte Carlo methods because there is no parameter in the picture, only “unknown” constants. Both fields rely on samples and probability theory, and share many features, but there is nothing like a “best unbiased estimator” in Monte Carlo integration, see the case of the “optimal importance function” leading to a zero variance; 3. in connection with the previous point, the fascinating Bernoulli factory problem is not a statistical problem because it requires an infinite sequence of Bernoullis to operate; 4. the discussion induced Chris Sims to contribute to StackExchange! 15 Comments » ## Harmonic means, again Posted in Statistics, University life with tags arXiv, Bayes factors, bias, convergence, evidence, exoplanet, harmonic mean estimator, importance sampling, marginal likelihood on January 3, 2012 by xi'an Over the non-vacation and the vacation breaks of the past weeks, I skipped a lot of arXiv postings. This morning, I took a look at “Probabilities of exoplanet signals from posterior samplings” by Mikko Tuomi and Hugh R. A. Jones. This is a paper to appear in Astronomy and Astrophysics, but the main point [to me] is to study a novel approximation to marginal likelihood. The authors propose what looks at first as defensive sampling: given a likelihood f(x|θ) and a corresponding Markov chain (θi), the approximation is based on the following importance sampling representation $\hat m(x) = \sum_{i=h+1}^N \dfrac{f(x|\theta_i)}{(1-\lambda) f(x|\theta_i) + \lambda f(x|\theta_{i-h})}\Big/$ $\sum_{i=h+1}^N \dfrac{1}{(1-\lambda) f(x|\theta_i) + \lambda f(x|\theta_{i-h})}$ This is called a truncated posterior mixture approximation and, under closer scrutiny, it is not defensive sampling. Indeed the second part in the denominators does not depend on the parameter θi, therefore, as far as importance sampling is concerned, this is a constant (if random) term! The authors impose a bounded parameter space for this reason, however I do not see why such an approximation is converging. Except when λ=0, of course, which brings us back to the original harmonic mean estimator. (Properly rejected by the authors for having a very slow convergence. Or, more accurately, generally no stronger convergence than the law of large numbers.)  Furthermore, the generic importance sampling argument does not work here since, if $g(\theta) \propto (1-\lambda) \pi(\theta|x) + \lambda \pi(\theta_{i-h}|x)$ is the importance function, the ratio $\dfrac{\pi(\theta_i)f(x|\theta_i)}{(1-\lambda) \pi(\theta|x) + \lambda \pi(\theta_{i-h}|x)}$ does not simplify… I do not understand either why the authors compare Bayes factors approximations based on this technique, on the harmonic mean version or on Chib and Jeliazkov’s (2001) solution with both DIC and AIC, since the later are not approximations to the Bayes factors. I am therefore quite surprised at the paper being accepted for publication, given that the numerical evaluation shows the impact of the coefficient λ does not vanish with the number of simulations. (Which is logical given the bias induced by the additional term.) 
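[An illustrative aside added here: the unreliability of the plain harmonic mean estimator is easy to reproduce on a toy problem where the marginal likelihood is available in closed form. Take $y|\theta \sim N(\theta,1)$ and $\theta \sim N(0,1)$, so that $m(y)$ is the $N(0,2)$ density and the posterior is $N(y/2, 1/2)$. The R sketch below is my own construction, not taken from the paper under discussion.]

```r
# Toy problem with a closed-form marginal likelihood
set.seed(7)
y      <- 1.3
m_true <- dnorm(y, mean = 0, sd = sqrt(2))      # exact marginal likelihood

# exact posterior draws: theta | y ~ N(y/2, 1/2)
hm_once <- function(n = 1e5) {
  theta <- rnorm(n, mean = y / 2, sd = sqrt(1 / 2))
  1 / mean(1 / dnorm(y, mean = theta, sd = 1))  # harmonic mean of the likelihoods
}

c(true = m_true, harmonic = hm_once())
summary(replicate(50, hm_once()))               # spread across repeated runs
```

The harmonic mean identity $E_{\pi(\theta|y)}[1/f(y|\theta)] = 1/m(y)$ does hold, but the second moment of $1/f(y|\theta)$ under the posterior is infinite in this example, so repeated runs scatter widely around the true value, in line with the slow (or absent) convergence mentioned above.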
2 Comments » ## Bayesian inference and the parametric bootstrap Posted in R, Statistics, University life with tags Bayesian, bootstrap, Brad Efron, confidence region, credible intervals, cross validated, harmonic mean estimator, parametric bootstrap, R on December 16, 2011 by xi'an This paper by Brad Efron came to my knowledge when I was looking for references on Bayesian bootstrap to answer a Cross Validated question. After reading it more thoroughly, “Bayesian inference and the parametric bootstrap” puzzles me, which most certainly means I have missed the main point. Indeed, the paper relies on parametric bootstrap—a frequentist approximation technique mostly based on simulation from a plug-in distribution and a robust inferential method estimating distributions from empirical cdfs—to assess (frequentist) coverage properties of Bayesian posteriors. The manuscript mixes a parametric bootstrap simulation output for posterior inference—even though bootstrap produces simulations of estimators while the posterior distribution operates on the parameter space, those  estimator simulations can nonetheless be recycled as parameter simulation by a genuine importance sampling argument—and the coverage properties of Jeffreys posteriors vs. the BCa [which stands for bias-corrected and accelerated, see Efron 1987] confidence density—which truly take place in different spaces. Efron however connects both spaces by taking advantage of the importance sampling connection and defines a corrected BCa prior to make the confidence intervals match. While in my opinion this does not define a prior in the Bayesian sense, since the correction seems to depend on the data. And I see no strong incentive to match the frequentist coverage, because this would furthermore define a new prior for each component of the parameter. This study about the frequentist properties of Bayesian credible intervals reminded me of the recent discussion paper by Don Fraser on the topic, which follows the same argument that Bayesian credible regions are not necessarily good frequentist confidence intervals. The conclusion of the paper is made of several points, some of which may not be strongly supported by the previous analysis: 1. “The parametric bootstrap distribution is a favorable starting point for importance sampling computation of Bayes posterior distributions.” [I am not so certain about this point given that the bootstrap is based on a pluggin estimate, hence fails to account for the variability of this estimate, and may thus induce infinite variance behaviour, as in the harmonic mean estimator of Newton and Raftery (1994). Because the tails of the importance density are those of the likelihood, the heavier tails of the posterior induced by the convolution with the prior distribution are likely to lead to this fatal misbehaviour of the importance sampling estimator.] 2. “This computation is implemented by reweighting the bootstrap replications rather than by drawing observations directly from the posterior distribution as with MCMC.” [Computing the importance ratio requires the availability both of the likelihood function and of the likelihood estimator, which means a setting where Bayesian computations are not particularly hindered and do not necessarily call for advanced MCMC schemes.] 3. 
“The necessary weights are easily computed in exponential families for any prior, but are particularly simple starting from Jeffreys invariant prior, in which case they depend only on the deviance difference.” [Always from a computational perspective, the ease of computing the importance weights is mirrored by the ease in handling the posterior distributions.] 4. “The deviance difference depends asymptotically on the skewness of the family, having a cubic normal form.” [No relevant comment.] 5. “In our examples, Jeffreys prior yielded posterior distributions not much different than the unweighted bootstrap distribution. This may be unsatisfactory for single parameters of interest in multi-parameter families.” [The frequentist confidence properties of Jeffreys priors have already been examined in the past and be found to be lacking in multidimensional settings. This is an assessment finding Jeffreys priors lacking from a frequentist perspective. However, the use of Jeffreys prior is not justified on this particular ground.] 6. “Better uninformative priors, such as the Welch and Peers family or reference priors, are closely related to the frequentist BCa reweighting formula.” [The paper only finds proximities in two examples, but it does not assess this relation in a wider generality. Again, this is not particularly relevant from a Bayesian viewpoint.] 7. “Because of the i.i.d. nature of bootstrap resampling, simple formulas exist for the accuracy of posterior computations as a function of the number B of bootstrap replications. Even with excessive choices of B, computation time was measured in seconds for our examples.” [This is not very surprising. It however assesses Bayesian procedures from a frequentist viewpoint, so this may be lost on both Bayesian and frequentist users...] 8. “An efficient second-level bootstrap algorithm (“bootstrap-after-bootstrap”) provides estimates for the frequentist accuracy of Bayesian inferences.” [This is completely correct and why bootstrap is such an appealing technique for frequentist inference. I spent the past two weeks teaching non-parametric bootstrap to my R class and the students are now fluent with the concept, even though they are unsure about the meaning of estimation and testing!] 9. “This can be important in assessing inferences based on formulaic priors, such as those of Jeffreys, rather than on genuine prior experience.” [Again, this is neither very surprising nor particularly appealing to Bayesian users.] In conclusion, I found the paper quite thought-provoking and stimulating, definitely opening new vistas in a very elegant way. I however remain unconvinced by the simulation aspects from a purely Monte Carlo perspective. 1 Comment » ## R exam Posted in Kids, pictures, Statistics, University life with tags exam, harmonic mean estimator, Introduction to Monte Carlo Methods with R, Méthodes de Monte-Carlo avec R, Monte Carlo methods, R, simulation, Université Paris Dauphine on November 28, 2011 by xi'an Following a long tradition (!) of changing the modus vivendi of each exam in our exploratory statistics with R class, we decided this year to give the students a large collection of exercises prior to the exam and to pick five among them to the exam, the students having to solve two and only two of them. (The exercises are available in French on my webpage.) This worked beyond our expectations in that the overwhelming majority of students went over all the exercises and did really (too) well at the exam! 
Next year, we will hopefully increase the collection of exercises and also prohibit written notes during the exam (to avoid a possible division of labour among the students). Incidentally, we found a few (true) gems in the solutions, incl. an harmonic mean resolution of the approximation of the integral

$\int_2^\infty x^4 e^{-x}\,\text{d}x=\Gamma(5,2)$

since some students generated from the distribution with density f proportional to the integrand over [2,∞) [a truncated gamma] and then took the estimator

$\dfrac{e^{-2}}{\frac{1}{n}\,\sum_{i=1}^n y_i^{-4}}\approx\dfrac{\int_2^\infty e^{-x}\,\text{d}x}{\mathbb{E}[X^{-4}]}\quad\text{when}\quad X\sim f$

although we expected them to simulate directly from the exponential and average the sample to the fourth power… In this specific situation, the (dreaded) harmonic mean estimator has a finite variance! To wit:

```
> y=rgamma(shape=5,n=10^5)
> pgamma(2,5,low=FALSE)*gamma(5)
[1] 22.73633
> integrate(f=function(x){x^4*exp(-x)},2,Inf)
22.73633 with absolute error < 0.0017
> pgamma(2,1,low=FALSE)/mean(y[y>2]^{-4})
[1] 22.92461
> z=rgamma(shape=1,n=10^5)
> mean((z>2)*z^4)
[1] 23.92876
```

So the harmonic mean does better than the regular Monte Carlo estimate in this case!
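A quick way to double-check that claim, added here as a follow-up sketch rather than as part of the original post, is to repeat both estimators a couple of hundred times and compare their Monte Carlo spread:

```r
set.seed(1)
# harmonic-mean version: sample from f proportional to x^4 exp(-x) on [2, Inf) by rejection
hm  <- replicate(200, {
  y <- rgamma(1e4, shape = 5)
  y <- y[y > 2]                                   # truncated Gamma(5) draws
  pgamma(2, shape = 1, lower.tail = FALSE) / mean(y^(-4))
})
# plain Monte Carlo version: exponential draws, fourth power, indicator of x > 2
std <- replicate(200, {
  z <- rgamma(1e4, shape = 1)
  mean((z > 2) * z^4)
})
c(sd_harmonic = sd(hm), sd_plain = sd(std))
```

On runs of this size the harmonic-mean version typically shows the clearly smaller standard deviation, consistent with the conclusion above.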
http://nrich.maths.org/7678/index?nomenu=1
## 'Board and Spool' printed from http://nrich.maths.org/

A man is holding one end of a board; the other end rests on a spool with outer radius $R$ and inner radius $r$. The board does not slip on the spool and the spool does not slip on the ground. The man starts to move with the board at a speed $u$.

1) How long does it take for the man to reach the spool?
2) Find the distance which must be traveled by the man to reach the spool.
3) Find the distance traveled by the man if $r = R$.
4) Calculate the time and the distance if $l = 298$ cm, $R = 101$ cm, $r = 86$ cm, $u = 1$ m/s.
http://matthewkahle.wordpress.com/tag/combinatorics/
# The fundamental group of random 2-complexes Eric Babson, Chris Hoffman, and I recently posted major revisions of our preprint, “The fundamental group of random 2-complexes” to the arXiv. This article will appear in Journal of the American Mathematical Society. This note is intended to be a high level summary of the main result, with a few words about the techniques. The Erdős–Rényi random graph $G(n,p)$ is the probability space on all graphs with vertex set $[n] = \{ 1, 2, \dots, n \}$, with edges included with probability $p$, independently. Frequently $p = p(n)$ and $n \to \infty$, and we say that $G(n,p)$ asymptotically almost surely (a.a.s) has property $\mathcal{P}$ if $\mbox{Pr} [ G(n,p) \in \mathcal{P} ] \to 1$ as $n \to \infty$. A seminal result of Erdős and Rényi is that $p(n) = \log{n} / n$ is a sharp threshold for connectivity. In particular if $p > (1+ \epsilon) \log{n} / n$, then $G(n,p)$ is a.a.s. connected, and if $p < (1- \epsilon) \log{n} / n$, then $G(n,p)$ is a.a.s. disconnected. Nathan Linial and Roy Meshulam introduced a two-dimensional analogue of $G(n,p)$, and proved an analogue of the Erdős-Rényi theorem. Their two-dimensional analogue is as follows: let $Y(n,p)$ denote the probability space of all 2-dimensional (abstract) simplicial complexes with vertex set $[n]$ and edge set ${[n] \choose 2}$ (i.e. a complete graph for the 1-skeleton), with each of the ${ n \choose 3}$ triangles included independently with probability $p$. Linial and Meshulam showed that $p(n) = 2 \log{n} / n$ is a sharp threshold for vanishing of first homology $H_1(Y(n,p))$. (Here the coefficients are over $\mathbb{Z} / 2$. This was generalized to $\mathbb{Z} /p$ for all $p$ by Meshulam and Wallach.) In other words, once $p$ is much larger than $2 \log{n} / n$, every (one-dimensional) cycle is the boundary of some two-dimensional subcomplex. Babson, Hoffman, and I showed that the threshold for vanishing of $\pi_1 (Y(n,p))$ is much larger: up to some log terms, the threshold is $p = n^{-1/2}$. In other words, you must add a lot more random two-dimensional faces before every cycle is the boundary of not any just any subcomplex, but the boundary of the continuous image of a topological disk. A precise statement is as follows. Main result Let $\epsilon >0$ be arbitrary but constant. If $p \le n^{-1/2 - \epsilon}$ then $\pi_1 (Y(n,p)) \neq 0$, and if $p \ge n^{-1/2 + \epsilon}$ then $\pi_1 (Y(n,p)) = 0$, asymptotically almost surely. It is relatively straightforward to show that when $p$ is much larger than $n^{-1/2}$, a.a.s. $\pi_1 =0$. Almost all of the work in the paper is showing that when $p$ is much smaller than $n^{-1/2}$ a.a.s. $\pi_1 \neq 0$. Our methods depend heavily on geometric group theory, and on the way to showing that $\pi_1$ is non-vanishing, we must show first that it is hyperbolic in the sense of Gromov. Proving this involves some intermediate results which do not involve randomness at all, and which may be of independent interest in topological combinatorics. In particular, we must characterize the topology of sufficiently sparse two-dimensional simplicial complexes. The precise statement is as follows: Theorem. If $\Delta$ is a finite simplicial complex such that $f_2 (\sigma) / f_0(\sigma) \le 1/2$ for every subcomplex $\sigma$, then $\Delta$ is homotopy equivalent to a wedge of circle, spheres, and projective planes. (Here $f_i$ denotes the number of $i$-dimensional faces.) Corollary. 
With hypothesis as above, the fundamental group $\pi_1( \Delta)$ is isomorphic to a free product $\mathbb{Z} * \mathbb{Z} * \dots * \mathbb{Z} / 2 * \mathbb{Z}/2$, for some number of $\mathbb{Z}$‘s and $\mathbb{Z} /2$‘s. It is relatively easy to check that if $p = O(n^{-1/2 - \epsilon})$ then with high probability subcomplexes of $Y(n,p)$ on a bounded number of vertices satisfy the hypothesis of this theorem. (Of course $Y(n,p)$ itself does not, since it has $f_0 = n$ and roughly $f_2 \approx n^{5/2}$ as $p$ approaches $n^{-1/2}$.) But the corollary gives us that the fundamental group of small subcomplexes is hyperbolic, and then Gromov’s local-to-global principle allows us to patch these together to get that $\pi_1 ( Y(n,p) )$ is hyperbolic as well. This gives a linear isoperimetric inequality on $pi_1$ which we can “lift” to a linear isoperimetric inequality on $Y(n,p)$. But if $Y(n,p)$ is simply connected and satisfies a linear isoperimetric inequality, then that would imply that every $3$-cycle is contractible using a bounded number of triangles, but this is easy to rule out with a first-moment argument. There are a number of technical details that I am omitting here, but hopefully this at least gives the flavor of the argument. An attractive open problem in this area is to identify the threshold $t(n)$ for vanishing of $H_1( Y(n,p), \mathbb{Z})$. It is tempting to think that $t(n) \approx 2 \log{n} / n$, since this is the threshold for vanishing of $H_1(Y(n,p), \mathbb{Z} / m)$ for every integer $m$. This argument would work for any fixed simplicial complex but the argument doesn’t apply in the limit; Meshulam and Wallach’s result holds for fixed $m$ as $n \to \infty$, so in particular it does not rule out torsion in integer homology that grows with $n$. As far as we know at the moment, no one has written down any improvements to the trivial bounds on $t(n)$, that $2 \log{n} / n \le t(n) \le n^{-1/2}$. Any progress on this problem will require new tools to handle torsion in random homology, and will no doubt be of interest in both geometric group theory and stochastic topology. Posted in research # Coloring the integers Here is a problem I composed, which recently appeared on the Colorado Mathematical Olympiad. If one wishes to color the integers so that every two integers that differ by a factorial get different colors, what is the fewest number of colors necessary? I might describe a solution, as well as some related history, in a future post. But for now I’ll just say that Adam Hesterberg solved this problem at Canada/USA Mathcamp a few summers ago, claiming the \$20 prize I offered almost as soon as I offered it. At the time, I suspected but still did not know the exact answer. Although the wording of the problem strongly suggests that the answer is finite, I don’t think that this is entirely obvious. Along those lines, here is another infinite graph with finite chromatic number. If one wishes to color the points of the Euclidean plane so that every two points at distance one get different colors, what is the fewest number of colors necessary? This is one of my favorite unsolved math problems, just for being geometrically appealing and apparently intractably hard. After fifty years of many people thinking about it, all that is known is that the answer is 4, 5, 6, or 7. Recent work of Shelah and Soifer suggests that the exact answer may depend on the choice of set theoretic axioms. This inspired the following related question. 
If one wishes to color the points of the Euclidean plane so that every two points at factorial distance get different colors, do finitely many colors suffice? More generally, if $S$ is a sequence of positive real numbers that grows quickly enough (say exponentially), and one forbids pairs points at distance $s$ from receiving the same color, one would suspect that finitely many colors suffice. On the other hand, if $S$ grows slowly enough (say linearly), one might expect that infinitely many colors are required. Posted in expository Tagged combinatorics # Francisco Santos disproves Hirsch conjecture The Hirsch conjecture has been around for over 50 years. The conjecture states that a $d$-dimensional polytope with $n$ facets has diameter at most $n-d$. (The facets of a polytope are the maximal dimensional faces (i.e. the $(d-1)$-dimensional faces), and saying that the diameter is at most $n-d$ is just saying that every pair of vertices can be connected by a path of length at most $n-d$.) The upper bounds on diameter are very far away from $n-d$; in fact no bound that is even polynomial (much less linear) in $n$ and $d$ is known. It was considered substantial progress when Gil Kalai got the first subexponential upper bounds. This fact led many to speculate that the conjecture is false, and apparently Victor Klee encouraged many people to try disproving it over the years. Francisco Santos has just announced an explicit counterexample to the Hirsch conjecture, in dimension $d = 43$ with $n =86$ facets. Details to be described at the Grünbaum–Klee meeting in Seattle this summer. To researchers on polytopes, this result may not be too surprising. Nevertheless, it is an old problem and it is nice to see it finally resolved. And there is something pleasing about finding explicit counterexamples in higher dimensional spaces. It offers a concrete way to see how bad our intuition for higher dimensions really is. Kahn and Kalai’s spectacular counterexample to “Borsuk’s conjecture” (in dimension $d = 1325$) comes to mind. I look forward to seeing the details of Santos’s construction. Posted in expository Tagged combinatorics, geometry # A conjecture concerning random cubical complexes Nati Linial and Roy Meshulam defined a certain kind of random two-dimensional simplicial complex, and found the threshold for vanishing of homology. Their theorem is in some sense a perfect homological analogue of the classical Erdős–Rényi characterization of the threshold for connectivity of the random graph. Linial and Meshulam’s definition was as follows. $Y(n,p)$ is a complete graph on $n$ vertices, with each of the ${n \choose 3}$ triangular faces inserted independently with probability $p$, which may depend on $n$. We say that $Y(n,p)$ almost always surely (a.a.s) has property $\mathcal{P}$ if the probability that $Y(n,p) \in \mathcal{P}$ tends to one as $n \to \infty$. Nati Linial and Roy Meshulam showed that if $\omega$ is any function that tends to infinity with $n$ and if $p = (\log{n} + \omega) / n$ then a.a.s $H_1( Y(n,p) , \mathbb{Z} / 2) =0$, and if $p = (\log{n} - \omega) / n$ then a.a.s $H_1( Y(n,p) , \mathbb{Z} / 2) \neq 0$. (This result was later extended to arbitrary finite field coefficients and arbitrary dimension by Meshulam and Wallach. It may also be worth noting for the topologically inclined reader that their argument is actually a cohomological one, but in this setting universal coefficients gives us that homology and cohomology are isomorphic vector spaces.) 
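[An empirical aside added here for illustration: much as isolated vertices are the last obstruction to connectivity of $G(n,p)$, an edge of the complete 1-skeleton contained in no 2-face of $Y(n,p)$ yields a non-vanishing class in $H^1(Y(n,p), \mathbb{Z}/2)$, and a first-moment computation places the disappearance of such uncovered edges right around $p = 2\log{n}/n$. The following small R simulation is my own sketch, not something from the original post.]

```r
# Fraction of runs of Y(n,p) that still have an edge contained in no 2-face,
# for p just below and just above 2*log(n)/n.
set.seed(2)
uncovered_edge <- function(n, p) {
  tri  <- combn(n, 3)                          # all candidate 2-faces
  keep <- tri[, runif(ncol(tri)) < p, drop = FALSE]
  covered <- matrix(FALSE, n, n)
  for (j in seq_len(ncol(keep))) {
    v <- keep[, j]
    covered[v[1], v[2]] <- covered[v[1], v[3]] <- covered[v[2], v[3]] <- TRUE
  }
  any(!covered[upper.tri(covered)])            # TRUE if some edge is uncovered
}

n  <- 75
p0 <- 2 * log(n) / n
c(below = mean(replicate(50, uncovered_edge(n, 0.5 * p0))),
  above = mean(replicate(50, uncovered_edge(n, 1.5 * p0))))
```

Below the threshold essentially every run still has an uncovered edge; above it essentially none do.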
Eric Babson, Chris Hoffman, and I found the threshold for vanishing of the fundamental group $\pi_1(Y(n,p))$ to be quite different. In particular, we showed that if $\epsilon > 0$ is any constant and $p \le n^{-1/2 -\epsilon}$ then a.a.s. $\pi_1 ( Y(n,p) ) \neq 0$ and if $p \ge n^{ -1/2 + \epsilon}$ then a.a.s. $\pi_1 ( Y(n,p) ) = 0$. The harder direction is to show that on the left side of the threshold that the fundamental group is nontrivial, and this uses Gromov’s ideas of negative curvature. In particular to show that the $\pi_1$ is nontrivial we have to show first that it is a hyperbolic group. [I want to advertise one of my favorite open problems in this area: as far as I know, nothing is known about the threshold for $H_1( Y(n,p) , \mathbb{Z})$, other than what is implied by the above results.] I was thinking recently about a cubical analogue of the Linial-Meshulam set up. Define $Z(n,p)$ to be the one-skeleton of the $n$-dimensional cube with each square two-dimensional face inserted independently with probability $p$. This should be the cubical analogue of the Linial-Mesulam model? So what are the thresholds for the vanishing of $H_1 ( Z(n,p) , \mathbb{Z} / 2)$ and $\pi_1 ( Z(n,p) )$? I just did some “back of the envelope” calculations which surprised me. It looks like $p$ must be much larger (in particular bounded away from zero) before either homology or homotopy is killed. Here is what I think probably happens. For the sake of simplicity assume here that $p$ is constant, although in realty there are $o(1)$ terms that I am suppressing. (1) If $p < \log{2}$ then a.a.s $H_1 ( Z(n,p) , \mathbb{Z} /2 ) \neq 0$, and if $p > \log{2}$ then a.a.s $H_1 ( Z(n,p) , \mathbb{Z} /2 ) = 0$. (2) If $p < (\log{2})^{1/4}$ then a.a.s. $\pi_1 ( Z(n,p) ) \neq 0$, and if $p > (\log{2})^{1/4}$ then a.a.s. $\pi_1 ( Z(n,p) ) = 0$. Perhaps in a future post I can explain where the numbers $\log{2} \approx 0.69315$ and $(\log{2})^{1/4} \approx 0.91244$ come from. Or in the meantime, I would be grateful for any corroborating computations or counterexamples. Posted in research Tagged combinatorics, probability, topology # Topological Turán theory I just came across the following interesting question of Nati Linial. If a two-dimensional simplicial complex has $n$ vertices and $\Omega(n^{5/2})$ faces, does it necessarily contain an embedded torus? I want to advertise this question to a wider audience, so I’ll explain first why I think it is interesting. First of all this question makes sense in the context of Turán theory, a branch of extremal combinatorics. The classical Turán theorem gives that if a graph on $n$ vertices has more than $\displaystyle{ \left( 1-\frac{1}{r} \right) \frac{n^2}{2} }$ edges then it necessarily contains a complete subgraph $K_r$ on $r$ vertices. This is tight for every $r$ and $n$. One could ask instead how many edges one must have before there is forced to be a cycle subgraph, where it doesn’t matter what the length of the cycle is. This is actually an easier question, and it is easy to see out that if one has $n$ edges there must be a cycle. It also seems more natural, in that it can be phrased topologically: how many edges must be added to $n$ vertices before we are forced to contain an embedded image of the circle? What is the right two-dimensional analogue of this statement? In particular, is there a constant $C$ such that a two-dimensional simplicial complex with $n$ vertices and at least $C n^2$ two-dimensional faces must contain an embedded sphere $S^2$? 
If so, then this is essentially best possible. By taking a cone over a complete graph on $n-1$ vertices, one constructs a two-complex on $n$ vertices with ${n -1 \choose 2}$ faces and no embedded spheres. Without having thought about it at all, I am not sure how to do better. In any case, the corresponding question for torus seems more interesting, but for different reasons. In a paper with Eric Babson and Chris Hoffman we looked at the fundamental group of random two-complexes, as defined by Linial and Meshulam, and found the rough threshold for vanishing of the fundamental group. To show that the fundamental group was nontrivial when the number of faces was small required a lot of work — in particular, in order to apply Gromov’s local-to-global method for hyperbolicity, we needed to prove that the space was locally negatively curved, and this meant classifying the homotopy type of subcomplexes up to a large but constant size. It turned out that the small subcomplexes were all homotopy equivalent to wedges of circles, spheres, and projective planes. In particular, we show that there are not any torus subcomplexes, at least not of bounded size. (Linial may have recently shown that there are not embedded tori, even of size tending to infinity with $n$.) On the other hand, just on the other side of the threshold embedded tori abound in great quantity. It is interesting that something similar happens in the density random groups of Gromov — that the threshold for vanishing of the density random group corresponds to the presence of tori subcomplex in the naturally associated two-complex. It is not clear to me if this is a general phenomenon, coming geometrically from the fact that a torus admits a flat metric. Some of the great successes of the probabilistic method in combinatorics have been in existence proofs when constructions are hard or impossible to come by. It would be nice to have interesting or extremal topological examples produced this way. Nati’s question suggests an interesting family of extremal problems in topological combinatorics, and it might make sense that in certain cases, random simplicial complexes have nearly maximally many faces for avoiding a particular embedded subspace. Update: Nati pointed me to the paper Sós, V. T.; Erdos, P.; Brown, W. G., On the existence of triangulated spheres in \$3\$-graphs, and related problems. Period. Math. Hungar. 3 (1973), no. 3-4, 221–228. Here it is shown that $n^{5/2}$ is the right answer for the sphere. Their lower bound is constructive, based on projective planes over finite fields. Nati said that being initially unaware of this paper, he found a probabilistic proof that works just as well as a lower bound for every fixed 2-manifold. So it seems that the main problem here is to find a matching upper bound for the torus. Posted in research Tagged combinatorics, probability, topology # The Rado complex A simplicial complex is said to be flag if it is the clique complex of its underlying graph. In other words, one starts with the graph and add all simplices of all dimensions that are compatible with this $1$-skeleton. A subcomplex $F'$ of a flag complex $F$ is said to be induced if it is flag, and if whenever vertices $x, y \in F'$ and $\{ x,y \}$ is an edge of $F$, we also have that $\{ x,y \}$ is an edge of $F'$. Does there exist a flag simplicial complex $\Delta$ with countably many vertices, such that the following extension property holds? 
[Extension property] For every finite or countably infinite flag simplicial complex $X$ and vertex $v \in X$, and for every embedding of $X-v$ as an induced subcomplex $i: X -v \hookrightarrow \Delta$ , $i$ can be extended to an embedding $\widetilde{i}: X \hookrightarrow \Delta$ of $X$ as an induced subcomplex. It turns out that such a $\Delta$ does exist, and it is unique up to isomorphism (both combinatorially and topologically). Other interesting properties of $\Delta$ immediately follow. - $\Delta$ contains homoemorphic copies of every finite and countably infinite simplicial complex as induced subcomplexes. - The link of every face $\sigma \in \Delta$ is homeomorphic to $\Delta$ itself. - The automorphism group of $\Delta$ acts transitively on $d$-dimensional faces for every $d$. - Deleting any finite number of vertices or edges of $\Delta$ and the accompanying faces does not change its homeomorphism type. Here is an easy way to describe $\Delta$. Take countably many vertices, say labeled by the positive integers. Choose a probability $p$ such that $0 < p < 1$, and for each pair of integers $\{ m,n \}$, connect $m$ to $n$ by an edge with probability $p$. Do this independently for every edge. This is sometimes called the Rado graph, and because it is unique up to isomorphism (and in particular because it does not depend on $p$) it is sometimes also called the random graph. It is also possible to construct the Rado graph purely combinatorially, without resorting to probability. The $\Delta$ I have in mind is of course just the clique complex of the Rado graph. We can filter the complex $\Delta$ by setting $\Delta(n)$ to be the induced subcomplex on all vertices with labels $\le n$, and this allows us to ask more refined questions. (Now the choice of $p$ affects the asymptotics, so we assume $p = 1/2$.) From the perspective of homotopy theory, $\Delta$ is not a particularly interesting complex; it is contractible. (This is an exercise, one should check this if it is not obvious!) However, $\Delta(n)$ has interesting topology. As $n \to \infty$, the probability that $\Delta(n)$ is contractible is going to $0$. It was recently shown that $\Delta(n)$ has asymptotically almost surely (a.a.s.) at least $\Omega( \log{ \log{ n}} )$ nontrivial homology groups, concentrated around dimension $\log_2{n}$. For comparison, the dimension of $\Delta(n)$ is $d \approx 2 \log_2{n}$. I think one can probably show using the techniques from this paper that there is a.a.s. no nontrivial homology above dimension $d/2$ or below dimension $d/4$. It is still not clear (at least to me) what happens between dimensions $d/4$ and $d/2$. It seems that a naive Morse theory argument can give that the expected dimension of homology is small in this range, but to show that it is zero would take a more refined Morse function. Perhaps a good topic for another post would be “Morse theory in probability.” Another question: given a non-contractible induced subcomplex (say an embedded $d$-dimensional sphere) $\Delta' \subset \Delta$ on a set $S$ of $k$ vertices, how many vertices $f(k)$ should one expect to add to $S$ before $\Delta'$ becomes contractible in the larger induced subcomplex? For example, it seems that once you have added about $2^k$ vertices, it is reasonably likely that one of these vertices induces a cone over $\Delta'$, but is it possible that the subcomplex becomes contractible with far fewer vertices added? 
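[An illustrative addition: the engine behind the uniqueness of $\Delta$ is the classical extension property of the Rado graph itself, namely that for any disjoint finite vertex sets $U$ and $V$ there is a vertex adjacent to everything in $U$ and nothing in $V$. A finite sample of $G(n, 1/2)$ already satisfies this for small $U, V$ with overwhelming probability, as the following R sketch (my own, with arbitrary parameter choices) checks.]

```r
# Empirical check of the extension property in a finite chunk of the Rado graph
set.seed(42)
n <- 1000
A <- matrix(0L, n, n)
A[upper.tri(A)] <- rbinom(n * (n - 1) / 2, 1, 0.5)   # G(n, 1/2)
A <- A + t(A)

has_witness <- function(A, k = 3) {
  n <- nrow(A)
  s <- sample(n, 2 * k)
  U <- s[1:k]; V <- s[(k + 1):(2 * k)]
  cand <- setdiff(seq_len(n), s)
  any(colSums(A[U, cand, drop = FALSE]) == k &
      colSums(A[V, cand, drop = FALSE]) == 0)
}

mean(replicate(2000, has_witness(A)))   # should be 1, or extremely close to it
```

For fixed $k$ the failure probability decays like $(1 - 2^{-2k})^{n-2k}$, so finite graphs only struggle when $U$ and $V$ are allowed to grow with $n$, which is where the countable construction is really needed.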
Posted in research Tagged combinatorics, probability, topology

# The foolproof cube – a study in symmetry

Besides being a fun toy, and perhaps the most popular puzzle in human history, the Rubik's Cube is an interesting mathematical example. It provides a nice example of a nonabelian group, and in another article I may discuss some features of this group structure. This expository article is about an experiment, where I made Rubik's cubes with two or three colors, instead of six. In particular, I want to mention an interesting observation made by Dave Rosoff about one of the specially colored cubes: It turns out to be foolproof, in the sense that no matter how one breaks it apart and reassembles the pieces, it is still solvable by twisting the sides. It is well known that this is not a property that stock Rubik's Cubes have.

The first observation about the physical construction of the cube is that it is made out of 21 smaller plastic pieces: a middle piece with 6 independently spinning center tiles, 8 corner cubies, and 12 edge cubies. The frame includes 6 of the stickers. Each corner cubie has 3, and each edge cubie has 2. (And this accounts for all of the $6 \times 9 = 54 = 6 + 8 \times 3 + 12 \times 2$ stickers.) These stickers are on a solid piece of plastic, so they always stay next to each other, no matter how much you scramble it by twisting the sides. (I remember getting really mad as a kid, after figuring out that someone else had messed around with the stickers on my cube. Still a pet peeve, taking stickers off a cube is like fingernails on a chalkboard for me.) So anyway, it's impossible to get two yellow stickers on the same edge cubie, for example, because that would make the cube impossible to solve: you couldn't ever get the two stickers onto the same side.

But this is not the only thing that can't happen. Let's restrict ourselves from now on to just the positions you can get to by taking the cube apart into plastic pieces, and putting them back together. If you take it apart, and put it back together randomly, will you necessarily be able to solve it by only twisting sides? (Of course you can always solve it by taking it back apart and putting it back together solved!) I knew, empirically, as a kid that you might not be able to solve it if you put it together haphazardly. Someone told me in high school that if you put it together "randomly," your chances that it was solvable were exactly 1 in 12, and explained roughly why: it is impossible to flip an edge (gives a factor of 2), rotate a corner (a factor of 3), or to switch any two cubies (another factor of 2). This seemed plausible at the time, but it wasn't until a graduate course in algebra that I could finally make mathematical sense of this. In fact, one of the problems on the take-home final was to prove that it's impossible to flip any edge (leaving the rest of the cube untouched!), through any series of twists. I thought about it for a day or two, and was extremely satisfied to finally figure it out, how to prove something that I had known in my heart to be true since I was a kid. It is a fun puzzle, and one can write out a proof that doesn't really use group theory in any essential way (although to be fair, group theory does provide a convenient language, and concise ways of thinking about things). For example, it is possible to assign a number 0 or 1 to every position, such that the number doesn't change when you twist any side (i.e. it is invariant).
Then provided that the property prescribes a 0 to a solved cube and a 1 to a cube with a flipped edge, the invariance gives  that the flipped edge cube is unsolvable. Several years ago, I got inspired to try different colorings of a Rubik’s Cube, just allowing some of the sides to have the same color. I was picky about how I wanted to do it, however. Each color class should “look the same,” up to a relabeling. A more precise way to say this is that every permutation of the colors is indistinguishable from some isometry. (Isometries of the cube are its symmetries: reflections, and rotations, and compositions of these. There are 48 in total.) It turns out that this only gives a few possibilities. The first is the usual coloring by 6-colors. Although there are several ways to put 6 colors on the faces of a cube,  for our purposes here there is really only one 6-color cube. There is also the “Zen cube,” with only one color. (“Always changing, but always the same.”) But there are a few intermediate possibilities that are interesting. First, with two colors, once can two-color the faces of a cube in essentially two different ways. Note that since we want each color class to look the same, each color gets three sides. The three sides of a color class either all three meet at a corner, or they don’t. And these are the only two possibilities, after taking into account all of the symmetries. So I bought some blank cubes and stickers, and made all four of the mathematically interesting possibilities. (I keep meaning to make a nice Zen cube, perhaps more interesting philosophically than mathematically, but I still haven’t gotten around to it. ) My friend Dave Rosoff and I played around with all of these, and found them somewhat entertaining. A first surprise was that they seem harder to solve than a regular Rubik’s cube. Seems like it should be easier, in terms of various metrics: the number of indistinguishable positions being much smaller, or equiavalently, the number of mechanical positions which are indistinguishable from the “true” solved position being much bigger. However in practice, what happens for many experience cube solvers, is that they get into positions that they don’t recognize at the end: the same-colored stickers seem to mask your true position. Nevertheless, an experienced solver can handle all four of these cubes without too much difficulty. When playing around with them, occasionally a cubie would pop out fall to the floor. The thing to do is to just pop the piece back in arbitrarily, and the solve it as far as you can. It is usually an edge that pops, so the probability of having a solvable cube is 1/2.  And if not, one can tell at the end, and then remedies the situation by flipping any edge back. Dave noticed something special about one of these four cubes, I think just from enough experience with solving it: no matter which piece got popped out, when he popped it back in randomly, the cube still seemed to be solvable. He thought it seemed too frequent to be a coincidence, so after a while of chatting about it, we convinced ourselves that this cube indeed had a special property: if one takes it apart into its 21 pieces, and reassembles it arbitrarily, it is always solvable by twists, a property we described as, “foolproof.” We talked it about it for a while longer, and convinced ourselves that this is the only foolproof cube, up to symmetry and permuting colors. (Well, the Zen cube is foolproof too.) So which of these four cubes is foolproof? 
This puzzle yields to a few basic insights, and does not actually require making models of each type of cube, although I would encourage anyone to do so who has extra blank cubes and stickers around, or who wants a neat Cube variant for their collection.

Posted in expository Tagged combinatorics, cubes

# Threshold behavior for non-monotone graph properties

One of my research interests is what you might call non-monotone graph properties. A seminal result of Erdős and Rényi is that if $p \ll \log{n} / n$, then the random graph $G(n,p)$ is a.a.s. disconnected, while if $p \gg \log{n} /n$ then $G(n,p)$ is a.a.s. connected. This can be made more precise; for example, if $p = \frac{\log{n} + c}{n},$ with $c \in \mathbb{R}$, then $\mbox{Pr}[G(n,p) \mbox{ is connected }] \to e^{-e^{-c}}$ as $n \to \infty$.

Connectivity is an example of a monotone graph property, meaning a set of graphs either closed under edge deletion or closed under edge addition. Other examples would be triangle-free, k-colorable, and fewer than five components. The fact that every monotone graph property has a sharp threshold is a celebrated theorem of Friedgut and Kalai. More generally, a graph property is a way of assigning numbers to finite graphs that is invariant under graph isomorphism, and increases or decreases with edge deletion or addition. Examples would be the number of triangle subgraphs, chromatic number, and number of connected components.

Much of random graph theory is concerned with monotone properties of random graphs. But it is not hard to think of examples of non-monotone properties. For example, let $i =$ the number of induced four-cycle subgraphs. Another example would be $j =$ the number of complete $K_3$-subgraphs that are not contained in any $K_4$-subgraph. When $p$ is very large, $p \gg n^{-1/3}$, then a.a.s. every three vertices share some common neighbor, so every $K_3$-subgraph is contained in a $K_4$-subgraph and $j=0$. Similarly, when $p$ is very small, $p \ll n^{-1}$, then a.a.s. there are no $K_3$ subgraphs, and $j=0$. For intermediate values of $p$, $n^{-1} \ll p \ll n^{-1/3}$, the expected value of $j$ is $E[j] = \Theta (n^3 p^3)$. So the expected value of the graph property is unimodal in edge density, and we still have threshold behavior. Can Friedgut and Kalai's result be extended to a more general setting that includes these cases?

This is a simple and perhaps natural combinatorial example, but my original motivation was topological. I wrote a paper about topological properties of random clique complexes, available on the arXiv, which turn out to be non-monotone generalizations of the original Erdős-Rényi theorem. I will continue to write more about this and other related examples sometime soon, and as I have time, but in the meantime if anyone knows any nice examples of non-monotone graph properties, please leave a comment.

One more thought for now. It seems in many of the most natural examples we have of non-monotone properties $F$, the expected value of $F$ over the probability space $G(n,p)$ is basically a unimodal function in the underlying parameter $p$. Can you give any natural examples of bimodal or multimodal graph properties? (Pathological examples?)

Posted in research Tagged combinatorics, probability
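[Added illustration: the non-monotonicity of $j$ is easy to see numerically. The R sketch below is my own, with arbitrary parameter choices; it estimates $E[j]$ on $G(n,p)$ over a range of edge densities by brute-force enumeration, which is slow but fine at this size.]

```r
# E[j], with j = number of triangles not contained in any K4, is non-monotone in p
set.seed(3)
count_j <- function(n, p) {
  A <- matrix(0L, n, n)
  A[upper.tri(A)] <- rbinom(n * (n - 1) / 2, 1, p)
  A <- A + t(A)
  trip <- combn(n, 3)
  j <- 0
  for (k in seq_len(ncol(trip))) {
    v <- trip[, k]
    if (A[v[1], v[2]] & A[v[1], v[3]] & A[v[2], v[3]]) {   # it is a triangle
      common <- which(A[v[1], ] & A[v[2], ] & A[v[3], ])   # common neighbours
      if (length(common) == 0) j <- j + 1                  # not inside any K4
    }
  }
  j
}

n  <- 30
ps <- c(0.02, 0.05, 0.1, 0.2, 0.4, 0.7)
sapply(ps, function(p) mean(replicate(20, count_j(n, p))))
# small for tiny p (few triangles), small again for large p (every triangle
# sits in a K4), and largest somewhere in between
```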
http://math.stackexchange.com/questions/200617/how-to-solve-an-nth-degree-polynomial-equation?answertab=active
# How to solve an nth degree polynomial equation The typical approach of solving a quadratic equation is to solve for the roots $$x=\frac{-b\pm\sqrt{b^{2}-4ac}}{2a}$$ Here, the degree of x is given to be 2 However, I was wondering on how to solve an equation if the degree of x is given to be n. For example, consider this equation: $$a_0 x^{n} + a_1 x^{n-1} \dots a_n = 0$$ - @Johannes thanks for editing – Ayush Khemka Sep 22 '12 at 8:39 ## 3 Answers There is no perfect answer to this question. For polynomials up to degree 4, there are explicit solution formulas similar to that for the quadratic equation (the Cardano formulas for third-degree equations, see here, and the Ferrari formula for degree 4, see here). For higher degrees, no general formula exists (or more precisely, no formula in terms of addition, subtraction, multiplication, division, arbitrary constants and $n$-th roots). This result is proved in Galois theory and is known as the Abel-Ruffini theorem. Edit: Note that for some special cases (e.g., $x^n - a$), solution formulas exist, but they do not generalize to all polynomials. In fact, it is known that only a very small part of polynomials of degree $\ge 5$ admit a solution formula using the operations listed above. Nevertheless, finding solutions to polynomial formulas is quite easy using numerical methods, e.g., Newton's method. These methods are independent of the degree of the polynomial. - The OP's question is not clear enough. The equation may be concretely given like $x^5 + x^4 + x^3 + x^2 + x + 1 = 0$. If the Galois group of the equation is solvable, it can be solved using repeatedly the roots of equations of the form $x^k - a = 0$. – Makoto Kato Sep 22 '12 at 9:27 That is certainly true, but he states a general polynomial without any assumptions on the coefficients. Therefore, I will assume he is looking for a general solution formula. Of course, say, $x^n - 1=0$ is very easy to solve in terms of radicals, but this is not the issue here. – Johannes Kloos Sep 22 '12 at 9:29 It is not clear to me if the OP asks a solution of a general polynomial. – Makoto Kato Sep 22 '12 at 9:32 shrug it looks perfectly clear to me - if he was looking for a specific polynomial, he would have given the coefficients. Nevertheless, I will amend my answer. – Johannes Kloos Sep 22 '12 at 9:34 When one writes $ax^2 + bx + c = 0$ without explanation, it is not clear if it is a general equation or not. In other words, it is not clear whether $a, b, c$ are independent variables or constants. – Makoto Kato Sep 22 '12 at 9:41 show 8 more comments However, I was wondering on how to solve an equation if the degree of x is given to be n. It depends on the information you want. For many applications, the fact "$\alpha$ is a solution to that equation" is all the information you need, and so solving the equation is trivial. Maybe you'll also want to know how many real solutions there are. Descartes' rule of signs is good for that. Also, see Sturm's theorem. Sometimes, you need some information on the numeric value. You usually don't need much: "$\alpha$ is the only solution to that equation that lies between 3 and 4", for example. It's pretty easy to get rough information through ad-hoc means. Newton's method can be used to improve estimates, and determining how many solutions there are can help ensure you've found everything. - If I understand the question correctly: there is no general expression for finding roots of polynomials of degree 5 or more. 
See here. For degrees 3 and 4 the Wikipedia entries are quite good. - The Abel-Ruffini theorem states that there is no solution using radicals, not that there is no expression or method in general. – Calle Sep 22 '12 at 8:42 @Calle Given he/she mentioned the quadratic formula I assumed he/she meant a similar expression for higher degrees. But I take your point. – user39572 Sep 22 '12 at 8:48 The OP's question is not clear enough. The equation may be concretely given like $x^5 + x^4 + x^3 + x^2 + x + 1 = 0$. – Makoto Kato Sep 22 '12 at 9:20
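Since one answer points to numerical methods such as Newton's method for general degree, here is a minimal sketch of that approach in Python (an addition, not part of the original thread); the coefficient list, starting point and tolerance are illustrative choices.

```python
def horner(coeffs, x):
    """Evaluate a_0*x^n + ... + a_n and its derivative at x (coeffs listed highest degree first)."""
    p, dp = 0.0, 0.0
    for a in coeffs:
        dp = dp * x + p   # derivative accumulates the previously built polynomial
        p = p * x + a
    return p, dp

def newton_root(coeffs, x0, tol=1e-12, max_iter=100):
    """Approximate one real root of the polynomial near x0 via Newton's method."""
    x = x0
    for _ in range(max_iter):
        p, dp = horner(coeffs, x)
        if dp == 0:
            break
        x_new = x - p / dp
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Example: x^5 - 3x + 1 = 0 has a root between 0 and 1.
print(newton_root([1, 0, 0, 0, -3, 1], x0=0.5))
```

For all roots at once, `numpy.roots` (which computes eigenvalues of the companion matrix) is the usual off-the-shelf choice.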
http://mathschallenge.net/full/odd_power_divisibility
# mathschallenge.net ## Odd Power Divisibility #### Problem Prove that $6^n + 8^n$ is divisible by 7 iff $n$ is odd. #### Solution We shall prove this in two different ways. The first proof, which is the simplest, makes use of congruences and the second proof makes use of the binomial expansion. First Proof As $6 \equiv -1 \mod 7$ and $8 \equiv 1 \mod 7$ it follows that $6^n \equiv (-1)^n$ and $8^n \equiv 1$. $\therefore S = 6^n + 8^n \equiv (-1)^n + 1 \mod 7$. If $n$ is even then $(-1)^n = 1 \implies S \equiv 2 \mod 7$. If $n$ is odd then $(-1)^n = -1 \implies S \equiv 0 \mod 7$. Thus $S$ is divisible by 7 iff $n$ is odd. Second Proof Let $x = 6^n = (7 - 1)^n$ and $y = 8^n = (7 + 1)^n$. $\begin{align}y &= 7^n + \dbinom{n}{n-1}7^{n-1} + \dbinom{n}{n-2}7^{n-2} + \dots + \dbinom{n}{2}7^2 + \dbinom{n}{1}7 + 1\\x &= 7^n - \dbinom{n}{n-1}7^{n-1} + \dbinom{n}{n-2}7^{n-2} - \dots \pm 1\\\therefore x+y &= 2 \times 7^n + 2 \dbinom{n}{n-2}7^{n-2} + 2 \dbinom{n}{n-4}7^{n-4} + \dots\end{align}$ We note that the last term of the series for $x$ will be $-1$ if $n$ is odd and $+1$ if $n$ is even. Therefore the series for $x + y$ will end $2 \dbinom{n}{1}7$ if $n$ is odd and 2 if $n$ is even. In other words, all the terms will divide by 7 except the last term when $n$ is even. Hence $x + y \equiv 0 \mod 7$ iff $n$ is odd. What can you deduce about $x + y \mod 49$? Investigate the remainder when $(a - 1)^n + (a + 1)^n$ is divided by $a$ or $a^2$. Problem ID: 279 (13 May 2006)     Difficulty: 3 Star
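A quick numerical check of both the statement and the follow-up question about $x + y \bmod 49$ (this snippet is an addition, not part of the original solution):

```python
# Print n, (6^n + 8^n) mod 7 and (6^n + 8^n) mod 49 for small n.
for n in range(1, 11):
    s = 6**n + 8**n
    print(n, s % 7, s % 49)
```

The mod-7 column is 0 exactly for odd $n$, as proved above. For odd $n$ the mod-49 column should match $14n \bmod 49$: in the second proof the series for $x + y$ ends in $2\dbinom{n}{1}7 = 14n$, and every earlier term is divisible by $49$.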
http://math.stackexchange.com/questions/16308/what-is-the-general-term-a-n-of-the-alternating-sequence-cos3n-pi-2
# What is the general term $(a_n)$ of the alternating sequence $\cos(3n \pi/2)$? What is the general term $(a_n)$ of the alternating sequence $\displaystyle \cos \left( \frac{3n \pi}{2} \right)$ from $1$ to $\infty$, $n \in \mathbb{N}$ ? - 1 Does t = n? Have you tried evaluating a few terms yourself? – Qiaochu Yuan Jan 4 '11 at 12:08 This is alternating, and furthermore it is periodic with 4t. This should be enough for solving that... – ftiaronsem Jan 4 '11 at 12:12 $\cos(3t \pi/2)$: $\cos(3 \pi/2)=0$, $\cos(3\pi)=-1$, $\cos(9 \pi/2)=0$, $\cos(6 \pi) = 1 \cdots$, but the general rule is missing in my mind. – alvoutila Jan 4 '11 at 12:18 ## 1 Answer Well, you nearly got it ;-) Let's write down the first terms: $$\begin{array}{ccc} k& \quad & \cos \left(\frac{3\pi k}{2} \right) \\ 1& \quad & 0 \\ 2& \quad &-1 \\ 3& \quad & 0 \\ 4& \quad & 1 \\ 5& \quad & 0 \\ 6& \quad &-1 \\ 7& \quad & 0 \\ 8& \quad & 1 \\ \end{array}$$ So we are obviously searching for something which is $-1$ every second and $1$ every fourth time. What comes to mind? $i^k$. Unfortunately, every first time we have $i$ and every third time $-i$. Now we have to find a way to cancel out $i$ every first and third time. We are therefore searching for an $x$ so that: $$\begin{array}{crr} k \quad & i^k & x^k \\ 1 \quad & i &-i\\ 2 \quad &-1 &-1\\ 3 \quad &-i & i\\ 4 \quad & 1 & 1\\ \end{array}$$ Because if we had that, we would simply sum $x^k$ and $i^k$, divide by two, and be finished. After a little thinking $$x^k=(-i)^k$$ comes to mind, since the factor $(-1)^k$ has exactly the alternating properties we are searching for. $$a_k = \frac{i^k+(-i)^k}{2}$$ That's a bit circuitous. Just use the fact that $\cos(x)=(\exp(ix)+\exp(-ix))/2$ and $\exp(3\pi i/2)=-i$ ... – non-expert Jan 4 '11 at 14:31 hahaha, yeah, that's way more direct. Thanks for the comment, I simply hadn't seen $\exp(3\pi i/2)=-i$. But this is very nice... – ftiaronsem Jan 4 '11 at 14:38
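A quick check of the closed form against the original sequence (an added snippet, not from the thread); the small discrepancies in the first column are just floating-point noise in `math.cos`.

```python
import math

for k in range(1, 9):
    exact = math.cos(3 * k * math.pi / 2)          # the original sequence
    closed = ((1j) ** k + (-1j) ** k) / 2          # a_k = (i^k + (-i)^k) / 2
    print(k, f"{exact:7.3f}", f"{closed.real:7.3f}")
```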
http://physics.stackexchange.com/questions/tagged/field-theory+symmetry
# Tagged Questions 2answers 136 views ### Does a constant factor matter in the definition of the Noether current? This is a very basic Lagrangian Field Theory question, it is about a definition convention. It takes much more time to typeset it than answering, but here it is: Consider a field Lagrangian with only ... 1answer 184 views ### Local and Global Symmetries Could somebody point me in the direction of a mathematically rigorous definition local symmetries and global symmetries for a given (classical) field theory? Heuristically I know that global ... 1answer 203 views ### U(1) Charged Fields I don't quite understand what is actually meant by a field charged under a $U(1)$ symmetry. Does it mean that when a transformation is applied the field transforms with an additional phase? More ... 1answer 173 views ### How to perform a scale (invariance) transformation? According to this wikipedia article in the $\phi^4$ section, the equation $$\frac{1}{c^2}\frac{∂^2}{∂t^2}\phi(x,t)-\sum_i\frac{∂^2}{∂x_i^2}\phi(x,t)+g\ \phi(x,t)^3=0,$$ in 4 dimensions is invariant ... 1answer 257 views ### Understanding P-, CP-, CPT-violation etc. in field theory and in relation to the principle of relativity I can never get my head around the violations of $P-$, $CP-$, $CPT-$ violations and their friends. Since the single term "symmetry" is so overused in physics and one has for example to watch out and ... 1answer 207 views ### Why does charge conservation due to gauge symmetry only hold on-shell? While deriving Noether's theorem or the generator(and hence conserved current) for a continuous symmetry, we work modulo the assumption that the field equations hold. Considering the case of gauge ...
http://mathoverflow.net/revisions/38098/list
## Return to Question 3 added 26 characters in body Often when people write about the geometrization conjecture they assume (for simplicity) that the manifold is orientable. I never seriously thought of non-orientable 3-manifolds, but while reading Morgan-Tian's paper I realized that they prove the geometrization for every compact 3-manifold $M$ with no embedded two-sided projective planes, which was news to me. There is a lot of notation in their paper so I ask Question 1. Is the above a correct interpretation of what is proved in Morgan-Tian's paper? Note: one way to rule out two-sided projective planes is to assume that $\pi_1(M)$ has no 2-torsion (because then a two-sided projective plane would lift to the orientation cover where it cannot exist). In particular, this gives the geometrization conjecture when $M$ is aspherical (in which case $\pi_1(M)$ is torsion free). Question 2. What is the status of the geometrization conjecture for manifolds that contain two-sided projective planes? Note: on the last two pages of Scott's wonderful survey "The Geometries of 3-manifolds" he describes a version of the geometrization conjecture that makes sense in the presence of two-sided projective planes. UPDATE: 1. Looks like my reading of Morgan-Tian was hasty, and I no longer think they prove the geometrization for non-orientable manifolds that contain no two-sided projective planes. They only prove it for manifolds that become extinct in finite time under Ricci flow. 2. As we discussed with Ryan in comments the geometrization for manifolds that contain two-sided projective planes reduces to geometrization of certain non-orientable orbifolds with isolated singular points. However, in contrast with what Ryan says, I was unable to find this result the geometrization for such orbifolds in the literature. Again, lots of particular cases are known, but I could not find it claimed (let alone proved) in full generality. 2 added 695 characters in body Often when people write about the geometrization conjecture they assume (for simplicity) that the manifold is orientable. I never seriously thought of non-orientable 3-manifolds, but while reading Morgan-Tian's paper I realized that they prove the geometrization for every compact 3-manifold $M$ with no embedded two-sided projective planes, which was news to me. There is a lot of notation in their paper so I ask Question 1. Is the above a correct interpretation of what is proved in Morgan-Tian's paper? Note: one way to rule out two-sided projective planes is to assume that $\pi_1(M)$ has no 2-torsion (because then a two-sided projective plane would lift to the orientation cover where it cannot exist). In particular, this gives the geometrization conjecture when $M$ is aspherical (in which case $\pi_1(M)$ is torsion free). Question 2. What is the status of the geometrization conjecture for manifolds that contain two-sided projective planes? Note: on the last two pages of Scott's wonderful survey "The Geometries of 3-manifolds" he describes a version of the geometrization conjecture that makes sense in the presence of two-sided projective planes. UPDATE: 1. Looks like my reading of Morgan-Tian was hasty, and I no longer think they prove the geometrization for non-orientable manifolds that contain no two-sided projective planes. They only prove it for manifolds that become extinct in finite time under Ricci flow. 2. 
As we discussed with Ryan in comments the geometrization for manifolds that contain two-sided projective planes reduces to geometrization of certain non-orientable orbifolds with isolated singular points. However, in contrast with what Ryan says, I was unable to find this result in the literature. Again, lots of particular cases are known, but I could not find it claimed (let alone proved) in full generality. 1 # Geometrization for 3-manifolds that contain two-sided projective planes Often when people write about the geometrization conjecture they assume (for simplicity) that the manifold is orientable. I never seriously thought of non-orientable 3-manifolds, but while reading Morgan-Tian's paper I realized that they prove the geometrization for every compact 3-manifold $M$ with no embedded two-sided projective planes, which was news to me. There is a lot of notation in their paper so I ask Question 1. Is the above a correct interpretation of what is proved in Morgan-Tian's paper? Note: one way to rule out two-sided projective planes is to assume that $\pi_1(M)$ has no 2-torsion (because then a two-sided projective plane would lift to the orientation cover where it cannot exist). In particular, this gives the geometrization conjecture when $M$ is aspherical (in which case $\pi_1(M)$ is torsion free). Question 2. What is the status of the geometrization conjecture for manifolds that contain two-sided projective planes? Note: on the last two pages of Scott's wonderful survey "The Geometries of 3-manifolds" he describes a version of the geometrization conjecture that makes sense in the presence of two-sided projective planes.
http://math.stackexchange.com/questions/169530/about-operatornamesym-omega-when-omega-is-an-infinite-set
# About $|\operatorname{Sym}(\Omega)|$ when $\Omega$ is an infinite set. Here is a problem: Show that if $\Omega$ is an infinite set, then $|\operatorname{Sym}(\Omega)|=2^{|\Omega|}$. I have worked on a problem related to a group that is $S=\bigcup_{n=1}^{\infty } S_n$. Does it make sense to speak about the relation between $S$ and $\operatorname{Sym}(\Omega)$ when $\Omega$ is an infinite set? Moreover, I know that $|\operatorname{Sym}(\Omega)|=|\Omega|^{|\Omega|}$. How can we get from $|\Omega|^{|\Omega|}$ to $2^{|\Omega|}$? Thanks. - If $\Omega$ is infinite, the two have the same cardinality. – André Nicolas Jul 11 '12 at 17:01 @AndréNicolas: Isn't Sym$(\Omega)\subset S$? – Babak S. Jul 11 '12 at 17:27 We can have two infinite sets $A$ and $B$, with $A$ a proper subset of $B$, but with $A$ and $B$ of the same cardinality. For example (Galileo) let $B$ be the integers, and $A$ the even integers. – André Nicolas Jul 11 '12 at 17:48 ## 2 Answers You have $|\Omega|^{|\Omega|} \geq 2^{|\Omega|}$, since $\Omega$ is infinite. Then, $|\Omega| < 2^{|\Omega|}$ (Cantor's theorem), so $|\Omega|^{|\Omega|} \leq (2^{|\Omega|})^{|\Omega|}=2^{|\Omega|\cdot|\Omega|}=2^{|\Omega|}$, since $\Omega$ is infinite. - $\textbf{Lemma}$: For cardinals $2 \leq \kappa \leq \lambda$ with $\lambda$ infinite, $\kappa^\lambda = 2^\lambda$. It is clear that $2^\lambda \leq \kappa^\lambda$. However $\kappa^\lambda \leq (2^\kappa)^\lambda = 2^{\kappa \cdot \lambda} = 2^{\lambda}$, where $\lambda \cdot \kappa = \max\{\lambda, \kappa\}$, which can be proved. Now taking $\kappa = \lambda = |\Omega|$ in the lemma, you have $2^{|\Omega|} = |\Omega|^{|\Omega|}$. -
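Neither answer spells out the other inequality, $|\operatorname{Sym}(\Omega)| \ge 2^{|\Omega|}$, which the original problem also needs (the upper bound $|\operatorname{Sym}(\Omega)| \le |\Omega|^{|\Omega|} = 2^{|\Omega|}$ is immediate from the above). A standard sketch, added here for completeness: since $\Omega$ is infinite, $|\Omega| = |\Omega| + |\Omega|$, so $\Omega$ can be partitioned into $|\Omega|$ pairwise disjoint two-element sets $\{a_i, b_i\}$, $i \in I$, with $|I| = |\Omega|$. For each subset $S \subseteq I$ define $f_S \in \operatorname{Sym}(\Omega)$ by $f_S(a_i) = b_i$, $f_S(b_i) = a_i$ for $i \in S$, and $f_S(x) = x$ otherwise. Distinct subsets give distinct permutations, hence $$2^{|\Omega|} = 2^{|I|} \le |\operatorname{Sym}(\Omega)| \le |\Omega|^{|\Omega|} = 2^{|\Omega|}.$$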
http://math.stackexchange.com/questions/183440/how-to-prove-that-a-mapping-is-homomorphic
# How to prove that a mapping is homomorphic Let $f:(A, \cdot) \to (B, \ast)$ and $g:(B,\ast) \to (C,\times)$ be operation-preserving maps. Then I must prove that $g \circ f$ is an operation-preserving map too. This is what I have so far: Since $f$ is a homomorphism, $(A, \cdot)$ and $(B, \ast)$ are groups and $f(x \cdot y)=f(x)\ast f(y)$. Since $g$ is a homomorphism, $g(f(x)\ast f(y))=g(f(x)) \times g(f(y))$. Hence $g\circ f$ is homomorphic. - What is 'operation preserving maps'? Do you want to prove that composition of homomorphisms is homomorphism? – Sigur Aug 17 '12 at 0:45 An operation preserving map I think is another way of saying that a function is homomorphic. My textbook uses the weirdest terms – math101 Aug 17 '12 at 0:49 1 You should probably write out the full line: $$(g\circ f)(x\cdot y)=g(f(x\cdot y))=g(f(x)*f(y))=g(f(x))\times g(f(y))=(g\circ f)(x)\times(g\circ f)(y).$$ That's all you have to do, right? – anon Aug 17 '12 at 0:52 1 $(g\circ f)(x\cdot y)=g(f(x\cdot y))=g(f(x)\ast f(y))=g(f(x))xg(f(y))=(g\circ f)(x)x(g\circ f)(y)$ – Sigur Aug 17 '12 at 0:53 1 It would be better to use `\times` ($\times$) instead of overuse the letter $x$... – anon Aug 17 '12 at 0:53 show 2 more comments
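For readers who like to see the composition property checked on a concrete example, here is a small sketch (not part of the thread) using the reduction maps $\mathbb{Z}_{12} \to \mathbb{Z}_6 \to \mathbb{Z}_3$, which are operation-preserving for addition; the function names are mine.

```python
from itertools import product

def f(x):          # f : Z_12 -> Z_6, reduction mod 6 (operation-preserving for +)
    return x % 6

def g(y):          # g : Z_6 -> Z_3, reduction mod 3
    return y % 3

def gf(x):         # the composition g o f
    return g(f(x))

# (g o f)(x + y) == (g o f)(x) + (g o f)(y) in Z_3, for all x, y in Z_12
assert all(gf((x + y) % 12) == (gf(x) + gf(y)) % 3
           for x, y in product(range(12), repeat=2))
print("g o f preserves the operation on this example")
```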
http://physics.stackexchange.com/questions/39176/classical-approach-to-schwarzschild-radius
# Classical approach to Schwarzschild radius The Schwarzschild radius from general relativity is given to be $r = \frac{2GM}{c^2}$. One can obtain the same answer using classical calculations. That is, the escape velocity of a particle is given by $v = \sqrt\frac{2GM}{r}$, which can be arranged to give $r = \frac{2GM}{v^2}$, which can be interpreted as the maximum radius for which a particle travelling at velocity $v$ cannot escape. By treating light as simply a particle travelling at velocity $c$ and substituting in the above equation, one arrives at the Schwarzschild radius. Is it just a coincidence that the classical approach gives the same result as the general relativity result, or is there some merit to the classical approach? - ## 1 Answer It's just a coincidence that the factor of 2 comes out right, as the kinetic energy of the photon doesn't have a denominator. If you use isotropic coordinates, the "radius" of the black hole becomes M/2 instead of 2M (in natural units), where the "radius" in isotropic coordinates is the value of $\sqrt{x^2+y^2+z^2}$ on the horizon. The coincidence is dependent on Schwarzschild coordinates, and has no deeper significance. But the dependence on GM and r is demanded by dimensional analysis, and is not coincidental. -
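To make the numerical agreement concrete, here is a small sketch (added here, not part of the thread) evaluating the formula for the Sun; the constants are rounded textbook values, so the result is only approximate.

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s
M_sun = 1.989e30   # solar mass, kg

r_s = 2 * G * M_sun / c**2                 # Schwarzschild radius, about 2.95 km for the Sun
v_esc = math.sqrt(2 * G * M_sun / r_s)     # Newtonian escape velocity evaluated at r = r_s

print(f"r_s = {r_s:.0f} m, v_esc(r_s)/c = {v_esc / c:.6f}")
```

The ratio prints as 1.000000, which is just the algebraic identity behind the question: setting the Newtonian escape velocity equal to $c$ reproduces $r = 2GM/c^2$ exactly, even though (as the answer explains) the matching factor of 2 is coordinate-dependent and not deep.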
http://physics.stackexchange.com/questions/1453/great-unsolved-physics-problems/3406
# Great unsolved physics problems [closed] We all know that some theoretical ideas lack experimental evidence while in other cases there's a lack of a suitable theory for known phenomena and established facts and concepts. But what problem in physics, according to you, deserves a mention? And why you think solving that particular problem is of utmost importance and/or how far-reaching its effects/repercussions would be. One unsolved problem per post. - 1 Some qualified person (i.e. not me) should write an answer about the AdS/CFT conjecture! – Greg P Dec 28 '10 at 19:14 3 @arivero: perhaps. I just wanted to point out that in my opinion greatest unsolved problems are hiding in the nature not in the mathematical foundations of our theoretical models. But this was never intended as a criticism, just an amusing fact to note :) – Marek Jan 20 '11 at 14:34 show 4 more comments ## closed as not constructive by David Zaslavsky♦May 15 '12 at 21:22 As it currently stands, this question is not a good fit for our Q&A format. We expect answers to be supported by facts, references, or specific expertise, but this question will likely solicit debate, arguments, polling, or extended discussion. If you feel that this question can be improved and possibly reopened, see the FAQ for guidance. ## 33 Answers A general existence and uniqueness theorem for the Navier-Stokes equations. The lack of one is probably the largest open problem in classical mechanics, considering that the study of fluid dynamics is based on the Navier-Stokes equations and generalizations thereof. If there were an equation of state/initial data set where the solution was either non-existent or non-unique, even if such a thing were not experimentally realizable (for instance, the "good" initial data formed a dense set in the space of possible initial data), it would still fundamentally change how we look at the evolution of classica systems. - 2 – user346 Jan 20 '11 at 6:00 8 I would have thought the question was only of mathematical interest. Real fluids are not continuous, but made of particles. This makes the Navier-Stokes equations an approximation. – Hugh Allen Feb 7 '11 at 6:15 show 2 more comments Q: Are there magnetic monopoles? Why: If there are, Maxwell's equations become more symmetric and, more important, it immediately explains why charge is quantized. If there aren't, then the interesting question might be: Why? - 1 – Robin Maben Nov 30 '10 at 4:45 1 @user16307 Well, "explain" is relative. But we already know that angular momentum is quantized, and if magnetic monopoles exist, you can find a system whose EM field has an angular momentum that is proportional to the electric charge, so that then has to be quantized as well. – Lagerbaer Mar 4 at 16:00 show 2 more comments There is not (as far as I know) a satisfactory microscopic description of high-Tc superconductors. Classical (s-wave) superconductors are described by BCS theory, which qualitatively assembles charge-carriers into phonon-mediated Cooper pairs, and successfully predicts many macroscopic phenomena like the Meissner effect. This theory has been known for over 50 years. The behavior of high-Tc superconductors seems to be a different animal, with substantially more complexity. If we were to have a good grasp of the microscopic theory, this would be a major insight into the behavior of certain highly-correlated electronic systems, and one could perhaps build on such an understanding to analyze more complicated systems. 
On a more applications-oriented level, it is conceivable that we would have an easier time cooking up better-performing or more industrially economical superconductors. - What makes up dark matter. The existence of dark matter is well established cosmologically especially through recent gravitational lens observations of galaxies that collided. But we have no idea what it consists of. Ordinary baryonic matter is virtually rules out and galactic structure formation models suggest a weakly interacting massive particle is the best fit. So is that right and if so how do such particles fit into particle physics beyond the standard model? - Is supersymmetry (SUSY) a symmetry of our world? This has to do with many outstanding problems in modern physics, like hierarchy problem, nature of dark matter, cosmological constant problem and others. If true, it explains some of them in one stroke and for others it provides clues as to what to do next. But note that it would also bring many other questions (e.g. precisely what SUSY model is realized in nature and how exactly is spontaneous breaking of SUSY realized). It also points a way to quantum gravity (via supergravity). In short, this symmetry is very appealing theoretically and many people are quite sure it is indeed correct despite not yet being observed (so, in this regard it is quite similar to Higgs boson). Now, not only is the resolution of this problem (in either way) very interesting but LHC should actually be able to give the answer in next few years and this makes it all the more exciting. - show 7 more comments A completely general method for overcoming the fermion sign problem in quantum monte carlo sampling. As it is, any special circumstances in which we can guarantee the weights of a fermion path integral to be nonnegative are highly prized, but in general, in spatial dimensions greater than d = 1 we just can't accurately do QMC simulations for fermions. Consequences? Well, in addition to making my life (exponentially!) easier, it would also prove P=NP. (see, e.g.: Phys. Rev. Lett. 94, 170201) - show 4 more comments What mechanism is responsible for the imbalance between matter and anti-matter in the universe? Did the universe start with such an imbalance already in place or is it due to some CP violating mechanism at high energy that was significant in the early universe? If so what interactions are involved? Currently known CP violating mechanisms do not appear to be strong enough to account for the observed amount of matter in the universe, so the explanation is liekly to be someting new beyond the known standard model. - 1 +1 Baryon asymmetry is definitely one of the most perplexing aspects of cosmology. – Noldorin Jan 22 '11 at 23:47 Since no one mentioned the confinement: "no analytic proof exists that quantum chromodynamics should be confining" Briefly: it turns out that one cannot isolate color-charged particles. Intuitively one can understand it -- gluons, being the carriers of strong interaction, carry the color-charge themselves. So they are "screening" the color-charge of a carrier. And when one tries to, say, separate two quarks, the "gluon tube" appears between them. Which leads to production of new quarks, and those new quarks combine with the quarks being separated, producing colorless states in the end. Up to now there is no analytic calculation, that supports this picture. I think that the solution of that would be a great achievement. 
- One fundamental problem in mathematical physics that has to my knowledge not been resolved, but I might be mistaken since I haven't been working in this particular field for already 5 years and 5 years is a long time, is the following: In macroscopic bodies, there are several laws existing describing transport phenomena like heat conduction. They are described by precise mathematical laws which are well established experimentally. A well-known one is Fourier's law: $$c\frac{\partial}{\partial t}T(\vec{r},t)=\nabla \cdot \left(\kappa \nabla T(\vec{r},t)\right) \; ,$$ where $T(\vec{r},t)$ is the local temperature, $c=c(T)$ the specific heat and $\kappa=\kappa(T)$ the heat conductivity. There is however no rigorous mathematical derivation of this equation for any classical or quantum model with a Hamiltonian microscopic evolution. This problem is related to the question wether deterministic microscopic systems can fully explain the behaviour of macroscopic matter. (Remember this question, well basically, we have no rigorous proof that it's all just probability applied on huge amounts of microscopic particles. At least not for out of equilibrium processes, which transport phenomena are.) Now, I don't expect anyone but mathematical physicists invested in this particular field to loose any sleep over it. We have a good heuristic understanding of how things work and there's nobody seriously doubting that the explanation of macroscopic phenomena lies in understanding the underlying microscopic phenomena (except maybe some crackpots and fringe physicists). Still, a mathematical derivation could bring us a deeper understanding of why it works. The Green-Kubo relation gives $\beta V \int_0^\infty d\tau \left\langle J(0)J(\tau) \right\rangle$ for the transport coefficients. The time autocorrelation function can be computed using the closed time path formalism. - 2 Interesting. I had no idea that Fourier's law had no derivation from microscopic principles! – Noldorin Nov 30 '10 at 11:25 show 6 more comments What is the explanation of sonoluminescence? Under the right circumstances, sound waves can cause a bubble in a liquid to emit light. But the mechanism by which this happens is not understood. Some of the theories proposed to explain it are impressively exotic and far-fetched. Who knows what could come out of it? - show 1 more comment What is the nature and parameters of the Higgs sector? I.e. Does the Higgs particle exist? If so, what is its mass? Are there Higgs multiplets e.g. as predicted by supersymmetry? This is the last remaining question mark over the standard model and its resolution may take us to the next step beyond the standard model. - 1 I can't believe this answer (well, question) doesn't have more up-votes. How many billions of dollars were just spent building the LHC in an effort to answer it?? (Not to mention man-hours of the thousands of physicists contributing to it.) Needless to say, it gets my vote. – qftme May 10 '11 at 10:30 Why is the cosmological constant $10^{-120}$ times smaller than the Planck scale? Why is the electroweak scale so many orders of magnitude smaller than the Planck scale? Why is the QCD scale comparable to the electroweak scale? The cosmological constant problem is especially acute because of zero-point energy corrections coming from quantum field theory. In a nonsupersymmetric theory, with a mismatch between the number of bosonic and fermionic fields, this would require an incredible fine-tuning. 
Even in a nonsupersymmetric theory where the number of bosonic and fermionic fields match, unless the masses also match, we still require an enormous fine-tuning. Even if we have supersymmetry, it has to be broken below the TeV scale. Dynamical mechanisms to solve this problem typically run into problems with the renormalization group. Do the anthropic principle and the multiverse answer these questions? - Shucks - i just repeat the question that M. Veltman asks about a dozen times in Facts and Mysteries - WHY ARE THERE THREE GENERATIONS OF FERMIONS ? Halzen & Martin ask the same, as do a dozen others I could find. - What is the explanation for all the seemingly arbitrary dimensionless parameters? This is the biggest mystery of science today I think - 1 – Gordon Feb 3 '11 at 23:49 1 Given that the fine structure constant is a running constant that scales with energy, it's worth noting that it has this value only at zero energy. – qftme May 10 '11 at 10:25 show 3 more comments I think one important problem (at least from the point of mathematical physics) is to establish the Standard Model as a mathematically complete and consistent quantum field theory. This is related to: http://www.claymath.org/millennium/Yang-Mills_Theory/. The point is that the Standard Model of particle physics deals with fundamental structures of matter and is one of the most successful models in physics in terms of accuracy of predictions. However it seems to consist of a bunch of unjustified rules, people argue using undefined objects etc. Up to now there is no conceptual clear, mathematical and logical consistent description of the Standard Model available. So I think it would give us a quite deeper understanding how nature works if we had such a consistent formualtion of the Standard Model. - 4 I don't think this is an important problem because I don't think it is true that the Standard Model is mathematically complete and consistent. The Clay prize refers only to QCD, which is asymptotically free. Other parts of the Standard Model (the U(1) gauge part, the Higgs sector) are not asymptotically free and thus need a UV completion. This is one of the reasons why people think the Standard Model is only an effective field theory, valid at low energies rather than a theory that makes mathematical sense at all distance scales. – pho Jan 20 '11 at 0:49 1 I think any theory that needs renormalizations and IR problem resolving is physically and mathematically inconsistent by definition. A consistent theory calculates energies, scattering cross sections, etc., from the fundamental constants, not "renormalizes" them. – Vladimir Kalitvianski Jan 20 '11 at 11:29 2 Vladimir, you'll have to admit this is a minority opinion. – pho Jan 20 '11 at 13:05 show 1 more comment Find a consistent and complete theory of quantum gravity combining quantum mechanics with general relativity. Clearly, our universe is described by both quantum mechanics and general relativity. So, a more complete theory of the universe would have to incorporate quantum gravity somehow. However, combining the two has resisted decades of effort so far, and consistency and completeness turn out to be extremely stringent criteria in this case. - 1 @Marek: It may be too general, but as we are not really sure what form the final theory of quantum gravity will look like, making it more specific might run the risk of excluding the final correct theory. – QGR Jan 20 '11 at 9:49 show 1 more comment Is physical world ultimately describable by a renormalizable theory? 
Most physicists tend to assume that this has to be so. The truth is that this assumption is just a convenience so we can add a cutoff that will keep the low-energy models safe from interacting with features of the high-energy underlying models. There is nothing that will force this; in fact, is not out of the park to believe that most of the (27? 28? can't remember) "free" parameters in the standard model might be predicted by a non-perturbative nonrenormalizable high-energy model More importantly though, this very assumption is directly related to the fact that current physical theories have remained largely unfalsifiable in practical terms. So this assumption (may) eventually turn out to be a self-induced dead-end - show 5 more comments Are there extra spatial dimensions? Since many problems become exceptionally easier to solve with an assumption of extra spatial dimensions, I think this is one of the greatest questions of all time. - I would like to know how to compute the charge of an electron from first principles. This would likely have major implications that would depend on the form of the solution. - 2 Do mean the value of e? At what energy scale? – pho Jan 20 '11 at 2:37 1 I mean I want to derive the fine structure constant from first principles. – Matt Mar 2 '11 at 7:07 Greg P asked, so here it is. I think a proof of the AdS/CFT conjecture relating $N=4$ SYM with gauge group $SU(N_c)$ to IIB string theory on $AdS_5 \times S^5$ with $N_c$ units of five-form flux would be extremely important, even if the proof was only a proof by physics standards. I say this not because I think the conjecture may be false, indeed there is overwhelming evidence that the conjecture is true. Rather I think it would be important because many of the most important applications of gauge/gravity duality require going beyond AdS/CFT to theories without supersymmetry or conformal symmetry (such as gravity duals of QCD) or to theories without Lorentz invariance (various condensed matter applications) or to theories on the gravity side which are not asymptotically AdS. For example there are proposals for a de Sitter correspondence and Verlinde has proposed "emergent gravity" bases on holographic ideas which as far as I can tell do not involve the input of data from the boundary of spacetime as one needs in AdS. A proof of the original conjecture might indicate more clearly whether these extensions of the idea are correct, or whether at some point things go wrong when you try to generalize. - Is information preserved in evaporating black holes in quantum gravity? I know there are already too many answers here relating to quantum gravity, but I think this question is important. According to quantum field theory in curved spacetime, information crossing the event horizon will end up at the singularity where it will be destroyed. If we produce a pair of entangled particles outside the black hole, and throw one of them inside, it might appear as if we have converted a pure state into a mixed state, which is extremely problematic. A lot of weird and undesirable things will happen if unitary time evolution is violated in quantum mechanics, which is why it's likely a complete theory of quantum gravity will lead to unitary time evolution. So, the question is now where is the information hidden? - A mathematical solution of the plasma physics that causes the reversal of the sun's magnetic field every 11.5 years. - Why was the initial state of the universe so special? Why was the initial entropy so low? 
See recent book by Sean Carroll. - Why is the expansion of the universe accelerating? - show 1 more comment What is mass/Inertia? What is gravity? - What the heck happens when we do a measurement in quantum theory? That is why should the system jump to an Eigen state? Why should nice unitary evolution suddenly do something weird just because someone did a measurement? What exactly is a measurement anyway? - The Fundamental Properties of Space and Time at an Event. Some specifics are: 1. Is Time a logical derivable construct from some non-time based fundamental theory? 2. Is Time discrete or continuous? 3. Is Space(-Time) discrete - and if so at the Planck level? 4. Directionality (and perhaps dimensionality) of Time 5. Dimensionality of Space (the String Theory issue) 6. Substructure (and any physics) below any Planck scale "minimum distance" These kinds of issues are of course at the root of many conceptual and calculational issues in the theories around today. - What is the exact form of Exchange-correlation (xc) functional in Density Functional Theory (DFT)? The entire community of Condensed Matter Physics and Quantum Chesmitry want to know that answer. (: - show 1 more comment Quantum Mechanics gives us a mathematical framework for making predictions at the quantum level, which agree perfectly with experiment. However, no one knows what physical model (if any) that math represents. What does quantum mechanics really mean? Disturbingly, many physics students do not realize that the Copenhagen Interpretation - that particles do not actually have a position or momentum until they are physically measured - is only one of many interpretations of quantum mechanics, none of which we have any reason to believe over any other. All the interpretations lead to the same math, and thus, the same experimental results. Somewhat surprising to many, there are even some interpretations that do allow particles to have both definite positions and momentums (though other, equally-unintuitive assumptions must be made). - 1 This unanswered question was one of the biggest shocks on my undergraduate Quantum Mechanics courses. This actually was the motivation by which I became so interested in probability theory and, particularly, in the work of E.T. Jaynes. – Néstor May 12 '12 at 5:45 Does it make sense to talk about quantum gravity? Is it possible that gravity is a classical field or a condensate which appears largely classical, but where the underlying quantum physics is entirely different. Thirring fermions ~ sine Gordon solitons, where the SG solitons are a Euclideanize form of the hyperbolic dynamics on $AdS_2$. The $AdS_2~\sim~ CFT_1$ tells us the isometry of the spacetime is equivalent to the group for conformal quantum mechanics on the boundary. Might it then be the actual quantization is not with the spacetime, but with an underlying fermionic physics? If so can this be generalized to $AdS_n$, for $n~>~2$? - show 7 more comments ## protected by David Zaslavsky♦Jun 4 '11 at 20:45 This question is protected to prevent "thanks!", "me too!", or spam answers by new users. To answer it, you must have earned at least 10 reputation on this site.
http://crypto.stackexchange.com/questions/3290/several-questions-about-paillier-cryptosystem?answertab=votes
# Several questions about Paillier cryptosystem I have several questions concerning the original Paillier cryptosystem as described in Paillier, Pascal (1999). "Public-Key Cryptosystems Based on Composite Degree Residuosity Classes". EUROCRYPT. Springer. pp. 223-238. http://www.springerlink.com/content/kwjvf0k8fqyy2h3d/?MUD=MP Notation: $p$ - unencrypted message, plaintext $c$ - encrypted message, ciphertext $r$ - random factor $k_{pub}$ - public key $k_{priv}$ - private key I have a Paillier-encrypted ciphertext ($c$) that is not a straight encryption but the result of an arbitrary number of various true or mixed homomorphic operations or re-randomizations. 1. Assuming I know $p$ and the corresponding private key $k_{priv}$, am I able to compute the random factor $r$ from this so that a reencryption of $p$ with $r$ would be identical to $c$, i.e. $E(p, k_{pub}, r) = c$ ? 2. If positive answer to 1, how do I compute $r$? 3. If positive answer to 1, is it possible that there exists another $r'$ so that a different plaintext $p'$ encrypted with $r'$ would also result in $c$, i.e. $E(p, k_{pub}, r) = c = E(p', k_{pub}, r')$, $p \neq p'$, $r \neq r'$ ? 4. If positive answer to 3, could this $r'$ be efficiently computed, i.e. could the owner of $k_{priv}$ be trusted if he would provide $r$ to a given $c$ as (zero-knowledge) proof of correct decryption? - ## 1 Answer Let us briefly recall the Paillier encryption. Let $k_{pub} = (N = PQ, g)$ be a public key, where $N$ is the RSA modulus. The secret key is $\lambda = \mathrm{lcm}(P-1,Q-1)$ (or $P,Q$). The encryption of $p \in \mathbb{Z}_N$ with randomness $r \in \mathbb{Z}_N^*$ is $C = g^p r^N \bmod{N^2}$. You can verify $\mathbb{Z}_{N^2}^* \simeq \mathbb{Z}_N \times \mathbb{Z}_N^*$. As Paillier wrote, there is a subgroup $G = \{z \mid z = y^N \bmod{N^2}\}$ of order $\phi = (P-1)(Q-1)$. There is a bijective mapping $\tau$ from $\mathbb{Z}_N^*$ to $G$ which maps $y$ to $z = y^N \bmod{N^2}$. The inversion $\tau^{-1}(z)$ is computed by $y = z^{N^{-1} \bmod{\lambda}} \bmod{N}$. Roughly speaking, this is the RSA decryption with $e = N$. Answers: 1. Yes. 2. You can find such $r$. Since you know the plaintext $p$, you can compute $d = g^{-p}C = r^N \bmod{N^2}$. By applying $\tau^{-1}$, you can find $r$. (See Section 5.) 3. No. There is only one $r \in \mathbb{Z}_N^*$. (See Lemma 3.) This makes sense, since there may be a huge number of ciphertexts representing the same plaintext, but they all have to decrypt to just one plaintext. Hence $r$ is unique as long as en- and decryption are executed using the same pair of keys. 4. No. - Dear Community♦. I cannot understand why you add the comment, because Lieven asks the question on the same encryption key. – xag Jul 27 '12 at 15:19 For future reference, Community isn't a real user but a bot that bumps up questions that didn't get an accepted answer every now and then so they don't stay dead (think convection). – Thomas Aug 25 '12 at 14:12 1 @Thomas In this context it wasn't a bump, it was an edit. Probably an edit by an unregistered user. – CodesInChaos Aug 26 '12 at 10:54
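To illustrate point 2 of the answer, here is a toy Python sketch (an addition; the primes are artificially small for readability, a real deployment would use large primes and a vetted library, and it needs Python 3.9+ for `math.lcm` and the modular-inverse form of `pow`). The choice $g = N + 1$ is one standard valid Paillier generator.

```python
from math import gcd, lcm
import random

P, Q = 47, 59                      # toy primes only
N = P * Q
N2 = N * N
g = N + 1                          # a standard valid choice of g
lam = lcm(P - 1, Q - 1)            # lambda = lcm(P - 1, Q - 1)

def encrypt(m, r):
    """Paillier encryption: c = g^m * r^N mod N^2."""
    return (pow(g, m, N2) * pow(r, N, N2)) % N2

# make a ciphertext with known plaintext m and randomness r
m = 1234
r = random.randrange(2, N)
while gcd(r, N) != 1:
    r = random.randrange(2, N)
c = encrypt(m, r)

# recover r knowing m and the private key (point 2 of the answer):
# d = g^{-m} * c = r^N (mod N^2), then r = d^{N^{-1} mod lambda} mod N
d = (pow(pow(g, m, N2), -1, N2) * c) % N2
r_rec = pow(d, pow(N, -1, lam), N)

print(r_rec == r, encrypt(m, r_rec) == c)   # expect: True True
```

This also illustrates point 3: because the pair $(p, r)$ determining a ciphertext is unique, the recovered $r$ necessarily equals the one used at encryption time.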
http://mathoverflow.net/questions/45841/what-is-this-subgroup-of-mathfrak-s-12/
## What is this subgroup of $\mathfrak S_{12}$ ? On some occasion I was gifted a calendar. It displays a math quiz every day of the year. Not really exciting in general, but at least one of them let me raise a group-theoretic question. The quiz: consider a hexagon where the vertices and the middle points of the edges are marked, as in the figure below. [Figure: a regular hexagon with its six vertices and six edge midpoints labelled $a,\ldots,\ell$.] One is asked to place the numbers $1,2,3,4,5,6,8,9,10,11,12,13$ (mind that $7$ is omitted) on points $a,\ldots,\ell$, in such a way that the sum on each edge equals $21$. If you like, you may search for a solution, but this is not my question. Of course the solution is non-unique. You may apply any element of the isometry group of the hexagon. A little subtler is the fact that the permutation $(bc)(ef)(hi)(kl)(dj)$ preserves the set of solutions (check this). Question. What is the invariance group of the solutions set? Presumably, it is generated by the elements described above. What is its order? Because it is not too big, it must be isomorphic to a known group. Which one? - 2 What do you mean by edge? – Mariano Suárez-Alvarez Nov 12 2010 at 16:50 1 I assume edges are subsets of the vertices, and that you want to distribute the numbers among the vertices so that the sum of the numbers corresponding to the vertices of each edge sum up to 14. But then 13 can only be accompanied in an edge by 1, so it belongs to exactly one edge... – Mariano Suárez-Alvarez Nov 12 2010 at 17:04 Grumblh... my English is good enough for PDEs and for matrices, but not for geometry. By "edge", I meant a segment joining two consecutive vertices of the hexagon. There is one point at each vertex, and one point between two consecutive vertices. The sum of the numbers allocated to two consecutive vertices and to their middle point must be $21$. – Denis Serre Nov 12 2010 at 17:20 1 You wrote $14$ in the question, not $21$ :) – Mariano Suárez-Alvarez Nov 12 2010 at 17:22 1 Well, I had assumed that since that was impossible you had something stranger in mind for "edge"... I wonder what the people who voted the question understood! :) – Mariano Suárez-Alvarez Nov 12 2010 at 17:27 show 2 more comments ## 3 Answers The invariance group of the solutions set can be given a geometric interpretation as follows. Note that $\mathfrak{S}_4 \times \frac{\mathbf{Z}}{2\mathbf{Z}}$ is none other than the group of isometries of the cube. It is known that if one cuts a cube by the bisecting plane of a space diagonal, the cross-section is a regular hexagon (see the picture at the middle of this page). The vertices of this hexagon are midpoints of (some) edges of the cube. Let $X$ be the set of corners and middles of this hexagon (it has cardinality $12$). Let us consider the following bijection between $X$ and the set $E$ of edges of the cube: if $[AB]$ is a side of the hexagon, with midpoint $M$, we map $A$ (resp. $B$) to the unique edge $e_A$ (resp. $e_B$) in $E$ containing it, and we map $M$ to the unique edge $e_M \in E$ such that $e_A$, $e_B$ and $e_M$ meet at a common vertex of the cube. Given any solution of the initial problem, we can label the edges of the cube using the above bijection. This labelling has the following nice property: the sum of three edges meeting at a common vertex is always 21. Proof: by construction, six of these eight summing conditions are satisfied.
The remaining two conditions read $b+f+j=d+h+\ell=21$ using Denis' notations, and are implied by the first six conditions. So we found an equivalent ($3$-dimensional) formulation of the problem, namely labelling the edges of a cube. It is now clear that the symmetry group of the cube acts on the set of solutions. It remains to prove that the solution is unique up to isometry, which can be done by hand; here is how I did it: note that only two possible sums involve $1$ (resp. $13$), namely $1+8+12$ and $1+9+11$ (resp. $2+6+13$ and $3+5+13$). Therefore $1$ and $13$ must sit on opposite edges. Then $4$ and $10$ must sit on the unique edges which are parallel to $1$ and $13$. It is then easy to complete the cube. The resulting labelling has some amusing properties. For example, the sum of edges of a given face is always $28$. The sum of two opposite edges is always $14$. Finally, the sum of edges along a cyclohexane-like circuit is always $42$. - 2 +1. This is awesomely beautiful! – Alex Bartel Nov 17 2010 at 13:49 5 And +1 for communication between members of the same laboratory on MO! – Maxime Bourrigan Nov 17 2010 at 17:57 3 @Maxime. Indeed. Discussing this problem at lunch helped a lot in finding the main idea. This shows to me the importance of discussions with colleagues. – François Brunault Nov 17 2010 at 22:22 1 Your construction tells us that the graph of benzene ($C_6H_6$) can be folded over the cube. To do so, label the hydrogens $1,\ldots,6$ clockwise. Then pull upward the vertices $1,3,5$ and merge them. Likewise, pull downward the vertices $2,4,6$ and merge them. You obtain a (combinatorial) cube. Of course, this does not have a chemical counterpart. On the one hand, benzene is a rigid molecule. On the other hand, the bonds $C\sim C$ are not equivalent to the bonds $C-H$. For instance, their lengths are different. – Denis Serre Nov 20 2010 at 8:30 1 @Denis. This is a nice way to visualize the construction. Regarding benzene, we could also say that it's a planar molecule (according to Wikipedia). The shape of cyclohexane ($C_6 H_{12}$) looks more like a cube, but it has too many hydrogen atoms, and the $C-C-C$ angles are 109°, not 90°. Note also that cyclohexane has two conformations, namely "chair" (more stable) and "boat". But the sum of edges along a "boat-like" circuit on the cube is not constant. – François Brunault Nov 20 2010 at 9:50 It is quite easy to prove that the group is exactly what you wrote. It is enough to show that, up to the action of an element of that group, the unique solution is 13.3.5.4.12.8.1.11.9.10.2.6. First, one may of course assume a=13. Since the only way to decompose 8 is 2+6 and 3+5, one might also assume, up to elements in the group, that b=3 and c=5. Clearly {l,k}={2,6}. An elementary calculation shows that the sums b+f+j and d+h+l equal 21, so actually each of the 12 vertices belongs to exactly two sets of 3 vertices with sum 21. Now, let's see where 12 may be. Its complement to 21 is 9, which has three writings: 1+8, 4+5 and 3+6. The last one cannot occur since 3 and 6 are already placed. It is then clear that one must have d=4 and e=12. From here it is easy to fill out the hexagon: since l+d+h=21, l cannot be 2, so l=6 and h=11. Further, {f,g}={1,8}. If f=1, the sum f+b+j cannot be 21. So f=8, g=1, and finally i=9. It's not very elegant, but it works...
- It looks like you are assuming 13 and 12 lie on corners of the hexagon. – S. Carnahan♦ Nov 12 2010 at 18:07 1 Yes, for 13 is clear, since some elements of Denis' group exchange corners and middles. For 12 it is a consequence of my argument: if it were in the middle, i.e. d=12, then either l=6, so h=5, contradiction, or l=2, so h=7, contradiction. – Andrei Moroianu Nov 12 2010 at 18:10 I agree that my argument is not explained in detail, I am new on this site and I have no experience in writing down in real time... – Andrei Moroianu Nov 12 2010 at 18:15 Oh, I had not noticed the exchange. +1. – S. Carnahan♦ Nov 12 2010 at 18:24 @Andrei. Your argument is OK. I had to check the impossibility of placing $12$ on $d$, but this is OK on MO. – Denis Serre Nov 13 2010 at 9:12 show 1 more comment Here is the MAGMA code to generate your group: ````G:=sub<Sym(12)|(1,3,5,7,9,11)*(2,4,6,8,10,12), (2,12)*(3,11)*(4,10)*(5,9)*(6,8), (2,3)*(5,6)*(8,9)*(11,12)*(4,10)>; ```` I have a small function written by Tim Dokchitser that recognises direct and semi-direct products of standard groups like cyclic, symmetric, dihedral, etc. The group you have described is isomorphic to $C_2\times {\mathfrak S}_4$. The unique normal subgroup of order 2 is generated by a reflection in the centre of the hexagon, so its non-trivial element is given by (a,g)(b,h)(c,i)(d,j)(e,k)(f,l). The 4-cycles are somewhat harder to visualise. One of them is (a,d,g,j)(b,c,e,f)(h,i,k,l). The copy of ${\mathfrak S}_4$ that this is contained in (there are three normal subgroups isomorphic to ${\mathfrak S}_4$) is generated by the following elements: • (b, l)(c, k)(d, j)(e, i)(f,h) (reflection in the axis through a and g) • (a, i, e)(b, j, f)(c, k, g)(d, l, h) (counter-clockwise rotation by $2\pi/3$) • (a, g)(b, e)(c, f)(d, j)(h, k)(i, l) (square of the above 4-cycle) • (a, j)(b, k)(c, i)(d, g)(e, h)(f, l) ( a weird thing somewhat similar to the previous one) - 1 I doubt Denis wanted a description of the grou pgenerated by the symmetries he listed... He wants to know the group of symmetries of the set of solutions, which may or not be generated by those he lists. – Mariano Suárez-Alvarez Nov 12 2010 at 16:52 Maybe you are right, in that case I misunderstood the question. Sorry! – Alex Bartel Nov 12 2010 at 16:54 Even though I have never seen a line of MAGMA code, I can understand this one. However, I am not sure to understand the result $C_2\times S_4$. Do you mean $\mathfrak S_4$ ? If so, how do you visualize an action of $\mathfrak S_4$ here ? What do you mean by "generated by a reflection in the central point of the hexagon" ? Finally, what kind of output message do you get from MAGMA ? – Denis Serre Nov 12 2010 at 16:58 @Mariano and Alex. Actually, I am interested in both questions. I bet that the symmetry group $S$ of the solution set is that $S_0$ generated by those elements I described, but I have no proof. And I had difficulty to visualize the order and the structure of $S_0$. – Denis Serre Nov 12 2010 at 17:00 1 @Andrei and Alex. You both deserve $+1$ for your answers. But I am embarrassed to choose which one to accept as the answer to my question. Alex identified completely the group from the generators I gave, but Andrei's analysis shows that this is the invariance group of the solution set (every solution is, up to the action of the group, that one he calculated). – Denis Serre Nov 13 2010 at 9:25 show 6 more comments
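For readers who want to see the numbers, here is a small brute-force check (my addition, not from the thread), written in Python rather than MAGMA. It labels the points a–l as positions 0–11, takes the six edges as (a,b,c), (c,d,e), (e,f,g), (g,h,i), (i,j,k), (k,l,a) in Denis' notation, enumerates all solutions, and then computes the order of the invariance group of the solution set; if the answers above are right, both counts come out to 48.

```python
NUMBERS = [1, 2, 3, 4, 5, 6, 8, 9, 10, 11, 12, 13]
# positions a..l are indices 0..11; with Denis' labelling the six edges
# (vertex, midpoint, vertex) are:
EDGES = [(0, 1, 2), (2, 3, 4), (4, 5, 6), (6, 7, 8), (8, 9, 10), (10, 11, 0)]

def solutions():
    """All assignments of NUMBERS to positions a..l with every edge summing to 21."""
    sols = []
    def extend(assign):
        if len(assign) == 12:
            sols.append(tuple(assign))
            return
        for n in NUMBERS:
            if n in assign:
                continue
            assign.append(n)
            if all(sum(assign[p] for p in e) == 21
                   for e in EDGES if max(e) < len(assign)):
                extend(assign)
            assign.pop()
    extend([])
    return sols

sols = solutions()
sol_set = set(sols)
print(len(sols))                     # 48 if the uniqueness argument above is right

def perm_from(s_from, s_to):
    """Position permutation sigma sending the solution s_from to the solution s_to."""
    pos_of = {v: q for q, v in enumerate(s_to)}
    return tuple(pos_of[s_from[p]] for p in range(12))

def act(sigma, s):
    out = [None] * 12
    for p in range(12):
        out[sigma[p]] = s[p]
    return tuple(out)

# any invariance permutation must send sols[0] to some solution, so these are
# the only candidates; keep those that preserve the whole solution set
candidates = [perm_from(sols[0], s) for s in sols]
group = [g for g in candidates if all(act(g, s) in sol_set for s in sols)]
print(len(group))                    # expected 48 = |C_2 x S_4|
```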
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 64, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9219761490821838, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/282731/is-there-an-analytical-solution-to-this-system-of-odes
# Is there an analytical solution to this system of ODEs?

I have been working on a problem, and reached a part where I have a system of ODEs. Let $x(t) \in \mathbb R^n$ be a function of the single variable $t$, and $\dot x(t)$ be its derivative. I will drop the $(t)$ for ease of notation. Let $m < n$, and let $A$ be a full rank $m \times n$ matrix. Also, for $i = 1,\ldots,n-m$ let $L_i$ be a subset of $\{1,2,\ldots,n\}$ and $1 \le k_i \le n$, with $k_i \not\in L_i$. Let $x^{L_i} \in \mathbb R^{|L_i|}$ be the components of $x$ corresponding to the indices in $L_i$; the same for $\dot x$. The system of ODEs I am trying to solve has the following form:

\begin{align*} A\dot x &= 0 & \\ 2t \langle x^{L_i}, \dot x^{L_i}\rangle - \dot x^{k_i} &= -\Vert x^{L_i} \Vert && \mbox{$\forall i = 1,\ldots,n-m$} \end{align*}

I don't know if my notation is confusing. If it is, please let me know. I am almost sure that I won't be able to solve it analytically, but a man can dream :)

Given that an explicit solution is rather unlikely (you have irrational nonlinearity in $\|\cdot \|$), what is the purpose of your investigation of this ODE system? Are you interested in asymptotic long-term behavior, stability with respect to initial values, or in approximate solutions? – user53153 Jan 28 at 3:54

Both stability with respect to initial values and approximate solutions would be very interesting. Especially approximate solutions! In my application I will only need values of $x(t)$ for $t \in [0,1]$, so asymptotic behavior is not really important. – Daniel Fleischman Jan 28 at 10:32
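No closed-form answer was posted. One observation (mine, not from the thread): for fixed $t$ and $x$ every equation above is linear in $\dot x$, so approximate solutions on $[0,1]$ can be generated by solving an $n\times n$ linear system for $\dot x$ at each step and handing the result to a standard integrator. A minimal sketch with made-up data follows; the particular $A$, $L_i$, $k_i$ and initial condition are placeholders, and invertibility of the assembled matrix is simply assumed.

```python
import numpy as np
from scipy.integrate import solve_ivp

# toy data (placeholders, not from the question)
n, m = 4, 2
A = np.array([[1.0, 0.0, 1.0, -1.0],
              [0.0, 2.0, -1.0, 0.0]])       # full-rank m x n
L = [[0, 1], [1, 2]]                         # the index sets L_i (0-based)
k = [2, 3]                                   # the indices k_i, with k_i not in L_i

def xdot(t, x):
    """Assemble and solve the linear system M(t, x) v = b(x) for v = dx/dt."""
    M = np.zeros((n, n))
    b = np.zeros(n)
    M[:m, :] = A                             # rows for  A v = 0
    for i, (Li, ki) in enumerate(zip(L, k)):
        M[m + i, Li] = 2.0 * t * x[Li]       # 2 t <x^{L_i}, v^{L_i}>
        M[m + i, ki] = -1.0                  #   - v^{k_i}
        b[m + i] = -np.linalg.norm(x[Li])    # = -||x^{L_i}||
    return np.linalg.solve(M, b)             # assumes M stays invertible along the trajectory

x0 = np.array([1.0, 0.5, -0.3, 0.2])
sol = solve_ivp(xdot, (0.0, 1.0), x0, rtol=1e-8, atol=1e-10)
print(sol.y[:, -1])                          # x(1) for this toy instance
```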
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 19, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.910579264163971, "perplexity_flag": "head"}
http://physics.stackexchange.com/questions/29370/computing-gravitational-deflection-of-light-knowing-phi-and-nabla-phi
# Computing gravitational deflection of light knowing $\phi$ and $-\nabla \phi$?

I have a 3D Cartesian grid, and in each grid cell I know the gravitational potential $\phi$ and the 3D gravitational field $-\nabla \phi$ (in a Newtonian approach). How can I compute the path of a photon through this grid and its 3D deflection in each cell?

Thank you very much.

## 1 Answer

The deflection in each cell is twice that which it would be for a Newtonian particle coming in with velocity c. You apply the transverse force to change the direction, multiplying by 2 so as to account for the GR space-space parts of the metric tensor.

This assumes that the matter making the gravity is nonrelativistic (so that you are justified in using a Newtonian potential in the first place) and at low pressure (also important; true for normal matter, not neutron stars), so that the light is moving through the field effectively instantaneously. I have assumed the deflection is small, but if the deflection is not small from one end of the box to the other, you are probably not justified in using Newtonian gravity over the whole box.
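A minimal sketch of the recipe in this answer (my own illustration, not part of the original post): march the photon in straight-line steps and, at each step, bend the direction by twice the Newtonian transverse deflection. The function `grav_field` stands for whatever interpolation of the stored $-\nabla\phi$ values is available; the point-mass field at the end is only a stand-in so the snippet runs.

```python
import numpy as np

C = 299_792_458.0  # m/s

def trace_photon(pos, direction, grav_field, ds, n_steps):
    """March a photon through the grid, bending by twice the Newtonian deflection.

    grav_field(pos) must return the local acceleration -grad(phi); here it is
    assumed to be some interpolation of the values stored in the grid cells.
    """
    pos = np.asarray(pos, dtype=float)
    n = np.asarray(direction, dtype=float)
    n /= np.linalg.norm(n)
    path = [pos.copy()]
    for _ in range(n_steps):
        a = grav_field(pos)
        a_perp = a - np.dot(a, n) * n        # transverse part of -grad(phi)
        n = n + 2.0 * a_perp * ds / C**2     # factor 2 from the space-space metric terms
        n /= np.linalg.norm(n)
        pos = pos + n * ds
        path.append(pos.copy())
    return np.array(path)

# stand-in field (a point mass) so the sketch runs without a real grid:
G, M = 6.674e-11, 1.989e30
field = lambda r: -G * M * r / np.linalg.norm(r) ** 3
path = trace_photon([-1e12, 7e8, 0.0], [1.0, 0.0, 0.0], field, ds=1e9, n_steps=2000)
```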
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 4, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9363006949424744, "perplexity_flag": "head"}
http://mathhelpforum.com/calculus/211605-how-can-i-prove-two-mean-values-not-equal-each-other.html
# Thread: How can I prove two mean values are not equal to each other?

2. ## Re: How can I prove two mean values are not equal to each other?

You need some information about $g(t)$. If it's symmetric about the line $x=\frac{1}{2}$, then $\xi=\eta=\frac{1}{2}$.

- Hollywood
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 2, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8319181799888611, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/204680/complex-analysis-integration-question?answertab=active
# Complex analysis integration question Let $f(z) = A_0 + A_1z + A_2z^2 + \ldots + A_nz^n$ be a complex polynomial of degree $n > 0$. Show that $\frac{1}{2\pi i} \int\limits_{|z|=R} \! z^{n-1} |f(z)|^2 dz = A_0 \bar{A_n}R^{2n}$. - thanks for editing the format! i didn't know how to format it properly – matt Sep 30 '12 at 2:40 1 Write out the conjugate of $f(z)$, multiply by $f(z)$ itself, expand the whole thing, and integrate term-by-term. Most of those integrals are zero. – GEdgar Sep 30 '12 at 3:36 thanks, i tried that, but i end up with all of the integrals being 0. which integrals wouldn't be 0? – matt Sep 30 '12 at 5:16 you end up with some sum of Ai's and conjugate of Ai's as the coefficient for z^q for some integer q. but the integral on a smooth closed curve for z^q is always 0 for q >= 1. – matt Sep 30 '12 at 5:18 if i just look at the Ao*conjugate(An) term. i have it as a coefficient for z^n * z^(n-1) = z^(2n-1). i let z = Re^(it), 0<=t<= 2pi. – matt Sep 30 '12 at 5:47 show 2 more comments ## 2 Answers Denote by $\bar f$ the polynomial obtained from $f$ by conjugating its coefficients $A_k$. When $|z|^2=z\bar z=R^2$ then $$|f(z)|^2=f(z)\,\overline{f(z)}=f(z)\bar f(\bar z)=f(z)\bar f\Bigl({R^2\over z}\Bigr)\ .$$ Now $$z^{n-1}\bar f\Bigl({R^2\over z}\Bigr)=\bar A_n{R^{2n}\over z} + q(z)\ ,$$ where $q$ is a certain polynomial. It follows that $${1\over2\pi i}\int\nolimits_{\partial D_R} z^{n-1}|f(z)|^2\ dz={1\over2\pi i}\int\nolimits_{\partial D_R} f(z)\Bigl(\bar A_n{R^{2n}\over z} + q(z)\Bigr)\ dz=A_0\bar A_n R^{2n}\ ,$$ because $f(z)=A_0+ z\, p(z)$ for some polynomial $p$. - great! thanks :D – matt Oct 1 '12 at 3:00 Let $\Gamma = \{z: |z| = R\}$. Recall that $$\int_{\Gamma} z^k \, dz = \begin{cases} 0 & k \neq -1 \\ 2\pi i & k = -1 \end{cases}$$ Now, when we multiply out $|f(z)|^2$ in terms of $z$ and $\overline{z}$, we are ultimately evaluating an integral of the following form: $$\frac{1}{2\pi i} \int_{\Gamma} \sum_j B_j z^{p_j} \overline{z}^{k_j} \, dz = \frac{1}{2\pi i} \int_{\Gamma} \sum_j B_j z^{p_j-k_j} R^{2k_j} \, dz$$ for some powers $p_j, k_j$. Then, since we know that $z^k$ integrates to $0$ unless $k = -1$, then we require that $p_j - k_j = -1$. Since the whole integrand is multiplied by $z^{n-1}$ originally, then it must be that $p_j = 0, k_j = n$, so the only term that does not vanish is the one that has the term that was formed from multiplied the $A_0$ term with the $\overline{A}_n\overline{z}^n$ term. Therefore, in summary, $$\frac{1}{2\pi i} \int_{\Gamma} z^{n-1} |f(z)|^2 \, dz = \frac{1}{2\pi i} \int_{\Gamma} A_0\overline{A}_n R^{2n} z^{-1} \, dz$$ which evaluates to your desired result. - that works! thanks a bunch :) – matt Oct 1 '12 at 2:54 Sorry, I noticed some typos in my answer, which I will now fix. – Christopher A. Wong Oct 1 '12 at 3:28
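A quick numerical sanity check of the identity (my addition): on $|z|=R$, the value $\frac{1}{2\pi i}\oint g(z)\,dz$ is just the average of $z\,g(z)$ over equally spaced points of the circle, and that average is computed exactly here because the integrand is a trigonometric polynomial of low degree.

```python
import numpy as np

rng = np.random.default_rng(0)
n, R = 4, 1.7
A = rng.normal(size=n + 1) + 1j * rng.normal(size=n + 1)   # A_0, ..., A_n

theta = np.linspace(0.0, 2.0 * np.pi, 4000, endpoint=False)
z = R * np.exp(1j * theta)
f = np.polyval(A[::-1], z)                   # A_0 + A_1 z + ... + A_n z^n
# (1/(2 pi i)) * contour integral of z^(n-1) |f|^2 dz = mean of z^n |f(z)|^2 over the circle
lhs = np.mean(z ** n * np.abs(f) ** 2)
rhs = A[0] * np.conj(A[n]) * R ** (2 * n)
print(abs(lhs - rhs))                        # essentially machine precision
```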
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 25, "mathjax_display_tex": 6, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.934688150882721, "perplexity_flag": "middle"}
http://www.physicsforums.com/showthread.php?p=3731555
Physics Forums

## Curvilinear Motion

**1. The problem statement, all variables and given/known data**

A stunt car is driven off a cliff with a speed of 110 ft/s. What is the gravitational acceleration of the car normal to its path after falling for 3 seconds?

**2. Relevant equations**

The kinematic equations...? I'm pretty sure that this should be done in normal and tangential components, so with that said:

$s = \theta r$, $a_t = \dot{v} = v \frac{dv}{ds} = \alpha r$, $v = \dot{s} = \omega r$, $a_n = \frac{v^2}{\rho} = \omega^2 r$,

where $\rho$ is the radius of curvature.

**3. The attempt at a solution**

For the x-direction: $(v_0)_x = 110$, $t = 3$, so $\Delta x = (v_0)_x t = (110)(3) = 330$.

For the y-direction: $y = y_0 + (v_0)_y t + \frac{1}{2} a t^2$. Solving for the distance in the y-direction: $y = \frac{1}{2}(-g)t^2$ with $t = 3$, so $y = -144.9$ ft.

But I really have no idea if any of that is necessary, or if it is, where do I go from there?

Hi mathmann,

You got the position of the car, and you know that the acceleration is g, downward. You need the normal component of the acceleration. The normal of a curve is perpendicular to its tangent. And you also know that the velocity is tangent to the path at any point. Find the velocity vector at the point (330, -144.9) first.

ehild

OK, so for the x-direction, since there is no acceleration, $v_x = 110$, and for the y-direction $v^2 = v_0^2 + 2(-g)(-144.9)$ gives $v_y = 96.6$; this is my guess on what to do next. Then $v = \sqrt{v_x^2 + v_y^2} = 146.39$. But that is a scalar..? So again I'm stuck. Thank you for your first post, though.

The velocity is a vector. Write it out with its horizontal and vertical components. I see you have set up the coordinate system so that the y axis points upward.

ehild

So $v = (110, 96.6)$ ft/s. I know this is a very basic question, I am just so lost on what to do. I don't know if this will be any use, but here is the picture for the question. (Attached thumbnail not shown.)

The car moves downward. What is the sign of the y component of velocity? Could you show the velocity vector in the picture? You need the direction perpendicular to the velocity. What do you know about the components of vectors which are perpendicular to each other?

ehild

Tags: 2-d motion, curvilinear motion, dynamics
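The thread stops at ehild's last hint. For reference, the projection the hint describes can be carried out numerically (my addition, taking $g = 32.2\ \mathrm{ft/s^2}$):

```python
import numpy as np

g = 32.2                          # ft/s^2
v = np.array([110.0, -g * 3.0])   # velocity after 3 s: (110, -96.6) ft/s
a = np.array([0.0, -g])           # gravitational acceleration, straight down

t_hat = v / np.linalg.norm(v)             # unit tangent = direction of the velocity
a_t = np.dot(a, t_hat)                    # tangential component of g
a_n = np.linalg.norm(a - a_t * t_hat)     # normal component of g
print(a_t, a_n)                           # about 21.2 and 24.2 ft/s^2
```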
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 16, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9091947674751282, "perplexity_flag": "middle"}
http://stats.stackexchange.com/questions/1447/r-squared-i-have-never-fully-grasped-the-interpretation?answertab=oldest
# r-squared: I have never fully grasped the interpretation I want to fully grasp the notion of $r^2$ describing the amount of variation between variables. Every web explanation is a bit mechanical and obtuse. I want to "get" the concept, not just mechanically use the numbers. E.g.: Hours studied vs. test score $r$ = .8 $r^2$ = .64 • So, what does this mean? • 64% of the variability of test scores can be explained by hours? • How do we know that just by squaring? - your question is not about R vs R-square (you understand that $0.8^2=0.64$) it is about interpretation of $r^2$. Please reformulate the title. – robin girard Aug 9 '10 at 15:03 – Abe Apr 22 at 19:04 ## 3 Answers A mathematical demonstration of the relationship between the two is here: Pearson's correlation and least squares regression analysis. I am not sure if there is a geometric or any other intuition that can be offered apart from the math but if I can think of one I will update this answer. Update: Geometric Intuition Here is a geometric intuition I came up with. Suppose that you have two variables $x$ and $y$ which are mean centered. (Assuming mean centered lets us ignore the intercept which simplifies the geometrical intuition a bit.) Let us first consider the geometry of linear regression. In linear regression, we model $y$ as follows: $y = x\ \beta + \epsilon$. Consider the situation when we have two observations from the above data generating process given by the pairs ($y_1,y_2$) and ($x_1,x_2$). We can view them as vectors in two-dimensional space as shown in the figure below: Thus, in terms of the above geometry, our goal is to find a $\beta$ such that the vector $x\ \beta$ is the closest possible to the vector $y$. Note that different choices of $\beta$ scale $x$ appropriately. Let $\hat{\beta}$ be the value of $\beta$ that is our best possible approximation of $y$ and denote $\hat{y} = x\ \hat{\beta}$. Thus, $y = \hat{y} + \hat{\epsilon}$ From a geometrical perspective we have three vectors. $y$, $\hat{y}$ and $\hat{\epsilon}$. A little thought suggests that we must choose $\hat{\beta}$ such that three vectors look like the one below: In other words, we need to choose $\beta$ such that the angle between $x\ \beta$ and $\hat{\epsilon}$ is 900. So, how much variation in $y$ have we explained with this projection of $y$ onto the vector $x$. Since the data is mean centered the variance in $y$ is equals ($y_1^2+y_2^2$) which is the square of the distance between the point represented by the point $y$ and the origin. The variation in $\hat{y}$ is similarly the distance from the point $\hat{y}$ and the origin and so on. By the Pythagorean theorem, we have: $y^2 = \hat{y}^2 + \hat{\epsilon}^2$ Therefore, the proportion of the variance explained by $x$ is $\frac{\hat{y}^2}{y^2}$. Notice also that $cos(\theta) = \frac{\hat{y}}{y}$. and the wiki tells us that the geometrical interpretation of correlation is that correlation equals the cosine of the angle between the mean-centered vectors. Therefore, we have the required relationship: (Correlation)2 = Proportion of variation in $y$ explained by $x$. Hope that helps. - I do appreciate your attempt at helping, but unfortunately, this just made things 10x worse. Are you really introducing trigonometry to explain r^2? You're way too smart to be a good teacher! – JackOfAll Aug 10 '10 at 1:49 I thought that you wanted to know why correlation^2 = R^2. In any case, different ways of understanding the same concept helps or at least that is my perspective. 
– user28 Aug 10 '10 at 8:47 Start with the basic idea of variation. Your beginning model is the sum of the squared deviations from the mean. The R^2 value is the proportion of that variation that is accounted for by using an alternative model. For example, R-squared tells you how much of the variation in Y you can get rid of by summing up the squared distances from a regression line, rather than the mean. I think this is made perfectly clear if we consider that regression problem and imagine that plotted out. Imagine a typical scatterplot where you have a predictor X along the horizontal axis and a response Y along the vertical axis. The mean is a horizontal line on the plot where Y is constant. The Y variation is the sum of squared differences between the mean of Y and each individual data point. It's the distance between the mean line and every individual point squared and added up. You can also calculate another measure of variability after you have the regression line from the model. This is the difference between each Y point and the regression line. Rather than each (Y - the mean) squared we get (Y - the point on the regression line) squared. If the regression line is anything but horizontal, we're going to get less total distance when we use that instead of the mean--that is there is less unexplained variation. The ratio between the extra variation explained and the original variation is your R^2. It's the proportion of the original variation in your response that is explained by fitting that regression line. See the (quick and dirty) image below... Uploaded with ImageShack.us. Here is some R code for a basic graph with mean and the regression line plotted to help visualize, but without some helpful notes: ````data(trees) plot((trees$Volume~trees$Girth)) abline(lm(trees$Volume~trees$Girth)) abline(lm(trees$Volume~1)) ```` - > The ratio between the variation explained and the original variation is your R^2 Let's see if I got this. If the original variation from mean totals 100, and the regression variation totals 20, then the ratio = 20/100 = .2 You're saying R^2 = .2 b/c 20% of the mean variation (red) is accounted for by the explained variation (green) (In the case of r=1) If the original variation totals 50, and the regression variation totals 0, then the ratio = 0/50 = 0 = 0% of the variation from the mean (red) is accounted for by the explained variation (green) I'd expect R^2 to be 1, not 0. – JackOfAll Aug 10 '10 at 1:51 R^2 = 1-(SSR/SST) or (SST-SSR)/SST. So, in your examples, R^2=.80 and 1.00. The difference between the regression line and each point is that left UNexplained by the fit. The rest is the proportion explained. Otherwise, that's exactly right. – Brett Magill Aug 10 '10 at 4:17 I edited that last paragraph to try to make it a bit clearer. Conceptually (and computationally) all you need is there. It might be clearer to actually add the formula and refer to the SST SSE and SSR, but then I was trying to get at it conceptually – Brett Magill Aug 10 '10 at 4:18 ie: R^2 is the proportion of the total variation from mean (SST) that is the difference b/w the expected regression value and mean value (SSE). In my example of hours vs. score, the regression value would be the expected test score based on correlation with hours studied. Any additional variation from that is attributed to SSR. For a given point, hours studied variable/regression explained x% of the total variation from the mean (SST). With a high r-value, "explained" is big percentage of SST compared to SSR. 
With a low r-value, "explained" is a lower percentage of SST compared to SSR. – JackOfAll Aug 10 '10 at 13:56 The Regression By Eye applet could be of use if you're trying to develop some intuition. It lets you generate data then guess a value for R, which you can then compare with the actual value. -
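To see the variance decomposition and the $r^2 = R^2$ identity on simulated data, here is a short check (my addition, in Python rather than the R used above):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=200)
y = 2.0 + 0.8 * x + rng.normal(size=200)          # noisy linear relationship

slope, intercept = np.polyfit(x, y, 1)
y_hat = intercept + slope * x

ss_total = np.sum((y - y.mean()) ** 2)            # squared distances to the mean line
ss_resid = np.sum((y - y_hat) ** 2)               # squared distances to the regression line
r_squared = 1.0 - ss_resid / ss_total             # proportion of variation accounted for

r = np.corrcoef(x, y)[0, 1]
print(r_squared, r ** 2)                          # identical for simple linear regression
```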
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 42, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9423054456710815, "perplexity_flag": "middle"}
http://mrchasemath.wordpress.com/2012/03/07/rationalization-rant/
# Rationalization Rant

Posted on March 7, 2012 by Mr. Chase

Every high school math student has been taught how to rationalize the denominator. We tell students not to give an answer like $\frac{1}{\sqrt{2}}$ because it isn't fully "simplified." Rather, they should report it as $\frac{\sqrt{2}}{2}.$ This is fair, even though the second answer isn't much simpler than the first. What does it really mean to simplify an expression? It's a pretty nebulous instruction.

We also don't consider $\frac{12}{1+\sqrt{5}}$ to be rationalized because of the square root in the denominator, so we multiply by the conjugate to obtain $3\sqrt{5}-3.$ In this particular example, multiplying by the conjugate was really fruitful and the resulting expression does indeed seem much more desirable than the original expression.

But here's where it gets a little ridiculous. Our Algebra 2 book also calls for students to rationalize the denominator when (1) a higher root is present and (2) roots containing variables are present. Let me show you an example of each situation, and explain why this is going a little too far.

## Rationalizing higher roots

First, when a higher root is present like $\sqrt[5]{\frac{15}{2}},$ the book would have students multiply the top and bottom of the fraction inside the radical by $2^4$ so as to make a perfect fifth root in the denominator. The final answer would be $\frac{\sqrt[5]{240}}{2}.$ Simpler? You decide.

This becomes especially problematic when we encounter sums involving higher roots. It's certainly possible, using various tricks, to rationalize the denominator in expressions like this: $\frac{1}{2-\sqrt[3]{5}}.$ But is that really desirable? The result here is $\frac{1}{2-\sqrt[3]{5}}\cdot\frac{4+2\sqrt[3]{5}+\sqrt[3]{25}}{4+2\sqrt[3]{5}+\sqrt[3]{25}}=\frac{4+2\sqrt[3]{5}+\sqrt[3]{25}}{3},$ which is, arguably, more complex than the original expression. Can anyone think of a good reason to do this, except just for fun?

## Rationalizing variable expressions

Now, let's think about variable expressions. Here is a problem, directly from our Algebra 2 book (note the directions as well): Write the expression in simplest form. Assume all variables are positive. $\sqrt[3]{\frac{x}{y^7}}$

The method that leads to the "correct" solution is to multiply the fraction under the radical by $\frac{y^2}{y^2}$, and to finally write $\frac{\sqrt[3]{xy^2}}{y^3}.$ This is problematic for two reasons. (1) This isn't really simpler than the original expression and (2) this expression isn't even guaranteed to have a denominator that's rational! (Suppose $y=\sqrt{2}$ or even $y=\pi$.) Once again I ask, can anyone think of a good reason to do this, except just for fun??

## So how far do we take this?

Is it reasonable to ask someone to rationalize this denominator? $\frac{1}{2\sqrt{2}-\sqrt{2}\sqrt[3]{5}+2\sqrt{5}-5^{5/6}}$ You can rationalize the denominator, but I'll leave that as an exercise for the reader.

So how far do we take this? I had to craft the above expression very carefully so that it works out well, but in general, most expressions have denominators that can't be rationalized (and I do mean "most expressions" in the technical, mathematical way: there are an uncountable number of denominators of the unrationalizable type).

All that being said, I think this would make a great t-shirt (image not reproduced here). And I rest my case.

This entry was posted in Algebra 1, Algebra 2, Precalculus by Mr. Chase.

## 16 thoughts on "Rationalization Rant"

1.
genechase on March 7, 2012 at 12:18 pm said: Glad to find that the next generation is carrying the torch, son! I’ve had this rant for decades. Here’s my take: Ban all mention of the term simplify unless it is accompanied by the phrase for the purpose of …. In my History of Math class today I told them about the accomplishment of which François Viète (1540-1603) was proudest: The roots of a polynomial are symmetric functions of its coefficients. Note that (x-a)(x-b)=0 is simpler for the purpose of finding the roots of the equation (that is, the zeros of the polynomial) than the equivalent x^2 -(a+b) x + ab = 0. But for the purpose of finding where y = x^2 + 3x – 5 crosses the y-axis, I’ll take that form over the equivalent factored form y = (x – (-3 – √29)/2) ( x – (√29 – 3)/2) any day. Fortunately, there are other words than “simplify” in this context: “find the linear factors,” or “express as a trinomial.” Trig substitutions simplify an integrand for the purpose of recognizing something that I can integrate. I’ll give the Calculus instructors the right to leave out the obvious: “Use a trig substitution to simplify” without mentioning the obvious purpose. Vi Hart’s video about the number Wau comes to mind. She instructs us to “complexify the following” (my term). And that’s more fun. • Mr. Chase on March 7, 2012 at 9:08 pm said: Yes, I totally agree about the use of the word ‘simplify’. As a teacher I always have to specify, ‘write the expression using no negative exponents,’ or ‘reduce the expression to the argument of a single log’ or whatever else. Amen to that! I think our issue with Rationalization runs even deeper, too. Usually there’s a good purpose for the desired manipulation (like trig substitutions in calculus). But here, I don’t see any good purpose for rationalizing the denominator. Can you give a math history perspective on that? Was there a time when it was more useful, but now it’s not? I just can’t think of a very good argument why anyone would ever *need* to rationalize the denominator. • on March 8, 2012 at 8:34 am said: One thread in history of math is an ever-changing notion of what counts as a number. For Pythagoras, √2 was a magnitude but not a number. Now it is a number. We want a number as an answer, not an indicated operation. √25 is not a number, but 5 is. 1/√5 is not a number, but √5/5 is. I was amazed to learn that in India in the Middle Ages, things that I would regard even today as indicated operations were regarded unproblematically as numbers, such as √(√5 -1). Math history is not as simple as having the notion of number becoming ever-more inclusive, as we were led to believe in math classes (positive integers – integers – rationals – reals – complexes – quaternions – octonions). It’s been a labyrinth of give-and-take. Rafael Bombelli used complex numbers to solve polynomial equations in the 16th C., but thought they were “sophistry,” not numbers, 350 years before David Hilbert’s “formalist” program of mathematics. For heaven’s sake! Even 1 was not considered a number until the Middle Ages. See the bottom of page 2 of this source for a list of quotations to prove that, from an MIT course on ancient history of mathematics. 2. Gabriel Verret on March 8, 2012 at 6:01 am said: I think the purpose in general is not simplification but to express everything in a well-defined “canonical” form. This can be very useful for example for a grader who can immediately recognise that the answer is correct. 
• on March 8, 2012 at 8:46 am said: And canonical forms have value independent of graders. Writing the equation of an ellipse in canonical form allows us to identify the lengths of semi-major and semi-minor axes and location of center in an instant. Since the goal of mathematics is insight, not computation, canonical forms play a special role in mathematics: a diagonalized matrix, a boolean expression in disjunctive normal form, a predicate calculus proposition in prenex normal form, or invariant factor decomposition of a finitely generated abelian group. There are hundreds of examples. • Gabriel Verret on March 9, 2012 at 7:45 am said: I agree that canonical forms are very important. That’s at least one important justification for rationalization. • Mr. Chase on March 9, 2012 at 10:04 am said: But what’s the value of the canonical form *here*? I see GREAT value in other canonical forms, like the vertex form for a parabola, or the factored form for a polynomial, or the diagonalization of a matrix. I guess just to show that two radical expressions are equivalent? Not sure. Second question: what IS the canonical form for the two examples I presented in the post? (higher order roots and radical expressions involving variables) Most often, like I said, such a form isn’t even possible. 3. on March 10, 2012 at 8:58 am said: Hi, I always thought that the historical reason was that it is easier to get a decimal approximation by hand with rationalized denominators. For instance, it is easier to divide 1.414. . . by 2 using long division than it is to divide 1 by 1.414. . . using long division. With a rationalized denominator, you can easily increase your precision with long division by simply using 1.414 instead of 1.41 (say), and the reusing the exact work you did before. If you calculate 1 divided by 1.41and later decide you need the precision of 1.414, then you pretty much have to start your long division again from scratch. So I think rationalizing was useful in the days when calculators did not exist; I do not see how the technique is useful now. Bret • on March 12, 2012 at 9:21 am said: Agreed. Rationalizing denominators was truly practical back in those days. Maybe rationalization should go the way of trig tables and log tables and log trig tables, now being replaced by a calculator (that is, calculator the machine, not calculator the person). At least for students not planning to study math beyond one college course. But see my next post for why the more theoretical you are, the more you need to rationalize denominators. And as a mathematician I always shudder when students limit their options by predicting what math they might need. I once earned money tutoring a grad student in veterinarian school who needed Calculus to understand one of the professional papers he was assigned to read and master. When he was in high school he was like, “I’m going to be a vet. I don’t need more math.” (Intentional youth lingo.) You can never learn too much math! 4. on March 12, 2012 at 9:09 am said: Let Q be the rational numbers. Let Q[√5] mean the set of all numbers of the form a + b √5 with a, b in Q. We say “Q[√5] is Q with √5 adjoined.” One can show that you can do division in Q[√5], which is to say that (a+b√5)/(c+d√5) is again of the form x + y√5 with x and y rational numbers. We can repeat this process. The field (technical term alert!) Q[√5] can again be extended to Q[√5][√3] which is all numbers of the form a + b√5 where a, b are taken from Q[√5]. 
That means that such things as 1/(x + y√5 + z√3 + t√5√3) can be rationalized, and furthermore that the result will be of the form a + b√5 + c√3 + d√5√3. The proof is constructive: We actually rationalize the denominator. Theorems well-known to those who study enough abstract algebra show that one can generalize this with square roots of any rationals adjoined to the rationals. Since Euclidean (straightedge and compass) constructions can do iterated square roots, all of this has a nice geometric interpretation too. I won’t speak today to the usefulness of rationalizing more general expressions, but it seems that a student of abstract algebra shouldn’t be handicapped by having never practiced rationalizing fractions with square roots in the denominator. Exercise: To Q adjoin c = the cube root of 5. Can you do division in the resulting set of numbers, Q[c], which is to say numbers of the form x + yc ? Exercise: generalize the above exercise. • on March 12, 2012 at 9:23 am said: I meant “a + b√3 where a, b are taken from Q[[√5]” at the end of paragraph 1. • Mr. Chase on March 12, 2012 at 11:02 am said: I agree 100% with everything here. In fact, the very examples you quoted came up in my Algebra class last semester. And I admitted in my original post that rationalization in those particular cases is useful. But what about ANY expression involving radicals in the denominator? And what about variable expressions? I’m not sure I see the value as much there. I also agree 100% that students should not limit their mathematical options. But in this case, I felt like the book and curriculum were presenting contrived problems without providing a justification. “For fun” is even a valid justification as far as I’m concerned, but it should be acknowledged. • genechase on March 12, 2012 at 3:39 pm said: I prefer problems that are not contrived. I prefer telling students why what they’re doing will be worth their time, or letting them discover it. It doesn’t have to be a technical explanation. “Fun” is a good motive. Another good motivator is to suggest a short reading. An often missing motivator is to ask a student to reflect on her work to ask what future direction it could go in. I did that with my comment above, where I say “generalize,” and I did it with my Pi R Squared post where I end with, “Can you think of anything that involves π but doesn’t involve a circle?” This is George Polya’s Step 4 in his famous book How To Solve It. But I admit that sometimes I am more concerned with how much I cover in a lesson than with how much I uncover. Testing whether comments can embed LaTeX: $\Sigma$ 5. Pingback: Mathblogging.org Weekly Picks « Mathblogging.org — the Blog 6. Jen Thoman on January 3, 2013 at 8:00 pm said: I teach high school math. Today I taught rationalizing the denominator. I fully understand the concept and I agree with the above comments. For extra credit, I asked the students to find who came up with the idea of rationalizing the denominator and why. I understand the why. What I haven’t been able to find is the “who.” Is there one person credited with the concept? 7. on January 3, 2013 at 11:24 pm said: I’m not sure. In the 12th century, Islamic commentators on Euclid’s Elements, Book X, rationalized denominators, but I’m not sure which one would have been first. Here are some possibilities (from Victor J. Katz, op. cit., pp. 
332-333, http://www.amazon.com/History-Mathematics-3rd-Victor-Katz/dp/0321387007 ): Al-Khwarizmi, al-Khayyami, Sharaf al-Din a-Tusi, Ibn Mun'im, al-Samaw'al, al-Karkhi (i.e. al-Karaji), al-Samaw'al, or Ibn al-Baghdadi. But even if you found the first person to mention it, that wouldn't necessarily be the one who should get the credit. L'Hopital gets credit for a Calculus rule because he was the first to mention it in print, but in fact John (Johann) Bernoulli is more likely to be the creator of L'Hopital's rule, at least according to Ron Larson (at http://www.amazon.com/exec/obidos/ASIN/039593320X/ ).
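For what it's worth, a computer algebra system will do the square-root cases from the post automatically. A small sympy sketch (my addition): `radsimp` rationalizes denominators built from square roots, but it will not, in general, clear higher roots such as the cube-root example above.

```python
from sympy import sqrt, cbrt, radsimp

print(radsimp(1 / sqrt(2)))          # rationalized: sqrt(2)/2
print(radsimp(12 / (1 + sqrt(5))))   # -3 + 3*sqrt(5)
print(radsimp(1 / (2 - cbrt(5))))    # returned unchanged: cube roots are not cleared
```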
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 16, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9384155869483948, "perplexity_flag": "middle"}
http://mathhelpforum.com/pre-calculus/61351-logs.html
# Thread:

1. ## logs!

Can anyone give me a hand with this one?

$\log_a(x) + \log_a(x - 2) = \log_a(x + 4)$

2. Originally Posted by rtwilton

> Can anyone give me a hand with this one?
> $\log_a(x) + \log_a(x - 2) = \log_a(x + 4)$

Hello rt,

Remember $\log_b m + \log_b n = \log_b (mn)$

$\log_a x+\log_a(x-2)=\log_a(x+4)$
$\log_a x(x-2)=\log_a(x+4)$
$x(x-2)=x+4$

Can you finish from here?

3. I think so. I distributed it out, got everything on one side and got: $x^2 - 3x - 4 = 0$, so $x = -1$ and $x = 4$, but $x$ cannot be a negative number, right? So $x=4$ is the answer?

4. Originally Posted by rtwilton

> I think so. I distributed it out, got everything on one side and got: $x^2 - 3x - 4 = 0$, so $x = -1$ and $x = 4$, but $x$ cannot be a negative number, right? So $x=4$ is the answer?

You can take the log of a negative number, but your answer is no longer "real", but complex. A logarithm is the inverse of a power. If $y=10^x$, then $x=\log(y)$. $10^x$ is always positive. As a result, you can only take the log of a positive number. However, 10 raised to an imaginary number can be negative, so the log of a negative number is complex. So long as you restrict your domain to real numbers, the log of a negative number is undefined.
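A quick check of the algebra with sympy (my addition). Note that the domain requirement is actually $x > 2$, since $\log_a(x-2)$ must be defined, and that is what rules out $x = -1$:

```python
from sympy import symbols, solve, log

x = symbols('x')
candidates = solve(x * (x - 2) - (x + 4), x)     # x(x - 2) = x + 4  ->  [-1, 4]
valid = [c for c in candidates if c > 2]         # need x > 2 so that log_a(x - 2) exists
print(valid)                                     # [4]

# check the original equation at x = 4 with a concrete base, say a = 10
print(float(log(4, 10) + log(4 - 2, 10)), float(log(4 + 4, 10)))   # both ~0.9031
```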
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 7, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9184368252754211, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/tagged/magnetic-monopoles?sort=active
# Tagged Questions The magnetic-monopoles tag has no wiki summary. 1answer 177 views ### Using the covariant derivative to find force between 't Hooft-Polyakov magnetic monopoles I am reading this research paper authored by NS Manton on the Force between 't Hooft-Polyakov monopoles. I have a doubt in equation 3.6 and 3.7. We assume the gauge field for a slowly accelerating ... 1answer 130 views ### Magnetic monopoles I am a non-expert in this field, just have a layman's interest in the subject. Has anyone ever considered the possibility of magnetic monopoles (one positive and one negative charge) being confined ... 6answers 5k views ### How to make a monopole magnet? I want to create a monopole magnet. Is this practically / theoretically possible? 0answers 78 views ### Do semiclassical GR and charge quantisation imply magnetic monopoles? Assuming charge quantisation and semiclassical gravity, would the absence of magnetically charged black holes lead to a violation of locality, or some other inconsistency? If so, how? (I am not ... 1answer 85 views ### What is the action for an electromagnetic field if including magnetic charge Recently, I try to write an action of an electromagnetic field with magnetic charge and quantize it. But it seems not as easy as it seems to be. Does anyone know anything or think of anything like ... 1answer 89 views ### What exactly does this news story mean by “magnetic charge”? I recently saw this article in my Google News page, whose headline and first paragraph are: Antimatter's Magnetic Charge Revealed Scientists say they've made the most precise measurements to ... 1answer 110 views ### Dirac magnetic monopoles and electric charge quantization Wikipedia describes how assuming the existence of a single magnetic monopole leads to electric charge quantization. But what if there's more than one? The same argument would apply to each of them ... 3answers 310 views ### Are gravitomagnetic monopoles hypothesized? My understanding is that gravitomagnetism is essentially the same relativistic effect as magnetism. If so, why is it that I've heard so much about magnetic monopoles, but never gravitomagnetic ... 2answers 224 views ### What happens to the magnetic field in this case? As far as I know, it's possible to create a radially polarised ring magnet, where one pole is on the inside, and the field lines cross the circumference at right angles. So imagine if I made one ... 1answer 65 views ### Proof of quantization of magnetic charge of monopoles using homotopy groups Suppose we place a monopole at the origin $\{{\bf 0}\}$, and the gauge field is well-definded in region $\mathbb R^3-\{0\}$ which is homomorphic to a sphere $S^2$. Then the total manifold is $U(1)$ ... 0answers 131 views ### Magnetic monopole and electromagnetic field quantization procedure From the Maxwell's equations point of view, existence of magnetic monopole leads to unsuitability of the introduction of vector potential as $\vec B = \operatorname{rot}\vec A$. As a result, it was ... 2answers 288 views ### Basic question on magnetism regarding north and south pole I am currently busy with some magnetism and quite shockingly (to me at least) I haven't yet read anything about the difference between the north pole and the south pole of a magnet. Before I started ... 2answers 98 views ### Can the poles of a magnet have varying intensity? In re-reading Is it possible to separate the poles of a magnet? (amongst others) the question mentioned in the title here just occurred to me. 
It may not be possible, at our current levels of ... 1answer 409 views ### Magnetic poles in Halbach array? I am a bit confused by the description of Halbach arrays. It is said that the line of magnets aligned in certain way results in cancellation of magnetic field on one side of the array, and ... 0answers 63 views ### Divergence calculation of a lie algebra valued quantity having spinor indices I am reading this paper by E. Weinberg - Fundamental monopoles and multimonopole solutions for arbitrary simple gauge groups. I am having a problem with a calculation. I don't have much experience ... 1answer 184 views ### Killing vectors for SO(3) (rotational) symmetry I am reading a paper$^1$ by Manton and Gibbons on the dynamics of BPS monopoles. In this, they write the Atiyah-Hitchin metric for a two-monopole system. The first part is for the one monopole moduli ... 2answers 302 views ### Can you put a magnetic ball into a hollow magnetic sphere? if all magnets have to have two poles(one north one south), is it possible to construct a hollow sphere where the inside face of the sphere was one pole, and the outside face another pole? is it also ... 1answer 83 views ### Relation between electric charge and gauge parameter of the moduli space of monopoles I am studying about the moduli space of a 2 monopole system from Harvey's notes, and Manton's paper. In both of these, (Harvey section 6.2), after constructing the Lagrangian for a two dyons system, ... 0answers 26 views ### Can we create a magnet with only one Pole? [duplicate] Possible Duplicate: How to make a monopole magnet? No matter how many times you cut a magnet, we always end up with 2 poles. Is there any possibility of creating a monopole magnet? 4answers 843 views ### What is the magnetic field inside hollow ball of magnets Setup: we have a large number of thin magnets shaped such that we can place them side by side and eventually form a hollow ball. The ball we construct will have the north poles of all of the magnets ... 0answers 35 views ### Does the universe appear to be a monopole to a ferromagnetic object within a solenoid? Just what the title states, please. To a ferromagnetic object placed at the centre of a solenoid (E.g. car-starter), does it appear that the universe around it is a monopole? p.s. Preferably in ... 0answers 130 views ### What is the mass of the emergent magnetic monopoles in spin ice and how is the mass of an emergent particle determined? In solid state physics emergent particles are very common. How one determines if they are gap-less excitations? Do the defects in spin ice called magnetic monopoles have mass? What is the mass of ... 0answers 105 views ### Does a rotating magnetic monopole have electric and magnetic moment in classical view? Would a rotating sphere of magnetic monopole charge have electric moment ? In a duality transformation E->B.c etc. how is the magnetic moment translated m = I.S -> ? Mel = d/dt(-Qmag/c).S ? A more ... 1answer 139 views ### Why are electric charges allowed to be so light but magnetic monopoles have to be so heavy? My question is in two parts. What is the origin of the electric field from an electric charge and why electron can have so small mass? While on the other hand for a magnetic monopole to create a ... 1answer 186 views ### Why magnetic monopole found in spin ice don't modify the Maxwell's Equations? Magnetic monopole predicted by Dirac nearly a century ago was found in spin ice as quasi-particle(2). 
My question is Why magnetic monopole found in spin ice don't modify the Maxwell's Equations? (I ... 8answers 2k views ### Is it possible to separate the poles of a magnet? It might seem common sense that when we split a magnet we get 2 magnets with their own N-S poles. But somehow, I find it hard to accept this fact.(Which I now know is stated by Gauss's Law) I have ... 2answers 263 views ### What tree-level Feynman diagrams are added to QED if magnetic monopoles exist? Are the added diagrams the same as for the $e-\gamma$ interaction, but with "$e$" replaced by "monopole"? If so, is the force between two magnetic monopoles described by the same virtual ... 0answers 57 views ### What is Higgs transition? In recent publications there is a frequent mentioning of the so-called "Higgs transition" in connection with magnetic monoples. Can anybody please describe the phenomenon in simple terms? For ... 1answer 97 views ### Hypothetical very massive particles I'm looking for a table or compilation of hypothetical very massive ($m\gtrsim 1$ TeV) particles and their expected masses (or bounds on them or relation with other scales). All I know is (please, ... 1answer 176 views ### Gravimagnetic monopole and General relativity Review and hystorical background: Gravitomagnetism (GM), refers to a set of formal analogies between Maxwell's field equations and an approximation, valid under certain conditions, to the Einstein ... 0answers 46 views ### Limit of the scalar field, and potential for a soliton ( finite energy, non dissipative) solution I want to prove that the the scalar field of the yang-mills lagrangian tends to some constant value which is a function of theta at infinity and that this value is a zero of the potential, when we ... 1answer 168 views ### What is the winding number of a magnetic monopole, and why is it conserved I had asked a similar question about a calculation involving the winding number here. But i haven't got a satisfactory response. So, I am rephrasing this question in a slightly different manner. What ... 2answers 318 views ### Winding number in the topology of magnetic monopoles I am reading on magnetic monopoles from a variety of sources, eg. the Jeff Harvey lectures.. It talks about something called the winding $N$, which is used to calculate the magnetic flux. I searched ... 1answer 218 views ### Dirac's quantization rule I first recall the Dirac's quantization rule, derived under the hypothesis that there would exit somewhere a magnetic charge: $\frac{gq}{4\pi} = \frac{n\hbar}{2}$ with $n$ natural. I am wondering ... 1answer 226 views ### The quantum-mechanical description of an electron motion in a magnetic monopole field The quantum-mechanical motion problem of an electron in electric field of the nucleus is well known. The quantum-mechanical description of electron motion in a magnetic field is also not difficult, ... 3answers 436 views ### can one introduce magnetic monopoles without Dirac strings? To introduce magnetic monopoles in Maxwell equations, Dirac uses special strings, that are singularities in space, allowing potentials to be gauge potentials. A consequence of this is the quantization ... 2answers 267 views ### Why do we like gauge potentials so much? Today I read articles and texts about Dirac monopoles and I have been wondering about the insistence on gauge potentials. Why do they seem (or why are they) so important to create a theory about ... 
2answers 434 views ### Does existence of magnetic monopole break covariant form of Maxwell’s equations for potentials? Absence of magnetic charges is reflected in one of Maxwell's fundamental equations: $$\operatorname{div} \vec B = 0 \text{ (1).}$$ This equation allows us to introducte concept of vector potential: ... 1answer 193 views ### Effect of introducing magnetic charge on use of vector potential It is well known that Maxwell equations can be made symmetric w.r.t. $E$ and $B$ by introducing non-zero magnetic charge density/flux. In this case we have $div B = \rho_m$, where $\rho_m$ is a ... 2answers 165 views ### Orientation of Magnetic Dipoles Does a magnetic dipole (in a permanent magnet) tend to align with the B-field or with the H-field? The current loop (Ampère) model of the magnetic dipole suggests the former, while the ... 1answer 153 views ### Dirac string on (periodic) compact space For a non-compact space, the Dirac string can be defined as a line joining the Dirac monopole to infinity (or another Dirac monopole). The region where the gauge connection is ill-defined. (as can be ... 1answer 441 views ### How does Inflation solve the Magnetic Monopole problem? Cosmological Inflation was proposed by Alan Guth to explain the flatness problem, the horizon problem and the magnetic monopole problem. I think I pretty much understand the first two, however I don't ... 5answers 450 views ### How would I go about detecting monopoles? A question needed for a "solid" sci-fi author: How to detect a strong magnetic monopole? (yes, I know no such thing is to be found on Earth). Think of basic construction details, principles of ... 5answers 1k views ### Why do physicists believe that there exist magnetic monopoles? One thing I've heard stated many times is that "most" or "many" physicists believe that, despite the fact that they have not been observed, there are such things as magnetic monopoles. However, I've ... 1answer 227 views ### With estimates of mass constraints on magnetic monopoles, how likely is one to be found by the LHC(MoEDAL)? Fermilab seems to have ruled out monopoles with mass less than 850 GeV, but I have seen some estimates of the mass thought to be in the order of up to $10^{18}$ GeV, which, of course, would make them ...
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 17, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9196635484695435, "perplexity_flag": "middle"}
http://mathoverflow.net/revisions/2293/list
## Return to Question

Revision 3 (formatting):

# Elliptic Curves, Lattices, Lie Algebras

I've recently started to look at elliptic curves and have three basic questions:

1. Is it correct to say that elliptic curves $E$ in the projective plane are in bijective correspondence with lattices $L$ in the complex plane via $E \leftrightarrow \mathbb{C}/L$?
2. If so, is there an explicit expression of the lattice generators in terms of the equation defining the curve? Or, at least, is there a simple example of a curve and its corresponding lattice?
3. Since every elliptic curve is a Lie group, it must have a corresponding Lie algebra. Is there an explicit expression of the Lie algebra in terms of the equation or lattice? Or, again, a simple example of a curve and its Lie algebra (or, even better, an example of a curve, its lattice, and its Lie algebra).

Revision 2: edited tags.

Revision 1: the original posting, with the same title and text but with the three questions numbered inline as (1), (2), (3).
http://quant.stackexchange.com/questions/tagged/stochastic-calculus?page=1&sort=votes&pagesize=30
# Tagged Questions

The stochastic-calculus tag has no wiki summary.

### What is the role of stochastic calculus in day-to-day trading? (1 answer, 2k views)
I work with practical, day-to-day trading: just making money. One of my small clients recently hired a smart, new MFE. We discussed potential trading strategies for a long time. Finally, he expressed ...

### What is a stationary process? (3 answers, 2k views)
How do you explain what a stationary process is? In the first place, what is meant by process, and then what does the process have to be like so it can be called stationary?

### Missing step in stock price movement equations (2 answers, 556 views)
Assuming a naive stochastic process for modelling movements in stock prices we have: $dS = \mu S dt + \sigma S \sqrt{dt}$ where S = Stock Price, t = time, mu is a drift constant and sigma is a ...

### Solving Path Integral Problem in Quantitative Finance using Computer (3 answers, 686 views)
I've asked this question here at Physics SE, but I figured that some parts would be more appropriate to ask here. So I'm rephrasing the question again. We know that for option value calculation, path ...

### Law of an integrated CIR Process as sum of Independent Random Variables (0 answers, 321 views)
It is known (see for example Joshi-Chan "Fast and Accureate Long Stepping Simulation of the Heston SV Model" available at SSRN) that for a CIR process defined as : dY_t= \kappa(\theta -Y_t)dt+ ...

### Deterministic interpretation of stochastic differential equation (3 answers, 584 views)
In Paul Wilmott on Quantitative Finance Sec. Ed. in vol. 3 on p. 784 and p. 809 the following stochastic differential equation: $$dS=\mu\ S\ dt\ +\sigma \ S\ dX$$ is approximated in discrete time by ...

### Obtaining characteristics of stochastic model solution (2 answers, 253 views)
I want to use the following stochastic model $$\frac{\mathrm{d}S_{t}}{ S_{t}} = k(\theta - \ln S_{t}) \mathrm{d}t + \sigma\mathrm{d}W_{t}\quad (1)$$ using the change in variable $Z_t=ln(S_t)$ we ...

### How to perform basic integrations with the Ito integral? (1 answer, 354 views)
From the text book Quantitative Finance for Physicists: An Introduction (Academic Press Advanced Finance) I have this excercise: Prove that ...

### Upper bound concerning Snell envelope (1 answer, 171 views)
Consider a non-negative continuous process $X = \left (X_t \right)_ {t\geq 0}$ satisfying $\mathbb E \left \{ \bar X \right\}< \infty$ (where $\bar X =\sup _{0\leq t \leq T} X_t$) and its ...

### How do practitioners use the Malliavin calculus (if at all)? (2 answers, 580 views)
This question is inspired by the remark due to Vladimir Piterbarg made in a related thread on Wilmott back in 2004: Not to be a party-pooper, but Malliavin calculus is essentially useless in ...

### Why does the price of a derivative not depend on the derivative with which you hedge volatility risk? (1 answer, 187 views)
I'm trying to derive the valuation equation under a general stochastic volatility model. What one can read in the literature is the following reasoning: One considers a replicating self-financing ...

### Probability distribution of maximum value of binary option? (0 answers, 263 views)
A binary option with payout \$0/\$100 is trading at \$30 with 12 hours to expiration. Assuming the underlying follows a geometric Brownian motion (hence volatility remains constant), what ...

### Transformation of Volatility - BS (0 answers, 211 views)
I have recently seen a paper about the Boeing approach that replaces the "normal" Stdev in the BS formula with the Stdev \begin{equation} \sigma'=\sqrt{\frac{ln(1+\frac{\sigma}{\mu})^{2}}{t}} ...

### What is the average stock price under the Bachelier model? (2 answers, 339 views)
Let's say stock price follows following process: $$dS(t) = \sigma dW(t)$$ where $W(t)$ is Standard Brownian motion. The initial level for the stock is $S(0)$. Define the average of stock price ...

### How to use Itô's formula to deduce that a stochastic process is a martingale? (3 answers, 435 views)
I'm working through different books about financial mathematics and solving some problems I get stuck. Suppose you define an arbitrary stochastic process, for example $X_t := W_t^8-8t$ where $W_t$ ...

### How does one go from measure P to Q(risk-neutral) when modeling an asset paying dividends? (2 answers, 588 views)
I am really having a terrible time applying Girsanov's theorem to go from the real-world measure $P$ to the risk-neutral measure $Q$. I want to determine the payoff of a derivative based an asset ...

### What is the mean and the standard deviation for Geometric Ornstein-Uhlenbeck Process? (2 answers, 287 views)
I am uncertain as to how to calculate the mean and variance of the following Geometric Ornstein-Uhlenbeck process. $$d X(t) = a ( L - X_t ) dt + V X_t dW_t$$ Is anyone able to calculate the mean ...

### What is the forward rate for a Black-Karasinski interest rate model? (1 answer, 308 views)
I was wondering if anyone could help me with the instantaneous forward rate equation for a Black-Karasinski interest rate model? I was also after the Black-Karasinski Bond Option Pricing Formula.

### Derivation of Ito's Lemma (1 answer, 197 views)
My question is rather intuitive than formal and circles around the derivation of Ito's Lemma. I have seen in a variety of textbooks that by applying Ito's Lemma, one can derive the exact solution of a ...

### Non-arbitrage theory and existence of a risk premium (1 answer, 142 views)
Consider a probability filtred space $(\Omega, \mathcal F, \mathbb F, \mathbb P)$, where $\mathbb F = (\mathcal F_t)_{0\leq t\leq T}$ satisfing the habitual conditions and isgenerated by $1 d$- ...

### Change of measure discrete time (1 answer, 276 views)
Suppose I have a random walk $X_{n+1} = X_n+A_n$ where $A_n$ is an iid sequence, $\mathsf EA_n = A>0$. How to construct a martingale measure for this case?

### Simple question concerning Jump process (Lévy process) model for a risky actif price process [closed] (0 answers, 80 views)
Consider $X= \left( X_t \right)_{t\geq 0}$ is a Lévy process whose characteristic triplet is $\left( \gamma, \sigma ^2, \nu \right)$ and where its Lévy measure is \nu \left( dx\right) = A ...

### What is augmented data when simulating stochastic differential equations using Gibbs Sampler? (1 answer, 191 views)
I am reading this paper on Bayesian Estimation of CIR Model. Basically, it is about estimating parameters using Bayesian inference. It estimates this stochastic differential equation: dy(t)=\{ ...

### Integrating log-normal (1 answer, 159 views)
The usual log normal model in differential form is: $dS = \mu S dt + \sigma S dX$ where $dX$ is the stochastic part, so $\frac{dS}{S} = \mu dt + \sigma dX$ (1) and we normally solve this by ...

### How to measure a non-normal stochastic process? (1 answer, 170 views)
If I understand right, Itô's lemma tells us that for any process $X$ that can be adapted to an underlying standard normal Wiener measure $\mathrm dB_t$, and any twice continuously differentiable ...

### Simple question about stochastic differential (1 answer, 155 views)
What is the equivalent of product rule for stochastic differentials? I need it in the following case: Let $X_t$ be a process and $\alpha(t)$ a real function. What would be $d(\alpha(t)X_t)$?

### Financial Mathematics - Martingales example (2 answers, 424 views)
Was hoping somebody could help me with the following question. Prove that under the risk-neutral probability $\tilde{\mathsf P}$ the stock and the bank account have the same average rate of growth. ...

### Quadratic variation quesiton (2 answers, 110 views)
Here I have this question (i) state Ito's formula (ii) hence or otherwise show that $\int^t_0B_s dB_s = \dfrac{1}{2}B^2_t -\dfrac{1}{2} t$ (iii) define the quadratic variation $Q(t)$ of Brownian ...

### Regime switching in mean reverting stochastic process (1 answer, 237 views)
Let you have a mean reverting stochastic process with a statistically significant autocorrelation coefficient; let it looks like you can well model it using an $ARMA(p,q)$. This time series could be ...

### Measure change in a bond option problem (0 answers, 100 views)
This is not a homework or assignment exercise. I'm trying to evaluate $\displaystyle \ \ I := E_\beta \big[\frac{1}{\beta(T_0)} K \mathbf{1}_{\{B(T_0,T_1) > K\}}\big]$, where $\beta$ is the ...

### Stochastic discount factor (aka deflator or pricing kernel) and class D processes (0 answers, 92 views)
When (under what assumptions on the model) does a Stochastic Discount Factor need to be of Class D? What would be the implications if it was not? Is it connected to one of the no-arbitrage notions?

### Does Ito/Malliavin calculus have any applications helpful for direction based trading? (1 answer, 218 views)
I'm an aspiring computer scientist who want to move into algorithmic trading at some point. At the moment I'm mostly focusing on courses in machine learning/data analysis etc. but I've noticed that ...

### Why is the CAPM securities market line straight? (3 answers, 171 views)
Let $\gamma$ be the expected return, in terms of its exponential growth rate, of the market asset. If we set $\gamma=\mu-\sigma^2/2$ as explained by the Doléans-Dade exponential, then the expected ...
http://mathematica.stackexchange.com/questions/tagged/algorithm
# Tagged Questions

An algorithm is a sequence of well-defined steps that defines in abstract the solution to a problem.

### What is the underlying algorithm for GroupElementToWord? (1 answer, 64 views)
What algorithm is Mathematica 9 using for GroupElementToWord[group, g]?

### best fraction algorithm [migrated] (0 answers, 15 views)
So i have 7 "fractions" that i need to build a algorithm for. a/16 b/15 c/15 d/14 e/10 f/8 g/3 are the "fractions"(the denominators will never change) now when you add all the denominators together ...

### How can I stop DiscreteMarkovProcess[] from relabeling vertices? (1 answer, 54 views)
I'm attempting to calculate the mean first passage time between two vertices, $v_1$ and $v_2$, provided some undirected graph $G$. However, I noticed that running the following script: ...

### How can I efficiently and uniformly sample the set of vertices a fixed edge-wise distance away from a chosen vertex? (2 answers, 52 views)
I have a large graph $G$, which may be either directed or undirected. How would I use DepthFirstScan[] or BreadthFirstScan[] to ...

### Fast calculation of commute distances on large graphs (i.e. fast computation of the pseudo-inverse of a large Laplacian / Kirchhoff matrix) (0 answers, 53 views)
I have a large, locally connected and undirected graph $G$ with $\approx 10^4$ vertices and $\approx 10^5$ to $\approx 10^6$ edges. Moreover I can bound the maximum vertex degree as $Q_{max}$. I ...

### Counting paths of a certain length between a source and sink vertex (0 answers, 81 views)
I have a graph $G$, which may be directed or not, and I was wondering if there was an efficient way of using, say, BreadthFirstScan[] and FindShortestPath[] to count the number of paths between some ...

### Can I force a function to quit and return some value after a certain amount of time has passed during its evaluation? (1 answer, 62 views)
Imagine I provide some random input to function like FindInstance[], and I observe that, despite the existence of good solutions, the function will, with some ...

### Why is FindInstance failing when I relax a set of constraints? (1 answer, 87 views)
I'm attempting to use FindInstance to generate coordinate sets for plausible triangles with edge length distance constraints. E.g.: ...

### Solving recursion relations using Mathematica (0 answers, 107 views)
I want to solve the recursion relation given in equation 2.7(a/b) on page $6$ of this paper. (..the initial seed is $F_1 = G_1 = 1$ and the functions $\alpha$ and $\beta$ are defined on page $5$ in ...

### Making FindShortestPath a little bit sloppy [duplicate] (1 answer, 163 views)
I have a dense graph, and I'd like to find multiple "almost shortest" paths from a source vertex, $v_s$, to a sink vertex, $v_s$, on an undirected graph $G$. How can I repeatedly run ...

### Is there a fast way to trilaterate a point? (1 answer, 119 views)
I have a point in 2D or 3D space at an unknown coordinate, $p_0$, and I'd like to determine its position using distances from known coordinates $(p_1, p_2, p_3)$. Beyond using ...

### Is there something akin to “SubgraphIsomorphismQ” in Mathematica 9? (2 answers, 198 views)
Provided two unlabeled graphs, $G$ and $H$, I would like to test where $H$ is a subgraph of $G$. In other words, I'd like to test whether we can prune some fixed number of vertices or edges from $G$ ...

### Is it possible for me to explicitly specify a point list for SpatialGraphDistribution? (1 answer, 62 views)
The function RandomGraph[SpatialGraphDistribution[n, r]] generates a random geometric graph over $[0,1]^2$ where vertices are connected if they are within a ...

### Determining whether two k-chromatic graphs are equivalent (not simply isomorphic) using IsomorphicGraphQ? (1 answer, 74 views)
In a previous question of mine, I asked whether Mathematica's built-in routines could determine an isomorphism for two $k$-chromatic graphs, Determining whether two $k$-chromatic graphs are isomorphic ...

### Determining whether two $k$-chromatic graphs are isomorphic (respecting vertex coloration) (2 answers, 154 views)
Consider the case where I have two $k$-chromatic graphs $G_1$ and $G_2$, i.e. two graphs where individual vertices can be colored with one of a set of $k$ total colors, and I would like to determine ...

### Efficient method for inverting a block tridiagonal matrix (1 answer, 217 views)
Is there a better method to invert a large block tridiagonal Hermitian block matrix, other than treating it as a ordinary matrix? For example: ...

### Finding all points of period n in an iterated map (2 answers, 227 views)
I'm trying to implement an algorithm of Jenkinson and Pollicott to calculate the Hausdorff dimension of a Julia set for the map $f_c : z\mapsto z^2 + c$. It's described on page 40 of their paper, ...

### Mathematica Implementations of the Random Forest algorithm (4 answers, 603 views)
Is anyone aware of Mathematica use/implementation of Random Forest algorithm?

### How to incorporate functions within Do Loops (2 answers, 72 views)
I'm attempting to repeatedly perform a simple algorithm with incremental changes of a parameter. I can easily express my changing parameter: ...

### Easier program for period of Fibonacci sequence modulo p (2 answers, 360 views)
For a little project I need to calculate the period of a Fibonacci sequence modulo p, for which p is a prime number. For example, the Fibonacci sequence modulo 19 would be: 0, 1, 1, 2, 3, 5, 8, 13, ...

### Random number generation with specific distribution (1 answer, 127 views)
I am writing a program for solving the shortest path in travelling salesman problem, with a twist that there are multiple salesmen who partition the cities among themselves, thus creating two part ...

### How to extract metadata from an image of a business card? (1 answer, 340 views)
I'm trying to digitize some documents, and I came across a very cool app called camscanner app which performs parallax transform and ocr very nicely, now I'm implementing it in mathematica... Given ...

### How to create new “person curve”? (3 answers, 11k views)
Wolfram|Alpha has a whole collection¹ of parametric curves that create images of famous people. To see them, enter WolframAlpha["person curve"] into a Mathematica ...

### How do I determine if there's an arithmetic sequence within a list? [duplicate] (1 answer, 84 views)
Possible Duplicate: find subsequences of constant increase Given an arbitrarily long list of integers (let's say they are sorted), how would one determine if any 3 (or more) of those ...

### Machine learning. SVM algorithm (1 answer, 536 views)
I want to work with machine learning in Mathematica. Are there any SVM algorithms implemented in Mathematica anywhere? Or any other algorithms for machine learning? With positive and negative database ...

### How can I compute the chromatic index and number of a graph? (0 answers, 119 views)
I saw a recent question from M.R. and realized there is no function to compute the chromatic index and number of a graph, other than a really slow method in the now deprecated Combinatorica. So, how ...

### How do we solve Eight Queens variation using primes? (3 answers, 278 views)
Using a $p_n$ x $p_n$ matrix, how can we solve the Eight queens puzzle to find a prime in every row and column? ...

### How can I speed up the classic GA for graph coloring? (2 answers, 377 views)
I'm trying to compute the chromatic number of this graph (which is 28): g = Import@"http://www.info.univ-angers.fr/pub/porumbel/graphs/dsjc250.5.col"; My genetic ...

### How to quickly calculate intersections of filled curves? (2 answers, 557 views)
I am trying to quickly calculate the intersection of polygons with more than 6,000 points. A compiled solution would be preferable. Here is one example of the problem: ...

### Generating an ordered list of pairs of elements from ordered lists (11 answers, 639 views)
I have a pair of ordered lists. I want to generate a new ordered list (using the same ordering) of length n by applying a binary operator to pairs of elements, one from each list, along with the index ...

### Community structure algorithm [closed] (1 answer, 117 views)
I'm using CommunityStructureAssginment[] from GraphUtilities but not sure what is the algorithm behind it. May be someone knows what is the algorithm? Or any info on the community structure ...

### Computing polynomial eigenvalues in Mathematica (3 answers, 372 views)
MATLAB offers a function polyeig for computing polynomial eigenvalues, which appear, for instance in quadratic eigenvalue problems (see here for some applications) such as: \begin{equation} ...

### Efficient implementation of cubic equation of state (1 answer, 562 views)
...In this case the Peng Robinson Equation of State. Equations of state are empirical equations with parameters derived from experimental data. They are used to predict pure component and mixture ...

### find subsequences of constant increase (2 answers, 270 views)
A list like l = {0, 1, 2, 3, 4, 5, 7, 9, 12, 13, 18, 19} may have subsequences of constant increase, $a_{n+1} = a_n + k$. For example: ...

### How can I create a glass distortion effect in an image? (2 answers, 690 views)
I'd like to overlay a glass jar onto an image with realistic light bending. Can anyone think of a way to automate this effect (perhaps this can be done with the raytracing package rayica)?

### Programmatic approach to HDR photography with Mathematica image processing functions (2 answers, 2k views)
The High dynamic range imaging (HDR or HDRI) direction in photography and image processing became very popular recently. Besides obvious photo art applications (see examples), there are many great ...

### Word Squares and Beyond (2 answers, 333 views)
A word square is a set of words which, when placed in a grid, read the same horizontally and vertically. For example, the following is an English word square of order 5: ...

### Build a refined grid based on intersecting line (3 answers, 455 views)
I honestly have no idea where to begin with this problem. In summary, I have a 2D coarse grid with an intersecting line. For an easy example, let's assume it's a 4x4 grid. I wish to pass through ...

### Center of quadrangular (1 answer, 102 views)
Having 4 points A (ax, ay), B (bx, by), C (cx, cy) and D ...

### Efficient backtracking with Mathematica (1 answer, 383 views)
Backtracking is a general algorithm for finding all (or some) solutions to some computational problem, that incrementally builds candidates to the solutions, and abandons each partial candidate c ...

### Computing Slater determinants (1 answer, 354 views)
I need to compute Slater determinants. I'm wondering if I would benefit from assigning each of my functions to a variable prior to computation. I'm working with Slater determinants, but my question ...

### Faster Alternatives to DateDifference (3 answers, 235 views)
I need a faster implementation of FractionOfYear and FractionOfMonth, which do the following: Input: A time/date specified by ...

### How can I extend BinCounts to work on Times and Dates? (1 answer, 229 views)
I want to extend BinCounts to work on a list of times or dates, binning them by Day, Week, Hour of the Month, Day of the Year, etc... So given a list of times and a time bin spec, I need to calculate ...

### How can I pack circles of different sizes into a spiral? (2 answers, 439 views)
Given a list of circles of different areas, I need to arrange them tangentially in order of increasing area and spiraling outward. An example of the type of packing I'm attempting is shown by the ...

### How to construct a treemap using non-rectangles? (0 answers, 728 views)
I've written the standard version of a tree map (a graphic that shows nested data) and I'm looking to improve on this layout by switching to different types of polygons or perhaps circles. Can anyone ...

### Higher order SVD (1 answer, 339 views)
Does anyone know how to do a higher order SVD in Mathematica ? A good reference seems to be here http://csmr.ca.sandia.gov/~tgkolda/pubs/bibtgkfiles/TensorReview.pdf but I don't understand their ...

### Finding a percolation path (4 answers, 932 views)
I would like to examine percolation on a random lattice. To be exact, I wish to find the minimum length of a 'bond' needed such that the leftmost site can be connected to the rightmost site. Here is ...

### Splitting words into specific fragments (7 answers, 775 views)
I am looking into splitting words into a succession of chemical elements symbols, where possible. For example: Titanic = Ti Ta Ni C (titanium, tantalum, nickel, carbon) A word may or may not be ...

### Simple algorithm to find cycles in edge list (2 answers, 770 views)
I have the edge list of an undirected graph which consists of disjoint "cycles" only. Example: {{1, 2}, {2, 3}, {3, 4}, {4, 1}, {5, 6}, {6, 7}, {7, 5}} Each ...

### Effective matrix power like algorithm (3 answers, 656 views)
First example Suppose you want to calculate the 6th power of some matrix $A$. The brute force attempt of doing this is considering $$(((AA)A)A)A)A$$ which requires a total of 5 matrix ...
http://math.stackexchange.com/questions/57667/coproduct-direct-sum?answertab=oldest
# coproduct, direct sum

I thought about the problem of how to understand the coproduct and the direct sum, and I think the following could be one way of thinking about it. I am posting this to verify whether my understanding is correct.

So what we need is the following. Given data: $$\begin{align} &f_i\colon X_i\to P \\ &g_i\colon X_i \to X \end{align}$$ Now define, in some way, $$g\colon P \to X$$ $P$ is the coproduct we are trying to define. Please note that we need to define $g$ given the $g_i$. $g$ is actually to be defined so that it completes the commutative diagram, and only then is $P$ the coproduct. The most obvious way of defining $g$ on each of the basis elements is $g := \sum g_i$ where $i$ ranges over the index set. Now all our group and ring homomorphism operations are defined over finite sums or finite products, so there is no way we could make sense of an infinite sum in the definition of $g$. Thus we have to restrict it to a finite sum. Hence, if we have to restrict it to a finite sum, then only a finite number of coordinates of an element of $P$ could be non-zero and the rest are all $0$s. This is my understanding, and if somebody could point out any gaps it would be very helpful. Thanks

- $\sum g_i$ does not make sense in arbitrary categories; are you assuming that you are in an abelian category? Also, how can you be "given" the $f_i$, and yet be "trying to define $P$"? You cannot be "given" a function into an object that has not been defined. – Arturo Magidin Aug 15 '11 at 20:05
- Yes, I am assuming an abelian category. Furthermore, I understand your statement about P but I am really not sure how to word it. I want to somehow understand why it is so that if you have a coproduct you need to have a finite number of non-zero elements and cannot have an infinite number of non-zero elements in P. Again, I am not sure if I am able to explain properly. But (1, 1, 1, ...) is not a member of P while (1, 1, 0, ...) is a member. So this is the major motivation. So if you could tell me how the post should be modified it would be helpful. – sumo Aug 15 '11 at 20:58
- I am not sure, but should it be worded as: how do you define g, then? Because the direct sum could be thought of as (P, g_{i}, g) all together. Loosely P is called the direct sum, but actually it should be the three together, if I am not highly mistaken. Probably, if that is true, then I would edit the post suitably again. – sumo Aug 15 '11 at 21:03
- No, $g$ is induced by the universal property of the coproduct, and is not part of the coproduct. Also, you aren't just considering an abelian category, you are considering a category in which the underlying set of the product is the product of the underlying sets and the underlying set of the coproduct is the restricted direct product. I don't know if this always holds. But since it is clear you are confused, so am I, so I cannot tell you how to "unconfuse" your question. – Arturo Magidin Aug 16 '11 at 3:13

## 2 Answers

I saw you posted on MO, and it seems you are trying to relate the ideas of coproduct and direct sum. Put simply, the notion of a coproduct is a generalization of the notion of a direct sum to a general category. For concreteness, consider the case of vector spaces over a field $k$. Given a $k$-vector space $W$ and two subspaces $U, V$, the sum $U+V$ is just $\{u+v|u\in{}U,v\in{}V\}$, and the sum is said to be direct if we have unique expression, i.e. if $u,u'\in{}U$, $v,v'\in{}V$ and $u+v=u'+v'$, then $u=u'$ and $v=v'$.

Now, given a direct sum $U\oplus{}V$, there are 'injections' $i:U\to{}U\oplus{}V$ and $j:V\to{}U\oplus{}V$ taking elements to themselves, and we have the following universal mapping property: Given a $k$-vector space $T$ and linear maps $f:U\to{}T$, $g:V\to{}T$, there is a unique linear map $[f,g]:U\oplus{}V\to{}T$ such that $[f,g]i=f$ and $[f,g]j=g$. The map $[f,g]$ is defined by taking $u+v$ to $f(u)+g(v)$. This is well-defined, since the sum is direct. By the usual arguments, we can show that this universal property characterizes $U\oplus{}V$ up to isomorphism, so we have an abstract characterization of a direct sum purely in terms of maps. Of course this characterization just says that $U\oplus{}V$, together with $i$ and $j$, is a coproduct diagram of $U$ and $V$ in $k$-Vect. These remarks generalize to infinite direct sums as well, that is, given a family $(U_i)_{i\in{}I}$ of subspaces of $W$, the direct sum of these spaces, together with the relevant injection maps, is a coproduct diagram in $k$-Vect.

In a general category, it doesn't usually make sense to talk about elements and maps as we have above, but it always does make sense to talk about morphisms, diagrams, and universal properties. So, the notion of a coproduct really generalizes direct sums to the general categorical setting.

Perhaps this might work: you understand that in an abelian category the coproduct and the product of two objects are the same (when they both exist). You want to "understand" something about the coproduct and the product of infinite families and how they may differ.

In an abelian category, if $J\subseteq K$ are finite, then you have maps from $\prod_{j\in J}X_j$ to $\prod_{k\in K}X_k$ and back; the map from the $J$-indexed product to the $K$-indexed product is obtained by "extending by $0$s", while the map back is obtained by dropping the components not in $J$. These are in fact maps induced by the universal properties of the product and the coproduct by the canonical projections and immersions, recalling that the finite products are equal to the finite coproducts.

That means that given an infinite family $\{X_i\}_{i\in I}$, you can look at the family of finite products $\bigl\{\prod_{j\in J}X_j\mid J\subseteq I,\ |J|\lt\infty\bigr\}$ as both a directed family (ordered by the partial order of inclusion among finite subsets) and as an inversely directed family. If you take the direct limit of this family and interpret the objects as coproducts, then you get the coproduct of the full infinite family. If you take the inverse limit of this family and interpret the objects as products, you get the product of the full infinite family.

The direct limit of the objects is defined by "local" conditions, so that intuitively you only need to worry about what is happening "locally"; hence the objects should only require a finite amount of information. The inverse limit, by contrast, requires a whole family of consistent objects, so that you need "a lot more" information. Intuitively, local information just isn't enough.

For abelian groups, modules, and vector spaces, the directed limit consists of equivalence classes of objects of the form $[(x,J)]$ with $x\in \prod_{j\in J}X_j$, with $[(x,J)]=[(y,K)]$ if and only if there exists $M$ finite, $J,K\subseteq M$, and such that the images of $x$ and $y$ in $\prod_{m\in M}X_m$ agree; so elements are completely determined by information on a finite subset of $I$.

On the other hand, the inverse limit is constructed at the set level as a subset of the set-theoretic product $\prod_{J}\bigl(\prod_{j\in J}X_j\bigr)$ taken over the finite subsets $J\subseteq I$, consisting of those elements $(x_J)$ such that if $J\subseteq K$ then the image of $x_K$ in $\prod_{j\in J}X_j$ is precisely $x_J$ ("consistent families"). This means that you may need information about what is happening on all coordinates to determine an element, and there are elements that are not obtained merely by local information.

(A possible model might be the difference between the directed limit of the cyclic groups $\mathbb{Z}_{p^n}$ and the inverse limit of the same groups; the directed limit is the Prüfer group, in which each element is, in a sense, "finite", whereas the inverse limit is the $p$-adic integers, in which you have elements that are not "finite", but rather "come" from 'infinite tuples'.)

- Thx so much for clarifying. It was very very helpful. This was the thing which clarified everything. – sumo Aug 16 '11 at 5:59
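To make the finiteness point from the question and the comments concrete, here is a small illustrative sketch (my own addition, not part of the original thread): elements of a direct sum such as $\bigoplus_{i\in I}\mathbb{Z}$ are exactly the finite-support tuples, so the map $g$ induced by a family $g_i$ really is a finite sum, whereas an infinite-support tuple like $(1,1,1,\dots)$ would leave that sum undefined. The dictionary representation and the sample family below are invented for the example.

```python
# Elements of an infinite direct sum of copies of Z are modelled as dicts
# {index: value} with finitely many entries; every coordinate not listed is 0.

def induced_map(element, g):
    """Apply the map induced by a family g_i (here g(i, x)) to a finite-support element.

    The sum is finite because the element has finite support, which is exactly
    why it is defined on the coproduct (direct sum) but not on the full product.
    """
    return sum(g(i, x) for i, x in element.items())

g = lambda i, x: i * x          # a sample family g_i(x) = i * x
element = {0: 1, 1: 1, 4: 3}    # the tuple (1, 1, 0, 0, 3, 0, 0, ...)

print(induced_map(element, g))  # 0*1 + 1*1 + 4*3 = 13
```

An element like $(1,1,1,\dots)$ has no such finite-support representation; it lives in the product but not in the coproduct, which is the distinction the question is circling around.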
http://www.physicsforums.com/showthread.php?t=314045
Physics Forums

## current density at the surface of a magnetised material

1. The problem statement, all variables and given/known data

Write down an expression for the current density per unit length flowing at the surface of a magnetised material.

3. The attempt at a solution

Any ideas?

Recognitions: Homework Help

E is the electric field, but there's no electric field in this problem, so that equation can't apply, can it? It sounds to me like the problem is talking about the density of bound current. There's a simple expression for that in terms of the magnetization... you can look it up, or if you think about it you can probably come up with the relation on your own. (It might help to imagine each atom of the magnetized material as a tiny little current loop.)

Recognitions: Homework Help

You've written down Ohm's law, which isn't the right equation to use. I think you're looking for $\vec{K}$, the surface current density; K is the current per unit width perpendicular to the flow.

Sorted! It's $\vec{K} = \vec{M} \times \hat{n}$.
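The relation quoted in the last post is the standard expression for the bound surface current density, $\vec{K} = \vec{M} \times \hat{n}$. As a quick numerical illustration (my addition, not part of the original thread; the magnetisation value and the choice of face are made-up numbers), the sketch below evaluates it for a uniformly magnetised block:

```python
import numpy as np

# Hypothetical uniform magnetisation (A/m) pointing along z inside the material.
M = np.array([0.0, 0.0, 8.0e5])

# Outward unit normal of the +x face of the block.
n_hat = np.array([1.0, 0.0, 0.0])

# Bound surface current density K = M x n_hat, in amperes per metre of width.
K = np.cross(M, n_hat)
print(K)  # [0. 800000. 0.] -- a current sheet running along +y on this face
```

On the top and bottom faces, whose normals are parallel to $\vec{M}$, the cross product vanishes, which is why a uniformly magnetised bar behaves like a current sheet wrapped around its sides, much like a solenoid.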
http://www.openwetware.org/index.php?title=User:David_J_Weiss/Notebook/people/weiss/Formal&diff=prev&oldid=374173
# User:David J Weiss/Notebook/people/weiss/Formal

### From OpenWetWare
## Experimental Determination of the Electron Charge to Mass Ratio

Author: David Weiss

Experimentalists: David Weiss, Elizabeth Allen

University of New Mexico, Department of Physics and Astronomy, MSC07 4220, 800 Yale Blvd NE, Albuquerque, New Mexico 87131-0001 USA

Contact info: [email protected]

## Abstract

The ratio of electric charge to mass for the electron is a fundamental quantity in physics and useful for students interested in the study of physics. From it you can conclude that the electron is hardly affected by gravity and that the electric field governs how the electron behaves. This is important to know because it is one of the most important values in quantum mechanics. We measured it by observing the trajectory of electrons in a known, constant magnetic field. From this you can find the ratio of electric charge to mass for an electron as a function of the observed radii, the magnetic field, and the energy. This can be done with an electron gun, a Helmholtz coil, and a couple of power sources. With all these things we can determine how a beam of electrons curves within a magnetic field, measure a radius, and with some manipulation figure the ratio of electric charge to mass for the electrons. From my experimental data we found that the ratio e/m is 2.3±0.23×10^11 coul/kg. This was one standard deviation away from the accepted value. There was still some systematic and random error that was prevalent throughout the experiment; we will discuss the reasons and sources of these errors.

## Introduction

The charge of an electron is one of the most basic concepts in the entire study of electromagnetism and atomic particles. The first person to find an electron was J.J. Thomson. He did so in a series of experiments which used cathode ray tubes to try to find electrons. He did three such experiments, and it wasn't until the third that he found the charge to mass ratio for the electron, which he found in 1897 [2]. These results led him to formulate his "Plum Pudding Model" of the atom. That experiment is a lot like the one detailed here. For these experiments he was awarded the Nobel Prize in Physics in the year 1906.

After Thomson did these experiments, R.A. Millikan came along and found through experimentation the charge of the electron. His experiments involved dropping oil droplets in a chamber that could be charged, to see how the oil droplets reacted in an electric field. These experiments then led to the charge that an electron carries [3]. He was later awarded the Nobel Prize in Physics for these experiments in 1923, after some controversy due to Felix Ehrenhaft's claim that he had found a smaller charge than Millikan; these claims turned out to be wrong and the prize was given to Millikan. Without these fundamental experiments we could not have found the charge of the electron, and without this fundamental constant we could not have done some of the work in chemistry, atomic physics, and quantum mechanics.

The experiment that I did was similar to the experiment that Thomson did, in that I am using an electron gun to "boil" off electrons and measure how they behave in a magnetic field. I will vary the force on the electrons, which is the Lorentz force [4], by changing the voltage on the electron gun; I will also vary the magnetic field by changing the current that is applied to the Helmholtz coils [5], to show how an electron responds to a changing field and/or a changing force.
## Experiment and Materials

### Instrumentation and Assembly

An electron gun is housed in a bulb that has some gas in it so that you can see the electron beam. There is also a Helmholtz coil attached to this apparatus so that a uniform magnetic field can be generated. This is a manufactured piece, so there is no need to worry about aligning everything properly (e/m Experimental Apparatus Model TG-13, Uchida Yoko, as shown in Fig. 1). There are three different power supplies; each one connects to a different part of the e/m apparatus. A connection needs to be made between the 6-9 Vdc, 2 A power supply (SOAR Corporation DC Power Supply Model 7403, 0-36 V, 3 A, as shown in Fig. 3) and the Helmholtz coil, with a multimeter in series (BK PRECISION Digital Multimeter Model 2831B, Fig. 3). The 6.3 V power supply (Hewlett-Packard DC Power Supply Model 6384A, Fig. 2) needs to be connected to the heater jacks. A power source rated at 150-300 V (Gelman Instrument Company Deluxe Regulated Power Supply, Fig. 3) needs to be connected through another multimeter (BK PRECISION Digital Multimeter Model 2831B, Fig. 2) to the electron gun.

Fig. 1) e/m Experimental Apparatus (Model TG-13), left to right: connections for the Helmholtz coil power supply, current adjustment knob, focus adjustment, connection to voltmeter for electron gun, connections for the electron gun power supply, connections for the heater power supply.

Fig. 2) (bottom) Hewlett-Packard DC Power Supply (Model 6384A, 4-5.5 V, 0-8 A) connected in series to (top) BK PRECISION Digital Multimeter Model 2831B.

Fig. 3) (left of e/m Experimental Apparatus) SOAR Corporation DC Power Supply Model 7403, (top right) BK PRECISION Digital Multimeter Model 2831B, (bottom right) Gelman Instrument Company Deluxe Regulated Power Supply.

### Procedure and Methods

The general procedure can be found in Professor Gold's lab manual [1]. We first turned on the power to the heater and let it warm up for approximately 2 minutes; we knew this was done when we observed the cathode glowing red. After we warmed up the heater we applied a voltage of 200 V to the electron gun, and we then observed the beam of electrons glowing green. After we observed the electron beam we applied a current to the Helmholtz coils and observed the electron beam take a circular orbit. We then proceeded to take our data on the radii of the electron beam, one reading on the right side and one on the left, in addition to the voltage on the electron gun and the current on the Helmholtz coils. We took the data on the radii of the beam by looking at a ruler attached to the back of the e/m apparatus (Fig. 1). We noticed how the radii of the beam were affected by changing the voltage while holding the current constant, and vice versa. In our experiment we first started by holding the current through the coils constant at 1.35 A while varying the voltage on the electron gun from a maximum value of 250 V to a minimum of 146 V. We observed that the more voltage we applied while keeping the current constant, the more the radius of the electron beam increased. For the next set of experiments we kept the voltage constant at 143 V and ranged the current from 0.9 A to 1.33 A, and observed that the radii increased as we decreased the current through the coils. We took data on the radii versus the current and the radii versus the voltage, and this can be found on my data page for this lab.

### Analysis Methods

The theory predicts a relation between the magnetic field and the force on electrons travelling through it.
From this we can use the Biot-Savart law [1] (see Equation 1) to find the magnetic field. The Lorentz force [4] (see Equation 2) can then be solved to find the ratio e/m, using the velocity set by the accelerating voltage (Equation 3). Knowing this, the theory predicts a linear relationship between the radius squared and the accelerating voltage when the current remains constant (Equation 4), with the slope set by the ratio m/e. The theory also predicts a linear relationship between the radii and the inverse of the current squared (Equation 5) when the voltage remains constant. We used the Microsoft Excel function linest to find this slope for both the constant-current and constant-voltage data.

$Equation 1: B=\frac{\mu R^2NI}{\left(R^2+x^2 \right)^{3/2}}$

$Equation 2: |\vec{F}|=|e(\vec{v} \times \vec{B})| = m \frac{|\vec{v}|^{2}}{R} \;\Rightarrow\; \frac{e}{m}=\frac{|\vec{v}|}{R|\vec{B}|}$

$Equation 3: v=\sqrt{\frac{2eV}{m}}$

$Equation 4: \frac{m}{e}\times V=\frac{(7.8\times10^{-4}\times I \times R)^2}{2}$

$Equation 5: \frac{m}{e}\times \frac{1}{I^2}=\frac{(7.8\times10^{-4}\times R)^2}{2\times V}$

A more detailed method for my calculations can be found here.

### Results and Discussion

The data given in Table 1 (Fig. 4) show the results that were obtained while keeping the voltage constant at 143 volts. The graph of radii vs 1/I² (Fig. 5), which compares the data to a best-fit line made using Microsoft Excel and shows one standard deviation, indicates that most of the data fit the linear fit. The value of e/m was determined to be 2.74±0.38×10^11 coul/kg within a 68% confidence interval, which is an error of approximately 55.76% when compared to the value of 1.76×10^11 coul/kg from the paper "The Electronic Atomic Weight and e/m Ratio" [6].

From Table 1 (Fig. 4), the value of e/m while holding the current constant at 1.35 amps was 1.85±0.12×10^11 coul/kg. A plot of my data, radii squared vs voltage (Fig. 6), made using Microsoft Excel, is given compared to a linear fit of the data; as there is no obvious systematic deviation, this is a good fit for the data. From my value for e/m the error is 5.27% when compared to the value of 1.76×10^11 coul/kg from "The Electronic Atomic Weight and e/m Ratio" [6].

From Table 1 (Fig. 4), the value of e/m was also calculated while the voltage and current were both being varied. This value was 4.091±0.82×10^11 coul/kg. The graph showing the varying values is radii squared vs voltage (Fig. 7), showing again that even while the voltage and current are varied a linear fit still holds. An error of 76.8% was found when compared to the value of 1.76×10^11 coul/kg in "The Electronic Atomic Weight and e/m Ratio" [6]. So the data are not good when the current and the voltage are not held constant, and an even worse error is generated.

Fig. 5) Radii versus inverse current (constant voltage 143 V). All data fall within one standard deviation of the linear fit. Made using Microsoft Excel.

Fig. 6) Radii squared vs voltage (constant current 1.35 A). All data lie within one standard deviation of the linear fit. Has a better fit than that of the plot of radii vs. current. Made using Microsoft Excel.

Fig. 7) Radii squared versus voltage shown with one standard deviation. Most of the data fall along this line but with some outliers, so not as good data. Made using Microsoft Excel.
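To make the fitting step concrete, here is a minimal sketch of the least-squares fit described in the Analysis Methods section (the lab used Excel's linest; this Python version is an equivalent reconstruction, not the author's actual spreadsheet, and the voltage/radius readings below are invented placeholders standing in for the constant-current data set). Equation 4 rearranged says that, at fixed coil current, $r^2$ is linear in $V$ with slope $2/\big((e/m)\,B^2\big)$, where $B \approx 7.8\times10^{-4}\,I$:

```python
import numpy as np

# Placeholder readings (accelerating voltage in volts, beam radius in metres);
# the real numbers live on the notebook's data page.
V = np.array([146.0, 170.0, 200.0, 225.0, 250.0])
r = np.array([0.038, 0.041, 0.045, 0.047, 0.050])

I = 1.35               # Helmholtz coil current (A), held constant
B = 7.8e-4 * I         # field at the beam, Eq. 1 with the coil geometry folded into 7.8e-4 (T)

# Eq. 4 rearranged: r^2 = (2 / ((e/m) * B^2)) * V, so fit r^2 against V and invert the slope.
slope, intercept = np.polyfit(V, r**2, 1)
e_over_m = 2.0 / (slope * B**2)
print(f"e/m ~ {e_over_m:.2e} coul/kg")
```

With these made-up numbers the fit returns roughly 1.8×10^11 coul/kg, the same order as the values quoted above; the constant-voltage data set would be handled the same way using Equation 5, fitting $r^2$ against $1/I^2$.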
## Conclusion

So from my experiment to find the ratio e/m, the best result I found came from holding the current constant while varying the voltage (and hence the radii), giving a value of $1.85\pm0.12\times10^{11}\frac{coul}{kg}$. Based on my calculations from the section above, this constant-current value is the best result, with an error of approximately 5.11%. This is not a bad result considering that there is a significant amount of systematic and random error present in this lab.

The sources of random (non-biased) error in this lab are as follows. Collisions occur between the helium atoms in the bulb and the electrons discharged from the electron gun; some of the energy used to accelerate the electrons is lost as visible light in these collisions, so the energy that is measured is an overestimate. Some of the systematic errors are caused by the imprecision of the ruler and in aligning it. We tried to overcome some of these by taking two readings, one on the left side and one on the right, and averaging the two, but this can only decrease the error, not eliminate it completely. Another error is that when the voltage is raised or the current is lowered, the discharge of the electron beam doesn't line up with where it impacts the electron gun; you can adjust this by means of the focus knob, but this changes the radii of the electron beam, so you need to adjust the focus every time you change the voltage or current, and if not you can potentially alter the data and throw off the results.

The experiment showed me that an electron beam in the presence of a uniform magnetic field traces out a circle, whose radius is directly related to the strength of the magnetic field and the velocity at which the electrons leave the electron gun. If I were to do this experiment again, I would not expect to obtain a value so close to the actual value for e/m, given the large amount of systematic error that can be introduced into the experiment. Summing up, even when there are potentially large amounts of error, you can still find fundamental constants.

## Acknowledgments

I would like to thank my lab partner Elizabeth Allen, my lab professor Dr. Steven Koch, and our lab TA Pranav Rathi for their assistance and support in this lab. I would also like to thank my mother (Dixie Colvin) for her assistance in editing this paper. Thanks, mom.

## References

1. M. Gold, Physics 307L: Junior Laboratory, UNM Physics and Astronomy (2006).
2. J. J. Thomson, "Cathode Rays". The London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science, Vol. 44, 293-316 (1897).
3. R. A. Millikan, "On the elementary electrical charge and the Avogadro constant". The Physical Review, Series II, 2, 109-143 (1913).
4. O. Darrigol, "Electrodynamics from Ampère to Einstein", Oxford University Press, ISBN 0-198-50593-0, 327 (2000).
5. R. Merritt, C. Purcell, and G. Stroink, "Uniform magnetic field produced by three, four, and five square coils". Review of Scientific Instruments, Volume 54, Issue 7, 879 (1983).
6. R. C. Gibbs and R. C. Williams, "The Electronic Atomic Weight and e/m Ratio". The Physical Review, Volume 44, Issue 12, 1029 (1933).
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 6, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9236941337585449, "perplexity_flag": "middle"}
http://mathhelpforum.com/advanced-algebra/105530-properties-z_-n.html
# Thread: 1. ## Properties of Z_{n} I want to show that the elements of $\mathbb{Z}_n$ under multiplication that have inverses are also the generators of $\mathbb{Z}_n$ under addition. My answer so far: The only integers with multiplicative inverses are -1 and 1; and these are generators of ( $\mathbb{Z}_n, +_{n}$). I feel like this answer is too simple though... Can I show this more rigorously? Besides saying that (obviously) the gcd(1,n)=1, and $\phi(x)$=(1)x, $\phi(x)$=(-1)x, generates $\mathbb{Z}_n$? Or am I interpreting this question completely wrong? 2. An element $a \in \mathbb{Z}_n$ has multiplicative inverse iff $(a,n)=1$ which happens iff $<a>= \mathbb{Z}_n$, which means not only $-1$ and $1$ have multiplicative inverses. 3. Originally Posted by Jose27 An element $a \in \mathbb{Z}_n$ has multiplicative inverse iff $(a,n)=1$ which happens iff $<a>= \mathbb{Z}_n$, which means not only $-1$ and $1$ have multiplicative inverses. OOOOOOOOOH. Doh. Thank you. Bear with my stupidity, I'm in an abstract algebra course and did not take the recommended prereq so I'm a little lost at the moment. I was confusing the infinite $\mathbb{Z}$ with $\mathbb{Z}_n$, I think. So does the fact that $(a,n)=1$ iff a generates $\mathbb{Z}_n$ sufficiently answer this question? Or do you think there is more I need to show? 4. Originally Posted by platinumpimp68plus1 So does the fact that $(a,n)=1$ iff a generates $\mathbb{Z}_n$ sufficiently answer this question? Or do you think there is more I need to show? You would have to prove: $a$ has inverse iff $(a,n)=1$ and then $<a>= \mathbb{Z}_n$ iff $(a,n)=1$ (notice there are two iff's to be proven) but yes, this would finish it, since by transitivity you would be saying that $a$ has an inverse iff $a$ generates the whole additive group. 5. I think I have the first statement proven, but I'm unsure about how to prove the second... any tips on where to start? 6. If $(a,n)=1$ then you can find integers $x,y$ such that $xa + yn = 1$; clearly $x$ is the multiplicative inverse of $a$ (mod n). To show that $a$ generates $\mathbb{Z}_n$ as an additive group, consider the elements $a,2a,3a,...,na$. I claim that they are all distinct; because if $ja\equiv ka \mod n$, then $a(j-k)\equiv 0 \mod n$. Since $a$ is invertible we can multiply by its inverse on both sides which yields $j \equiv k \mod n$. Hence $\{a,2a,...,na\}=\mathbb{Z}_n$. 7. Thank you!!!
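A brute-force check of the equivalence worked out in this thread (an element of $\mathbb{Z}_n$ has a multiplicative inverse iff it generates the additive group), written as a small Python sketch; the range of n tested is arbitrary.

```python
from math import gcd

def units(n):
    """Elements of Z_n that have a multiplicative inverse, i.e. gcd(a, n) = 1."""
    return {a for a in range(1, n) if gcd(a, n) == 1}

def additive_generators(n):
    """Elements a with <a> = Z_n under addition: {a, 2a, ..., na} hits every residue."""
    return {a for a in range(1, n) if len({(k * a) % n for k in range(1, n + 1)}) == n}

for n in range(2, 40):
    assert units(n) == additive_generators(n)
print("units of Z_n coincide with additive generators of Z_n for n = 2, ..., 39")
```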
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 41, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.960325300693512, "perplexity_flag": "head"}
http://mathoverflow.net/questions/34465/automorphism-groups-and-vector-fields
## Automorphism groups and vector fields ### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points) If the projective automorphism group of a smooth complex projective algebraic variety $X$ has positive dimension, then $X$ admits a nonzero holomorphic vector field i.e. $H^0(X,T_X)\neq 0$. This is easy to prove using analytic methods: every subgroup of $PGL_{n}(\mathbf{C})$ of positive dimension contains a $Ga$ or a $Gm$; both give analytic vector fields on $\mathbf{P}^{n-1}(\mathbf{C})$ tangent to $X$. There is probably an algebraic proof along the same lines which would work over any algebraically closed field (of any characteristic). I'd like to ask: is there a reference for that? Is there a generalization of this for smooth varieties which are not necessarily complete and/or are not necessarily embedded in any projective space? upd: settled by BCnrd and Francesco Polizzi in the comments. - 1 The standard observation to prove this is that the tangent space to the automorphism scheme at its identity point is the space of global vector fields (consider how to describe a $\mathbf{C}[\epsilon]$-automorphism of $X_ {\mathbf{C}[\epsilon]}$ which lifts the identity modulo $\epsilon$), and a non-etale locally finite type group scheme has a non-zero tangent space at the identity point. If you don't assume completeness for $X$ then the automorphism functor is generally not represented by a scheme (nor is it even an algebraic space); consider the automorphism functor of an affine space. – BCnrd Aug 4 2010 at 2:09 BCnrd: re the automorphisms of the affine space: true, but the affine space does have nonzero vector fields (and the automorphism functor has representable subfunctors). – algori Aug 4 2010 at 2:37 algori: in the absence of representability, how do you propose to replace the hypothesis that the automorphism scheme has positive dimension? Could demand that the automorphism functor contains a representable subfunctor locally of finite type with positive dimension, and then use the same argument as above, but is that satisfactory to you? Curiously, in the affine case the aut functor is always a direct limit of representable affine finite type subfunctors; this follows by the same method used to prove Lemma A.8.13 in the book "Pseudo-reductive groups". – BCnrd Aug 4 2010 at 2:46 BCnrd -- I'm not sure what the "right" generalization is. The problem is how to translate the notion of discrete. (But I guess the version you mention would suffice for most practical purposes). Re the observations on the automorphism schemes: is there a reference for that? – algori Aug 4 2010 at 3:22 algori: the interpretation of the tangent space to the identity of the automorphism functor is something I think I first learned from either examples near the end of Schlessinger's first paper on versal deformations (from his thesis) or perhaps from Grothendieck's Seminaire Cartan expose on Hilbert schemes (in which he discusses Aut schemes near the end, as an application). – BCnrd Aug 4 2010 at 3:51 show 4 more comments
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 12, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9176047444343567, "perplexity_flag": "head"}
http://physics.stackexchange.com/questions/tagged/charge+forces
# Tagged Questions 1answer 61 views ### Find the dielectric constant of the medium? Two point charges a distance $d$ apart in free space exert a force of $1.4\times10^{-4}N$. When the free space is replaced by a homogeneous dielectric medium, the force becomes $0.9\times10^{-4}N$. ... 3answers 105 views ### Is electron velocity at induction higher than in a wire? When looking to the electrostatic induction on a microscopic level, do the electrons really move with high velocities or they move like when a current passes through the wire (slowly). 1answer 136 views ### Origin of electric charge Baryons have charges that are the result of a polynomial calculation of their building blocks (quarks)'s fractional charges. But what gives these quarks electric charges? What interactions do they ... 3answers 119 views ### Explanation on the resulting forces of two positive point charges Why will the resulting force lines of two positive point charges be like this: I would expect this: 1answer 57 views ### Charged plane in an electric field acceleration A perpendicular plane to an electric field's lines of force has more electric flux than a plane that is in parallel with the lines of force, right? Does this mean that a charged plate would ... 1answer 194 views ### Find the distance of a third charge [closed] The problem I am having is: Two positive charges (+8.0 mC and +2.0 mC) are separated by 300 m. A third charge is placed at distance r from the +8.0 mC charge in such a way that the resultant electric ... 1answer 146 views ### Force due to combination of free space and dielectric I will make a generalized form of my question. There are two point charges $q$, $x$ distance apart. And there is a dielectric slab of thickness $t$ and of dielectric constant $K$. Should the force ...
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 7, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9172449111938477, "perplexity_flag": "middle"}
http://mathhelpforum.com/differential-equations/172461-form-solution-linear-ode.html
# Thread:

1. ## The form of the solution of a linear ODE

Hi everyone. I'm having a little trouble understanding something, so maybe (hopefully) you guys can help. It is a well-known fact in differential equations that the general solution of a first-order linear differential equation is the sum of the general solution of the associated homogeneous equation and a particular solution of the full equation, that is $y=y_c+y_p$, where $y_c$ solves $\frac{dy}{dx}+P(x)y=0$ and $y_p$ solves $\frac{dy}{dx}+P(x)y=f(x)$. Now, I understand that this is a truism because $\frac{d}{dx}[y_c+y_p]+P(x)[y_c+y_p]=\underbrace{\frac{dy_c}{dx}+P(x)y_c}_0+\underbrace{\frac{dy_p}{dx}+P(x)y_p}_{f(x)}=f(x)$, but what I'm having trouble with is that of course adding zero to anything doesn't change it at all, since we are working over the real numbers. So, what is the purpose here? I mean, I could just as easily say that the number 5 has the property that $\frac{dy_c}{dx}+P(x)y_c+5=5$. But to what end? By the way, I'm really sorry if this question makes no sense to you. If it doesn't, just tell me and I'll try to figure out what my malfunction is on my own. Thank you all for reading.

2. Here's the purpose: it's all about IVPs. If you have initial conditions, your particular solution isn't necessarily going to satisfy them. But, with the homogeneous solution, which contains arbitrary constants of integration, you can tailor your solution to match the initial condition. That's why you want the homogeneous part in there.
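A small SymPy illustration of the point made in the reply: the homogeneous part of $y=y_c+y_p$ carries the arbitrary constant that an initial condition pins down. The particular equation $y'+2y=x$ is just an illustrative choice, not one taken from the thread.

```python
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')

ode = sp.Eq(y(x).diff(x) + 2*y(x), x)          # P(x) = 2, f(x) = x

general = sp.dsolve(ode)                        # y = C1*exp(-2*x) + x/2 - 1/4, i.e. y_c + y_p
with_ivp = sp.dsolve(ode, ics={y(0): 1})        # the constant in y_c is fixed by y(0) = 1

print(general)
print(with_ivp)
```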
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 7, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9714688062667847, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/257221/show-integrals-are-equal-and-limit-of-a-sequence-as-a-function
# Show integrals are equal and limit of a sequence as a function.

Define $L(x)=\int_{1}^x {1\over t} dt$. NOTE: I realize that $L(x)$ is the definition of $\ln(x)$, but we aren't allowed to use that. Our professor is walking us through the definitions of $\ln(x)$ and $e^x$.

Part A: Show that $L({1\over x}) = -L(x)$. I've tried several different substitutions for this and even a direct proof, but I'm completely stuck after working on and thinking about this for the past several hours. Help is greatly appreciated!

Part B: Using the Cauchy criterion (comparing $s_m$ and $s_n$ for $m>n$) to show that the sequence $$s_n = 1 + {1 \over 2} + {1 \over 3} + ... + {1 \over n}$$ is divergent, show that $L(x)$ tends to $\infty$ as $x \to \infty$. Basically, I'm trying to show that if the limit of the sequence converges to some L, then the function of that sequence also converges to the same L. I suspect that I need to show that $L(x)=s_n={1 \over x}$ in some sense, but I don't know how to formally state this idea. -

2 A. When you write the definition of $L(1/x)$, what do you get? Is there a simple change of variables you can use to relate the new interval of integration to the interval $[1,x]$? – GEdgar Dec 12 '12 at 17:18 Is the Cauchy criterion the fact that convergent sequences are Cauchy, or that the convergence of the series for a monotonic sequence is determined by the $2^n$th terms, more usually called the Cauchy condensation test? – ronno Dec 12 '12 at 18:01

## 4 Answers

For part A, I think the substitution $s=1/t$ should see you through. For part B, think about grouping with powers of 2, or something similar. Then compare the sum to the integral. This might help. -

1 For $x>0$, $$[L(\tfrac1x)]^{\prime}=L^{\prime}(\tfrac1x)\,(\tfrac1x)^{\prime}=x\cdot\frac{-1}{x^2}=-\frac1x=(-L(x))^{\prime},$$ and since both sides vanish at $x=1$, this implies $L(\frac1x)=-L(x)$.
2 The harmonic series diverges. By the integral test, $$\sum_{n=2}^{N}\frac{1}{n}\le \int_{1}^{N}\frac{1}{x}dx,$$ so, since the left-hand side grows without bound, it follows that $$\lim_{N\to +\infty}L(N)=+\infty$$ -

1 It seems Nameless didn't read the question... – GEdgar Dec 12 '12 at 17:21 @GEdgar How about now? – Nameless Dec 12 '12 at 17:27

For the first part, $$L(1/x) = \int_1^{1/x} \dfrac1t dt$$ Let $y=1/t$, then $dt = -\dfrac{dy}{y^2}$. $$L(1/x) = \int_1^x y\left( -\dfrac{dy}{y^2} \right) = - \int_1^x \dfrac{dy}y = - L(x)$$ For the second part, note that for all $n \in \mathbb{N}$, we have $$s_{2n} - s_n = \dfrac1{n+1} + \cdots + \dfrac1{2n} \ge \dfrac1{2n} + \dfrac1{2n} + \cdots + \dfrac1{2n} = \dfrac12$$ Hence, by the Cauchy criterion the sequence diverges. Now note that $$\int_1^x \dfrac{dt}t \ge \int_1^2 \dfrac{dt}2 + \int_2^3 \dfrac{dt}3 + \cdots + \int_{\lfloor x \rfloor - 1}^{\lfloor x \rfloor} \dfrac{dt}{\lfloor x \rfloor} = \dfrac12 + \dfrac13 + \cdots + \dfrac1{\lfloor x \rfloor} = s_{\lfloor x \rfloor} - 1$$ -

$$L\left(\frac{1}{x}\right):=\int_1^{1/x}\frac{dt}{t}\;\;,\;\;u:=\frac{1}{t}\,\,,\,du=-\frac{1}{t^2}dt=-u^2\,dt\Longrightarrow$$ $$L\left(\frac{1}{x}\right)=\int_1^x-\frac{du}{u^2}\cdot u=-\int^x_1\frac{du}{u}=-L(x)$$ Added Part B: For $\,m>n\,$: $$|s_m-s_n|=\left|\frac{1}{n+1}+\frac{1}{n+2}+\ldots +\frac{1}{m}\right|\geq \frac{m-n}{m}=1-\frac{n}{m}$$ We can thus keep $\,n\,$ fixed and let $\,m\to\infty\,$; the lower bound $1-\frac nm$ tends to $1$, so $|s_m-s_n|$ does not become small, which means the sequence cannot be Cauchy and thus cannot converge. Since its terms are obtained by adding positive numbers, the sequence diverges to $\infty$. -

Holy cow, thank you so much! You have no clue what a relief this is! Also, how simple that substitution was!
I actually tried that, but, looking back at my work, I made a simple algebra mistake :( – ray Dec 12 '12 at 17:43 Yes, sometimes simple things just slip away by our side without us seeing them. Constant study and doing exercises do a lot to make things better. You have several answers now, so consider upvoting those that you find useful and, eventually, accept the one that you find the best for you. – DonAntonio Dec 12 '12 at 17:56
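Both parts of the question are easy to sanity-check numerically; a short sketch using SciPy's quadrature for $L$ (the specific test values of $x$ and $N$ are arbitrary):

```python
from scipy.integrate import quad

def L(x):
    """L(x) = integral of 1/t from 1 to x."""
    return quad(lambda t: 1.0 / t, 1.0, x)[0]

# Part A: L(1/x) + L(x) should vanish.
for x in (2.0, 5.0, 37.0):
    print(x, L(1.0 / x) + L(x))          # ~0 up to quadrature error

# Part B: L(N) grows without bound, tracking the divergent harmonic partial sums.
for N in (10, 100, 1000, 10000):
    s_N = sum(1.0 / n for n in range(1, N + 1))
    print(N, L(N), s_N)                  # L(N) >= s_N - 1, and both keep growing
```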
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 25, "mathjax_display_tex": 11, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9419096112251282, "perplexity_flag": "head"}
http://crypto.stackexchange.com/questions/6518/finding-roots-in-mathbbz-p
# Finding roots in $\mathbb{Z}_p$ Is there an efficient way (algorihtm) to compute the solutions of the congruence $x^n\equiv a \pmod p$? • $n\in \mathbb{N}$ • $a\in\mathbb{Z}_p$ • $p$ is a large prime number Note that: • By efficient I mean computationally efficient even when the given prime $p$ is appropriate for use in cryptographic protocols (hard to find discrete logarithm in $\mathbb{Z}_p$) • We do not know the prime factors of $p-1$. It’s easy to find one solution when $\gcd(n,p-1)=1$. I think this should be as hard as finding a Discrete Logarithm or as Factoring. Any ideas ? Thank you for your time. - ## 3 Answers This is an extended but still partial attempt at an explicit solution. Since $p$ is prime, Fermat's little theorem tells us that: • $\forall z\in\mathbb Z_p, z^p\equiv z\pmod p$. • $\forall z\in\mathbb Z_p^*, z^{p-1}\equiv1\pmod p$. We settle some easy cases: • If $n=0$ and $a\equiv 0\pmod p$, our equation $x^n\equiv a\pmod p$ is ill-defined. • If $n>0$ and $a\equiv 0\pmod p$, the solution is $x\equiv0\pmod p$. • If $n\equiv0\pmod{p-1}$ and $a\equiv 1\pmod p$, any $x\not\equiv0\pmod p$ is solution. • If $n\equiv0\pmod{p-1}$, $a\not\equiv 0\pmod p$ and $a\not\equiv 1\pmod p$, there is no solution. • If $n\equiv1\pmod{p-1}$, the solution is $x\equiv a\bmod p$. In all the other cases, $x\equiv0\pmod p$ will not be a solution, and we can reduce $n$ modulo $p-1$ to obtain an equivalent equation. From then on, we will assume $1<n<p-1$, $a\not\equiv0\pmod p$, $x\not\equiv0\pmod p$, and proceed to solve our equation $x^n\equiv a\pmod p$. As pointed out in the question, things are easy when $\gcd(n,p-1)=1$: the only solution is $x\equiv a^{n^{-1}\bmod(p-1)}\pmod p$. Proof: For the stated $x$, we have $x^n\equiv a^{(n^{-1}\bmod(p-1))·n}\pmod p$; thus $\exists k\in\mathbb N, x^n\equiv a^{1+k·(p-1)}\pmod p$; thus $x^n\equiv a\pmod p$; thus the stated $x$ is a solution of our equation. The function $z\mapsto z^n$ is thus a surjective function over the finite set $\mathbb Z_p^*$, thus a bijection, thus there is no solution other than the stated one. If $\gcd(n,p-1)\neq 1$, let it be $m$, with both $n/m$ and $(p-1)/m$ integers. We find $b$ such that $b^{n/m}\equiv a\bmod(p-1)$ by the above method; that is: $b=a^{(n/m)^{-1}\bmod(p-1)}\bmod p$. Solving $x^m\equiv b\pmod p$ is equivalent to our original equation. By raising to the power $(p-1)/m$, we see that $x^m\equiv b\pmod p\implies 1\equiv b^{(p-1)/m}\pmod p$. Thus if $1\not\equiv b^{(p-1)/m}\pmod p$, which we can check, there is no solution. This test is quite useful; for example when $p=71$, $n=55$, $a=2$, we have $m=5$, $(n/m)^{-1}\bmod(p-1)=51$, $b=3$, $b^{(p-1)/m}\bmod p=54$, thus no solution, and that's the case for most $a$. But for $a=20$ we get $b=45$, $b^{(p-1)/m}\bmod p=1$, and indeed there are $5$ solutions $\{18,19,24,32,49\}\pmod p$. We can thus restrict to solving $x^n\equiv a\pmod p$ when $n$ divides $p-1$, $n>1$, and $a^{(p-1)/n}\equiv 1\pmod p$ (we have shown that any other $a$ leaves the equation without solution, except $a\equiv 0\pmod p$; and we have an efficient method to reduce to that case for any other $n$). That equation has exactly $n$ distinct solutions $\pmod p$. 
Proof: by the fundamental theorem of algebra, any polynomial of degree $n$ in the field $\mathbb Z_p$ has exactly $n$ roots, counted with multiplicity; applying that to $x^n-a$ we see that $x^n\equiv a\pmod p$ has at most $n$ solutions $\pmod p$; similarly there are at most $(p-1)/n$ solutions in $a$ to the equation $a^{(p-1)/n}\equiv 1\pmod p$; thus the function $z\mapsto z^n$ over the finite set $\mathbb Z_p^*$ maps at most $n$ elements to any element, and at most $(p-1)/n$ elements have a preimage; by a counting argument, every element with a preimage thus has exactly $n$ preimages. It follows that we can not aim at finding all the solutions in time polynomial w.r.t. $\log p$ for arbitrary $n$: there are too many solutions when $n$ is big, e.g. $n=(p-1)/2$. We can still aim at finding one solution, or perhaps the smallest one. When $(p-1)/n$ is small, there is an efficient and trivial probabilistic algorithm that finds a solution: pick a random $x$, and check if $x^n\equiv a\pmod p$, until a suitable $x$ is found; that is expected to require $(p-1)/n$ steps. By trying $x$ sequentially starting from $2$, we can find the smallest $x$. If the exponent $n$ is even, we can change the unknown to $y\equiv x^{n/2}\pmod p$ and first solve $y^2=a\pmod p$, which is the well-studied problem of finding a square root modulo a prime. By applying this recursively, we can remove the powers of $2$ from the factorization of $n$. Assuming a polynomial-time algorithm finding a solution for any odd exponent $n$, our resulting algorithm would find a solution and work for any exponent, while remaining polynomial in $\log p$ after reduction of the exponent $n\pmod p$. This is studied and generalized by Adleman-Manders-Miller, solving the problem in time polynomial w.r.t. $\log p$ for fixed $n$. Their algorithm is (at worse) linear w.r.t. $n$ (not $\log n$), and as an aside requires the factorization of $n$. This might be improved by Barreto-Voloch, and perhaps these two re-visitations of Adleman-Manders-Miller and Barreto-Voloch. [To be continued. Feel free to improve this community wiki, e.g. by including a description of Adleman-Manders-Miller; I'll have no time to do that in the following week.] An instance of the original problem with random exponent has a fair chance of being solvable with the above techniques. I have the feeling do not know if this is easier than discrete logarithm modulo $p$ even in the harder cases, like $n\cdot k=(p-1)$ with $k\approx\sqrt{p-1}$, and $p$,$k$,$n$ are primes. - Thank you for helping out. – epsilon Feb 28 at 20:56 There are standard algorithms for this. See, e.g., Section 7.3 of Algorithmic Number Theory (Eric Bach, Jeffrey Shallit, MIT Press, 1996). If you want to take square roots, you can use Cipolla's algorithm, the Tonelli-Shanks algorithm, or Pocklington's algorithm. The Tonelli-Shanks algorithm apparently has a generalization to take arbitrary $n$th roots. The GAP software can take $n$th roots modulo a prime (see RootMod). Here are some academic papers that also describe algorithms for solving this problem: • On Taking Roots in Finite Fields. Leonard Adleman, Kenneth Manders, Gary Miller. FOCS 1977. • Efficient Computation of Roots in Finite Fields. Paulo Barreto, Jose Voloch. Designs, Codes and Cryptography. These algorithms are computationally efficient if $n$ is small. I don't know if there are any algorithms that are computationally efficient if $n$ is large (i.e., that work for arbitrary $n$). It's a good question! 
- Do you know the asymptotic cost of these algorithms w.r.t. to the exponent, in the worst case? – fgrieu Mar 2 at 10:45 @fgrieu, good question. Actually, now that I take another look at those two papers, the running time seems to be linear in $n$, whereas we'd really want a running time that is poly($\log n$). I don't know whether there are efficient algorithms for large $n$. – D.W. Mar 2 at 11:02 Yes, there is. There is a polynomial algorithm for solving general polynomial equations, like $x^n - a \mod{p}$ in a finite field. And for simple powers, its even easier. So I wouldn't use this for a crypto system. Note that even square roots modulo a composite number is hard. - Can you be more precise? I couldn’t find an efficient algorithm to compute roots in $\mathbb{Z}_p$ if computing discrete logarithms there is hard. – epsilon Feb 28 at 18:51 – epsilon Mar 1 at 7:21 In the question as I read it, it is wanted an algorithm polynomial in $\log p$ and $\log n$. Is that the case for the unstated "polynomial algorithm for solving general polynomial equations" you are thinking of? I'm afraid that the algorithm outlined in section 4 here might be polynomial in $n⋅\log p$. Indeed, for simple powers, there are faster options, like Adleman-Manders-Miller but that seems polynomial in $\log p$ for fixed $n$ (notice the step "we factor gcd(n,p−1)"). – fgrieu Mar 1 at 8:51 1 Cantor-Zassenhaus or Berlekamp algorithms will factor polynomials, e.g. – Henno Brandsma Mar 1 at 13:50 1 Both algorithms have cost polynomial w.r.t. $\log p$, but at least polynomial w.r.t. $n$ (not $\log n$). In the context of the question, we'd like an algorithm polynomial w.r.t. $n$, if there is one (that's to find one solution; enumerating all the solutions has cost at least linear with the number of solutions, which can be $n$). – fgrieu Mar 2 at 18:05
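The easy case singled out in the question, gcd(n, p−1) = 1, can be written down directly: the unique root is a^(n^-1 mod (p−1)) mod p. Here is a minimal Python sketch of just that case; the small prime and exponent are arbitrary test values, and the harder cases need the Adleman-Manders-Miller-style machinery discussed above.

```python
from math import gcd

def nth_root_mod_p(a, n, p):
    """One solution of x^n = a (mod p), only for the easy case gcd(n, p-1) = 1."""
    a %= p
    if a == 0:
        return 0
    if gcd(n, p - 1) != 1:
        raise ValueError("gcd(n, p-1) != 1: use the harder methods discussed above")
    inv = pow(n, -1, p - 1)        # n^{-1} mod (p-1); needs Python 3.8+
    return pow(a, inv, p)

p, n, a = 101, 7, 5                # arbitrary small example; gcd(7, 100) = 1
x = nth_root_mod_p(a, n, p)
assert pow(x, n, p) == a
print(x)
```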
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 142, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9223827719688416, "perplexity_flag": "head"}
http://mathhelpforum.com/trigonometry/101879-sec-calcul.html
# Thread: 1. ## Sec-calcul Calculate : $\sec ^8 \left( {\frac{\pi }{9}} \right) + \sec ^8 \left( {\frac{{5\pi }}{9}} \right) + \sec ^8 \left( {\frac{{7\pi }}{9}} \right)<br />$ 2. Originally Posted by dhiab Calculate : $\sec ^8 \left( {\frac{\pi }{9}} \right) + \sec ^8 \left( {\frac{{5\pi }}{9}} \right) + \sec ^8 \left( {\frac{{7\pi }}{9}} \right)<br />$ $\frac\pi9,\ \frac{5\pi}9,\ \frac{7\pi}9$ are the roots of the equation $\cos(3\theta) = \tfrac12$. But $\cos(3\theta) = 4\cos^3\theta-3\cos\theta$. Therefore $\cos\Bigl(\frac\pi9\Bigr),\ \cos\Bigl(\frac{5\pi}9\Bigr),\ \cos\Bigl(\frac{7\pi}9\Bigr)$ are the roots of the equation $4x^3-3x = \tfrac12$. Writing y = 1/x, you see that $\sec\Bigl(\frac\pi9\Bigr),\ \sec\Bigl(\frac{5\pi}9\Bigr),\ \sec\Bigl(\frac{7\pi}9\Bigr)$ are the roots of the equation $y^3 + 6y^2 - 8 = 0$. If we call the roots of that equation $\alpha,\ \beta,\ \gamma$ then we know that $\textstyle\sum\alpha = -6$, $\textstyle\sum\beta\gamma = 0$ and $\alpha\beta\gamma = 8$. Now successively compute that $\textstyle\sum\alpha^2 = \bigl(\sum\alpha\bigr)^2 - 2\sum\beta\gamma = 36$, $\textstyle\sum\beta^2\gamma^2 = \bigl(\sum\beta\gamma\bigr)^2 - 2\alpha\beta\gamma\sum\alpha = 96$, $\textstyle\sum\alpha^4 = \bigl(\sum\alpha^2\bigr)^2 - 2\sum\beta^2\gamma^2 = 36^2 - 192 = 1104$, $\textstyle\sum\beta^4\gamma^4 = \bigl(\sum\beta^2\gamma^2\bigr)^2 - 2(\alpha\beta\gamma)^2\sum\alpha^2 = 96^2 - 128\times36 = 4608$, $\textstyle\sum\alpha^8 = \bigl(\sum\alpha^4\bigr)^2 - 2\sum\beta^4\gamma^4 = 1104^2 - 2\times4608 = \boxed{1,209,600}$. That looks like a large number, but if you compute the 8th powers of $\sec\Bigl(\frac\pi9\Bigr)\approx1.064$, $\sec\Bigl(\frac{5\pi}9\Bigr)\approx-5.759$ and $\sec\Bigl(\frac{7\pi}9\Bigr)\approx-1.305$, you'll find that the sum comes out right. 3. My calculator gives this value: sec^8 (pi/9) + sec^8 (5pi/9) + sec^8 (7pi/9) > 1.2096 How to get a closed form answer for this? dhiab, any hint?
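A quick numerical check of the closed-form answer above (and of the calculator value in the last post, which is presumably 1.2096 × 10^6 rather than 1.2096):

```python
from math import pi, cos

angles = (pi / 9, 5 * pi / 9, 7 * pi / 9)
total = sum((1.0 / cos(t)) ** 8 for t in angles)
print(total)   # ~1209600.0, agreeing with the symmetric-function computation above
```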
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 21, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9137685298919678, "perplexity_flag": "head"}
http://mathhelpforum.com/trigonometry/94262-solving-equations.html
# Thread:

1. ## Solving equations

Solve the following equation giving values for 0-360deg cosA=cos2A+cos4A cos2A=cos(30-A) i probably missed something on the identities part. thanks

2. $cos A=cos 2A+cos 4A$ $cos A=2cos 3A cos A$ $cos A=0$ or $cos 3A=\frac{1}{2}$ $A=90^0$ or $3A=60^0$ $A=90^0$ or $A=20^0$

3. Originally Posted by arze Solve the following equation giving values for 0-360deg cosA=cos2A+cos4A Hi You can use the trig identity $\cos p + \cos q = 2 \cos \left(\frac{p+q}{2}\right) \: \cos \left(\frac{p-q}{2}\right)$

4. ah! that's what i missed in the first one. thanks!

5. Originally Posted by arze ah! that's what i missed in the first one. thanks! $cos 2A = cos (30-A)$ $2A=30^o-A$ $3A=30^o$ $A=10^o$ OR $2A=A-30$ $A=-30^0$

6. Originally Posted by alexmahone $cos 2A = cos (30-A)$ $2A=30^o-A$ $3A=30^o$ $A=10^o$ OR $2A=A-30$ $A=-30^0$ i'm supposed to give the answers $10^o$, $130^o$, $250^o$, and $330^o$ i can understand $10^o$ and $330^o$ but what about the other two?

7. Originally Posted by arze i'm supposed to give the answers $10^o$, $130^o$, $250^o$, and $330^o$ i can understand $10^o$ and $330^o$ but what about the other two? Looking back at alexmahone's work: Originally Posted by alexmahone $cos 2A = cos (30-A)$ $2A=30^o-A$ $3A=30^o$ If A is supposed to be between 0° and 360°, then 3A should be between 0° and 1080° (360° times 3). So from here you also have to mention that $3A=390^{\circ}$ and $3A=750^{\circ}$ because these angles are coterminal to 30° and they are between 0° and 1080°. Now divide both sides of both equations by 3 and you will get $A=130^{\circ}$ and $A=250^{\circ}$.

8. how could i have missed that? thank you very much!
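The answers quoted in the thread are easy to verify numerically. Here is a short Python check of the four solutions given for cos 2A = cos(30° − A) and of the two values found in post 2 for the first equation; the angles are taken from the thread, not derived independently.

```python
from math import cos, radians

# Second equation: cos(2A) = cos(30 - A), answers 10, 130, 250, 330 degrees.
for A in (10, 130, 250, 330):
    print(A, round(cos(radians(2 * A)) - cos(radians(30 - A)), 12))   # all ~0

# First equation: cos(A) = cos(2A) + cos(4A), with A = 20 and 90 degrees from post 2.
for A in (20, 90):
    lhs = cos(radians(A))
    rhs = cos(radians(2 * A)) + cos(radians(4 * A))
    print(A, round(lhs - rhs, 12))                                    # ~0
```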
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 40, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9504738450050354, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/35868/fundamental-group-of-lie-groups/36032
## Fundamental group of Lie groups ### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points) Let $T$ be a torus $V/\Gamma$, $\gamma$ a loop on $T$ based at the origin. Then it is easy to see that $$2 \gamma = \gamma \ast \gamma \in \pi_1(T).$$ Here $2 \gamma$ is obtained by rescaling $\gamma$ using the group law, while $\ast$ denotes the operation in the fundamental group. The way I can check this is rather direct: one lifts the loop (up to based homotopy) to a segment in $V$ and uses the identification of $\pi_1(T)$ with the lattice $\Gamma$. Is there a more conceptual way to prove this identity that will extend to more general (real or complex) Lie groups, or maybe to linear algebraic groups? Or is this fact false in more generality? - ## 4 Answers Yay! It's the Eckmann-Hilton argument! There are two group structures on $\pi_1(G)$ and they commute with each other. It turns out that that is sufficient to show that they are the same structure and that that structure is commutative. For a proof of this, using interpretative dance, take a look at the movie in this seminar that I gave last semester. There's also something on YouTube by The Catsters (see the nLab page linked above). (Forgot to actually answer your question!) This only depends on the fact that $\pi_1$ is a representable group functor and that $G$ is a group object in $hTop$. So it will extend to other group objects in $hTop$, such as those that you mention. This also explains why $\pi_k$ is abelian for $k \ge 2$ since $\pi_2(X) = \pi_1(\Omega X)$ and $\Omega X$ is a group object in $hTop$. - (NB: the fact that $\pi_1$ is representable isn't the main point, it's that $\pi_1$ preserves products so takes group objects to group objects. Representable implies this.) – Andrew Stacey Aug 17 2010 at 13:33 ### You can accept an answer to one of your own questions by clicking the check mark next to it. This awards 15 reputation points to the person who answered and 2 reputation points to you. The Eckmann-Hilton argument is the correct answer, but it might be amusing to note that there is a very explicit homotopy as well. Suppose $\alpha_1$, $\alpha_2 \in \pi_1 G$, and define $\alpha : I^2 \rightarrow G$ by $\alpha(t_1,t_2) = \alpha_1(t_1) \cdot \alpha_2(t_2)$, where $\cdot$ is the product in $G$. Then along the diagonal, we have $\alpha_1 \cdot \alpha_2$, the product using the group operation, while along the bottom edge followed by the right edge we have the composition $\alpha_1 * \alpha_2$, the product of loops in the fundamental group. Deforming the path shows they're homotopic. Similarly, along the left edge, followed by the top edge we get $\alpha_2 * \alpha_1$, so this product is commutative. - Very nice argument! – Andreas Thom Aug 19 2010 at 12:16 There is an elementary, formal argument given in Spanier (Theorem 1.6.8, p. 43, 1966 edition): If two composition laws have a common two-sided identity element and are mutually distributive, then they are equal, commutative and associative. See also the following corollaries, esp. Corollary 9 on homotopy classes of maps from an H-cogroup to an H-space. I always found the simplicity of this riveting. - i believe this is exactly what Andrew mentioned above... 
minus that reference – Sean Tilson Aug 19 2010 at 22:42 1 The link in my answer is to the nLab page where an even weaker hypothesis is assumed: "If a set is equipped with two binary operations with identity elements, as long as they commute with each other in the sense that one is (with respect to the other) a homomorphism of sets with binary operations, then everything else follows.". The argument is, as you say, quite formal and works in extremely general cases. – Andrew Stacey Aug 20 2010 at 7:11 I've always found it entertaining that one proves commutativity before associativity in this argument. – Robert Bruner Aug 20 2010 at 15:38 You don't need to assume a common identity element; it suffices to assume that each operation has its own 2-sided identity. (Of course, it ultimately follows that the two identity elements are equal.) Also, the hypothesis in the E-H argument, that each of the operations is a homomorphism with respect to the other, is not what is usually called distributivity. (For example, the meet and join operations in a distributive lattice are each distributive over the other, but this mutual distributivity is not what is wanted for the E-H argument.) – Andreas Blass Aug 20 2010 at 19:15 @Sean: yes, but my point was that this is true in general i.e. for any two composition laws in hom(X,Y), X,Y objects in a category, not just the homotopy category of pointed topological spaces and H-cogroups to H-spaces. Another reference is Peter Hilton's Homotopy Theory and Duality (1965), p. 5 (for the pointed homotopy category). -
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 31, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9405134916305542, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/292752/prove-pa-mid-b-text-and-a-1
# Prove $P(a \mid (b \text{ and } a)) =1$

So I am supposed to prove that the probability of $a$ given $(b \text{ and } a)$ is $1$. One way of seeing this is that, once $(b \text{ and } a)$ is given, the sample space has been reduced to that event, and on that sample space $a$ always holds, so the answer is $1$. But when I tried to do it using Bayes' theorem, writing $P(b \text{ and } a \mid a)\cdot P(a)$ divided by $P(b \text{ and } a)$, I couldn't finish the computation. Can anyone explain this to me? -

## 2 Answers

Bayes' formula amounts to $p(x|y)p(y)=p(xy)$ by interchanging the roles of $x$ and $y$. So, here is a formal proof: $p(a|ab)\cdot p(ab)=p(aab)=p(ab)$. If $p(ab)\ne 0$ then you can divide by it to obtain $p(a|ab)=1$. -

You simply need to come back to the definition of conditional probabilities. Given two events $A,B$, the probability of $A$ given $B$ is defined as $$P(A|B)=\frac{P(A\cap B)}{P(B)}.$$ Of course, as you have certainly seen, it corresponds to the intuitive notion of "conditional probability": we restrict the sample space to $B$. Now, apply this to compute $P(A | A\cap B)$. Bayes' theorem follows from a simple application of this definition. You usually use it when you want to compute $P(A|B)$ knowing $P(B|A)$, $P(A)$ and $P(B)$. -
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 24, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.963636040687561, "perplexity_flag": "head"}
http://unapologetic.wordpress.com/2009/07/31/the-determinant-of-unitary-and-orthogonal-transformations/?like=1&source=post_flair&_wpnonce=966197e62c
# The Unapologetic Mathematician

## The Determinant of Unitary and Orthogonal Transformations

Okay, we’ve got groups of unitary and orthogonal transformations (and the latter we can generalize to groups of matrices over arbitrary fields). These are defined by certain relations involving transformations and their adjoints (transposes of matrices over more general fields). So now that we’ve got information about how the determinant and the adjoint interact, we can see what happens when we restrict the determinant homomorphism to these subgroups of $\mathrm{GL}(V)$. First the orthogonal groups. This covers orthogonality with respect to general (nondegenerate) forms on an inner product space $\mathrm{O}(V,B)$, the special case of orthogonality with respect to the underlying inner product $\mathrm{O}(V)$, and the orthogonal matrix group over arbitrary fields $\mathrm{O}(n,\mathbb{F})\subseteq\mathrm{GL}(n,\mathbb{F})$. The general form describing all of these cases is $\displaystyle O^*BO=B$ where $O^*$ is the adjoint or the matrix transpose, as appropriate. Now we can take the determinant of both sides of this equation, using the fact that the determinant is a homomorphism. We find $\displaystyle\det(O^*)\det(B)\det(O)=\det(B)$ Next we can use the fact that $\det(O^*)=\det(O)$. We can also divide out by $\det(B)$, since we know that $B$ is invertible, and so its determinant is nonzero. We’re left with the observation that $\displaystyle\det(O)^2=1$ And thus that the determinant of an orthogonal transformation $O$ must be a square root of ${1}$ in our field. For both real and complex matrices, this says $\det(O)=\pm1$, landing in the “sign group” (which is isomorphic to $\mathbb{Z}_2$). What about unitary transformations? Here we just look at the unitarity condition $\displaystyle U^*U=I_V$ We take determinants $\displaystyle\det(U^*)\det(U)=\det(I_V)=1$ and use the fact that the determinant of the adjoint is the conjugate of the determinant $\displaystyle\overline{\det(U)}\det(U)=\lvert\det(U)\rvert^2=1$ So the determinant of a unitary transformation $U$ must be a unit complex number in the circle group (which, incidentally, contains the sign group above). It seems, then, that when we take determinants the analogy we’ve been pushing starts to come out. Unitary (and orthogonal) transformations are like complex numbers on the unit circle, and their determinants actually are complex numbers on the unit circle.

Posted by John Armstrong | Algebra, Linear Algebra

## 2 Comments »

1. [...] the determinant of a unitary transformation is a unit complex number, and the determinant of a positive-semidefinite transformation is a nonnegative real number. If is [...] Pingback by | August 19, 2009 | Reply
2. [...] the determinant on itself, but we can easily restrict it to any subgroup. We actually know that for unitary and orthogonal transformations the image of this homomorphism must lie in a particular subgroup of . But in any case, the [...] Pingback by | September 8, 2009 | Reply
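A quick numerical illustration of the two determinant facts in the post, using random orthogonal and unitary matrices obtained from QR factorizations (a standard way to generate them; the size 5 and the seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)

Q, _ = np.linalg.qr(rng.standard_normal((5, 5)))                 # real orthogonal
U, _ = np.linalg.qr(rng.standard_normal((5, 5))
                    + 1j * rng.standard_normal((5, 5)))          # complex unitary

print(np.linalg.det(Q))          # +1 or -1 up to rounding: a square root of 1
print(abs(np.linalg.det(U)))     # 1 up to rounding: det(U) lies on the unit circle
```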
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 19, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8912826776504517, "perplexity_flag": "head"}
http://mathhelpforum.com/algebra/189682-rational-number.html
# Thread:

1. ## Is this a rational number?

2.202002000200002... There is a pattern there so part of me wants to say yes. But there is technically no repeat or termination so a bigger part of me wants to say no. I am making answer sheets for my students and initially I was going to say no, this is not a rational number, due to the fact there is no repeat or termination of the decimal. But looking at the worksheet, it gives no options for the students to label a number as irrational. Just W, I or Q. Typically the directions for these sheets are very specific, so that got me thinking that this must be a rational number and that the obvious pattern must allow it to be translated into a fraction somehow, but I am not sure how I would do that. I know how to turn repeating and terminating decimals into fractions, but not this, if it is in fact possible. I know it is probably a basic question but I'm at a loss here.

2. ## Re: Is this a rational number? I believe a theorem states that every rational number has a decimal expansion that either terminates or eventually repeats. Therefore, if it does neither, it must be irrational.

3. ## Re: Is this a rational number? That is what I assumed, but the worksheet did not give an option for irrational numbers in the directions, and they are typically VERY specific. So just making sure I am not losing it. Thank you for the reply.

4. ## Re: Is this a rational number? And what are W and I, just for information? I assume that Q means rational numbers.

5. ## Re: Is this a rational number? Whole and Integers

6. ## Re: Is this a rational number? Well, this will probably be meddling in your affairs, but just in case. If W, I and Q are used as labels and are written in thin (regular) font, I guess it's OK. If, however, they denote sets of numbers and are written in thick font like this: $\mathbb{Q}$, then the universal notation for integers is $\mathbb{Z}$. Also, Wikipedia claims that "whole number" is a term with inconsistent definitions. It is better to say "natural numbers" and denote their set by $\mathbb{N}$.

7. ## Re: Is this a rational number? In the instructions it says W = whole, I = integers and Q = Rational. In the book they also define Whole numbers as {0, 1, 2, 3...}, Natural as {1, 2, 3, 4...} and Rational as any number that can be written in A over B form. So I am going based off of the information this company gave me.

8. ## Re: Is this a rational number? Hello, Jman115! $\text{Is this a rational number? }\:2.202002000200002\hdots$ There is a pattern there so part of me wants to say yes. But there is technically no repeat or termination, so a bigger part of me wants to say no. You are right . . . there is a pattern. But it does not have a repeating cycle. As Jman115 suggested, it must be irrational. We have: $X \;=\;\frac{2}{10^0} + \frac{2}{10^1} + \frac{2}{10^3} + \frac{2}{10^6} + \frac{2}{10^{10}} + \frac{2}{10^{15}} + \hdots$ The exponents are Triangular Numbers. That is: $X \;=\;2\sum^{\infty}_{n=1}\frac{1}{10^{\frac{n(n-1)}{2}}}$ The series is neither arithmetic nor geometric, nor a combination thereof. I have found no way to evaluate it. It can be written as a recurrence: $a_n \;=\;a_{n-1}\!\cdot\!\frac{1}{10^{n-1}},\;\;a_1\,=\,2$, but this doesn't help either . . .
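The series with triangular-number exponents in post 8 does reproduce the decimal in the question, which is easy to confirm with exact decimal arithmetic; the precision of 40 digits and the cutoff at n = 9 are arbitrary choices for the sketch.

```python
from decimal import Decimal, getcontext

getcontext().prec = 40
# Partial sum of 2 * sum_{n>=1} 10^(-n(n-1)/2): exponents 0, 1, 3, 6, 10, 15, ...
X = 2 * sum(Decimal(10) ** -(n * (n - 1) // 2) for n in range(1, 10))
print(X)   # 2.202002000200002000002..., the pattern from the question
```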
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 7, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9437405467033386, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/182221/relaxing-the-definition-of-a-von-neumann-regular-ring
# Relaxing the definition of a von Neumann regular ring Hereinafter, all rings are assumed to be unital but not necessarily commutative. A well-known class of rings are von Neumann regular rings, that is, rings $R$ such that for each $a\in R$ there is an $x\in R$ satisfying $a=axa$. They include many classical rings, but the motivation for the definition comes from ...analysis. It is not very hard to spot that a ring is von Neumann regular if and only if each its principal left ideal is generated by an idempotent element. I am interested whether anyone has studied rings for which each maximal left ideal, which is principal, is generated by an idempotent. Probably you'll ask for examples of rings with this property which are not von Neumann regular: the easiest one is the algebra $C(K)$ of scalar-valued continuous functions on an infinite compact space. I am interested also in rings which are not local but every maximal left ideal is principal. - ## 1 Answer I haven't seen and couldn't find anything on the first type of ring you described. However, generalizations of VNR rings are plentiful, so that is not saying much. You might look at stuff on cyclically presented simple modules/ideals (or even finitely presented ones). The equivalent condition that I was thinking of (which is admittedly less nice and more complicated than yours) for your condition is "All simple cyclically presented modules are projective." For the second type of ring you describe, you will have to stay away from (left and right) Noetherian rings, because it is known that such rings are principal right ideal rings. I imagine you are probably already aware of the commutative version (Kaplansky's theorem). Lam and Reyes generalized it to noncommutative Noetherian rings in one of my favorite papers here. If you are working with rings of continuous functions over noncompact spaces then you are nowhere near Noetherian rings :) At times like this I wish I owned a copy of that book by Gillman and Jerison... -
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 5, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9565134644508362, "perplexity_flag": "head"}
http://physics.stackexchange.com/questions/45111/application-of-diffraction-problem
# Application of diffraction problem!

Here is a problem that I am working on, which involves applying the concepts of refraction to the setting of the Sun: Air has a small, usually negligible, refractive effect; its index of refraction is 1.0002926. This causes the Sun to actually be below the horizon when it appears to be just on the verge of sinking below it. Suppose you are on the sea-shore watching the Sun apparently sinking into the ocean. When only its upper tip is still visible, by what fraction of the diameter of the Sun is that tip actually already below the surface? As an approximation, take the earth's atmosphere as being of uniform density out to a thickness of 8.600 km, beyond which there is no atmosphere. This means that, with the Earth's radius being 6400 km, your line of sight due West along the ocean surface to the horizon will intersect this "upper surface" of the atmosphere at about 331.9 km from your eye. (The diameter of the Sun subtends 0.5000 degrees at your eye.) This is how I've attempted to model the problem, but I'm not sure if it's correct... - This seems to me to be a literal copy of a homework question. If so I think you should add the "homework" tag and I think in any case you should show what effort you have made. – user16228 Nov 26 '12 at 1:16 I have a picture that I sketched, but it didn't let me post it because I'm a new member...but other than that, I have no idea how to approach this problem. – Vanessa Nov 26 '12 at 1:18 OK, well maybe you could describe the picture you have drawn in your question. I can't really help you now with this question, it's about 2 AM here and I am about to go to bed. I'm sure there are plenty of other people here who will give you good help though. – user16228 Nov 26 '12 at 1:23 Alright...I'm actually going to try and recreate it in paint and find a way to upload it... – Vanessa Nov 26 '12 at 1:25 This is how I'm imagining the situation, but I'm not sure if it's correct: – Vanessa Nov 26 '12 at 1:28

## 1 Answer

(Needless to say, nothing is drawn to scale.) First, evaluate $\alpha$ (see picture). Then, derive $i$: $i = 180 - 90 - \alpha$. Then, substitute into Snell's law and derive $i'$: $i' = \arcsin(n \sin(i))$ where $n=1.0002926$. Then derive the angle $j$ (see the picture). Don't forget to convert it to Sun diameters, that is, multiply by $2$, given that the Sun's diameter subtends $0.5$ degrees. I get $0.34308205134$ degrees, that is, $68.62$% of the diameter. Most important, when you have understood it, try to do it by yourself again. Good luck! - Thanks so much for your answer! I was able to work it out myself, but I didn't need to multiply by 2...is it possible that you could explain that step? – Vanessa Nov 27 '12 at 3:00 Because they ask "By what fraction of the diameter of the Sun...?". Since that diameter is half a degree, you divide your solution in degrees by 0.5, that is, multiply by 2. Then you get 0.68... but I multiplied by 100 to see it as a percentage. – Eduardo Guerras Valera Nov 27 '12 at 9:18 Alright! Thanks so much again! :D – Vanessa Nov 30 '12 at 6:02 You are welcome! :) – Eduardo Guerras Valera Nov 30 '12 at 12:39
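Here is a short Python version of the computation outlined in the answer. The geometry gives sin i = R/(R+h) for the angle between the line of sight and the radial normal at the top of the atmosphere (the same angle the answer reaches via its α), and Snell's law then gives the ray direction outside; the difference between the two angles is how far the Sun's tip actually sits below the straight-line direction. This is a sketch of the method, not the answerer's own code.

```python
from math import asin, sin, degrees

n = 1.0002926        # index of refraction of the uniform air layer
R = 6400.0           # Earth radius, km
h = 8.6              # assumed atmosphere thickness, km

i_inside = asin(R / (R + h))          # angle to the radial normal, inside the air layer
i_outside = asin(n * sin(i_inside))   # Snell's law at the top of the atmosphere

dip = degrees(i_outside - i_inside)
print(dip, dip / 0.5)                 # ~0.343 degrees, ~0.686 of the 0.5-degree solar disk
```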
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 11, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9547127485275269, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/206118/project-in-computer-science-and-mathematics
# Project in computer science and mathematics.

My background: I'm a third-year student. I study mathematics combined with computer science (with a focus on modeling, simulations and visualization). In order to get my degree, I have to complete a half-year project in which I combine mathematics with computer science.

What I want to do: I am in love with algebra and group theory. But I don't know how it would fit with the interdisciplinary project, where I have to combine it with computer science. Someone suggested cryptology, but I don't know a lot about it (any thoughts? and useful references?).

My problem/question: Is it possible to combine computer science with algebra? I also need project ideas (I would also like references). -

1 I believe the standard answer to this type of question is, "Don't you have an adviser of some sort you should talk to, for example, whoever is in charge of "grading" this project?" – Graphth Oct 2 '12 at 18:15 1 There is such a thing as "computational group theory", but I don't think it is a very fruitful subject, though I could be wrong. – akkkk Oct 2 '12 at 18:17 1 "Don't you have an adviser of some sort you should talk to, for example, whoever is in charge of "grading" this project?" Yes, I have, and I will talk to him. But I would like to know others' thoughts and ideas. – whoisitnow Oct 2 '12 at 18:18 @whoisitnow Okay, sounds good :) – Graphth Oct 2 '12 at 20:12

## 3 Answers

This is a bit challenging, but who knows, you might like that. Homotopy type theory has definite connections to algebra (the groupoid model) but also to computer science (implementations in Coq). Take a peek at http://homotopytypetheory.org/ Maybe you can find something more suitable in denotational semantics (http://en.wikipedia.org/wiki/Denotational_semantics). This is an application of algebra to the study of programming languages. -

Perhaps a project using Sage? http://www.sagemath.org/doc/thematic_tutorials/group_theory.html -

You may consider this old hat, but you could write a solver/simulator for a Rubik's Cube or some other puzzle that can be described with groups. As for cryptography, there have been proposals for cryptosystems based on braid groups (an overview). I believe they use the conjugacy search problem or something similar as the trap door (i.e. given $a$ and $b$ that are conjugate, find $x$ such that $a=xbx^{-1}$). You might be able to visualize the way the computer manipulates the braids (puts them into normal form and such). -
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 4, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9468888640403748, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/154165/fourier-coefficients-of-periodic-function
# Fourier Coefficients of periodic function Consider a function $f\in L^2(\mathbb{T})$. Is there a known lower bound for the decay of the Fourier coefficients $$\hat{f}(n)=\frac{1}{2\pi}\int_{-\pi}^{\pi} f(t) e^{-int} dt$$ There are a lot of upper bounds known, but I can't find anything about a lower bound. I would appreciate it if you could help me! - 1 What exactly do you mean by lower bound? – Davide Giraudo Jun 5 '12 at 9:48 1 BTW welcome to Math.SE! – AD. Jun 5 '12 at 10:03 I mean the following: $|\hat f(n)|\ge g(n)$ for all $n\in \mathbb{N}$, where $g\in o(n!)$ for example. – Lenava Jun 5 '12 at 11:00 More precisely, I am concerned about the coefficients of a function $f^{-1}$, where $f$ is a polynomial. – Lenava Jun 5 '12 at 11:24 1 So, $f$ is the reciprocal of a (trigonometric or algebraic?) polynomial. This information certainly belongs in the post, because the question is trivial ("$0$ is the best lower bound you can have") without such information about $f$. As it stands, we still don't know enough to give any nontrivial bound: if $f=[1+\text{(some tiny polynomial terms)}]^{-1}$, then $\hat f(n)$ is tiny for $n\ne 0$. – user31373 Jun 5 '12 at 15:10 show 2 more comments ## 1 Answer Theorem 3.2.2 in Grafakos's book Classical Fourier Analysis (page 176) states that given a sequence $(d_n,n\geqslant 0)$ which converges to $0$, we can find an integrable function $f$ such that $|\widehat{f}(n)|\geqslant d_n$. -
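As a small numerical illustration of the point raised in the comments (this example is mine, not from the thread): for the reciprocal of a trigonometric polynomial such as $f(t)=1/(2+\cos t)$ the coefficients decay geometrically, which is why no nontrivial universal lower bound can hold without extra assumptions on $f$.

```python
# Quick numerical check (not part of the thread): Fourier coefficients of the
# reciprocal of a trigonometric polynomial decay geometrically.
import numpy as np

N = 4096
t = np.linspace(-np.pi, np.pi, N, endpoint=False)
f = 1.0 / (2.0 + np.cos(t))          # reciprocal of the trigonometric polynomial 2 + cos(t)

def fhat(n):
    # Riemann-sum approximation of (1/2pi) * integral of f(t) * exp(-i n t) dt
    return np.mean(f * np.exp(-1j * n * t))

for n in range(9):
    print(n, abs(fhat(n)))
# The printed values shrink by a roughly constant factor (about 2 - sqrt(3) ~ 0.27)
# at each step, i.e. the coefficients decay geometrically.
```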
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 15, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8877426981925964, "perplexity_flag": "head"}
http://mathoverflow.net/questions/17523/are-there-any-important-mathematical-concepts-without-discrete-analog/17585
## Are there any important mathematical concepts without discrete analog? In "continuous" mathematics there are several important notions such as covering space, fibre bundle, Morse theory, simplicial complex, differential equation, real numbers, real projective plane, etc. that have a "discrete" analog: covering graph, graph bundle, discrete Morse theory, abstract simplicial complex, difference equation, finite field, finite projective plane, etc. I would like to know if there are others. But the real question is: Are there any important "continuous" mathematical concepts without "discrete" analog and vice versa? - 8 I'm not sure what kind of answers you're looking for. Would inner product spaces or normed vector spaces fit the bill? – Darsh Ranjan Mar 8 2010 at 21:38 7 Finite fields are not the discrete analogs of fields. – lhf Mar 8 2010 at 21:39 4 Maybe this should be tagged as soft-question – Andrea Ferretti Mar 8 2010 at 21:47 5 Your question is perhaps hopelessly vague. What purpose does the "analogue" have? If it has no purpose, you could call X a discrete analogue of Y for any pair (X,Y). An abstract simplicial complex isn't really "discrete" is it? If it was finite, sure, but if it's infinite, how would it qualify as discrete? – Ryan Budney Mar 8 2010 at 21:59 6 I suppose we can also do this in reverse... Are there any important mathematical concepts without continuous analog? – Gerald Edgar Mar 9 2010 at 1:31 show 11 more comments ## 9 Answers A lot of ideas from topology and analysis don't have obvious discrete analogues to me. At least, the obvious discrete analogues are vacuous. • Compactness. • Boundedness. • Limits. • The interior of a set. I think a better question is which ideas have surprisingly interesting discrete analogues, like cohomology or scissors congruence. - 2 I'd agree that some of these discrete analogues can be vacuous, but isn't that the point? For example, when we study compact sets in topology are we not, at least sometimes, trying to find non-trivial analogues of results that are trivially true of finite sets in the discrete case? – Dan Piponi Mar 8 2010 at 22:41 1 Actually, boundedness is one of the possible continuous generalizations of finiteness. Compactness is another. As for "the interior of a set", there is linear optimization where one is talking about interiors of polytopes, and while it is usually done in R^n, it could equally well be studied over Q^n or (any ordered field)^n, and actually is a combinatorial science where discrete algorithms such as the simplex method matter, and the continuous structure of the field is just a red herring. – darij grinberg Mar 8 2010 at 22:46 18 It amuses me that this answer has been accepted when discrete mathematicians would use analogues of every single one of these. I recommend a look at this post of Terry Tao: terrytao.wordpress.com/2007/05/23/… – gowers Mar 8 2010 at 23:51 1 I don't know if compactness is a "generalization of finiteness properties" as such, but it certainly gets used as a substitute for finiteness all the time. There are various parts of Banach space/Banach algebra theory where a desire to interchange the order of various iterated limits can be done by judicious appeal to weak compactness of various sets.
– Yemon Choi Mar 9 2010 at 8:50 1 It should also be pointed out that in some sense discreteness and compactness sit at opposite ends of the spectrum of locally compact spaces, so that it's not clear what a "discrete analogue" (as opposed to a "quantitative, finite analogue") might be – Yemon Choi Mar 9 2010 at 8:52 show 5 more comments Is there a discrete analogue of the notion of discreteness? - 13 Whoa, dude. That's just -- whoa. – Pete L. Clark Mar 11 2010 at 7:29 Continuity????? – Benjamin Steinberg Aug 20 at 1:57 A timely example would be the lack of a combinatorial Ricci flow in dimensions $n \geq 3$. In principle I think many people believe there should be one, but a combinatorial/discrete formalism has yet to be found. - The intermediate value theorem wouldn't be true in a discrete setting. - 1 There is even a discrete analog (Sperner's lemma) to the fixed point theorem. – Gil Kalai Dec 13 2010 at 16:30 3 I've used the following discrete analogue of the intermediate value theorem in a paper. If you have a function $f$ from the integers to the integers such that $|f(x)-f(x-1)|\le 1$ for every integer $x$, then having $f(x)<0$ and $f(y)>0$ implies there is some integer $z$ with $x<z<y$ or $y<z<x$ such that $f(z)=0$. – Patricia Hersh Jun 6 at 12:46 Is "continuous function" an important concept? Does it have a discrete analog? - 2 @Mariano: again, not necessarily. For instance, the (relative) Zariski topology on $\mathbb{F}_q^n$ is discrete, and this is a nonvacuous statement: it has the important consequence that every function $\mathbb{F}_q^n \rightarrow \mathbb{F}$ is a polynomial function. (I think I need a few more rules in order to be comfortable playing this particular game.) – Pete L. Clark Mar 8 2010 at 21:51 18 The discrete analog of "continuous function" is "function". – darij grinberg Mar 8 2010 at 22:04 2 Well, there are incredibly interesting discrete analogues of analytic functions (Google should find the notes by Lovász on the subject, for example; this is a whole subject by now). Discreteness of topologies is absolutely irrelevant there---I have no reason to believe the 'canonical' discrete analogue for continuous functions has anything to do with them, either! :) – Mariano Suárez-Alvarez Mar 8 2010 at 22:19 5 I'd say the discrete analogue of a continuous function is one that is continuous in some quantitative way (such as being Lipschitz) on a finite metric space. If the finite metric space is one of a sequence of spaces with unbounded size, this can be very useful. – gowers Mar 8 2010 at 23:53 3 I did not say that "analytic" is an analogue of "continuous", as far as I can tell. I simply cannot understand what argument there can possibly be supporting a claim of the form 'there is no discrete analogue of X', apart from a standard argument from ignorance. – Mariano Suárez-Alvarez Mar 9 2010 at 1:17 show 10 more comments It seems to me there is no good (powerful) discrete version of the Atiyah–Singer theorem. - Maybe Grothendieck-Riemann-Roch? But honestly I have no idea what exactly it states... At least I know there is no analysis involved in its statement. However, I fear getting something really discrete (= a statement on finite sets) out of it would require some serious constructivization. – darij grinberg Mar 8 2010 at 22:52 What do you mean by the word analogy here?
From Wikipedia we have (among others): The word analogy can also refer to the relation between the source and the target themselves, which is often, though not necessarily, a similarity. So you see a similarity between differential equations and difference equations, but this is mostly a matter of aesthetics. In practice, if you need a discrete equation for a continuous one, you usually have to put in a large amount of work to make the analogy work. Of course, in principle there is a relation between differential and difference equations. But what is important here is not what is similar, but what the gap between them is. When you say that the discrete case may approximate the continuous one, you are in fact making many assumptions, for example about the criteria that determine what "approximation" means. 1. Say, what is the analogy of a holomorphic function? Is a discrete complex function on the lattice of Gaussian integers a good approximation of some complex analytic function? In what sense? What are the criteria? Are all properties of holomorphic functions shared by the "discrete analogy", and vice versa? 2. For example, it is not true that the whole theory of differential equations may be deduced from difference equations. There are several equations for which we cannot find correct approximations; for example, the Navier-Stokes equation has no discrete model, at least so far. You may say: but chaos is analogous to turbulence. Why? Because it is similar? Why may you say that? Is it just that someone thinks two things are similar enough to say that they are? Then "analogy" is such a broad word that I may say I can see an analogy between any two things you point to. It may be very useful as inspiration, and sometimes it leads us to great discoveries. For every thing you say is analogous to some continuous case, we may find differences between them which allow us to distinguish the cases. They are almost never equivalent, even in an approximate sense. They are never the same. It is a matter of criteria whether you may say two things are in analogy. - Searching Google Web, Google Books and Google Scholar for "no discrete version" OR "no discrete analog" OR "no discrete analogue" OR "no continuous version" OR "no continuous analog" OR "no continuous analogue" produces some examples, including a comment that a continuous version of a discrete concept doesn't necessarily enable you to guess the properties of the discrete case. - Contrary to the comments appended to the question, I think the notion of analogy can be made precise. Definition: An analogy of a concept A defined in a setting SA is a concept B defined in a setting SB such that there exists a generalized setting SX which includes both SA and SB as example settings, and such that there also exists a concept X defined in setting SX which reduces to concept A or concept B when attention is restricted to either setting SA or SB. In general, an analogy is not unique. A concept could have many analogies, and even for a particular analogous concept there could be more than one way in which it is considered to be analogous. Example: In Time scale calculus, which unifies difference and differential equations, there have been publications with differing answers over how to define the analogy between discrete and continuous transforms. A particular description which encapsulates both the integer and real number transforms may apply to other sets such as the rationals, but a different description might not apply to Q. So an analogy is not just two objects but also the link between them. -
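The discrete intermediate value theorem quoted in the comments above is concrete enough to run. Here is a small sketch (my addition, not from the thread); the example function is a hypothetical slowly varying integer function chosen only to satisfy the hypotheses.

```python
# Discrete intermediate value theorem, as stated in the comments above:
# if f: Z -> Z changes by at most 1 at each step, and f(x) < 0 < f(y),
# then f has a zero strictly between x and y.  A direct search makes the
# reason obvious: a value changing by at most 1 cannot jump over 0.
def discrete_ivt_zero(f, x, y):
    lo, hi = (x, y) if x < y else (y, x)
    # check the step condition on the whole range, then the sign condition
    assert all(abs(f(k) - f(k - 1)) <= 1 for k in range(lo + 1, hi + 1)), "step condition violated"
    assert f(x) < 0 < f(y)
    for z in range(lo + 1, hi):
        if f(z) == 0:
            return z
    raise AssertionError("unreachable if the hypotheses hold")

# Hypothetical example: an integer-valued function whose steps are 0 or 1.
def f(k):
    return k // 3 - 2

print(discrete_ivt_zero(f, -5, 20))   # prints 6, a zero strictly between -5 and 20
```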
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 12, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9447627067565918, "perplexity_flag": "middle"}
http://wiki.panotools.org/index.php?title=Lens_Correction_in_PanoTools&diff=13766&oldid=13043
# Lens Correction in PanoTools

From PanoTools.org Wiki (Difference between revisions) (18 intermediate revisions by 2 users not shown)

## Latest revision as of 14:01, 10 November 2011

This article is a mathematical analysis of how the panotools library computes lens correction parameters, why those parameters are not portable, and how they could be made portable.
For a more general and use-oriented description of the current way panotools deals with lens distortion see Lens correction model # Lens Correction in PanoTools The PanoTools library implements an effective, but rather idiosyncratic method for correcting lens projections, that causes a good deal of puzzlement. Lens parameters optimized for one image format generally do not work for a different format; even rotating a set of images 90 degrees before aligning them produces different and incompatible lens parameters. One would expect that there must be a way to convert either of those parameter sets to a common form, that would apply equally well to both formats, or indeed to any image taken with the same lens. To see how that might be done, I have made a detailed analysis of PanoTools lens correction computations, based on the code in historic as well as current versions of libpano and helpful discussions with Helmut Dersch. ## Why Lens Correction? To make a panoramic image from photographs, it is essential to be able to calculate the direction in space corresponding to any given position in a given photo. Specifically, we need to know the angles between the view directions of the photos (the alignment of the images), and a radial projection function that relates the distance of a point from image center to the true angle of view, measured from the optical axis of the lens. Given a set of control points linking the images, PanoTools estimates both the alignment and the lens projection by a nonlinear least squares fitting procedure -- optimization. Using the fitted lens parameters, the stitcher can correct each image to match the ideal geometry of the scene, according to whatever projection is chosen for the panorama. Done right, that makes all the images fit together perfectly; moreover, it yields a panoramic image that seems to have been made with a perfect lens. ## Ideal Lens Models The radial projection curve of a real lens may approximate some known mathematical function, but in practice it must be determined experimentally, a process known as calibrating the lens. A calibration is a parametrized mathematical model, fitted to experimental data. The typical model consists of an ideal angle-to-radius function, and a polynomial that converts the ideal radius to the actual radius measured on the image. Like many lens calibration programs, libpano uses just two ideal functions to model lenses: rectilinear, for 'normal' lenses, and 'fisheye', for all others. The rectilinear projection has radius proportional to the tangent of the view angle. PT's 'fisheye', better known as the equal-angle spherical projection, has radius proportional to the angle itself. The constant of proportionality is the lens focal length, F. With angle A in radians, and R the ideal radius, the formulas are Rectilinear: $\textstyle \frac R F = \tan(A)$ Equal-angle: $\textstyle \frac R F = A$ Of course R and F have to be measured in the same units. If we have F in mm, then R is in mm also. If we want to measure R in pixels, then we need F in pixels. In any case, F is the constant of proportionality between the actual radius and the value of a trigonometric function that defines the basic shape of the projection. In physical optics, focal length is defined as the first derivative of R by A, at A = 0. That is easy to see if we write $\textstyle R = F A$ or $\textstyle R = F \tan(A)$, because the slopes of A and tan(A) are both 1 at A = 0. 
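As a quick sanity check of the two ideal model functions and of the slope-1-at-zero property just mentioned, here is a small numerical sketch (my addition, not part of the wiki article); the focal length value in it is a made-up placeholder.

```python
# Small numerical check (not part of the wiki article) of the two ideal lens models
# libpano uses, and of the claim that the normalized radius N has slope 1 at A = 0,
# so that F really is dR/dA on the optical axis.
import math

def a2n_rectilinear(a):
    return math.tan(a)       # N = tan(A)

def a2n_equal_angle(a):
    return a                 # N = A

eps = 1e-6
for name, a2n in [("rectilinear", a2n_rectilinear), ("equal-angle", a2n_equal_angle)]:
    slope_at_zero = (a2n(eps) - a2n(0.0)) / eps
    print(name, "slope of N at A = 0:", round(slope_at_zero, 6))   # both print 1.0

# With a focal length F (in pixels), the ideal image radius is simply R = F * N:
F = 1000.0                      # hypothetical focal length in pixels
A = math.radians(30)
print("R =", F * a2n_rectilinear(A), "pixels at a 30-degree field angle")
```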
This is also true of other trigonometric functions commonly used as ideal lens projections: Equal-Area: $\textstyle \frac R F = 2\sin\left(\frac A 2 \right)$ Stereographic: $\textstyle \frac R F = 2\tan\left(\frac A 2 \right)$. The dimensionless quantity $\textstyle N = \frac R F$ is the normalized ideal radius. Multiplying N by the focal length, in any units, gives the ideal image radius in the same units. ## Generic Correction Scheme The difference between the real lens projection and the ideal one is modeled by an adjustable correction function that gives the observed radius as a function of the ideal radius. The adjustable part is almost always a polynomial, because it is easy to fit polynomials to experimental data. The argument to the polynomial should be the normalized ideal radius, $\textstyle N = \frac R F$, because that makes the polynomial coefficients independent of how image size is measured. The constant term is 0 because both radii are zero at the same point. If the coefficient of the linear term is 1, so that the first derivative at 0 is 1, then the value of the polynomial will be the normalized observed radius, n = r / F. Multiplying n by the focal length, in any units, gives the observed image radius in the same units: $\textstyle r = F n$. Many calibration packages use a polynomial with only even order terms beyond the first: $\textstyle n = N + a N^2 + b N^4 + c N^6$. Equivalently, $\textstyle n = N ( 1 + a N + b N^3 + c N^5 )$. The expression in parentheses is the ratio of observed to ideal radius, which is expected to be close to 1 everywhere if the ideal model function is well chosen. ## PanoTools Correction Scheme Lens correction in PanoTools is unusual in several respects. First, it ignores the physical parameters of the lens (focal length) and camera (pixel size). Instead, it computes angle-to-radius scale factors from image dimensions and fields of view, as described below. All correction computations are in terms of image radii, measured in pixels, rather than the normalized radii described above. However, normalized radii are evaluated implicitly. Second, the correction is computed in equal-angle spherical coordinates, rather than camera coordinates. Observed image points are found by remapping those coordinates to the ideal lens projection, and rescaling them according to the ratio of pixel sizes in the source and ideal images. Third, the correction polynomial is normalized to hold a certain radius, $\textstyle r_0$, constant. It is a cubic polynomial that computes the ratio of observed to ideal radius. Its argument is $\textstyle \frac R {r_0}$, and its constant term is set so that the result is exactly 1 when the argument is 1, that is, when $\textstyle R = r_0$. With $\textstyle X = \frac R {r_0}$, the correction factor is $\textstyle x = (1 - a - b - c) + a X + b X^2 + c X^3$, and the observed radius is given by $\textstyle r = R x$. The observed radius is thus formally a 4th order polynomial in R: $\textstyle r = s R + t R^2 + u R^3 + v R^4$, where $\textstyle s = (1-a-b-c),\ t = \frac a {r_0},\ u = \frac b {{r_0}^2},\ v = \frac c {{r_0}^3}$. The normalization makes the PanoTools polynomial equivalent to the generic one, but with different coefficients. This can be seen as follows.
The ideal radius is $\textstyle R = F N$ where F is the ideal focal length in pixels, so we can write the adjusted radius as $\textstyle r = F N\ \operatorname{poly}\left(\frac {F N} {r_0} \right)$, If $r_0$ is proportional to F, then the quotient is proportional to N, and the polynomial is equivalent to one whose argument is N. That is the case when $r_0$ is proportional to source image size, which is proportional to F by definition. But the proportionality factor varies with source image format, so the PanoTools coefficients also depend on source format. The overall computation proceeds as follows. PanoTools computes the ideal radius R by mapping a point in the panorama (which plays the role of the ideal image) to equal angle spherical projection. Then $\textstyle R = \sqrt{ h^2 + v^2 }$, where h and v are the pixel coordinates relative to the center of the equal-angle projection. Then PT's radius() function computes x as described, and returns scaled coordinates ( h x, v x ). If the lens is rectilinear, PT next remaps those coordinates to rectilinear; if it is a fisheye, no remapping is needed. In either case the coordinates are finally rescaled to account for any difference in resolution between the panorama and the source image. The scale factor is computed from the dimensions and angular fields of view of the panorama and the source image, as follows. $\textstyle d = \frac {half\ width\ of\ pano} {\operatorname{A2Npano}\ {(half\ hfov\ of\ pano)}}$, $\textstyle e = \frac {half\ width\ of\ source} {\operatorname{A2Nsource}\ {(half\ hfov\ of\ source)}}$, where A2Npano() and A2Nsource() are the ideal projection functions for panorama and lens. There are many panorama projections but only two lens projections: $\textstyle {\operatorname{A2Nsource}( a )} = tan( a )$ for rectilinear lenses $\textstyle {\operatorname{A2Nsource}( a )} = a$ for fisheye lenses. The scale factor from panorama to source coordinates is $\textstyle s = \frac e d$. Factors $d$ and $e$ are focal lengths in pixels, because A2N() yields the normalized radius, equal to $\textstyle \frac R F$. For the panorama, which follows an ideal projection, $d$ is identical to $F_{pano}$. In fact, under the name “distance factor”, $d$ is used by many of libpano's coordinate transformation functions to convert radius in pixels to the ideal normalized radius in trigonometric units. The true source projection is unknown, so $e$ is an estimate of $F_{source}$ according to the fitted correction parameters. Since hfov is one of those parameters, $e$ will be proportional to the true $F_{source}$; the constant of proportionality will approach 1 as the fitted polynomial coefficients approach 0. In other words, $e$ is a biased estimate of $F_{source}$. However, the overall correction is equivalent to the generic one because the bias in the correction polynomial cancels the bias in the focal length. The only real defect in the PanoTools scheme is that its parameters work for just one image format. ## Portable Correction Coefficients In the generic calibration scheme, dividing image coordinates by F makes it possible for the fitted correction parameters (apart from F) to be independent of both image format and physical pixel size, so that they apply to any image made with the given lens. As explained above, dividing image coordinates by any factor proportional to F is logically sufficient; however values other than F itself lead to non-portable parameter values. 
In the PanoTools scheme, the "distance parameter" d, which is the focal length in panorama pixels, would be the appropriate divisor. That would make the argument of the radius scaling polynomial the ideal normalized radius, $\textstyle N = \frac {R_{pano}} {F_{pano}}$, and the fitted coefficient values would be portable. Alternatively, the current non-portable coefficients can be converted using data available inside libpano. With $\textstyle k = \frac d {r_0}$, $\textstyle w' = w k$, $\textstyle a' = a k^2$, $\textstyle b' = b k^3$, $\textstyle c' = c k^4$ are the coefficients of a polynomial in $\textstyle N = \frac R d$ that computes the same radius correction factor as the PT polynomial. The constant term w' is no longer a simple function of the other three; however, it can be reduced to 1 by dividing all coefficients by w'. The reduced coefficients are $\textstyle W = 1$ $\textstyle A = a \frac k w$ $\textstyle B = b \frac {k^2} w$ $\textstyle C = c \frac {k^3} w$ So the portable radius mapping is $\textstyle r = R ( 1 + A N + B N^2 + C N^3 )$. Along with the ideal function A2Nsource(), which gives N as a function of angle, this constitutes a portable lens correction function. ## Fully Portable Corrections To make a lens correction fully portable also requires expressing the fitted focal length in physical units rather than in pixels. The focal length in pixels must be known in order to compute, or to apply, any lens calibration, portable or not. Physically, this quantity depends on lens focal length and camera properties. Today, equipment manufacturers' specifications usually provide the needed data: $\textstyle F_{pixels} = F_{mm} \frac {sensor\ width\ in\ pixels} {sensor\ width\ in\ mm}$. Alternatively, the EXIF data from most high-end cameras includes the "focal plane resolution" field, which gives the physical pixel size directly. In any practical calibration scheme $\textstyle F_{pixels}$ is actually an adjustable parameter. However, the fitted value is expected to be quite close to the one given by the physical specifications, which would of course be used as the initial value. The main uncertainty is how accurately the nominal lens focal length reflects the true one, because normally the focal plane resolution is precisely known. With $h$ the width of a pixel in mm, the portable form of the fitted lens focal length is $\textstyle F_{mm} = h F_{pixels} = h e$, with the scale factor e defined above. To adapt a portable correction to a given image it is only necessary to calculate $\textstyle F_{pixels}$ from the calibrated $\textstyle F_{mm}$ and the pixel size associated with the image. Unfortunately PanoTools does not now use any physical parameters, so fully portable corrections would have to be calculated, saved and restored by front-end software that has access to focal length and pixel size. But if those were added to the PanoTools parameter set, libpano could handle fully portable corrections autonomously. ```-- 24 Jan 2010 T K Sharpless ```
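To tie the formulas together, here is a small sketch (my addition, not part of the wiki article) of the PanoTools radius correction and of the reduced portable coefficients derived above. The parameter values are made-up placeholders, not a real calibration, and the final loop only checks numerically that the two mappings agree up to a constant factor, the overall scale that the article describes as being absorbed into the fitted hfov / focal length rather than into the polynomial.

```python
# Sketch (not from the wiki) of the PanoTools radius correction and of the
# "portable" reduced coefficients A, B, C derived in the text.
# All parameter values below are hypothetical placeholders.

def pt_radius(R, a, b, c, r0):
    """PanoTools mapping: r = R * ((1-a-b-c) + a*X + b*X**2 + c*X**3), X = R/r0."""
    X = R / r0
    return R * ((1.0 - a - b - c) + a * X + b * X**2 + c * X**3)

def portable_coeffs(a, b, c, r0, d):
    """Reduced coefficients of the polynomial in N = R/d (see text)."""
    k = d / r0
    w = 1.0 - a - b - c          # constant term of the PT polynomial
    return a * k / w, b * k**2 / w, c * k**3 / w

def portable_radius(R, A, B, C, d):
    """Portable mapping: r = R * (1 + A*N + B*N**2 + C*N**3), N = R/d."""
    N = R / d
    return R * (1.0 + A * N + B * N**2 + C * N**3)

# Hypothetical fitted parameters for one image format:
a, b, c = 0.01, -0.02, 0.005
r0, d = 1500.0, 1800.0           # pixels

A, B, C = portable_coeffs(a, b, c, r0, d)
for R in (200.0, 800.0, 1500.0):
    ratio = pt_radius(R, a, b, c, r0) / portable_radius(R, A, B, C, d)
    print(R, round(ratio, 6))    # the ratio is the same constant for every R
```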
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 59, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8739937543869019, "perplexity_flag": "middle"}
http://unapologetic.wordpress.com/2009/04/01/almost-upper-triangular-matrices/?like=1&source=post_flair&_wpnonce=5b12459335
# The Unapologetic Mathematician ## Almost Upper-Triangular Matrices Over an algebraically closed field we can always find an upper-triangular matrix for any linear endomorphism. Over the real numbers we’re not quite so lucky, but we can come close. Let $T:V\rightarrow V$ be a linear transformation from a real vector space $V$ of dimension $d$ to itself. We might not be able to find an eigenvector — a one-dimensional invariant subspace — but we know that we can find either a one-dimensional or a two-dimensional invariant subspace $U_1\subseteq V$. Just like before we get an action of $T$ on the quotient space $V/U_1$. Why? Because if we have two representatives $v$ and $w$ of the same vector in the quotient space, then we can write $w=v+u$. Acting by $T$, we find $Tw=Tv+Tu$. And since $Tu\in U_1$, the vectors $Tv$ and $Tw$ are again equivalent in the quotient space. Now we can find a subspace $\hat{U}_2\subseteq V/U_1$ which is invariant under this action of $T$. Is this an invariant subspace of $V$? No, it’s not even a subspace of $V$. But we could pick some $U_2\subseteq V$ containing a unique representative for each vector in $\hat{U}_2$. For instance, we could pick a basis of $\hat{U}_2$, a representative for each basis vector, and let $U_2$ be the span of these representatives. Is this an invariant subspace? Still, the answer is no. Let’s say $u\in U_2$ is the identified representative of $\hat{u}\in\hat{U}_2$. Then all we know is that $Tu$ is a representative of $T\hat{u}$, not that it’s the identified representative. It could have some components spilling out into $U_1$. As we proceed, picking up either a one- or two-dimensional subspace at each step, we can pick a basis of each subspace. The action of $T$ sends each basis vector into the current subspace and possibly earlier subspaces. Writing it all out, we get a matrix that looks like $\displaystyle\begin{pmatrix}A_1&&*\\&\ddots&\\{0}&&A_m\end{pmatrix}$ where each $A_j$ is either a $1\times1$ matrix or a $2\times2$ matrix with no eigenvalues. The $1\times1$ blocks come from the one-dimensional invariant subspaces in the construction, while the $2\times2$ blocks come from the two-dimensional invariant subspaces in the construction, though they may not be invariant once we put them back into $V$. Above the diagonal we have no control (yet) over the entries, but below the diagonal almost all the entries are zero. The only exceptions are in the $2\times2$ blocks, where we poke just barely down by one row. We can note here that if there are $n\leq m$ two-dimensional blocks and $m-n$ one-dimensional blocks, then the total number of columns will be $2n+(m-n)=n+m=d$. Thus we must have at least $\lceil\frac{d}{2}\rceil$ blocks, and at most $d$ blocks. The latter extreme corresponds to an actual upper-triangular matrix. ### Like this: Posted by John Armstrong | Algebra, Linear Algebra ## 8 Comments » 1. [...] we don’t always have an upper-triangular matrix, but we can always find a matrix that’s almost upper-triangular. That is, one that looks [...] Pingback by | April 2, 2009 | Reply 2. [...] two-dimensional invariant subspace on which has no eigenvalues. This corresponds to a block in an almost upper-triangular representation of . So we’ll just assume for the moment that has dimension [...] Pingback by | April 3, 2009 | Reply 3. [...] if is a real vector space of any finite dimension we know we can find an almost upper-triangular form. This form is highly non-unique, but there are some patterns we can exploit as we move [...] 
Pingback by | April 6, 2009 | Reply 4. [...] The Multiplicity of an Eigenpair As usual, let be a linear transformation on a real vector space of dimension . We know that can be put into an almost upper-triangular form [...] Pingback by | April 8, 2009 | Reply 5. [...] We know that we can put into the almost upper-triangular form [...] Pingback by | April 15, 2009 | Reply 6. [...] every real matrix is similar to a real upper-triangular matrix, e.g. . It holds that . For the matrix we can achieve even more: [...] Pingback by | April 25, 2009 | Reply 7. [...] not be able to put the transformation into an upper-triangular form. But we can put it into an almost upper-triangular form. The determinant is then the product of the determinants of the blocks along the diagonal. The [...] Pingback by | August 3, 2009 | Reply 8. [...] every real matrix is similar to a real upper-triangular matrix, e.g. . It holds that . For the matrix we can achieve even more: [...] Pingback by | April 9, 2010 | Reply
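The block shape described in the post is what numerical libraries compute as the real Schur form, a strengthened version in which the change of basis is orthogonal. A quick check with SciPy (my addition, not part of the original post) shows the $1\times1$ and $2\times2$ diagonal blocks directly:

```python
# The "almost upper-triangular" shape from the post is the real Schur form:
# T = Z^T A Z with Z orthogonal and T quasi-upper-triangular (1x1 and 2x2
# diagonal blocks, the 2x2 blocks carrying complex-conjugate eigenvalue pairs).
import numpy as np
from scipy.linalg import schur

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 6))      # a "generic" real endomorphism of R^6

T, Z = schur(A, output='real')

np.set_printoptions(precision=2, suppress=True)
print(T)                             # zeros below the diagonal except inside 2x2 blocks
print(np.allclose(Z @ T @ Z.T, A))   # True: the same transformation in a new basis
```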
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 41, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8703263401985168, "perplexity_flag": "head"}