Columns: url (string, lengths 17 to 172), text (string, lengths 44 to 1.14M), metadata (string, lengths 820 to 832)
http://mathhelpforum.com/calculus/96978-acceleration-problem.html
# Thread: 1. ## Acceleration problem The position equation for the movement of a particle is given by $s(t) = \sqrt{t^3 +1}$ where s is measured in feet and t is measured in seconds. Find the acceleration when t = 2 seconds. I know that I have to get y'', I got ${3t^2/2} -\sqrt{t^3+1}$, but when trying to get y'' I just made a huge mess and came out to a wrong answer. Someone want to walk me through the rest of this? 2. Remember to keep the symbols straight; there is no y in the actual problem. Though I know what you mean. Acceleration is defined to be the second derivative of position with respect to time t, so we want to find s''(t). First we find s'(t), which can be found by applying the chain rule, where the square root is the outer function and $t^3 + 1$ is the inner function. To keep things clear, I like to use exponents in place of roots, so I would write it out: $s(t) = (t^3 + 1)^\frac{1}{2}$ Explicitly, we can see that s(t) = f(g(t)) where $g(t) = t^3 + 1$ and $f(t) = t^{\frac{1}{2}}$. By the chain rule, s'(t) = f'(g(t))g'(t). $s'(t) = \frac{1}{2}(t^3 + 1)^{-\frac{1}{2}}(3t^2)$ In order to get the second derivative you will have to use the product rule in addition to the chain rule. Can you find s''(t) from here? 3. Originally Posted by slider142 Remember to keep the symbols straight; there is no y in the actual problem. Though I know what you mean. Acceleration is defined to be the second derivative of position with respect to time t, so we want to find s''(t). First we find s'(t), which can be found by applying the chain rule, where the square root is the outer function and $t^3 + 1$ is the inner function. To keep things clear, I like to use exponents in place of square roots, so I would write it out: $s(t) = (t^3 + 1)^\frac{1}{2}$ Explicitly, we can see that s(t) = f(g(t)) where $g(t) = t^3 + 1$ and $f(t) = t^{\frac{1}{2}}$. By the chain rule, s'(t) = f'(g(t))g'(t). $s'(t) = \frac{1}{2}(t^3 + 1)^{-\frac{1}{2}}(3t^2)$ Explicitly, we can see that s(t) = f(g(t)) where g(t) = t^3 + 1 and f(t) = t^(1/2) In order to get the second derivative you will have to use the product rule in addition to the chain rule. Did you find your problem here? Yes sir, whenever I integrate the power rule and the chain rule together I usually have a difficult time. So can you explain this to me so I can understand the concept better? 4. In the above, we have $<br /> s'(t) = \frac{3}{2}(t^3 + 1)^{-\frac{1}{2}}(t^2)<br />$ Since we do not know its derivative offhand, we note that it is a product of two functions, each of whose derivative we do know. Specifically $f(t) = (t^3 + 1)^{-\frac{1}{2}}$ and $g(t) = t^2$ so that we have $s'(t) = \frac{3}{2}f(t)g(t)$ By the product rule, we know that $s''(t) = \frac{3}{2}(f'(t)g(t) + f(t)g'(t))$ The only place you will need that chain rule here is in finding f'(t). Note how similar f(t) is to s(t). What do you propose s''(t) is? 5. Okay I did it the way you asked me to and came out with ${-9t^4/2}+{3(t^3+1)^-3/2/2}+3t^4 +3t$ 6. It looks like you're missing some small steps. Don't skip any steps until you get more experience with these rules. The first step is to get the parts of the product rule that you do not have, f'(t) and g'(t). Do not worry about how they interact with the rest of the equation yet. What did you get for f'(t)? What did you get for g'(t)? 7. for f' I got 6t and for g' I got $-1/2(-\sqrt{(t^3+1)}) (3t^2)$ 8. 
Originally Posted by radioheadfan for f' I got 6t and for g' I got $-1/2(-\sqrt{(t^3+1)}) (3t^2)$ Okay, so in your case, you switched them around from my post and you let $f(t) = 3t^2$ and $g(t) = (t^3 + 1)^{-\frac{1}{2}} = \frac{1}{\sqrt{t^3 + 1}}$. Your f'(t) is correct, but your g'(t) is not. g'(t) should be $-\frac{1}{2}(t^3 + 1)^{-\frac{1}{2} - 1}(3t^2) = -\frac{1}{2}(t^3 + 1)^{-\frac{3}{2}}(3t^2)$ The outer function gets taken down one unit to the -3/2 power by the power rule of differentiation. 9. Originally Posted by slider142 Okay, so in your case, you switched them around from my post and you let $f(t) = 3t^2$ and $g(t) = (t^3 + 1)^{-\frac{1}{2}} = \frac{1}{\sqrt{t^3 + 1}}$. Your f'(t) is correct, but your g'(t) is not. g'(t) should be $-\frac{1}{2}(t^3 + 1)^{-\frac{1}{2} - 1}(3t^2) = -\frac{1}{2}(t^3 + 1)^{-\frac{3}{2}}(3t^2)$ The outer function gets taken down one unit to the -3/2 power by the power rule of differentiation. Just to make sure I'm doing this correctly right now I have $<br /> \frac{1}{2}[6t(t^3+1)^{-\frac{1}{2}}+{-\frac{9t^4}{2}}(t^3+1)^{-\frac{3}{2}}]$ 10. Originally Posted by radioheadfan Just to make sure I'm doing this correctly right now I have $<br /> \frac{1}{2}[6t(t^3+1)^{-\frac{1}{2}}+{-\frac{9t^4}{2}}(t^3+1)^{-\frac{3}{2}}]$ That is exactly correct. 11. Originally Posted by slider142 That is exactly correct. Awesome, okay so the final answer is? $3t(t^3+1)^\frac{-1}{2}-\frac{9t^4(t^3+1)^\frac{-3}{2}}{2}$ 12. Originally Posted by radioheadfan Awesome, okay so the final answer is? $3t(t^3+1)^\frac{-1}{2}-\frac{9t^4(t^3+1)^\frac{-3}{2}}{2}$ Almost. The second term should also be multiplied by 1/2. 13. $3t(t^3+1)^\frac{-1}{2}-\frac{9t^4(t^3+1)^\frac{-3}{2}}{4}$ 14. Originally Posted by radioheadfan $3t(t^3+1)^\frac{-1}{2}-\frac{9t^4(t^3+1)^\frac{-3}{2}}{4}$ Yep, you've got it.
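The thread stops at the symbolic second derivative, while the original problem asks for the acceleration at t = 2 seconds. A quick check with a computer algebra system (a Python/SymPy sketch, not something used in the thread itself) confirms the expression above and evaluates it:

```python
import sympy as sp

t = sp.symbols('t', positive=True)
s = sp.sqrt(t**3 + 1)                 # position in feet, t in seconds

a = sp.diff(s, t, 2)                  # acceleration s''(t)
print(sp.simplify(a))                 # equivalent to 3*t*(t**3 + 4) / (4*(t**3 + 1)**(3/2))
print(sp.simplify(a.subs(t, 2)))      # 2/3, i.e. the acceleration at t = 2 s is 2/3 ft/s^2
```

Numerically, $s''(2) = 3\cdot 2\cdot 9^{-1/2} - \frac{9\cdot 16}{4}\cdot 9^{-3/2} = 2 - \frac{4}{3} = \frac{2}{3}$ ft/s², matching the symbolic result.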
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 32, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9524427056312561, "perplexity_flag": "head"}
http://www.factbites.com/topics/Double-pendulum
Where results make sense About us   |   Why use us?   |   Reviews   |   PR   |   Contact us Topic: Double pendulum Ads by Google Double Elliptical Pendulum Harmonograph I was most interested in the Double Elliptical Pendulum Harmonograph because of the striking complexity of the patterns it could produce. The handsomest patterns are generated when the frequency of the upper pendulum bears a whole-number ratio to that of the lower pendulum - a ratio such as 3:2 or 2:1. The adjustable factors that describe any given figure are then the two amplitudes of the upper pendulum, the phase angle between them, the phase between upper and lower pendulum, the phase and amplitudes of the lower pendulum and the ratios of the frequencies of the upper and lower pendulums. www.lclark.edu /~miller/harmonograph.html   (1005 words) Reference.com/Encyclopedia/Pendulum The isochronism of the pendulum suggested a practical application for use as a metronome to aid musical students, and possibly for use in a clock. The mass of the pendulum was 28 kg and the length of the arm was 67 m. As the angles increase, however, the double pendulum exhibits chaotic motion that is sensitive to the initial conditions. www.reference.com /browse/wiki/Pendulum   (1778 words) Chaotic Pendulum Physics Simulation The pendulum is modeled as a point mass at the end of a massless rod. The damping (friction) is proportional to the angular velocity of the pendulum. This is the equation of motion for the driven damped pendulum. www.myphysicslab.com /pendulum2.html   (620 words) Second Push - Double Push -Speedskating Santa Barbara The inverted pendulum requires that the string be replaced with a thin stiff rod and that appropriate sideways forces be applied at ground level or the inverted pendulum will fall to the ground without executing even one oscillation. The simple pendulum has an oscillating motion with frequency of oscillation depending on the square root of g/L (g is gravitational acceleration, L is the length of the suspension string, cable, or rod). So a real pendulum has a damped oscillation and gravity does not only not drive an accelerating motion, it cannot even maintain a steady motion unless some external driving force is added, say from a pulsed electromagnet to replace the friction losses. home1.gte.net /pjbemail/SecondPush.html   (1912 words) Search Results for "pendulum" ...A pendulum clock having a round face, a relatively narrow elongated case, and a rectangular box at the bottom.... It has a double pendulum whose pace can be altered by sliding the upper weight up or down.... In the harmonic motion of a pendulum, the amplitude of the swing is the greatest distance reached... www.bartleby.com /cgi-bin/texis/webinator/sitesearch/+9wwFqopnDm1c1MxzmAwwwqFqqmx   (316 words) [No title] The double pendulum is a popular device for demonstration of deterministic chaos. This grease hinders rotation of the bearing and therefore movement of the pendulum. As with all pendulums, your double pendulum's oscillations are affected by friction and the moment of inertia of the components. www.rose-hulman.edu /~moloney/AppComp/2000Entries/Entry16/Entry16.htm   (1743 words) Pendulum principle demonstration apparatus (US5145378) A base supports a vertical pivot arm, one end of which supports a pivotal rotation means for a rotation support housing with a plurality of radial connector sockets. 
One of several pendulum arms can be fitted into each of the radial connector sockets, the pendulum arms supporting a pendulous mass on each pendulum arm. One such pendulum arm is articulated into two portions, each portion with a plurality of additional pendulum arms attached thereto. www.delphion.com /details?pn=US05145378__   (319 words) Print the story A double pendulum consists of one pendulum tacked on to the end of another. The upper pendulum swings from a fixed pivot point and the lower pendulum swings from the end of the upper one. In golf, the equivalent components are the shoulders (acting as the fixed pivot), arms and hands (the upper pendulum), and the club shaft and club head (the lower pendulum). www.physorg.com /printnews.php?newsid=85663321   (665 words) Running Mechanical Models (SimMechanics) The joint connecting the upper and lower arms of this pendulum contains a torsional spring and damper system that exert a counterclockwise force directly proportional to the angular displacement and velocity of the joint, respectively. This model uses Body blocks to model the upper and lower arms of the pendulum and a Revolute Joint block (J1) to model the connection between the pendulum and ground. In the case of the double pendulum, the point [0; 0; 0; 0] (i.e., the pendulum initially folded up and stationary) is a trivial equilibrium point and therefore to be avoided. www.weizmann.ac.il /matlab/toolbox/physmod/mech/mech_running7.html   (1007 words) Double Pendulum Double Pendulum: A Bridge Between Regular Dynamics and Chaos Abstract: Of all physical phenomena, the simple pendulum is perhaps the best suited to introduce students to the concept that the natural world can be described in a mathematical language and provides an entry point into conceptual, analytic and experimental techniques. The double pendulum is a system that behaves exactly like the simple pendulum for small amplitudes but is chaotic for larger amplitudes providing students with an introduction to the fascinating ideas about chaos theory while tying it closely to concepts and techniques taught at the Regents Physics level. www.cns.cornell.edu /cipt/labs/DoublePendulum.html   (156 words) Double pendulum - Tabitha It turns out that the simple double pendulum is pathological, so we will have to be a bit more sophisticated in our modelling of this system. For example, we could plot whether either pendulum flips within a set period of time as a function of the initial conditions to see the general structure of the solutions. Outside this region, the pendulums can flip but this is different from determining when they will flip. tabitha.phas.ubc.ca /wiki/index.php/Double_pendulum   (515 words) Double Pendulum Because the double pendulum is a Hamiltonian system (a conservative system) where the energy of the system is conserved, one must use numerial integration methods which conserve the energy. On the left side, the behaviors of the double pendulum is displayed. Because the double pendulum is a Hamiltonian system (a conservertive system), there exist no attractors, and tori or chaotic seas would be observed. brain.cc.kogakuin.ac.jp /~kanamaru/Chaos/e/DP   (219 words) The Double Pendulum The double pendulum is composed of a second pendulum attached to the end of the bob of an initial simple pendulum, as shown in the diagram below: As a result, the motion of the pendulum is more difficult to model, and requires more complex mathematics than those used to simulate the motion of the Simple Pendulum. 
In addition to this, it is also possible to activate a trace on the second bob in order to make it easier to visualise the motion of the pendulum over time; and you can deactivate gravity to see how the pendulum would react in zero-gravity conditions. www.maths.surrey.ac.uk /explore/michaelspages/Double.htm   (201 words) Double Pendulum Each mass plus rod is a regular simple pendulum, and the two pendula are joined together and the system is free to oscillate in a plane. The left panel shows two animated gifs illustrating solution of these equations for one kilogram pendulum masses and one metre pendulum lengths, for the indicated times in seconds (the two gifs may take some time to load - they are 109kB and 239kB respectively). The School of Physics at the University of Sydney has a compound square double pendulum. www.physics.usyd.edu.au /~wheat/dpend_html   (267 words) Chaotic Pendulums - Chaos Theory & The Double Pendulum However, unlike a simple single pendulum, it is impossible to predict the long term behaviour of the double pendulum. Put another way, the behaviour of a chaotic system depends so sensitively on the system's precise initial conditions that it is, in effect, unpredictable and cannot be distinguished from a random process, even though it is deterministic in a mathematical sense. To experience chaos theory first hand, repeatedly spin your chaotic pendulum ensuring that you have all initial conditions (initial position, initial angles, initial force…) as closely matched to the previous experiment as possible. www.chaoticpendulums.com /chaos-theory-a9.html   (364 words) Running Mechanical Models (SimMechanics) Consider a double pendulum initially hanging straight up and down. The net force on the pendulum is zero in this configuration. It is therefore unnecessary to pass any additional arguments (other than the model's name) to the command to linearize the model. www.technion.ac.il /guides/matlab/toolbox/physmod/mech/mech_running9.html   (482 words) Pendulum and Cart Physics Simulation We consider the torque from friction of the pendulum to be a vector perpendicular to the plane where the pendulum and cart move. This torque force F is applied at the pendulum bob, and its opposite is applied at the cart. You will find the same "Mass and Plane Pendulum Dynamic System" discussed on page 234 of the 1996 edition.) Our first step is to find the Lagrangian of the system which is the kinetic energy minus the potential energy. www.myphysicslab.com /pendulum_cart.html   (2131 words) Linearizing Mechanical Models :: Analyzing Motion (SimMechanics) Right-multiplying A by the state vector x yields the differential state equations corresponding to the LTI model of the double pendulum, These modes characterize how the double pendulum responds to small perturbations in the vicinity of the operating point, which here is the force-free equilibrium. The preceding sections of this chapter, Inverse Dynamics Mode with a Double Pendulum and Constrained Trimming of a Four Bar Machine, discuss the inverse dynamics and trimming of the four bar system. www.mathworks.com /access/helpdesk/help/toolbox/physmod/mech/f0-6469.html   (1847 words) A double pendulum is a simple but effective dem... A double pendulum is a simple but effective demonstration of chaos theory. Math Forum Discussions - Re: 3D Double Pendulum Simulation >of a double pendulum in 3 dimensions driven by external torques. 
the double pendulum can show up chaotic behaviour, that means minimal changes The Math Forum is a research and educational enterprise of the Drexel School of Education. www.mathforum.com /kb/thread.jspa?forumID=226&threadID=1419650&messageID=4950239   (390 words) Double Pendulum -- from Eric Weisstein's World of Physics Double Pendulum -- from Eric Weisstein's World of Physics A double pendulum consists of one pendulum attached to another. Double pendula are an example of a simple physical system which can exhibit chaotic scienceworld.wolfram.com /physics/DoublePendulum.html   (140 words) Double Pendulum/Iron Byron However, it goes on to translate the swing to what a person would have to do to implement the same principles given the differences between a model and a person. Well iron byron is a double pendulum but rather than being gravity powered (like what you've shown above) its powered by a motor at the hub (fl dot). Jorgeson used a double pendulum with a torque but he let the hub (fl dot) move. www.activegolf.com /forums/fb.aspx?go=prev&m=2130300&viewType=tm   (989 words) Encyclopedia :: encyclopedia : Pendulum   (Site not responding. Last check: ) A gravity pendulum (plural pendula) is a weight on the end of a rigid rod (or a string/rope), which, when given an initial push, will swing back and forth under the influence of gravity over its central (lowest) point. The pendulum was discovered by Ibn Yunus al-Masri during the 10th century, who was the first to study and document its oscillatory motion. The blue arrow is the gravitational force acting on the bob, violet arrows are that same force resolved into components parallel and perpendicular to the bob's instantaneous motion, the motion along the red axis, which is always perpendicular to the cable/rod. www.hallencyclopedia.com /Pendulum   (1260 words) Pendulum - Wikipedia, the free encyclopedia The pendulum was discovered by Ariana Leane during the 10th century, who was the first to study and document its oscillatory motion. A pendulum whose time period is two seconds is called the second pendulum since most clock escapements move the seconds hands on each swing. A pendulum in which the rod is not vertical but almost horizontal was used in early seismometers for measuring earth tremors. en.wikipedia.org /wiki/Pendulum   (552 words) Pendulum A simple gravity pendulum (plural pendulums or pendula), also called a bob pendulum, is a weight on the end of a rigid rod (or a string/rope), which, when given an initial push, will swing back and forth under the influence of gravity over its central (lowest) point. is the semi-amplitude of the oscillation, that is the maximum angle between the rod of the pendulum and the vertical. Pendulums (these may be a crystal suspended on a chain, or a metal weight) are often used for divination and dowsing. www.brainyencyclopedia.com /encyclopedia/p/pe/pendulum.html   (1418 words) NationMaster - Encyclopedia: Pendulum   (Site not responding. Last check: ) A double pendulum is a pendulum with another pendulum attached to its end, and is a simple physical system that exhibits rich dynamic behavior. Katers pendulum is a reversible pendulum designed and built by Captain Henry Kater in 1817 to measure the acceleration of free fall so that gravity may be calculated without knowledge of the pendulums centre of gravity and radius of gyration. 
In the case of a pendulum with a point mass swinging on massless string or rod of length l, and an ambient gravity acceleration of g, the period of a complete oscillation is www.nationmaster.com /encyclopedia/Pendulum   (1170 words) Pendulum   (Site not responding. Last check: ) For small displacements the movement of an pendulum can be described mathematically as simple harmonic motion as the change in potential energy the bottom of a circular arc is proportional to the square of the displacement. pendulums will also lose energy as they and so their motion will be damped the size of the oscillation decreasing approximately exponentially with time. In the case of a pendulum with point mass swinging on a massless rigid rod length l where $\theta$is the angle between rod and the vertical the acceleration is by$g\cdot sin\theta$and is equal to angular acceleration multiplied by the length of rod.$$$$ www.freeglossary.com /Pendulum   (1056 words) NationMaster - Encyclopedia: Double pendulum   (Site not responding. Last check: ) The motion of a double pendulum is governed by a set of coupled ordinary differential equations. The position of the centre of mass of the two rods may be written in terms of these coordinates.(If the origin of the coordinate system is assumed to be at the point of contact of the wall and the first pendulum). The double pendulum undergoes chaotic motion, and shows a sensitive dependence on initial conditions. www.nationmaster.com /encyclopedia/Double-pendulum   (1148 words) > > L'eStudiolo de Pendulum > > When you make a pendulum its periodicity, that is to say the time it takes to make one complete oscillation, depends solely on the length of the wire or rod that supports it and on the force of gravity in the place where it is hung. The pendulum swings faster the closer it is to the centre of the earth, independent of its mass and the width of its oscillation. The movement of the pendulum is the result of the limitations in the degrees of freedom of the movements of each one of the oscillation points. www.pendulum.es /english/estudiolo/pendulos.html   (1684 words) Pendulum   (Site not responding. Last check: ) Pendulum is a trade name for a preemergent herbicide used for control of crabgrass in turf.Its active ingredient is pendimethalin. For small displacements, the movement of an ideal pendulum can be described mathematically as simple harmonic motion, as the change in potential energy atthe bottom of a circular arc is nearly proportional to the square of the displacement. In the case of a pendulum with a point mass swinging on a massless rigid rodof length l, where θ is the angle between the rod and the vertical, the accelerationis given by www.therfcc.org /pendulum-20465.html   (682 words) Try your search on: Qwika (all wikis) About us   |   Why use us?   |   Reviews   |   Press   |   Contact us
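Several of the snippets above make the same point: the double pendulum behaves like a simple pendulum at small amplitudes but becomes chaotic, with sensitive dependence on initial conditions, at large amplitudes. The sketch below (Python with NumPy/SciPy) integrates the standard point-mass double pendulum, writing the equations of motion as a 2x2 linear solve for the angular accelerations obtained from the usual Lagrangian; the masses, lengths and initial angles are illustrative choices and are not taken from any of the pages quoted above. Two runs that differ by one nanoradian in the first angle are compared.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameters: two point masses on massless rods
M1, M2 = 1.0, 1.0      # kg
L1, L2 = 1.0, 1.0      # m
G = 9.81               # m/s^2

def rhs(t, s):
    """State s = (th1, th2, om1, om2), angles measured from the downward vertical.
    The angular accelerations solve the 2x2 linear system given by the
    Euler-Lagrange equations of the point-mass double pendulum."""
    th1, th2, om1, om2 = s
    d = th1 - th2
    A = np.array([[(M1 + M2) * L1**2,         M2 * L1 * L2 * np.cos(d)],
                  [M2 * L1 * L2 * np.cos(d),  M2 * L2**2]])
    b = np.array([-M2 * L1 * L2 * om2**2 * np.sin(d) - (M1 + M2) * G * L1 * np.sin(th1),
                   M2 * L1 * L2 * om1**2 * np.sin(d) - M2 * G * L2 * np.sin(th2)])
    al1, al2 = np.linalg.solve(A, b)
    return [om1, om2, al1, al2]

# Two large-amplitude starts that differ by one nanoradian in the first angle
s_a = [2.0, 2.0, 0.0, 0.0]
s_b = [2.0 + 1e-9, 2.0, 0.0, 0.0]

t_eval = np.linspace(0.0, 30.0, 3001)
sol_a = solve_ivp(rhs, (0.0, 30.0), s_a, t_eval=t_eval, rtol=1e-10, atol=1e-12)
sol_b = solve_ivp(rhs, (0.0, 30.0), s_b, t_eval=t_eval, rtol=1e-10, atol=1e-12)

# Sensitive dependence on initial conditions: the tiny initial difference
# typically grows by many orders of magnitude within a few tens of seconds.
for ti in (0.0, 10.0, 20.0, 30.0):
    i = int(np.argmin(np.abs(t_eval - ti)))
    print(f"t = {ti:5.1f} s   |delta theta1| = {abs(sol_a.y[0, i] - sol_b.y[0, i]):.3e} rad")
```

Writing the Euler-Lagrange equations as a mass-matrix solve avoids transcribing the long closed-form expressions for the angular accelerations, which is where sign errors usually creep in.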
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 4, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8214216232299805, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/30877/what-is-relation-between-time-and-space-in-general-relativity/30887
What is the relation between time and space in general relativity? There is a relation between time and space in the special theory of relativity: $$t^2c^2-L^2=\tau^2 c^2$$ What is the relation between time and space in general relativity? - 1 – David Zaslavsky♦ Jun 27 '12 at 21:07 1 – user1504 Jun 27 '12 at 21:29 -1: This is not a question. Are you asking for the generalization of the formula you gave? This is not a reasonable question--- the formula you gave is a Pythagorean theorem in space-time. The generalization is the metric tensor, but it isn't even clear in which way you are looking for an answer. – Ron Maimon Jun 28 '12 at 1:26 2 Answers The remarkable property of spacetime in GR is that it is locally that of SR. Or, more technically, tangent to every event in the curved spacetime of GR is an SR spacetime. What this means is that, to first order, the line element at any event can be put into the (differential) form of SR in some coordinate system: $c^2 dt^2 - dL^2 = c^2 d\tau ^2$ The departure from the flat SR spacetime shows up at 2nd order; curvature is characterized by the 2nd order derivatives of the metric. - The Einstein field equations (EFE), or Einstein's equations, are a set of 10 equations in Albert Einstein's general theory of relativity which describe the fundamental interaction of gravitation as a result of spacetime being curved by matter and energy. First published by Einstein in 1915 as a tensor equation, the EFE equate spacetime curvature (expressed by the Einstein tensor) with the energy and momentum within that spacetime (expressed by the stress–energy tensor). There are also the linearized EFE, which are used for simplifying many general relativity problems as well as discussing gravitational radiation. These equations can be found at: http://en.wikipedia.org/wiki/Linearized_gravity#Linearised_Einstein_field_equations. A small side note. Although the Einstein field equations were initially formulated in the context of a four-dimensional theory, some theorists have explored their consequences in n dimensions. The equations in contexts outside of general relativity are still referred to as the Einstein field equations. The vacuum field equations (obtained when T is identically zero) define Einstein manifolds. References: http://en.wikipedia.org/wiki/Einstein_field_equations -
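To make the quoted relation concrete, here is a minimal numerical sketch (Python with NumPy; the worldline, a speed oscillating up to 0.8c, is an arbitrary illustrative choice) that integrates the differential form given in the first answer, dτ = √(1 − v²/c²) dt, and shows that the proper time accumulated along the worldline is less than the elapsed coordinate time:

```python
import numpy as np

c = 299_792_458.0                    # speed of light, m/s

t = np.linspace(0.0, 1.0, 200_001)   # one second of coordinate time
dt = t[1] - t[0]

# A worldline whose speed oscillates up to 0.8 c (purely illustrative)
v = 0.8 * c * np.abs(np.sin(2.0 * np.pi * t))

# d(tau) = sqrt(1 - v^2/c^2) dt, the differential relation quoted in the answer
tau = np.sum(np.sqrt(1.0 - (v / c)**2)) * dt

print(f"coordinate time elapsed: {t[-1]:.6f} s")
print(f"proper time elapsed:     {tau:.6f} s")   # less than 1 s: the moving clock runs slow
```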
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9481422305107117, "perplexity_flag": "head"}
http://physics.stackexchange.com/questions/39622/calculating-force-required-to-stop-bungee-jumper/39627
Calculating force required to stop bungee jumper Given that: • bungee jumper weighs 700N • jumps off from a height of 36m • needs to stop safely at 32m (4m above ground) • unstretched length of bungee cord is 25m What's the force required to stop the jumper (4m above ground)? First, what equation do I use? $F = ma$? But even if $a = 0$, $v$ may not equal 0 (still moving) $W = F \Delta x$? Can I say that if $\Delta x = 0$ the object is not moving? Even then, I don't know the work ... I tried doing: $-32 = \frac{1}{2} (-9.8) t^2$ $t = 2.556s$ Then I'm stuck ... I know $t$ but I can't seem to use any other equations... $v_f, v_i =0$ - Think about potential energy. – Colin K Oct 12 '12 at 12:57 2 Answers As others here have pointed out, the force of the bungee cord would vary, increasing as it is stretched. So your question is not well posed. If this is an actual homework problem I would guess that you misread it and you are actually being asked to find the force constant of the bungee cord (assuming, as I will below, that it obeys Hooke's law). Or perhaps you want the maximum force on the jumper due to the bungee cord (which would be when it is stretched the most). Here is how you would get those... Ignoring air drag, the only forces on the jumper are due to the spring (bungee cord) and gravity, both of which are conservative forces, so you have: $$\frac{1}{2} m v_{f}^{2} + \frac{1}{2} k x_{f}^{2} + m g y_{f} = \frac{1}{2} m v_{i}^{2} + \frac{1}{2} k x_{i}^{2} + m g y_{i}$$ Going from the start of the fall of the jumper to when the stretch of the spring is maximum, the initial speed and final speed of the jumper are zero, and the initial x is zero (spring is unstretched at the start), while you can choose the final y to be zero, leaving: $$\frac{1}{2} k x_{f}^{2} = m g y_{i}$$ Which can be simply interpreted as saying that the initial gravitational potential energy of the jumper-earth system ends up stored in the spring as elastic potential energy. From this equation you can get $k$. You can then go on to find the force due to the spring on the jumper when it is stretched by any amount using $F_{x} = - k x$, and in particular, the maximum force due to the spring. - Well, I guess start by forgetting that the bungee is a spring and would apply a non-constant force. But we'll ignore that first and imagine the bungee applies a constant force. You jump off the bridge at 36m, plunge 25m to 11m from the ground, which leaves you 7m to come to a stop. So, we can use the equations of constant linear motion to compute how fast you're going the moment the bungee tightens up: $v^2 = v_0^2 + 2a(r-r_0)$ So, in our example you'll be heading downwards at $\sqrt{0+2*9.8\frac{m}{s^2}(25m)} = 22.14\frac{m}{s}$ Then, assuming the bungee applies a constant force, we again use the initial equation to figure out the rate of deceleration. $0 = (22.14 \frac{m}{s})^2 + 2a(7m) => a= \frac{(22.14 \frac{m}{s})^2}{2*7m} = 35 \frac{m}{s^2}$ Which, not so surprisingly, works out to be the same as $g(25m/7m)$, or $g$ times the falling distance divided by the stopping distance. Now that you know your deceleration, multiply that by your mass and you've got your force. However, bungees actually don't apply a constant force, they apply a fairly linear force relative to their displacement for most of their stretchy range. You'll have to use Hooke's Law, the formula for the spring constant $F=-kx$, to more accurately model the system. -
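Neither answer carries the numbers through to the end, so here is a short numerical sketch (Python, assuming g = 9.8 m/s², a Hookean cord as in the first answer, and the 7 m of stretch implied by the problem data) that computes the spring constant, the peak cord force at the lowest point, and checks that the constant-deceleration estimate of the second answer is just the average net braking force over the stretch:

```python
import math

g = 9.8            # m/s^2
weight = 700.0     # N
mass = weight / g  # ~71.4 kg

h_jump = 36.0      # m, jump height above ground
h_stop = 4.0       # m, lowest safe point above ground
l0 = 25.0          # m, unstretched cord length

drop = h_jump - h_stop          # total fall = 32 m
stretch = drop - l0             # maximum cord stretch = 7 m

# Energy method (first answer): (1/2) k x^2 = m g y_i with y_i = total drop
k = 2.0 * weight * drop / stretch**2
f_peak = k * stretch                      # cord force at the lowest point
print(f"k      = {k:7.1f} N/m")           # ~914 N/m
print(f"F_peak = {f_peak:7.1f} N")        # ~6400 N

# Constant-force estimate (second answer): speed when the cord goes taut,
# then the average deceleration over the stretch
v_taut = math.sqrt(2.0 * g * l0)          # ~22.1 m/s
a_avg = v_taut**2 / (2.0 * stretch)       # ~35 m/s^2
f_net_avg = mass * a_avg                  # ~2500 N average net braking force
f_cord_avg = f_net_avg + weight           # ~3200 N average cord force
print(f"average cord force = {f_cord_avg:7.1f} N  (= k*stretch/2 = {0.5 * k * stretch:7.1f} N)")
```

The peak force (about 6400 N, roughly nine times the jumper's weight) is what matters for safety; the "constant force" figure from the second answer is the average net force, and the cord must additionally carry the jumper's 700 N weight on top of that.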
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 17, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9534586071968079, "perplexity_flag": "middle"}
http://mathhelpforum.com/calculus/121231-baffling-integral.html
# Thread: 1. ## a baffling integral ∫ 1 / (x^6 – 1) dx Thanks in advance. 2. Originally Posted by niz ∫ 1 / (x^6 – 1) dx Thanks in advance. Are you familiar with the method of partial fractions? Notice that the denominator is the difference of two squares $(x^3-1)(x^3+1)$. Now use the formulae for the cubes. You can write that out as $(x-1)(x^2+x+1)(x+1)(x^2-x+1)$ You should be able to apply the method of partial fractions for the case when the denominator contains irreducible quadratic factors. 3. Here's my method , it is for those who HATE the method of partial fraction ( but i am not the one ) $\frac{1}{x^6- 1} = \frac{ x^2 - ( x^2 -1 )}{ x^6 - 1}$ $= \frac{x^2}{x^6-1} - \frac{ x^2 - 1}{ ( x^2 - 1 )( x^4 + x^2 + 1 )}$ $= \frac{x^2}{x^6-1} - \frac{1}{ x^4 + x^2 + 1}$ the integral $\int \frac{dx}{x^6 - 1} = \int \frac{x^2 dx}{x^6-1} - \int \frac{dx}{ x^4 + x^2 + 1}$ the first one we just need to substitute $t = x^3$ and finally obtain $\frac{1}{6}\ln{ \left( \frac{ x^3 - 1}{x^3 + 1} \right ) }$ you may think that we may need to apply partial fraction to solve the second part but if we consider $\int \frac{ x^2 + 1}{ x^4 + x^2 + 1}~dx$ Divide the numerator and the denominator by $x^2$ $= \int \frac{ 1 + 1/x^2}{ x^2 + 1 + 1/x^2 } ~dx$ $= \int \frac{ 1 + 1/x^2}{ \left ( x - \frac{1}{x} \right )^2 + 3 }$ then substitute $x- \frac{1}{x} = t , (1 + 1/x^2 )dx = dt$ the integral becomes $\int \frac{dt}{ t^2 + 3 } = \frac{1}{\sqrt{3}} \tan^{-1}(\frac{x^2-1}{\sqrt{3} x } )$ now consider $\int \frac{x^2 - 1 }{ x^4 + x^2 + 1}~dx$ do the same thing above but sub. $x + 1/x = t$ this time , we can get $= \frac{1}{2} \ln{ \left( \frac{ x^2 - x +1}{x^2 + x + 1} \right ) }$ we have $\int \frac{ x^2 - 1}{ x^4 + x^2 + 1}~dx = \frac{1}{2} \ln{ \left( \frac{ x^2 - x +1}{x^2 + x + 1} \right ) }$ $(1)$ and $\int \frac{ x^2 + 1}{ x^4 + x^2 + 1}~dx = \frac{1}{\sqrt{3}} \tan^{-1}(\frac{x^2-1}{\sqrt{3} x } )$ $(2)$ $(2) - (1)$ , $\int \frac{dx}{ x^4 + x^2 + 1} = \frac{1}{2}[ \frac{1}{\sqrt{3}} \tan^{-1}(\frac{x^2-1}{\sqrt{3} x } ) - \frac{1}{2} \ln{ \left( \frac{ x^2 - x +1}{x^2 + x + 1} \right ) } ]$ therefore , $\int \frac{dx}{x^6- 1} = \frac{1}{6}\ln{ \left( \frac{ x^3 - 1}{x^3 + 1} \right ) } - \frac{1}{2}[ \frac{1}{\sqrt{3}} \tan^{-1}(\frac{x^2-1}{\sqrt{3} x } ) - \frac{1}{2} \ln{ \left( \frac{ x^2 - x +1}{x^2 + x + 1} \right ) } ] + C$ 4. ## Thanks Thanks adkinsjr and simplependulum!! Actually I was hoping for some kind of trigonometric substitution to manipulate the integrand. Like for (x^2 -1) substituting x= secy etc.. I guess there are none such solutions right? Anyway, thanks guys.. Thanks simplependulum for the smart manipulation of the integrand..that was clever.
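A quick way to check a result like this is to differentiate it with a computer algebra system. The sketch below (Python/SymPy, not part of the thread) verifies that simplependulum's final antiderivative differentiates back to the integrand:

```python
import sympy as sp

x = sp.symbols('x')

# simplependulum's final antiderivative (the constant C is omitted)
F = (sp.Rational(1, 6) * sp.log((x**3 - 1) / (x**3 + 1))
     - sp.Rational(1, 2) * (sp.atan((x**2 - 1) / (sp.sqrt(3) * x)) / sp.sqrt(3)
                            - sp.Rational(1, 2) * sp.log((x**2 - x + 1) / (x**2 + x + 1))))

# Differentiating F should recover the integrand 1/(x^6 - 1)
print(sp.diff(F, x).equals(1 / (x**6 - 1)))    # expected: True

# SymPy's own antiderivative; it may be arranged quite differently,
# but on any interval it can only differ from F by a constant
print(sp.integrate(1 / (x**6 - 1), x))
```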
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 24, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8877701759338379, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/55264/how-to-measure-a-solid-solid-surface-energy?answertab=active
# How to measure a solid-solid surface energy? Many techniques exist to measure the surface energy between a liquid and a liquid or a liquid and a gas (see e.g. the wiki page). Methods to measure the surface energy between a solid and a fluid are rare, but still there is a method developed by Zisman (see e.g. here) that allows you to at least estimate it by extrapolation for solid/gas or solid/liquid, depending on the environment that you use in your experiment What I wonder: is there a method to measure the surface energy between two non-elastic solids? One option I could think of is that you could melt one of the solids and then use the technique of Zisman, but this will limit your knowledge to high temperature surface energy, whereas the ones at low temperature are the thing you are typically interested in. EDIT: just for future reference, this is a study on surface energies between 2 solids, but with 1 being highly elastic - – Luboš Motl Feb 27 at 7:24 That is actually a good point. I normally use the terms somewhat interchangeable, but I agree that in case of solids it doesn't make much sense to talk about surface tension – michielm Feb 27 at 7:59 1 Think about high-temperature creep experiments using very fine-grained polycrystalline samples to measure the thermal activation energy of pure diffusion creep. You might be able to estimate the average inter-granular surface energy using a diffusion-creep law. – Mark Rovetta Feb 28 at 0:28 I don't really understand how that would work. Could you expand a bit?! And doesn't this have the same issue as melting the material, i.e. you get the surface energy at way to high temperatures? – michielm Mar 1 at 9:00 @MarkRovetta could you explain this creep experiment?! – michielm Mar 2 at 7:53 show 1 more comment ## 1 Answer I have done some searching and found out that there is a technique that has been around for roughly 10 years already and it is surprisingly simple (if you have the right, expensive, equipment). It can be found in this JCIS paper (which is also freely available here). The technique works as follows: an atomic force microscope (AFM) with a well-defined spherical tip made out of solid 1 is brought into contact with solid 2. Then the tip is pulled of the surface again and the work of adhesion is measured. Based on the pull-off force and theoretical contact mechanics models (for details see the paper) you can calculate the surface energy $\gamma$ between the two solids from the following equation: $$\gamma = \frac{F}{2\pi c R}$$ where $F$ is the pull-off force, $R$ is the tip radius and $c$ is a constant between 1.5 and 2 depending on the details of the contact model. The paper explains how to choose which model is appropriate for the type of measurement you do. Some conditions (assumptions) for the theoretical models apply: 1. deformations of materials are purely elastic, described by classical continuum elasticity theory 2. materials are elastically isotropic 3. both Young’s modulus and Poisson’s ratio ofmaterials remain constant during deformation 4. the contact diameter between particle and substrate is small compared to the diameter of particle 5. a paraboloid describes the curvature of the particle in the particle–substrate contact area 6. no chemical bonds are formed during adhesion 7. contact area significantly exceeds molecular/atomic dimensions The paper explains in quite some details how deviations from these conditions are often source of error, but also how they can be met to get an appropriate measurement. 
So to conclude: the surface energy of a solid-solid system can be measured using AFM when taking into account that the assumptions of models used in data processing are thoroughly met. -
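The arithmetic behind the quoted formula is simple; the sketch below (Python) just evaluates $\gamma = F/(2\pi c R)$ for the two limiting values of $c$. The pull-off force and tip radius are hypothetical numbers chosen only to illustrate the order of magnitude, not data from the cited paper:

```python
import math

def surface_energy(pull_off_force, tip_radius, c):
    """gamma = F / (2 * pi * c * R), with c between 1.5 and 2 depending on the contact model."""
    return pull_off_force / (2.0 * math.pi * c * tip_radius)

# Hypothetical measurement, for illustration only
F = 50e-9      # pull-off force: 50 nN
R = 2e-6       # tip radius: 2 micrometres

for c in (1.5, 2.0):
    print(f"c = {c:.1f}:  gamma = {surface_energy(F, R, c) * 1e3:.2f} mJ/m^2")
```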
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 4, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9396952986717224, "perplexity_flag": "middle"}
http://www.scholarpedia.org/article/Galactic_dynamics
# Galactic dynamics

From Scholarpedia: George Contopoulos and Christos Efthymiopoulos (2011), Scholarpedia, 6(5):10670.

Galactic dynamics is the study of the motions of the stars, gas and dark matter in order to explain the main morphological and kinematical features of galaxies. In the present article we focus on topics of galactic dynamics which are most relevant to the general theory and methods of dynamical astronomy (see also the Scholarpedia section "Extragalactic Astronomy").

## Introduction and basic concepts

The main components of the Universe are the galaxies, which are composed of billions of stars, but also of gas, dust and dark matter. The most well known classification of galaxies is due to Hubble. The main types are elliptical galaxies (E), normal spiral galaxies (S), barred spiral galaxies (SB) and irregular galaxies (I) ( Figure 1 ). Figure 1: Classification of the main types of galaxies according to Hubble. The elliptical galaxies have various degrees of ellipticity from zero (E0) to $$0.7$$ (E7), and they are slowly rotating. On the other hand the spiral and barred spiral galaxies are rotating fast and they contain spiral arms. There are tight spirals (Sa, SBa), intermediate (Sb, SBb) and open spirals (Sc, SBc). They are flat systems, containing also gas and dust, out of which new stars are formed continuously. The spiral and barred spiral galaxies are also called `disc galaxies', since the spiral arms are embedded in a thin disc (the thickness is of order less than 10%). Finally, the irregular galaxies are relatively small systems that accompany large spiral galaxies. The shapes of the galaxies are governed mainly by the gravitational interactions between their individual mass components, i.e. the stars, dark matter, and gas. The stars form the main body of the luminous part of a galaxy. The gas forms a relatively small proportion of matter (up to 10% in disc galaxies). The stars and the gas together are called `baryonic matter'. The dark matter, on the other hand, contains more mass than the stars and gas, and it extends to large distances (one order of magnitude larger than the baryonic matter). The motions of the stars and of the dark matter elements are governed purely by their gravitational forces. The study of these motions and their combinations to form self-consistent statistical mechanical configurations is called stellar dynamics. This constitutes the central approach to galactic dynamics. Gas dynamics, on the other hand, is governed also by dissipative forces due to pressure, radiation, magnetic fields etc. The latter also influence, to some extent, particular morphological and kinematical features of the galaxies. The gravitational potential of a galaxy is composed of a mean field, due to the average distribution of the galactic matter, and fluctuations due to the approaches (encounters) between individual stars. An estimate of the effects of these encounters is provided by the "relaxation time" of the system (Chandrasekhar), which can be estimated as the time required by a star to change its average direction of motion by 90° due to the cumulative effects of encounters. This time is very long, of the order of $$10^{12}$$ years, while the periods of the motions of the stars around the center of the galaxy are of the order of $$10^8$$ years ("dynamical time").
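As a quick order-of-magnitude check of the quoted dynamical time, the sketch below (Python) computes the orbital period of a star like the Sun around the Galactic center; the galactocentric radius of about 8 kpc and circular speed of about 220 km/s are typical Milky Way values assumed here, not figures taken from the article:

```python
import math

KPC_M = 3.086e19      # metres per kiloparsec
YR_S = 3.156e7        # seconds per year

R0 = 8.0 * KPC_M      # assumed galactocentric radius, ~8 kpc
VC = 220.0e3          # assumed circular speed, ~220 km/s

period = 2.0 * math.pi * R0 / VC
print(f"orbital period ~ {period / YR_S:.1e} yr")   # ~2e8 yr, i.e. of order 10^8 years
```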
Thus, in a first approximation, we may consider the orbits of the stars as due to the general distribution of the galactic matter. The same applies to the orbits of dark matter elements. In a better approximation, however, various details of the relaxation process have to be taken into account. This issue is discussed in section 6 (N-body systems). If $$V(\textbf{x},t)$$ is the average galactic potential the equations of the average motion are $\tag{1} \ddot{\mathbf{x}}=-\frac{\partial V}{\partial\mathbf{x}}$ where $$\textbf{x}=(x,y,z)\ .$$ The distribution function $$f$$ is the density of matter in phase space $$(\textbf{x},\textbf{v})=(x,y,z,\dot{x},\dot{y},\dot{z})\ .$$ It is given by the "collisionless Boltzmann equation" $\tag{2} \frac{\partial f}{\partial t} + \frac{\partial f}{\partial \textbf{x}}\textbf{v}- \frac{\partial f}{\partial \textbf{v}} \frac{\partial V}{\partial \textbf{x}}=0$ The total density $$\rho(\textbf{x},t)$$ is found by integrating $$f$$ over all velocities $\tag{3} \rho(\textbf{x},t)=\int f(\textbf{x},\textbf{v},t)d^3\textbf{v}$ while the average potential $$V(\textbf{x},t)$$ is given by Poisson's equation $\tag{4} \nabla^2V=4\pi G\rho~~$ In the treatment of dark matter we may consider i) rigid halo models with a fixed dark matter density $$\rho_d\ ,$$ where $$f$$ and $$\rho$$ refer to the stellar distribution function and density while (4) is replaced by $$\nabla^2V=4\pi G(\rho+\rho_d)\ ,$$ or ii) live halo models in which the dark matter is responsive to, and affects the collective motions of stars. Rigid halo models have been used extensively in orbital studies of stars (or clusters). However, the rigid halo approach is not convenient in circumstances where the halo interacts collectively with a particular stellar sub-structure like a bar (see section 6). N-body simulations have shown that in such cases only a `live' halo model can capture correctly the effects of such interactions. By solving the equations of motion in a given potential $$V(\textbf{x},t)$$ we define smooth orbits of the stars or dark matter elements. The superposition of many orbits with appropriate weights yields a response density. The system must be self -consistent (self-gravitating) i.e. the response density must be equal to the imposed density. The self-consistency condition is satisfied accurately in N-body simulations (section 6). However in many cases we assume a fixed potential that represents a model galaxy. In this case the self-consistency condition can be checked a posteriori, i.e. after the orbits are calculated in the fixed potential. We frequently consider stationary models, i.e. $$V=V(\textbf{x})$$ is considered independent of time. In N-body simulations, we often consider successive snapshots and study the forms of the orbits at these times. The orbits in a given galaxy can be ordered (periodic or quasiperiodic) or chaotic. This classification is based on the number of integrals of motion obeyed by an orbit. An integral of motion is a function of the canonical variables (position and momenta) that remains constant along an orbit. The importance of integrals in galactic dynamics stems from Jeans' theorem which states that a stationary distribution function $$f(\mathbf{x},\mathbf{v})$$ can depend on its arguments only through the integrals $$I_i\ ,$$ i.e. 
$$f\equiv f(I_1(\mathbf{x},\mathbf{v}), I_2(\mathbf{x},\mathbf{v}),...)\ .$$ In stationary galactic systems of $$n$$ degrees of freedom there can be at most $$n$$ independent integrals which appear as arguments of $$f\ .$$ The energy $$E$$ is an obvious integral in all stationary systems, while the angular momentum $$L$$ along the axis of symmetry is a second integral in axisymmetric systems. Additional `third integrals', of exact or approximative form, can be found in various cases and they play a key role in dynamics. A main property of galactic kinematics is the form of the velocity ellipsoids. A velocity ellipsoid is defined at any point of space $$\mathbf{x}$$ by the three principal axes arising by the diagonalization of the $$3\times 3$$ velocity dispersion tensor $$\mathbf{\sigma}$$ with elements $$\sigma_{ij}$$ where $\tag{5} \sigma_{ij}^2(\textbf{x})={1\over\rho(\textbf{x})} \int (v_i-V_i)(v_j-V_j)f(\textbf{x},\textbf{v})d^3\textbf{v}$ where $$V_i$$ and $$V_j$$ are the mean velocities along the axes $$i\ ,$$ $$j$$ of an orthogonal coordinate system ($$i\ ,$$ $$j$$ run from 1 to 3). If the distribution function depends only on the energy, $$f=f(E)\ ,$$ the velocity ellipsoid at any point of space is a sphere, i.e. the velocity dispersion is the same in all directions. If, on the other hand, $$f$$ depends both on $$E$$ and $$L\ ,$$ i.e. $$f=f(E,L)\ ,$$ the velocity ellipsoid is a spheroid (two equal axes). Finally, if $$f$$ depends on three integrals, i.e. $$f\equiv f(E,L,I_3)\ ,$$ all three axes of the velocity ellipsoid are unequal. This distinction is important, because it allows one to link the kinematic observations available for a particular system to dynamical features of the same system. In fact, near the Sun the velocity ellipsoid has three unequal axes. An important theorem in galactic dynamics is the virial theorem. This theorem states that in a stellar system in equilibrium, the following equation holds: $\tag{6} 2T_{ij}+\Pi_{ij}+W_{ij}=0$ where $$T_{ij}={1\over 2}\int \rho(\textbf{x})V_iV_jd^3\textbf{x}$$ and $$\Pi_{ij}=\int\rho(\textbf{x})\sigma^2_{ij}d^3\textbf{x}\ .$$ The quantities $$K_{ij}=T_{ij}+\Pi_{ij}/2$$ and $$\Pi_{ij}$$ are called kinetic energy tensor and pressure tensor respectively, while $$W_{ij}=-{G\over 2}\int\int \rho(\textbf{x})\rho(\textbf{x}') {(x_i-x_i')(x_j-x_j') \over|\textbf{x}-\textbf{x}'|^3} d^3\textbf{x}d^3\textbf{x}'$$ is the potential energy tensor. The scalar virial theorem, obtained by taking the trace of Eq.(6), reads $\tag{7} 2K+W=0$ where $$K$$ and $$W$$ are the total kinetic and potential energies of the stellar system in equilibrium. Depending on the shape of a system, different states of virial equilibrium may exist with different degrees of kinetic energy that go to rotation or to velocity anisotropy. Accordingly, the elliptical galaxies as well as the spheroidal bulges of disc galaxies are divided into those being rotationally supported or pressure supported.

## Orbits and Integrals

The study of individual orbits in fixed potential models with different degrees of symmetry is a central subject of galactic dynamics, since such models offer idealizations of the true gravitational potential of galaxies. A generic feature of such models is the co-existence of ordered and chaotic orbits. In fact, the properties of ordered orbits can be unraveled using various forms of the canonical perturbation theory. Such is the theory of the third integral as well as the Kolmogorov-Arnold-Moser (KAM) theory.
On the other hand, the properties of chaotic orbits can be explored mainly by numerical means such as the Poincaré surface of section or quantities such as the Lyapunov characteristic number. Readers are deferred to (Contopoulos 2004) for an instructive introduction to these topics. Below, we review only the most basic facts relevant to the orbits in galaxies. We examine first the orbits and forms of integrals in simple (non-rotating) systems like elliptical galaxies. Then we examine the orbits and integrals in fast rotating disc galaxies. We return to the issue of self-consistency in sections 5 (spiral structure), and in section 6 (N-body simulations). ### Orbits in axisymmetric galaxies Within the approximation of collisionless stellar dynamics, the orbits in an axisymmetric galaxy are governed by a smooth time-independent axisymmetric gravitational potential $$V(R,z)\ ,$$ which corresponds to the solution of Poisson's equation for an axisymmetric matter distribution $$\rho(R,z)\ .$$ Then, the orbit of a star (point particle) becomes independent of its mass, and it is given by a Hamiltonian of the form $\tag{8} H\equiv{p_R^2\over 2}+{p_\vartheta^2\over 2R^2}+{p_z^2\over 2}+V(R,z)=E$ where $$R,\vartheta,z$$ are cylindrical coordinates and $$p_R\ ,$$ $$p_\theta\ ,$$ $$p_z$$ are the corresponding canonical momenta. Figure 2: A rosette orbit fills an annulus with an inner radius $$r_1$$ and an outer radius $$r_2$$ The orbits in the equatorial plane are rosettes ( Figure 2 ). Their angular momentum is constant $$p_\theta=J_0\ .$$ The radial oscillations are given by $\tag{9} \ddot{R}=-V'_0(R)+\frac{J^2_0}{R^3}$ where $$V_0(R)=V(R,z\!=\!0)\ .$$ For given energy and angular momentum the radius and angular velocity of the circular orbit $$R_0$$ are given by $\tag{10} J^2_0=R_0^3V'_0,~~~\Omega(R_0)=\sqrt{V'_0/R_0}~~.$ For orbits close to circular, the frequency of radial oscillation is called epicyclic frequency, given by $\tag{11} \kappa(R_0)=\left(V''_0+\frac{3V'_0}{R_0}\right)^{1/2}$ All the orbits on the equatorial plane are ordered. On the other hand, orbits with vertical oscillations with respect to the equatorial plane can be ordered or chaotic. Ordered orbits are found by calculating a `third integral' of motion in the form of a series (Contopoulos). Expanding the Hamiltonian with respect to the radius $$R_0$$ of a circular orbit, the Hamiltonian describing the motion in a meridian plane takes the form $\tag{12} H=\frac{1}{2}(\dot{x}^2+\dot{y}^2+\omega^2_1~x^2+\omega^2_2y^2)+ \varepsilon xy^2+\varepsilon'x^3+\mbox{higher order terms}$ where $$x$$ is the difference of the radial coordinate $$R$$ from the reference radius $$R_0\ ,$$ while $$y$$ is parallel to the axis of symmetry $$z\ .$$ By definition, a formal integral of motion $$\Phi(x,y,\dot{x},\dot{y})$$ is a function of the phase space coordinates which has a vanishing Poisson bracket with the Hamiltonian function, namely $\tag{13} [\Phi,H]=\frac{\partial\Phi}{\partial x}\frac{\partial H}{\partial\dot{x}}+\frac{\partial\Phi}{\partial y}\frac{\partial H}{\partial\dot{y}}-\frac{\partial\Phi}{\partial\dot{x}}\frac{\partial H}{\partial x}-\frac{\partial\Phi}{\partial\dot{y}}\frac{\partial H}{\partial y}= 0$ If we develop $$\Phi$$ and H in power series $$\Phi=\Phi_2+\Phi_3+...,~H=H_2+H_3+...\ ,$$ we can find the successive terms of $$\Phi$$ by equations of the form $\tag{14} [\Phi_N,H_2]=-[\Phi_{N-1},H_3]-...-[\Phi_2,H_N]$ provided that $$\Phi_2$$ is a properly chosen function. 
The successive terms in the power series are of increasing order in a properly chosen small parameter. The latter gives the order of the distance of orbits in phase space from the equatorial circular orbit. More generally, in `third integral' expansions the small parameter can be chosen so as to reflect the deviations of the model considered from an integrable model. A non-resonant third integral can be found if $$\omega_1$$ and $$\omega_2$$ satisfy no commensurability condition. We then choose, for example, $$\Phi_2=\frac{1}{2}(\dot{x}^2+\omega^2_1~x^2)\ .$$ The higher order terms $$\Phi_3,\ldots,\Phi_N$$ can be calculated by computer algebra. The third integral is in general a non-convergent series. However a proper truncation provides useful approximations for the forms of galactic orbits. In fact, it has been found that a truncated third integral $$\overline{\Phi}_N=\Phi_2+...+\Phi_N$$ is approximately conserved with greater accuracy as the order N increases beyond N=2. However if the expansion goes beyond a critical value $$N_{crit}$$ the variations of $$\overline{\Phi}_N$$ increase instead of decreasing. The order of $$N_{crit}$$ is smaller when the perturbations $$\varepsilon,\varepsilon'$$ etc are larger. By Nekhoroshev's theorem, the error of the approximation is exponentially small in $$1/\epsilon$$ (or $$1/\epsilon'$$) at the order $$N_{crit}\ .$$ In calculating the various terms of the third integral we find that $$\Phi_N$$ contains denominators of the form $$|\omega_1k_1+\omega_2k_2|$$ where $$k_1$$ and $$k_2$$ are positive or negative integers. Thus if $$\left|\frac{\omega_1}{\omega_2}\right|$$ is close to a rational number $$\left|\frac{k_2}{k_1} \right|\ ,$$ the quantity $$|\omega_1k_1+\omega_2k_2|$$ may be small. Then we say that we have a small divisor. In such cases we can find a resonant form of the third integral that is valid near the particular resonance $$\left|\frac{\omega_1}{\omega_2}\right|= \left|\frac{k_2}{k_1}\right|\ .$$ In particular we find the corresponding resonant periodic orbits and the tube orbits that surround the stable resonant periodic orbits. Of special importance are the low order resonances $$\omega_1/\omega_2=\pm 1/1,~\pm 2/1\ ,$$ etc. The ordered orbits obeying a non-resonant third integral are called tube' orbits if $$J_0\neq 0\ ,$$ and box' orbits if $$J_0=0\ .$$ Box orbits pass arbitrarily close to the center, while tube orbits leave a hole around the axis of symmetry. On the other hand, the orbits obeying a resonant third integral are either periodic or they form thin tubes around their corresponding periodic orbits. Figure 3: (a) A box orbit, (b) a thin tube orbit around the 1:1 resonant periodic orbit, and (c) a chaotic orbit in the meridian plane of a galaxy In the case of a galaxy with a smooth core the orbits near the center are ordered. The boxes are deformed Lissajous figures ( Figure 3 a ), while the orbits around particular resonant periodic orbits form elongated tubes ( Figure 3 b ). But there also many chaotic orbits ( Figure 3 c ), mainly in the region separating the box orbits from the main tube orbits. If now the core of a galaxy is cuspy (e.g. the density rises as a power law as we approach the center), or if we add a central mass (e.g. a black hole) at the center of the galaxy, all the orbits near the center become chaotic. At the same time the number of tube orbits increases. In order to distinguish between ordered and chaotic orbits we use a Poincaré surface of section, i.e. 
a surface in phase space that intersects all the orbits, and find the distribution of the successive intersections (iterates) of each orbit by this section. In the case of orbits in the meridian plane $$(R,z)$$ of a galaxy with a plane of symmetry $$z=0\ ,$$ the plane $$z=0$$ is a Poincaré surface of section ( Figure 4 ). The energy $\tag{15} E=\frac{1}{2}(\dot{R}^2+\dot{z}^2)+V(R,z)$ is then one integral of motion. If we also use the `third integral' $\tag{16} \Phi(\dot{R},\dot{z},R,z)=c$ we eliminate $$\dot{z}$$ between (15) and (16) and find a toroidal surface $\tag{17} F(R,z,\dot{R})=q$ on which lies the orbit. The successive intersections of an orbit by the plane $$z=0$$ lie on an invariant curve $\tag{18} F(R,0,\dot{R})=q$ The ordered orbits thus define invariant curves of the form of (18) . On the other hand the successive iterates of a chaotic orbit are scattered irregularly ( Figure 4 ). Figure 4: (a) The Poincaré surface of section in a galaxy without a central black hole. Regular box or tube orbits correspond to smooth curves while chaotic orbits correspond to scattered points. (b) In a galaxy with a central black hole the box orbits disappear, while the domain of regular loop orbits increases In a generic dynamical system the ordered and chaotic orbits coexist. In fact, if an integrable system is perturbed slightly, there is a large set of integral surfaces containing regular orbits. This result is based on the famous Kolmogorov, Arnold, Moser (KAM) theorem that can be stated as follows. If an autonomous Hamiltonian system of N degrees of freedom is close to an integrable system expressed in action-angle variables, there are N-dimensional invariant tori containing quasi-periodic motions with frequencies $$\omega_j$$ satisfying a Diophantine condition $\tag{19} |\mathbf{\omega\cdot k}|=|\omega_1 k_1+\omega_2k_2+... +\omega_Nk_N|>\frac{\gamma}{[|k_1|+|k_2|+...|k_N|]^\tau}$ where $$\tau>N-1\ .$$ The set of invariant tori is of order $$[1-O(\gamma)]\ .$$ thus if the perturbation $$\gamma$$ is small most initial conditions generate ordered orbits. However near every resonance where $$|\mathbf{\omega\cdot k}|$$ is small there are sets of chaotic orbits. On the other hand if $$\gamma$$ is large a large degree of chaos appears. The transition from order to chaos is produced by an overlapping of resonances (Rosenbluth,Sagdeev, Taylor & Zaslavski, Contopoulos, Chirikov). In fact at every resonance of a nonintegrable system there is one or two unstable periodic orbits and close to every unstable orbit there is some degree of chaos ( Figure 5 a ). If the perturbation is small the chaotic zones of the various resonances are separated by invariant curves around the center. But if the perturbation increases the intervening invariant curves are destroyed and the various chaotic domains overlap and produce a large degree of chaos ( Figure 5 b ). Figure 5: Chaos (a) before and (b) after the overlapping of resonances A general method to distinguish between ordered and chaotic orbits is by means of the Lyapunov characteristic number (LCN) defined as follows. We calculate two orbits $$x(t,x_0)$$ and $$x(t,x_0+\xi_0)=x(t,x_0)+\xi\ ,$$ where $$\xi$$ is an infinitesimal deviation, found by solving the variational equations or by approximate numerical methods. 
The Lyapunov characteristic number is $\tag{20} LCN=\lim\limits_{ t\rightarrow\infty}\frac{ln|\xi/\xi_0|}{t}$ where $$\xi_0$$ and $$\xi$$ are the deviations at times $$0$$ and $$t\ .$$ If $$LCN=0$$ the orbit is ordered and if $$LCN>0$$ the orbit is chaotic. This method is applicable for any number of degrees of freedom. The inverse of the Lyapunov characteristic number is called "Lyapunov time". Beyond the Lyapunov time the orbits are unpredictable. There are several practical methods to improve the method of the Lyapunov characteristic number, like the fast Lyapunov indicator (FLI, Froeschlé), the stretching numbers and helicity angles (Voglis), the mean exponential growth of nearby orbits (MEGNO, Cincotta, Simo) and the smaller alignment index (SALI, Skokos). On the other hand, a global analysis of phase space dynamics can be done very efficiently with the frequency analysis of Laskar. ### Orbits in triaxial galaxies In the study of triaxial galaxies, a basic starting model is the integrable model given by the Staeckel potential $\tag{21} V=-\frac{F_1(\lambda)}{(\lambda-\mu)(\lambda-\nu)} -\frac{F_2(\mu)}{(\mu-\nu)(\mu-\lambda)} -\frac{F_3(\nu)}{(\nu-\lambda)(\nu-\mu)}$ in elliptical coordinates $$\lambda,\mu,\nu\ .$$ The most important types of orbits are i) box, (ii) and (iii) inner and outer long axis tube, and (iv) short axis tube orbits ( Figure 6 ). Figure 6: The regions filled by the main types of orbits in triaxial galaxies (a) box (b) inner long axis tube (c) outer long axis tube (d) short axis tube The ILAT, OLAT and SAT orbits characterize the form of regular quasi-periodic orbits in generic triaxial potentials. On the other hand, the box orbits are present only in models of triaxial galaxies with a smooth core, i.e. one characterized by a flat central density profile. If, instead, the central density profile is cuspy, most box orbits disappear and are replaced by chaotic orbits which may pass arbitrarily close to the center. If we now consider periodic galactic orbits in 3 dimensions we have a number of new phenomena. The type of stability or instability of the periodic orbits depends on the eigenvalues of the monodromy matrix $$A\ ,$$ that gives the infinitesimal deviations from the periodic orbit after one period $$T\ :$$ $\tag{22} \xi(T)=A\xi_0$ The eigenvalues $$\lambda$$ of A satisfy an equation of the form $\tag{23} (\lambda^2+b_1\lambda+1)(\lambda^2+b_2\lambda+1)=0$ Thus there are two couples of inverse eigenvalues $\tag{24} \begin{array}{c} \lambda _1 \\ \lambda _2 \end{array} = {1 \over 2} (-b_1\pm \sqrt{ b^2_1-4}),\begin{array}{c} \lambda _3 \\ \lambda _4 \end{array} = {1 \over 2} (-b_2\pm \sqrt{ b^2_2-4})$ If $$|b_1|<2\ ,$$ $$|b_2|<2$$ the orbit is stable (S), if $$|b_1|<2\ ,$$ $$|b_2|>2\ ,$$ or $$|b_1|>2\ ,$$ $$|b_2|<2$$ the orbit is simply unstable (U), if $$|b_1|>2\ ,$$ $$|b_2|>2$$ the orbit is doubly unstable (DU), and if $$b_1\ ,$$ $$b_2$$ are complex, the orbit is complex unstable ($$\Delta$$). In a 2-D system there is only one factor of (23) and the orbits are either stable or unstable. As an example of a 3-D system we consider the Hamiltonian $\tag{25} H=\frac{1}{2} (\dot{x}^2+\dot{y}^2+\dot{z}^2+Ax^2+By^2+C z^2)-\varepsilon xz^2-\eta yz^2=h$ for fixed values of $$h,A,B,C$$ and varying parameters $$\varepsilon$$ and $$\eta\ .$$ A simple orbit, called 1a, has various types of stability, as $$\varepsilon$$ and $$\eta$$ vary. Figure 7 is a 'stability diagram' that contains all four types of stability-instability. 
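The classification just described is easy to encode. The helper below is a small illustration (my own, not from the article) that takes the two indices $$b_1,b_2$$ of eq. (23) and returns the stability character of a 3-D periodic orbit; the marginal case $$|b|=2$$ is not treated separately.

```python
def stability_type(b1, b2):
    """Stability character of a 3-D periodic orbit from its indices b1, b2 (eq. 23)."""
    if isinstance(b1, complex) or isinstance(b2, complex):
        return "complex unstable (Delta)"
    inside1, inside2 = abs(b1) < 2.0, abs(b2) < 2.0
    if inside1 and inside2:
        return "stable (S)"
    if inside1 or inside2:
        return "simply unstable (U)"
    return "doubly unstable (DU)"

print(stability_type(1.5, -0.3))        # stable (S)
print(stability_type(2.5, -0.3))        # simply unstable (U)
print(stability_type(2.5, 3.0))         # doubly unstable (DU)
print(stability_type(1 + 2j, 1 - 2j))   # complex unstable (Delta)
```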
Figure 7: Stability diagram for a family of simple 3-D periodic orbits

In the case of 3-D galaxies the 3-D orbits undergo an oscillation along the third dimension, whatever the form of their projection on the galactic plane. Thus we may have instability along the third dimension even if the projection of an orbit on the plane of symmetry is stable. Another new phenomenon that appears in three or more degrees of freedom is Arnold diffusion. This is a slow diffusion that allows orbits to go very far from their initial conditions by following various chaotic layers in phase space. In the case of 2 degrees of freedom the phase space for a fixed energy is 3 dimensional. Then if there is a 2-dimensional invariant toroidal surface (and many such surfaces exist in slightly perturbed systems) the orbits inside the torus cannot cross the torus and go outside. Thus if the inner orbits are chaotic they cannot diffuse very far. However in a system of 3 degrees of freedom the phase space (for constant energy) is 5-dimensional, but the invariant surfaces provided by the KAM theorem are 3-dimensional and cannot separate the phase space into an interior and an exterior part. Thus diffusion can carry an orbit very far. A picture of such a diffusion is provided if we consider a 3-D phase space and 1-D invariant manifolds, like strings from the floor to the ceiling of a room. If the system is non-integrable these strings leave between them chaotic domains, and the diffusion along these domains is Arnold diffusion. Arnold diffusion can change the galactic orbits considerably; however, the time scale of this diffusion is very long and it cannot appreciably affect galaxies that are not very irregular. A consequence is that despite Arnold diffusion the galaxies do not change appreciably over a Hubble time.

### Orbits in disc galaxies

The patterns of disc galaxies (e.g. bars or spiral arms) rotate with an angular velocity $$\Omega_s$$ which is large compared to the slow figure rotation of elliptical galaxies. Thus, in order to study the orbits in disc galaxies it is customary to choose a rotating frame of reference. In an axisymmetric disc, the equations of motion in polar coordinates $$(R,\vartheta)$$ in the rotating frame are: Eq. (6) for the radial coordinate and $\tag{26} \dot{\vartheta}+\Omega_s=J_0/R^2$ for the angular coordinate. The radius of the circular orbit $$R_0$$ as well as the angular and epicyclic frequencies $$\Omega,\kappa$$ are defined by (10) , (11) . The combination $\tag{27} H=E_0-\Omega_sJ_0=h$ is called the Jacobi constant (or energy in the rotating frame).

Figure 8: The curves $$\Omega,\Omega\mp \kappa/2$$ and two possible lines $$\Omega=\Omega_s$$ for two galactic models, giving different forms of the curve $$\Omega-\kappa/2$$

From a given potential $$V_0(R)$$ we find three main functions (curves) ( Figure 8 ), namely $$\Omega$$ and $$\Omega\pm\kappa/2\ .$$ The intersections of these curves by the line $$\Omega_s$$ are corotation, where $$\Omega=\Omega_s\ ,$$ and outer or inner Lindblad resonances, where $\tag{28} \frac{\kappa}{\Omega-\Omega_s}=\mp\frac{2}{1}$ In generic potentials we have one corotation, one outer Lindblad resonance (OLR) and one, zero, or two inner Lindblad resonances (ILR).
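In practice these resonance radii are located numerically from the rotation curve. The sketch below is purely illustrative: the logarithmic potential $$V_0(R)=\frac{1}{2}v_0^2\ln(R_c^2+R^2)$$ and the values of $$v_0$$, $$R_c$$ and $$\Omega_s$$ are my own assumptions, not taken from the article. It evaluates $$\Omega$$ and $$\kappa$$ from eqs. (10)-(11) and brackets the roots of $$\Omega=\Omega_s$$ and $$\Omega+\kappa/2=\Omega_s\ .$$

```python
# Locating corotation and the outer Lindblad resonance for an assumed
# logarithmic potential (illustrative parameters: km/s, kpc, km/s/kpc).
import numpy as np
from scipy.optimize import brentq

v0, Rc, Omega_s = 220.0, 1.0, 25.0

def Vp(R):            # dV0/dR
    return v0**2 * R / (Rc**2 + R**2)

def Vpp(R):           # d2V0/dR2
    return v0**2 * (Rc**2 - R**2) / (Rc**2 + R**2)**2

def Omega(R):         # eq. (10)
    return np.sqrt(Vp(R) / R)

def kappa(R):         # eq. (11)
    return np.sqrt(Vpp(R) + 3.0 * Vp(R) / R)

R_cr  = brentq(lambda R: Omega(R) - Omega_s, 0.1, 50.0)                 # corotation
R_olr = brentq(lambda R: Omega(R) + 0.5 * kappa(R) - Omega_s, 0.1, 50.0)  # OLR
print("corotation R =", round(R_cr, 2), "kpc")
print("OLR        R =", round(R_olr, 2), "kpc")
# For these particular parameters Omega - kappa/2 stays below Omega_s, so no
# ILR exists; in general one should scan the sign of Omega - kappa/2 - Omega_s
# before bracketing an ILR root.
```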
Figure 9: (a) Periodic orbits near the resonances $$2/1$$ (inner Lindblad), $$3/1$$ and $$4/1\ .$$ (b,c) Quasiperiodic orbits surrounding periodic orbits

A convenient approach is to analyze the gravitational potential in terms of Fourier components $\tag{29} V(R,\theta,z)= \sum_{m=0}^{\infty}V_m(R,z)\cos\big[m\theta-\phi_m(R)\big]$ In most `grand design' spiral or barred galaxies, the m=2 component dominates over all other components except $$V_0$$ (the axisymmetric component). The main families of orbits in such galaxies are examined below. The remaining components $$V_m$$ ($$m\neq$$0 or 2) describe deformations of the form of the spiral or bar structures from a purely bi-symmetric shape. It has been found observationally that these components may play a significant dynamical role in particular galaxies. The periodic orbits at the Lindblad resonances are approximately ellipses $$(2/1)$$ around the center of the galaxy ( Figure 9 a ), but there are also higher order resonances, e.g. triple $$(3/1)\ ,$$ quadruple $$(4/1)$$ ( Figure 9 a ), etc. The regular non-periodic orbits are close to periodic orbits. They have two basic frequencies, one nearly equal to that of the corresponding periodic orbit, and one representing oscillations around this periodic orbit. Such orbits are called quasiperiodic ( Figure 9 b,c ). The main families of periodic orbits inside corotation are the families $$x_1$$ (stable, Figure 10 a), $$x_2$$ (stable, oriented perpendicularly to $$x_1\ ,$$ Figure 10 a) and $$x_3$$ (unstable).

Figure 10: (a) The main periodic orbits in a disc galaxy. (b) Orbits close to the short and long period orbits near corotation

Figure 11: Gaps along the characteristic of the family $$x_1\ .$$ (—-) stable, ($$\cdot\cdot\cdot$$) unstable orbits; ($$---$$) curve of zero velocity. ($$-\cdot -$$) the $$x_1$$ family in the axisymmetric case

We call "characteristic" of a family of periodic orbits the curve that gives the position of an orbit (say the coordinate $$x$$ at the point of intersection of the orbit with the x-axis ($$y=0$$)) as a function of the energy or the Jacobi constant ( Figure 11 ). From the stable families bifurcate higher order resonant families, e.g. 2/1, 3/1, 4/1, etc. It is known that in perturbed (i.e. non-axisymmetric) systems at the even resonances (2/1, 4/1, etc.) the family $$x_1$$ forms gaps. Three main types of gaps are shown in Figure 11, but more complicated types of gaps also exist. Similar phenomena appear also beyond corotation. Near corotation we have four equilibrium points $$L_1\ ,$$ $$L_2\ ,$$ $$L_4\ ,$$ $$L_5$$ ($$L_3$$ represents the center of the galaxy). For small perturbations $$L_1\ ,$$ $$L_2$$ are unstable and $$L_4\ ,$$ $$L_5$$ are stable. Near $$L_4\ ,$$ $$L_5$$ we have two families of periodic orbits, the short (SPO) and the long period orbits (LPO) ( Figure 10 a ). These are connected by a complicated set of bifurcations. The ordered nonperiodic orbits near $$L_4\ ,$$ $$L_5$$ are either rings, close to SPO, or bananas, close to LPO ( Figure 10 b ). Many effects of the orbits can be understood by considering in detail the dynamics close to resonances.
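The equilibrium points mentioned above can be located numerically as the critical points of the effective potential $$\Phi_{eff}=V-\frac{1}{2}\Omega_s^2R^2$$ in the rotating frame. The sketch below is purely illustrative: the flattened logarithmic potential and all parameter values are my own stand-ins, not a model from the article.

```python
# Locating the Lagrangian points of a rotating non-axisymmetric potential
# as critical points of Phi_eff = V - 0.5*Omega_s^2*R^2 (assumed toy model).
import numpy as np
from scipy.optimize import root

v0, Rc, q, Omega_s = 220.0, 1.0, 0.9, 25.0     # assumed units: km/s, kpc

def grad_phi_eff(p):
    x, y = p
    denom = Rc**2 + x**2 + (y / q)**2
    return [v0**2 * x / denom          - Omega_s**2 * x,
            v0**2 * y / (q**2 * denom) - Omega_s**2 * y]

# starting guesses near the expected locations: along the major axis of the
# potential (L1, L2), along the minor axis (L4, L5), and at the center (L3)
guesses = {"L1": ( 8.0, 0.0), "L2": (-8.0, 0.0),
           "L4": ( 0.0, 8.0), "L5": ( 0.0, -8.0), "L3": (0.0, 0.0)}
for name, g in guesses.items():
    sol = root(grad_phi_eff, g)
    print(name, np.round(sol.x, 3), "converged:", sol.success)
```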
The most important resonances are the inner and outer Lindblad resonances, where $$\frac{\omega_1}{\omega_2}=\frac{\kappa}{\Omega-\Omega_s}= \pm{2\over 1}\ .$$ Close to these resonances the Hamiltonian can be written in action angle variables in the form $\tag{30} H=\omega_1I_1+\omega_2I_2+aI^2_1+2bI_1I_2+cI^2_2+...+V_1$ where $$V_1$$ represents a spiral or a bar $\tag{31} V_1=\varepsilon_0\cos 2\vartheta_2~-\left(\frac{2I_1}{\omega_1}\right)^{1/2}~[\varepsilon_+ \cos(\vartheta_1-2\vartheta_2)+\varepsilon_- ~\cos(\vartheta_1+2\vartheta_2)]+...$ The action $$I_1$$ corresponds to the radial deviations, $$I_2$$ corresponds to the angular momentum, $$\vartheta_1$$ represents the angle along an epicyclic oscillation around the circular orbit, and $$\vartheta_2$$ represents the azimuth around the center. Away from all resonances we can use a canonical transformation to eliminate the terms containing angles and find $$H$$ in the form $$H=\omega_1J_1+\omega_2J_2+...\ .$$ Then $$J_1$$ and $$J_2$$ are integrals of motion. When $$\frac{\omega_1}{\omega_2}$$ is close to 2 (inner Lindblad resonance) we can use a canonical change of variables and find an approximate Hamiltonian of the form $\tag{32} \overline{H}=\omega_2J_2+\gamma J_1+...-\varepsilon_+\left(\frac{2J_1}{\omega_1}\right)^{1/2}\cos\psi_1=0$ where $$J_1,J_2$$ are the new actions, $$\gamma=\omega_1-2\omega_2$$ and $$\psi_1=\vartheta^\ast_1-2\vartheta^\ast_2\ ,$$ with $$\vartheta^\ast_1, \vartheta^\ast_2$$ the new angles. As $$\psi_2$$ does not appear in this Hamiltonian the corresponding action $$J_2$$ is a second integral of motion. The integral $$\overline{H}-\omega_2J_2$$ is a function of $$J_1$$ and $$\psi_1$$ which gives the forms of the orbits near the inner Lindblad resonance. These orbits are close to two perpendicular deformed ellipses. The nonperiodic orbits in this region form rings around these ellipses. Thus the resonant integrals explain the main forms of the orbits inside corotation in a galaxy. Similar results appear near the outer Lindblad resonance and near corotation. If the disc has non-negligible thickness, we may also consider 3D orbits which have a vertical oscillation with respect to the disc. Such orbits can explain particular forms of galaxies seen edge-on, like peanut galaxies and box galaxies.

Finally, a small proportion of stars continuously escape from a galaxy to infinity. In fact, after a time interval equal to the time of relaxation $$t_{relax}\ ,$$ the encounters between stars (collisional effects) produce an approximately Maxwellian distribution of velocities. In a Maxwellian distribution a proportion $$0.0074$$ of stars have velocities greater than the escape velocity. Thus after a time t the proportion of escaping stars is equal to $\tag{33} \frac{dN}{N}=0.0074\, t/ t_{relax}$ As a consequence a galaxy is completely dissolved after a time of the order of $$10^{16}$$ years. In the case of a given potential $$V(\textbf{x})$$ the stars inside corotation cannot escape from a galaxy, but the stars outside corotation may escape to infinity. If the galaxy is not axisymmetric only the Jacobi constant ( (27) ) is kept constant, while the energy and the angular momentum undergo variations that appear random. Then after a sufficient time the energy may become positive and the stars escape. If a system is close to integrable and there are invariant curves beyond corotation surrounding the galaxy, the orbits inside them cannot ever escape.
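For orientation (my own arithmetic, using only (33)): complete dissolution corresponds to $$dN/N\approx 1\ ,$$ i.e. $$t\approx t_{relax}/0.0074\approx 135\,t_{relax}\ ;$$ the quoted dissolution time of order $$10^{16}$$ years therefore corresponds to a relaxation time of order $$10^{14}$$ years.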
However, in strongly perturbed systems there are no such invariant curves and the orbits may escape after a sufficiently long time. But even if the orbits are chaotic, they are nevertheless `sticky' and can escape only after very long times. Stickiness means that the chaotic orbits spend transient but long intervals of time in a restricted domain of the phase space around islands of stability or other invariant sets (like invariant manifolds; section 5.5), before filling a much larger chaotic domain, or escaping to infinity.

## Construction of equilibria

The study of individual orbits in static potentials aims primarily to identify which types of orbits support the main observed morphological and kinematical features of galaxies. In order, however, to construct fully self-consistent stellar equilibria, one has to create an appropriate statistical mixture of many orbits ensuring that the resulting distribution function f is a solution of the collisionless Boltzmann equation ( (2) ). There are various methods to accomplish this goal. Some important methods are (i) the hierarchy of Jeans' equations, (ii) inversion formulae, and (iii) the numerical construction of self-consistent equilibria (Schwarzschild). Finally, (iv) one can make an `ad hoc' choice of a distribution function model.

### Hierarchy of Jeans' equations

The collisionless Boltzmann equation ( (2) ) is, in general, very hard to solve in terms of all six arguments (positions and velocities). In practice, however, only the lowest moments of the distribution function can be compared to observations. In Cartesian coordinates, the moments are defined by: $\tag{34} \mu_{i,j,k}(\mathbf{r})=\rho(\mathbf{r})\overline{v_x^i v_y^j v_z^k}= \int v_x^iv_y^jv_z^k f dv_x dv_y dv_z$ The zero-th moment $$i=j=k=0$$ is the density itself. The first order moments $$i=1,j=0,k=0$$ (and cyclic permutations) depend on the mean streaming velocities in the three main directions, etc. Multiplying all terms of Boltzmann's equation by power terms $$v_x^iv_y^jv_z^k$$ and integrating over the velocities, we obtain the so-called hierarchy of Jeans' equations, which relate the various moments defined by ( (34) ). The most important Jeans' equations are the continuity and the momentum equations, which relate the zero, first, and second order moments of $$f\ .$$ These equations must be supplemented with a closure condition, which is equivalent to the equation of state in hydrodynamics. Then, this system of equations can be solved and yields specific models that can be compared to observations. Special attention, however, has to be paid to a posteriori checks that the resulting models do not exhibit unphysical properties (for example, the density may turn out to be negative in some region of the modelled galaxy, or the velocity dispersion may show anomalous peaks).

### Inversion formulae

If we make some specific assumptions about the geometry and kinematics of an observed galaxy, we can deduce models of the distribution function by an inversion procedure starting from some functions determined by observations. The simplest such example, referring to spherical systems, is Eddington's inversion formula.
Starting from the density $$\rho(r)\ ,$$ we derive the potential through Poisson's equation $$\nabla^2 V=4\pi G\rho$$ and then use the solution $$V(r)$$ to find $$r(V)\ ,$$ and finally $$\rho(V)\ .$$ If we now assume that the distribution function is isotropic in the velocities, we find its form via $\tag{35} f(E)={1\over\sqrt{8}\pi^2} {d\over dE}\int_{E}^0 {d\rho\over dV} {dV\over\sqrt{V-E}}$ Various generalizations of Eddington's formula exist. For example, the Osipkov-Merritt formula for anisotropic spherical systems reads: $\tag{36} f(Q(E,L))={1\over\sqrt{8}\pi^2}{d\over dQ}\int_{Q}^0 {d\rho_Q\over dV} {dV\over\sqrt{V-Q}}$ where $$Q=E+L^2/2r_a^2\ ,$$ $$L$$ is the angular momentum, $$\rho_Q=\rho(1+r^2/r_a^2)$$ and $$r_a$$ is a characteristic radius beyond which the velocity distribution becomes anisotropic. In the case of axisymmetric systems, the inversion method yields the `even' part of the distribution function, i.e. the part depending on the energy and on the square of the angular momentum along the axis of symmetry (Lynden-Bell, Hunter, Dejonghe). The odd part, on the other hand, can be found by an inversion algorithm only if the azimuthal velocity distribution is known (Merritt).

### Schwarzschild's method

A numerical method to examine the relative contribution of various types of orbits in models of stellar equilibria is Schwarzschild's method. This method tries to find whether there is an appropriate statistical combination of orbits in a model with fixed spatial density $$\rho(\mathbf{r})$$ (called the 'imposed density') such that the superposition of the orbits yields a 'response density' equal to the imposed density. To this end, a grid of initial conditions is specified in a properly chosen subset of the phase space (e.g. on equipotential surfaces). The orbits with these initial conditions are integrated for sufficiently long time intervals. This creates a 'library of orbits'. By now dividing the space into a large number $$N_c$$ of small cells, we calculate the time spent along each orbit inside every cell. We furthermore assign statistical weights $$w_o~(o = 1,\ldots,N_o)\ ,$$ where $$N_o$$ is the total number of orbits considered. Finally, we check whether there is a solution for these weights that satisfies the self-consistency condition $\tag{37} \sum_{o=1}^{N_o}w_o t_{oc}=m_c~~~~c=1,\ldots,N_c$ where $$t_{oc}$$ is the time spent by the orbit labeled by o in the cell labeled by c, and $$m_c$$ is the total mass in the cell c as derived from the imposed density function. We furthermore impose the constraint $$w_o\geq 0\ ,$$ i.e. the weights must be non-negative. If a solution under the imposed constraints is found, we call it a self-consistent model of a galaxy. In practice, instead of a perfect solution we look for a minimizer of the difference between imposed and response density, which can be found by various algorithmic techniques such as linear or quadratic programming, non-negative least squares, the Lucy algorithm, or 'entropy' functional methods (a toy numerical sketch of this weight-fitting problem is given below). Usually, the solutions found are non-unique, and further constraints can be introduced, for example, when kinematic data are available for a particular galaxy. The method of Schwarzschild has been applied with great success in various models of axisymmetric or triaxial galaxies. An important question regards the relative contribution of regular and chaotic orbits to the so-constructed stellar equilibria. In early models of triaxial galaxies it was found that most orbits are regular.
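The sketch below illustrates the weight-fitting step (37). It is a toy calculation of my own: the orbit-cell matrix and the imposed cell masses are random placeholders standing in for a real orbit library, and the constraint $$w_o\geq 0$$ is enforced with non-negative least squares, one of the algorithms mentioned above.

```python
# Toy version of eq. (37): find non-negative orbit weights reproducing the
# imposed cell masses.  T and m are random stand-ins for a real orbit library.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)
N_cells, N_orbits = 200, 500
T = rng.random((N_cells, N_orbits))      # T[c, o] = time spent by orbit o in cell c
w_true = rng.random(N_orbits)            # a known non-negative "solution"
m = T @ w_true                           # imposed cell masses

w_fit, residual = nnls(T, m)             # enforces w_o >= 0
print("rms density mismatch:", residual / np.sqrt(N_cells))
print("all weights non-negative:", bool(np.all(w_fit >= 0.0)))
```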
However, in models with central cusps or central black holes, it was found that the chaotic orbits contribute significantly to the self-consistent equilibria (by percentages reaching 50%). Finally, applications of variants of the same method in disc galaxies (Contopoulos) have revealed that chaotic orbits play an important role in the self-consistency condition near and beyond the corotation region of barred galaxies.

## Gas Dynamics

Up to about 10% of the total baryonic matter in disc galaxies can be in the form of gas or dust. The gas is present in the inner disc, but also at distances exceeding many optical lengths. The formation of galactic discs from cosmological initial conditions is itself attributed partly to dissipative processes taking place in the gas, during an early epoch of galaxy formation. The main condition for a mass component of a galaxy to be described by gas dynamics is $\tag{38} l\ll\lambda$ where $$l$$ is the mean-free path, while $$\lambda$$ is the scalelength over which the distribution function exhibits significant variations. In terms of the temporal evolution of the distribution function $$f(\mathbf{x},\mathbf{v},t)\ ,$$ such a condition can be accounted for by including `collisional' (dissipative) terms in Boltzmann's equation, namely $\tag{39} \frac{\partial f}{\partial t} + \frac{\partial f}{\partial \textbf{x}}\textbf{v}- \frac{\partial f}{\partial \textbf{v}} \frac{\partial V}{\partial \textbf{x}}= \frac{df}{dt_c}$ where the term $$df/dt_c$$ describes non-zero variations of the distribution function's convective derivative (the latter are zero only in pure stellar dynamics). The index c in the r.h.s. means `collisions', since collisional effects mainly account for the time change in the distribution function. For example, dynamical friction phenomena may become important when massive objects (e.g. black holes, giant molecular clouds or clusters) move in the background of stars and gas. The timescale in which such phenomena affect the global form of the distribution function is of the order of the collisional relaxation time (see section 6), but the local form can be affected in significantly shorter times. By considering various velocity moments of the distribution function, ( (39) ) gives rise to specific forms of hydrodynamical equations (i.e. the continuity and force equations). The main role of the gas is that it produces secular evolution of the galaxies. Gas-dynamical phenomena of particular interest are:

i) Jeans instability: if we consider a volume of gas with homogeneous density $$\rho\ ,$$ the time evolution of a density perturbation $$\delta\rho$$ generated by any mechanism within this volume depends on the scale $$L$$ over which this perturbation extends. Namely, if $$L>L_J=v_s(G\rho)^{-1/2}\ ,$$ where $$v_s$$ is the velocity of sound, the perturbation becomes unstable, i.e. its pressure can no longer sustain it against its self-gravity. The length $$L_J$$ is called Jeans' length. A stellar analog of this phenomenon occurs with the velocity of sound replaced by the velocity dispersion of the stars. Jeans instability is important because it leads to the growth of even very small initial density perturbations, and various applications of it have been studied, ranging from the early epoch of galaxy formation to the later epochs, where e.g. the spiral structure is formed.

ii) Gas infall: giant gas clouds with masses comparable to those of dwarf galaxies, or giant gas streams, may cause accretion of mass in the centers of disc galaxies.
The origin of the infalling gas lies mainly inside the disc, but a considerable fraction may come from the surrounding environment, in which case gas accretion is characterised as a remnant of cosmological infall. The accretion rate could be as high as 1 solar mass per year, resulting in significant secular evolution of the mass distribution in galaxies within one Hubble time. This, in turn, affects both the morphology and kinematics of a galaxy, but it also induces enhanced star formation and chemical (metalicity) evolution of galaxies. In numerical simulations, the motions of infalling gas components often appear to be highly non-circular. However, the gas may settle to particular locations in the disc, close to the main disc resonances. In this way, rings can be formed near the inner and outer Lindblad resonances. iii) Gas shocks: The spiral arms in disc galaxies are described as density waves embedded in the galactic disc. As a density wave propagates, large pressure gradients are formed in the wavefront, which may induce gas shocks. Such shocks are also accompanied by rapid star formation. In fact, observation of the phase lag between the loci of the maximum of the spiral density and of the density of suitable nearby tracers with relatively short lifetime (like open clusters of young stars), has been proposed as an observational method to determine the pattern speed of spiral arms. iv) Hot coronae: elliptical galaxies are often surrounded by hot gas coronae (of temperature $$10^6$$K). The total mass in such coronae can rise up to $$10^{10}$$ solar masses. Numerical studies of gas dynamics offer insights regarding the role of dissipative processes in the secular evolution of galaxies. Besides general mesh hydrodynamical schemes, some numerical techniques well suited to the study of galactic gas dynamics are the flux-split (FS2, van Albada) and the sticky particle (Calberg) techniques. A very widely used technique is smooth particle hydrodynamics (SPH). This is a Lagrangian scheme where the equations of motion of individual `gas particles' (which correspond, physically, to large masses of gas) are integrated numerically. The SPH particles are considered to occupy a volume (e.g. a sphere) in space. Quantities like pressure, temperature and viscosity can be defined by taking weighted-averages inside the volume. The particles may lose energy due to cooling (caused e.g. by radiation of the gas). This is taken into account by considering the physical processes which cause the gas to radiate. The SPH method is very flexible for many purposes, compared to the less flexible but more accurate mesh hydrodynamical methods. SPH can be combined with N-body stellar dynamical methods (section 6) to explore the evolution of galaxies. It has been applied to studies of gas dynamics in isolated galaxies and in cosmological simulations of galaxy formation and the formation of large scale structures. Finally, the comparison of photometric data in various wavelengths reveals to what extent gas dynamics follows or is differentiated from stellar dynamics in particular galaxies. This issue is important in the case of spiral structure, since we often find structures in the gas (e.g. spiral arms extending to large distances) that do not have a stellar counterpart, and vice versa. ## Spiral Structure The current paradigm of spiral structure in disc galaxies is based on the description of the spiral arms as density waves rather than material structures (i.e. always composed by the same stars). 
The origin of density wave theory lies in the observation that, if all matter is in nearly circular motion around the center of a disc galaxy, material non-axisymmetric structures embedded in the disc wind very quickly. This is due to the differential rotation of a galaxy, namely the fact that the angular rotation speed decreases from the center outwards. The average decrease with distance $$\Delta\Omega/\Delta R\ ,$$ of order $$10 Gyr^{-1}Kpc^{-1}\ ,$$ causes such structures to become tightly wound in a small time period (of order 1Gyr), acquiring pitch angles which are only a fraction of one degree. This is opposed to the fact that the pitch angles of observed spiral arms are substantially larger (a few degrees), leading to the conclusion that the observed spiral arms cannot be material. The most natural solution to the above winding dilemma is found by adopting that the spiral arms of the spiral galaxies are density waves, i.e. the stars pass through the spiral arms but stay longer on the average close to them. Thus the spiral arms are not composed of the same stars always, but they are waves, i.e. the stars move with different angular velocities $$\Omega$$ around the center, while the spiral arms (density maxima) rotate with a particular angular velocity $$\Omega_s\ .$$ Stars inside corotation have $$\Omega>\Omega_s$$ and overtake the spiral arms, while stars outside corotation have $$\Omega<\Omega_s$$ and move in a retrograde direction with respect to the spiral arms. The existence of spiral density waves has been established both by observations and by numerical N-body simulations. ### Linear density wave theory The density wave theory of spiral arms was developed first by Lindblad, and later by Lin, Shu, Kalnajs, Toomre, Lynden-Bell etc. The linear theory of density waves starts with a given axisymmetric model that consists of the functions $$f_0,\varrho_0$$ and $$V_0$$ satisfying equations (2),(3),(4), and finds the first order perturbations $$f_1,\varrho_1\ ,$$ and $$\!V_1$$ that have a spiral form (or a bar form in special cases). In the simplest case instead of $$\varrho_0, \varrho_1$$ we have the surface densities $$\sigma_0, \sigma_1\ .$$ The linear problem consists in finding self-consistent solutions of Eqs. (2),(3),(4) of the form $$f=f_0+f_1,\ \sigma=\sigma_0+\sigma_1$$ (where $$\sigma$$ is the surface density corresponding to a spatial density $$\rho=\sigma\delta(z)\ ,$$ with $$\delta(z)$$ the delta function) and $$V=V_0+V_1$$ with $$\!V_1$$ of the form $\tag{40} V_1=Ae^{i(\varphi+\omega t-2\vartheta)}$ where $$A=A(r)$$ is the amplitude, and $$\varphi=\varphi(r)$$ is the phase of the spiral, while $$m$$ is the number of spiral arms (usually $$m\!=\!2$$). This problem leads to an integral equation that has eigenfrequencies $$\omega=2\Omega_s$$ (for $$m\!=\!2$$), where $$\Omega_s$$ is the pattern velocity of the spiral. In the case of tight spiral arms the self-consistency condition leads to the Lin dispersion relation that relates the frequencies $$\frac{\omega_1}{\omega_2}=\frac{\kappa}{\Omega-\Omega_s}$$ at a distance $$r$$ with the inclination of the spiral arms (or local radial wavenumber $$k=\frac{d\varphi}{dr}$$), the local surface density $$\Sigma$$ and the radial dispersion of velocities $$\sigma_R\ .$$ The wavenumber is found as a function of the radius $$r\ .$$ Thus we can derive the value of $$\varphi\ ,$$ i.e. the form of the spiral arms all the way from the inner Lindblad resonance, past corotation, to the outer Lindblad resonance. 
Assuming $$\phi$$ to increase in the same sense as pattern rotation, the wave is called leading if $$k>0\ ,$$ and trailing if $$k<0\ .$$ The linear density wave theory relies on an important local approximation, namely that if both the disc potential and surface density are analyzed in modes (like (40) ), the potential perturbation induced by one mode at a point of the disc depends only on the density perturbation for the same mode at the same point. This (WKB) approximation leads to a local relation between the two quantities which ignores how the density perturbation on a global scale affects the potential locally. At a distance $$R\ ,$$ this approximation is valid to order $$(kR)^{-1}\ ;$$ thus, it holds better in the short wavelength limit ($$|kR|>>1$$).

### Nonlinear theory - Termination of spiral arms

The linear theory is inadequate to deal with the regions near the inner and the outer Lindblad resonances, because there the perturbation $$\!f_1$$ tends to infinity. In fact if we expand $$f_1$$ as in the case of the third integral, we find that $$f_1$$ contains denominators of the form $$(\omega_1 \mp 2\omega_2)$$ (- at the ILR and + at the OLR) and this tends to zero when we approach the resonance. Therefore $$f_1$$ is larger than $$f_0$$ near the resonances and the basic assumption of the linear theory that $$f_1$$ is a small perturbation of $$\!f_0$$ is not satisfied. In these cases we need a different perturbation analysis (Contopoulos), namely one has to use the resonant integrals of motion of the collisionless Boltzmann equation (2). Away from all resonances $$f$$ is a function of the generalized actions $$J_1,J_2$$ as seen in section 2.3. Close to the inner Lindblad resonance $$f$$ is a function of $$H$$ and $$J_2$$ (where $$H$$ contains not only $$J_1$$ and $$J_2\ ,$$ but also the resonant angle $$\psi=\theta_1-2\theta_2\!$$). Thus a different expansion is valid in this case. The use of a nonlinear theory of density waves allows us to describe the resonant phenomena near the Lindblad resonances and find the form of the spiral arms near these resonances. The nonlinear effects are important also near corotation. In this case the Lin dispersion relation has no infinities, but the amplitude of the wave tends to infinity as we approach corotation (Shu). Thus in this case also we have to use resonant integrals of motion in $$f$$ and not the actions $$J_1,J_2\ .$$

Figure 12: Periodic orbits inside and outside the $$4/1$$ resonance in a normal spiral galaxy

In normal spirals the non-axisymmetric perturbation is weak, of the order of $$2-10\%\ .$$ The linear density wave theory is applicable all the way to corotation if the amplitude of the spirals is of the order of $$2-5\%\ .$$ But in the range $$5-10\%$$ nonlinear effects due to higher order resonances are important. In particular the periodic orbits near the $$4/1$$ resonance are similar to squares with 4 round corners or 4 loops. But while inside the $$4/1$$ resonance the periodic orbits are oriented in a way supporting the spirals ( Figure 12 ), beyond the $$4/1$$ resonance the orientation of the orbits is quite different and does not support the spiral arms. Thus the stronger normal spirals tend to terminate near the $$4/1$$ resonance. This has been verified by many numerical experiments.

### Preference of trailing waves

The observations indicate that most spiral galaxies host trailing spiral arms.
On the other hand, the anti-spiral theorem (Lynden-Bell and Ostriker) states that for any collisionless steady-state solution of Boltzmann's equation representing trailing spiral arms, there is a symmetric solution (which is found by reversing the signs of all velocities in the distribution function) representing leading spiral arms. Thus, there are no steady-state spiral solutions, and the preference of trailing waves must be based on a time-dependence of the amplitude of the density waves such that trailing waves have a growing amplitude while leading waves have decreasing amplitude in time. The variation of amplitude in time can be caused by either dissipationless or dissipational mechanisms. Some proposed stellar-dynamical mechanisms are:

Angular momentum transfer (Lynden-Bell and Kalnajs). The torques exerted between the spiral arms cause angular momentum to be transferred across the disc. In the case of trailing spiral arms, the transfer is from the center outwards, while in the case of leading spiral arms it is inwards, which leads to a number of non-physical effects.

Swing amplification (Julian and Toomre etc.). If a small trailing wave-packet is formed, e.g. by shear instability, this moves towards the disc center with a group velocity $$v_g=d\omega/dk\ .$$ After passing through the center it emerges as a leading wave, which propagates outwards and is transformed again to a trailing wave near corotation. A careful examination of the local epicyclic motions inside the wave shows that upon the transition to a trailing wave the perturbation is substantially amplified.

Resonance effects in the ILR (Contopoulos). When the density wave theory is implemented in the inner Lindblad resonance, it is found that, starting with a slightly growing imposed density of trailing waves both inside and outside the ILR, the response density forms a short bridge which smoothly connects these arms. If, however, we start by imposing leading spiral arms inside and outside the ILR, the bridge formed by the response density is trailing.

### Modal theory - theories of multiple pattern speeds

Two important open problems regarding spiral structure refer to i) the longevity and degree of quasi-stationarity of the spiral arms, and ii) whether all structures in a galaxy rotate with a common, or with multiple pattern speeds. Regarding the assumption of quasi-stationary spiral arms, an important development going beyond the local assumptions of the Lin-Shu linear density wave theory arose by considering the problem of global modes in the galactic disc. This problem is hardly tractable in pure stellar dynamics, but numerical and semi-analytical solutions have been proposed where the effects of a cold gas component are also taken into account. This so-called modal theory (Lin and Bertin) describes evolving spiral or bar-spiral global modes. Their maintenance relies on the fact that the gas causes self-regulation, i.e. cooling of the disc, that compensates the stellar disc heating (the temporal increase of the random motions of the stars). The so-produced patterns are characterized as `quasi-stationary spiral structure' (QSSS), because the underlying global modes are long-lived, despite the fact that their superposition may lead to rather rapidly varying observed patterns.
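As a toy illustration of the last point (entirely my own; the pattern speeds and amplitudes are arbitrary choices), superposing two rigidly rotating m=2 perturbations with different pattern speeds gives a combined m=2 amplitude that is modulated on the relative ('beat') period $$\pi/|\Omega_{p1}-\Omega_{p2}|\ ,$$ even though each mode separately is a long-lived, rigidly rotating pattern.

```python
# Beat of two superposed m=2 patterns with different (assumed) pattern speeds.
import numpy as np

Op1, Op2 = 40.0, 18.0        # pattern speeds in km/s/kpc (arbitrary choices)
A1, A2 = 1.0, 0.8            # m=2 mode amplitudes at some radius (arbitrary)
to_rad_per_Gyr = 1.0227      # 1 km/s/kpc ~ 1.0227 rad/Gyr

t = np.linspace(0.0, 1.0, 2000)                    # time in Gyr
dphase = 2.0 * (Op1 - Op2) * to_rad_per_Gyr * t    # relative m=2 phase
envelope = np.sqrt(A1**2 + A2**2 + 2.0*A1*A2*np.cos(dphase))

print("combined m=2 amplitude oscillates between %.2f and %.2f" %
      (envelope.min(), envelope.max()))
print("with period pi/|Op1-Op2| = %.2f Gyr" %
      (np.pi / (abs(Op1 - Op2) * to_rad_per_Gyr)))
```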
On the other hand, despite the fact that the basic form of density wave theory examines the case where the spirals rotate with a unique pattern speed, there are indications, both from observations and from dynamical or N-body models, of the existence of multiple pattern speeds (Sellwood). In a well studied scenario, an inner bar exhibits fast rotation, while the spiral arms rotate more slowly. In fact, the spiral arms may be generated by recurrent instabilities due to the dynamical coupling of the bar with disc modes beyond corotation. The spiral arms in such models recurrently appear and disappear. In N-body simulations it is found that, when spiral arms are present, their pattern speed is close to a resonant relation with the bar's pattern speed. ### Termination of bars - chaotic spiral arms The non-axisymmetric perturbation in barred galaxies is much higher than in normal spiral galaxies (of order $$50\%$$). In fact, near corotation we can find ordered orbits near the Lagrangian points $$L_4,L_5\ ,$$ while near the points $$L_1, L_2$$ we have the interaction of many resonances and large chaos is generated in this region. The existence of large chaos near corotation tends to destroy the bar of a barred galaxy. In fact the orbits near corotation spread in an irregular way and do not support a continuation of the bar beyond corotation. The chaotic domain is larger if the bar is stronger. Another reason for the termination of bars near corotation is that even if there are ordered orbits beyond corotation, these tend to produce density maxima perpendicularly to the bar, and not along the bar. Numerical experiments and observations of real galaxies show that, in fact, the bars terminate a little before corotation. However, in strong bars we observe spiral arms beyond the ends of the bar, starting close to the Lagrangian points $$L_1$$,$$L_2\ .$$ These spiral arms are rather different from the spiral arms of normal galaxies in the inner parts of the galaxies. In fact a systematic study has shown that the orbits of the outer spiral arms are chaotic. These spiral arms are still density waves, because they are not composed always of the same matter. The chaotic orbits produce maxima of density near their apocentra and pericentra, where $$\dot{r}=0\ .$$ The apocentra and pericentra are close to the unstable asymptotic curves (invariant manifolds) of the unstable periodic orbits $$PL_1, PL_2$$ around $$L_1, L_2\ .$$ These manifolds have a trailing spiral form (U,U$$'$$ in Figure 13 a), and on the leading side they form an envelope of the bar (UU,UU$$'\!$$). After longer integration times the unstable manifolds are seen to support a considerable part of the trailing spiral arms ( Figure 13 b ). In fact it has been found in many numerical experiments that the chaotic orbits tend to populate both the trailing spiral arms and an envelope of the bar while the main body of the bar is composed of ordered orbits (Voglis). A particular chaotic orbit supporting the bar and the spiral arms is shown in Figure 13 c. Figure 13: (a) Invariant manifolds from the unstable periodic orbits $$PL_1$$ $$PL_2$$ (S,SS stable, U,UU unstable). The stars approach PL$$_1\ ,$$ PL$$_2$$ along and close to the stable manifolds and they deviate along and close to the unstable manifolds. (b) Long time integration of the unstable invariant manifolds. (c) A chaotic orbit supporting the outer spirals and the envelope of the bar These chaotic spiral arms may last for a considerable fraction of the Hubble time. 
Later on the orbits become more complicated and some orbits escape to infinity. But many chaotic orbits are `sticky', i.e. they remain close to the spiral arms and to the envelope of the bar ( Figure 13 c ) for very long times before escaping from the galaxy. Thus `stickiness', i.e. the tendency of chaotic orbits to remain confined close to invariant structures (islands of stability or invariant manifolds), appears to play a key role in understanding the structures formed in barred galaxies beyond corotation.

## N-body Systems

### N-body simulations

A particular method to explore the dynamics and evolution of a stellar system is by N-body simulations. These simulations are by definition self-consistent because the motions are due to forces generated by the masses themselves. In the case of small stellar systems, like a cluster, we consider a system of N point masses, where N is of the order of $$10^2 - 10^4$$ and we follow the orbits of all these points. In the case of large systems, like galaxies, we take N of the order of $$10^5 - 10^8$$ but not the actual number of stars in a galaxy, which is of the order of $$10^{12}\ .$$ In large systems we use various approximations to speed up the calculations. E.g. we calculate in detail the forces due to nearby stars, but we consider the remote stars as forming a relatively small number of large masses affecting a particular star. The accurate N-body calculations for a given time T require $$N^2$$ calculations of forces, but the approximate methods (like tree or mesh methods) require O(N log N) calculations. Implementation of the so-called `self-consistent field' method (Clutton-Brock) in isolated systems with simple geometry has, finally, led to the production of O(N) algorithms. Similar calculations can be used for the gas dynamics of galaxies, that take into account also the effects of pressure. Such is the SPH (smooth particle hydrodynamics) method, in which the motions of fluid particles, acted upon by both gravitational and non-gravitational forces, are integrated. These studies are mainly aimed at finding the evolution of stellar systems. E.g. we may study the collapse or merging of stellar systems until they form an almost stationary configuration. We may also consider the differences between the evolution of the stellar component and the gas component of a galaxy. Other problems of interest refer to the tidal effects between neighboring galaxies, or the collisions of galaxies and the formation of tails or supermassive central cores in some galaxies. In such studies one considers two different types of relaxation, collisional and collisionless. Collisional relaxation is due to the close approaches of stars and its time scale is the time of relaxation. In a spherical system of stars of mass $$m$$ with average radius $$R$$ and density $$n_0$$ it is $\tag{41} t_{relax}=\sqrt{\frac{n_0}{Gm}}\frac{R^3}{\ln(N/2^{3/2})}$ where N is the total number of stars. The relaxation time is much longer than the "dynamical time" $\tag{42} t_0=\frac{R}{\sqrt{\langle v^2 \rangle }}$ where $$\langle v^2 \rangle$$ is the average of the squares of the velocities. On the other hand the collisionless relaxation takes place in only a few dynamical times. The rate of relaxation increases when the Lyapunov time $$t_L$$ of chaotic orbits (section 2.1) is small. In general, the evolution of an N-body simulation is characterized by the following phases, which evolve in different timescales: (a) Violent relaxation and/or growth of collective instabilities.
These phenomena evolve in a timescale which is of the order of the dynamical time $$\!t_0\ .$$ An abrupt collapse of a protogalaxy or a merging event (e.g. two colliding spiral galaxies forming an elliptical galaxy) is characterized by rapid fluctuations of the gravitational potential which result in a fast dynamical relaxation. The resulting stationary state gives a distribution of velocities with $\tag{43} \langle v^2 \rangle=\frac{GM}{2R}$ where $$M$$ is the total mass of the system. On the other hand, the initial conditions (positions and velocities of particles) may be chosen close to an unstable equilibrium state, in which case the simulation checks the expected fast growth of instabilities and the form of endstates to which the N-body system is led.

(b) Virial equilibrium and/or secular evolution. Simulations of slowly or non-rotating 3D N-body systems may form states which are close to virial equilibrium, i.e. systems satisfying the virial equations (6) and (7). The main interest, in this case, is to examine the types of orbits and of the phase-space structures which support self-consistency all along the simulation. On the other hand, an N-body system may exhibit secular evolution. Two main cases are: i) addition of a central mass in simulations of elliptical galaxies (section 2), ii) bar-halo interaction in barred-spiral galaxies. The latter is an example of a live halo which exchanges angular momentum with the bar due e.g. to dynamical friction. The exchange results in a slowing down of the bar, gradually shifting the positions of resonances and affecting also the spiral structure beyond the bar.

(c) Mixing of chaotic orbits. This takes place over a Lyapunov time $$t_L\ ,$$ and may be driven by small fluctuations of the N-body potential around its average form.

(d) Collisional relaxation. This type of relaxation becomes effective in a long timescale, which gives also an estimate of the time of dissolution of the system (section 2.3).

N-body simulations have become a very important tool for investigations in Galactic Dynamics (see the specialized Scholarpedia article on N-body simulations).

### Violent relaxation

A common result found in many N-body simulations is that during the collapse or merging of stellar systems we have a fast approach to a quasi-stationary equilibrium state. This approach takes place in a timescale of the order of a few dynamical times. Thus, it cannot be attributed to two-body relaxation, whose timescale is much longer (of the order of the collisional relaxation time). An interpretation of this phenomenon was developed in the 60s by the theory of violent relaxation (Lynden-Bell). This theory considers the statistical distribution of the so-called `phase-space elements', i.e. smooth elements of mass whose motion in phase space is governed by the collisionless Boltzmann equation. Then, it can be shown that the rapid time variations of the potential lead to a fast mixing of the phase-space elements. According to the `Lynden-Bell statistics' (which is similar to a Fermi-Dirac statistics but without mass segregation), violent relaxation should lead to an equilibrium coarse-grained distribution function which has the form $\tag{44} \overline{f}=\frac{f_0~e^{-\beta(E-E_0)}}{1+e^{-\beta(E-E_0)}}$ where E is the energy and $$f_0\ ,$$ $$\beta$$ and $$E_0$$ are constants. If $$\overline{f}$$ is small this distribution becomes $\tag{45} \overline{f}=f_0~e^{-\beta(E-E_0)}$ and it looks like a Boltzmann distribution.
However in the Boltzmann distribution, which is due to collisions (encounters), $$\beta$$ is proportional to the masses of the stars $$m\ ,$$ while the Lynden-Bell distribution is collisionless and $$\beta$$ is independent of $$m\ .$$ The theory of violent relaxation has provided a paradigm of application of statistical mechanics in collisionless self-gravitating stellar systems. On the other hand, N-body simulations show that the quantitative predictions of the Lynden-Bell statistics represent well the endstates of only idealized systems with smooth' cores (i.e. ones with a nearly flat density profile in the central parts). In fact, in real galaxies there is a variety of physical mechanisms leading to strong deviations from Lynden-Bell statistics. Examples are: (i) Partial mixing, i.e. the mixing of particles (or of the phase space elements') is not complete. Obstructions to mixing are posed by the existence of additional constraints in the form of local approximately conserved quantities which restrict the allowable motions in phase space. In addition, the variations of the potential may decay well before there was sufficient time for mixing to become complete (this turns out to be true in particular in the outer parts of galaxies). (ii) Large local variations of the initial phase space density before relaxation. For example, when a massive galaxy merges with a smaller satellite galaxy, a substantial fragment of the smaller galaxy may be driven to the center of the more massive galaxy without any effective mixing taking place. Phenomena like the above cause an effect of memory of the initial conditions (the conditions before relaxation) in the final state. The theory of partially relaxed systems represents an important open challenge for statistical mechanics in general. Various attempts towards such a theory have been proposed over the years, but their usefulness in comparison with N-body experiments is still unclear (see Efthymiopoulos et al. 2007 for a review). ### Collective instabilities Another topic studied via N-body simulations is collective instabilities in stellar systems. These are caused when a system satisfies some instability criterion, and lead to abrupt redistribution of the mass and/or velocities within a system. Important cases are: (a) Axisymmetric instabilities: an equilibrium distribution of matter in a galactic disc is prone to axisymmetric instabilities when the velocity dispersion in the radial direction is small compared to the circular velocity. The threshold to instability is provided by Toomre's Q-parameter: $\tag{46} Q={\kappa\sigma_R\over 3.36G\Sigma}$ where $$\kappa$$ is the epicyclic frequency, $$\sigma_R$$ is the velocity dispersion in the radial direction, and $$\Sigma$$ is the surface density. Instability occurs if $$Q<1\ .$$ (b) Non-axisymmetric instabilities in disc galaxies: depending on the form of the radial profile of the density distribution, disc galaxies may develop a non-axisymmetric instability leading to the formation of bars. This effect was observed in N-body simulations (Hohl). The threshold of instability is given by the so-called Ostriker-Peebles criterion: $\tag{47} {T\over W}> 0.14$ where T is the rotational kinetic energy and W the potential energy of the disc. Transient spiral arms and/or ejection of material from the galaxy may occur after the onset of this type of instability. (c) Radial instability in spherical systems: these may lead to a collapse or a number of radial pulsations of a stellar system. 
The main criteria for the stability of spherical systems have been given by Antonov. Briefly, a condition for stability with respect to both radial and non-radial perturbations is that the distribution function $$f$$ should be a decreasing function of the energy E. However, a system develops a radial instability if $\tag{48} {d^3\rho\over dV^3}>0$ over a radial domain, where $$\rho$$ and $$V$$ are the density and potential of the spherical system respectively. (d) Radial orbit instability. This is manifested in spherical galaxies, when the velocity dispersion in the radial direction of motion is much smaller than in the transverse direction. The radial orbit instability leads to the loss of spherical symmetry and the formation of triaxial galaxies. Various thresholds for its onset have been proposed in the literature (Polyachenko and Shukhman, Merritt, Palmer and Papaloizou). One such criterion (Polyachenko and Shukhman) is $\tag{49} {2T_r\over T_t}>1.75$ where $$T_r$$ and $$T_t$$ are the kinetic energy in the radial and transverse directions respectively. The radial orbit instability is one of the earliest types of instability observed in N-body simulations (Henon), and it is believed to affect mainly the dark haloes of galaxies by turning their shape from spherical to triaxial. (e) Warp or bending instabilities. These are analyzed in terms of the growth of bending modes, i.e. collective vertical oscillations of a thin disc. N-body simulations have also been used in the study of live haloes (bar-halo interaction) and of the formation and secular evolution of galaxies. Finally the N-body simulations can be viewed as providing an experimental method to check the theoretical considerations in galactic dynamics. ## Further reading Bertin, G. and Lin, C.C.: 1996, Spiral structure in galaxies: a density wave theory, MIT Press, Cambridge, Massachusetts. Bertin, G.: 2000, Dynamics of Galaxies, Cambridge University Press, Cambridge. Binney, J. and Tremaine, S.: 2008, Galactic Dynamics, second edition, Princeton University Press, New Jersey. Boccaletti, D. and Pucacco, G.: 1996, Theory of Orbits, Springer, Berlin. Combes, F., Boissé, P., Mazure, A., Blanchard, A.: 2001, Galaxies and Cosmology, Springer, Berlin. Contopoulos, G., 2004, Order and Chaos in Dynamical Astronomy, Springer, New York. Contopoulos, G. and Patsis, P.A. (Eds): 2007, Chaos in Astronomy, Astrophysics and Space Science Proceedings, Springer, Berlin. Efthymiopoulos, C., Voglis N., and Kalapotharakos, C., 2007, Special Features of Galactic Dynamics, Lecture Notes in Physics 729, 297. Fridman A.M. and Polyachenko V.L.: 1984, Physics of Gravitating Systems, Springer, Berlin. Palmer, P.: 1995, Instabilities in Collisionless Stellar Systems, Cambridge University Press, Cambridge.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 49, "mathjax_display_tex": 340, "mathjax_asciimath": 4, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9065753221511841, "perplexity_flag": "middle"}
http://mathhelpforum.com/differential-geometry/202742-b-c-vectors-how-prove-vector-b-c-given-b-c-axb-axc.html
# Thread: 1. ## a,b,c are vectors. How to prove vector b = c, given a.b = a.c and axb = axc? a,b,c are vectors. How to prove vector b = c, given a.b = a.c and axb = axc?

2. ## Re: a,b,c are vectors. How to prove vector b = c, given a.b = a.c and axb = axc? I think we need an additional assumption that $\mathbf{a} \ne \mathbf{0}$; otherwise the claim is not true. So assuming $\mathbf{a} \ne \mathbf{0}$, as a first step let's see if we can show that if $\mathbf{a} \times \mathbf{v} = \mathbf{0}$ and $\mathbf{a} \cdot \mathbf{v} = 0$ then $\mathbf{v} = \mathbf{0}$. We have $\mathbf{0} = \mathbf{v} \times (\mathbf{a} \times \mathbf{v}) = \mathbf{a} (\mathbf{v} \cdot \mathbf{v}) - \mathbf{v} (\mathbf{a} \cdot \mathbf{v}) = \mathbf{a} (\mathbf{v} \cdot \mathbf{v}) - \mathbf{v} (0) = \mathbf{a} (\mathbf{v} \cdot \mathbf{v})$ so $\mathbf{v} \cdot \mathbf{v} = 0$, hence $\mathbf{v} = \mathbf{0}$. Finally, to show that if $\mathbf{a} \cdot \mathbf{b} = \mathbf{a} \cdot \mathbf{c}$ and $\mathbf{a} \times \mathbf{b} = \mathbf{a} \times \mathbf{c}$ then $\mathbf{b} = \mathbf{c}$, let $\mathbf{v} = \mathbf{b} - \mathbf{c}$ above: the hypotheses give $\mathbf{a} \cdot \mathbf{v} = 0$ and $\mathbf{a} \times \mathbf{v} = \mathbf{0}$, so $\mathbf{v} = \mathbf{0}$, i.e. $\mathbf{b} = \mathbf{c}$.
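As a quick sanity check of the algebra above, here is a small sympy sketch (my addition, not part of the thread) that symbolically verifies the triple-product expansion $\mathbf{v}\times(\mathbf{a}\times\mathbf{v}) = \mathbf{a}(\mathbf{v}\cdot\mathbf{v}) - \mathbf{v}(\mathbf{a}\cdot\mathbf{v})$ used in the answer.

```python
import sympy as sp

# Symbolic check of v x (a x v) = a (v.v) - v (a.v) in R^3.
ax, ay, az, vx, vy, vz = sp.symbols('a_x a_y a_z v_x v_y v_z', real=True)
a = sp.Matrix([ax, ay, az])
v = sp.Matrix([vx, vy, vz])

lhs = v.cross(a.cross(v))
rhs = a * v.dot(v) - v * a.dot(v)
print(sp.simplify(lhs - rhs))   # Matrix([[0], [0], [0]])

# With v = b - c: a.b = a.c gives a.v = 0 and a x b = a x c gives a x v = 0,
# so the identity forces a (v.v) = 0, hence v = 0 (i.e. b = c) whenever a != 0.
```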
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 12, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9043065905570984, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/64377/birthday-attack-problem-calculate-exact-numbers?answertab=oldest
# Birthday attack/problem, calculate exact numbers? [duplicate]

Possible Duplicate: Birthday-coverage problem

An example of what I wish to do is the following: http://stackoverflow.com/questions/4681913/substr-md5-collision/4785456#4785456

How would I calculate how many people would be required, as in the link above, to reach 50% or 0.001% or n% probability of collision exactly? I am able to calculate the likelihood of a collision in, say, a hash with $1-e^{-n^2/(2\cdot 10^6)}$, where $10^6$ corresponds to six numerical digits from zero to nine. However, I would have to guess a lot of times before I got the exact number of people it would take to reach exactly 50%, which may be a fraction (i.e. 20.2 people). How would I be able to find this?

- Are you assuming that birthdates are uniformly-distributed? There is data that leads to reasonably questioning this belief/assumption. – gary Sep 14 '11 at 4:29

## marked as duplicate by Qiaochu Yuan Sep 14 '11 at 15:38

This question has been asked before and already has an answer. If those answers do not fully address your question, please ask a new question.

## 1 Answer

I'm somewhat confused by the question because it contains the word "exact" four times but you suggest calculating the probability of a collision using a relatively simple approximation. For this answer, I'll assume that you're aware that there are better approximations for this probability, and of the various answers you get by searching for "birthday" on this site, and that your question is only about calculating $n$ given $1-\exp(-n^2/(2k))$. This you can do by solving for $n$ as follows: $$p=1-\mathrm e^{-n^2/2k}\;,$$ $$\mathrm e^{-n^2/2k}=1-p\;,$$ $$-\frac{n^2}{2k}=\log(1-p)\;,$$ $$n^2=-2k\log(1-p)\;,$$ $$n=\sqrt{-2k\log(1-p)}\;,$$ where $\log$ is the natural logarithm, i.e. the logarithm to base $\mathrm e$. (Note that $\log(1-p)<0$ for $0<p<1$, so the quantity under the square root is positive.)

- The ln function is great, I never thought it could be this easy. Thank you for your steps. – Donaldt Fourier Sep 16 '11 at 1:16
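A small Python sketch of the final formula (my addition, not part of the answer; $k=10^6$ matches the six-digit hash example from the question):

```python
from math import log, sqrt

def people_needed(p, k):
    """Real-valued n with 1 - exp(-n^2/(2k)) = p, using the approximation from
    the question; log(1-p) < 0 for 0 < p < 1, so the radicand is positive."""
    return sqrt(-2 * k * log(1 - p))

k = 10**6                     # e.g. six decimal digits of a hash
for p in (0.5, 0.001):
    print(f"p = {p}: n \u2248 {people_needed(p, k):.1f}")

# p = 0.5 gives n ~ 1177.4, so round up to 1178 draws for at least a 50% chance.
```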
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 6, "mathjax_display_tex": 5, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9513722658157349, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/88066?sort=votes
## Polygons uniquely inducing arrangements

A beautiful, relatively recent result is that every simple arrangement $\cal{A}$ of $n$ lines in the plane is induced by a simple $n$-gon $P$. In a simple arrangement, every pair of lines intersect in a point, and no three lines intersect in a common point. A polygon $P$ induces $\cal{A}$ if $\cal{A}$ is obtained by extending its $n$ edges to lines. Thus $P$ "visits" each line of $\cal{A}$ exactly once; it is a Hamiltonian-like cycle. This is proved in the paper, "On Inducing Polygons and Related Problems." Eyal Ackerman, Rom Pinchasi, Ludmila Scharf, Marc Scherfenberg. Algorithms-ESA 2009. Lecture Notes in Computer Science, Volume 5757, 2009, pp. 47-58. (PDF link)

Two natural questions occur to me, neither of which is addressed in the paper:

Q1. Which arrangements $\cal{A}$, $n>3$, have a unique inducing polygon?

Q2. Does the theorem extend to $\mathbb{R}^3$, or higher dimensions? I.e., does every simple arrangement of $n$ planes have an inducing simple polyhedron of $n$ faces?

It could be the answers are relatively easy: none and no respectively...? If anyone sees quick arguments, I'd appreciate hearing them. Thanks!

Addendum. Here is an attempt to illustrate Gjergji Zaimi's idea, as I interpret it. The hexagon induces the arrangement of lines in the horizontal plane, and the polyhedron "attached" to the hexagon would be the intersection of the two tetrahedra.

- 2 For Q2, if I'm not mistaken, you can take the line arrangement induced on one of the planes, pick a simple inducing polygon there and then find the smallest polyhedron attached to this polygon. – Gjergji Zaimi Feb 10 2012 at 2:58
1 The proof of the paper by Ackerman, Pinchasi, Scharf and Scherfenberg shows also that there exists a homologically non-trivial Hamiltonian cycle for simple arrangements of the projective plane. – Roland Bacher Feb 10 2012 at 12:45
@Gjergji: Very nice idea! Can you expand on "smallest"? It seems if your idea works, it settles the question in any dimension. – Joseph O'Rourke Feb 10 2012 at 13:54
You had a picture in your gallery of something I call a "Klingon triangle"; it was by Jeff Erickson (Hi Jeff!) and was a counterexample to some result about one polygon that could not be transformed to another using certain motions that would create interesting looking prisms. That might induce a unique arrangement, but the pentagon does not because you have four lines that can "flex" around a middle vertex. Gerhard "Ask Me About System Design" Paseman, 2012.02.10 – Gerhard Paseman Feb 10 2012 at 15:05
1 There are no simple polyhedrons with exactly 5 sides, so any simple arrangement of 5 planes will be a counterexample for Q2. – Zsbán Ambrus May 4 2012 at 13:42

## 1 Answer

Q1: The only arrangement with a unique inducing polygon is the arrangement with three lines. In fact it follows from the first proof in the paper you cite that the number of inducing polygons is $\geq \lfloor\frac{n}{2}\rfloor$. This is because one can pick a line so that every intersection lies on the same half-plane defined by this line. Then one can pick an arbitrary intersection point $P$ on this line and produce a path which visits every line once. This path will also lie on the same half-plane so their algorithm produces an inducing polygon with $P$ as a vertex. But $P$ was arbitrary. -
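To make the notion of the arrangement induced by a polygon concrete, here is a short Python sketch (my addition, not from the thread). It extends each edge of a polygon to a line and checks that the resulting arrangement is simple, i.e. that all $\binom{n}{2}$ pairwise intersection points exist and are distinct; the slightly irregular pentagon used is an arbitrary example.

```python
from itertools import combinations

def edge_lines(poly):
    """Return each edge of the polygon as a line a*x + b*y = c."""
    lines = []
    n = len(poly)
    for i in range(n):
        (x1, y1), (x2, y2) = poly[i], poly[(i + 1) % n]
        a, b = y2 - y1, x1 - x2
        lines.append((a, b, a * x1 + b * y1))
    return lines

def is_simple_arrangement(lines, tol=1e-9):
    """Every pair of lines meets in one point and no three lines are concurrent."""
    points = []
    for (a1, b1, c1), (a2, b2, c2) in combinations(lines, 2):
        det = a1 * b2 - a2 * b1
        if abs(det) < tol:                      # parallel pair: not simple
            return False
        points.append(((b2 * c1 - b1 * c2) / det, (a1 * c2 - a2 * c1) / det))
    # distinct intersection points <=> no three lines concurrent
    for p, q in combinations(points, 2):
        if abs(p[0] - q[0]) < tol and abs(p[1] - q[1]) < tol:
            return False
    return True

pentagon = [(0, 0), (4, 1), (5, 4), (2, 6), (-1, 3)]   # hypothetical simple 5-gon
print(is_simple_arrangement(edge_lines(pentagon)))      # True: its 5 edge lines give C(5,2)=10 points
```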
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 19, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9120215773582458, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/318614/composition-of-continuous-functions/318616
# composition of continuous functions

I was wondering: if a function $f:[a,b]\rightarrow[c,d]$ is continuous, and $g:[c,d]\rightarrow\mathbb{R}$ is continuous, does it necessarily imply that $g\circ f$ is continuous? Are there counterexamples? What is the necessary and sufficient condition for $g\circ f$ to be continuous? This is not HWQ. I am just wondering if that is possible.

- 1 I don't understand your title – Cortizol Mar 2 at 13:00
1 Yes. The composition of continuous functions is necessarily continuous. – tetori Mar 2 at 13:07
sorry, you said RS integrable and confused me. – mezhang Mar 2 at 13:14
1 I give two proofs below. One by an $\varepsilon$-$\delta$ argument and one by the characterization of continuity that says inverse-images of open sets are open. – Michael Hardy Mar 2 at 13:22

## 4 Answers

With the sequence definition of continuity it is obvious that $g\circ f$ is continuous, because $$\lim_{n\rightarrow \infty} g(f(x_n))=g(\lim_{n\rightarrow \infty} f(x_n)) = g(f(\lim_{n\rightarrow \infty} x_n))$$ because $f$ and $g$ are continuous. It is hard to give a condition that is necessary for the composition of functions to be continuous: taking $$D(x)=\left\{ \begin{array}{rl} 0 & x\in \mathbb{R}\setminus \mathbb{Q}\\ 1 & x \in \mathbb{Q}\\ \end{array} \right.$$ which is discontinuous at every $x\in \mathbb{R}$, we nevertheless have that $D(D(x))=1$ is $C^\infty$. ($C^\infty$ means the function is arbitrarily often continuously differentiable.)

- One definition of continuity says $f$ is everywhere continuous if and only if for every open set $G$, the set $$\{ x\in\text{domain} : f(x) \in G\}$$ is open. So look at $$\{x : g(f(x))\in G\} = \{ x : f(x) \in \{ w : g(w)\in G\} \} = \{ x : f(x) \in H\},$$ where $H=\{ w : g(w)\in G\}$. The set $H$ is open because $g$ is continuous, and the last set mentioned above is open because $H$ is open. Therefore the first set mentioned on the line above is open; therefore $g\circ f$ is continuous.

There's also the $\varepsilon$-$\delta$ definition of continuity, which readily defines the notion of continuity at a point $x$ in the domain. Given $\varepsilon>0$, we seek $\delta>0$ so small that if the distance from $x$ to $y$ is less than $\delta$, then the distance from $g(f(x))$ to $g(f(y))$ is less than $\varepsilon$. Given $\varepsilon>0$, the continuity of $g$ at $f(x)$ entails that there exists $\eta>0$ such that whenever the distance from $f(x)$ to $w$ is less than $\eta$, then the distance from $g(f(x))$ to $g(w)$ is less than $\varepsilon$. Next, the continuity of $f$ at $x$ entails that there exists $\delta>0$ such that if the distance from $x$ to $y$ is less than $\delta$, then the distance from $f(x)$ to $f(y)$ is less than $\eta$. The desired conclusion follows. So if $f$ is continuous at $x$ and $g$ is continuous at $f(x)$, then $g\circ f$ is continuous at $x$.

- thank you for detailed answer. – ήλιος Mar 2 at 13:26

Yes it is continuous. The composition of two continuous functions is always continuous. In this case you can see it by the sequential definition of continuity. $$x_n\rightarrow x \Rightarrow f(x_n)\rightarrow f(x) \Rightarrow g(f(x_n))\rightarrow g(f(x))$$

- Here's the proof using the $\varepsilon - \delta$ definition: Fix $\varepsilon > 0$.
By the continuity of $g$ on $[c,d]$, which contains $f([a,b])$, for each point $q\in[a,b]$ there exists $\gamma>0$ such that $d(g(y),g(f(q))) < \varepsilon$ whenever $d(y,f(q)) < \gamma$ with $y\in [c,d]$. Now since $f$ is continuous there exists $\delta > 0$ such that $d(f(x),f(q))< \gamma$ when $d(x,q)< \delta$, where $x\in [a,b]$. Let $h = g \circ f$; then from the above it follows that $d(h(x),h(q))=d(g(f(x)),g(f(q))) <\varepsilon$ when $d(x,q)< \delta$. Hence $h = g \circ f$ is continuous. -
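As a small numerical companion to the proofs above (my addition, not part of the thread), here is a Python sketch of the sequence definition of continuity for one particular continuous $f$ and $g$; the functions and the sequence $x_n = x + 1/n$ are arbitrary illustrative choices.

```python
import math

# For continuous f and g, x_n -> x forces g(f(x_n)) -> g(f(x)).
f = lambda x: math.sqrt(x + 2)      # continuous on [a,b] = [-1, 2], values in [1, 2]
g = lambda y: math.sin(y) / y       # continuous on [c,d] = [1, 2]

x = 1.0
for n in (1, 10, 100, 1000, 10000):
    x_n = x + 1.0 / n                       # a sequence converging to x
    print(n, abs(g(f(x_n)) - g(f(x))))      # the differences shrink toward 0
```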
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 75, "mathjax_display_tex": 5, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9639285206794739, "perplexity_flag": "head"}
http://mathoverflow.net/questions/7776/universal-property-for-collection-of-epimorphisms
## Universal property for collection of epimorphisms

Question: Is there a nice universal property which captures the notion of "collection of all epimorphisms out of a given object"? Of course I will have to consider two epimorphisms $X \rightarrow Y$ the same if they are isomorphic over $X$. The answer to the dual question is yes, at least in a topos: the power object $P(X)=\Omega^X$, where $\Omega$ is the subobject classifier, can be thought of as "the collection of all subobjects of X". The universal property is just the property for exponentials.

Background (not strictly necessary for the question): I have been reading Sheaves in Geometry and Logic by Mac Lane and Moerdijk. Their definition of an elementary topos is this: a category with pullbacks, a terminal object (i.e. all finite limits), a subobject classifier, and a power object for every object. They construct all other exponential objects from these axioms. The construction they use is to basically consider the "collection" of all graphs of morphisms. This is just the standard construction in set theory souped up to toposes. This construction agrees with the set theoretic convention that a function should be regarded as a set of ordered pairs, i.e. if $f:A \rightarrow B$, then the set theorist will define $f$ as the image of the map $A \rightarrowtail A \times B$ induced by $1_A$ and $f$ (this may be the most convoluted sentence I have ever written). Why not define functions dually? There is also a map $A+B \twoheadrightarrow B$ induced by $1_B$ and $f$. Then we could define $f$ as the partition of $A$ induced by this epimorphism, which seems like a perfectly nice way to define functions. I was wondering if this construction could be used to construct exponential objects if I was given finite colimits and some kind of epimorphism classifier, or collection of epimorphisms out of a given object.

Comment: if it turns out that there is no really nice answer to this question, do you think that has bearing on the fact that the formula for the number of subsets of a set is easy ($2^{|X|}$) but the formula for the number of partitions of a set is relatively hard (http://en.wikipedia.org/wiki/Partition_of_a_set)?

-
- This seems like it answers my question, but I will have to take some time to really understand it. Do you think that it would be possible to use epimorphism classifiers in this sense instead of power objects in the definition of an elementary topos, and construct power objects using cographs instead of graphs? I will have to play more with that. – Steven Gubkin Dec 4 2009 at 18:38 So I think that I almost understand the definition you gave. Could you clarify my intuition a bit though? Let's just think about the category of sets. Fix a set X. Then I understand you could take E to be the set of equivalence relations on X. What would $p$ and $Y$ correspond to in this case? – Steven Gubkin Dec 4 2009 at 19:21 In sets, yes, E would be the set of equivalence relations on X. Y would be the set of pairs (R,z) where R is an equivalence relation on X and z is an equivalence class of R. The map Y --> E is the obvious projection. The map p takes (R,x) to (R,[x]) where [x] is the equivalence class of x under R. – Mike Shulman Dec 4 2009 at 20:37 My gut feeling is that one won't be able to reconstruct power objects from epimorphism classifiers, but I don't have a counterexample in mind. It does seem likely that one could construct exponentials from epimorphism classifiers in the way you describe, at least as long as the category has enough internal logic (e.g. is a positive Heyting category). – Mike Shulman Dec 4 2009 at 20:41 Thank you! This clarifies things a lot. I will try to construct exponentials in this way. This is really just homework I am giving myself so that I know I understand what is going on in Mac Lane and Moerdijk's book, but I am happy that a deeper understanding of equivalence relations has come out of it! – Steven Gubkin Dec 4 2009 at 21:45
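Mike Shulman's description of $E$, $Y$ and $p$ in the category of sets is easy to enumerate for a small $X$. The sketch below (my addition, using a hypothetical three-element $X$) lists the equivalence relations on $X$ as partitions, forms $Y$ as pairs (relation, equivalence class), and implements the map $p$ sending $(R,x)$ to $(R,[x]_R)$.

```python
def partitions(xs):
    """All set partitions of the list xs (each partition is a list of blocks)."""
    if not xs:
        yield []
        return
    first, rest = xs[0], xs[1:]
    for smaller in partitions(rest):
        # put `first` into an existing block, or into a new singleton block
        for i, block in enumerate(smaller):
            yield smaller[:i] + [[first] + block] + smaller[i + 1:]
        yield [[first]] + smaller

X = [0, 1, 2]
E = [tuple(map(tuple, P)) for P in partitions(X)]   # equivalence relations on X, as partitions
Y = [(R, block) for R in E for block in R]          # pairs (relation, equivalence class)

def p(R, x):
    """p : E x X -> Y over E, sending (R, x) to (R, [x]_R)."""
    return (R, next(block for block in R if x in block))

print(len(E), "equivalence relations on a 3-element set")   # 5, the Bell number B_3
print(p(E[0], 2))
```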
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 39, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9429193139076233, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/158758/what-is-the-result-of-sum-sum-limits-i-0n-2i/158767
# What is the result of sum $\sum\limits_{i=0}^n 2^i$ [duplicate] Possible Duplicate: the sum of powers of $2$ between $2^0$ and $2^n$ What is the result of $$2^0 + 2^1 + 2^2 + \cdots + 2^{n-1} + 2^n\ ?$$ Is there a formula on this? and how to prove the formula? (It is actually to compute the time complexity of a Fibonacci recursive method.) - 8 Yes, there is a formula. Consider what the result looks like in binary notation. What happens if you add 1 to the result (still in binary)? – Henning Makholm Jun 15 '12 at 17:06 You know how a geometric series works? – J. M. Jun 15 '12 at 17:09 $2^{n+1}-1$ is correct. If you need to produce a rigorous proof, I suggest induction. – Henning Makholm Jun 15 '12 at 17:17 @null Isn't the time complexity of a Fibonacci recursive method given in terms of the Fibonacci numbers? – talmid Jun 15 '12 at 17:29 1 I am extremely amused as to why you didn't vote to close yourself @PatrickDaSilva! – Gigili Jun 16 '12 at 0:02 show 6 more comments ## marked as duplicate by user17762, Martin Sleziak, Henning Makholm, Peter Tamaroff, Asaf KaragilaJun 15 '12 at 21:05 This question has been asked before and already has an answer. If those answers do not fully address your question, please ask a new question. ## 7 Answers Let $S = 2^0 + 2^1 + 2^2 + \cdots + 2^{n}$. Then $2S = 2^1 + 2^2 + 2^3 + \cdots + 2^{n} + 2^{n+1}$. Then $$\begin{align*} S = 2S - S &= & & 2^1 &+& 2^2 & + & 2^3 & + & 2^4 &+&\cdots &+& 2^{n} &+& 2^{n+1}\\ && -2^0 -& 2^1 & - & 2^2 & - & 2^3 & - & 2^4 & - & \cdots & - & 2^n \end{align*}$$ How much is that? - 2 It's at least +1! – Patrick Da Silva Jun 15 '12 at 17:27 Might be overkill, but there is a well known identity for sums of the form $\sum_{i=0}^n x^i$ where $x$ is not 1. $$\sum_{i = 0}^n x^i = \frac{1- x^{n+1}}{1-x}$$ Now plug in 2 and you have what you seek. This identity can easily be proven by induction. - 3 I wouldn't call this "overkill", but rather "standard approach". The proof of this identity is essentially the standard approach when one tries to prove this identity for $x = 2$. – Patrick Da Silva Jun 15 '12 at 17:32 Where is that from? I 'invented' it in high school, and could never find a source... – Alex Feinman Jun 15 '12 at 18:26 This is usually an standard exercise in your first `proof' course at the university. – Dimitri Surinx Jun 15 '12 at 19:30 I thought I might post a little more elaborate version of Hennig's hint (see his cooment). $$\begin{align} 1&=2^0\\ 10&=2^1\\ 100&=2^2\\ 1000&=2^3\\ \vdots&=\vdots\\ 10\dots0&=2^n\\ \hline 11\dots1&=2^0+2^1+\dots+2^n\\ 1&=1\\ \hline 100\dots0&=2^0+2^1+\dots+2^n+1=2^{n+1} \end{align}$$ Hence $2^0+2^1+\dots+2^n=2^{n+1}-1$ - Hint: Consider the sequence of partial sums $(a_n) = 2^0 + \cdots + 2^n$. Add one to each term. Do you notice a pattern? For example: $a_0 + 1 = 2^0 + 1 = \dots$ $a_1 + 1 = 2^0 + 2^1 + 1 = \dots$ $a_2 + 1 = 2^0 + 2^1 + 2^2 + 1 = \dots$ Can you guess what a general formula for $a_n + 1$ might be? Then what is $a_n$? An easy way to prove this would be by induction. Since you already know $a_n = 2^{n+1} - 1$, the proof by induction goes like this: • Base case: with $n=0$: $a_0 = 2^0 = 1$, which is $2^{0+1}-1$, so the formula holds for the base case. • Inductive step: if $a_n = 2^{n+1}-1$, then $a_{n+1} = a_n + 2^{n+1}$ (because to get to the next term in the sequence you just add the next power of $2$); by the inductive hypothesis, this is equal to $(2^{n+1}-1) + 2^{n+1} = 2 \times 2^{n+1} - 1 = 2^{(n+1)+1} - 1$, which is the formula we conjectured for $a_{n+1}$. 
With this, we've shown $a_n = 2^{n+1}-1$ for all $n \in \mathbb{N}_0$. - How much is a direct summation worth? $$\begin{align*} 1 + \sum_{i=0}^n 2^i &= 1 + (2^0 + 2^1 + 2^2 + \cdots + 2^n)\\ &= (2^0 + 2^0) + (2^1 + 2^2 + \cdots + 2^n)\\ &= 2^1 + (2^1 + 2^2 + \cdots + 2^n)\\ &= (2^1 + 2^1) + (2^2 + \cdots + 2^n)\\ &= 2^2 + (2^2 + \cdots + 2^n)\\ \vdots &= \ddots\\ &= 2^n + (2^n)\\ &= 2^{n+1}. \end{align*}$$ Hence, $\displaystyle \sum_{i=0}^n 2^i = 2^{n+1} - 1.$ - 1 It's at least worth +1! – Patrick Da Silva Jun 15 '12 at 20:55 Let us take a particular example that is large enough to illustrate the general situation. Concrete experience should precede the abstract. Let $n=8$. We want to show that $2^0+2^1+2^2+\cdots +2^8=2^9-1$. We could add up on a calculator, and verify that the result holds for $n=8$. However, we would not learn much during the process. We will instead look at the sum written backwards, so at $$2^8+2^7+2^6+2^5+2^4+2^3+2^2+2^1+2^0.$$ A kangaroo is $2^9$ feet from her beloved $B$. She takes a giant leap of $2^8$ feet. Now she is $2^8$ feet from $B$. She takes a leap of $2^7$ feet. Now she is $2^7$ feet from $B$. She takes a leap of $2^6$ feet. And so on. After a while she is $2^1$ feet from $B$, and takes a leap of $2^0$ feet, leaving her $2^0$ feet from $B$. The total distance she has covered is $2^8+2^7+2^6+\cdots+2^0$. It leaves her $2^0$ feet from $B$, and therefore $$2^8+2^7+2^6+\cdots+2^0+2^0=2^9.$$ Since $2^0=1$, we obtain by subtraction that $2^8+2^7+\cdots +2^0=2^9-1$. We can write out the same reasoning without the kangaroo. Note that $2^0+2^0=2^1$, $2^1+2^1=2^2$, $2^2+2^2=2^3$, and so on until $2^8+2^8=2^9$. Therefore $$(2^0+2^0)+2^1+2^2+2^3+2^4+\cdots +2^8=2^9.$$ Subtract the front $2^0$ from the left side, and $2^0$, which is $1$, from the right side, and we get our result. - I mean, seriously? A kangaroo? Gosh. $2^8$ feet leaps? Man... +1 for the laugh – Patrick Da Silva Jun 15 '12 at 17:34 1 @PatrickDaSilva: Yes, seriously. Someone reading it with attention may be able to reconstruct the idea a year from now. – André Nicolas Jun 15 '12 at 17:39 1 @AndréNicolas "A Hulk is $2^9$ feet from her beloved..." would be more interesting. +1 – Peter Tamaroff Jun 15 '12 at 18:40 @AndreNicolas : You seem to like this pedagogic approach a lot. I respect you for that =) sometimes people forget that you need to do mathematics to have fun! When there's no fun, there's no theorem. – Patrick Da Silva Jun 15 '12 at 20:54 This is called as a Geometric progression. YES there is a formula. Refer this: http://en.wikipedia.org/wiki/Geometric_progression The answer for your problem is 2^(n-1). You want the proof refer the above link. Simple and excellent - Its 2^(n+1)....my bad. – george Jun 15 '12 at 21:03 1 Note that the sum has exactly one odd number and therefore the sum cannot be an odd number. So your answers $2^{n-1},2^{n+1}$ are wrong. – Asaf Karagila Jun 15 '12 at 21:07
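Since the answers all arrive at the closed form $2^0+2^1+\cdots+2^n = 2^{n+1}-1$, here is a tiny Python check (my addition, not from the thread) that also mirrors the binary-notation argument from the comments.

```python
# Quick check of the closed form 2^0 + 2^1 + ... + 2^n = 2^(n+1) - 1.
for n in range(0, 11):
    assert sum(2**i for i in range(n + 1)) == 2**(n + 1) - 1
print("verified for n = 0..10")

# The binary-notation argument in one line: a string of n+1 ones equals 2^(n+1) - 1.
n = 7
print(int("1" * (n + 1), 2), 2**(n + 1) - 1)   # both print 255
```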
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 61, "mathjax_display_tex": 8, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.938536524772644, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/72922/cohomology-of-locally-indicable-groups
# Cohomology of Locally Indicable groups Let $G$ be a locally indicable group (i.e. there is a nontrivial homomorphism from $G$ to the real additive group $(\mathbb{R},+)$) and let $l^2(G)$ be the Hilbert space with orthonormal basis $G$. Is it true that $H^1(G,l^2(G))$ does not vanish? - 1 I think locally indicable generally means that every finitely generated subgroup surjects onto the integers. – Mustafa Gokhan Benli Oct 15 '11 at 23:05 Yes! You are right Mustafa! For every finitely generated subgroup of G there is a nontrivial homomorphism to (R,+). – Mahdi Teymuri Garakani Oct 15 '11 at 23:21
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 6, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8933143615722656, "perplexity_flag": "head"}
http://ams.org/bookstore?fn=20&arg1=tb-aa&ikey=GSM-131
Lie Superalgebras and Enveloping Algebras
Ian M. Musson, University of Wisconsin, Milwaukee, WI

Graduate Studies in Mathematics, Volume 131
2012; 488 pp; hardcover
ISBN-10: 0-8218-6867-5
ISBN-13: 978-0-8218-6867-6
List Price: US\$87
Member Price: US\$69.60
Order Code: GSM/131

See also: Five Lectures on Supersymmetry - Daniel S Freed; Enveloping Algebras - Jacques Dixmier; Finite Dimensional Algebras and Quantum Groups - Bangming Deng, Jie Du, Brian Parshall and Jianpan Wang

Lie superalgebras are a natural generalization of Lie algebras, having applications in geometry, number theory, gauge field theory, and string theory. This book develops the theory of Lie superalgebras, their enveloping algebras, and their representations.

The book begins with five chapters on the basic properties of Lie superalgebras, including explicit constructions for all the classical simple Lie superalgebras. Borel subalgebras, which are more subtle in this setting, are studied and described. Contragredient Lie superalgebras are introduced, allowing a unified approach to several results, in particular to the existence of an invariant bilinear form on $$\mathfrak{g}$$.

The enveloping algebra of a finite dimensional Lie superalgebra is studied as an extension of the enveloping algebra of the even part of the superalgebra. By developing general methods for studying such extensions, important information on the algebraic structure is obtained, particularly with regard to primitive ideals. Fundamental results, such as the Poincaré-Birkhoff-Witt Theorem, are established.

Representations of Lie superalgebras provide valuable tools for understanding the algebras themselves, as well as being of primary interest in applications to other fields. Two important classes of representations are the Verma modules and the finite dimensional representations. The fundamental results here include the Jantzen filtration, the Harish-Chandra homomorphism, the Šapovalov determinant, supersymmetric polynomials, and Schur-Weyl duality. Using these tools, the center can be explicitly described in the general linear and orthosymplectic cases.

In an effort to make the presentation as self-contained as possible, some background material is included on Lie theory, ring theory, Hopf algebras, and combinatorics.

Readership: Graduate students interested in Lie algebras, Lie superalgebras, quantum groups, string theory, and mathematical physics.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8471616506576538, "perplexity_flag": "middle"}
http://en.wikipedia.org/wiki/Bernoulli's_principle
Bernoulli's principle This article is about Bernoulli's principle and Bernoulli's equation in fluid dynamics. For Bernoulli's theorem in probability, see law of large numbers. For an unrelated topic in ordinary differential equations, see Bernoulli differential equation. A flow of air into a venturi meter. The kinetic energy increases at the expense of the fluid pressure, as shown by the difference in height of the two columns of water. Continuum mechanics Laws Scientists In fluid dynamics, Bernoulli's principle states that for an inviscid flow, an increase in the speed of the fluid occurs simultaneously with a decrease in pressure or a decrease in the fluid's potential energy.[1][2] Bernoulli's principle is named after the Swiss scientist Daniel Bernoulli who published his principle in his book Hydrodynamica in 1738.[3] Bernoulli's principle can be applied to various types of fluid flow, resulting in what is loosely denoted as Bernoulli's equation. In fact, there are different forms of the Bernoulli equation for different types of flow. The simple form of Bernoulli's principle is valid for incompressible flows (e.g. most liquid flows) and also for compressible flows (e.g. gases) moving at low Mach numbers (usually less than 0.3). More advanced forms may in some cases be applied to compressible flows at higher Mach numbers (see the derivations of the Bernoulli equation). Bernoulli's principle can be derived from the principle of conservation of energy. This states that, in a steady flow, the sum of all forms of mechanical energy in a fluid along a streamline is the same at all points on that streamline. This requires that the sum of kinetic energy and potential energy remain constant. Thus an increase in the speed of the fluid occurs proportionately with an increase in both its dynamic pressure and kinetic energy, and a decrease in its static pressure and potential energy. If the fluid is flowing out of a reservoir, the sum of all forms of energy is the same on all streamlines because in a reservoir the energy per unit volume (the sum of pressure and gravitational potential ρ g h) is the same everywhere.[4] Bernoulli's principle can also be derived directly from Newton's 2nd law. If a small volume of fluid is flowing horizontally from a region of high pressure to a region of low pressure, then there is more pressure behind than in front. This gives a net force on the volume, accelerating it along the streamline.[5][6][7] Fluid particles are subject only to pressure and their own weight. If a fluid is flowing horizontally and along a section of a streamline, where the speed increases it can only be because the fluid on that section has moved from a region of higher pressure to a region of lower pressure; and if its speed decreases, it can only be because it has moved from a region of lower pressure to a region of higher pressure. Consequently, within a fluid flowing horizontally, the highest speed occurs where the pressure is lowest, and the lowest speed occurs where the pressure is highest. Incompressible flow equation In most flows of liquids, and of gases at low Mach number, the density of a fluid parcel can be considered to be constant, regardless of pressure variations in the flow. Therefore, the fluid can be considered to be incompressible and these flows are called incompressible flow. Bernoulli performed his experiments on liquids, so his equation in its original form is valid only for incompressible flow. 
A common form of Bernoulli's equation, valid at any arbitrary point along a streamline, is: ${v^2 \over 2}+gz+{p\over\rho}=\text{constant}$ () where: $v\,$ is the fluid flow speed at a point on a streamline, $g\,$ is the acceleration due to gravity, $z\,$ is the elevation of the point above a reference plane, with the positive z-direction pointing upward – so in the direction opposite to the gravitational acceleration, $p\,$ is the pressure at the chosen point, and $\rho\,$ is the density of the fluid at all points in the fluid. For conservative force fields, Bernoulli's equation can be generalized as:[8] ${v^2 \over 2}+\Psi+{p\over\rho}=\text{constant}$ where Ψ is the force potential at the point considered on the streamline. E.g. for the Earth's gravity Ψ = gz. The following two assumptions must be met for this Bernoulli equation to apply:[8] • the flow must be incompressible – even though pressure varies, the density must remain constant along a streamline; • friction by viscous forces has to be negligible. By multiplying with the fluid density $\rho$, equation (A) can be rewritten as: $\tfrac12\, \rho\, v^2\, +\, \rho\, g\, z\, +\, p\, =\, \text{constant}\,$ or: $q\, +\, \rho\, g\, h\, =\, p_0\, +\, \rho\, g\, z\, =\, \text{constant}\,$ where: $q\, =\, \tfrac12\, \rho\, v^2$ is dynamic pressure, $h\, =\, z\, +\, \frac{p}{\rho g}$ is the piezometric head or hydraulic head (the sum of the elevation z and the pressure head)[9][10] and $p_0\, =\, p\, +\, q\,$ is the total pressure (the sum of the static pressure p and dynamic pressure q).[11] The constant in the Bernoulli equation can be normalised. A common approach is in terms of total head or energy head H: $H\, =\, z\, +\, \frac{p}{\rho g}\, +\, \frac{v^2}{2\,g}\, =\, h\, +\, \frac{v^2}{2\,g},$ The above equations suggest there is a flow speed at which pressure is zero, and at even higher speeds the pressure is negative. Most often, gases and liquids are not capable of negative absolute pressure, or even zero pressure, so clearly Bernoulli's equation ceases to be valid before zero pressure is reached. In liquids – when the pressure becomes too low – cavitation occurs. The above equations use a linear relationship between flow speed squared and pressure. At higher flow speeds in gases, or for sound waves in liquid, the changes in mass density become significant so that the assumption of constant density is invalid. Simplified form In many applications of Bernoulli's equation, the change in the ρ g z term along the streamline is so small compared with the other terms that it can be ignored. For example, in the case of aircraft in flight, the change in height z along a streamline is so small the ρ g z term can be omitted. This allows the above equation to be presented in the following simplified form: $p + q = p_0\,$ where p0 is called 'total pressure', and q is 'dynamic pressure'.[12] Many authors refer to the pressure p as static pressure to distinguish it from total pressure p0 and dynamic pressure q. In Aerodynamics, L.J. 
Clancy writes: "To distinguish it from the total and dynamic pressures, the actual pressure of the fluid, which is associated not with its motion but with its state, is often referred to as the static pressure, but where the term pressure alone is used it refers to this static pressure."[13] The simplified form of Bernoulli's equation can be summarized in the following memorable word equation: static pressure + dynamic pressure = total pressure[13] Every point in a steadily flowing fluid, regardless of the fluid speed at that point, has its own unique static pressure p and dynamic pressure q. Their sum p + q is defined to be the total pressure p0. The significance of Bernoulli's principle can now be summarized as total pressure is constant along a streamline. If the fluid flow is irrotational, the total pressure on every streamline is the same and Bernoulli's principle can be summarized as total pressure is constant everywhere in the fluid flow.[14] It is reasonable to assume that irrotational flow exists in any situation where a large body of fluid is flowing past a solid body. Examples are aircraft in flight, and ships moving in open bodies of water. However, it is important to remember that Bernoulli's principle does not apply in the boundary layer or in fluid flow through long pipes. If the fluid flow at some point along a stream line is brought to rest, this point is called a stagnation point, and at this point the total pressure is equal to the stagnation pressure. Applicability of incompressible flow equation to flow of gases Bernoulli's equation is sometimes valid for the flow of gases: provided that there is no transfer of kinetic or potential energy from the gas flow to the compression or expansion of the gas. If both the gas pressure and volume change simultaneously, then work will be done on or by the gas. In this case, Bernoulli's equation – in its incompressible flow form – can not be assumed to be valid. However if the gas process is entirely isobaric, or isochoric, then no work is done on or by the gas, (so the simple energy balance is not upset). According to the gas law, an isobaric or isochoric process is ordinarily the only way to ensure constant density in a gas. Also the gas density will be proportional to the ratio of pressure and absolute temperature, however this ratio will vary upon compression or expansion, no matter what non-zero quantity of heat is added or removed. The only exception is if the net heat transfer is zero, as in a complete thermodynamic cycle, or in an individual isentropic (frictionless adiabatic) process, and even then this reversible process must be reversed, to restore the gas to the original pressure and specific volume, and thus density. Only then is the original, unmodified Bernoulli equation applicable. In this case the equation can be used if the flow speed of the gas is sufficiently below the speed of sound, such that the variation in density of the gas (due to this effect) along each streamline can be ignored. Adiabatic flow at less than Mach 0.3 is generally considered to be slow enough. Unsteady potential flow The Bernoulli equation for unsteady potential flow is used in the theory of ocean surface waves and acoustics. For an irrotational flow, the flow velocity can be described as the gradient ∇φ of a velocity potential φ. 
In that case, and for a constant density ρ, the momentum equations of the Euler equations can be integrated to:[15] $\frac{\partial \varphi}{\partial t} + \tfrac{1}{2} v^2 + \frac{p}{\rho} + gz = f(t),$ which is a Bernoulli equation valid also for unsteady—or time dependent—flows. Here ∂φ/∂t denotes the partial derivative of the velocity potential φ with respect to time t, and v = |∇φ| is the flow speed. The function f(t) depends only on time and not on position in the fluid. As a result, the Bernoulli equation at some moment t does not only apply along a certain streamline, but in the whole fluid domain. This is also true for the special case of a steady irrotational flow, in which case f is a constant.[15] Further f(t) can be made equal to zero by incorporating it into the velocity potential using the transformation $\Phi=\varphi-\int_{t_0}^t f(\tau)\, \operatorname{d}\tau,\text{ resulting in }\frac{\partial \Phi}{\partial t} + \tfrac{1}{2} v^2 + \frac{p}{\rho} + gz=0.$ Note that the relation of the potential to the flow velocity is unaffected by this transformation: ∇Φ = ∇φ. The Bernoulli equation for unsteady potential flow also appears to play a central role in Luke's variational principle, a variational description of free-surface flows using the Lagrangian (not to be confused with Lagrangian coordinates). Compressible flow equation Bernoulli developed his principle from his observations on liquids, and his equation is applicable only to incompressible fluids, and compressible fluids up to approximately Mach number 0.3.[16] It is possible to use the fundamental principles of physics to develop similar equations applicable to compressible fluids. There are numerous equations, each tailored for a particular application, but all are analogous to Bernoulli's equation and all rely on nothing more than the fundamental principles of physics such as Newton's laws of motion or the first law of thermodynamics. Compressible flow in fluid dynamics For a compressible fluid, with a barotropic equation of state, and under the action of conservative forces, $\frac {v^2}{2}+ \int_{p_1}^p \frac {d\tilde{p}}{\rho(\tilde{p})}\ + \Psi = \text{constant}$[17]   (constant along a streamline) where: p is the pressure ρ is the density v is the flow speed Ψ is the potential associated with the conservative force field, often the gravitational potential In engineering situations, elevations are generally small compared to the size of the Earth, and the time scales of fluid flow are small enough to consider the equation of state as adiabatic. In this case, the above equation becomes $\frac {v^2}{2}+ gz+\left(\frac {\gamma}{\gamma-1}\right)\frac {p}{\rho} = \text{constant}$[18]   (constant along a streamline) where, in addition to the terms listed above: γ is the ratio of the specific heats of the fluid g is the acceleration due to gravity z is the elevation of the point above a reference plane In many applications of compressible flow, changes in elevation are negligible compared to the other terms, so the term gz can be omitted. 
A very useful form of the equation is then: $\frac {v^2}{2}+\left( \frac {\gamma}{\gamma-1}\right)\frac {p}{\rho} = \left(\frac {\gamma}{\gamma-1}\right)\frac {p_0}{\rho_0}$ where: p0 is the total pressure ρ0 is the total density Compressible flow in thermodynamics Another useful form of the equation, suitable for use in thermodynamics and for (quasi) steady flow, is:[2][19] ${v^2 \over 2} + \Psi + w =\text{constant}$[20] Here w is the enthalpy per unit mass, which is also often written as h (not to be confused with "head" or "height"). Note that $w = \epsilon + \frac{p}{\rho}$ where ε is the thermodynamic energy per unit mass, also known as the specific internal energy. The constant on the right hand side is often called the Bernoulli constant and denoted b. For steady inviscid adiabatic flow with no additional sources or sinks of energy, b is constant along any given streamline. More generally, when b may vary along streamlines, it still proves a useful parameter, related to the "head" of the fluid (see below). When the change in Ψ can be ignored, a very useful form of this equation is: ${v^2 \over 2}+ w = w_0$ where w0 is total enthalpy. For a calorically perfect gas such as an ideal gas, the enthalpy is directly proportional to the temperature, and this leads to the concept of the total (or stagnation) temperature. When shock waves are present, in a reference frame in which the shock is stationary and the flow is steady, many of the parameters in the Bernoulli equation suffer abrupt changes in passing through the shock. The Bernoulli parameter itself, however, remains unaffected. An exception to this rule is radiative shocks, which violate the assumptions leading to the Bernoulli equation, namely the lack of additional sinks or sources of energy. Derivations of Bernoulli equation Bernoulli equation for incompressible fluids The Bernoulli equation for incompressible fluids can be derived by integrating Newton's Second Law of Motion, or applying the law of conservation of energy in two sections along a streamline, ignoring viscosity, compressibility, and thermal effects. The simplest derivation is to first ignore gravity and consider constrictions and expansions in pipes that are otherwise straight, as seen in Venturi effect. Let the x axis be directed down the axis of the pipe. Define a parcel of fluid moving through a pipe with cross-sectional area "A", the length of the parcel is "dx", and the volume of the parcel A dx. If mass density is ρ, the mass of the parcel is density multiplied by its volume m = ρ A dx. The change in pressure over distance dx is "dp" and flow velocity v = dx / dt. Apply Newton's Second Law of Motion (Force =mass×acceleration) and recognizing that the effective force on the parcel of fluid is -A dp. If the pressure decreases along the length of the pipe, dp is negative but the force resulting in flow is positive along the x axis. $m \frac{\operatorname{d}v}{\operatorname{d}t}= F$ $\rho A \operatorname{d}x \frac{\operatorname{d}v}{\operatorname{d}t}= -A \operatorname{d}p$ $\rho \frac{\operatorname{d}v}{\operatorname{d}t}= -\frac{\operatorname{d}p}{\operatorname{d}x}$ In steady flow the velocity field is constant with respect to time, v = v(x) = v(x(t)), so v itself is not directly a function of time t. It is only when the parcel moves through x that the cross sectional area changes: v depends on t only through the cross-sectional position x(t). 
$\frac{\operatorname{d}v}{\operatorname{d}t}= \frac{\operatorname{d}v}{\operatorname{d}x}\frac{\operatorname{d}x}{\operatorname{d}t} = \frac{\operatorname{d}v}{\operatorname{d}x}v=\frac{d}{\operatorname{d}x} \left( \frac{v^2}{2} \right).$ With density ρ constant, the equation of motion can be written as $\frac{\operatorname{d}}{\operatorname{d}x} \left( \rho \frac{v^2}{2} + p \right) =0$ by integrating with respect to x $\frac{v^2}{2} + \frac{p}{\rho}= C$ where C is a constant, sometimes referred to as the Bernoulli constant. It is not a universal constant, but rather a constant of a particular fluid system. The deduction is: where the speed is large, pressure is low and vice versa. In the above derivation, no external work-energy principle is invoked. Rather, Bernoulli's principle was inherently derived by a simple manipulation of the momentum equation. A streamtube of fluid moving to the right. Indicated are pressure, elevation, flow speed, distance (s), and cross-sectional area. Note that in this figure elevation is denoted as h, contrary to the text where it is given by z. Another way to derive Bernoulli's principle for an incompressible flow is by applying conservation of energy.[21] In the form of the work-energy theorem, stating that[22] the change in the kinetic energy Ekin of the system equals the net work W done on the system; $W = \Delta E_\text{kin}. \;$ Therefore, the work done by the forces in the fluid = increase in kinetic energy. The system consists of the volume of fluid, initially between the cross-sections A1 and A2. In the time interval Δt fluid elements initially at the inflow cross-section A1 move over a distance s1 = v1 Δt, while at the outflow cross-section the fluid moves away from cross-section A2 over a distance s2 = v2 Δt. The displaced fluid volumes at the inflow and outflow are respectively A1 s1 and A2 s2. The associated displaced fluid masses are – when ρ is the fluid's mass density – equal to density times volume, so ρ A1 s1 and ρ A2 s2. By mass conservation, these two masses displaced in the time interval Δt have to be equal, and this displaced mass is denoted by Δm: $\begin{align} \rho A_1 s_1 &= \rho A_{1} v_{1} \Delta t = \Delta m, \\ \rho A_2 s_2 &= \rho A_{2} v_{2} \Delta t = \Delta m. \end{align}$ The work done by the forces consists of two parts: • The work done by the pressure acting on the areas A1 and A2 $W_\text{pressure}=F_{1,\text{pressure}}\; s_{1}\, -\, F_{2,\text{pressure}}\; s_2 =p_1 A_1 s_1 - p_2 A_2 s_2 = \Delta m\, \frac{p_1}{\rho} - \Delta m\, \frac{p_2}{\rho}. \;$ • The work done by gravity: the gravitational potential energy in the volume A1 s1 is lost, and at the outflow in the volume A2 s2 is gained. So, the change in gravitational potential energy ΔEpot,gravity in the time interval Δt is $\Delta E_\text{pot,gravity} = \Delta m\, g z_2 - \Delta m\, g z_1. \;$ Now, the work by the force of gravity is opposite to the change in potential energy, Wgravity = −ΔEpot,gravity: while the force of gravity is in the negative z-direction, the work—gravity force times change in elevation—will be negative for a positive elevation change Δz = z2 − z1, while the corresponding potential energy change is positive.[23] So: $W_\text{gravity} = -\Delta E_\text{pot,gravity} = \Delta m\, g z_1 - \Delta m\, g z_2. \;$ And the total work done in this time interval $\Delta t$ is $W = W_\text{pressure} + W_\text{gravity}. 
\,$ The increase in kinetic energy is $\Delta E_\text{kin} = \frac{1}{2} \Delta m\, v_{2}^{2}-\frac{1}{2} \Delta m\, v_{1}^{2}.$ Putting these together, the work-kinetic energy theorem W = ΔEkin gives:[21] $\Delta m\, \frac{p_{1}}{\rho} - \Delta m\, \frac{p_{2}}{\rho} + \Delta m\, g z_{1} - \Delta m\, g z_{2} = \frac{1}{2} \Delta m\, v_{2}^{2} - \frac{1}{2} \Delta m\, v_{1}^{2}$ or $\frac12 \Delta m\, v_1^2 + \Delta m\, g z_1 + \Delta m\, \frac{p_1}{\rho} = \frac12 \Delta m\, v_2^2 + \Delta m\, g z_2 + \Delta m\, \frac{p_2}{\rho}.$ After dividing by the mass Δm = ρ A1 v1 Δt = ρ A2 v2 Δt the result is:[21] $\frac12 v_1^2 +g z_1 + \frac{p_1}{\rho}=\frac12 v_2^2 +g z_2 + \frac{p_2}{\rho}$ or, as stated in the first paragraph: $\frac{v^2}{2}+g z+\frac{p}{\rho}=C$   (Eqn. 1), Which is also Equation (A) Further division by g produces the following equation. Note that each term can be described in the length dimension (such as meters). This is the head equation derived from Bernoulli's principle: $\frac{v^{2}}{2 g}+z+\frac{p}{\rho g}=C$   (Eqn. 2a) The middle term, z, represents the potential energy of the fluid due to its elevation with respect to a reference plane. Now, z is called the elevation head and given the designation zelevation. A free falling mass from an elevation z > 0 (in a vacuum) will reach a speed $v=\sqrt{{2 g}{z}},$ when arriving at elevation z = 0. Or when we rearrange it as a head: $h_v =\frac{v^2}{2 g}$ The term v2 / (2 g) is called the velocity head, expressed as a length measurement. It represents the internal energy of the fluid due to its motion. The hydrostatic pressure p is defined as $p=p_0-\rho g z \,$, with p0 some reference pressure, or when we rearrange it as a head: $\psi=\frac{p}{\rho g}$ The term p / (ρg) is also called the pressure head, expressed as a length measurement. It represents the internal energy of the fluid due to the pressure exerted on the container. When we combine the head due to the flow speed and the head due to static pressure with the elevation above a reference plane, we obtain a simple relationship useful for incompressible fluids using the velocity head, elevation head, and pressure head. $h_{v} + z_\text{elevation} + \psi = C\,$   (Eqn. 2b) If we were to multiply Eqn. 1 by the density of the fluid, we would get an equation with three pressure terms: $\frac{\rho v^{2}}{2}+ \rho g z + p=C$   (Eqn. 3) We note that the pressure of the system is constant in this form of the Bernoulli Equation. If the static pressure of the system (the far right term) increases, and if the pressure due to elevation (the middle term) is constant, then we know that the dynamic pressure (the left term) must have decreased. In other words, if the speed of a fluid decreases and it is not due to an elevation difference, we know it must be due to an increase in the static pressure that is resisting the flow. All three equations are merely simplified versions of an energy balance on a system. Bernoulli equation for compressible fluids The derivation for compressible fluids is similar. Again, the derivation depends upon (1) conservation of mass, and (2) conservation of energy. Conservation of mass implies that in the above figure, in the interval of time Δt, the amount of mass passing through the boundary defined by the area A1 is equal to the amount of mass passing outwards through the boundary defined by the area A2: $0= \Delta M_1 - \Delta M_2 = \rho_1 A_1 v_1 \, \Delta t - \rho_2 A_2 v_2 \, \Delta t$. 
Conservation of energy is applied in a similar manner: It is assumed that the change in energy of the volume of the streamtube bounded by A1 and A2 is due entirely to energy entering or leaving through one or the other of these two boundaries. Clearly, in a more complicated situation such as a fluid flow coupled with radiation, such conditions are not met. Nevertheless, assuming this to be the case and assuming the flow is steady so that the net change in the energy is zero, $0= \Delta E_1 - \Delta E_2 \,$ where ΔE1 and ΔE2 are the energy entering through A1 and leaving through A2, respectively. The energy entering through A1 is the sum of the kinetic energy entering, the energy entering in the form of potential gravitational energy of the fluid, the fluid thermodynamic energy entering, and the energy entering in the form of mechanical p dV work: $\Delta E_1 = \left[\frac{1}{2} \rho_1 v_1^2 + \Psi_1 \rho_1 + \epsilon_1 \rho_1 + p_1 \right] A_1 v_1 \, \Delta t$ where Ψ = gz is a force potential due to the Earth's gravity, g is acceleration due to gravity, and z is elevation above a reference plane. A similar expression for $\Delta E_2$ may easily be constructed. So now setting $0 = \Delta E_1 - \Delta E_2$: $0 = \left[\frac{1}{2} \rho_1 v_1^2+ \Psi_1 \rho_1 + \epsilon_1 \rho_1 + p_1 \right] A_1 v_1 \, \Delta t - \left[ \frac{1}{2} \rho_2 v_2^2 + \Psi_2 \rho_2 + \epsilon_2 \rho_2 + p_2 \right] A_2 v_2 \, \Delta t$ which can be rewritten as: $0 = \left[ \frac{1}{2} v_1^2 + \Psi_1 + \epsilon_1 + \frac{p_1}{\rho_1} \right] \rho_1 A_1 v_1 \, \Delta t - \left[ \frac{1}{2} v_2^2 + \Psi_2 + \epsilon_2 + \frac{p_2}{\rho_2} \right] \rho_2 A_2 v_2 \, \Delta t$ Now, using the previously-obtained result from conservation of mass, this may be simplified to obtain $\frac{1}{2}v^2 + \Psi + \epsilon + \frac{p}{\rho} = {\rm constant} \equiv b$ which is the Bernoulli equation for compressible flow. Applications Condensation visible over the upper surface of a wing caused by the fall in temperature accompanying the fall in pressure, both due to acceleration of the air. In modern everyday life there are many observations that can be successfully explained by application of Bernoulli's principle, even though no real fluid is entirely inviscid[24] and a small viscosity often has a large effect on the flow. • Bernoulli's principle can be used to calculate the lift force on an airfoil if the behaviour of the fluid flow in the vicinity of the foil is known. For example, if the air flowing past the top surface of an aircraft wing is moving faster than the air flowing past the bottom surface, then Bernoulli's principle implies that the pressure on the surfaces of the wing will be lower above than below. This pressure difference results in an upwards lifting force.[nb 1][25] Whenever the distribution of speed past the top and bottom surfaces of a wing is known, the lift forces can be calculated (to a good approximation) using Bernoulli's equations[26] – established by Bernoulli over a century before the first man-made wings were used for the purpose of flight. Bernoulli's principle does not explain why the air flows faster past the top of the wing and slower past the underside. To understand why, it is helpful to understand circulation, the Kutta condition, and the Kutta–Joukowski theorem. 
• The Dyson Bladeless Fan (or Air Multiplier) is an implementation that takes advantage of the Venturi effect, Coandă effect and Bernoulli's Principle.[27] • The carburetor used in many reciprocating engines contains a venturi to create a region of low pressure to draw fuel into the carburetor and mix it thoroughly with the incoming air. The low pressure in the throat of a venturi can be explained by Bernoulli's principle; in the narrow throat, the air is moving at its fastest speed and therefore it is at its lowest pressure. • The Pitot tube and static port on an aircraft are used to determine the airspeed of the aircraft. These two devices are connected to the airspeed indicator, which determines the dynamic pressure of the airflow past the aircraft. Dynamic pressure is the difference between stagnation pressure and static pressure. Bernoulli's principle is used to calibrate the airspeed indicator so that it displays the indicated airspeed appropriate to the dynamic pressure.[28] • The flow speed of a fluid can be measured using a device such as a Venturi meter or an orifice plate, which can be placed into a pipeline to reduce the diameter of the flow. For a horizontal device, the continuity equation shows that for an incompressible fluid, the reduction in diameter will cause an increase in the fluid flow speed. Subsequently Bernoulli's principle then shows that there must be a decrease in the pressure in the reduced diameter region. This phenomenon is known as the Venturi effect. • The maximum possible drain rate for a tank with a hole or tap at the base can be calculated directly from Bernoulli's equation, and is found to be proportional to the square root of the height of the fluid in the tank. This is Torricelli's law, showing that Torricelli's law is compatible with Bernoulli's principle. Viscosity lowers this drain rate. This is reflected in the discharge coefficient, which is a function of the Reynolds number and the shape of the orifice.[29] • In open-channel hydraulics, a detailed analysis of the Bernoulli theorem and its extension were recently (2009) developed.[30] It was proved that the depth-averaged specific energy reaches a minimum in converging accelerating free-surface flow over weirs and flumes (also[31][32]). Further, in general, a channel control with minimum specific energy in curvilinear flow is not isolated from water waves, as customary state in open-channel hydraulics. • The Bernoulli grip relies on this principle to create a non-contact adhesive force between a surface and the gripper. Misunderstandings about the generation of lift Main article: Lift (force) Many explanations for the generation of lift (on airfoils, propeller blades, etc.) can be found; some of these explanations can be misleading, and some are false.[33] This has been a source of heated discussion over the years. In particular, there has been debate about whether lift is best explained by Bernoulli's principle or Newton's laws of motion. Modern writings agree that both Bernoulli's principle and Newton's laws are relevant and either can be used to correctly describe lift.[34][35][36] Several of these explanations use the Bernoulli principle to connect the flow kinematics to the flow-induced pressures. In cases of incorrect (or partially correct) explanations relying on the Bernoulli principle, the errors generally occur in the assumptions on the flow kinematics and how these are produced. 
It is not the Bernoulli principle itself that is questioned because this principle is well established.[37][38][39][40] Misapplications of Bernoulli's principle in common classroom demonstrations There are several common classroom demonstrations that are sometimes incorrectly explained using Bernoulli's principle.[41] One involves holding a piece of paper horizontally so that it droops downward and then blowing over the top of it. As the demonstrator blows over the paper, the paper rises. It is then asserted that this is because "faster moving air has lower pressure".[42][43][44] One problem with this explanation can be seen by blowing along the bottom of the paper - were the deflection due simply to faster moving air one would expect the paper to deflect downward, but the paper deflects upward regardless of whether the faster moving air is on the top or the bottom.[45] Another problem is that when the air leaves the demonstrator's mouth it has the same pressure as the surrounding air;[46] the air does not have lower pressure just because it is moving; in the demonstration, the static pressure of the air leaving the demonstrator's mouth is equal to the pressure of the surrounding air.[47][48] A third problem is that it is false to make a connection between the flow on the two sides of the paper using Bernoulli’s equation since the air above and below are different flow fields and Bernoulli's principle only applies within a flow field.[49][50][51][52] As the wording of the principle can change its implications, stating the principle correctly is important.[53] What Bernoulli's principle actually says is that within a flow of constant energy, when fluid flows through a region of lower pressure it speeds up and vice versa.[54] Thus, Bernoulli's principle concerns itself with changes in speed and changes in pressure within a flow field. It cannot be used to compare different flow fields. A correct explanation of why the paper rises would observe that the plume follows the curve of the paper and that a curved streamline will develop a pressure gradient perpendicular to the direction of flow, with the lower pressure on the inside of the curve.[55][56][57][58] Bernoulli's principle predicts that the decrease in pressure is associated with an increase in speed, i.e. that as the air passes over the paper it speeds up and moves faster than it was moving when it left the demonstrator's mouth. But this is not apparent from the demonstration.[59][60][61] Other common classroom demonstrations, such as blowing between two suspended spheres, or suspending a ball in an airstream are sometimes explained in a similarly misleading manner by saying "faster moving air has lower pressure".[62][63][64][65][66][67][68] See also • Terminology in fluid dynamics • Navier–Stokes equations – for the flow of a viscous fluid • Euler equations – for the flow of an inviscid fluid • Hydraulics – applied fluid mechanics for liquids • Venturi effect • Inviscid flow References 1. Clancy, L.J., Aerodynamics, Chapter 3. 2. ^ a b Batchelor, G.K. (1967), Section 3.5, pp. 156–64. 3. "Hydrodynamica". Britannica Online Encyclopedia. Retrieved 2008-10-30. 4. Streeter, V.L., Fluid Mechanics, Example 3.5, McGraw–Hill Inc. (1966), New York. 5. "If the particle is in a region of varying pressure (a non-vanishing pressure gradient in the x-direction) and if the particle has a finite size l, then the front of the particle will be ‘seeing’ a different pressure from the rear. 
More precisely, if the pressure drops in the x-direction (dp/dx < 0) the pressure at the rear is higher than at the front and the particle experiences a (positive) net force. According to Newton’s second law, this force causes an acceleration and the particle’s velocity increases as it moves along the streamline... Bernoulli’s equation describes this mathematically (see the complete derivation in the appendix)."Babinsky, Holger (November 2003), "How do wings work?", Physics Education 6. "Acceleration of air is caused by pressure gradients. Air is accelerated in direction of the velocity if the pressure goes down. Thus the decrease of pressure is the cause of a higher velocity." 7. " The idea is that as the parcel moves along, following a streamline, as it moves into an area of higher pressure there will be higher pressure ahead (higher than the pressure behind) and this will exert a force on the parcel, slowing it down. Conversely if the parcel is moving into a region of lower pressure, there will be an higher pressure behind it (higher than the pressure ahead), speeding it up. As always, any unbalanced force will cause a change in momentum (and velocity), as required by Newton’s laws of motion." See How It Flies John S. Denker http://www.av8n.com/how/htm/airfoils.html 8. ^ a b Batchelor, G.K. (1967), §5.1, p. 265. 9. Mulley, Raymond (2004). Flow of Industrial Fluids: Theory and Equations. CRC Press. ISBN 0-8493-2767-9. , 410 pages. See pp. 43–44. 10. Chanson, Hubert (2004). Hydraulics of Open Channel Flow: An Introduction. Butterworth-Heinemann. ISBN 0-7506-5978-5. , 650 pages. See p. 22. 11. Oertel, Herbert; Prandtl, Ludwig; Böhle, M.; Mayes, Katherine (2004). Prandtl's Essentials of Fluid Mechanics. Springer. pp. 70–71. ISBN 0-387-40437-6. 12. "Bernoulli's Equation". NASA Glenn Research Center. Retrieved 2009-03-04. 13. ^ a b Clancy, L.J., Aerodynamics, Section 3.5. 14. Clancy, L.J. Aerodynamics, Equation 3.12 15. ^ a b Batchelor, G.K. (1967), p. 383 16. White, Frank M. Fluid Mechanics, 6e. McGraw-Hill International Edition. p. 602. 17. Clarke C. and Carswell B., Astrophysical Fluid Dynamics 18. Clancy, L.J., Aerodynamics, Section 3.11 19. Van Wylen, G.J., and Sonntag, R.E., (1965), Fundamentals of Classical Thermodynamics, Section 5.9, John Wiley and Sons Inc., New York 20. ^ a b c Feynman, R.P.; Leighton, R.B.; Sands, M. (1963). . ISBN 0-201-02116-1. , Vol. 2, §40–3, pp. 40–6 – 40–9. 21. Tipler, Paul (1991). Physics for Scientists and Engineers: Mechanics (3rd extended ed.). W. H. Freeman. ISBN 0-87901-432-6. , p. 138. 22. Feynman, R.P.; Leighton, R.B.; Sands, M. (1963). . ISBN 0-201-02116-1. , Vol. 1, §14–3, p. 14–4. 23. Physics Today, May 1010, "The Nearly Perfect Fermi Gas", by John E. Thomas, p 34. 24. Resnick, R. and Halliday, D. (1960), Physics, Section 18–5, John Wiley & Sons, Inc., New York ("[streamlines] are closer together above the wing than they are below so that Bernoulli's principle predicts the observed upward dynamic lift.") 25. Eastlake, Charles N. (March 2002). "An Aerodynamicist’s View of Lift, Bernoulli, and Newton". The Physics Teacher 40.  "The resultant force is determined by integrating the surface-pressure distribution over the surface area of the airfoil." 26. Clancy, L.J., Aerodynamics, Section 3.8 27. Mechanical Engineering Reference Manual Ninth Edition 28. Castro-Orgaz, O. & Chanson, H. (2009). "Bernoulli Theorem, Minimum Specific Energy and Water Wave Celerity in Open Channel Flow". Journal of Irrigation and Drainage Engineering, ASCE, 135 (6): 773–778. 
doi:10.1061/(ASCE)IR.1943-4774.0000084. 29. Chanson, H. (2009). "Transcritical Flow due to Channel Contraction". Journal of Hydraulic Engineering, ASCE 135 (12): 1113–1114. 30. Chanson, H. (2006). "Minimum Specific Energy and Critical Flow Conditions in Open Channels". Journal of Irrigation and Drainage Engineering, ASCE 132 (5): 498–502. doi:10.1061/(ASCE)0733-9437(2006)132:5(498). 31. Glenn Research Center (2006-03-15). "Incorrect Lift Theory". NASA. Retrieved 2010-08-12. 32. Chanson, H. (2009). Applied Hydrodynamics: An Introduction to Ideal and Real Fluid Flows. CRC Press, Taylor & Francis Group, Leiden, The Netherlands, 478 pages. ISBN 978-0-415-49271-3. 33. 34. Phillips, O.M. (1977). The dynamics of the upper ocean (2nd ed.). Cambridge University Press. ISBN 0-521-29801-6.  Section 2.4. 35. Batchelor, G.K. (1967). Sections 3.5 and 5.1 36. Lamb, H. (1994) §17–§29 37.   "The conventional explanation of aerodynamical lift based on Bernoulli’s law and velocity differences mixes up cause and effect. The faster flow at the upper side of the wing is the consequence of low pressure and not its cause." 38. "Bernoulli's law and experiments attributed to it are fascinating. Unfortunately some of these experiments are explained erroneously..." Misinterpretations of Bernoulli's Law Weltner, Klaus and Ingelman-Sundberg, Martin Department of Physics, University Frankfurt http://www-stud.rbi.informatik.uni-frankfurt.de/~plass/MIS/mis6.html 39. "...air does not have a reduced lateral pressure (or static pressure...) simply because it is caused to move, the static pressure of free air does not decrease as the speed of the air increases, it misunderstanding Bernoulli's principle to suggest that this is what it tells us, and the behavior of the curved paper is explained by other reasoning than Bernoulli's principle." Peter Eastwell Bernoulli? Perhaps, but What About Viscosity? The Science Education Review, 6(1) 2007 http://www.scienceeducationreview.com/open_access/eastwell-bernoulli.pdf 40. "Make a strip of writing paper about 5 cm X 25 cm. Hold it in front of your lips so that it hangs out and down making a convex upward surface. When you blow across the top of the paper, it rises. Many books attribute this to the lowering of the air pressure on top solely to the Bernoulli effect. Now use your fingers to form the paper into a curve that it is slightly concave upward along its whole length and again blow along the top of this strip. The paper now bends downward...an often-cited experiment, which is usually taken as demonstrating the common explanation of lift, does not do so..." Jef Raskin Coanda Effect: Understanding Why Wings Work http://karmak.org/archive/2003/02/coanda_effect.html 41. "Blowing over a piece of paper does not demonstrate Bernoulli’s equation. While it is true that a curved paper lifts when flow is applied on one side, this is not because air is moving at different speeds on the two sides... It is false to make a connection between the flow on the two sides of the paper using Bernoulli’s equation." Holger Babinsky How Do Wings Work 42. "An explanation based on Bernoulli’s principle is not applicable to this situation, because this principle has nothing to say about the interaction of air masses having different speeds... Also, while Bernoulli’s principle allows us to compare fluid speeds and pressures along a single streamline and... 
along two different streamlines that originate under identical fluid conditions, using Bernoulli’s principle to compare the air above and below the curved paper in Figure 1 is nonsensical; in this case, there aren’t any streamlines at all below the paper!" Peter Eastwell Bernoulli? Perhaps, but What About Viscosity? The Science Education Review 6(1) 2007 http://www.scienceeducationreview.com/open_access/eastwell-bernoulli.pdf 43. "The well-known demonstration of the phenomenon of lift by means of lifting a page cantilevered in one’s hand by blowing horizontally along it is probably more a demonstration of the forces inherent in the Coanda effect than a demonstration of Bernoulli’s law; for, here, an air jet issues from the mouth and attaches to a curved (and, in this case pliable) surface. The upper edge is a complicated vortex-laden mixing layer and the distant flow is quiescent, so that Bernoulli’s law is hardly applicable." David Auerbach Why Aircreft Fly European Journal of Physics Vol 21 p 289 http://iopscience.iop.org/0143-0807/21/4/302/pdf/0143-0807_21_4_302.pdf 44. "Millions of children in science classes are being asked to blow over curved pieces of paper and observe that the paper "lifts"... They are then asked to believe that Bernoulli's theorem is responsible... Unfortunately, the "dynamic lift" involved...is not properly explained by Bernoulli's theorem." Norman F. Smith "Bernoulli and Newton in Fluid Mechanics" The Physics Teacher Nov 1972 45. "Bernoulli’s principle is very easy to understand provided the principle is correctly stated. However, we must be careful, because seemingly-small changes in the wording can lead to completely wrong conclusions." See How It Flies John S. Denker http://www.av8n.com/how/htm/airfoils.html#sec-bernoulli 46. "A complete statement of Bernoulli's Theorem is as follows: "In a flow where no energy is being added or taken away, the sum of its various energies is a constant: consequently where the velocity increasees the pressure decreases and vice versa."" Norman F Smith Bernoulli, Newton and Dynamic Lift Part I School Science and Mathematics Vol 73 Issue 3 http://onlinelibrary.wiley.com/doi/10.1111/j.1949-8594.1973.tb08998.x/pdf 47. "The curved paper turns the stream of air downward, and this action produces the lift reaction that lifts the paper." Norman F. Smith Bernoulli, Newton, and Dynamic Lift Part II School Science and Mathematics vol 73 Issue 4 pg 333 http://onlinelibrary.wiley.com/doi/10.1111/j.1949-8594.1973.tb09040.x/pdf 48. "The curved surface of the tongue creates unequal air pressure and a lifting action. ... Lift is caused by air moving over a curved surface." AERONAUTICS An Educator’s Guide with Activities in Science, Mathematics, and Technology Education by NASA pg 26 http://www.nasa.gov/pdf/58152main_Aeronautics.Educator.pdf 49. "Viscosity causes the breath to follow the curved surface, Newton's first law says there a force on the air and Newton’s third law says there is an equal and opposite force on the paper. Momentum transfer lifts the strip. The reduction in pressure acting on the top surface of the piece of paper causes the paper to rise." The Newtonian Description of Lift of a Wing-Revised David F. Anderson & Scott Eberhardt http://home.comcast.net/~clipper-108/Lift_AAPT.pdf 50. '"Demonstrations" of Bernoulli's principle are often given as demonstrations of the physics of lift. They are truly demonstrations of lift, but certainly not of Bernoulli's principle.' 
David F Anderson & Scott Eberhardt Understanding Flight pg 229 http://books.google.com/books?id=52Hfn7uEGSoC&pg=PA229 51. "As an example, take the misleading experiment most often used to "demonstrate" Bernoulli's principle. Hold a piece of piece of paper so that it curves over your finger, then blow across the top. The paper will rise. However most people do not realize that the paper would not rise if it were flat, even though you are blowing air across the top of it at a furious rate. Bernoulli's principle does not apply directly in this case. This is because the air on the two sides of the paper did not start out from the same source. The air on the bottom is ambient air from the room, but the air on the top came from your mouth where you actually increased its speed without decreasing its pressure by forcing it out of your mouth. As a result the air on both sides of the flat paper actually has the same pressure, even though the air on the top is moving faster. The reason that a curved piece of paper does rise is that the air from your mouth speeds up even more as it follows the curve of the paper, which in turn lowers the pressure according to Bernoulli." From The Aeronautics File By Max Feil http://webcache.googleusercontent.com/search?q=cache:nutfrrTXLkMJ:www.mat.uc.pt/~pedro/ncientificos/artigos/aeronauticsfile1.ps+&cd=29&hl=en&ct=clnk&gl=us 52. "Finally, let’s go back to the initial example of a ball levitating in a jet of air. The naive explanation for the stability of the ball in the air stream, 'because pressure in the jet is lower than pressure in the surrounding atmosphere,' is clearly incorrect. The static pressure in the free air jet is the same as the pressure in the surrounding atmosphere..." Martin Kamela Thinking About Bernoulli The Physics Teacher Vol. 45, September 2007 http://tpt.aapt.org/resource/1/phteah/v45/i6/p379_s1 53. "Aysmmetrical flow (not Bernoulli's theorem) also explains lift on the ping-pong ball or beach ball that floats so mysteriously in the tilted vacuum cleaner exhaust..." Norman F. Smith, Bernoulli and Newton in Fluid Mechanics" The Physics Teacher Nov 1972 p 455 54. "Bernoulli’s theorem is often obscured by demonstrations involving non-Bernoulli forces. For example, a ball may be supported on an upward jet of air or water, because any fluid (the air and water) has viscosity, which retards the slippage of one part of the fluid moving past another part of the fluid." The Bernoulli Conundrum Robert P. Bauman Professor of Physics Emeritus University of Alabama at Birmingham http://www.introphysics.info/Papers/BernoulliConundrumWS.pdf 55. "A second example is the confinement of a ping-pong ball in the vertical exhaust from a hair dryer. We are told that this is a demonstration of Bernoulli's principle. But, we now know that the exhaust does not have a lower value of ps. Again, it is momentum transfer that keeps the ball in the airflow. When the ball gets near the edge of the exhaust there is an asymmetric flow around the ball, which pushes it away from the edge of the flow. The same is true when one blows between two ping-pong balls hanging on strings." Anderson & Eberhardt The Newtonian Description of Lift on a Wing http://lss.fnal.gov/archive/2001/pub/Pub-01-036-E.pdf Notes 1. Clancy, L.J., Aerodynamics, Section 5.5 ("When a stream of air flows past an airfoil, there are local changes in flow speed round the airfoil, and consequently changes in static pressure, in accordance with Bernoulli's Theorem. 
The distribution of pressure determines the lift, pitching moment and form drag of the airfoil, and the position of its centre of pressure.") Further reading • Batchelor, G.K. (1967). An Introduction to Fluid Dynamics. Cambridge University Press. ISBN 0-521-66396-2. • Clancy, L.J. (1975). Aerodynamics. Pitman Publishing, London. ISBN 0-273-01120-0. • Lamb, H. (1993). Hydrodynamics (6th ed.). Cambridge University Press. ISBN 978-0-521-45868-9.  Originally published in 1879; the 6th extended edition appeared first in 1932. • Landau, L.D.; Lifshitz, E.M. (1987). Fluid Mechanics. Course of Theoretical Physics (2nd ed.). Pergamon Press. ISBN 0-7506-2767-0. • Chanson, H. (2009). Applied Hydrodynamics: An Introduction to Ideal and Real Fluid Flows. CRC Press, Taylor & Francis Group. ISBN 978-0-415-49271-3.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 56, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9053636193275452, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/272380/int-0-infty-2xe-2x-dx-221-22-how-to-find-this-result-of-the-inte
# $\int_0^\infty 2xe^{-2x} \: dx=Γ(2)2(1/2)^2$ how to find this result of the integral? $$\int_0^\infty 2xe^{-2x} \: dx=Γ(2)2(1/2)^2$$ I don't understand. How can we write this? Please can you explain this clearly? - Is there a reason you write the constant as $2(1/2)^2$ instead of $1/2$? – Christopher A. Wong Jan 7 at 20:21 Actually, I think writing it like that is helpful given the answer. – Ron Gordon Jan 7 at 20:21 ## 3 Answers Start with the definition of the Gamma function $$\Gamma(n) = \int_0^{\infty} \: t^{n-1} e^{-t} dt$$ Substitute $t=2x$ in the definition $$= \int_0^{\infty} \: (2x)^{n-1} e^{-2x}\: 2dx$$ To match the power of $x$, set $n=2$. $$\Gamma(2)= \int_0^{\infty} \: 2x e^{-2x} \: 2dx$$ Divide both sides by 2. - please also note that $\Gamma(n) = (n-1)!$ for positive integers – karakfa Jan 8 at 15:27 $$\int_0^{\infty} dx \: x^n \exp{(-\alpha x)} = \frac{\Gamma(n+1)}{\alpha^{n+1}}$$ You should be able to see your result immediately. The integral may be derived through integration by parts. - Yes! Thank you:) – B11b Jan 7 at 20:21 Make the substitution $u=2x$ and look up the definition of the Gamma function. -
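A quick numerical cross-check of the substitution in the answers may be helpful; the sketch below uses SciPy (which is of course not part of the original thread) and compares the integral with $\Gamma(2)\,2\,(1/2)^2 = 1/2$.

```python
# Check that  ∫_0^∞ 2x e^(-2x) dx  equals  Γ(2)·2·(1/2)^2 = 1/2.
from math import gamma, exp
from scipy.integrate import quad

numeric, _ = quad(lambda x: 2 * x * exp(-2 * x), 0, float("inf"))

# General formula from the answers: ∫_0^∞ x^n e^(-a x) dx = Γ(n+1)/a^(n+1);
# here n = 1, a = 2, and there is an extra factor of 2 in the integrand.
closed_form = 2 * gamma(2) / 2 ** 2

print(numeric, closed_form)   # both are 0.5 (up to quadrature error)
```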
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 8, "mathjax_display_tex": 5, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8921806216239929, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/117959/laurent-polynomials/117960
Laurent Polynomials Let $R$ be a commutative ring with identity. Is there any characterization for invertible elements of $R[x,x^{-1}]$ ? - 3 Answers I have read about this somewhere. I think it was as follows: $\sum_{i=-n}^n a_ix^i \in R[x,x^{-1}]$ is invertible iff $\sum a_i^2$ is invertible in $R$ and for all $i \not = j$, $a_ia_j$ is nilpotent. - 7 And there is a very nice proof. Hints: (1) The Laurent polynomial $\sum_i a_i x^i$ is invertible if and only if the product $\left(\sum_i a_i x^i\right)\cdot \left(\sum_i a_i x^{-i}\right)$ is invertible. (2) The inverse of an invertible Laurent polynomial which lies in the subring $R\left[x+x^{-1}\right]$ must itself lie in this subring $R\left[x+x^{-1}\right]$. (3) The subring $R\left[x+x^{-1}\right]$ is isomorphic to the polynomial ring $R\left[y\right]$. (4) A polynomial over a commutative ring is invertible if and only if its constant term is invertible and all its other terms nilpotent. – darij grinberg Jan 3 at 16:26 I just wanted to post the same constructive proof (which I learned here: math.stackexchange.com/questions/147661/…) – Martin Brandenburg Jan 3 at 16:42 Actually I'm not sure about it anymore, because in the "$\sum_i a_i x^i$ invertible $\Longrightarrow$ ..." direction, it only shows that $\sum_{i-j=k} a_ia_j$ is nilpotent for every $k\neq 0$, but not that the individual $a_ia_j$ are nilpotent for $i\neq j$. But something tells me this can be fixed. – darij grinberg Jan 3 at 16:43 Ok, it can be fixed. We need to prove that $a_ia_j$ is nilpotent for every $i$ and $j$ with $i-j=k$, for every $k\neq 0$. We go by descending induction over $k$, knowing that this is tautological for $k$ large enough (since Laurent polynomials have only finitely many coefficients). Now the final hint: (5) If finitely many elements of a commutative ring are given such that the pairwise products of these elements are nilpotent ("pairwise" means "pairs of two different ones" here) and the sum of these elements is nilpotent, then each of these elements is nilpotent. – darij grinberg Jan 3 at 16:54 But $1\cdot 1$ isn't nilpotent either. – darij grinberg Jan 3 at 19:57 Thinking geometrically in terms of the map ${\rm{Spec}}(R[x,1/x]) \rightarrow {\rm{Spec}}(R)$ and noting that being a unit amounts to being nonzero in the residue field at every prime, an element $f = \sum a_i x^i \in R[x,1/x]$ is a unit if and only if it has unit restriction to every fiber, which is to say that for every prime ideal $P$ of $R$ (with residue field $k(P)$) the image $f(P) := \sum a_i(P) x^i$ in $k(P)[x,1/x]$ is a unit. But since $k(P)$ is a field, this latter condition is exactly that $f(P)$ is a $k(P)^{\times}$-multiple of a power of $x$. That is, there is exactly one $i$ (depending perhaps on $P$) such that $a_i(P) \ne 0$, which can be equivalently expressed as the condition that $a_i(P)a_j(P) = 0$ in $k(P)$ for all $i \ne j$ and $\sum a_i(P) \ne 0$ in $k(P)$. Varying over all $P$, this necessary and sufficient condition says exactly that (1) $a_i a_j$ is nilpotent in $R$ when $i \ne j$ and (2) $\sum a_i \in R^{\times}$.
In the presence of (1), squaring the sum in (2) (which has no effect on whether or not it is a unit) and noting that adding a nilpotent element has no effect on being a unit shows that (2) can be replaced with (2') $\sum a_i^2 \in R^{\times}$ (thereby recovering the formulation in shatich's answer). - Invertible elements of Laurent algebras, and more generally of algebras of torsionfree, cancellable commutative monoids, are characterised in Theorem 11.3 and Corollary 11.4 of Gilmer's Commutative Semigroup Rings (Chicago Lectures in Mathematics, 1984). The proofs given there are quite accessible. -
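To make the criterion concrete, here is a small sanity check (the example, over $\mathbb{Z}/4\mathbb{Z}$, is mine and not from the thread): for $f = 1 + 2x$ we have $a_0a_1 = 2$ nilpotent and $a_0^2 + a_1^2 = 5 \equiv 1$ a unit, and indeed $f^2 = 1 + 4x + 4x^2 \equiv 1$, so $f$ is invertible.

```python
# Sketch: verify that 1 + 2x is a unit in (Z/4Z)[x, 1/x], as the criterion predicts.
M = 4  # coefficients live in Z/4Z, where 2 is nilpotent (2^2 = 0 mod 4)

def multiply(f, g):
    """Multiply Laurent polynomials given as {exponent: coefficient} dicts mod M."""
    h = {}
    for i, a in f.items():
        for j, b in g.items():
            h[i + j] = (h.get(i + j, 0) + a * b) % M
    return {e: c for e, c in h.items() if c}

f = {0: 1, 1: 2}          # f = 1 + 2x
print(multiply(f, f))     # {0: 1} -- f is its own inverse, hence a unit

print((1 * 2) ** 2 % M)        # 0 -> a_0*a_1 = 2 is nilpotent
print((1 ** 2 + 2 ** 2) % M)   # 1 -> a_0^2 + a_1^2 is a unit in Z/4Z
```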
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 52, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9370613098144531, "perplexity_flag": "head"}
http://physics.aps.org/articles/print/v4/43
# Viewpoint: Finally, results from Gravity Probe B , McDonnell Center for the Space Sciences and Department of Physics, Washington University, St. Louis, One Brookings Drive, St. Louis, MO 63130, USA Published May 31, 2011  |  Physics 4, 43 (2011)  |  DOI: 10.1103/Physics.4.43 Nearly fifty years after its inception, the Gravity Probe B satellite mission delivers the first measurements of how a spinning gyroscope precesses in the gravitational warping of spacetime. The great blues singer Etta James’ signature song begins, “At laaasst, my love has come along … .” This may have been the feeling on May 4th when NASA announced the long-awaited results of Gravity Probe B [1], which are appearing now in Physical Review Letters [2]. Over $47$ years and $750$ million dollars in the making, Gravity Probe B was an orbiting physics experiment, designed to test two fundamental predictions of Einstein’s general relativity. According to Einstein’s theory, space and time are not the immutable, rigid structures of Newton’s universe, but are united as spacetime, and together they are malleable, almost rubbery. A massive body warps spacetime, the way a bowling ball warps the surface of a trampoline. A rotating body drags spacetime a tiny bit around with it, the way a mixer blade drags a thick batter around. The spinning Earth does both of these things and this is what the four gyroscopes aboard the earth-orbiting satellite Gravity Probe B measured. The satellite follows a polar orbit with an altitude of $640$ kilometers above the earth’s surface (Fig. 1, top). The warping of spacetime exerts a torque on the gyroscope so that its axis slowly precesses—by about $6.6$ arcseconds (or $1.8$ thousandths of a degree) per year—in the plane of the satellite’s orbit. (To picture this precession, or “geodetic effect,” imagine a stick moving parallel to its length on a closed path along the curved surface of the Earth, returning to its origin pointing in a slightly different direction than when it started.) The rotation of the Earth also exerts a “frame-dragging” effect on the gyro. In this case, the precession is perpendicular to the orbital plane and advances by $40$ milliarcseconds per year. Josef Lense and Hans Thirring first pointed out the existence of the frame-dragging phenomenon in 1918, but it was not until the 1960s that George Pugh in the Defense Department and Leonard Schiff at Stanford independently pursued the idea of measuring it with gyroscopes. The Gravity Probe B (or GP-B, in NASA parlance) gyroscopes (Fig. 2) are coated with superconducting niobium, such that when they spin, the supercurrents in the niobium produce a magnetic moment parallel to the spin axis. Extremely sensitive magnetometers (superconducting quantum interference detectors, or “SQUIDs”) attached to the gyroscope housing are capable of detecting even minute changes in the orientation of the gyros’ magnetic moments and hence the precession in their rotation predicted by general relativity. At the start of the mission, the four gyros were aligned to spin along the symmetry axis of the spacecraft. This was also the optical axis of a telescope directly mounted on the end of the structure housing the rotors. Spacecraft thrusters oriented the telescope to point precisely toward the star IM Pegasi (HR $8703$) in our galaxy (except when the Earth intervened, once per orbit). In order to average out numerous unwanted torques on the gyros, the spacecraft rotated about its axis once every $78$ seconds. 
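Both predicted rates can be reproduced, at back-of-the-envelope level, from the standard formulas for a circular polar orbit: the geodetic (de Sitter) rate $\Omega_\text{geo} = \tfrac{3}{2}GMn/(c^2a)$ with orbital angular velocity $n=\sqrt{GM/a^3}$, and the orbit-averaged frame-dragging rate $\Omega_\text{fd} = GI\omega_\oplus/(2c^2a^3)$. The sketch below uses rounded Earth parameters of my own choosing, so it lands close to, but not exactly on, the mission's quoted values.

```python
# Rough check of GP-B's predicted precession rates (rounded, assumed constants).
import math

G   = 6.674e-11           # gravitational constant, m^3 kg^-1 s^-2
c   = 2.998e8             # speed of light, m/s
M   = 5.972e24            # Earth mass, kg
R   = 6.371e6             # Earth mean radius, m
I   = 0.3307 * M * R**2   # Earth moment of inertia (approximate)
w_E = 7.292e-5            # Earth rotation rate, rad/s
a   = R + 6.4e5           # orbit radius for ~640 km altitude, m

n = math.sqrt(G * M / a**3)                      # orbital angular velocity
geodetic   = 1.5 * G * M * n / (c**2 * a)        # de Sitter precession, rad/s
frame_drag = G * I * w_E / (2 * c**2 * a**3)     # orbit-averaged Lense-Thirring, rad/s

to_arcsec_per_year = math.degrees(1) * 3600 * 3.156e7
print(geodetic   * to_arcsec_per_year)           # ~6.6 arcsec per year
print(frame_drag * to_arcsec_per_year * 1e3)     # ~40 milliarcsec per year
```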
GP-B started in late 1963 when NASA funded the initial R&D work that identified the new technologies needed to make such a difficult measurement possible. Francis Everitt, a physicist at Stanford and a lead author on the current paper, became Principal Investigator of GP-B in 1981, and the project moved to the mission design phase in 1984. Following a major review of the program by a National Academy of Sciences committee in 1994, GP-B was approved for flight development, and began to collaborate with Lockheed-Martin and Marshall Space Flight Center. The satellite launched on April 20, 2004 for a planned 16-month mission, but another five years of data analysis were needed to tease out the effects of relativity from a background of other disturbances. Almost every aspect of the spacecraft, its subsystems, and the science instrumentation performed extremely well, some far better than expected. Still, the success of such a complex and delicate experiment boils down to figuring out the sources of error. In particular, having an accurate calibration of the electronic readout from the SQUID magnetometers with respect to the tilt of the gyros was essential. The plan for calibrating the SQUIDs was to exploit the aberration of starlight, which causes a precisely calculable misalignment between the rotors and the telescope as the latter shifts its pointing toward the guide star by up to $20$ arcseconds to compensate for the orbital motion of the spacecraft and the Earth. However, three important, but unexpected, phenomena were discovered during the experiment that affected the accuracy of the results. First, because each rotor is not exactly spherical, its principal axis rotates around its spin axis with a period of several hours, with a fixed angle between the two axes. This is the familiar “polhode” period of a spinning top and, in fact, the team used it as part of their analysis to calibrate the SQUID output. But the polhode period and angle of each rotor actually decreased monotonically with time, implying the presence of some damping mechanism, and this significantly complicated the calibration analysis. In addition, over the course of a day, each rotor was found to make occasional, seemingly random “jumps” in its orientation—some as large as $100$ milliarcseconds. Some rotors displayed more frequent jumps than others. Without being able to continuously monitor the rotors’ orientation, Everitt and his team couldn’t fully exploit the calibrating effect of the stellar aberration in their analysis. Finally, during a planned $40$-day, end-of-mission calibration phase, the team discovered that when the spacecraft was deliberately pointed away from the guide star by a large angle, the misalignment induced much larger torques on the rotors than expected. From this, they inferred that even the very small misalignments that occurred during the science phase of the mission induced torques that were probably several hundred times larger than the designers had estimated. What ensued during the data analysis phase was worthy of a detective novel. The critical clue came from the calibration tests. Here, they took advantage of residual trapped magnetic flux on the gyroscope. (The designers used superconducting lead shielding to suppress stray fields before they cooled the niobium coated gyroscopes, but no shielding is ever perfect.) This flux adds a periodic modulation to the SQUID output, which the team used to figure out the phase and polhode angle of each rotor throughout the mission. 
This helped them to figure out that interactions between random patches of electrostatic potential fixed to the surface of each rotor, and similar patches on the inner surface of its spherical housing, were causing the extraneous torques. In principle, the rolling spacecraft should have suppressed these effects, but they were larger than expected. The patch interactions also accounted for the “jumps”: they occurred whenever a gyro’s slowly decreasing polhode period crossed an integer multiple of the spacecraft roll period. What looked like a jump of the spin direction was actually a spiraling path—known to navigators as a loxodrome. The team was able to account for all these effects in a parameterized model. The original goal of GP-B was to measure the frame-dragging precession with an accuracy of $1%$, but the problems discovered over the course of the mission dashed the initial optimism that this was possible. Although Everitt and his team were able to model the effects of the patches, they had to pay the price of the increase in error that comes from using a model with so many parameters. The experiment uncertainty quoted in the final result—roughly $20%$ for frame dragging—is almost totally dominated by those errors. Nevertheless, after the model was applied to each rotor, all four gyros showed consistent relativistic precessions (Fig. 1, bottom). Gyro $2$ was particularly “unlucky”—it had the largest uncertainties because it suffered the most resonant jumps. When GP-B was first conceived in the early 1960s, tests of general relativity were few and far between, and most were of limited precision. But during the ensuing decades, researchers made enormous progress in experimental gravity, performing tests of the theory by studying the solar system and binary pulsars [3]. Already by the middle 1970s, some argued that the so-called parameterized post-Newtonian (PPN) parameters that characterize metric theories of gravity, like general relativity, were already known to better accuracy than GP-B could ever achieve [4]. Given its projected high cost, critics argued for the cancellation of the GP-B mission. The counter-argument was that all such assertions involved theoretical assumptions about the class of theories encompassed by the PPN approach, and that all existing bounds on the post-Newtonian parameters involved phenomena entirely different from the precession of a gyroscope. All these issues were debated, for example, in the 1994 review of GP-B that recommended its continuation. The most serious competition for the results from GP-B comes from the LAGEOS experiment, in which laser ranging accurately tracked the paths of two laser geodynamics satellites orbiting the earth. Relativistic frame dragging was expected to induce a small precession (around $30$ milliarcseconds per year) of the orbital plane of each satellite in the direction of the Earth’s rotation. However, the competing Newtonian effect of the Earth’s nonspherical shape had to be subtracted to very high precision using a model of the Earth’s gravity field. The first published result from LAGEOS in 1998 [5, 6] quoted an error for the frame-dragging measurement of $20$ to $30%$, though this result was likely too optimistic given the quality of the gravity models available at the time. Later, the GRACE geodesy mission offered dramatically improved Earth gravity models, and the analysis of the LAGEOS satellites finally yielded tests at a quoted level of approximately $10%$ [7]. 
Frame dragging has implications beyond the solar system. The incredible outpouring of energy from quasars along narrow jets of matter that stream at nearly the speed of light is most likely driven by the same frame-dragging phenomenon measured by GP-B and LAGEOS. In the case of quasars, the central body is a rapidly rotating black hole. In another example, the final inward spiral and merger of two spinning black holes involve truly wild gyrations of each body’s spin axes and of the orbit, again driven by the same frame-dragging effect, and these motions are encoded in gravitational-wave signals. Laser interferometric observatories on the ground, and in the future, a similar observatory in space, may detect these gravity waves. So there is a strong link between the physics Gravity Probe B was designed to uncover and that describing some of the most energetic and cataclysmic events in the universe. Even though it is popular lore that Einstein was right (I even wrote a book on the subject), no such book is ever completely closed in science. As we have seen with the 1998 discovery that the universe is accelerating, measuring an effect contrary to established dogma can open the door to a whole new world of understanding, as well as of mystery. The precession of a gyroscope in the gravitation field of a rotating body had never been measured before GP-B. While the results support Einstein, this didn’t have to be the case. Physicists will never cease testing their basic theories, out of curiosity that new physics could exist beyond the “accepted” picture. ### References 1. Press conference available at http://www.youtube.com/watch?v=SBiY0Fn1ze4. 2. C. W. F. Everitt et al., Phys. Rev. Lett. 106, 221101 (2011). 3. C. M. Will, Was Einstein Right? (Basic Books, Perseus, NY, 1993). 4. C. M. Will, Living Rev. Relativ. 9, 3 (2006); http://www.livingreviews.org/lrr-2006-3. 5. I. Ciufolini et al., Class. Quantum Gravit. 14, 2701 (1997). 6. I. Ciufolini et al., Science 279, 2100 (1998). 7. I. Ciufolini et al., in General Relativity and John Archibald Wheeler, edited by I. Ciufolini and R. A. Matzner (Springer, Dordrecht, 2010), p. 371. ### Highlighted article #### Gravity Probe B: Final Results of a Space Experiment to Test General Relativity C. W. F. Everitt et al. Published May 31, 2011
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 20, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9426846504211426, "perplexity_flag": "middle"}
http://mathhelpforum.com/differential-equations/172958-inhomogeneous-linear-second-order-ode-constant-coefficients.html
# Thread: 1. ## Inhomogeneous linear second-order ODE (constant coefficients) y''(x) +3(y'(x))+2y=4e^(2x)+7 this is my ODE, it is inhomogeneous linear and second order with constant coefficients. have i got this correct in saying the general solution is: y=(1/3)*e^(2x)+7/2 ??? please could someone verify this quickly for me? im just curious as to whether or not i should be adding some unknown constants or other terms as well? thanks 2. Originally Posted by situation y''(x) +3(y'(x))+2y=4e^(2x)+7 this is my ODE, it is inhomogeneous linear and second order with constant coefficients. have i got this correct in saying the general solution is: y=(1/3)*e^(2x)+7/2 ??? please could someone verify this quickly for me? im just curious as to whether or not i should be adding some unknown constants or other terms as well? thanks Take a look at the characteristic equation for the homogeneous problem: $m^2 + 3m + 2 = 0$ There are two solutions to this equation, meaning you have two terms in your homogeneous solution. Calling these solutions a and b, then the homogeneous solution will be $y_h(x) = Ae^{ax} + Be^{bx}$ -Dan 3. Originally Posted by topsquark Take a look at the characteristic equation for the homogeneous problem: $m^2 + 3m + 2 = 0$ There are two solutions to this equation, meaning you have two terms in your homogeneous solution. Calling these solutions a and b, then the homogeneous solution will be $y_h(x) = Ae^{ax} + Be^{bx}$ -Dan thanks! 4. Originally Posted by situation y''(x) +3(y'(x))+2y=4e^(2x)+7 this is my ODE, it is inhomogeneous linear and second order with constant coefficients. have i got this correct in saying the general solution is: y=(1/3)*e^(2x)+7/2 ??? You certainly would be wrong to say this is the general solution! A "general" solution to a differential equation always contains undetermined constants such that all solutions can be got by taking different values for those constants. Perhaps you meant to say "particular" solution. That's easy to check. If $y(x)= \frac{1}{3}e^{2x}+ \frac{7}{2}$, then $y'(x)= \frac{2}{3}e^{2x}$, and $y''= \frac{4}{3}e^{2x}$. Putting those into the given differential equation gives $y''+ 3y'+ 2y= \frac{4}{3}e^{2x}+ 2e^{2x}+ \frac{2}{3}e^{2x}+ 7= 4e^{2x}+ 7$ which does, in fact, satisfy the equation. Yes, that is a particular solution. Now, the "general solution" to the entire equation can be written by adding that to the general solution to the associated homogeneous equation, $y''+ 3y'+ 2y= 0$. please could someone verify this quickly for me? im just curious as to whether or not i should be adding some unknown constants or other terms as well? thanks
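A quick symbolic check of the discussion above (using SymPy, which is not part of the original thread) confirms both the particular solution and the two homogeneous terms:

```python
# Verify the particular solution of  y'' + 3y' + 2y = 4 e^(2x) + 7  and
# produce the general solution (particular part plus A e^(-x) + B e^(-2x)).
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')
ode = sp.Eq(y(x).diff(x, 2) + 3*y(x).diff(x) + 2*y(x), 4*sp.exp(2*x) + 7)

y_p = sp.Rational(1, 3)*sp.exp(2*x) + sp.Rational(7, 2)   # proposed in the thread
print(sp.checkodesol(ode, sp.Eq(y(x), y_p)))              # (True, 0)

print(sp.dsolve(ode, y(x)))  # y(x) = C1*exp(-2*x) + C2*exp(-x) + exp(2*x)/3 + 7/2
```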
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 9, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9134969711303711, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/38241/normalization-of-a-spin-like-quantity-in-matrix-mechanics
# Normalization of a spin-like quantity in matrix mechanics Suppose that there is a quantity in the Heisenberg picture of the following form: $A=u_1\Sigma_1 + u_2\Sigma_2 +u_3\Sigma_3$ I am not sure why $u_1,u_2,u_3$ are normalized so that ${u_1}^2 + {u_2}^2 + {u_3}^2 =1$. (The matrices $\Sigma$ are the Pauli matrices.) - What is the physical interpretation of $A$? In quantum physics, there is no normalization of operators. – C.R. Sep 25 '12 at 4:44 ## 1 Answer This is because you are doing a rotation transformation, which acts as an SU(2) on the state vector (the vector the matrices act on), but when you transform the Pauli operators themselves, it is an ordinary vector rotation of all three, so it preserves the length of $(u_1,u_2,u_3)$. This is a special case for two-state quantum mechanics: any SU(2) transformation preserves the length of the coefficients of the Pauli matrices in the expansion of any 2 by 2 matrix, because the Pauli matrices form an operator vector. -
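A complementary way to see what the normalization buys you: for a unit vector $u$, the operator $A = u_1\Sigma_1+u_2\Sigma_2+u_3\Sigma_3$ squares to the identity, so it has eigenvalues $\pm 1$ just like any single spin component. A small numerical sketch (illustrative only, not from the thread):

```python
# For a real unit vector u, (u . sigma)^2 = |u|^2 I = I, so u.sigma has eigenvalues +/-1.
import numpy as np

sigma = [np.array([[0, 1], [1, 0]], dtype=complex),   # sigma_1
         np.array([[0, -1j], [1j, 0]]),               # sigma_2
         np.array([[1, 0], [0, -1]], dtype=complex)]  # sigma_3

u = np.array([0.3, -0.5, 0.2])
u = u / np.linalg.norm(u)                 # enforce u1^2 + u2^2 + u3^2 = 1

A = sum(ui * si for ui, si in zip(u, sigma))
print(np.allclose(A @ A, np.eye(2)))      # True
print(np.linalg.eigvalsh(A))              # [-1.  1.]
```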
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 5, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9064005017280579, "perplexity_flag": "head"}
http://nrich.maths.org/384/clue
# Strange Rectangle ##### Stage: 5 Challenge Level: To crack this Tough Nut look for pairs of similar triangles and angles adding up to $90$ degrees and this leads to $PQRS$ being a cyclic quadrilateral with $SQ$ as diameter. You need to know that opposite angles of a cyclic quadrilateral add up to $180$ degrees and the angle at the centre of a circle is twice the angle at the circumference subtended by the same arc. Call the centre of the circle $O$, then use the converse of Pythagoras' Theorem to find the angle $POR$. The rest follows.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 6, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8993580937385559, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/126897/the-decimal-expansion-of-the-quotient-of-two-integers
# The decimal expansion of the quotient of two integers It is an exercise in a book on discrete mathematics.How to prove that in the decimal expansion of the quotient of two integers, eventually some block of digits repeats. For example: $\frac { 1 }{ 6 } =0.166\dot { 6 } \ldots$ and $\frac { 217 }{ 660 } =0.328787\dot { 8 } \dot { 7 } \ldots$ How to think of this?I just can't find the point to use the Pigeonhole Principle. Thanks for your help! - Probably you are intended to prove the result by discussing the school division procedure. In the formal sense, this is not enough, since we have not proved that the algorithm works. – André Nicolas Apr 1 '12 at 18:54 ## 2 Answers Let's proceed to the actual division : $\begin{array} {r|l} \boxed{217}\hphantom{000\;} & 660\\ \hline 2170\hphantom{000} & 0.3287\\ -1980\hphantom{000} & \\ \boxed{190}\hphantom{00\;} & \\ 1900\hphantom{00} & \\ -1320\hphantom{00} & \\ \boxed{580}\hphantom{0\;} & \\ 5800\hphantom{0} & \\ -5280\hphantom{0} & \\ \boxed{520}\hphantom{\;} & \\ 5200 & \\ -4620 & \\ \boxed{580} & \\ \end{array}$ The important point is that the remainders must be smaller than the quotient $660$ so that, after a finite number of operations, you must get $0$ or a remainder you got before. What will the next digit of the quotient be? And the next remainder? Hoping it clarified, - Assume the decimal is infinite (otherwise the 0 repeats). Imagine evaluating the quotient by long division. After each step after the numerator has become all 0's, you have to carry over something less than the denominator. Eventually you'll have to carry something you carried before, because you carry something on every step. - By "carry over", I presume you mean it's the remainder. But I wouldn't have known that if I hadn't already known how to do this. – Michael Hardy Apr 1 '12 at 17:17
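The pigeonhole argument above translates directly into a procedure: perform the long division, record each remainder, and stop as soon as a remainder repeats; the digits produced between its two occurrences form the repeating block. A small sketch (the function and its name are mine, not part of the thread):

```python
def repeating_block(numerator, denominator):
    """Return the decimal expansion of the fractional part of numerator/denominator
    as (non-repeating prefix, repeating block), found by tracking remainders."""
    digits, seen = [], {}          # seen: remainder -> index where it appeared
    r = numerator % denominator
    while r != 0 and r not in seen:
        seen[r] = len(digits)
        r *= 10
        digits.append(str(r // denominator))
        r %= denominator
    if r == 0:                     # terminating decimal: the digit 0 repeats
        return "".join(digits), "0"
    start = seen[r]
    return "".join(digits[:start]), "".join(digits[start:])

print(repeating_block(1, 6))      # ('1', '6')   i.e. 0.1(6)
print(repeating_block(217, 660))  # ('32', '87') i.e. 0.32(87) = 0.328787...
```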
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 5, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9419418573379517, "perplexity_flag": "head"}
http://mathhelpforum.com/number-theory/123514-quadratic-nonresidues-residues.html
# Thread: 1. ## quadratic nonresidues and residues How can I show that the number of quadratic residues modulo m is equal to the number of quadratic nonresidues modulo m in the reduced residue system modulo m? 2. Note that $1^2\equiv (p-1)^2 \pmod{p}$, $2^2\equiv (p-2)^2 \pmod{p}$, and so on. So the only distinct quadratic residues are $1^2, 2^2, \ldots, \left(\tfrac{p-1}{2}\right)^2 \pmod{p}$; but $p-1=|\mathbb{Z}_p^{\times}|$, hence the rest must be quadratic nonresidues. That is, we have $\tfrac{p-1}{2}$ quadratic residues and $\tfrac{p-1}{2}$ quadratic nonresidues. EDIT: Thought I'd clarify a bit: $x^2\equiv y^2 \pmod{p}$ if and only if $(x-y)\cdot (x+y)\equiv 0 \pmod{p}$; since $p$ is prime, either $x \equiv y \pmod{p}$ or $x \equiv p-y \pmod{p}$.
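As a quick numerical check of this count (for an odd prime modulus p, which is the case the argument above treats), here is a short Python sketch that is not part of the original thread:

```
def quadratic_residues(p):
    """The distinct nonzero quadratic residues modulo an odd prime p."""
    return {pow(x, 2, p) for x in range(1, p)}

for p in [3, 5, 7, 11, 13, 101]:
    residues = quadratic_residues(p)
    nonresidues = set(range(1, p)) - residues
    # Both sets have (p - 1) / 2 elements, as claimed in the answer above.
    assert len(residues) == len(nonresidues) == (p - 1) // 2
    print(p, sorted(residues))
```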
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 10, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8423141241073608, "perplexity_flag": "head"}
http://physics.stackexchange.com/questions/38780/odd-number-of-second-class-constraints?answertab=oldest
# Odd number of second class constraints (!) For my thesis, I have calculated the constraints for a system using Dirac method of constraint analysis. The problem is I got odd number of second class constraints (!), which gives me unusual numbers of degrees of freedom in phase space. I might have made some mistakes. Is there any other method beside Dirac, to analyze the constraints of the system? - ## 3 Answers If the constrained Hamiltonian system has a finite number of real degrees of freedom$^1$, and if all the constraints are regular, then it is mathematically impossible to have an odd number of second-class constraints. (The proof is very similar to the reason why a symplectic manifold or vector space must be even-dimensional.) Perhaps OP is actually considering a constrained Hamiltonian field theory with an infinite number of degree of freedom and an infinite number of second-class constraints? (Typically this happens because all the fields, say a position field $\phi(\vec{x},t)$ and a momentum field $\pi(\vec{x},t)$, are labeled by a continuous index, namely the space point $\vec{x}$). In that case, it does not make sense to label $\infty$ as an odd number. Example$^2$: A typical example of a second-class constraints in 1+1 dimension field theory with canonical equal-time Poisson brackets $$\tag{1} \{\phi(x,t),\pi(y,t)\}~=~ \delta(x-y),$$ $$\tag{2} \{\phi(x,t),\phi(y,t)\}~=~0,$$ $$\tag{3} \{\pi(x,t),\pi(y,t)\}~=~0,$$ is $$\tag{4} \chi(x,t)~:=~\pi(x,t) -\partial_x\phi(x,t).$$ Naively one may think of (4) as a single (i.e. odd!) second-class constraint, but it is really infinitely many second-class constraints labeled by the position $x$. Their equal-time Poisson brackets are $$\tag{5} \Delta(x,y)~:=~\{\chi(x,t),\chi(y,t)\}~=~ 2 \delta^{\prime}(x-y)$$ with a formal$^3$ inverse $$\tag{6} \Delta^{-1}(x,y) ~=~ \frac{1}{4}{\rm sgn}(x-y).$$ For another related example of second-class constraints in Hamiltonian field theory, see also e.g. this Phys.SE answer. $^1$ The definition of degrees of freedom (d.o.f.) is e.g. discussed in this Phys.SE post. (Note that there is also a field-theoretic notion of d.o.f., which is different. E.g. in pure QED in 3+1 dimensions, the photon has 2 physical polarizations, so one would say that pure QED has 2 physical d.o.f., etc. This is not the notion of d.o.f, that I'm considering here. If OP is counting field-theoretic d.o.f., there is no reason to be surprised to meet an odd number, cf. the Example.) $^2$ This example is sometimes referred to as a chiral/self-dual boson in 1+1 dimensions. $^3$ One should impose appropriate boundary conditions at $|x| \to \infty$. - The fields (I am working with) have finite number of DOF of themselves separately. My work was to analyze how they react if they all interact in the same field. In this case, is it possible to have infinite number of DOF? – aries0152 Oct 2 '12 at 8:59 @aries0152: If you would like a more focused response about your system of second-class constraints, you would have to display the explicit form in the question. – Qmechanic♦ Oct 2 '12 at 9:53 – aries0152 Oct 2 '12 at 12:30 Right, that paper considers field theory, i.e. infinitely many d.o.f. – Qmechanic♦ Oct 2 '12 at 13:37 So with infinite number of dof, does the commutation chain breaks in a point (Like $\dot \phi=0$) ? Or it just continue producing new constraints (second class) ? – aries0152 Oct 2 '12 at 17:32
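As an aside, here is a small numerical illustration (my own, not from the thread) of the linear-algebra fact behind the first answer's parenthetical remark: the matrix of Poisson brackets among second-class constraints is antisymmetric and has to be invertible, and an antisymmetric matrix of odd size always has determinant zero, which is why a regular system with finitely many degrees of freedom cannot have an odd number of second-class constraints.

```
import numpy as np

rng = np.random.default_rng(0)

def random_antisymmetric(n):
    """A generic antisymmetric n x n matrix, standing in for {chi_i, chi_j}."""
    b = rng.standard_normal((n, n))
    return b - b.T

for n in (3, 5, 7):                                    # odd number of constraints
    print(n, np.linalg.det(random_antisymmetric(n)))   # zero up to rounding noise
for n in (2, 4, 6):                                    # even number of constraints
    print(n, np.linalg.det(random_antisymmetric(n)))   # generically nonzero
```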
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 13, "mathjax_display_tex": 6, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9209670424461365, "perplexity_flag": "head"}
http://scicomp.stackexchange.com/questions/2974/drawing-3d-projection-of-complex-surface
# Drawing 3d projection of complex surface I have a complex surface (real dimension 2) in $\mathbb{C}^2$ with coordinates $(z,w)$, given explicitly: for any $\xi \in \mathbb{C}$ I know the points $w(\xi)$ where the surface meets the complex line $z = \xi$. I have to draw its projection onto a fixed 3D plane. Please help me with an algorithm. - 1 A slightly friendlier way of asking for help -- maybe including a description of what you have already tried -- would probably get you more answers. – Wolfgang Bangerth Aug 1 '12 at 15:10
## 1 Answer Since you have an explicit parametric representation, the easiest way is direct rasterization of a rectilinear patch. You don't say what kind of projection you want, so for concreteness say we want an orthographic (orthogonal) projection onto a 3D hyperplane defined by a linear function $A : \mathbb{R}^3 \to \mathbb{R}^4 = \mathbb{C}^2$. Given a point $y \in \mathbb{R}^4$, the projection is defined by minimizing $$|y - Ax|^2 = |y|^2-2y^TAx+x^TA^TAx$$ over $x \in \mathbb{R}^3$. The minimum is attained at $$x = (A^T A)^{-1} A^T y$$ Now pick a 2D grid of points $z_i \in \mathbb{C}=\mathbb{R}^2$, map them to 4D with your function, and project them back to 3D with the above formula. The result can be passed to a suitable plotting function (I don't know Matlab so I don't know which). -
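A minimal Python/NumPy sketch of the recipe in this answer; the sample parametrization w(xi) = xi^2 and the particular matrix A are made up for illustration, and the resulting 3D points can be handed to any plotting routine.

```
import numpy as np

# Sample parametrization of the surface: w(xi) = xi**2, so the point in C^2 is
# (xi, xi**2); identify C^2 with R^4 via (Re z, Im z, Re w, Im w).
def surface_point(xi):
    w = xi ** 2
    return np.array([xi.real, xi.imag, w.real, w.imag])

# Columns of A span the 3D hyperplane we project onto (an arbitrary choice here).
A = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [0.0, 0.0, 0.5]])

def project(y):
    """Coordinates x of the orthogonal projection: x = (A^T A)^{-1} A^T y."""
    return np.linalg.solve(A.T @ A, A.T @ y)

# A 2D grid of xi in C, mapped to R^4 and projected back to 3D.
grid = [complex(s, t) for s in np.linspace(-1, 1, 21) for t in np.linspace(-1, 1, 21)]
points = np.array([project(surface_point(xi)) for xi in grid])
print(points.shape)   # (441, 3); feed these to any 3D plotting routine
```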
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 9, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9183722138404846, "perplexity_flag": "head"}
http://shlomifish.org/MathVentures/dodeca.html
What’s the Volume of a Dodecahedron? The picture above shows a dodecahedron. It’s a solid body which has 12 perfect pentagons of the same size as its sides. I wondered what is the volume of such a body. What is the volume of a dodecahedron with an edge-length of a? The solution can be found some space below: Solution: Since a dodecahedron is a perfect solid with 12 identical sides at the same distance from its centre, we can divide it into twelve identical pyramids. The corner of each pyramid is found at the dodecahedron’s centre, and the base is one of the sides. The volume of a pyramid is the area of the base multiplied by its height divided by 3. We can calculate the height by the angle between the base and one of the surface sides. Since two pyramids are adjacent at every edge of the dodecahedron, we can find it by taking the angle between two adjacent sides of the dodecahedron and dividing it by 2. To find that, let’s take a corner of the dodecahedron, and form a triangle at the points which are at a certain distance along the edges. We get the following picture: From point C, which is found somewhere along the edge, let’s lower two perpendiculars to the edge, down to the edges of the base triangle. We get the triangle CAB. Now since the surface sides of this pyramid are isosceles triangles, and the angle of a perfect pentagon is equal to: $$\frac{180° \cdot 3}{5} = 108°$$ Then the angle CDA is equal to: $$\frac{180°-108°}{2}=36°$$ Since angle ACD is a right angle, we find that CA (and CB) is equal to AD * sin(36°), and since the base triangle is perfect, AB is equal to AD. Thus, we know that: $\frac{AE}{CA} = \frac{\frac{AB}{CA}}{2} = \frac{\frac{AD}{CA}}{2} = \frac{\frac{1}{\frac{CA}{AD}}}{2} = \frac{1}{2\sin{36°}}$ Thus, the angle ACE, which is half the angle between two sides of the dodecahedron and the angle we seek, is equal to: $$\arcsin{\frac{1}{2 \sin{36°}}}$$ Approximately 58.28°. Now, let’s take a look at a side pyramid of the dodecahedron and calculate its volume: Since the angle of a perfect pentagon is equal to 108°, angle OBA, which is half of it, is equal to 54°. Since AB is a/2, OA is equal to tan(54°)*a/2. Now, the base of the pyramid is made of 5 equal triangles, each of which has a base of length a and a height of OA. Thus, the area of the base is 5*(OA*a/2) = 5*tan(54°)*a/2*a/2 = 5/4*tan(54°)*a^2. In the previous section we found out that the angle OAD is equal to arcsin(1/(2*sin(36°))), and therefore: $OD = OA \cdot \tan{\left[\arcsin{\left(\frac{1}{2 \sin{36°}}\right)}\right]} = \frac{1}{2} \tan{54°} \tan{\left[\arcsin{\left(\frac{1}{2 \sin{36°}}\right)}\right]} \cdot {\bf a}$ The volume of the pyramid is the area of its base multiplied by its height (OD) divided by 3, and so it is equal to: $\frac{5}{4} \cdot \tan{54°} \cdot {\bf a}^2 \cdot \\* \frac{1}{2} \cdot \tan{54°} \cdot \tan{\left[\arcsin{\left(\frac{1}{2 \sin{36°}}\right)}\right]} \cdot {\bf a} \cdot \frac{1}{3} = \\* \frac{5}{24} \cdot \tan^2{54°} \cdot \tan{\left[\arcsin{\left(\frac{1}{2 \sin{36°}}\right)}\right]} \cdot {\bf a}^3$ Since there are 12 such pyramids in a dodecahedron, its volume is equal to this volume multiplied by 12. 
Thus, we get that the volume of a dodecahedron is: $\frac{5}{2} \cdot \tan^2{54°} \cdot \tan{\left[\arcsin{\left(\frac{1}{2 \sin{36°}}\right)}\right]} \cdot {\bf a}^3 \approx 7.66\,{\bf a}^3.$
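As a sanity check of this formula, here is a short Python computation (not part of the original page) that follows the construction above step by step and compares the result with the commonly quoted closed form (15 + 7√5)/4 · a³:

```
from math import asin, radians, sin, sqrt, tan

a = 1.0
half_dihedral = asin(1 / (2 * sin(radians(36))))   # the angle ACE, about 58.28 degrees
apothem = tan(radians(54)) * a / 2                 # OA
height = apothem * tan(half_dihedral)              # OD
base_area = 5 * apothem * a / 2                    # five triangles with base a and height OA
volume = 12 * (base_area * height / 3)             # twelve pyramids

print(volume)                      # ~7.6631
print((15 + 7 * sqrt(5)) / 4)      # the closed form usually quoted; same value
```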
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 4, "mathjax_display_tex": 3, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9444800615310669, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/19591?sort=newest
## Dimension of subalgebras of a matrix algebra If n is given and A is a subalgebra of M_n(C), the algebra of n-by-n matrices with entries in the field of complex numbers, then what are the possible values of the dimension of A as a vector space over C? - Ok, I just found out that there is an interesting result due to Schur which gives a partial answer to my question. Here it is for those who are interested: If F is a field, then there exists a "commutative" subalgebra A of M_n(F) with dim_F A = k if and only if k \leq [n^2/4] + 1, where [ ] is the floor function. I'm starting to think that there exists a subalgebra of M_n(F) of any dimension! – abcba Mar 28 2010 at 8:02 1 @abcba: here's a hint for constructing that commutative subalgebra: write an n x n matrix as (A B;C D) with A,B,C,D n/2 x n/2 matrices, and then consider the space with A=C=D=0. To get further start eating into B. Add scalar multiples of the identity if you're the sort of person whose algebras have to contain 1. – Kevin Buzzard Mar 28 2010 at 8:05 3 Is there an 8 dimensional subalgebra of M_3? – Jonas Meyer Mar 28 2010 at 8:35 3 One can get all dimensions up to $n(n+1)/2$ by using subalgebras of upper triangular matrices. We can also get some larger examples by the construction $(A\ B;0\ D)$ where $A$ and $D$ run through given subalgebras of $M_k$ and $M_{n-k}$ and $B$ is arbitrary. Some dimensions are not accessible by these constructions, e.g., dimension $8$ when $n=3$. Are there any subalgebras with these dimensions? – Robin Chapman Mar 28 2010 at 8:49 3 A nice proof of Schur's theorem is at M. Mirzakhani `A simple proof of a theorem of Schur' Amer. Math. Monthly 105 (1998), 260-262. – Robin Chapman Mar 28 2010 at 8:51
## 3 Answers Let $E$ be a $\mathbb C$-vector space of dimension $n$. Among other results, I proved the following two in an article to appear in the French journal Quadrature: • Suppose that $k$ satisfies the inequalities $k \ge 2$ and $k^{2}\le n$. Let $\mathcal{A}$ be a subalgebra of $\mathcal{L}(E)$ satisfying $n^{2}-kn+k^{2}-k+1 < \dim \mathcal{A} < n^{2}-kn+n.$ Then $\mathcal{A}$ satisfies $\dim \mathcal{A}=n^{2}-kn+k^{2}.$ • Let $n$ be a natural number and $p$ an integer in the interval $[0,n^{2}].$ Write $p$ in the form $p=n(n-k)+t,\ 0\le t \le n-1$. Then there exists a subalgebra of dimension $p$ in $\mathcal M_n (\mathbb C )$ if and only if there exists a subalgebra of dimension $t$ in $\mathcal M_k(\mathbb C)$. - 1 Is there a copy of this paper available on the internet? – S. Carnahan♦ Jul 4 2011 at 2:42 My article can be consulted at: logique.jussieu.fr/~chalons/z2009/articleabou.pdf Happy reading – abou Jul 8 2011 at 0:01
I think that the fact that every proper subalgebra is contained in a maximal parabolic follows immediately from Jacobson's density theorem, because if a subalgebra does not preserve any subspace, then $C^n$ is a simple module for it. This is of course true over any field. In the case of Lie algebras rather than associative algebras, a classification of maximal subalgebras of finite dimensional simple Lie algebras over the complex numbers was obtained by Dynkin. 
In the positive characteristic case a classification can probably be obtained using arguments which were used for the classification of maximal subgroups of finite simple groups. This is at least what I understood talking to Liebeck and Seitz, but I am not an expert on these matters. However, in the Lie case an elementary argument that the maximal dimension of a proper subalgebra of $sl_n(F)$ is $n^2-n$, assuming $F$ has characteristic different from 2, can be found in Y. Barnea and A. Shalev, Hausdorff dimension, pro-p groups, and Kac-Moody algebras, Trans. Amer. Math. Soc. 349 (1997), 5073-5091 (Theorem 1.7). Other related material (related to possible dimensions), but more on the group-theoretic side, can be found in the same paper. A generalization of this to other classical Lie algebras can be found in Abért, Miklós; Nikolov, Nikolay; Szegedy, Balázs, Congruence subgroup growth of arithmetic groups in positive characteristic, Duke Math. J. 117 (2003), no. 2, 367--383 (Theorem 4). -
Rough answer: almost all small dimensions can appear, but there are some restrictions on large dimensions. For example, considering the algebra generated by a single matrix, all dimensions between 1 and n appear. Taking centralizers of these, all numbers of the form sum a_i^2, where a is a partition of n, appear. In general, consider k-tuples of positive integers a and b such that their scalar product a.b=n (a should be thought of as the Morita setting, b as the matrix-sizes of the semi-simple part of the subalgebra); then any number of the form sum b_i^2 + subsum b_ib_j is possible (here 'subsum' means that one takes all terms b_xb_y for x,y in a substring 1 <= i_1 < i_2 < ... < i_l <= k, for any 0<=l<=k). Edit: the subsum gives the dimension of the Jacobson radical. This answer cannot be the final one, as it only detects the subalgebras of global dimension 1. For example, any n-dimensional algebra can be embedded in nxn matrices. There are some obvious restrictions with respect to large dimensions. For example, there cannot be an 8-dimensional subalgebra of 3x3 matrices, as its semi-simple part can be at most C x M_2(C), and so its dimension must be at most 7. For general n there cannot be subalgebras with dimensions strictly between the dimension of the largest parabolic subgroup of GL(n) and n^2. Edit: a closely related question can be found here: problems concerning subspaces of mxm matrices. -
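A small Python sketch (my own) of the dimension criterion quoted in the first answer above; the treatment of the small cases p < n (filled in directly, e.g. by the span of the identity and a few matrix units, in the spirit of the comment hints) and of the zero algebra is my reading of the statement, so treat the output as indicative rather than authoritative.

```
from functools import lru_cache

@lru_cache(maxsize=None)
def dims(n):
    """Dimensions of subalgebras of M_n(C) predicted by the quoted criterion."""
    if n == 0:
        return frozenset({0})
    out = {0}                      # convention here: allow the zero algebra
    for p in range(1, n * n + 1):
        k, t = n - p // n, p % n
        if k == n:                 # p < n: realizable directly, e.g.
            out.add(p)             # span{I, E_12, ..., E_1p}
        elif t in dims(k):         # the criterion quoted in the answer
            out.add(p)
    return frozenset(out)

print(sorted(dims(3)))   # 8 is missing, matching the discussion in the comments
print(sorted(dims(4)))   # 14 and 15 drop out as well
```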
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 33, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.7942304611206055, "perplexity_flag": "middle"}
http://en.wikipedia.org/wiki/Talk:Orthogonality
# Talk:Orthogonality
## Orthogonality existed long before computers! Those trained in computer science think they invented everything known before computers existed: integrals, mathematical induction, orthogonality, etc. I've left the page a bit of a messy hodge-podge, but far better than what was here. Michael Hardy 02:27, 13 Jan 2004 (UTC) If non-orthodox is "heterodox", is "heterogonal" non-orthogonal? (Google has one hit for that word, in an unmaths context.) 142.177.126.230 21:05, 5 Aug 2004 (UTC) It is a needless complication of the definition of orthogonality to bring in the subscripts i and j when one is only trying to define what it means to say that two functions are orthogonal. Also, it is incorrect, unless one has first given the subscripts some meaning. Michael Hardy 01:45, 6 Sep 2004 (UTC)
## Examples I'd like an example of two simple functions that are orthogonal. - Omegatron 16:22, Sep 29, 2004 (UTC) Take two orthogonal vectors and then change basis to {1, t, t^2, ..., t^n}? Dysprosia 22:32, 29 Sep 2004 (UTC) No, that won't work until you specify a measure (or "weight function") with respect to which those are orthogonal. See for example Chebyshev polynomials, Legendre polynomials, and Hermite polynomials (all exceptions to the rule that it is better to use singular words as Wikipedia article titles). Those are examples. Also, see Bessel functions. Michael Hardy 00:32, 30 Sep 2004 (UTC) Well, it does depend on the inner product you use to determine orthogonality, though. But yes, if you use the inner product defined in the article, it won't work. Dysprosia 01:48, 30 Sep 2004 (UTC) Some of us don't know what that means... Aren't sin(x) and cos(x) orthogonal? Also, certain pulse trains? - Omegatron 22:53, Sep 29, 2004 (UTC) If you use the inner product from the article, and take the integral from -a to a with weight one, sin(x) and cos(x) are indeed orthogonal functions (calculate the integral for yourself). Dysprosia 01:48, 30 Sep 2004 (UTC) Also, please explain why the integral is a to b instead of -∞ to +∞? - Omegatron 16:24, Sep 29, 2004 (UTC) No reason, though you can define another inner product with those bounds and then consider orthogonality with respect to that inner product. Dysprosia 22:32, 29 Sep 2004 (UTC) I see that the a and b are used in the article on inner product, too. Omegatron 22:53, Sep 29, 2004 (UTC) You need to understand that when we set the limits of an integral as [a, b], then a and b can be whatever we want them to be, including minus or plus infinity, as long as the limits are taken to be real and not complex. 98.67.108.12 (talk) 00:22, 25 August 2012 (UTC) It is important to realize that functions are orthogonal only on a predefined interval. In other words, sin(x) and cos(x) are not orthogonal, generally speaking. They are only orthogonal on the interval [a, b] if |b - a| = n*pi where n is a nonzero integer. This is also why inner products (for sinusoids) are defined on [a, b] and not -∞ to +∞. 
Severoon 22:41, 1 May 2006 (UTC) ## Missing bracket There is a missing opening square bracket on the integration example image, I believe. --anon Fixed now. I think that bracket was left out on purpose. But I agree with you that things look better with the bracket in. Oleg Alexandrov 18:25, 15 May 2005 (UTC) ## Vectors for some positive integer a, and for 1 ≤ k ≤ a-1, these vectors are orthogonal, for example (1,0,0,1,0,0,1,0)T,(0,1,0,0,1,0,0,1)T ,(0,0,1,0,0,1,0,0)T are orthogonal. interesting. So this is where discretely sampled signals like ```...0,0,1,0,0,1,1... ...1,0,0,0,1,0,0... ...0,1,0,1,0,0,0... ``` come from? Also, these signals are orthogonal too, according to another site I saw. Can we extrapolate the signal processing version from the many dimensional vector version? Maybe graphs? - Omegatron 13:41, Sep 30, 2004 (UTC) They appear to be. Calculate the dot product of these "signals", so to speak, across each triplet. If they sum to 0 for all the bit triplets over your time period they are orthogonal. I don't understand what you mean about "extrapolate the signal processing version from the many dimensional vector version". Dysprosia 14:04, 30 Sep 2004 (UTC) the difference being that this is a discrete function instead of a vector, function $f[n] = ...,0,1,0,0,4,0,0,-1,0,2,...$ vector $\mathbf{a} = (...,0,1,0,0,4,0,0,-1,0,2,...)$ but I guess they can be seen as the same thing from different perspectives? Can you have infinite-dimensional vectors? The discrete-"time" function can be "converted" to a continuous-time function (think sampling), though, which can also be orthogonal to another similar function if they have the same "shape" relationship... - Omegatron 14:40, Sep 30, 2004 (UTC) Heh. Lots of "quotes". I can explain better later. I will draw some pictures... - Omegatron 14:41, Sep 30, 2004 (UTC) Yes, you can have vectors of infinite dimension. You know there is in fact nothing really special about any of these definitions of orthogonality - what is the important property is the inner product, which determines whether two vectors in a vector space are orthogonal or not, or determines a "length" or not. Change the inner product, and these definitions change also. Dysprosia 14:49, 30 Sep 2004 (UTC) Not sure that I understand what you're trying to say. So you could define your own "inner product" for which a cat is orthogonal to a dog? - Omegatron 19:55, Sep 30, 2004 (UTC) Metaphorically, yes, as long as the inner product you define is in fact an inner product. There are some requirements on this, see inner product. Literally, you have to define what you mean by a cat and dog first before you can say they are orthogonal to each other... ;) Dysprosia 01:07, 1 Oct 2004 (UTC) Can you have infinite-dimensional vectors? Except that it's the space that is infinite-dimensional, rather than the vectors themselves. The two most well-known infinite-dimensional vector spaces are $\ell^2$, which is the set of all sequences of scalars such that the sum of the squares of their norms is finite (for example (1, 1/2, 1/3, ...) is such a vector because 12 + (1/2)2 + (1/3)2 + ... is finite) and L2, the set of all functions f such that $\int_\mathrm{whatever\ space}\left|f\right|^2 < \infty.$ ("Whatever space" could be for example the interval from 0 to 2π, or could be the whole real line, or could be something else.) Michael Hardy 19:30, 30 Sep 2004 (UTC) Yes. So what is the connection between the discrete function with an infinite number of points ...,f[-1],f[0],f[1],... 
and a vector with an infinite number of dimensions (...,x-1,x0,x1,...)? Are these the same concept said in two different ways or are there subtle differences? For instance, in MATLAB or GNU Octave you use vectors or matrices for everything, and use them to represent strings of sampled data or two dimensional arrays of data, both of which could also be thought of as functions of the vector or matrix coordinates. Not that this is a site for teaching people math, but it could point out things that need to be included in various articles.:-) Omegatron 19:55, Sep 30, 2004 (UTC) Let xi = f(i)? Dysprosia 01:07, 1 Oct 2004 (UTC) ## Orthogonal curves This article does not mention orthogonal curves or explain what it means that two circles are orthogonal to each other. Hyperbolic geometry mentions orthogonal circles, but I had to look up the exact meaning elsewhere (more precisely, on MathWorld). My question is, should orthogonal curves and circles be covered in this article, or do they qualify as a "related topic"? Fredrik | talk 03:16, 21 Oct 2004 (UTC) The concept is not really that different, though Mathworld's geometric treatment may merit a seperate page. One could perhaps say generally that two curves parametrized by functions f and g are orthogonal, if where they interesect ∇f.∇g = 0, though I'm not sure that's a decent, established, or useful definition... Dysprosia 08:14, 21 Oct 2004 (UTC) A section on orthogonal curves must certainly be added to the article.--Shahab (talk) 08:43, 8 March 2008 (UTC) ## Quantum mechanics The article states that In quantum mechanics, two wavefunctions $\psi_m$ and $\psi_n$ are orthogonal unless they are identical, i.e. m=n. This means, in Dirac notation, that $< \psi_m | \psi_n > = 0$ unless m=n, in which case $< \psi_m | \psi_n > = 1$. The fact that $< \psi_m | \psi_n > = 1$ is because wavefunctions are normalized. This is wrong in the general case. The author probably supposed that $\psi_m$ and $\psi_n$ are eigenstates of the same observable relating to two different eigenvalues, in which case it is trivially true. The definition of orthogonality in quantum mechanics is the same as in the $L^2$ space in mathematics, so that this precision can be removed without there lacking anything.--82.66.238.66 20:11, 16 April 2006 (UTC) It is "trivial"? Not for everyone! I'm not an expert in quantum mechanics--my specialisation is in complex systems--so I will not defend my original statement down to the last letter. However, I do feel strongly that the comments on quantum mechanics should be modified, not removed. The reason that I added the paragraph in question is because when I was studying for my last quantum mechanics class, I found that Wikipedia did not answer the questions that I had about orthogonality. If you simply take out the stuff on quantum mechanics, then other people will likely come along with the same queries as me--and they'll be unsatisfied too. If you want to clarify that it's for the two eigenvalues of the same observable, that's fine. But just because it's not the most general case doesn't mean it's not an important one. Ckerr 16:12, 19 April 2006 (UTC) Since there has been no reply, I'm going to reinstate the part on QM. Please correct it if it needs correcting, but please don't just axe it! Ckerr 09:04, 25 April 2006 (UTC) If I may give my opinion: I think this should be removed or moved. The definition of orthogonality already caters for the quantum mechanics explanation. 
The only reason the two wavefunctions are said to be orthogonal is because they ARE orthogonal in the mathematical sense, therefore it does not make sense for this entry to be under "Derived Meanings". I will give a chance for the author to reply to my suggestion, but if I don't hear from you in 2 or 3 weeks I'll move it to the "Examples" section. Maszanchi 09:38, 16 June 2007 (UTC) I have performed the move and rewrote the section according to the comment above. The part on normality was removed as it didn't seem relavent to orthogonality. TomC phys 09:04, 5 September 2007 (UTC) ## Weight Function? Why is there mention of a weight function w(x) in the definition of the inner product? Its presence plays no role whatsoever in the definition of the inner product of f and g, so why not remove it? (I understand the role of a weight function in PDEs like the heat eqn, but isn't it unnecessary and extraneous in a page on orthogonality?) Severoon 22:45, 1 May 2006 (UTC) Well, I suppose weight functions aren't truly essential to the notion being discussed, but they make it much more accessible. We could just say "Given an inner product $\langle f, g \rangle$, f and g are orthogonal if ....". But the use of weight functions gives a good motivation for the construction of inner products, and for the notion that one can construct different inner products, and hence different notions of orthogonality, on the same underlying set of objects (e.g. polynomials.) On second thought, I see your point. The section isn't very clear. I'll fix it. William Ackerman 15:46, 12 May 2006 (UTC) I agree that the section is not clear. In fact, it's so unclear it seems to have led to confusion right in the examples section: "These functions are orthogonal with respect to a unit weight function on the interval from −1 to 1." (See the third example.) In fact, the functions in the example are not "orthogonal w.r.t. a unit weight function"...they're orthogonal to each other on the specified interval! This definitely needs to be changed. The introduction of a weight function should be brought up in the context of a physical example, something like the heat equation on a 1D conductive rod of nonuniform density. Short of an explicit physical application, it just seems to be confusing things. Severoon 23:34, 12 May 2006 (UTC) ## Emergency fix I have just put in an emergency fix for the question raised by 66.91.134.99, and left a note on his talk page. This was a proof that an orthogonal set is a linearly independent set. It's not at all clear that putting in this proof is the right thing for the article as a whole -- I just needed a quick fix. (It's not even clear that this is the best proof. It was off the top of my head. And it is definitely not formatted well.) Maybe the linear independence is truly obvious, and saying anything about it is just inappropriate for the level of the discussion. Maybe the proof/discussion should be elsewhere. If/when someone has the time to look over the whole article, and think about the context of the orthogonality/independence issue, and figure out the right way to deal with all this, it would be a big help. William Ackerman 16:08, 21 July 2006 (UTC) Thanks for the proof. But I tend to agree with your doubt that the proof was not the right thing for the article as a whole, especially that early in the article. Proofs are not really encyclopedic to start with (see also Wikipedia:WikiProject Mathematics/Proofs). I removed the proof for now. 
Oleg Alexandrov (talk) 08:33, 22 July 2006 (UTC) ## On radio communications 1 The radio communications subsection claims that TDMA and FDMA are non-orthogonal transmission methods. However, in the theoretically ideal situation, this is not the case. For FDMA, note the orthogonality of sinusionds of different frequencies. Thus, restricting users to a certain frequency range IS orthogonal so long as the frequency ranges are nonoverlapping. This is similarly true for the TDMA case. Assume that each user is restricted to transmit in in specific, non-overlapping time, i.e., $f_1(x) = 0 \; \forall x \notin [a,b]$ and $f_2(x) = 0 \; \forall x \notin [b,c]$, so that the inner product $\int_{-\infty}^{\infty} f_1(x)f_2^*(x) dx = 0$. ## On radio communications 2 I agree with the comment already present in the comment page. The sentence "An example of an orthogonal scheme is Code Division Multiple Access, CDMA. Examples of non-orthogonal schemes are TDMA and FDMA," is wrong and should be deleted. All in all the section on Radio Communications is not satisfactory as it is. I would delete and replace with something such as the following text, or similar one: "Ideally FDMA (Frequency Division Multiple Access) and TDMA (Time Division Multiple Access) are both orthogonal multiple access techniques, and they achieve orthogonality in the frequency domain and in the time domain, respectively. In practice all orthogonal techniques are subject to impairements, which however can be controlled to any desired level with appropriate design. In the case of FDMA the loss of orthogonality arises due to the imperfection of spectrum shaping, and it can be combatted with appriapriate guard bands. In the case of TDMA, the loss of orthogonality is the result of imperfect system syncronization. The question can be asked if there are other "domains" in which orthogonality can be imposed, and the answer is that a third domain is the so called "code domain". This leads to CDMA (Code Division MA), which is a techniques which impresses a codeword on top the digital signal. If the set of codewords is chosen appropriately (e.g. Walsh-Hadamard codes), and some more conditions are assumed on the signal and on the channel conditions, CDMA can be orthogonal. However, in many conditions, to guarantee near ideal orthogonal condition for the CDMA implementations is more critical. In packet communications, with noncoordinated terminals, other MA techniques are used. For example the Aloha technique originally invented for computer communications via satellite [FALSE. It was made for terrestrial radio communications in Hawaii.] Since the terminals transmit as soon as they have a packet ready, in an uncoordinated manner, packets can collide at the receiver, so producing interference. Therefore Aloha is one example on nonorthogonal MA technique, even under ideal operational conditions." 213.230.129.21 09:55, 1 October 2006 (UTC) The above totally disregards the concept of "slotted Aloha", which has been widely used. 98.67.108.12 (talk) 00:22, 25 August 2012 (UTC) Response to changes inserted regarding Orthogonality and Radio Communications: The statement that TDMA and general FDMA are examples of orthogonal schemes, while CDMA is not, is incorrect. There are many in the wireless industry who erroneously believe that orthogonality is defined by whether or not two things interfere or produce "cross talk". However, that is NOT what defines orthogonality. 
Orthogonality is a mathematical property with well-defined and SPECIFIC criteria: ``` Integral [ Fi(x) * Fj(x) dx ] = kronecker_delta (i.e. non zero if and ONLY if i = j) ``` Note the definition does NOT contain a windowing function (or a weighting function). Two non-coincidental events that do not interfere (0 sum), are NOT necessarily orthogonal. A bus and a train passing over a railroad crossing 15 minutes apart do not interfere. This does NOT indicate that buses and trains are orthogonal. A TDMA message sent duromg one second followed by a second one sent some time later do not interfere because they are not simultaneous and therefore never have the opportunity to interfere. This does NOT indicate they are orthogonal. IF TDMA signals were orthogonal, why then do signals sent from adjacent cells within the same network interfere with each other? Arbitrarily injecting a windowing function into the definition would suggest that ANY two functions could be orthogonal, which absolutely is not true. If we transmit a message convolved with a polynomial in "x" and 2 seconds later transmit a message convolved with [sin^2](x), the two (non-simultaneous) messages will not interfere. This is not because x^2 and [sin^2](x) are orthogonal (they are NOT), but because they were sent at completely different times. Orthogonal-FDM IS orthogonal (by design), but generic FDMA is NOT orthogonal. If FDMA were orthogonal, then why would we in the industry have to spend BILLIONS of dollars on filtering specifically to keep adjacent signals from interfering with each other? Orthogonal-FDM meets the mathematical criteria: sin(nx) and sin(mx) are orthogonal functions only when "n" and "m" are distinct integers, but otherwise they are NOT. CDMA IS orthogonal (again, by design) due to the orthogonality of the Walsh Codes employed (?) (provided all the Walsh Codes are synchronous - a mathematical requirement for all orthogonal functions). The suggestion that CDMA is NOT orthogonal since it requires an integrator and "basis codes" to reject unwanted signals, reveals a significant lack of understanding regarding CDMA and orthogonality in general, in that the use of orthogonal Walsh codes is at the very core of what CDMA is and how it operates. The use of an integrator in CDMA fulfills the role of the integration process, which is itself fundamental to the definition of orthogonality. No, CDMA often operates by using long orthogonal pseudorandom binary sequences (PN sequences). Read up on these in the Tracking and Data Relay Satellite System, for example. 98.67.108.12 (talk) 00:22, 25 August 2012 (UTC) You cannot [USUALLY] simply multiply two discrete fragments of any two orthogonal functions and get 0. For example sin(x) and cos(x) are orthogonal over multiples of pi over 2, but sin(45)*cos(45)=/=0. Existance of orthogonality between two such functions requires full integration over an extended window (e.g. over one or more periods). If orthogonality didn't exist in CDMA, how then do hundreds of CDMA calls transmitted SIMULTANEOUSLY over the SAME RF channel remain isolated from one another? BILLIONS of such calls have been processed over active CDMA channels this past decade with enormous success, which would NOT have been possible IF these CDMA signals were not orthogonal to each other. Stevex99, 5 July 07 Yes, by using orthogonal PC sequences. 98.67.108.12 (talk) 00:22, 25 August 2012 (UTC) A couple of points: • TDMA is orthogonal. 
Separating by time is one way of satisfying the orthogonality condition, as $\ \int_0^T g(t) g(t-kT) dt = \delta[k]$ (assuming the signalling pulses are T or less in time). • In practice, CDMA is pseudo-orthogonal, not orthogonal. While the channelization codes from a single base-station are typically orthogonal, the scrambing codes are not (in WCDMA, Gold codes are used, for instance). And you've already noted another problem, which is the requirement for perfect synchronization. In practice, this is very rarely achieved. Oli Filth 08:45, 6 July 2007 (UTC) Thanks Oli for your comments, however I would have to disagree. Separating by time is effectively inserting a windowing function on the signals being transmitted, and it does not make the fundamental signals orthogonal to each other (which is why they tend to interfere with emissions from adjacent cells). Stevex99 What do you consider to be the "fundamental" signals, and why? If you mean the underlying sinusoidal carriers, then yes of course they're not orthogonal when delayed, but that's why we apply the "windowing function" (normally, this is known as the "pulse-shaping function"), to make the transmitted signals orthogonal to one another. In a baseband model, all you have is the pulse-shaping function. Oli Filth 15:46, 6 July 2007 (UTC) But that's kind of my whole point. If you have to take action specifically to prevent two signals from interfering (in this case gating them on and off), then they clearly don't posess the mathematical characteristic of orthogonality. And, if they were in fact orthogonal, then you wouldn't have to deal with the issue of intercell interference within the system. Orthogonal-FDM signals (for example), can exist simultaneously on the same channel specifically because they do meet the definition for orthogonality (by design). And, not to be nit-picky, but the integral that you show really just shifts the two functions in time. To actually model the TDMA scheme, it would need a windowing function (which, again, isn't part of the definition for orthogonality).It was nice to debate this with you, but I better get some work done. All the best! Stevex99. The transmitted signals can be described by mathematical functions which are orthogonal. Therefore the transmitted signals are orthogonal. I'm not sure what else there is to say! The mathematical functions which describe the signals that occur at an earlier point in the transmitter processing chain (e.g. the carrier) may or may not be orthogonal, but so what? They aren't the signals being transmitted. Yes, in all cellular systems, we have to deal with intercell interference. This is no different for OFDM or CDMA. On a cell-by-cell basis, the signals used may or may not be mathematically orthogonal, this doesn't remove the need for intercell considerations. And yes, that is exactly what my integral shows. There is no requirement for a windowing function. If the function g(t) is T or less in support, then the integral will be zero. Therefore, the orthogonality is satisfied. There are many ways of obtaining an orthogonal signal family. Separation in time is just one method. Oli Filth 02:57, 7 July 2007 (UTC) ## Discrete function orthoganality? If someone thinks its appropriate could they add the definitition for orthoginality for discrete functions. For example the kernel of the DFT. Thanks. ## Statistics We say that two variables are orthogonal if they are independent. 
Uncorrelated seems much more plausible, since the distribution of the product of two variables is an inner product of the variables. Septentrionalis PMAnderson 05:27, 19 September 2007 (UTC) Should we give more prominence to the section on statistics in this article? My discipline is psychology, and if I were to refer to two variables as orthogonal, I would mean that they are not statistically significantly correlated. I would probably add a wikilink to orthogonal, so that curious readers could find out what this term means by going to this article, rather than defining the term in every article for which I use this word. However, at the moment, this article is rather heavy for a non-specialist. ACEOREVIVED 19:23, 13 October 2007 (UTC) That's right: two random variables are called orthogonal if they're simply uncorrelated, not necessarily independent. I made the appropriate change. There's often confusion about this because for Gaussian random variables, uncorrelatedness implies independence. For general r.v.'s, though, this isn't true: independence is a stronger condition. Jgmakin 06:37, 25 October 2007 (UTC) ## Intro Rewrite Here is the current intro: In mathematics, orthogonal, as a simple adjective not part of a longer phrase, is a generalization of perpendicular. It means "at right angles". The word comes from the Greek ὀρθός (orthos), meaning "straight", and γωνία (gonia), meaning "angle". Two streets that cross each other at a right angle are orthogonal to one another. In recent years, "perpendicular" has come to be used more in relation to right angles outside of a coordinate plane context, whereas "orthogonal" is used when discussing vectors or coordinate geometry. I propose this replacement: In mathematics, two vectors are orthogonal if they are perpendicular, i.e., they form a right angle. The word comes from the Greek ὀρθός (orthos), meaning "straight", and γωνία (gonia), meaning "angle". For example, a subway and the street above, although they do not physically intersect, are orthogonal if they meet at a right angle. This version avoids the need to say that "orthogonal" is a generalization of "perpendicular" by saying that they are identical in the generalized context of vector mathematics. The example is improved by not using the coordinate plane. Lastly, the note about common usage is redundant because the intro sentence has already described the context of "orthogonal". --Beefyt (talk) 05:52, 4 August 2008 (UTC) When I read this subway example I got very confused. How do a subway and a street above "meet" at a right angle? Once I read this talk page, I understood what you meant. I propose using the word "cross" instead... but I'm not sure if that's better. What do you guys think? Sunbeam44 (talk) 16:07, 23 October 2008 (UTC) From beginning Euclidean geometry, two lines (straight) that do not intersect are "skew" [False. In plane geometry, two lines either intersect or they are parallel to one another. There are no any other possibilities.] "Perpendicular" implies the lines intersect in a right angle. "Orthogonal" implies far more than mere perpendicularity, as evidenced in the article and its attendant commentaries. —Preceding unsigned comment added by Lionum (talk • contribs) 06:46, 11 October 2008 (UTC) But the lead says "two vectors are orthogonal if they are perpendicular", not "two lines are orthogonal if they are perpendicular". Would you be satisfied with "two lines are orthogonal if their vectors are perpendicular"? 
--beefyt (talk) 21:28, 30 January 2009 (UTC) Why not "two lines are orthogonal if they meet at a right angle"? Michael Hardy (talk) 22:17, 30 January 2009 (UTC) The direction of a line in a two-dimensional plane is defined by a two-dimensionsl vector. Also, the direction of a line in three-dimensional space is defined by a three-dimensional vector. Two lines are perpendicular if and only if their defining vectors are perpendicular to each other. This is taught in advanced high-school mathematics. 98.67.108.12 (talk) 00:22, 25 August 2012 (UTC) The introduction needed to be rewritten, so I decided to Be Bold and do it. I expect it to be revised. The citations are poor but at least they are there, and they back-up what is said in the subsections. When the revisions start, I would like to make these suggestions: • Go from the general to the specific • Be as nontechnical as possible • Stay open to the many different meanings of the word in different contexts. KSnortum (talk) 20:19, 13 January 2012 (UTC) ## T - shape? Isn't orthogonal a T shape? Can it say so on the article as a better description than right angle? Or, if the orthogonal is the L shape, is it OK to say "An Orthogonal is an L shaped intersection." and provide a nice L shaped joint picture? I can't remember if it is T or L and I cannot read complex formulae. ~ R.T.G 15:21, 30 January 2009 (UTC) Two lines are orthogonal if they meet at a right angle. Thus the strokes of L are orthogonal, as are those of T or +. —Tamfang (talk) 03:08, 23 October 2011 (UTC) ## Interesting You might be interested that the book Applied Mathematics for Database Professionals refers readers to this Wikipedia article. - 114.76.235.170 (talk) 14:17, 23 June 2010 (UTC) ## Additional citations Why, what, where, and how does this article need additional citations for verification? Hyacinth (talk) 05:26, 2 August 2010 (UTC) Where it says "citation needed." --KSnortum (talk) 18:06, 13 January 2012 (UTC) ## (a, g, and n) Another scheme is orthogonal frequency-division multiplexing (OFDM), which refers to the use, by a single transmitter, of a set of frequency multiplexed signals with the exact minimum frequency spacing needed to make them orthogonal so that they do not interfere with each other. Well known examples include (a, g, and n) versions of 802.11 Wi-Fi; WiMAX; ITU-T G.hn, DVB-T, the terrestrial digital TV broadcast system used in most of the world outside North America; and DMT, the standard form of ADSL. Is there a good reason for (a, g, and n) to be enclosed in ()? —Tamfang (talk) 03:07, 23 October 2011 (UTC) ## Orthogonality vs. Independence in .... Orthogonality vs. Independence in random variables and statistics. Independence is a much stronger specification or assumption than orthogonality or uncorrelated. Orthogonal means that E(XY) = 0. If E(XY) = E(X)E(Y), then X and Y are uncorrelated. The above must not be confused by the following. If two random variables or statistics X and Y are jointly Gaussian; And X and Y are both zero mean; [This is often forgotten about.] And X and Y are orthogonal; Then X and Y are independent. Otherwise, the independence of X and Y has to be considered on a case-by-case basis. For independence, if f(x,y) is the joint probability density of X and Y, then we must have f(x,y) = f(x)f(y). There is no other way. If E(XY) = 0, and either E(X) = 0 or E(Y) = 0, then X and Y are uncorrelated because E(XY) = E(X)E(Y) = 0. 
— Preceding unsigned comment added by 98.67.108.12 (talk) 00:50, 25 August 2012 (UTC) ## Ancient history Attributing the concept of orthogonality (really perpendicularity) to the Babylonians or the Egyptians or whatever is probably not reasonable. All ancient mathematical civilizations (Babylonian, Egyptian, Indian, Chinese) had the concept of perpendicularity in two dimensions -- a discussion which belongs in perpendicularity. I am not sure that any of them had any of the generalizations that are called "orthogonality". --Macrakis (talk) 00:51, 25 January 2013 (UTC)
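Returning to the "Orthogonality vs. Independence" section above, here is a quick NumPy simulation (mine, with the arbitrary choice Y = X²) of random variables that are orthogonal and uncorrelated yet clearly not independent.

```
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal(1_000_000)
y = x ** 2                      # a deterministic function of x: certainly not independent

print(np.mean(x * y))           # ~0: E[XY] = E[X^3] = 0, so X and Y are orthogonal
print(np.corrcoef(x, y)[0, 1])  # ~0: also uncorrelated, since E[X]E[Y] = 0 as well
# Yet P(Y > 1) is about 0.32 while P(Y > 1 given |X| > 1) = 1: far from independent.
```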
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 17, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9553483128547668, "perplexity_flag": "middle"}
http://unapologetic.wordpress.com/2011/03/29/tangent-vectors-at-a-point/?like=1&source=post_flair&_wpnonce=e94a57fbb5
# The Unapologetic Mathematician ## Tangent Vectors at a Point Tangent vectors are a very important concept in differential geometry, and they’re one of the biggest stumbling blocks in comprehension. There are two major approaches: one more geometric, and one more algebraic. I find the algebraic approach a bit more satisfying, since it gets straight into the important properties of tangent vectors and how they are used, and it helps set the stage for tangent vectors in other contexts like algebraic geometry. Unfortunately, it’s not at all clear at first what this definition means geometrically, and why these things deserve being called “tangent vectors”. So I have to ask a little patience. Now, we take a manifold $M$ with structure sheaf $\mathcal{O}$. We pick some point $p\in M$ and get the stalk $\mathcal{O}_p$ of germs of functions at $p$. This is a real algebra, and we define a “tangent vector at $p$” to be a “derivation at $p$” of this algebra. That is, $v$ is a function $v:\mathcal{O}_p\to\mathbb{R}$ satisfying $\displaystyle\begin{aligned}v(cf+dg)&=cv(f)+dv(g)\\v(fg)&=v(f)g(p)+f(p)v(g)\end{aligned}$ The first of these conditions says that $v$ is a linear functional on $\mathcal{O}_p$. It’s the second that’s special: it tells us that $v$ obeys something like the product rule. Indeed, let’s take a point $x\in\mathbb{R}$ and consider the operation $D_x$ defined by $D_x(f)=f'(x)$ for any function $f$ that is differentiable at $x$. This is linear, since both the derivative and evaluation operations are linear. The product rule tells us that $\displaystyle\begin{aligned}D_x(fg)&=\left[fg\right]'(x)\\&=f'(x)g(x)+f(x)g'(x)\\&=D_x(f)g(x)+f(x)D_x(g)\end{aligned}$ So $D_x$ satisfies the definition of a “tangent vector at $x$“. Indeed, as it turns out $D_x$ corresponds to what we might normally consider the vector based at $x$ pointing one unit in the positive direction. It should immediately be clear that the tangent vectors at $p$ form a vector space. Indeed, the sum of two tangent vectors at $p$ is firstly the sum of two linear functionals, which is again a linear functional. To see that it also satisfies the “derivation” condition, let $v$ and $w$ be tangent vectors at $p$ and check $\displaystyle\begin{aligned}\left[v+w\right](fg)&=v(fg)+w(fg)\\&=v(f)g(p)+f(p)v(g)+w(f)g(p)+f(p)w(g)\\&=\left(v(f)+w(f)\right)g(p)+f(p)\left(v(g)+w(g)\right)\\&=\left[v+w\right](f)g(p)+f(p)\left[v+w\right](g)\end{aligned}$ Checking that scalar multiples of tangent vectors at $p$ are again tangent vectors at $p$ is similar. We write $\mathcal{T}_pM$ to denote this vector space of tangent vectors at $p$ to the manifold $M$. I want to call attention to one point of notation here, and I won’t really bother with it again. We seem to be using each of $f$ and $g$ to refer to two different things: a germ in $\mathcal{O}_p$ — which is an equivalence class of sorts — and some actual function in $\mathcal{O}(U)$ for some neighborhood $U$ of $p$ which represents the germ. To an extent we are, and the usual excuse is that since we only ever evaluate the function at $p$ itself, it doesn’t really matter which representative of the germ we pick. However, a more nuanced view will see that we’ve actually overloaded the notation $f(p)$. Normally this would mean evaluating a function at a point, yes, but here we interpret it in terms of the local ring structure of $\mathcal{O}_p$. Given a germ $f\in\mathcal{O}_p$ there is a projection $\mathcal{O}_p\to\mathbb{R}$, which we write as $f\mapsto f(p)$. 
If all this seems complicated, don’t really worry about it. You can forget the whole last paragraph and get by on “sometimes we use a germ as if it’s an actual function defined in a neighborhood of $p$, and it will never matter which specific representative function we use because we only ever ask what happens at $p$ itself.”
Posted by John Armstrong | Differential Topology, Topology
## 13 Comments » 1. [...] « Previous | [...] Pingback by | March 30, 2011 | Reply 2. [...] a point in an -dimensional manifold , we have the vector space of tangent vectors at . Given a coordinate patch around , we’ve constructed coordinate vectors at , and shown [...] Pingback by | March 31, 2011 | Reply 3. Should the second equation have $v(fg)(p)$ on its lhs? Comment by Avery Andrews | April 1, 2011 | Reply 4. Or perhaps a $\lambda p.$ on its rhs Comment by Avery Andrews | April 1, 2011 | Reply 5. No, I go into this down near the bottom. Since $f$ and $g$ are germs at $p$, they each have a “value at $p$“, which we write $f(p)$ and $g(p)$. Everything in sight is taking place “at” the single point $p$ here. A tangent vector $v$ at $p$ takes a germ $f$ at $p$ and gives a real number. Comment by | April 1, 2011 | Reply 6. [...] of a point in an -dimensional manifold , we get coordinate vectors which form a basis for the tangent space . But this is true of any coordinate patch! If we have another patch , we can get another basis . [...] Pingback by | April 1, 2011 | Reply 7. [...] far we’ve talked about tangent spaces one at a time. For each we get a tangent space at . But things get really interesting when we [...] Pingback by | April 4, 2011 | Reply 8. [...] any point of an open interval, the tangent space is one-dimensional. And, in fact, it comes equipped with a canonical vector to use as a basis: , [...] Pingback by | April 8, 2011 | Reply 9. There is an obvious typo in the first equation (you have a $c$ where you meant $d$). Comment by | April 9, 2011 | Reply 10. Thanks, fixed. Comment by | April 9, 2011 | Reply 11. [...] another construct in differential topology and geometry that isn’t quite so obvious as a tangent vector, but which is every bit as useful: a cotangent vector. A cotangent vector at a point is just an [...] Pingback by | April 13, 2011 | Reply 12. [...] manifolds, with the -dimensional product manifold. Given points and we want to investigate the tangent space of this product at the point [...] Pingback by | April 27, 2011 | Reply 13. [...] with boundary , then at all the interior points it looks just like a regular manifold, and so the tangent space is just the same as ever. But what happens when we consider a point [...] Pingback by | September 15, 2011 | Reply
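As a quick symbolic check of the two defining identities in the post (linearity and the Leibniz rule for the example derivation $D_x(f) = f'(x)$), here is a short SymPy sketch of my own; the sample functions and the point $p = 2$ are arbitrary choices.

```
import sympy as sp

x = sp.symbols('x')
p = sp.Integer(2)                         # the point at which the derivation acts

def D(h):
    """D_p(h) = h'(p), the example derivation from the post."""
    return sp.diff(h, x).subs(x, p)

f = sp.sin(x)
g = x**3 + 1

# Linearity and the Leibniz rule at p, exactly as in the definition:
print(sp.simplify(D(3*f + 5*g) - (3*D(f) + 5*D(g))))                    # 0
print(sp.simplify(D(f*g) - (D(f)*g.subs(x, p) + f.subs(x, p)*D(g))))    # 0
```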
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 63, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9399206042289734, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/33006/relation-between-the-dedekind-zeta-function-and-quadratic-reciprocity
# Relation between the Dedekind Zeta Function and Quadratic Reciprocity I was trying to learn a little about the Dedekind zeta function. The first place I looked at was obviously the Wikipedia article above. So my question comes from a sentence by the end of the article in the section on relations to other L-functions. That section states that if you have an abelian extension $K/\mathbb{Q} \,$, then the Dedekind zeta function of $K$ is a product of Dirichlet L-functions. In particular it states that if $K$ is a quadratic field, then the ratio $$\frac{\zeta_K (s)}{\zeta(s)} = L(s, \chi)$$ or equivalently, that $$\zeta_K (s) = \zeta(s) L(s, \chi)$$ where $\zeta(s)$ is the Riemann zeta function and $L(s, \chi) \,$ is the Dirichlet L-function associated to the Dirichlet character defined by the Jacobi symbol as follows. If $K = \mathbb{Q}(\sqrt{d}) \,$, and $D$ is the discriminant of this number field, then $$\chi(n) := \left ( \frac{D}{n} \right )$$ Then the Wikipedia article says the following: That the zeta function of a quadratic field is a product of the Riemann zeta function and a certain Dirichlet L-function is an analytic formulation of the quadratic reciprocity law of Gauss. This is were I was absolutely amazed, because even though I've seen in the past that the existence of the Euler product for the Riemann zeta function is equivalent to the unique factorization property of the integers, which apparently is also reflected more generally in the context of Dedekind zeta functions and number fields, this time realized as the unique factorization of ideals into products of prime ideals, I just find marvelous that these two things can be equivalent (in some sense which I don't know yet). So my question is why is this analytic fact about the Dedekind zeta function of a quadratic number field, $$\zeta_K (s) = \zeta(s) L(s, \chi)$$ an analytic reformulation of the quadratic reciprocity law? And maybe if it is possible, to push it a little bit further, are there analogues of this, say for higher reciprocity laws, like for cubic or biquadratic reciprocity? Or is this a "peculiarity" that occurs just for quadratic fields? Thank you very much for any help. - ## 1 Answer This is not an answer to all your questions, but anyway. $\zeta_K=\sum N(\mathfrak{a})^{-s}=\prod_\mathfrak{p} 1/(1-N(\mathfrak{p})^{-s})$. In the product we have $1/(1-p^{-s})$ for every prime dividing $D$, $1/(1-p^{-s})^2$ for every prime that splits in $K$, i.e. such that $D$ is a square mod $p$, and $1/(1-p^{-2s})=1/((1-p^{-s})(1+p^{-s}))$ for every prime which doesn't split. For every prime, the factor is thus $1/((1-p^{-s})(1-(\frac{D}{p})p^{-s}))$. The product over all primes is thus $$\zeta(s)\,\prod_p 1/(1-(\frac{D}{p})p^{-s})$$ and your equation becomes $$L(s,\chi)=\prod_p 1/(1-(\frac{D}{p})p^{-s}).$$ The function $\chi$ in $L(s,\chi)=\sum_n \chi(n) n^{-s}$ is a (quadratic) Dirichlet character modulo $D$, i.e. it is a group morphism $(\mathbb{Z}/D\mathbb{Z})^\times\to\{+1,-1\}$, which is then extended to a function $\mathbb{Z}\to \{0,+1,-1\}$ by $\chi(n)=0$ if $(n,D)\neq1$. Since $\chi$ is multiplicative, we have $$L(s,\chi)=\prod_p 1/(1-\chi(p)p^{-s}).$$ We therefore indeed have $$\chi(p)=(\frac{D}{p}).$$ This implies that $(\frac{D}{p})$ depends on $p$ only modulo $D$ - something not evident at all from its definition, but an easy consequence of quadratic reciprocity. If in particular $D=(-1)^{(q-1)/2} q$ (where $q$ is a prime) then there is only one quadratic Dirichlet character, namely $\chi(n)=(\frac{n}{q})$. 
We therefore have $$(\frac{p}{q})=\big(\frac{(-1)^{(q-1)/2}\, q}{p}\big)$$ i.e. quadratic reciprocity. -

Thank you very much for your answer. I have to admit that this is the direction I'm most interested in, namely that the identity for the Dedekind zeta function implies quadratic reciprocity. – Adrián Barquero Apr 19 '11 at 20:58

Nevertheless I don't understand all the details in your argument, so I'm starting a bounty for this question. If you can provide more details to your answer and nobody else responds with a "better" answer I'd be very happy to award the bounty to you. For example, I do understand the first half of your answer but the second half is giving me some trouble. I'm not quite clear on how you conclude that $\chi (p) = (D/p)$, and the part about quadratic reciprocity towards the end is not very clear to me. Thank you for your help. – Adrián Barquero Apr 19 '11 at 21:11

– user8268 Apr 20 '11 at 20:21

+ you will probably have to tell me what is not clear with quadratic reciprocity at the end. Here it is again: $q$ is a prime, $K=\mathbb{Q}(\sqrt{q^*})$ with $q^*=(-1)^{(q-1)/2} q$, so that $D=q^*$. There is only one non-trivial (i.e. surjective) quadratic Dirichlet character modulo $q$, namely $\chi(x)=(\frac{x}{q})$ (as $\chi(x^2)=\chi(x)^2=1$, there is no other choice). $(\frac{p}{q})=\chi(p)=(\frac{q^*}{p})$ is the quadratic reciprocity law. – user8268 Apr 21 '11 at 17:01

some extra info: A (quadratic) Dirichlet character $(\mathbb{Z}/N\mathbb{Z})^\times\to\{+1,-1\}$ is called primitive if it can't be reduced to $(\mathbb{Z}/M\mathbb{Z})^\times\to\{+1,-1\}$ for any proper divisor $M$ of $N$. For a given $N$ there is at most one primitive quadratic character (namely the Kronecker-Jacobi-Legendre symbol), and it exists iff $N=\pm D$ where $D$ is the discriminant of a quadratic extension of $\mathbb{Q}$. The $\chi$ in $\zeta_K(s)=\zeta(s)L(s,\chi)$ is primitive (for $N$ the discriminant of $K$). – user8268 Apr 21 '11 at 17:12
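As a quick numerical illustration of the factorization $\zeta_K(s)=\zeta(s)L(s,\chi)$ discussed above, the following Python sketch (an added illustration, not part of the answer; the truncation limits are arbitrary) compares the two sides for $K=\mathbb{Q}(i)$, where $D=-4$ and $\chi$ is the nontrivial character mod $4$. The left-hand side is computed by summing $N(\alpha)^{-s}$ over nonzero Gaussian integers and dividing by four (each ideal of $\mathbb{Z}[i]$ has exactly four generators), the right-hand side from truncated Dirichlet series.

```python
# Numeric sanity check of zeta_K(s) = zeta(s) * L(s, chi) for K = Q(i), D = -4.
# Illustrative only: truncation limits are arbitrary, and agreement is approximate.

def zeta_K(s=2.0, R=400):
    """Approximate zeta_{Q(i)}(s) by summing over nonzero Gaussian integers a + bi."""
    total = 0.0
    for a in range(-R, R + 1):
        for b in range(-R, R + 1):
            n = a * a + b * b
            if n:
                total += n ** (-s)
    return total / 4.0          # 4 unit multiples per ideal

def zeta_times_L(s=2.0, N=10**6):
    """Approximate zeta(s) * L(s, chi) with chi the character (D/n) of period 4."""
    zeta = sum(n ** (-s) for n in range(1, N))
    chi = {0: 0, 1: 1, 2: 0, 3: -1}
    L = sum(chi[n % 4] * n ** (-s) for n in range(1, N))
    return zeta * L

print(zeta_K())          # the two numbers should agree to several decimal places;
print(zeta_times_L())    # for s = 2 the common value is zeta(2) times Catalan's constant
```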
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 51, "mathjax_display_tex": 9, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9471050500869751, "perplexity_flag": "head"}
http://en.wikipedia.org/wiki/Polarization_(waves)
# Polarization (waves) For other uses, see Polarization. Polarization on rubber thread. (Circularly→linearly polarized standing wave.) Polarization (also polarisation) is a property of waves that can oscillate with more than one orientation. Electromagnetic waves, such as light, and gravitational waves exhibit polarization; sound waves in a gas or liquid do not have polarization because the medium vibrates only along the direction in which the waves are travelling. By convention, the polarization of light is described by specifying the orientation of the wave's electric field at a point in space over one period of the oscillation. When light travels in free space, in most cases it propagates as a transverse wave—the polarization is perpendicular to the wave's direction of travel. In this case, the electric field may be oriented in a single direction (linear polarization), or it may rotate as the wave travels (circular or elliptical polarization). In the latter case, the field may rotate in either direction. The direction in which the field rotates is the wave's chirality or handedness. The polarization of an electromagnetic (EM) wave can be more complicated in certain cases. For instance, in a waveguide such as an optical fiber or for radially polarized beams in free space,[1] the fields can have longitudinal as well as transverse components. Such EM waves are either TM or hybrid modes. For longitudinal waves such as sound waves in fluids, the direction of oscillation is by definition along the direction of travel, so there is no polarization. In a solid medium, however, sound waves can be transverse. In this case, the polarization is associated with the direction of the shear stress in the plane perpendicular to the propagation direction. This is important in seismology. Polarization is significant in areas of science and technology dealing with wave propagation, such as optics, seismology, telecommunications and radar science. The polarization of light can be measured with a polarimeter. A polarizer is a device that affects polarization. ## Theory ### Basics: plane waves The simplest manifestation of polarization to visualize is that of a plane wave, which is a good approximation of most light waves (a plane wave is a wave with infinitely long and wide wavefronts). For plane waves Maxwell's equations, specifically Gauss's laws, impose the transversality requirement that the electric and magnetic field be perpendicular to the direction of propagation and to each other. Conventionally, when considering polarization, the electric field vector is described and the magnetic field is ignored since it is perpendicular to the electric field and proportional to it. The electric field vector of a plane wave may be arbitrarily divided into two perpendicular components labeled x and y (with z indicating the direction of travel). For a simple harmonic wave, where the amplitude of the electric vector varies in a sinusoidal manner in time, the two components have exactly the same frequency. However, these components have two other defining characteristics that can differ. First, the two components may not have the same amplitude. Second, the two components may not have the same phase, that is they may not reach their maxima and minima at the same time. 
Mathematically, the electric field of a plane wave can be written as, $\vec{E}(\vec{r},t) = \mathrm{Re} \left[\left(A_{x}, A_{y}\cdot e^{i\phi}, 0 \right) e^{i(kz - \omega t)} \right]$ or alternatively, $\vec{E}(\vec{r},t) = (A_{x}\cdot \cos(kz - \omega t), A_{y}\cdot \cos(kz - \omega t + \phi), 0)$ where $A_{x}$ and $A_{y}$ are the amplitudes of the x and y directions and $\phi$ is the relative phase between the two components. ### Polarization state The shape traced out in a fixed plane by the electric vector as such a plane wave passes over it (a Lissajous figure) is a description of the polarization state. The following figures show some examples of the evolution of the electric field vector (black), with time (the vertical axes), at a particular point in space, along with its x and y components (red/left and blue/right), and the path traced by the tip of the vector in the plane (yellow in figure 1&3, purple in figure 2): The same evolution would occur when looking at the electric field at a particular time while evolving the point in space, along the direction opposite to propagation. Linear Circular Elliptical In the leftmost figure above, the two orthogonal (perpendicular) components are in phase. In this case the ratio of the strengths of the two components is constant, so the direction of the electric vector (the vector sum of these two components) is constant. Since the tip of the vector traces out a single line in the plane, this special case is called linear polarization. The direction of this line depends on the relative amplitudes of the two components. In the middle figure, the two orthogonal components have exactly the same amplitude and are exactly ninety degrees out of phase. In this case one component is zero when the other component is at maximum or minimum amplitude. There are two possible phase relationships that satisfy this requirement: the x component can be ninety degrees ahead of the y component or it can be ninety degrees behind the y component. In this special case the electric vector traces out a circle in the plane, so this special case is called circular polarization. The direction the field rotates in depends on which of the two phase relationships exists. These cases are called right-hand circular polarization and left-hand circular polarization, depending on which way the electric vector rotates and the chosen convention. Another case is when the two components are not in phase and either do not have the same amplitude or are not ninety degrees out of phase, though their phase offset and their amplitude ratio are constant.[2] This kind of polarization is called elliptical polarization because the electric vector traces out an ellipse in the plane (the polarization ellipse). This is shown in the above figure on the right. Animation of a circularly polarized wave as a sum of two components The "Cartesian" decomposition of the electric field into x and y components is, of course, arbitrary. Plane waves of any polarization can be described instead by combining any two orthogonally polarized waves, for instance waves of opposite circular polarization. The Cartesian polarization decomposition is natural when dealing with reflection from surfaces, birefringent materials, or synchrotron radiation. The circularly polarized modes are a more useful basis for the study of light propagation in stereoisomers. 
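As an illustration of how the amplitudes $A_{x}$, $A_{y}$ and the relative phase $\phi$ determine the polarization state described above, the following short Python sketch (added here; the sample values are arbitrary) samples the two field components over one period and checks the two special cases: in-phase components trace out a line, while equal amplitudes a quarter-cycle out of phase trace out a circle.

```python
import numpy as np

def field_components(A_x, A_y, phi, n=1000):
    """E_x and E_y at z = 0, sampled over one period of the oscillation."""
    wt = np.linspace(0.0, 2.0 * np.pi, n)      # omega * t over one period
    E_x = A_x * np.cos(-wt)
    E_y = A_y * np.cos(-wt + phi)
    return E_x, E_y

for A_x, A_y, phi, label in [
    (1.0, 1.0, 0.0,         "linear (components in phase)"),
    (1.0, 1.0, np.pi / 2.0, "circular (equal amplitudes, 90 degrees out of phase)"),
    (1.0, 0.5, np.pi / 3.0, "elliptical (general case)"),
]:
    E_x, E_y = field_components(A_x, A_y, phi)
    # Vanishes when the components are in phase (tip stays on the line with direction (A_x, A_y)).
    line_dev = np.max(np.abs(E_x * A_y - E_y * A_x))
    # Vanishes when the tip stays on a circle centered at the origin.
    circ_dev = np.ptp(E_x ** 2 + E_y ** 2)
    print(f"{label}: line deviation = {line_dev:.3f}, circle deviation = {circ_dev:.3f}")
```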
Though this section discusses polarization for idealized plane waves, all the above is a very accurate description for most practical optical experiments which use TEM modes, including Gaussian optics. ### Unpolarized light Most sources of electromagnetic radiation contain a large number of atoms or molecules that emit light. The orientation of the electric fields produced by these emitters may not be correlated, in which case the light is said to be unpolarized. If there is partial correlation between the emitters, the light is partially polarized. If the polarization is consistent across the spectrum of the source, partially polarized light can be described as a superposition of a completely unpolarized component, and a completely polarized one. One may then describe the light in terms of the degree of polarization, and the parameters of the polarization ellipse. ### Parameterization This section needs attention from an expert in Physics. Please add a reason or a talk parameter to this template to explain the issue with the section. WikiProject Physics (or its Portal) may be able to help recruit an expert. (February 2009) For ease of visualization, polarization states are often specified in terms of the polarization ellipse, specifically its orientation and elongation. A common parameterization uses the orientation angle, ψ, the angle between the major semi-axis of the ellipse and the x-axis[3] (also known as tilt angle or azimuth angle[citation needed]) and the ellipticity, ε, the major-to-minor-axis ratio[4][5][6][7] (also known as the axial ratio). An ellipticity of zero or infinity corresponds to linear polarization and an ellipticity of 1 corresponds to circular polarization. The ellipticity angle, χ = arccot ε= arctan 1/ε, is also commonly used.[3] An example is shown in the diagram to the right. An alternative to the ellipticity or ellipticity angle is the eccentricity, however unlike the azimuth angle and ellipticity angle, the latter has no obvious geometrical interpretation in terms of the Poincaré sphere (see below). Full information on a completely polarized state is also provided by the amplitude and phase of oscillations in two components of the electric field vector in the plane of polarization. This representation was used above to show how different states of polarization are possible. The amplitude and phase information can be conveniently represented as a two-dimensional complex vector (the Jones vector): $\mathbf{e} = \begin{bmatrix} a_1 e^{i \theta_1} \\ a_2 e^{i \theta_2} \end{bmatrix} .$ Here $a_1$ and $a_2$ denote the amplitude of the wave in the two components of the electric field vector, while $\theta_1$ and $\theta_2$ represent the phases. The product of a Jones vector with a complex number of unit modulus gives a different Jones vector representing the same ellipse, and thus the same state of polarization. The physical electric field, as the real part of the Jones vector, would be altered but the polarization state itself is independent of absolute phase. The basis vectors used to represent the Jones vector need not represent linear polarization states (i.e. be real). In general any two orthogonal states can be used, where an orthogonal vector pair is formally defined as one having a zero inner product. A common choice is left and right circular polarizations, for example to model the different propagation of waves in two such components in circularly birefringent media (see below) or signal paths of coherent detectors sensitive to circular polarization. 
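To make the Jones-vector description above more tangible, here is a brief numerical sketch (an added illustration; the chosen states and the right/left sign convention are assumptions, since conventions differ between texts). It checks that the circular pair is orthogonal under the Hermitian inner product, and that an overall unit-modulus phase factor leaves the polarization state, represented by the outer product $\mathbf{e}\mathbf{e}^\dagger$, unchanged.

```python
import numpy as np

horizontal = np.array([1.0, 0.0], dtype=complex)
vertical   = np.array([0.0, 1.0], dtype=complex)
# Sign conventions for right/left circular differ between texts; this is one common choice.
right_circular = np.array([1.0, -1j]) / np.sqrt(2)
left_circular  = np.array([1.0, +1j]) / np.sqrt(2)

def inner(a, b):
    """Hermitian inner product used to define orthogonality of Jones vectors."""
    return np.vdot(a, b)

print(abs(inner(horizontal, vertical)))          # 0.0 -- the linear basis states are orthogonal
print(abs(inner(right_circular, left_circular))) # 0.0 -- so are the two circular states

# An overall factor of unit modulus changes the absolute phase only:
# the polarization state, represented by e e^dagger, is identical.
e = right_circular
e_shifted = np.exp(1j * 0.7) * e                 # 0.7 rad is an arbitrary phase
print(np.allclose(np.outer(e, e.conj()), np.outer(e_shifted, e_shifted.conj())))  # True
```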
Regardless of whether polarization ellipses are represented using geometric parameters or Jones vectors, implicit in the parameterization is the orientation of the coordinate frame. This permits a degree of freedom, namely rotation about the propagation direction. When considering light that is propagating parallel to the surface of the Earth, the terms "horizontal" and "vertical" polarization are often used, with the former being associated with the first component of the Jones vector, or zero azimuth angle. On the other hand, in astronomy the equatorial coordinate system is generally used instead, with the zero azimuth (or position angle, as it is more commonly called in astronomy to avoid confusion with the horizontal coordinate system) corresponding to due north. #### S and P Polarization Another coordinate system frequently used relates to the plane made by the propagation direction and a vector perpendicular to the plane of a reflecting surface. This is known as the plane of incidence. The component of the electric field parallel to this plane is termed p-like (parallel) and the component perpendicular to this plane is termed s-like (from senkrecht, German for perpendicular). Light with a p-like electric field is said to be p-polarized, pi-polarized, tangential plane polarized, or is said to be a transverse-magnetic (TM) wave. Light with an s-like electric field is s-polarized, also known as sigma-polarized or sagittal plane polarized, or it can be called a transverse-electric (TE) wave. However, there is no universal convention in this TE and TM naming scheme; some authors refer to light with p-like electric field as TE and light with s-like electric field as TM.[citation needed]. Traditionally, TE and TM are used to indicate whether the electric or the magnetic field is horizontal.[citation needed] #### Parameterization of incoherent or partially polarized radiation In the case of partially polarized radiation, the Jones vector varies in time and space in a way that differs from the constant rate of phase rotation of monochromatic, purely polarized waves. In this case, the wave field is likely stochastic, and only statistical information can be gathered about the variations and correlations between components of the electric field. This information is embodied in the coherency matrix: $\mathbf{\Psi} = \left\langle\mathbf{e} \mathbf{e}^\dagger \right\rangle\,$ $=\left\langle\begin{bmatrix} e_1 e_1^* & e_1 e_2^* \\ e_2 e_1^* & e_2 e_2^* \end{bmatrix} \right\rangle$ $=\left\langle\begin{bmatrix} a_1^2 & a_1 a_2 e^{i (\theta_1-\theta_2)} \\ a_1 a_2 e^{-i (\theta_1-\theta_2)}& a_2^2 \end{bmatrix} \right\rangle$ where angular brackets denote averaging over many wave cycles. Several variants of the coherency matrix have been proposed: the Wiener coherency matrix and the spectral coherency matrix of Richard Barakat measure the coherence of a spectral decomposition of the signal, while the Wolf coherency matrix averages over all time/frequencies. The coherency matrix contains all second order statistical information about the polarization. This matrix can be decomposed into the sum of two idempotent matrices, corresponding to the eigenvectors of the coherency matrix, each representing a polarization state that is orthogonal to the other. An alternative decomposition is into completely polarized (zero determinant) and unpolarized (scaled identity matrix) components. 
In either case, the operation of summing the components corresponds to the incoherent superposition of waves from the two components. The latter case gives rise to the concept of the "degree of polarization"; i.e., the fraction of the total intensity contributed by the completely polarized component. The coherency matrix is not easy to visualize, and it is therefore common to describe incoherent or partially polarized radiation in terms of its total intensity (I), (fractional) degree of polarization (p), and the shape parameters of the polarization ellipse. An alternative and mathematically convenient description is given by the Stokes parameters, introduced by George Gabriel Stokes in 1852. The relationship of the Stokes parameters to intensity and polarization ellipse parameters is shown in the equations and figure below. Poincaré sphere diagram $S_0 = I \,$ $S_1 = I p \cos 2\psi \cos 2\chi\,$ $S_2 = I p \sin 2\psi \cos 2\chi\,$ $S_3 = I p \sin 2\chi\,$ Here Ip, 2ψ and 2χ are the spherical coordinates of the polarization state in the three-dimensional space of the last three Stokes parameters. Note the factors of two before ψ and χ corresponding respectively to the facts that any polarization ellipse is indistinguishable from one rotated by 180°, or one with the semi-axis lengths swapped accompanied by a 90° rotation. The Stokes parameters are sometimes denoted I, Q, U and V. The Stokes parameters contain all of the information of the coherency matrix, and are related to it linearly by means of the identity matrix plus the three Pauli matrices: $\mathbf{\Psi} = \frac{1}{2}\sum_{j=0}^3 S_j \mathbf{\sigma}_j,\text{ where}$ $\begin{matrix} \mathbf{\sigma}_0 &=& \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} & \mathbf{\sigma}_1 &=& \begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix} \\ \\ \mathbf{\sigma}_2 &=& \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix} & \mathbf{\sigma}_3 &=& \begin{bmatrix} 0 & -i \\ i & 0 \end{bmatrix} \end{matrix}$ Mathematically, the factor of two relating physical angles to their counterparts in Stokes space derives from the use of second-order moments and correlations, and incorporates the loss of information due to absolute phase invariance. The figure above makes use of a convenient representation of the last three Stokes parameters as components in a three-dimensional vector space. This space is closely related to the Poincaré sphere, which is the spherical surface occupied by completely polarized states in the space of the vector $\mathbf{u} = \frac{1}{S_0}\begin{bmatrix} S_1\\S_2\\S_3\end{bmatrix}.$ All four Stokes parameters can also be combined into the four-dimensional Stokes vector, which can be interpreted as four-vectors of Minkowski space. In this case, all physically realizable polarization states correspond to time-like, future-directed vectors. ### Propagation, reflection and scattering This section needs attention from an expert in Physics. Please add a reason or a talk parameter to this template to explain the issue with the section. WikiProject Physics (or its Portal) may be able to help recruit an expert. (February 2009) In a vacuum, the components of the electric field propagate at the speed of light, so that the phase of the wave varies in space and time while the polarization state does not. That is, $\mathbf{e}(z+\Delta z,t+\Delta t) = \mathbf{e}(z, t) e^{i k (c\Delta t - \Delta z)},$ where k is the wavenumber and positive z is the direction of propagation. As noted above, the physical electric vector is the real part of the Jones vector. 
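Tying together the Jones-vector and Stokes-parameter descriptions above: because $\mathrm{Tr}(\mathbf{\sigma}_j\mathbf{\sigma}_k)=2\delta_{jk}$, the expansion $\mathbf{\Psi}=\frac{1}{2}\sum_j S_j \mathbf{\sigma}_j$ can be inverted as $S_j=\mathrm{Tr}(\mathbf{\Psi}\mathbf{\sigma}_j)$. The following short sketch (an added illustration with arbitrarily chosen states) computes the Stokes parameters of a few fully polarized Jones vectors this way and checks that $S_0^2=S_1^2+S_2^2+S_3^2$ for each, as expected for a degree of polarization of one.

```python
import numpy as np

sigma = [
    np.array([[1, 0], [0, 1]], dtype=complex),
    np.array([[1, 0], [0, -1]], dtype=complex),
    np.array([[0, 1], [1, 0]], dtype=complex),
    np.array([[0, -1j], [1j, 0]]),
]

def stokes(e):
    """Stokes parameters of a fully polarized Jones vector e, via S_j = Tr(Psi sigma_j)."""
    psi = np.outer(e, e.conj())                  # coherency matrix e e^dagger
    return np.real([np.trace(psi @ s) for s in sigma])

for name, e in [
    ("horizontal",       np.array([1, 0], dtype=complex)),
    ("linear at 45 deg", np.array([1, 1], dtype=complex) / np.sqrt(2)),
    ("circular",         np.array([1, 1j]) / np.sqrt(2)),
]:
    S = stokes(e)
    print(name, np.round(S, 3))
    # Completely polarized: S0^2 = S1^2 + S2^2 + S3^2.
    assert np.isclose(S[0] ** 2, S[1] ** 2 + S[2] ** 2 + S[3] ** 2)
```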
When electromagnetic waves interact with matter, their propagation is altered. If this depends on the polarization states of the waves, then their polarization may also be altered. In many types of media, electromagnetic waves may be decomposed into two orthogonal components that encounter different propagation effects. A similar situation occurs in the signal processing paths of detection systems that record the electric field directly. Such effects are most easily characterized in the form of a complex 2×2 transformation matrix called the Jones matrix: $\mathbf{e'} = \mathbf{J}\mathbf{e}.$ In general the Jones matrix of a medium depends on the frequency of the waves. For propagation effects in two orthogonal modes, the Jones matrix can be written as $\mathbf{J} = \mathbf{T} \begin{bmatrix} g_1 & 0 \\ 0 & g_2 \end{bmatrix} \mathbf{T}^{-1},$ where g1 and g2 are complex numbers representing the change in amplitude and phase caused in each of the two propagation modes, and T is a unitary matrix representing a change of basis from these propagation modes to the linear system used for the Jones vectors. For those media in which the amplitudes are unchanged but a differential phase delay occurs, the Jones matrix is unitary, while those affecting amplitude without phase have Hermitian Jones matrices. In fact, since any matrix may be written as the product of unitary and positive Hermitian matrices, any sequence of linear propagation effects, no matter how complex, can be written as the product of these two basic types of transformations. Paths taken by vectors in the Poincaré sphere under birefringence. The propagation modes (rotation axes) are shown with red, blue, and yellow lines, the initial vectors by thick black lines, and the paths they take by colored ellipses (which represent circles in three dimensions). Media in which the two modes accrue a differential delay are called birefringent. Well known manifestations of this effect appear in optical wave plates/retarders (linear modes) and in Faraday rotation/optical rotation (circular modes). An easily visualized example is one where the propagation modes are linear, and the incoming radiation is linearly polarized at a 45° angle to the modes. As the phase difference starts to appear, the polarization becomes elliptical, eventually changing to purely circular polarization (90° phase difference), then to elliptical and eventually linear polarization (180° phase) with an azimuth angle perpendicular to the original direction, then through circular again (270° phase), then elliptical with the original azimuth angle, and finally back to the original linearly polarized state (360° phase) where the cycle begins anew. In general the situation is more complicated and can be characterized as a rotation in the Poincaré sphere about the axis defined by the propagation modes (this is a consequence of the isomorphism of SU(2) with SO(3)). Examples for linear (blue), circular (red), and elliptical (yellow) birefringence are shown in the figure on the left. The total intensity and degree of polarization are unaffected. If the path length in the birefringent medium is sufficient, plane waves will exit the material with a significantly different propagation direction, due to refraction. For example, this is the case with macroscopic crystals of calcite, which present the viewer with two offset, orthogonally polarized images of whatever is viewed through them. It was this effect that provided the first discovery of polarization, by Erasmus Bartholinus in 1669. 
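The birefringence example above, in which light linearly polarized at 45° to the propagation modes cycles through circular polarization and back as the phase difference grows, can be reproduced with a small Jones-matrix model. The sketch below (an added illustration) uses the standard linear-retarder matrix $\mathrm{diag}(1, e^{i\delta})$, which is correct up to an overall phase; the retardance values are arbitrary sample points along the cycle.

```python
import numpy as np

def retarder(delta):
    """Jones matrix of a linear retarder with phase difference delta between the x and y modes."""
    return np.array([[1, 0], [0, np.exp(1j * delta)]])

e_in = np.array([1, 1], dtype=complex) / np.sqrt(2)     # linear polarization at 45 degrees

for deg in (0, 90, 180, 270, 360):
    e_out = retarder(np.radians(deg)) @ e_in
    print(f"{deg:3d} degrees of retardance -> Jones vector {np.round(e_out, 3)}")

# Expected pattern, matching the description in the text:
#   0   : (0.707,  0.707)   linear at +45 degrees (unchanged)
#   90  : (0.707,  0.707j)  circular
#   180 : (0.707, -0.707)   linear again, azimuth perpendicular to the original
#   270 : (0.707, -0.707j)  circular with the opposite handedness
#   360 : back to the original state
```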
In addition, the phase shift, and thus the change in polarization state, is usually frequency dependent, which, in combination with dichroism, often gives rise to bright colors and rainbow-like effects. Media in which the amplitude of waves propagating in one of the modes is reduced are called dichroic. Devices that block nearly all of the radiation in one mode are known as polarizing filters or simply "polarizers". In terms of the Stokes parameters, the total intensity is reduced while vectors in the Poincaré sphere are "dragged" towards the direction of the favored mode. Mathematically, under the treatment of the Stokes parameters as a Minkowski 4-vector, the transformation is a scaled Lorentz boost (due to the isomorphism of SL(2,C) and the restricted Lorentz group, SO(3,1)). Just as the Lorentz transformation preserves the proper time, the quantity det Ψ = S02 − S12 − S22 − S32 is invariant within a multiplicative scalar constant under Jones matrix transformations (dichroic and/or birefringent). In birefringent and dichroic media, in addition to writing a Jones matrix for the net effect of passing through a particular path in a given medium, the evolution of the polarization state along that path can be characterized as the (matrix) product of an infinite series of infinitesimal steps, each operating on the state produced by all earlier matrices. In a uniform medium each step is the same, and one may write $\mathbf{J} = Je^{\mathbf{D}},$ where J is an overall (real) gain/loss factor. Here D is a traceless matrix such that αDe gives the derivative of e with respect to z. If D is Hermitian the effect is dichroism, while a unitary matrix models birefringence. The matrix D can be expressed as a linear combination of the Pauli matrices, where real coefficients give Hermitian matrices and imaginary coefficients give unitary matrices. The Jones matrix in each case may therefore be written with the convenient construction $\begin{matrix} \mathbf{J_b} &=& J_be^{\beta \mathbf{\sigma}\cdot\mathbf{\hat{n}}} & \text{and} & \mathbf{J_r} &=& J_re^{\phi i\mathbf{\sigma}\cdot\mathbf{\hat{m}}}, \end{matrix}$ where σ is a 3-vector composed of the Pauli matrices (used here as generators for the Lie group SL(2,C)) and n and m are real 3-vectors on the Poincaré sphere corresponding to one of the propagation modes of the medium. The effects in that space correspond to a Lorentz boost of velocity parameter 2β along the given direction, or a rotation of angle 2φ about the given axis. These transformations may also be written as biquaternions (quaternions with complex elements), where the elements are related to the Jones matrix in the same way that the Stokes parameters are related to the coherency matrix. They may then be applied in pre- and post-multiplication to the quaternion representation of the coherency matrix, with the usual exploitation of the quaternion exponential for performing rotations and boosts taking a form equivalent to the matrix exponential equations above. (See Quaternion rotation) In addition to birefringence and dichroism in extended media, polarization effects describable using Jones matrices can also occur at (reflective) interface between two materials of different refractive index. These effects are treated by the Fresnel equations. Part of the wave is transmitted and part is reflected, with the ratio depending on angle of incidence and the angle of refraction. 
In addition, if the plane of the reflecting surface is not aligned with the plane of propagation of the wave, the polarization of the two parts is altered. In general, the Jones matrices of the reflection and transmission are real and diagonal, making the effect similar to that of a simple linear polarizer. For unpolarized light striking a surface at a certain optimum angle of incidence known as Brewster's angle, the reflected wave will be completely s-polarized. Certain effects do not produce linear transformations of the Jones vector, and thus cannot be described with (constant) Jones matrices. For these cases it is usual instead to use a 4×4 matrix that acts upon the Stokes 4-vector. Such matrices were first used by Paul Soleillet in 1929, although they have come to be known as Mueller matrices. While every Jones matrix has a Mueller matrix, the reverse is not true. Mueller matrices are frequently used to study the effects of the scattering of waves from complex surfaces or ensembles of particles. ## Examples and applications ### In nature and photography For more details on this topic, see Polarizing filter (Photography). Effect of a polarizer on reflection from mud flats. In the picture on the left, the polarizer is rotated to transmit the reflections as well as possible; by rotating the polarizer by 90° (picture on the right) almost all specularly reflected sunlight is blocked. The effects of a polarizing filter on the sky in a photograph. The picture on the right uses the filter. Light reflected by shiny transparent materials is partly or fully polarized, except when the light is perpendicular to the surface. It was through this effect that polarization was first discovered in 1808 by the mathematician Étienne-Louis Malus. A polarizing filter, such as a pair of polarizing sunglasses, can be used to observe this effect by rotating the filter while looking through it at the reflection off of a distant horizontal surface. At certain rotation angles, the reflected light will be reduced or eliminated. Polarizing filters remove light polarized at 90° to the filter's polarization axis. If two polarizers are placed atop one another at 90° angles to one another, there is minimal light transmission. Polarization by scattering is observed as light passes through the atmosphere. The scattered light produces the brightness and color in clear skies. This partial polarization of scattered light can be used to darken the sky in photographs, increasing the contrast. This effect is easiest to observe at sunset, on the horizon at a 90° angle from the setting sun. Another easily observed effect is the drastic reduction in brightness of images of the sky and clouds reflected from horizontal surfaces (see Brewster's angle), which is the main reason polarizing filters are often used in sunglasses. Also frequently visible through polarizing sunglasses are rainbow-like patterns caused by color-dependent birefringent effects, for example in toughened glass (e.g., car windows) or items made from transparent plastics. The role played by polarization in the operation of liquid crystal displays (LCDs) is also frequently apparent to the wearer of polarizing sunglasses, which may reduce the contrast or even make the display unreadable. Polarizing sunglasses reveal stress in car window (see text for explanation.) The photograph on the right was taken through polarizing sunglasses and through the rear window of a car. 
Light from the sky is reflected by the windshield of the other car at an angle, making it mostly horizontally polarized. The rear window is made of tempered glass. Stress from heat treatment of the glass alters the polarization of light passing through it, like a wave plate. Without this effect, the sunglasses would block the horizontally polarized light reflected from the other car's window. The stress in the rear window, however, changes some of the horizontally polarized light into vertically polarized light that can pass through the glasses. As a result, the regular pattern of the heat treatment becomes visible. ### Biology Many animals are capable of perceiving some of the components of the polarization of light, e.g., linear horizontally polarized light. This is generally used for navigational purposes, since the linear polarization of sky light is always perpendicular to the direction of the sun. This ability is very common among the insects, including bees, which use this information to orient their communicative dances. Polarization sensitivity has also been observed in species of octopus, squid, cuttlefish, and mantis shrimp. In the latter case, one species measures all six orthogonal components of polarization, and is believed to have optimal polarization vision.[8] The rapidly changing, vividly colored skin patterns of cuttlefish, used for communication, also incorporate polarization patterns, and mantis shrimp are known to have polarization selective reflective tissue. Sky polarization was thought to be perceived by pigeons, which was assumed to be one of their aids in homing, but research indicates this is a popular myth.[9] The naked human eye is weakly sensitive to polarization, without the need for intervening filters. Polarized light creates a very faint pattern near the center of the visual field, called Haidinger's brush. This pattern is very difficult to see, but with practice one can learn to detect polarized light with the naked eye. ### Geology Photomicrograph of a volcanic sand grain; upper picture is plane-polarized light, bottom picture is cross-polarized light, scale box at left-center is 0.25 millimeter. The property of (linear) birefringence is widespread in crystalline minerals, and indeed was pivotal in the initial discovery of polarization. In mineralogy, this property is frequently exploited using polarization microscopes, for the purpose of identifying minerals. See optical mineralogy for more details. Shear waves in elastic materials exhibit polarization. These effects are studied as part of the field of seismology, where horizontal and vertical polarizations are termed SH and SV, respectively. ### Chemistry Polarization is principally of importance in chemistry due to circular dichroism and "optical rotation" (circular birefringence) exhibited by optically active (chiral) molecules. It may be measured using polarimetry. The term "polarization" may also refer to the through-bond (inductive or resonant effect) or through-space influence of a nearby functional group on the electronic properties (e.g., dipole moment) of a covalent bond or atom. This concept is based on the formation of an electric dipole within a molecule, which is related to polarization of electromagnetic waves in infrared spectroscopy. Molecules will absorb infrared light if the frequency of the bond vibration is resonant with (identical to) the incident light frequency, where the molecular vibration at hand produces a change in the dipole moment of the molecule. 
In some nonlinear optical processes, the direction of an oscillating dipole will dictate the polarization of the emitted electromagnetic radiation, as in vibrational sum frequency generation spectroscopy or similar processes. Polarized light does interact with anisotropic materials, which is the basis for birefringence. This is usually seen in crystalline materials and is especially useful in geology (see above). The polarized light is "double refracted", as the refractive index is different for horizontally and vertically polarized light in these materials. This is to say, the polarizability of anisotropic materials is not equivalent in all directions. This anisotropy causes changes in the polarization of the incident beam, and is easily observable using cross-polar microscopy or polarimetry. The optical rotation of chiral compounds (as opposed to achiral compounds that form anisotropic crystals), is derived from circular birefringence. Like linear birefringence described above, circular birefringence is the "double refraction" of circular polarized light.[10] ### Astronomy Main article: Polarization in astronomy In many areas of astronomy, the study of polarized electromagnetic radiation from outer space is of great importance. Although not usually a factor in the thermal radiation of stars, polarization is also present in radiation from coherent astronomical sources (e.g. hydroxyl or methanol masers), and incoherent sources such as the large radio lobes in active galaxies, and pulsar radio radiation (which may, it is speculated, sometimes be coherent), and is also imposed upon starlight by scattering from interstellar dust. Apart from providing information on sources of radiation and scattering, polarization also probes the interstellar magnetic field via Faraday rotation. The polarization of the cosmic microwave background is being used to study the physics of the very early universe. Synchrotron radiation is inherently polarised. It has been suggested that astronomical sources caused the chirality of biological molecules on Earth.[11] ### 3D movies Polarization is also used for some 3D movies, in which the images intended for each eye are either projected from two different projectors with orthogonally oriented polarizing filters or, more typically, from a single projector with time multiplexed polarization (a fast alternating polarization device for successive frames). Polarized 3D glasses with suitable polarized filters ensure that each eye receives only the intended image. Historical stereoscopic projection displays used linear polarization encoding because it was inexpensive and offered good separation. Circular polarization makes left-eye/right-eye separation insensitive to the viewing orientation; circular polarization is used in typical 3-D movie exhibition today, such as the system from RealD. Polarized 3-D only works on screens that maintain polarization (such as silver screens); a normal projection screen would cause depolarization which would void the effect. ### Communication and radar All radio transmitting and receiving antennas are intrinsically polarized, special use[clarification needed] of which is made in radar. Most antennas radiate either horizontal, vertical, or circular polarization although elliptical polarization also exists. The electric field or E-plane determines the polarization or orientation of the radio wave. Vertical polarization is most often used when it is desired to radiate a radio signal in all directions such as widely distributed mobile units. 
AM and FM radio use vertical polarization, while television uses horizontal polarization. Alternating vertical and horizontal polarization is used on satellite communications (including television satellites), to allow the satellite to carry two separate transmissions on a given frequency, thus doubling the number of channels a customer can receive through one satellite. Electronically controlled birefringent devices such as photoelastic modulators are used in combination with polarizing filters as modulators in fiber optics. ### Materials science Strain in plastic glasses In engineering, the relationship between strain and birefringence motivates the use of polarization in characterizing the distribution of stress and strain in prototypes. ### Navigation Main article: Rayleigh Sky Model Sky polarization has been exploited in the "sky compass", which was used in the 1950s when navigating near the poles of the Earth's magnetic field when neither the sun nor stars were visible (e.g., under daytime cloud or twilight). It has been suggested, controversially, that the Vikings exploited a similar device (the "sunstone") in their extensive expeditions across the North Atlantic in the 9th–11th centuries, before the arrival of the magnetic compass in Europe in the 12th century. Related to the sky compass is the "polar clock", invented by Charles Wheatstone in the late 19th century. ## Notes and references • Principles of Optics, 7th edition, M. Born & E. Wolf, Cambridge University, 1999, ISBN 0-521-64222-1. • Fundamentals of polarized light: a statistical optics approach, C. Brosseau, Wiley, 1998, ISBN 0-471-14302-2. • Polarized Light, second edition, Dennis Goldstein, Marcel Dekker, 2003, ISBN 0-8247-4053-X • Field Guide to Polarization, Edward Collett, SPIE Field Guides vol. FG05, SPIE, 2005, ISBN 0-8194-5868-6. • Polarization Optics in Telecommunications, Jay N. Damask, Springer 2004, ISBN 0-387-22493-9. • Optics, 4th edition, Eugene Hecht, Addison Wesley 2002, ISBN 0-8053-8566-5. • Polarized Light in Nature, G. P. Können, Translated by G. A. Beerling, Cambridge University, 1985, ISBN 0-521-25862-6. • Polarised Light in Science and Nature, D. Pye, Institute of Physics, 2001, ISBN 0-7503-0673-4. • Polarized Light, Production and Use, William A. Shurcliff, Harvard University, 1962. • Ellipsometry and Polarized Light, R. M. A. Azzam and N. M. Bashara, North-Holland, 1977, ISBN 0-444-87016-4 • Secrets of the Viking Navigators—How the Vikings used their amazing sunstones and other techniques to cross the open oceans, Leif Karlsen, One Earth Press, 2003. 1. Dorn, R. and Quabis, S. and Leuchs, G. (dec 2003). "Sharper Focus for a Radially Polarized Light Beam". Physical Review Letters 91 (23,): 233901–+. Bibcode:2003PhRvL..91w3901D. doi:10.1103/PhysRevLett.91.233901. 2. Subrahmanyan Chandrasekhar (1960) Radiative transfer, p.27 3. ^ a b 4. Merrill Ivan Skolnik (1990) Radar Handbook, Fig. 6.52, sec. 6.60. 5. Hamish Meikle (2001) Modern Radar Systems, eq. 5.83. 6. T. Koryu Ishii (Editor), 1995, Handbook of Microwave Technology. Volume 2, Applications, p. 177. 7. John Volakis (ed) 2007 Antenna Engineering Handbook, Fourth Edition, sec. 26.1. Note: in contrast with other authors, this source initially defines ellipticity reciprocally, as the minor-to-major-axis ratio, but then goes on to say that "Although [it] is less than unity, when expressing ellipticity in decibels, the minus sign is frequently omitted for convenience", which essentially reverts back to the definition adopted by other authors. 8. 
Sonja Kleinlogel, Andrew White (2008). "The secret world of shrimps: polarisation vision at its best". PLoS ONE 3 (5): e2190. arXiv:0804.2162. Bibcode:2008PLoSO...3.2190K. doi:10.1371/journal.pone.0002190. PMC 2377063. PMID 18478095. 9. "No evidence for polarization sensitivity in the pigeon electroretinogram", J. J. Vos Hzn, M. A. J. M. Coemans & J. F. W. Nuboer, The Journal of Experimental Biology, 1995. 10. Hecht, Eugene (1998). Optics (3rd ed.). Reading, MA: Addison Wesley Longman. ISBN 0-19-510818-3. 11. Clark, S. (1999). "Polarised starlight and the handedness of Life". American Scientist 97: 336–43. Bibcode:1999AmSci..87..336C. doi:10.1511/1999.4.336.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 25, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9031950831413269, "perplexity_flag": "middle"}
http://lowentropymusings.wordpress.com/
# Encrypted USB drives with eCryptfs I’ve got several USB flash drives that I carry around with me on a regular basis, and it’s nice to be able to use those for small backups of important things, in addition to my usual system-wide backups that get dumped onto a couple external hard drives. This way, even if something happens to my computer and the backup drives, (the house burns down, my computer and hard drives are stolen, or an EMP bomb goes off in the garage) I still have the important things with me. Being me, I like these important things to be encrypted so that if I lose the USB drive, it’s not cause for panic. That said, I also use my USB drives for more mundane things, like transferring files between two computers and carrying around my music. Since I’m a Linux geek surrounded by people using Microsoft and Apple products, it’s nice to have something that works with both. After all, it’s hard to sing the praises of Linux effectively when they’re asking questions about why your USB drives won’t work. Today I drew up a list of features that my ideal USB drive would have. 1. Must be capable of storing encrypted data in such a way that it can be mounted as a filesystem. Backup software shouldn’t need to worry about the encryption. 2. The USB drive must be capable of being read and written by Linux, Windows, and Mac OSX. The encrypted data only needs to be read on my personal computers, which all run Linux. 3. If I have free space left over after backing up, I should be able to use it to transfer files, store music, or whatever else I want to use the USB drive for. The second requirement means that the USB drive will have to have at least part of it formatted as a filesystem that everyone can read and write to. I could format it as NTFS, which is what Windows uses now, but I don’t believe Mac OSX allows writes to NTFS filesystems, and I have previously had issues with using NTFS on Linux myself. It may be the case that I could work around any Linux-NTFS issues, but I don’t want to have to convince Mac users to install things or tweak system settings to make my USB drive work. I can’t use Mac’s normal filesystem, HFS+,  either, because Windows users wouldn’t be able to use it without installing new drivers. Finally, I can’t use something like ext4 or Btrfs, because then it would be limited to Linux users only without extra software. FAT32 is not my favourite filesystem, but it’s supported on all the target platforms with no extra software needed, so that will be my filesystem of choice, guaranteeing that whatever systems I’m likely to encounter will be compatible with my USB drive. Turning to the first requirement, I need to be able to store encrypted data such that it can be mounted and used like a regular filesystem. In Linux, there are two basic ways of going about this, you can use block level encryption like dm-crypt/LUKS, which I use for encrypting my hard disk or you can use filesystem-level encryption, like eCryptfs, which I also previously mentioned. To figure out which of these two solutions to use, I’ll look at the final requirement, which is the ability to dynamically resize to avoid wasting space. I could partition my USB drive and format the first part as FAT32 with the rest encrypted using dm-crypt with whatever filesystem I like (encrypted data only needs to be usable in Linux) on top of the encrypted block device. This is not very nice however, as I’m pre-allocating the space before I know what I’m storing. 
If I don’t store much in my backup area, then the space is wasted and I can’t use it for other things. On the other hand, if I suddenly need more backup space, I’d have to manually repartition, expand the filesystem, and so on. That’s annoying, so I won’t go that route. The second alternative, eCryptfs, is a lot better suited to this dynamically resizable storage problem. I can format the entire drive as FAT32, then create a directory to use for eCryptfs backing files. If I then mount an eCryptfs filesystem with that directory as the backing directory, all the files I write into the mount are encrypted before being written onto my USB stick. Now I can just use that as output for my backups, and I have the dynamically resizable encryption scheme that I wanted. It only takes up as much space as the encrypted files, since they are just files stored in the FAT32 filesystem. If I create a new file in the eCryptfs mount, one new file is created in FAT32, and if I delete something in eCryptfs, the file goes away for FAT32 as well. So the final solution is to format the whole USB drive as FAT32, then stack eCryptfs on top of that. Now I can safely carry around my backups and still have room should I need to use a USB drive like a normal person, or should normal people want to use my USB drive. # What I’ve been up to recently As part of my degree requirements, I’ve got to complete a large project within a group of four people. The project goal is self selected, but must be useful and related to software engineering. I’ll be working with Michael Chang, Zameer Manji, and Alvin Tran until early 2014 to add integrity protection to eCryptfs, a cryptographic file system that can be used on Linux. In this post, I’ll present some concepts related to the project along with some details about the project itself. ### Confidentiality and Integrity Two important concepts in computer security are those of confidentiality and integrity. There is also usually a third concept mentioned alongside these, that of availability, but I’m only mentioning it here for completeness. Confidentiality protection attempts to prevent certain parties from reading information while allowing other parties to access the information. In the case of a cryptographic file system like eCryptfs, this is done by encrypting the files before writing them to disk, and decrypting the files when they are needed later. This could be done manually, but it is much easier and less error prone to have the file system handle this sort of thing than to try to encrypt all sensitive information by hand. Integrity protection attempts to ensure that information has not been unintentionally changed. This might entail actually trying to prevent modifications to the information, or it may simply indicate when the information has been changed. Cryptographically, this can be done using a Message Authentication Code (MAC), which is a short binary string that can be easily calculated with a file and a key, but cannot be calculated without both. Additionally, if the file changes then the calculated MAC will be different. Anyone knowing the key and having access to the file can calculate the MAC and compare it to one that was calculated and stored earlier, and if the two are different, then the file must have been changed. ### Current state of eCryptfs The eCryptfs file system is a stacking file system, which means that it relies on a lower file system to handle stuff like I/O and buffering, and just manages file encryption and decryption. 
Currently, that is all it manages, as it does not include any integrity protection. The contents of files are made unreadable to anyone without the correct key, but it is still possible to modify those files in partly predictable ways, as presented below. There is already a wide user base for eCryptfs, with Ubuntu and it’s derivatives using it to provide the encrypted home directory feature, and within Google’s ChromeOS. ### Attack against CBC mode Cipher Block Chaining (CBC) is one of the most common modes of operation for block ciphers, and is used currently by eCryptfs. In this mode of operation, each block of plaintext is XORed with the previous ciphertext block before encryption. This ensures that the same block of plaintext won’t encrypt to the same ciphertext, unless the previous ciphertext block is the same as well. A one-block initialization vector stands in for the previous ciphertext block during the first encryption. CBC decryption just reverses the process, first decrypting the ciphertext block, then XORing it with the previous ciphertext block. Operations can be expressed in the following way (Taken from Wikipedia) Encryption: $C_i = E_K(P_i \oplus C_{i-1}), C_0 = IV$ Decryption: $P_i = D_K(C_i) \oplus C_{i-1}, C_0 = IV$ Now let’s perform the attack. Let’s say we want to change a certain plaintext block $P_n$ into a different plaintext ${P_n}'$ by flipping some bits. We’ll denote this change as $\Delta$. That is, ${P_n}' = P_n \oplus \Delta$ It turns out that if we don’t care what happens to the previous plaintext block, $P_{n-1}$, all we have to do is replace $C_{n-1}$ with ${C_{n-1}}' = C_{n-1} \oplus \Delta$ We can substitute this into the decryption formula above to see what will happen. ${P_n}' = D_K(C_n) \oplus {C_{n-1}}'$ ${P_n}' = D_K(C_n) \oplus {C_{n-1}} \oplus \Delta$ ${P_n}' = P_n \oplus \Delta$ This is an integrity issue, as an attacker can now modify files without ever knowing the key used to encrypt them. It’s also not guaranteed that this modification is detectable, depending on whether the previous block can be checked for validity. If it can be checked, great, but that’s just another form of integrity protection, and the project I’m working on aims to implement integrity protection regardless of the data stored. If it can’t be checked for correctness, or is ignored (maybe it’s a different record in a database) then the modification will go unnoticed. ### Galois Counter Mode Galois Counter Mode (GCM) is another mode of operation for block ciphers, but in addition to encryption, also produces a piece of data known as an authentication tag. This tag acts as a MAC taken over the data that was encrypted. An attacker could still modify the ciphertext, but now the resultant changes to the plaintext will invalidate the tag, making them detectable. The attacker cannot modify the tag so that it validates the new data, because calculating the tag requires the cryptographic key that was used to encrypt the data, and the attacker does not know this key. Another benefit to GCM is speed. It’s true that the same effect on security could be had by encrypting the data and calculating a MAC separately, but that requires two passes of the file, one for each operation. GCM does both in one pass over the file, speeding things up. This is important in a file system, as you’d rather have access to your files quickly. The project aims to implement GCM as the mode of operation for eCryptfs, thus providing both integrity and confidentiality protection. 
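To make the CBC weakness and the GCM fix concrete, here is a small self-contained demonstration. It assumes the pycryptodome package (any AES implementation exposing CBC and GCM would do); the key, IV, nonce and messages are invented for the demo and have nothing to do with eCryptfs's actual on-disk format.

```python
# Requires: pip install pycryptodome  (assumed; any AES library with CBC and GCM works)
import os
from Crypto.Cipher import AES

key = os.urandom(16)
iv = os.urandom(16)
plaintext = b"padding padding " + b"pay alice $ 100 "   # two 16-byte blocks

ciphertext = AES.new(key, AES.MODE_CBC, iv=iv).encrypt(plaintext)

# The attacker wants block 1 to decrypt to different text, without knowing the key.
target = b"pay mallory $999"
delta = bytes(a ^ b for a, b in zip(plaintext[16:32], target))

# Flip the corresponding bits in the *previous* ciphertext block (block 0).
tampered = bytes(c ^ d for c, d in zip(ciphertext[:16], delta)) + ciphertext[16:]

# Decryption here just plays the role of the legitimate user reading the file back.
recovered = AES.new(key, AES.MODE_CBC, iv=iv).decrypt(tampered)
print(recovered[16:32])   # b'pay mallory $999' -- the attacker-chosen plaintext
print(recovered[:16])     # block 0 is now garbage, but nothing flags the change

# The same tampering against AES-GCM is detected: the authentication tag no longer verifies.
nonce = os.urandom(12)
gcm_ct, tag = AES.new(key, AES.MODE_GCM, nonce=nonce).encrypt_and_digest(plaintext)
bad = bytes(c ^ d for c, d in zip(gcm_ct[:16], delta)) + gcm_ct[16:]
try:
    AES.new(key, AES.MODE_GCM, nonce=nonce).decrypt_and_verify(bad, tag)
except ValueError:
    print("GCM: tampering detected")
```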
Integrity protection was something the original developers wanted to have from the beginning, but didn't have the time to implement. I'm proud to be helping to create the first widely used integrity protected cryptographic file system.

# Password Restrictions Really Bug Me

Warning, rant ahead. Maybe it's just me, but when I hit restrictions on what I can use as my password, I get annoyed. Lower limits, such as "at least 6 characters long", are fine, but there are several things I can see no reason for, and they make me doubt the competence of the programmers involved in the system. When I realised that my bank's website did all of the things mentioned in this post, I was really annoyed. Thousands of dollars of my money are sitting there, just one string of characters away from an attacker getting it all, and of course, reading the fine print of their security agreement reveals that it's not their responsibility if my password or reset questions are compromised.

### Character restrictions

Passwords are passwords, not HTML, not shell scripts, not anything that needs to be parsed by machines. They should be treated as opaque sequences of bytes, and the only thing that should be done with them while logging in is salting and hashing. When I see restrictions like "To preserve online security, your information cannot contain unacceptable symbols or words (for example, "%", "<", "{", "www.", "ftp", "https", etc.)", I'm astounded that they let these people touch code at all. There is no reason at all that they should need to check for that sort of thing in passwords. Passwords should never be displayed, not on their website, not in email, not anywhere. There should be no cause to worry about XSS attacks, SQL injection, or any other sort of incomplete mediation attack via passwords if they're properly handled as opaque data.

I'm not complaining about entropy figures here. A 12 character string composed of random alphanumeric characters would have approximately 71 bits of entropy, and adding in the printable punctuation and whitespace characters on a standard US keyboard only adds roughly 7 bits of entropy to that figure for a random 12 character string. The reason I'm annoyed by these restrictions is that they hint at deeper problems in how the password is handled by the system. They also impede those of us who do want to use "special" characters in our passwords for whatever reason, from non-ASCII characters in a preferred language to an obsession over password entropy.

### Upper limit on length

I would have thought that the days of small fixed size strings were behind us, but apparently not. My bank puts an upper limit of 12 characters on password length, which precludes using an easily memorable passphrase. They put a similarly low limit of 25 characters on the password reset questions and answers. One of the few things they did right is allowing me to write my own security question, as I definitely didn't want to use the default questions. I then got cut off halfway through writing a short sentence by this low character limit. Is this due to some aspiring database administrator learning that CHARs were faster than VARCHARs, and deciding to speed up logging in by a few milliseconds? Is the process of logging into an account, or resetting a password, really where the bottlenecks are? If the problem doesn't lie with the storage, but with the login system itself, then it's time the programmers learned about dynamic allocation.
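To make the storage argument concrete, here is a quick sketch using only Python's standard library. It is my own illustration (I have no idea what the bank actually runs, which is rather the problem): however long or "special" the password, the salted hash you store is the same fixed size.

```python
import hashlib
import os

def hash_password(password, salt=None, iterations=200_000):
    """Return (salt, derived_key); this pair is what gets stored, never the password."""
    salt = salt or os.urandom(16)
    derived_key = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, iterations)
    return salt, derived_key

for pw in ["hunter2",
           "correct horse battery staple",
           "p@ss<wörd>{www.ftp.https}%" * 3]:   # "forbidden" symbols are no trouble
    salt, dk = hash_password(pw)
    print(f"{len(pw):3d}-character password -> {len(salt) + len(dk)}-byte record")
# Every password, whatever it looks like, produces a 16-byte salt plus a 32-byte hash.
```

Whether PBKDF2 is the right key-derivation function is a separate discussion; the point is only that nothing about the password's length or character set ever needs to reach the storage layer.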
In addition to making it impossible to use longer passwords, this upper length limit also hints at improper handling of the passwords, as properly salted and hashed passwords would be constant length, and the length of the original password would be completely irrelevant to the storage requirements.

### Required character classes

This sort of attempt to increase security is what leads to users choosing "Password1" instead of "password" to protect their life savings. No system is idiot-proof, so rather than treating the symptoms by attempting to programmatically enforce good passwords, try to treat the problem by educating the idiots on how to choose a good password, and why they should care. Suggest using a diceware passphrase, or use my password suggestions. Of course, this is only effective if users actually can use good passwords, so fixing the first two issues is a priority.

While I'm ranting about programmatically enforcing password strength, I should mention that the only checks I think are valid here are checking against a list of common passwords, and checking against information like the account name or other public details associated with the account. These are going to be among the first things a social engineering attacker would try, and the common workarounds for required character classes are not going to stop them, making that method of enforcement worthless.

### Security questions

Not really a password restriction, but it's related, and it ticks me off. The usual culprits, along the lines of "what was your first [job, school, pet's name, car]", just train people to think this information is unguessable. After all, if so many places use the same questions, there must be a good reason, right? I did a quick experiment where I looked at the public information about some of my friends on Google+, searched through their post history, and if I could find a link to their website, looked at that as well. In many cases, I could find answers to at least one of those questions just from these sources of information. The current system might as well be an all-you-can-eat buffet for social engineers. Perhaps more obscure questions could be used by default, or perhaps websites should start using alternate methods of authentication, for example resetting via SMS or OpenID. Approached from an alternate perspective, why should we even need to give answers to these questions in the first place? What business do random websites have knowing trivia about me, and more importantly, why is it the same default trivia that protects my bank account? </rant>

# Killing Machines

Automated systems are already doing much of our work for us, making everyday decisions to remove the burden from humans: anything from Google's licensed self-driving car navigating the roads alongside human-guided vehicles to the computers trading stocks on behalf of investors. Both of these are technologies that do what humans can do, but better, faster, or more reliably. Humans suffer lapses of concentration and fatigue, and do so unpredictably, whereas computers don't. Computers have their own set of interesting problems, like the priority inversion bug that kept resetting Mars Pathfinder, but those can be found and removed from systems. The question that springs to mind for me is "Is there anything we shouldn't let a computer decide, even if it could make that decision faster or more reliably than a human?" My answer is that a machine should not be allowed to decide whether to kill a human.
I’m not against computers aiding humans in acts of war, that’s just technological progression, and it’s happening already. Modern fighter jets are extremely unstable, to aid in maneuverability, so much so that a human pilot could not possibly stabilize the aircraft, so the jets are stabilized by computer. The human pilot still has the decision of whether or not to fire the weapons, and at what targets. I’d also like to point out that I’m not mentioning anything about sentient computers when I say “decision”. Computers make decisions every time they execute conditional statements, without any sort of capacity for sentience or consciousness. A machine could be programmed to identify humans and kill some subset of them without human intervention, and it would be making a decision whether or not to kill humans. A machine that identified humans and then asked whether or not to kill them would not be making the decision to kill a human, as it would be passed off to a human operator. By giving a machine the decision to kill a human being, we have created something capable of autonomously waging war. Most people consider war to be something to be avoided and minimized if possible. It is also known that the more indirect the method of killing, the easier it will be for a person to rationalize it, and the less aversion to performing the killing they will have. Psychologically, it is much easier to kill someone by pressing a button that launches a missile than to shoot someone that you can see, and shooting someone is psychologically easier than stabbing someone to death. If we remove the decision entirely from humans, the killing is now out of sight, out of mind, making it much easier for mass killings to take place without psychological consequence for those waging war. One problem with this answer is how to define “deciding to kill”. Through various means, computers can control much of how we view the world. For example, information people view on the Internet is managed by computers. If the search engine ratings on some website are low, fewer people will be impacted by that website. If the search engines raise the rating, more people are likely to see it. By controlling what we see, computers could indirectly control what we think, and could theoretically manipulate one human into killing another. As such, it is likely an exercise in futility to try to prove that a computer could not decide to kill a human, even though that scenario seems to be very unlikely at this point in time. # SSH X Forwarding Recently I was messing about with X forwarding through SSH, and I realized something that perhaps should have been obvious, but caught me off guard, so I’ll share it here. The scenario involves 2 computers and a fairly resource intensive graphical application that could take advantage of 3D acceleration. The computers involved were a desktop computer with a reasonable graphics card, and a netbook with whatever graphics capabilities were built into the motherboard, but no 3D acceleration. I wanted to take advantage of the netbook screen, effectively using it as a second monitor, but running the application on the more powerful desktop. I figured I should be able to forward the X session over SSH, and end up with the netbook displaying it, but all the hard work done on my desktop. Unfortunately, when I set this up, I found the application could not take advantage of 3D acceleration anymore. The reason for this is that when an X session is forwarded, only the X traffic is transferred across the network. 
This traffic consists of things like “draw a rectangle here” or “draw this bitmap there”. Where I had gone wrong was assuming that my graphical application would have the 3D acceleration done on the desktop’s fully capable graphics card, then have the resulting bitmap sent over the network. In reality, the netbook was doing all the graphical rendering, leading to a lack of 3D acceleration capability. Despite the fact that I couldn’t get my 3D acceleration, this is actually a much smarter way of doing things, as it significantly reduces the network traffic involved. My netbook’s screen size is 1024×768, and let’s assume I wanted 30fps and 32 bit colours. The resultant network traffic would be (1024×768 pixels/frame)x(30frames/second)x(32 bits/pixel), coming out to a little over 750Mb/s just for the image going one way. If I recall correctly, the actual network load while I was doing this was a little over 100Mb/s. The lesson learned here is that hardware acceleration is done on the display end (the X server) rather than the client end of an X connection. Tagged hardware acceleration, SSH, X forwarding. # Turkey Trivia It’s Thanksgiving here in Canada, and so instead of the usual range of topics, here’s some trivia about that favourite food, the turkey. 1. Canadians consumed 143.4 million kg (Mkg) of turkey in the year 2011. 2. At Thanksgiving 2011, 3.0 million whole turkeys were purchased by Canadians, equal to 32% of all whole turkeys that were sold over the year. 3. At Christmas 2011, 4.4 million whole turkeys were purchased by Canadians, equal to 46 % of all whole turkeys that were sold over the year. 4. Turkeys are omnivorous. Most of their diet is grass and grain, but they will also eat insects, berries and small reptiles. 5. The wild turkey’s bald head can change color in seconds with excitement or emotion. The birds’ heads can be red, pink, white or blue. 6. Turkeys see in color and have excellent daytime vision that is three times better than a human’s eyesight and covers 270 degrees, but they have poor vision at night. 7. The long, red fleshy growth from the base of the beak that hangs down over the neck of a turkey is called the snood. 8. The fastest time to carve a turkey is 3 min 19.47 sec and was achieved by Paul Kelly (UK) at Little Claydon Farm, Essex, UK, on 3 June 2009. 9. Turkeys originated in North and Central America, and evidence indicates that they have been around for over 10 million years. 10. Wild turkeys can fly for short distances at up to 88 kilometres per hour. Wild turkeys are also fast on the ground, running at speeds of up to 40 kilometres per hour. Info taken from Tagged canada, thanksgiving, turkey. # They’re Always Watching Right now, as you read this page, you are secretly being watched. Ad networks and analytics companies are tracking you across the Web, using a variety of techniques designed for one purpose: knowing everything about you. There is an entire industry built upon identifying and tracking web users, and chances are you haven’t heard of most of them. This post will discuss some of the ways they track you, how you can make it harder for them, and why you should. You may be wondering why tracking is so bad, and why you should care. You might think that there are so many people that picking you out of the crowd is practically impossible. It’s not. The problem is that you don’t know who has your data, you don’t know what they have, and you can’t control how they use it. 
The data associated with you might seem fairly innocuous, such as the IP addresses you’ve used, what sort of sites you visit, or your email address, but in the wrong hands it can be rather dangerous.  Potential employers could see your search history, including not only search terms, but what you clicked on. Phishers and identity thieves could launch targeted attacks using information off your social networking sites and your location from the IP addresses you’ve used. Commerce sites could adjust their prices and charge you more for things you’re interested in, assuming you’re still likely to buy them. From the point of view of the people doing the tracking, there are two main goals. First they have to get as much information as they can from each place you visit, and second, they have to link it together to create a profile of you. This allows them to build the detailed view of your Web history that they can then sell to anyone willing to pay them for it. ### Gathering Information Every time you make a request to a web server, your browser sends a bunch of information along with that request. It will generally send an “agent string” identifying the browser you use, the operating system you’re using it on, and some other information such as version numbers. It also sends a header telling the server what the last web page you looked at was, one telling the server what languages you prefer, and if you connect through an HTTP proxy, it will add a header with your original IP address. On top of all this, any cookies previously set by the domain (more on this later) will be sent to the server. This is quite a lot of information, and it may be sent out multiple times, not only for the web page itself, but also for any extra resources in the page, such as images or ad banners. This is one major source of information for the tracking companies. All they have to do is arrange for some resource hosted on their servers to be included in the web page you visit, and all that information will be sent to them when the page loads. This isn’t hard, it happens every time you see an advertisement on a page you visit. Even if you don’t see any ads on a page, you’re still not in the clear. Web analytics companies will often place transparent 1×1 pixel images, also known as “web bugs”, in the pages they keep stats on. Every time someone loads the page, their browser sends all that information to the tracking company along with their request for these images. So how do you stop this from happening? There are two ways to do this, either you can change or reduce the information in the headers, or better, you can avoid fetching the third party content altogether. If you never request things from their servers, it becomes much harder to track you. Two browser extensions that help you avoid connecting to these servers are Adblock Plus for Firefox and Chrome, and Do Not Track Plus For Firefox, Chrome, Safari and IE. These extensions attempt to keep you from loading the third party resources that come from advertisers and Web analytics companies without affecting anything else. If the site you’re connecting to directly is tracking you, then they won’t help, but they do a pretty good job at blocking third party tracking. The other method for reducing the information given out is to avoid giving it out, or at least make it meaningless. There are extensions for Firefox and Chrome (And probably others) that will allow you to suppress sending the URL of the last page you visited along with each request. 
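To see concretely what a single tracking-pixel request hands over, here is a short sketch using only Python's standard library. It is my own illustration, not anything a particular ad network runs, and the header values are invented; a real browser fills in its own, which is exactly the information described above.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
import threading
import urllib.request

class EchoHeaders(BaseHTTPRequestHandler):
    """Throwaway handler that just prints whatever headers arrive with a request."""
    def do_GET(self):
        print(f"--- request for {self.path} ---")
        for name, value in self.headers.items():
            print(f"{name}: {value}")
        self.send_response(200)
        self.end_headers()

    def log_message(self, format, *args):
        pass  # silence the default per-request logging

server = HTTPServer(("127.0.0.1", 0), EchoHeaders)  # port 0 = pick any free port
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# Simulate the kind of request a web bug receives; the values mirror what a
# real browser would send of its own accord.
request = urllib.request.Request(
    f"http://127.0.0.1:{port}/pixel.gif",
    headers={
        "User-Agent": "Mozilla/5.0 (X11; Linux x86_64) ExampleBrowser/1.0",
        "Referer": "http://example.com/the-article-you-were-just-reading",
        "Cookie": "tracker_id=abc123",
        "Accept-Language": "en-CA,en;q=0.8",
    },
)
urllib.request.urlopen(request).read()
server.shutdown()
```

Every one of those header lines arrives with every image, script and iframe the page pulls in, which is why blocking the third-party fetch in the first place is the more effective defence.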
To protect yourself from trackers identifying where you live based on your IP address, you can use something such as Tor to hide your location. (Remember what I said about proxies sending the original IP address along with the request. Tor is safer, if slower) Other extensions exist that will allow you to modify the requests you send to web servers by removing or changing other headers.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 12, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9483805298805237, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/21919/pushing-with-a-lorentz-contracting-stick
# Pushing with a lorentz contracting stick If I use a stick to push and accelerate an object, my hand pushes one end of the stick distance $x$, while the other end of the stick pushes the object distance $y$. Distance $y$ is smaller than distance $x$, because of Lorentz contraction of the stick. My hand does work $Fx$. Work $Fy$ is done on the object. Energy $F \cdot(\text{Lorentz contraction of the stick})$ seems to disappear. So I'm asking, what happens to the "missing" energy? EDIT: In this thought experiment pushing causes the object and the stick to accelerate, which causes the stick to Lorentz-contract. In extreme case the length of the stick becomes zero, which means my hand moved a distance of the stick's length kind of unnecessarily. Shorter stick saves energy. EDIT2: I noticed that "lost" energy approaches zero, when force approaches zero. This suggests the energy loss is linked to deformation of the stick. EDIT3: This very simple problem may be very difficult to understand, so I ask this way: A good push rod is rigid. Relativity says rigid push rods don't exist. So what kind of energy goes into a push rod, that is as rigid as relativity allows, when we use the push rod, using moderate force, and the speed that the push rod is accelerated to, is relativistic? - 1 Why would y be less than x? Lorentz contraction just makes the whole stick shorter. Perhaps I'm not understanding the setup properly? – Nathaniel Mar 5 '12 at 18:34 See edit ....... – kartsa Mar 6 '12 at 2:02 Edit2: thats cos $F=d(\gamma mv)/dt$. Recognize the $\gamma$? Same one in length contraction. – Manishearth♦ Mar 6 '12 at 7:59 I see. Interesting question. My intuition says it's $\text{work} = \text{force} \times \text{distance}$ that breaks down at relativistic speeds, but I'll have to think about it. – Nathaniel Mar 9 '12 at 15:08 ## 2 Answers $y=x$ For a constant pushing velocity, lorentz contraction is constant. It's just a smaller, rigid rod, solve classically. V2: The missing energy went into accelerating the stick, of course. I'm not sure if you even are allowed to use an accelerating situation in SR. - See the edit .. – kartsa Mar 6 '12 at 2:02 @kartsa You forgot about energy lost in acceleration. See above. – Manishearth♦ Mar 6 '12 at 3:01 If the stick has mass, then there happens a force decrease in the stick. But I'm interested about the distance part of energy = force * distance – kartsa Mar 6 '12 at 8:04 I'm saying that aside from the difference in forces, you're increasing kinetic energy of the rod. That accounts for a loss of work done. You can't separately conserve types of energy. – Manishearth♦ Mar 6 '12 at 8:07 Is kinetic energy of a rod that was used to push an object to the speed 0.99 c, by using a force of 10 Newtons, larger than kinetic energy of a rod that was used to push an object to the speed 0.99 c by using a force of 5 Newtons? If not, then the "lost" energy did not turn into kinetic energy of the rod. We agreed that there is a lost energy that is proportional to force, didn't we? – kartsa Mar 6 '12 at 11:09 show 2 more comments As velocity increases, the flow of energy through the stick becomes time dilated, which means there is an increasing amount of energy in the stick. The type of energy is pressure energy. When hand stops pushing, the object is still pushed by the other end of the stick. If the stick is massless, and the pushing force decreases slowly, all energy from the stick goes into the object. -
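A note for later readers (my addition, not part of the original thread): the $F = d(\gamma m v)/dt$ remark in the comments leads to the standard relativistic work-energy relation for a point mass $m$ accelerated from rest,

$$\int F\,dx \;=\; \int \frac{d(\gamma m v)}{dt}\,v\,dt \;=\; \int_0^{v} m\,\gamma(u)^3\,u\,du \;=\; (\gamma - 1)\,m c^2 ,$$

so the kinetic energy the object ends up with depends only on its final speed, not on how hard it was pushed. The work $Fy$ delivered at the object's end accounts for that kinetic energy, and the extra work $F(x-y)$ done by the hand is energy that resides in, or passes through, the rod itself, which is in line with what the two answers suggest.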
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 11, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9320618510246277, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/183248/convergence-in-l1-problem
# Convergence in $L^1$ problem. Problem: Let $f \in L^1(\mathbb{R},~\mu)$, where $\mu$ is the Lebesgue measure. For any $h \in \mathbb{R}$, define $f_h : \mathbb{R} \rightarrow \mathbb{R}$ by $f_h(x) = f(x - h)$. Prove that: $$\lim_{h \rightarrow 0} \|f - f_h\|_{L^1} = 0.$$ My attempt: So, I know that given $\epsilon > 0$, we can find a continuous function $g : \mathbb{R} \rightarrow \mathbb{R}$ with compact support such that $$\int_{\mathbb{R}} |f - g|d\mu < \epsilon.$$ We can then use the inequality $|f - f_h| \leq |f - g| + |g - g_h| + |g_h - f_h|$ to reduce the problem to the continuous case, so to speak, since the integral of the first and last terms will be $< \epsilon$. But now I'm stuck trying to show that $$\lim_{h \rightarrow 0} \|g - g_h\|_{L^1} = 0.$$ I tried taking a sequence $(h_n)_{n \in \mathbb{N}}$ converging to $0$ and considering $g_n := g_{h_n}$, but I don't have monotonicity and the convergence doesn't seem to be dominated either, so I don't know what to do. Any help appreciated. Thanks. - 1 It follows from the uniform continuity of $g$ on its support. For a detailed proof, see Rudin, Real and complex analysis, Theorem 9.4. It also appears on Brezis' book (at least in the italian edition), but the main step of the proof is left to the reader! – Siminore Aug 16 '12 at 15:41 ## 2 Answers By your construction, $g$ is continuous and compactly supported. Let $K$ be the support of $g$, and let $K_h=K\cup (h+K)$. Then we have $$\int|g(x)-g(x-h)|\,\mathrm{d}x\leq |K_h|\|g-g_h\|_{L^\infty(K_h)}.$$ For all $h>0$ sufficiently small we have $K_h\subset K_1$, and you can invoke uniform continuity of $g$. - I think you can use the fact that $f_h$ is in $L^1$ by translation invariance and the fact that $|f-f_h|\leq|f|+|f_h|$. So now you have a function $g_n=|f-f_h|$ which converges to 0 and is bounded by an integrable function. - 1 What is the integrable function that bounds $g_h$? – timur Aug 16 '12 at 15:47 1 What is a dominant function for $|f_h|$, independent of $h$? Of course it is a multiple of $|f|$ when $f$ is continuous, but then your suggestion reduces to the continuous case. – Siminore Aug 16 '12 at 15:48 I believe $|f|+|f_h|$ is integrable. – Dave Aug 16 '12 at 16:01 But $|f_h|$ is not fixed.. – timur Aug 16 '12 at 20:35
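For completeness, here is the uniform-continuity step the accepted answer invokes, written out (this is an addition, not part of the original thread). Since $g$ is continuous with compact support $K$, it is uniformly continuous, so given $\varepsilon > 0$ there is $\delta \in (0,1)$ with $|g(x) - g(x-h)| < \varepsilon$ whenever $|h| < \delta$. For such $h$ both $g$ and $g_h$ vanish outside $K_1 = \{x : \operatorname{dist}(x,K) \le 1\}$, which has finite measure, so

$$\|g - g_h\|_{L^1} = \int_{K_1} |g(x) - g(x-h)|\,d\mu \le \mu(K_1)\,\varepsilon.$$

Combined with $\|f - g\|_{L^1} < \varepsilon$ and $\|g_h - f_h\|_{L^1} = \|g - f\|_{L^1} < \varepsilon$ (translation invariance), this gives $\|f - f_h\|_{L^1} < (2 + \mu(K_1))\,\varepsilon$ for all $|h| < \delta$, which is what was wanted.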
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 32, "mathjax_display_tex": 4, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.952964723110199, "perplexity_flag": "head"}
http://mathoverflow.net/questions/39762/vopenkas-principle-at-small-cardinals
## Vopěnka's Principle at Small Cardinals

I'm trying to understand Vopěnka's Principle, which is a large cardinal axiom. One version of the principle is that there does not exist a proper class of directed graphs such that there are no homomorphisms between any two graphs in the class. This is a large cardinal axiom because it implies the existence of a proper class of measurable cardinals. If Vopěnka's principle is true, then it is not provable in ZFC, since it implies the consistency of ZFC. (Its falsity may be provable in ZFC.) I presume that since it functions as a large cardinal axiom, it must fail at small cardinals (i.e. cardinals whose existence is provable within ZFC, such as $\aleph_\omega$). Is there, for any such cardinal $\kappa$, an explicit construction of a set of graphs of size $\kappa$ for which Vopěnka's Principle fails, in other words a set in which there are no homomorphisms between any two of its graphs? I can come up with a construction for $\aleph_0$, but that's it. (For $\aleph_0$, the set of directed cycle graphs with a prime number of vertices does the trick, I think.)

## 2 Answers

If $\kappa$ is almost huge (another large cardinal property), then for each family of size $\kappa$ of graphs of size $<\kappa$, one of the graphs embeds into another one from the family. See this post by Harvey Friedman. Looking at $\kappa$-many graphs of size $<\kappa$ seems to be the right set analog to a class of graphs (that are sets). Could you clarify whether your family is of size $\kappa$, or the graphs?

Added after arsmath's comment: If you scroll down in Friedman's note, he says that if $\kappa$ is Vopenka, then the set of extendible cardinals below $\kappa$ is stationary in $\kappa$. Extendibility implies measurability (this should be in Kanamori's "The Higher Infinite") and hence there is a measurable cardinal below every Vopenka cardinal. I think this qualifies as "no small cardinal is Vopenka".

- The family is of size $\kappa$. I guess what I'm wondering is if you can prove in ZFC that small, everyday cardinals (I don't know what the technical term is) are *not* Vopěnka cardinals. – arsmath Sep 23 2010 at 16:17
- Thanks! That does answer the original question. I was hoping for an explicit construction. I edited the question accordingly. – arsmath Sep 23 2010 at 17:19

I found a reference for an explicit construction, which I am recording here for posterity: Chapters 2G and 6A of Locally Presentable and Accessible Categories by Adámek and Rosický.
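A quick check of the $\aleph_0$ example mentioned in the question (my addition): a graph homomorphism from the directed cycle $\vec{C}_m$ to $\vec{C}_n$ must advance by one vertex at each step, and going once around $\vec{C}_m$ forces $m \equiv 0 \pmod n$, so such a homomorphism exists iff $n \mid m$. For two distinct primes $p$ and $q$ neither divides the other, so the set $\{\vec{C}_p : p \text{ prime}\}$ indeed admits no homomorphisms between any two of its members.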
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 15, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9512376189231873, "perplexity_flag": "head"}
http://mathforum.org/mathimages/index.php?title=Straight_Line_and_its_construction&diff=28278&oldid=13907
# Straight Line and its construction ### From Math Images (Difference between revisions) | | | | | |----------------------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-------------------------------------------------------------------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | | | Current revision (13:21, 29 November 2011) (edit) (undo) (Reducing length of Image Name.) | | | (26 intermediate revisions not shown.) | | | | | Line 1: | | Line 1: | | | - | {{Image Description | + | {{Image Description Ready | | - | |ImageName=How to draw a straight line without a straight edge | + | |ImageName=Drawing a Straight Line | | - | |ImageSize=280px | + | | | | |Image=S35-1.jpg | | |Image=S35-1.jpg | | - | |ImageDescElem=What is a straight line? How do you define straightness? How do you construct something straight without assuming you have a straight edge? 
These are questions that seem silly to ask because they are so intuitive. We come to accept that straightness is simply straightness and its definition, like that of point and line, is simply assumed. However, compare this to the way we draw a circle. When using a compass to draw a circle, we are not starting with a figure that we accept as circular; instead, we are using a fundamental property of circles that the points on a circle are at a fixed distance from the center. This page explores the properties of a straight line and hence its construction. | + | | | - | |ImageDesc=== What is a straight line? == | + | | | - | {{{!}}border="1" | + | | | - | {{!}}align="center"{{!}}[[Image:Straightline.jpg|center|border|400px]]Image 1{{!}}{{!}}Today, we simply define a line as a one-dimensional object that extents to infinity in both directions and it is straight, i.e. no wiggles along its length. But what is straightness? It is a hard question because we have the picture in our head and the answer right there under our breath but we simply cannot articulate it. | + | | | | | | | | | | + | =Introduction= | | | | + | What is a straight line? How do you define straightness? The questions seem silly to ask because they are so intuitive. We come to accept that straightness is simply straightness and its definition, like that of point and line, is simply assumed. However, why do we not assume the definition of circle? When using a compass to draw a circle, we are not starting with a figure that we accept as circular; instead, we are using a fundamental property of circles, that the points on a circle are at a fixed distance from the center. This page explores the answer to the question "how do you construct a straight line without a straight edge?" | | | | | | | - | In Euclid's book '''''Elements''''', he defined a straight line as "lying evenly between its extreme points" and it has "breadthless width." The definition is pretty useless. What does he mean if he says "lying evenly"? It tells us nothing about how to describe or construct a straight line. So what is a straightness anyway? There are a few good answers. For instance, in the Cartesian Coordinates, the graph of <math>y=ax+b</math> is a straight line. In addition, we are most familiar with another definition is the shortest distance between two points is a straight line. However, it is important to realize that the definitions of being "shortest" and "straight" are different from that on a flat plane. For example, the shortest distance between two points on a sphere is the the "great circle", a section of a sphere that contains a diameter of the sphere, and great circle is straight on the spherical surface. | + | =What Is A Straight Line?--- A Question Rarely Asked.= | | | | + | {{{!}} | | | | + | {{!}}colspan="2"{{!}}Today, we simply define a line as a one-dimensional object that extents to infinity in both directions and it is straight, i.e. no wiggles along its length. But what is straightness? It is a hard question because we can picture it, but we simply cannot articulate it. | | | | | | | | | | | | - | For more properties on staight line, you refer to the book '''''Experience Geometry''''' by zzz. | + | In Euclid's book '''''Elements''''', he defined a straight line as "lying evenly between its extreme points" and as having "breadthless width." This definition is pretty useless. What does he mean by "lying evenly"? It tells us nothing about how to describe or construct a straight line. 
| | - | {{!}}align="center"{{!}}[[Image:SmallGreatCircles 700.gif|center|border]] | + | | | | | | | | | | | | | | | + | So what is a straightness anyway? There are a few good answers. For instance, in the {{EasyBalloon|Link=Cartesian Coordinates|Balloon=A Cartesian coordinate system specifies each point uniquely in a plane by a pair of numerical coordinates, which are the signed distances from the point to two fixed perpendicular directed lines, measured in the same unit of length.<ref>Wikipedia (Cartesian coordinate system)</ref>}}, the graph of <math>y=ax+b</math> is a straight line as shown in '''Image 1'''. In addition, the shortest distance between two points on a flat plane is a straight line, a definition we are most familiar with. However, it is important to realize that the definitions of being "shortest" and "straight" will change when you are no longer on flat plane. For example, the shortest distance between two points on a sphere is the the {{EasyBalloon|Link="great circle"|Balloon=A section of a sphere that contains a diameter of the sphere, and great circle is straight on the spherical surface}}as shown in '''Image 2'''. | | | | | | | | | + | Since we are dealing with plane geometry here, we define straight line as the curve of <math>y=ax+b</math> in Cartesian Coordinates. | | | | | | | | | | | | - | Image 2 | + | For more comprehensive discussion of being straight, you can refer to the book '''''Experiencing Geometry''''' by David W. Henderson. | | | | + | | | | | + | Take a minute to ponder the question: "How do you produce a straight line?" Well light travels in straight line. Can we make light help us to produce something straight? Sure but does it always travel in straight line? Einstein's theory of relativity has shown (and been verified) that light is bent by gravity and therefore, our assumption that light travels in straight lines does not hold all the time. Well, another simpler method is just to fold a piece of paper and the crease will be a straight line. However, to achieve our ultimate goal (construct a straight line without a straight edge), we need a {{EasyBalloon|Link=linkage|Balloon=It is defined as a series of rigid links connected with joints to form a closed chain, or a series of closed chains. Each link has two or more joints, and the joints have various degrees of freedom to allow motion between the links.<ref>Wikipedia (Linkage (mechanical))</ref>}} and that is much more complicated and difficult than folding a piece of paper. The rest of the page revolves around the discussion of straight line linkage's history and its mathematical explanation. | | | | + | {{!}}- | | | | + | {{!}}align="center"{{!}}[[Image:Straightline.jpg|center|border|300px]] '''Image 1'''{{!}}{{!}}align="center"{{!}}[[Image:SmallGreatCircles 700.gif|center|border|400px]]'''Image 2''' <ref>Weisstein</ref> | | | {{!}}} | | {{!}}} | | | | | | | - | == The Quest to Draw a Straight Line == | + | = The Quest to Draw a Straight Line = | | | | | | | - | ==='''The Practical Need'''=== | + | =='''The Practical Need'''== | | | | | | | - | {{{!}}border="1" | + | {{{!}} | | - | {{!}}Now having defined what a straight line is, we have to figure out a way to construct it on a plane without using anything that we assume to be straight such as a straight edge (or ruler) just like how we construct a circle using a compass. 
Historically, it has been of great interest to mathematicians and engineers not only because it is an interesting question to ponder about but also it has important application in engineering. Since the invention of various steam engines and machines that are powered by them, engineers have been trying to perfect the mechanical linkage to convert all kinds of motions (especially circular motion) to linear motions. | + | {{!}}Now having defined what a straight line is, we must figure out a way to construct it on a plane. However, the challenge is to do that without using anything that we assume to be straight such as a straight edge (or ruler) just like how we construct a circle using a compass. Historically, it has been of great interest to mathematicians and engineers not only because it is an interesting question to ponder about, but also because it has important applications in engineering. Since the invention of various steam engines and machines that are powered by them, engineers have been trying to perfect the mechanical linkage to convert all kinds of motions (especially circular motion) to linear motions. | | | {{!}}- | | {{!}}- | | | {{!}}[[Image:Img324.gif|center|border|350px]] | | {{!}}[[Image:Img324.gif|center|border|350px]] | | | {{!}}- | | {{!}}- | | - | {{!}}align="center"{{!}}Image 3 | + | {{!}}align="center"{{!}}'''Image 3'''<ref>Bryant, & Sangwin, 2008, p. 18</ref> | | | {{!}}- | | {{!}}- | | - | {{!}}The picture above shows a patent drawing of an early steam engine. It is of the simplest form with a boiler (on the left), a cylinder with piston, a beam (on top) and a pump (on the right side) at the other end. The pump was usually used to extract water from the mines. When the piston is at its lowest position, steam is let into the cylinder from valve K and it pushes the piston upwards. Afterward, when the piston is at its highest position, cold water is let in from valve E, cooling the steam in the cylinder and causing the pressure in the the cylinder to drop below the atmospheric pressure. The difference in pressure caused the piston to move downwards. After the piston returns to the lowest position, the whole process is repeated. This kind of steam engine is called "atmospheric" because it utilized atmospheric pressure to cause the downward action of the piston (steam only balances out the atmospheric pressure and allow the piston to return to the highest point). Since in the downward motion, the piston pulls on the beam and in the upward motion, the beam pulls on the piston, the connection between the end of the piston rod and the beam is always in tension (under stretching) and that is why a chain is used as the connection. | + | {{!}}'''Image 3''' shows a patent drawing of an early steam engine. It is of the simplest form with a boiler (lower left corner), a cylinder with piston (above the boiler), a beam (on top, pivoted at the middle) and a pump (lower right corner) at the other end. The pump was usually used to extract water from the mines but other devices can also be driven. | | | {{!}}- | | {{!}}- | | - | {{!}}Anyway, the piston moves in the vertical direction and the piston rod takes only axial loading, i.e. forces applied in the direction along the rod. However, from the above picture, it is clear that the end of the piston does not move in a straight line due to the fact that the end of the beam describes an arch of a circle. As a result, horizontal forces are created and subjected onto the piston rod. 
Consequently, the process of wear and tear is very much quickened and the efficiency of the engine greatly compromised. Now considering that the up-and-down cycle repeats itself hundreds of times every minute and the engine is expected to run 24/7 to make profits for the investors, such defect in the engine must not be tolerated and thus poses a great need for improvements. | + | {{!}}{{HideShowThis|ShowMessage=Click here to show how this engine works.|HideMessage=Click here to hide text|HiddenText=When the piston is at its lowest position, steam is let into the cylinder from valve K to push the piston upwards. Afterward, when the piston is at its highest position, cold water is let in from valve E, cooling the steam in the cylinder and causing the pressure in the the cylinder to drop below the atmospheric pressure. The difference in pressure causes the piston to move downwards. After the piston returns to the lowest position, the whole process is repeated. This kind of steam engine is called "atmospheric" because it utilizes atmospheric pressure to cause the downward action of the piston. Since in the downward motion, the piston pulls on the beam and in the upward motion, the beam pulls on the piston, the connection between the end of the piston rod and the beam is always in tension (it is being stretched by forces at two ends) and that is why a chain is used as the connection.<ref>Bryant, & Sangwin, 2008, p. 18</ref> <ref>Wikipedia (Steam Engine)</ref>}} | | | {{!}}- | | {{!}}- | | - | {{!}}[[Image:Img325.gif|center|border|500px]] | + | {{!}}Ideally, the piston moves in the vertical direction and the piston rod takes only axial loading, i.e. forces applied in the direction along the rod. However, from the above picture, it is clear that the end of the piston does not move in a straight line due to the fact that the end of the beam describes an arc of a circle. As a result, horizontal forces are created and subjected onto the piston rod. Consequently, the rate of attrition is very much expedited and the efficiency of the engine is greatly compromised. Durability is important in the design of any machine, but it was especially essential for the early steam engines. For these machines were meant to run 24/7 to make profits for the investors. Therefore, such defect in the engine posed a great need for improvements.<ref>Bryant, & Sangwin, 2008, p. 18-21</ref> | | | | + | {{!}}} | | | | + | | | | | + | {{{!}} | | | | + | {{!}}colspan="2"{{!}}Improvements were soon developed to force the end of the piston rod move in a straight line, but these brought about new mechanical problems. The two pictures below show two improvements at the time. The hidden text explains how these improvements work and why they have failed to produce satisfactory results. | | | {{!}}- | | {{!}}- | | - | {{!}}align="center"{{!}}Image 4 | + | {{!}}align="center"{{!}}[[Image:Img325.gif|center|border|500px]]'''Image 4'''<ref>Bryant, & Sangwin, 2008, p. 18-21</ref>{{!}}{{!}}align="center"{{!}}[[Image:Img326.gif|border|center|200px]]'''Image 5''' <ref>Bryant, & Sangwin, 2008, p. 18-21</ref> | | | {{!}}- | | {{!}}- | | - | {{!}}Improvements were made. Firstly, "double-action" engines were made, part of which is shown in the picture on top. Atmospheric pressure acts in both upward and downward strokes of the engine and two chains were used (one connected to the top of the arched end of the beam and one to the bottom), both of which will take turns to be in tension throughout one cycle. 
One might ask why chain was used all the time. The answer was simple: to fit the curved end of the beam. However, this does not fundamentally solved the problem and unfortunately created more. The additional chain increased the height of the engine and made the manufacturing very difficult (it was hard to make straight steel bars and rods back then) and costly. | + | {{!}}colspan="2"{{!}}{{HideShowThis|ShowMessage=Click to read more about these systems.|HideMessage=Hide|HiddenText=Firstly, "double-action" engines were built, part of which is shown in '''Image 4'''. Secondly, the beam was dispensed and replaced by a gear as shown in '''Image 5'''. However, both of these improvements were unsatisfactory and the need for a straight line linkage was still imperative. In '''Image 4''', atmospheric pressure acts in both upward and downward strokes of the engine and two chains were used (one connected to the top of the arched end of the beam and one to the bottom), both of which will took turns being taut throughout one cycle. One might ask why chain was used all the time. The answer was simple: to fit the curved end of the beam. However, this does not fundamentally solve the straight line problem and unfortunately created more. The additional chain increased the height of the engine and made the manufacturing very difficult (it was hard to make straight steel bars and rods back then) and costly. | | - | {{!}}- | + | | | - | {{!}}[[Image:Img326.gif|border|center|200px]] | + | In '''Image 5''', after the beam was replaced by gear actions, the piston rod was fitted with teeth (labeled k) to drive the gear. Theoretically, this solves the problem fundamentally. The piston rod is confined between the guiding wheel at K and the gear, and it moves only in the up-and-down motion. However, the practical problem remained unsolved. The friction and the noise between all the guideways and the wheels could not be ignored, not to mention the increased possibility of failure and cost of maintenance due to additional parts.<ref>Bryant, & Sangwin, 2008, p. 18-21</ref>}} | | - | {{!}}- | + | | | - | {{!}}align="center"{{!}}Image 5 | + | | | - | {{!}}- | + | | | - | {{!}}Secondly, beam was dispensed and replaced by a gear as shown on the left. Consequently, the piston rod was fitted with teeth (labeled k) to drive the gear. Theoretically, this solves the problem fundamentally. The piston rod is confined between the guiding wheel at K and the gear, and it moves only in the up-and-down motion. However, the practical problem was still there. The friction and the noise between all the guideways and the wheels could not be ignored, not to mention the increased possibility of failure and cost of maintenance due to additional parts. Therefore, both of these methods were not satisfactory and the need for a linkage that produces straight line action was still imperative. | + | | | | {{!}}} | | {{!}}} | | | | | | | | | | | | - | ==='''James Watt's breakthrough'''=== | + | =='''James Watt's breakthrough'''== | | | | | | | - | {{{!}}border="1" | + | {{{!}} | | - | {{!}}colspan="2"{{!}}James Watt found a mechanism that converted the linear motion of pistons in the cylinder to the semi circular motion of the beam (or the circular motion of the [http://en.wikipedia.org/wiki/Flywheel flywheel]) and vice versa. In 1784, he invented a [http://en.wikipedia.org/wiki/Linkage_(mechanical) three member linkage] that solved the linear motion to circular problem practically as illustrated by the animation below. 
Drawing a Straight Line

Field: Geometry
Image Created By: Cornell University Libraries and the Cornell College of Engineering
Model: S35 Peaucellier Straight-line Mechanism

The image shows the first planar linkage (a linkage is a series of rigid links connected with joints to form a closed chain, or a series of closed chains;
each link has two or more joints, and the joints have various degrees of freedom to allow motion between the links) that drew a straight line without using a straight edge. Independently invented by a French army officer, Charles-Nicolas Peaucellier, and a Lithuanian (who some argue was actually Russian) mathematician, Lipmann Lipkin, it had important applications in engineering and mathematics.[2][3][4]

# Introduction

What is a straight line? How do you define straightness? The questions seem silly to ask because they are so intuitive. We come to accept that straightness is simply straightness, and its definition, like that of point and line, is simply assumed. However, why do we not assume the definition of a circle? When using a compass to draw a circle, we are not starting with a figure that we accept as circular; instead, we are using a fundamental property of circles, that the points on a circle are at a fixed distance from the center. This page explores the answer to the question "how do you construct a straight line without a straight edge?"

# What Is A Straight Line? --- A Question Rarely Asked

Today, we simply define a line as a one-dimensional object that extends to infinity in both directions and is straight, i.e. has no wiggles along its length. But what is straightness? It is a hard question because we can picture it, but we simply cannot articulate it.

In his book Elements, Euclid defined a straight line as "lying evenly between its extreme points" and as having "breadthless width." This definition is not very useful. What does he mean by "lying evenly"? It tells us nothing about how to describe or construct a straight line.

So what is straightness anyway? There are a few good answers. For instance, in Cartesian coordinates (a Cartesian coordinate system specifies each point uniquely in a plane by a pair of numerical coordinates, which are the signed distances from the point to two fixed perpendicular directed lines, measured in the same unit of length), the graph of $y=ax+b$ is a straight line, as shown in Image 1. In addition, the shortest distance between two points on a flat plane is a straight line, the definition we are most familiar with. However, it is important to realize that the definitions of "shortest" and "straight" change when you are no longer on a flat plane. For example, the shortest path between two points on a sphere follows a "great circle" (a section of the sphere that contains a diameter of the sphere), and a great circle is straight on the spherical surface, as shown in Image 2. Since we are dealing with plane geometry here, we define a straight line as the curve $y=ax+b$ in Cartesian coordinates. For a more comprehensive discussion of straightness, you can refer to the book Experiencing Geometry by David W. Henderson.

Take a minute to ponder the question: "How do you produce a straight line?" Well, light travels in a straight line. Can we make light help us produce something straight? Sure, but does it always travel in a straight line? Einstein's theory of relativity has shown (and it has been verified) that light is bent by gravity, and therefore our assumption that light travels in straight lines does not hold all the time. Another, simpler method is just to fold a piece of paper: the crease will be a straight line. However, to achieve our ultimate goal (constructing a straight line without a straight edge), we need a linkage,
and that is much more complicated and difficult than folding a piece of paper. The rest of the page revolves around the history of the straight line linkage and its mathematical explanation.

Image 1    Image 2 [7]

# The Quest to Draw a Straight Line

## The Practical Need

Now, having defined what a straight line is, we must figure out a way to construct it on a plane. However, the challenge is to do that without using anything that we assume to be straight, such as a straight edge (or ruler), just like how we construct a circle using a compass. Historically, this has been of great interest to mathematicians and engineers, not only because it is an interesting question to ponder, but also because it has important applications in engineering. Since the invention of various steam engines and the machines powered by them, engineers have been trying to perfect the mechanical linkage that converts all kinds of motions (especially circular motion) to linear motion.

Image 3 [8]

Image 3 shows a patent drawing of an early steam engine. It is of the simplest form, with a boiler (lower left corner), a cylinder with piston (above the boiler), a beam (on top, pivoted at the middle) and a pump (lower right corner) at the other end. The pump was usually used to extract water from mines, but other devices could also be driven.

When the piston is at its lowest position, steam is let into the cylinder from valve K to push the piston upwards. Afterward, when the piston is at its highest position, cold water is let in from valve E, cooling the steam in the cylinder and causing the pressure in the cylinder to drop below atmospheric pressure. The difference in pressure causes the piston to move downwards. After the piston returns to the lowest position, the whole process is repeated. This kind of steam engine is called "atmospheric" because it utilizes atmospheric pressure to cause the downward action of the piston. Since in the downward motion the piston pulls on the beam, and in the upward motion the beam pulls on the piston, the connection between the end of the piston rod and the beam is always in tension (it is being stretched by forces at its two ends), and that is why a chain is used as the connection.[9][10]

Ideally, the piston moves in the vertical direction and the piston rod takes only axial loading, i.e. forces applied in the direction along the rod. However, from the above picture, it is clear that the end of the piston rod does not move in a straight line, because the end of the beam describes an arc of a circle. As a result, horizontal forces are created and exerted on the piston rod. Consequently, the rate of attrition is very much expedited and the efficiency of the engine is greatly compromised. Durability is important in the design of any machine, but it was especially essential for the early steam engines, for these machines were meant to run around the clock to make profits for the investors. Therefore, such a defect in the engine posed a great need for improvements.[11]

Improvements were soon developed to force the end of the piston rod to move in a straight line, but these brought about new mechanical problems.
The two pictures below show two improvements of the time; the following text explains how these improvements work and why they failed to produce satisfactory results.

Image 4 [12]    Image 5 [13]

Firstly, "double-action" engines were built, part of which is shown in Image 4. Secondly, the beam was dispensed with and replaced by a gear, as shown in Image 5. However, both of these improvements were unsatisfactory, and the need for a straight line linkage was still imperative.

In Image 4, atmospheric pressure acts in both the upward and downward strokes of the engine, and two chains were used (one connected to the top of the arched end of the beam and one to the bottom), which took turns being taut throughout one cycle. One might ask why chain was used all the time. The answer was simple: to fit the curved end of the beam. However, this does not fundamentally solve the straight line problem, and it unfortunately created more problems. The additional chain increased the height of the engine and made the manufacturing very difficult (it was hard to make straight steel bars and rods back then) and costly.

In Image 5, after the beam was replaced by gear action, the piston rod was fitted with teeth (labeled k) to drive the gear. Theoretically, this solves the problem fundamentally: the piston rod is confined between the guiding wheel at K and the gear, and it moves only in an up-and-down motion. However, the practical problem remained unsolved. The friction and the noise between all the guideways and the wheels could not be ignored, not to mention the increased possibility of failure and the cost of maintenance due to the additional parts.[14]

## James Watt's breakthrough

James Watt found a mechanism that converted the linear motion of the pistons in the cylinder to the semicircular motion (that is, motion along an arc of a circle) of the beam (or the circular motion of the flywheel) and vice versa. In this way, energy in the vertical direction is converted to rotational energy of the flywheel, from where it is converted to the useful work that the engine is meant to do. In 1784, he invented a three member linkage that solved the linear-motion-to-circular-motion problem practically, as illustrated by the animation below. In its simplest form, there are two radius arms of the same length and a connecting arm with midpoint P. Point P moves in an approximately straight line while the two hinges move in circular arcs. However, this linkage only produced an approximate straight line (a stretched figure 8, actually), as shown in Image 7, much to the chagrin of the mathematicians who were after absolute straight lines. There is a more general form of Watt's linkage in which the two radius arms have different lengths, as shown in Image 6. To make sure that point P still moves in the stretched figure 8, it has to be positioned such that it adheres to the ratio $\frac{AB}{CD} = \frac{CP}{CB}$.[15]

Image 6 [16]    Image 7 [17]

## The Motion of Point P

We intend to describe the path of $P$ so that we can show it does not move in a straight line (which is obvious in the animation). More importantly, this will allow us to pinpoint the position of $P$ using certain parameters we know, such as the angle of rotation or one coordinate of point $P$.
This is crucial in engineering, as engineers need to know that no two parts of the machine will collide with each other throughout the motion. In addition, you can use the parametrization to create your own animation like that in Image 7.

### Algebraic Description

We see that $P$ moves in a stretched figure 8 and might expect that there should be a nice closed form (in mathematics, an expression is said to be in closed form if it can be expressed analytically in terms of a bounded number of "well-known" functions, typically the elementary functions: constants, one variable x, the elementary operations of arithmetic, nth roots, exponentials and logarithms, which also cover trigonometric functions and their inverses [18]) relating the $x$ and $y$ coordinates of $P$, like that of the circle. After this section, you will see that there is a closed form, at least theoretically, but it is not "nice" at all.

Image 8

Derivation of the relationship between the $x$ and $y$ coordinates of $P$:

We know the coordinates of $A$ and $D$ because they are fixed. Hence suppose the coordinates of $A$ are $(0,0)$ and the coordinates of $D$ are $(c,d)$. We also know the lengths of the bars. Let $AB=CD=r$ and $BC=m$.

Suppose that at one instant we know the coordinates of $B$ to be $(a,b)$; then $C$ lies on the circle centered at $B$ with radius $m$. Also, $C$ lies on the circle centered at $D$ with radius $r$. So the coordinates $(x,y)$ of $C$ have to satisfy the two equations below.

$\begin{cases} (x-a)^2+(y-b)^2=m^2 \\ (x-c)^2+(y-d)^2=r^2 \end{cases}$

Now, since we know that $B$ is on the circle centered at $A$ with radius $r$, the coordinates of $B$ have to satisfy the equation $a^2+b^2=r^2$. Therefore, the coordinates of $C$ have to satisfy the three equations below.

$\begin{cases} (x-a)^2+(y-b)^2=m^2 \\ (x-c)^2+(y-d)^2=r^2 \\ a^2+b^2=r^2 \end{cases}$

Expanding the first two equations we have

(Eq. 1)    $x^2+y^2-2ax-2by+a^2+b^2=m^2$

(Eq. 2)    $x^2+y^2-2cx-2dy+c^2+d^2=r^2$

Subtracting Eq. 2 from Eq. 1 we have

(Eq. 3)    $(-2a+2c)x-(2b-2d)y+(a^2+b^2)-(c^2+d^2)=m^2-r^2$

Substituting $a^2+b^2=r^2$ and rearranging we have

$(-2a+2c)x-(2b-2d)y=m^2-2r^2+c^2+d^2$

Hence

(Eq. 4)    $y=\frac {-2a+2c}{2b-2d}x-\frac {m^2-2r^2+c^2+d^2}{2b-2d}$

Now, we can manipulate Eq. 3 to get an expression for $b$, i.e. $b=f(a,c,d,m,r,x,y)$. Next, we substitute $b=f(a,c,d,m,r,x,y)$ back into Eq. 1 to obtain an expression for $a$, i.e. $a=g(x,y,d,c,m,r)$. Since $b=\pm \sqrt {r^2-a^2}$, we have expressions for $a$ and $b$ in terms of $x,y,d,c,m$ and $r$.

Say point $P$ has coordinates $(x',y')$; then $x'=\frac {a+x}{2}$ and $y'=\frac {b+y}{2}$, which yield

(Eq. 5)    $x=2x'-a$

(Eq. 6)    $y=2y'-b$

In the last step we substitute $a=g(x,y,d,c,m,r)$, $b=\pm \sqrt {r^2-a^2}$, Eq. 5 and Eq. 6 back into Eq. 4, and we finally have a relationship between $x'$ and $y'$. Of course, it will be a messy closed form, but we could certainly use Mathematica to do the maths. The point is that there is no nice algebraic form for that figure 8, even though a closed form exists, and that is why we have to find something else.
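The first elimination steps above are easy to hand to a computer algebra system. Below is a minimal sketch (assuming SymPy is available; the symbol names mirror the derivation) that reproduces Eq. 3 and Eq. 4. Pushing the elimination all the way to a relation between $x'$ and $y'$ works the same way, and produces exactly the promised mess.

```python
# Minimal SymPy sketch of the elimination above (assumes SymPy is installed).
# B = (a, b) is the moving joint, D = (c, d) the fixed one, AB = CD = r, BC = m,
# and (x, y) are the coordinates of C.
import sympy as sp

x, y, a, b, c, d, m, r = sp.symbols("x y a b c d m r", real=True)

lhs1 = x**2 + y**2 - 2*a*x - 2*b*y + a**2 + b**2          # left side of Eq. 1
lhs2 = x**2 + y**2 - 2*c*x - 2*d*y + c**2 + d**2          # left side of Eq. 2

# Eq. 3: subtract Eq. 2 from Eq. 1; the quadratic terms in x and y cancel.
eq3 = sp.Eq(sp.expand(lhs1 - lhs2), m**2 - r**2)

# Impose a^2 + b^2 = r^2 (B lies on the circle of radius r about A) via b^2 = r^2 - a^2.
eq3 = eq3.subs(b**2, r**2 - a**2)

# Eq. 4: the remaining equation is linear in y, so solving for y gives the line through C.
y_of_x = sp.solve(eq3, y)[0]
print(sp.simplify(y_of_x))
```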
### Parametric Description

Alright, since the algebraic equations are not agreeable at all, we have to resort to a parametric description. Come to think of it, it may be more manageable to describe the motion of $P$ using the angle of rotation. As a matter of fact, it is easier to obtain the angle of rotation than to know one of $P$'s coordinates.

Image 9

Parametrization of $P$:

We will parametrize $P$ with the angle $\theta$, in keeping with most parametrizations of points.

$\begin{cases} \overrightarrow {AB} = (r \sin \theta, r \cos \theta) \\ \overrightarrow {BC} = (m \sin (\frac {\pi}{2} + \beta + \alpha), m \cos (\frac {\pi}{2} + \beta + \alpha)) \\ \end{cases}$

Now let $BD=l$. Then, using the cosine formula, we have

$m^2+l^2-2ml\cos \alpha = r^2$

As a result, we can express $\alpha$ as

$\alpha = \cos^{-1} \frac {m^2+l^2-r^2}{2ml}$

Since $l = \sqrt{(c-r \sin \theta)^2+(d-r \cos \theta)^2}$, with $c$ and $d$ being the coordinates of point $D$, we can find $\alpha$ in terms of $\theta$.

Furthermore,

$\begin{align} \overrightarrow {BD} & = \overrightarrow {AD}-\overrightarrow {AB} \\ & = (c,d) - (r\sin \theta, r \cos \theta) \\ & = (c - r\sin \theta, d - r \cos \theta) \end{align}$

Therefore, $\beta = \tan^{-1}\frac {d-r \cos \theta}{c - r \sin \theta}$

Hence,

$\begin{align} \overrightarrow {AP} & = \overrightarrow {AB} + \frac {1}{2} \overrightarrow {BC} \\ & = (r \sin \theta, r \cos \theta) + \frac {m}{2}(\sin (\frac {\pi}{2} + \alpha + \beta), \cos (\frac {\pi}{2} + \alpha + \beta)) \\ \end{align}$

Now, $\overrightarrow {AP}$ is parametrized in terms of $\theta, c, d, r$ and $m$. (A small numeric sketch of this parametrization is given below, just before the derivation for point $F$.)

Image 10 [19]

## Watt's Secret

Another reason we parametrized $P$ is that Watt did not simply use the three bar linkage shown in Image 6 and Image 7. Instead he used something different, and to understand that, our knowledge of the parametrization of $P$ is crucial. Imitations were a big problem back in those days. When filing for a patent, James Watt and other inventors had to explain how their devices worked without revealing the critical secrets that would let others easily copy them. As shown in Image 10, the original patent illustration, Watt drew his simple linkage in a separate diagram in the upper left hand corner, but try looking for it on the engine illustration itself. Can you find it at all? That is Watt's secret. It is the equivalent of telling you that by using the principle that 1 + 1 makes 2 you could work out 34 x 45; the crucial step in the understanding (and, in Watt's case, in making the engine work smoothly) is left out. What he had actually used on his engine was the modified version of the basic linkage shown in Image 11.

The link $ABCD$ is the original three member linkage with $AB=CD$ and point $P$ the midpoint of $BC$. A is the pivot of the beam, fixed on the engine frame, while D is also fixed. However, Watt modified it by adding a parallelogram $BCFE$ to it and connecting point $F$ to the piston rod. We now know that point $P$ moves in a quasi straight line, as shown previously. The reason it is now important for two points to move in straight lines is that one has to be connected to the piston rod that drives the beam, while the other has to convert the circular motion to linear motion so as to drive the valve gears that control the opening and closing of the valves. It turns out that point F moves in a quasi straight line similar to that of point P. This is the truly famous James Watt "parallel motion" linkage.

Image 11    Image 12
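Before deriving point $F$, here is the numeric sketch of the parametrization of $P$ promised above. Rather than transcribing the trigonometric expressions verbatim (their sign conventions depend on the orientation used in Image 9), it takes the equivalent route of placing $B$ from the angle $\theta$ and locating $C$ as an intersection of the circle of radius $m$ about $B$ with the circle of radius $r$ about $D$; $P$ is then the midpoint of $BC$. All dimensions are illustrative assumptions, not Watt's.

```python
# Numeric sketch of Watt's linkage (illustrative dimensions, not Watt's).
# A = (0, 0) and D are fixed pivots, AB = CD = r, BC = m, and P is the midpoint of BC.
import numpy as np

def circle_intersection(p0, r0, p1, r1):
    """Return the two intersection points of circle(p0, r0) and circle(p1, r1)."""
    p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
    d = np.linalg.norm(p1 - p0)
    a = (r0**2 - r1**2 + d**2) / (2 * d)          # distance from p0 to the chord
    h = np.sqrt(max(r0**2 - a**2, 0.0))           # half-length of the chord
    mid = p0 + a * (p1 - p0) / d
    perp = np.array([-(p1 - p0)[1], (p1 - p0)[0]]) / d
    return mid + h * perp, mid - h * perp

def watt_P(theta, r=2.0, m=1.0, D=(4.0, -1.0)):
    """Position of P for rotation angle theta of the arm AB (one assembly of the linkage)."""
    B = np.array([r * np.sin(theta), r * np.cos(theta)])   # same angle convention as above
    _, C = circle_intersection(B, m, D, r)                 # pick one of the two assembly branches
    return 0.5 * (B + C)

# Near theta = pi/2 the traced points stay close to a vertical straight segment;
# over the full range of motion the path is the stretched figure 8 discussed above.
for theta in np.linspace(np.pi / 2 - 0.5, np.pi / 2 + 0.5, 7):
    px, py = watt_P(theta)
    print(f"theta = {theta:.3f}  ->  P = ({px:.3f}, {py:.3f})")
```

Swapping which intersection branch is kept mirrors the assembly of the bars, so if your own figure looks different from Image 7, try the other returned point.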
How would we find the parametric equation for point $F$, then? Well, it is easy enough. Refer to Image 12. $\overrightarrow {AB} = (r \sin \theta, r \cos \theta)$, so $\overrightarrow {AE} = \frac {e+f}{r}(r \sin \theta, r \cos \theta)$. Furthermore, $\overrightarrow {AF} = \overrightarrow {AE} + \overrightarrow {BC}$. Therefore, $\overrightarrow {AF} = \frac {e+f}{r}(r \sin \theta, r \cos \theta) + (m \sin (\frac {\pi}{2} + \beta + \alpha), m \cos (\frac {\pi}{2} + \beta + \alpha))$. We now have the parametrizations of points $F$ and $P$, and Watt's secret is finally cracked.

## The First Planar Straight Line Linkage - Peaucellier-Lipkin Linkage

Image 13 [20]

Mathematicians and engineers had been searching for almost a century for a straight line linkage, but all had failed until 1864, when a French army officer, Charles Nicolas Peaucellier, came up with his inversor linkage. Interestingly, he did not publish his findings and proof until 1873, when Lipmann I. Lipkin, a student from the University of St. Petersburg, demonstrated the same working model at the World Exhibition in Vienna. Peaucellier acknowledged Lipkin's independent findings with the publication of the details of his 1864 discovery and the mathematical proof. (Taimina)

Image 14

Let's turn to a skeleton drawing of the Peaucellier-Lipkin linkage in Image 14. It is constructed in such a way that $OA = OB$ and $AC=CB=BP=PA$. Furthermore, all the bars are free to rotate at every joint, and point $O$ is a fixed pivot. Due to the symmetrical construction of the linkage, it goes without proof that points $O$, $C$ and $P$ lie on a straight line. Construct lines $OCP$ and $AB$; they meet at point $M$.

Since the shape $APBC$ is a rhombus,

$AB \perp CP$ and $CM = MP$

Now,

$(OA)^2 = (OM)^2 + (AM)^2$

$(AP)^2 = (PM)^2 + (AM)^2$

Therefore,

$\begin{align} (OA)^2 - (AP)^2 & = (OM)^2 - (PM)^2\\ & = (OM-PM)\cdot(OM + PM)\\ & = OC \cdot OP\\ \end{align}$

Let's take a moment to look at the relation $(OA)^2 - (AP)^2 = OC \cdot OP$. Since the lengths $OA$ and $AP$ are constant, the product $OC \cdot OP$ keeps the same value however you change the shape of this construction.

Image 15

Refer to Image 15. Let's fix the path of point $C$ such that it traces out a circle that has point $O$ on it. $QC$ is the extra link pivoted to the fixed point $Q$ with $QC=QO$. Construct line $OQ$ that cuts the circle at point $R$. In addition, construct line $PN$ such that $PN \perp OR$.

Since $\angle OCR = 90^\circ$ (the angle in a semicircle, $OR$ being a diameter), we have $\vartriangle OCR \sim \vartriangle ONP$ and $\frac{OC}{OR} = \frac{ON}{OP}$.

Moreover, $OC \cdot OP = ON \cdot OR$.

Therefore $ON = \frac {OC \cdot OP}{OR} = $ constant, i.e. the length of $ON$ (or the x-coordinate of $P$ with respect to $O$) does not change as points $C$ and $P$ move. Hence, point $P$ moves in a straight line. ∎[21]

## Inversive Geometry in Peaucellier-Lipkin Linkage

As a matter of fact, the first part of the proof given above is already sufficient. Once we have shown that points $O$, $C$ and $P$ are collinear and that $OC \cdot OP$ is of constant value, points $C$ and $P$ are inversive pairs with $O$ as the inversive center. Therefore, once $C$ moves in a circle that contains $O$, $P$ will move in a straight line, and vice versa. ∎ See Inversion for more detail.
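Both facts, the constant product and the straight line, are easy to check numerically. The sketch below uses made-up bar lengths: it assembles the cell from the constraints $OA=OB=R$ and $AC=CB=BP=PA=s$, drives $C$ around a circle through $O$, and prints $OC \cdot OP$ together with the x-coordinate of $P$. The product should stay at $R^2-s^2$ and the x-coordinate should stay fixed.

```python
# Numeric check of the Peaucellier-Lipkin cell (illustrative lengths R and s).
# O is the fixed pivot at the origin; A and B are the joints with OA = OB = R;
# APBC is the rhombus of side s, so P = A + B - C (the diagonals of a rhombus bisect each other).
import numpy as np

R, s = 3.0, 1.0          # long arms and rhombus side (assumed values)
rho = 1.5                # radius of the circle C is constrained to; the circle passes through O

def cell_P(C):
    """Given the position of C, return P for one assembly of the cell."""
    C = np.asarray(C, float)
    d = np.linalg.norm(C)                      # |OC|
    a = (R**2 - s**2 + d**2) / (2 * d)         # A and B are the intersections of the circles
    h = np.sqrt(R**2 - a**2)                   # about O (radius R) and about C (radius s)
    u = C / d
    perp = np.array([-u[1], u[0]])
    A, B = a * u + h * perp, a * u - h * perp
    return A + B - C                           # fourth vertex of the rhombus APBC

for phi in np.linspace(0.3, 1.5, 5):
    C = np.array([rho + rho * np.cos(phi), rho * np.sin(phi)])   # circle through O, center (rho, 0)
    P = cell_P(C)
    print(f"OC*OP = {np.linalg.norm(C) * np.linalg.norm(P):.6f}   P_x = {P[0]:.6f}")

print("expected product:", R**2 - s**2, "  expected P_x:", (R**2 - s**2) / (2 * rho))
```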
## Peaucellier-Lipkin Linkage in Action

Image 16

The new linkage caused considerable excitement in London. Mr. Prim, "engineer to the House", utilized the new compact form invented by H. Hart to fit his new blowing engines, which proved to be "exceptionally quiet in their operation." In this compact form, $DA=DC$, $AF=CF$ and $AB = BC$. Points $E$ and $F$ are fixed pivots. In Image 16, F is the inversive center, points $D$, $F$ and $B$ are collinear, and $DF \cdot DB$ is of constant value.

Image 17

Mr. Prim's blowing engine, used for ventilating the House of Commons, 1877. The crosshead of the reciprocating air pump is guided by a Peaucellier linkage, shown in the middle of Image 17. Prim's machine was driven by a steam engine.[22]

## Hart's Linkage

After the Peaucellier-Lipkin linkage was introduced to England in 1874, Mr. Hart of Woolwich Academy [23] devised a new linkage that contains only four links, the blue part in Image 18. The next part will prove that point $O$ is the inversion center, with $O$, $P$ and $Q$ collinear and $OP \cdot OQ =$ constant. When point $P$ is constrained to move in a circle that passes through point $O$, point $Q$ will trace out a straight line. See below for the proof.

Image 18

We know that $AB = CD$ and $BC = AD$.

As a result, $BD \parallel AC$.

Draw line $OQ \parallel AC$, intersecting $AD$ at point $P$. Consequently, points $O, P, Q$ are collinear.

Construct rectangle $EFCA$. Then

$\begin{align} AC \cdot BD & = EF \cdot BD \\ & = (ED + EB) \cdot (ED - EB) \\ & = (ED)^2 - (EB)^2 \\ \end{align}$

Since

$\begin{array}{lcl} (ED)^2 + (AE)^2 & = & (AD)^2 \\ (EB)^2 + (AE)^2 & = & (AB)^2 \end{array}$

we then have $AC \cdot BD = (ED)^2 - (EB)^2 = (AD)^2 - (AB)^2$.

Further, let's define $\frac{OP}{BD} = m$; hence $\frac{OQ}{AC} = 1-m$, where $0<m<1$.

We finally have

$\begin{align} OP \cdot OQ & = m(1-m)BD \cdot AC\\ & = m(1-m)((AD)^2 - (AB)^2) \end{align}$

which is what we wanted to prove.
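As a sanity check, the relation $OP \cdot OQ = m(1-m)((AD)^2-(AB)^2)$ can be verified numerically. The sketch below builds the crossed quadrilateral from an isosceles-trapezoid construction (so that $AC \parallel BD$ holds by construction), places $O$, $P$ and $Q$ at the fixed fractions used in the proof, and confirms that the product does not change as the linkage flexes. The bar lengths and the fraction $m$ are made-up values.

```python
# Numeric check of Hart's linkage: AB = CD = S (short bars), BC = AD = L (long bars).
# The crossed quadrilateral is built so that the diagonal AC is parallel to BD, as in the proof.
import numpy as np

S, L, m = 1.0, 2.0, 0.35        # bar lengths and the fixed fraction AO/AB (assumed values)

def hart_configuration(p):
    """One pose of the linkage, parametrized by the half-length p of the diagonal AC."""
    q = (L**2 - S**2) / (4 * p)                 # half-length of the other diagonal BD
    h = np.sqrt(S**2 - (p - q)**2)              # vertical offset between the two diagonals
    A, C = np.array([-p, 0.0]), np.array([p, 0.0])
    B, D = np.array([-q, h]), np.array([q, h])
    return A, B, C, D

for p in [0.8, 0.9, 1.0, 1.1]:
    A, B, C, D = hart_configuration(p)
    # sanity: the bars keep their lengths while the linkage flexes
    assert np.isclose(np.linalg.norm(B - A), S) and np.isclose(np.linalg.norm(D - C), S)
    assert np.isclose(np.linalg.norm(C - B), L) and np.isclose(np.linalg.norm(D - A), L)
    O = A + m * (B - A)                         # fixed point on bar AB
    P = A + m * (D - A)                         # OP parallel to BD, so AP/AD = m
    Q = B + (1 - m) * (C - B)                   # OQ parallel to AC, so BQ/BC = 1 - m
    product = np.linalg.norm(P - O) * np.linalg.norm(Q - O)
    print(f"p = {p:.2f}   OP*OQ = {product:.6f}")

print("expected:", m * (1 - m) * (L**2 - S**2))
```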
## Other Straight Line Mechanisms

Image 19    Image 20    Image 21 [24]

There are many other mechanisms that create a straight line; I will only introduce one of them here. Refer to Image 19. Consider two circles $C_1$ and $C_2$ whose radii satisfy $2r_2=r_1$. We roll $C_2$ inside $C_1$ without slipping, as shown in Image 20. Then the arc lengths satisfy $r_1\beta = r_2\alpha$. Voila! $\alpha = 2\beta$, and point $C$ has to be on the line joining the original points $P$ and $Q$! The same argument goes for point $P$. As a result, point $C$ moves along the horizontal line and point $P$ moves along the vertical line. In 1801, James White patented a mechanism using this rolling motion; it is shown in Image 21.[25]

Image 22

Interestingly, if you attach a rod of fixed length to points $C$ and $P$, the end $T$ of the rod will trace out an ellipse, as seen in Image 22. Why? Consider the coordinates of $T$ in terms of $\theta$, $PT$ and $CT$: point $T$ has the coordinates $(CT \cos \theta, PT \sin \theta)$.

Now, whenever we see $\cos \theta$ and $\sin \theta$ together, we want to square them. Hence, $x^2=CT^2 \cos^2 \theta$ and $y^2=PT^2 \sin^2 \theta$.

They are not so pretty yet, so we make them pretty by dividing $x^2$ by $CT^2$ and $y^2$ by $PT^2$, obtaining $\frac {x^2}{CT^2} = \cos^2 \theta$ and $\frac {y^2}{PT^2} = \sin^2 \theta$. Voila again! $\frac {x^2}{CT^2} + \frac {y^2}{PT^2}=1$, and this is exactly the algebraic equation of an ellipse.[26]
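This, too, is easy to confirm numerically. The sketch below simulates the trammel directly: $C$ is kept on the horizontal axis, $P$ on the vertical axis, and the tip $T$ of the rod is checked against the axis-aligned ellipse whose semi-axes are the two distances measured from $T$ (which distance goes with which axis depends on the orientation and labelling chosen in Image 22). The lengths are arbitrary.

```python
# Simulate the trammel: C slides on the horizontal axis, P on the vertical axis,
# and T is the free end of the rod through C and P. The tip traces an axis-aligned
# ellipse whose semi-axes are the distances from T to the two sliding points.
import numpy as np

CT, PT = 3.0, 1.5                       # distances from the tip T to C and to P (illustrative)

for phi in np.linspace(0.1, 6.2, 12):   # angle of the rod
    u = np.array([np.cos(phi), np.sin(phi)])
    # Choose T so that C = T - CT*u lands on the x-axis and P = T - PT*u on the y-axis.
    T = np.array([PT * u[0], CT * u[1]])
    C, P = T - CT * u, T - PT * u
    assert np.isclose(C[1], 0.0) and np.isclose(P[0], 0.0)
    # T satisfies x^2/PT^2 + y^2/CT^2 = 1; which semi-axis gets which length depends
    # on the orientation used in Image 22.
    print(round(T[0]**2 / PT**2 + T[1]**2 / CT**2, 12))
```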
# Conclusion---The Take Home Message

We should not take the concept of a straight line for granted: there are many interesting, and important, issues surrounding the concept of a straight line. A serious exploration of its properties and constructions will not only give you a glimpse of geometry's all-encompassing reach into science, engineering and our lives, but also make you question many of the assumptions you have about geometry. Hopefully, you will start questioning the flatness of a plane, the roundness of a circle and the nature of a point, and allow yourself to explore the ordinary and discover the extraordinary.

# About the Creator of this Image

KMODDL is a collection of mechanical models and related resources for teaching the principles of kinematics--the geometry of pure motion. The core of KMODDL is the Reuleaux Collection of Mechanisms and Machines, an important collection of 19th-century machine elements held by Cornell's Sibley School of Mechanical and Aerospace Engineering.

# Notes

2. Bryant & Sangwin, 2008, p. 34
3. Kempe, 1877, p. 12
4. Taimina
5. Wikipedia (Cartesian coordinate system)
7. Weisstein
8. Bryant & Sangwin, 2008, p. 18
9. Bryant & Sangwin, 2008, p. 18
10. Wikipedia (Steam Engine)
11. Bryant & Sangwin, 2008, p. 18-21
12. Bryant & Sangwin, 2008, p. 18-21
13. Bryant & Sangwin, 2008, p. 18-21
14. Bryant & Sangwin, 2008, p. 18-21
15. Bryant & Sangwin, 2008, p. 24
16. Bryant & Sangwin, 2008, p. 23
18. Wikipedia (Closed-form expression)
19. Lienhard, 1999, February 18
21. Bryant & Sangwin, 2008, p. 33-36
22. Ferguson, 1962, p. 205
23. Kempe, 1877, p. 18
24. Bryant & Sangwin, 2008, p. 44
25. Bryant & Sangwin, 2008, p. 42-44
26. Cundy & Rollett, 1961, p. 240

# References

1. Bryant, John, & Sangwin, Christopher. (2008). How Round Is Your Circle?. Princeton & Oxford: Princeton University Press.
2. Cundy, H. Martyn, & Rollett, A. P. (1961). Mathematical Models. Oxford: Clarendon Press, Oxford University Press.
3. Henderson, David. (2001). Experiencing Geometry. Upper Saddle River, New Jersey: Prentice Hall.
4. Kempe, A. B. (1877). How to Draw a Straight Line; A Lecture on Linkages. London: Macmillan and Co.
5. Taimina, D. (n.d.). How to Draw a Straight Line. Retrieved from The Kinematic Models for Design Digital Library: http://kmoddl.library.cornell.edu/tutorials/04/
6. Ferguson, Eugene S. (1962). Kinematics of mechanisms from the time of Watt. United States National Museum Bulletin, (228), 185-230.
7. Weisstein, Eric W. Great Circle. Retrieved from MathWorld--A Wolfram Web Resource: http://mathworld.wolfram.com/GreatCircle.html
8. Wikipedia (Cartesian coordinate system). (n.d.). Cartesian coordinate system. Retrieved from Wikipedia: http://en.wikipedia.org/wiki/Cartesian_coordinate_system
9. Lienhard, J. H. (1999, February 18). "I SELL HERE, SIR, WHAT ALL THE WORLD DESIRES TO HAVE -- POWER". Retrieved from The Engines of Our Ingenuity: http://www.uh.edu/engines/powersir.htm

Today, we simply define a line as a one-dimensional object that extends to infinity in both directions and is straight, i.e., has no wiggles along its length. But what is straightness? It is a hard question because we can picture it, but we simply cannot articulate it.
In Euclid's book Elements, he defined a line as "breadthless length" and a straight line as one "lying evenly between its extreme points." This definition is pretty useless. What does he mean by "lying evenly"? It tells us nothing about how to describe or construct a straight line. So what is straightness anyway? There are a few good answers. For instance, in Cartesian coordinates (a Cartesian coordinate system specifies each point uniquely in a plane by a pair of numerical coordinates, which are the signed distances from the point to two fixed perpendicular directed lines, measured in the same unit of length), the graph of $y=ax+b$ is a straight line, as shown in Image 1. In addition, the shortest distance between two points on a flat plane is a straight line, the definition we are most familiar with. However, it is important to realize that the definitions of being "shortest" and "straight" will change when you are no longer on a flat plane. For example, the shortest path between two points on a sphere follows a "great circle" (a section of a sphere that contains a diameter of the sphere), and a great circle is straight on the spherical surface, as shown in Image 2. Since we are dealing with plane geometry here, we define a straight line as the curve $y=ax+b$ in Cartesian coordinates. For a more comprehensive discussion of straightness, you can refer to the book Experiencing Geometry by David W. Henderson.

Take a minute to ponder the question: "How do you produce a straight line?" Well, light travels in a straight line. Can we make light help us to produce something straight? Sure, but does it always travel in a straight line? Einstein's theory of relativity has shown (and it has been verified) that light is bent by gravity, and therefore our assumption that light travels in straight lines does not hold all the time. Another, simpler method is just to fold a piece of paper: the crease will be a straight line. However, to achieve our ultimate goal (constructing a straight line without a straightedge), we need a linkage (a series of rigid links connected with joints to form a closed chain, or a series of closed chains; each link has two or more joints, and the joints have various degrees of freedom to allow motion between the links), and that is much more complicated and difficult than folding a piece of paper. The rest of the page revolves around the history of straight-line linkages and their mathematical explanation.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 217, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8902508020401001, "perplexity_flag": "middle"}
http://www.scholarpedia.org/article/Neuropercolation
# Neuropercolation

From Scholarpedia. Robert Kozma (2007), Scholarpedia, 2(8):1360.

Neuropercolation is a family of stochastic models based on the mathematical theory of probabilistic cellular automata on lattices and random graphs and motivated by structural and dynamical properties of neural populations. The existence of phase transitions was demonstrated both in continuous and discrete state space models, e.g. in specific probabilistic cellular automata and percolation models. Neuropercolation extends the concept of phase transitions to large interactive populations of nerve cells.

## Probabilistic Cellular Automata: Definitions and Basic Properties

### Cellular Automata

Figure 1: Illustration of percolation on the 2-dimensional torus, with local update rule given by $$\ell=2\ ,$$ i.e., a site becomes active if at least 2 of its neighbors are active. The first 4 iteration steps are shown. At the 8th step all sites become active, i.e., the initial configuration percolates over the torus (Bollobas, 2001).

In a basic two-state cellular automaton, the state of any lattice point $$x \in \mathbb{Z}^d$$ is either active or inactive. The lattice is initialized with some (deterministic or random) configuration. The states of the lattice points are updated (usually synchronously) based on some (deterministic or probabilistic) rule that depends on the activations of their neighborhood. For related general concepts, see cellular automata such as Conway's Game of Life, Chua's cellular neural network, as well as thermodynamic models like the Ising model and Hopfield nets (Berlekamp et al, 1982; Kauffman, 1990; Hopfield, 1982; Brown and Chua, 1999; Wolfram, 2002).

### Bootstrap Percolation

In the original bootstrap percolation model, sites are active in the original configuration independently with probability $$p\ .$$ The update rule, however, is deterministic: an active site always remains active, and an inactive site becomes active if at least $$\ell$$ of its neighbors are active at the given time (Aizeman and Lebowitz, 1988). If the iterations ultimately lead to a configuration in which all sites are active, it is said that there is percolation in the lattice. A main question in bootstrap percolation concerns the presence of percolation as a function of the lattice dimension $$d\ ,$$ initial probability $$p\ ,$$ and neighborhood parameter $$\ell\ .$$ It can be shown that on the infinite lattice $$\mathbb{Z}^d\ ,$$ there exists a critical probability $$p_c=f(d,\ell)$$ such that there is percolation for $$p>p_c\ ,$$ and no percolation for $$p<p_c\ ,$$ with probability one. The critical probability defines a phase transition between conditions leading to percolation and conditions which do not percolate (Balister et al., 1993, 2000; Bollobas and Stacey, 1997). For a finite lattice, such as the $$d$$-dimensional torus $$\mathbb{Z}^d_N\ ,$$ the probability of percolation is a continuous function of $$p\ ,$$ and hence there is no precise threshold value for $$p\ .$$ However, the probability of percolation rises rapidly from a value close to zero to a value close to one near some threshold function $$p_c=f(N,d,\ell)\ .$$

• Example 1 (Percolation threshold on infinite lattices): In the case of the 3-dimensional (infinite) lattice ($$d=3$$), a simple example of a local neighborhood consists of the 6 direct neighbors of the site, and itself. Selecting $$\ell=3$$ means that an inactive site becomes active if at least 3 of its neighbors are active.
It is shown that for $$d=3\ ,$$ $$\ell=3$$ the critical probability $$p_c=0$$ (Schonmann, 1992).

• Example 2 (Percolation on the finite torus): It is of practical interest to study bootstrap percolation on finite lattices. E.g., $$\mathbb{Z}^d_N$$ denotes the $$d$$-dimensional torus of size $$N^d\ .$$ For $$d=3\ ,$$ $$\ell=3\ ,$$ Cerf and Cirillo (1999) proved the conjecture of Adler, van Enter, and Duarte (1990), Adler (1991), extending the above result of Schonmann (1992), that the threshold probability is of the order $$1/\log\log N\ ,$$ for a sequence of bootstrap percolation models as $$N\to\infty\ .$$ An example of percolation on the 2-dimensional torus, $$d=2\ ,$$ and $$\ell=2$$ is given in Fig. 1.

### Random Bootstrap Percolation

Standard bootstrap percolation has the strict limitation that an active site always remains active. This condition is relaxed in random bootstrap percolation, which can model, for example, percolation in a polluted environment (Gravner and McDonald, 1997). Accordingly, at every iteration step, an active site is removed with dilution probability $$q\ .$$ In the case of the 2-dimensional lattice with the 2-neighbor rule $$\ell=2\ ,$$ the process percolates with probability one if $$q/p^2$$ is small enough, and there is no percolation in the opposite case. Generalizations of the original bootstrap percolation models are abundant. A systematic overview of the state of the art of percolation is given in Bollobas and Riordan (2006). Neuropercolation describes further generalizations of random bootstrap percolations motivated by principles of neural dynamics.

## Basic Principles of Dynamics of Neural Masses

The continuum approach to the brain leads to the concept of neural mass, and its spatiotemporal activity can be interpreted in terms of dynamical systems theory (Babloyantz and Desthexhe, 1986; Schiff et al, 1994; Hoppensteadt and Izhikevich, 1998; Freeman, 2001; Stam et al., 2005; Steyn-Ross et al, 2005). Some models utilize encoding in complex cycles and chaotic attractors (Aihara et al, 1990; Andreyev et al., 1996; Ishii et al, 1996; Borisyuk and Borisyuk, 1997; Kaneko and Tsuda, 2001). A hierarchical approach to neural dynamics is formulated by Freeman (1975, 2001). It is summarized here as the 10 Building Blocks of the dynamics of neural populations. Here we list the first 5 principles relevant to neuropercolation at present:

• State transition of an excitatory population from a point attractor with zero activity to a non-zero point attractor with steady-state activity by positive feedback.
• Emergence of oscillations through negative feedback between excitatory and inhibitory neural populations.
• State transitions from a point attractor to a limit cycle attractor that regulates steady-state oscillation of a mixed excitatory-inhibitory cortical population.
• Genesis of chaos as background activity by combined negative and positive feedback among three or more mixed excitatory-inhibitory populations.
• Distributed wave of chaotic activity that carries a spatial pattern of amplitude modulation made by the local heights of the wave.

Various components of these and related neurodynamic principles have been implemented in computational models. For example, the Katchalsky K-models use a set of ordinary differential equations with distributed parameters to describe the hierarchy of neural populations starting from micro-columns to the hemispheres (Freeman et al, 2001; Kozma et al, 2003).
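The second building block above (oscillations from excitatory-inhibitory negative feedback) can be illustrated with a deliberately generic two-population rate model. This is only a toy sketch under assumed parameters, not the K-model or any model from the cited papers: a linear excitatory-inhibitory pair whose Jacobian eigenvalues are $-a \pm ik$, so a brief input produces a damped oscillation purely through the E-I feedback loop.

```python
import numpy as np

# Toy linear excitatory-inhibitory pair (illustrative parameters only):
#   dE/dt = -a*E - k*I + input(t)     excitation is suppressed by I
#   dI/dt = -a*I + k*E                inhibition is driven by E
# The eigenvalues of [[-a, -k], [k, -a]] are -a +/- i*k, so an impulse
# produces a damped oscillation at angular frequency ~k.
a, k = 0.5, 6.0
dt, steps = 0.001, 20000
E, I = 0.0, 0.0
trace = []
for n in range(steps):
    pulse = 1.0 if n * dt < 0.05 else 0.0      # brief external kick
    dE = -a * E - k * I + pulse
    dI = -a * I + k * E
    E, I = E + dt * dE, I + dt * dI            # forward Euler step
    trace.append(E)

print("samples of E(t):", np.round(trace[::2000], 4))  # sign changes show the oscillation
```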
Neuropercolation, on the other hand, uses tools of percolation theory and random graphs to model principles of neurodynamics based on a discrete approach. Extensive work has been conducted on the formation and dynamics of structural and functional clusters in the cortex (Bressler, 2006; Sporns, 2006; Jirsa and McIntosh, 2007). Neuropercolation describes these effects in discrete models, and future studies aim at establishing the connection between discrete and continuous approaches.

## Generalizations of Percolation Theory for Neural Masses

### Properties of Neuropercolation Models

Basic bootstrap percolation has the following properties: (i) it is a deterministic process following random initialization; (ii) the model always progresses in one direction, i.e., from inactive states to active ones and never backwards. Under such conditions, these mathematical models exhibit phase transitions with respect to the initialization probability $$p\ .$$ Neuropercolation models develop neurobiologically motivated generalizations of bootstrap percolations. Neuropercolation incorporates the following major conditions inferred from the features of the neuropil, the filamentous neural tissue in the cortex.

• Interaction with noise: The dynamics of the interacting neural populations is inherently non-deterministic due to dendritic noise and other random effects in the nervous tissue and external noise acting on the population. This is expressed by Szentagothai (1978, 1990): "Whenever he is looking at any piece of neural tissue, the investigator becomes immediately confronted with the choice between two conflicting issues: the question of how intricate wiring of the neuropil is strictly predetermined by some genetically prescribed blueprint, and how much freedom is left to chance within some framework of statistical probabilities or some secondary mechanism of trial and error, or selecting connections according to necessities or the individual history of the animal." A possible resolution of the determinism-randomness dilemma was based on the principle described as "randomness in the small and structure in the large" (Anninos et al. 1970, Harth et al. 1970). Neuropercolation includes randomness in the evolution rules, and it is described in random cellular automata and in other models. Randomness plays a crucial role in neuropercolation models. The situation resembles the case of stochastic resonance (Moss and Pei, 1995; Bulsara and Gammaitoni, 1996). An important difference from chaotic resonance is the more intimate relationship between noise and the system dynamics, due to the excitable nature of the neuropil (Kozma et al., 2001; Kozma, 2003).

• Long axon effects: Neural populations stem ontogenetically in embryos from aggregates of neurons that grow axons and dendrites and form synaptic connections of steadily increasing density. At some threshold the density allows neurons to transmit more pulses than they receive, so that an aggregate undergoes a state transition from a zero point attractor to a non-zero point attractor, thereby becoming a population. Relevant behaviors have been described in random graphs and conditions for phase transitions are given (Erdos and Renyi, 1960, Bollobas, 1985). In neural populations, most of the connections are short, but there are relatively few long-range connections mediated by long axons (Das and Gilbert, 1995). The effect of long-range axons is similar to small-world phenomena (Watts, Strogatz, 1998; Strogatz, 2001) and it is part of the neuropercolation model.
• Inhibition: Another important property of neural tissue is that it contains two basic types of interactions: excitatory and inhibitory ones. Increased activity of excitatory populations positively influences (excites) their neighbors, while highly active inhibitory neurons negatively influence (inhibit) the neurons they interact with. Inhibition contributes to the emergence of sustained narrow-band oscillatory behavior in the neural tissue (Aradi et al., 1995; Arbib et al., 1997). Inhibition is key in various brain structures; e.g., hippocampal interneurons are almost exclusively inhibitory (Freund and Buzsaki, 1996). Inhibition is inherent in cortical tissues and it controls stability and metastability observed in brain behaviors (Kelso, 1995; Xu and Principe, 2004; Ilin and Kozma, 2006; Kelso and Engstrom, 2006; Kelso and Tognoli, 2007). Inhibitory effects are part of neuropercolation models.

Neural populations may exhibit scale-free behavior in their structure, dynamics, and function (Aldana and Larralde, 2004; Sporns, 2006; Scale-Free Neocortical Dynamics). Neuronal avalanches have been identified as processes leading to scale-free dynamics in cortical tissue (Beggs et al, 2003). Scale-free behavior in random graphs has been rigorously analyzed by percolation methods (Bollobas, 2001; Bollobas and Riordan, 2003, 2006). Physical and computational modeling of scale-free phenomena, including preferential attachment, produced some spectacular results (Albert and Barabasi, 2002; Barabasi, 2002; Newman et al., 2002). See also the Scale-free Neocortical Dynamics entry in this Encyclopedia.

### Probabilistic Cellular Automata

A broad family of probabilistic cellular automata is defined over $$d$$-dimensional discrete tori $$\mathbb{Z}^d_N$$ (Balister et al., 2006). Let $$A$$ be the set of possible states. In the simplest case there are just 2 states: active (+) or inactive (-), which is the case considered here. The (closed) neighborhood of node $$x$$ is denoted by $$\Gamma_x \subset \mathbb{Z}^d_N\ .$$ At a given time instant t, $$x$$ becomes active with a probability that is a function of the states of the sites in $$\Gamma_x\ .$$ Since $$\Gamma_x$$ is a closed neighborhood, this probability may depend on the state of $$x$$ itself. Accordingly, it is given by a function $$p\colon A^{\Gamma_x}\times A\to [0, 1]$$ that assigns for each configuration $$\phi\colon\Gamma_x\to A$$ and each $$a\in A$$ a probability $$p_{\phi,a}$$ with $$\sum_{a\in A}p_{\phi,a}=1$$ for all $$\phi\ .$$ We define a sequence of configurations $$\Phi_t\colon\mathbb{Z}^d_N\to A$$ by setting $$\Phi_{t+1}(x)=a$$ independently for each $$x\in \mathbb{Z}^d_N$$ with probability $$p_{\phi,a}\ .$$ We start the process with some specified initial distribution over the torus $$\Phi_0\ .$$ The process $$\Phi_t$$ is called a probabilistic cellular automaton, or PCA. These models have also been referred to as contact processes and have been studied in some cases on infinite graphs (Holley and Liggett, 1995). Probabilistic cellular automata generalize deterministic cellular automata such as Conway's Game of Life. Probabilistic automata display very complex behavior, including fixed points, stable limit cycles, and chaotic behaviors, which poses extremely difficult mathematical problems that are in general beyond the reach of thorough analysis. Several rigorous results have been achieved in specific instances.
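As a concrete (and heavily simplified) illustration of the definition above, the sketch below performs one synchronous two-state PCA update on a 2-dimensional torus; the rule is passed in as a function giving the activation probability from the number of active sites in the closed 5-site neighborhood and the site's own state. Running it with a degenerate 0/1-valued rule recovers the deterministic bootstrap percolation of the earlier section. The array layout and rule signature are illustrative assumptions, not code from the cited papers.

```python
import numpy as np

rng = np.random.default_rng(0)

def pca_step(state, rule):
    """One synchronous update of a two-state PCA on the N x N torus.
    `state` is a 0/1 array; `rule(r, s)` returns the probability that a site
    with r active sites in its closed neighborhood (4 nearest neighbors plus
    itself) and current state s is active at the next time step."""
    r = (state
         + np.roll(state, 1, axis=0) + np.roll(state, -1, axis=0)
         + np.roll(state, 1, axis=1) + np.roll(state, -1, axis=1))
    p_active = np.vectorize(rule)(r, state)
    return (rng.random(state.shape) < p_active).astype(int)

# Degenerate (deterministic) rule = bootstrap percolation with ell = 2:
# active sites stay active, inactive sites activate if >= 2 neighbors are active.
def bootstrap_rule(r, s, ell=2):
    return 1.0 if s == 1 or (r - s) >= ell else 0.0

state = (rng.random((20, 20)) < 0.1).astype(int)   # random initial activation
for _ in range(40):
    state = pca_step(state, bootstrap_rule)
print("fraction of active sites after 40 steps:", state.mean())
```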
### Isotropic Cellular Automata

It is often assumed that $$p_{\phi,a}$$ depends only on the cardinality of the set of neighbors that are in the active state, and on the state of the given site. These models are called isotropic. Then the notation $$p^{-}_r$$ is used instead of $$p_{\phi,+}\ ,$$ where $$r$$ is the number of active sites in $$\Gamma_x$$ and $$\Phi(x) = -\ .$$ Similarly, $$p^{+}_r$$ is used for the given $$r$$ and with the condition $$\Phi(x) = +\ .$$ Isotropic models are substantially more restrictive than the general case, but they still have complex behavior, sometimes including spontaneous symmetry breaking (Balister et al, 2006). We call the model fully isotropic if $$p^{+}_r=p^{-}_r=p_r$$ for all $$r\ .$$ In this case, the site itself is treated on the same basis as its neighbors. If the behavior of the isotropic model is unchanged while interchanging + and -, it is called symmetric.

• Example 3 (Probabilistic Cellular Automata with Majority Voting Rule): The case of the two-dimensional torus of lattice size $$N \times N$$ is considered. Define $$\Gamma_x$$ to be the local neighborhood consisting of 5 nodes, i.e., the 4 nearest neighbors and the node itself. For a fixed probability $$0 < p < 1\ ,$$ probabilistic majority voting is expressed as follows. A node is active at the next time step with probability $$(1-p)$$ if the majority of the nodes in its neighborhood are active, and with probability $$p$$ if only a minority of them are active; $$p$$ thus plays the role of a noise level, the probability of deviating from the local majority. These are also called $$p$$-majority percolation. The majority voting rule defines an isotropic and symmetric model with transition probabilities $$p^{-}_{0} = p^{-}_{1} = p^{-}_{2}$$ = $$p = p^{+}_{0} = p^{+}_{1} = p^{+}_{2}\ ,$$ and $$p^{-}_{3} = p^{-}_{4} = p^{-}_{5}$$ = $$(1-p) = p^{+}_{3} = p^{+}_{4} = p^{+}_{5}\ .$$

### Mean Field Models

Mean field models are related to probabilistic cellular automata as follows. In the mean field model, instead of considering the number of active nodes in the specified neighborhood $$\Gamma\ ,$$ the activations of $$|\Gamma|$$ randomly selected grid nodes are used in the update rule (with replacement). Since there is no ordering of the neighbors, the transition probabilities depend only on the number of active states in the selected $$|\Gamma|$$-tuples. It is clear that the mean field model does not depend on the topology of the grid. Considering a 2D torus of size $$N\times N\ ,$$ the density of active points $$\rho_t\in[0,1]$$ is defined as $$\rho_t = N_{A,t}/N^2\ ,$$ where $$N_{A,t}$$ is the number of active nodes on the torus at time $$t\ .$$ The density $$\rho_t$$ acts as an order parameter and it can exhibit various dynamic behaviors depending on the details of the probabilistic rules. Mean field models are mathematically more tractable at present and they provide initial insight into the dynamics of more general neuropercolation models.

## Mathematical Results on Phase Transitions in Neuropercolation

### Phase Transitions in Random Majority Percolation Models

In local models, a rigorous proof has been found of the fact that for extremely small values of $$p$$ (depending on the size of the lattice $$N$$) the model spends a long time in either low- or high-density configurations before the very rapid transition to the other state (Balister et al., 2005). Fairly good bounds have been found on the (very long) time the model spends in the two essentially stable states and on the (comparatively very short) time it takes to cross from one essentially stable state to another.
The proof is only given for the case of a very weak random component. The behavior of the lattice models differs from that in the mean field model in the manner of these transitions. For the mean field model, transitions typically occur when random density fluctuations result in about one half of the states being active. When this occurs, the model passes through a configuration which is essentially symmetric between the low- and high-density configurations, and is equally likely then to progress to either one. In the lattice models, certain configurations with very low density can have a large probability of leading to the high-density configuration, and transitions from low to high density typically occur via one of these (non-symmetric) configurations.

Figure 2: Low density configuration of active sites on an $$N \times N$$ torus that nevertheless will with high probability lead to a high-density configuration in time $$O(N/p)\ .$$ Each band is of width at least 2 and wraps around the torus.

It is also known that there is a constant $$p_0<0.5$$ such that for $$p_0 < p \le 0.5$$ the model spends most of its time with a density of about 0.5, but for $$p<p_c$$ and $$N$$ sufficiently large, the model spends most of its time in either a low-density or a high-density state.

• Example 4 (Phase Transition in 2D Majority Percolation): Consider a 2-dimensional torus of size $$N \times N$$ with $$p$$-majority transition rules, when the neighborhood contains 5 sites. Then one only needs two thin intersecting bands of active sites to ensure a high probability of reaching the high-density state in a short time; an example of the required 2-band configuration is shown in Figure 2. The transition is proven for probability $$p \propto 1/N^{2}\ ,$$ and it is conjectured to be valid for a broader range of probabilities (Balister et al., 2005).

### Large-scale Deviations in Mean-field Models of Probabilistic Cellular Automata

In the mean field models described previously, a given number of randomly selected grid nodes are used in the update rule (with replacement). The number of selected sites is chosen as the cardinality of the neighborhood set $$|\Gamma|\ .$$ Mean field models have at least one stable fixed point and can have several stable and unstable fixed points, limit cycles, and chaotic oscillations. For large lattice size $$N\ ,$$ given density $$x$$ at one time step, the density of active sites at the next step is approximately normally distributed with mean $$f_{m}(x)\ ,$$ where (for a fully isotropic model): $f_m(x) = \sum_r{{|\Gamma|} \choose {r}}p_rx^r(1-x)^{|\Gamma|-r}.$ Iterations of the map $$\rho_{t+1} = f_{m}(\rho_t)$$ can result in stable fixed points, limit cycles, or chaotic behavior depending on the initial value $$\rho_0\ .$$ Various conditions have been derived for stable fixed point solutions, and phase transitions between stable fixed points have been analyzed in various mean field models (Balister et al., 2006).

Figure 3: Stable and unstable fixed points of the mean field models as a function of the system noise $$p\ .$$ Solid line: stable fixed points, dash: unstable fixed points.

• Example 5 (Phase Transitions in 2D Mean Field Models): Consider the symmetric fully isotropic mean field model on the 2-dimensional lattice. Transition probabilities are reduced to the ones given in Example 3.
A fixed point is determined by the condition $$x = f_{m}(x)\ .$$ Such a fixed point is denoted by $$\rho\ .$$ Using the majority update rule, one can readily arrive at a transcendental equation for the fixed points. It can be shown that there is a stable fixed point for $$p_c < p \leq 0.5\ ,$$ while there are two stable and one unstable fixed point for $$p < p_c\ .$$ Here $$p_c$$ is the critical probability, and the exact value $$p_c = 7/30$$ is derived in the case of neighborhood size $$|\Gamma| = 5$$ and the majority update rule. Near the critical point the density versus $$p$$ relationship approximates the following power law behavior with very good accuracy: $|\rho - 0.5| \propto (p_{c} - p)^{\beta},$ where $$\beta =0.5\ .$$ Figure 3 illustrates the stable density values as solid lines. Density level 0.5 is the unique stable fixed point of the process above the critical point $$p\ge p_c\ ,$$ while it becomes unstable below $$p_c\ .$$

### Transition Time in Majority Percolation and Mean Field Models

The average time between transitions is governed by the average time it takes for one of these special configurations to occur (see Fig. 2), and transitions do not typically go through symmetric configurations. A snapshot of the model transitioning half way from a high-density to a low-density configuration will look very different from a snapshot of the transition from a low to a high-density configuration. On an $$N \times N$$ torus in the case of local majority percolation the average time between transitions is $$\exp(O(N \log p))$$ (Balister et al, 2005). For the mean field model, the average waiting time between transitions is $$\exp(O(N^2 \log p))$$ (Balister et al., 2006). The transition itself is, however, fast, requiring only time $$O(N/p)\ .$$ The rapid transitions between persistent states can be interpreted in the context of metastability as introduced in the HKB model by Kelso and colleagues (Kelso, 1995; Kelso and Tognoli, 2007). The theoretical results justify the use of the terminology 'neuropercolation', describing the exponentially long waiting period followed by a quick transition from one metastable state to another. The quick transition can be described effectively as a percolation phenomenon.

### Open Mathematical Problems

Probabilistic cellular automata, random majority percolation, and various neuropercolation models are relatively new and little known mathematical objects. They pose a number of challenging mathematical problems, including the following ones: What is the behavior of the $$p$$-majority cellular automata in the general case? What are the conditions of stable states? Is there a phase transition depending on $$p\ ?$$ How does additional randomness, e.g., rewiring with long-range connections, influence the dynamics? How can one estimate the time the system stays in a stable state before it flips into another stable state? Answering these and many related questions with mathematical rigor is beyond reach at present. Computational simulations can provide guidance for working hypotheses toward further mathematical analysis, as described in the next section.

## Computational Models of Neuropercolation and Critical Behavior

### Critical Behavior in Local Probabilistic Cellular Automata

Figure 4: Snapshots of 3 PCA systems with noise levels $$p$$ = 0.11, 0.134, and 0.20, respectively. The second diagram illustrates critical behavior, while the other two figures show subcritical (ferromagnetic) and supercritical (paramagnetic) regimes.
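Before turning to those local simulations (Fig. 4 above), the mean-field prediction of Example 5 can be checked directly by iterating the map $$\rho_{t+1} = f_m(\rho_t)$$ for the 5-site majority rule, with $$p$$ the probability of deviating from the majority. The snippet below is only an illustrative sketch of that computation (the listed $$p$$ values and iteration count are arbitrary choices): below $$p_c = 7/30$$ the density settles away from 0.5, above it the density returns to 0.5.

```python
from math import comb

GAMMA = 5  # closed neighborhood size: 4 nearest neighbors + the site itself

def f_m(x, p):
    """Mean-field map for the p-majority rule: a site deviates from the
    majority of its 5-site neighborhood with probability p."""
    total = 0.0
    for r in range(GAMMA + 1):
        p_r = (1 - p) if r >= 3 else p      # active w.p. 1-p if majority active
        total += comb(GAMMA, r) * p_r * x**r * (1 - x)**(GAMMA - r)
    return total

for p in (0.15, 0.20, 0.25, 0.30, 0.40):
    rho = 0.6                               # start slightly above the symmetric density
    for _ in range(2000):                   # iterate rho_{t+1} = f_m(rho_t)
        rho = f_m(rho, p)
    print(f"p = {p:.3f}   limiting density = {rho:.4f}")
```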
As opposed to mean field models, an analytical solution is not available for the local models, and computer simulations are used to study these systems. First, the nearest-neighbor configuration is considered with $$p$$-majority percolation on the 2-dimensional torus. Figure 4 illustrates the system behavior for $$p$$ values 0.11, 0.134, and 0.20, respectively. The first panel of Fig. 4 is for $$p=0.11$$ and one can see the dominance of active sites (white). This is an illustration of clear nonzero magnetization as in ferromagnetic states. On the third panel of Fig. 4, $$p=0.20$$ and the active and inactive sites are equally likely. The magnetization is close to zero (paramagnetic regime). The middle panel of Fig. 4 shows a behavior where very large clusters of active and inactive sites are formed. This case has been calculated for $$p=0.134\ .$$ Finite size scaling theory of statistical physics is applied to characterize the observed behavior.

The behavior of the local PCA is qualitatively similar to the mean field models shown in Fig. 3. Namely, there is a critical probability $$p_c\ ,$$ and for $$p>p_c$$ the stationary density distribution of $$\rho_t$$ is unimodal, while it becomes bimodal for $$p<p_c\ .$$ There are two phases, one with high density and one with low density, similarly to mean field models. Calculations show that, in the local model, the critical probability is significantly below the one obtained for the mean field: $$p_c\approx 0.134$$ for the local model, compared to $$p_c = 7/30 \approx 0.233$$ for the mean field. The exponent of the power law scaling of $$m$$ near the critical point is different as well: compare $$\beta =0.5$$ for the mean field model with $$\beta \approx 0.130$$ for the local model. Methods of finite size scaling from statistical physics are used to interpret these findings; see the next section.

### Critical Exponents and Finite Size Scaling

The methodology previously developed for Ising spin glass systems (Binder, 1981) is applied here to characterize processes in PCA. If the number of active and inactive sites is equal at a given time, the activation density $$\rho_t$$ becomes 0.5. This corresponds to a basal state in magnetic materials with no magnetization. Deviations from the 0.5 level at any time, given as $$|\rho_t - 0.5|\ ,$$ signify magnetization. The expected value of magnetization $$m$$ is estimated for a series of $$T$$ iterations as follows: $<m> = <|\rho_t - 0.5|> \approx 1/T \sum_{t=1}^T{|\rho_t-0.5|}\ .$ The susceptibility is defined using magnetization $$m$$ as: $\chi = <m^2> - <m>^2 \ .$ For the definition of the correlation length $$\xi\ ,$$ see (Makowiec, 1999). For Ising systems, magnetization, susceptibility, and correlation length satisfy power law scaling behavior near criticality. In order to determine whether the terminology critical behavior is justified in the case of neuropercolation models, various statistical properties of the computed processes have been evaluated. Recall that in mean field models the scaling law for magnetization near the critical probability $$p_c$$ is given in Example 5. The scaling laws for $$\chi$$ and $$\xi$$ are defined similarly: $\chi \sim |p - p_c|^{-\gamma} ~~~, \xi \sim |p - p_c|^{-\nu}$ The fourth order cumulants are defined as $$U(N, p) = <m^4>/<m^2>^2\ ,$$ where $$N$$ is the lattice size, and $$p$$ is the noise parameter. Finite size scaling theory tells us that the fourth order cumulants are expected to intersect each other at a unique point which is independent of lattice size.
The corresponding probability of this unique point is the critical probability, see Fig. 5.

Figure 5: Critical probability estimation using the fourth order cumulants given by Eq. 3; the curves correspond to lattice sizes 45, 64, 91 and 128. The obtained value of $$p_c$$ = 0.13423 $$\pm$$ 0.00002 (Puljic et al., 2005).

In order to test the consistency of the critical behavior in neuropercolation models, the identity relationship $$2\beta + \gamma = 2\nu$$ has been calculated. Recall that this identity holds for the critical exponents in Ising systems (Binder, 1981). This identity is considered a measure of the quality of the estimation of the critical exponents in a given system.

Table 1:

| | $$\beta$$ | $$\gamma$$ | $$\nu$$ | Error |
|---|---|---|---|---|
| PCA | 0.1308 | 1.8055 | 1.0429 | 0.02 |
| TCA | 0.12 | 1.59 | 0.85 | 0.13 |
| Ising (2D) | 0.125 | 1.75 | 1 | 0 |
| CML | 0.115 | 1.55 | 0.89 | 0.00 |

The results of the PCA calculations are summarized in Table 1, along with the parameters of the Ising system, the Toom cellular automaton (TCA), and coupled map lattice models (CML). The 'Error' in the last column indicates the error of the identity function of the critical exponents. As Table 1 shows, the identity function is satisfied with high accuracy in the studied neuropercolation models. This indicates that the local PCA exhibits behavior close to an Ising model, i.e., it belongs to the weak Ising class (Kozma et al., 2005). This result also lends support to the terminology generalized phase transitions in the context of the studied neuropercolation models. These concepts are generalized further in even more complex neuropercolation models with small world effects and inhibition.

### Long-range Axonal and Inhibition Effects in Neuropercolation

Figure 6: Activation density as a function of the noise level in systems with no random long-range neighbors, and with various ratios of remote neighbors (Kozma et al., 2005).

Long axon effects are modelled when a certain proportion ($$0 \leq q \leq 1$$) of regular lattice connections is replaced (rewired) by randomly selected links (Kozma et al., 2004; Puljic and Kozma, 2005). The case of $$q = 0$$ describes completely regular lattice connections, while $$q = 1$$ means that all connections are selected at random as in mean field models. An intermediate value of $$q$$ characterizes a system with some rewiring, just as in the small-world models (Strogatz, 2002). Figure 6 contains results that generalize the mean field case, cf. Fig. 3. Different curves correspond to different rewiring ratios (Kozma et al., 2005). The rightmost curve corresponds to the mean field case (all connections are rewired), while the leftmost curve describes the regular lattice with local connections only (no rewiring). Intermediate situations are shown with the curves between the local and mean field models.

Table 2:

| | $$\beta$$ | $$\gamma$$ | $$\nu$$ | Error |
|---|---|---|---|---|
| PCA: local | 0.1308 | 1.8055 | 1.0429 | 0.02 |
| SW: 6.25% | 0.3071 | 1.1920 | 0.9504 | 0.09 |
| SW: 12.5% | 0.4217 | 0.9873 | 0.9246 | 0.02 |
| SW: 100% | 0.4434 | 0.9371 | 0.9026 | 0.02 |

The critical exponents obtained for models with various degrees of small-world effects are given in Table 2; notations are the same as in Table 1. In the case of the SW (6.25%) model, 6.25% of the local lattice connections are rewired to randomly selected nodes. Table 2 shows that the non-local systems may belong to a weak-Ising class, where the hyperscaling relationship is approximately satisfied (Puljic, Kozma, 2005).
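The local and small-world behavior above can be explored with a straightforward (if slow) simulation. The sketch below runs the $$p$$-majority PCA on an N x N torus, optionally rewiring a fraction q of the lattice links to random sites, and estimates the magnetization $$<|\rho_t - 0.5|>$$. The lattice size, run length and the listed p values are arbitrary illustrative choices; this is not the original code of the cited studies.

```python
import numpy as np

rng = np.random.default_rng(1)

def run_p_majority(N=32, p=0.12, q=0.0, steps=3000, burn_in=500):
    """p-majority PCA on an N x N torus with a fraction q of the four lattice
    links per site rewired to randomly chosen sites.
    Returns the estimated magnetization <|rho_t - 0.5|>."""
    n = N * N
    idx = np.arange(n)
    up, down = (idx - N) % n, (idx + N) % n
    left = (idx // N) * N + (idx - 1) % N
    right = (idx // N) * N + (idx + 1) % N
    nbrs = np.stack([up, down, left, right], axis=1)
    rewire = rng.random(nbrs.shape) < q              # small-world style rewiring
    nbrs[rewire] = rng.integers(0, n, rewire.sum())
    state = rng.integers(0, 2, n)
    mags = []
    for t in range(steps):
        active = state + state[nbrs].sum(axis=1)     # closed 5-site neighborhood count
        majority = (active >= 3).astype(int)
        noise = rng.random(n) < p                    # deviate from majority w.p. p
        state = np.where(noise, 1 - majority, majority)
        if t >= burn_in:
            mags.append(abs(state.mean() - 0.5))
    return float(np.mean(mags))

for p in (0.10, 0.13, 0.20):
    print(f"p = {p:.2f}   <|rho - 0.5|> ~ {run_p_majority(p=p):.3f}")
```

Low noise values give a clearly nonzero magnetization (ordered phase), while values well above the critical region give a magnetization near zero, in line with Figs. 4-6.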
Figure 7: Phase lag values evolving in time for a two-layer lattice system with 6.25% nonlocal (axonal) connections for a system with 256 channels; (a) Noise level 13% (subcritical): high synchrony is seen across the array. (b) Noise level 15% (critical noise): there is spontaneous, intermittent desynchronization across the array. (c) Noise level 16% (super-critical noise): the synchrony between channels is diminished (Puljic, Kozma, 2006).

The behavior of the neuropercolation model with excitatory and inhibitory nodes is illustrated in Fig. 7. Due to the negative feedback, these models may generate sustained limit cycle and non-periodic oscillations, similar to the behavior previously observed in models based on coupled differential equations. The spatial distribution of synchronization shows that the subcritical regime is characterized by rather uniform synchronization patterns. On the other hand, the supercritical regime shows high-amplitude, unstructured oscillations. Near critical parameters, intermittent oscillations emerge, i.e., relatively quiet periods of weak oscillations followed by periods of intensive oscillations in the synchronization (Puljic and Kozma, 2006). The sparseness of connectivity to and from inhibitory populations acts as a control parameter, in addition to the system noise level $$p$$ and the rewiring ratio $$q\ .$$ The system shown in Figs. 7a-c has a few percent of connectivity between excitatory and inhibitory units.

### Example of Ontogenetic Development and Criticality in the Neuropil

Figure 8: Illustration of self-organization of critical behavior in the percolation model of the neuropil. By way of structural evolution, the neuropil evolves toward regions of criticality or edge-of-criticality. Once the critical regions are established, the connectivity structure remains essentially unchanged. However, by adjusting the noise and gain levels, the system can be steered towards or away from critical regions (Kozma et al., 2005).

The following hypothesis is proposed regarding the emergence of critical behavior in the neuropil. The neural connectivity is sparse in the neuropil at the embryonic stage. Following birth, the connectivity increases and ultimately reaches a critical level, at which the neural activity becomes self-sustaining. The brain tissue as a collective system is at the edge of criticality. Depending on the combination of structural properties and dynamical factors, like noise level and input gain, the system may transit between subcritical, critical, and supercritical regimes. This mechanism is illustrated in Fig. 8. By way of structural evolution, the neuropil evolves toward regions of criticality or edge-of-criticality. Once critical regions are established, the connectivity structure remains essentially unchanged. However, by adjusting the noise and/or gain levels, the system can be steered towards or away from critical regions. Clearly, the outlined mechanism is incomplete and in realistic neural systems a host of additional factors play a crucial role. However, the given mechanism is very robust and it may provide the required dynamical behavior in a wide range of real life conditions.

## References

• Adler, J. (1991) Bootstrap percolation, Physica A, 171, 453-470. • Adler, J., van Enter and J. A. Duarte (1990) Finite-size effects for some bootstrap percolation models, J. Statist. Phys, 60, 322-332. • Aihara, K., Takabe T., Toyada M., (1990). Chaotic neural networks, Phys. Lett. A., 144(6-7), 333-340.
• Aizeman and Lebowitz (1988) Metastability effects in bootstrap percolation, Journal Phys. A, 21, 3801-3813. • Albert, R., Barabási, A.L., Statistical mechanics of complex networks, Rev. Mod. Phys. 74, 47 (2002). • Aldana, M., Larralde, H. (2004) Phase transitions in scale-free neural networks: Departure from the standard mean-field universality class, Phys. Rev. E, 70, 066130. • Andreyev, Y.V., Dimitriev, A.S., Kuminov D.A. (1996) 1-D maps, chaos and neural networks for information processing, Int. J. Bifurcation and Chaos, 6(4), 627-646. • Anninos PA, Beek B., Csermely T., Harth E., Pertile G. (1970) Dynamics of neural structures, J.Theor. Biol., 26: 121-148. • Aradi, I., Barna G., Erdi P. (1995), Chaos and Learning in the olfactory bulb, Int. J. Intel. Syst., 10(1), 89-117. • Arbib, M.A., Erdi, P., Szentagothai, J. (1997) Neural Organization: Structure, Function, Dynamics, MIT Press, Cambridge, MA. • Babloyantz, A., and Desthexhe A., (1986), Low-dimensional chaos in an instance of epilepsy, Proc. Natl. Acad. Sci. USA, 83, 3513-3517. • Balister P., Bollobas, B., and A. Stacey (1993) Upper bounds for the critical probability of oriented percolation in two dimensions, Proc. Royal Soc., London Sr., A., 400., no 1908, 202-220. • Balister, P.N., Bollobas, B., and A. M. Stacey (2000) Dependent percolation in two dimensions, Probability Theory and Related Fields, 117, No.4, 495-513. • Balister, P., Bollobas, B., Johnson, R., Walters, M. (2005) Majority Percolation (submitted, revised). • Balister, P., B. Bollobas, R. Kozma (2006) Large-Scale Deviations in Probabilistic Cellular Automata, Random Structures and Algorithms, 29, 399-415. • Barabasi A-L (2002) Linked. The New Science of Networks. Cambridge MA: Perseus Press. • Beggs, J. M. and D. Plenz (2003). Neuronal avalanches in neocortical circuits. J Neurosci 23(35): 11167-77. • Berlekamp, E.R., JH Conway, and RK Guy, (1982) Winning Ways for your mathematical plays, Vol. 1: Games in General, Academic Press, New York, NY. • Binder, K. Finite scale scaling analysis of Ising model block distribution function, Z. Phys. B. 43, 119-140, 1981. • Bollobas, B., and Stacey, A. (1997) Approximate upper bounds for the ciritcal probability of oriented percolation in two dimensions based on rapidly mixing Markov chains, J. Appl. Probaility, 34. no. 4, 859-867. • Bollobas B (2001) Random Graphs. Cambridge Studies in Advanced Mathematics 2nd Ed. Cambridge University Press, Cambridge, UK. • Bollobas, B., Riordan, O. (2003) Results on scale-free random graphs. Handbook of graphs and networks, 1-34, Wiley-VCH, Weinheim. • Bollobas, B., Riordan, O. (2006) Percolation. Cambridge University Press, Cambridge, UK. • Borisyuk, R.M., Borisyuk, G.N., (1997), Information coding on the basis of synchronization neuronal activity, Biosystems, 40(1-2), 3-10. • Bressler, S.L., Tognoli, E. (2006) Operational principles of neurocognitive networks, Int J Psychophysiol., 60(2), 139-48. • Brown, R., Chua, L. (1999) Clarifying chaos 3. Chaotic and stochastic processes, chaotic resonance and number theory, Int. J. Bifurcation and Chaos, 9, 785-803. • Bulsara, A., Gammaitoni, L. (1996) Tuning in to noise. Physics Today, March, 1996, 39-45. • Cerf, R. and Cirillo, E.N., (1999) Finite size scaling in three-dimensional bootstrap percolation, Ann. Probab., 27, no. 4., 1837-1850. • Das, A., Gilbert, C.D. (1995) Long-range horizontal connections and their role in cortical reorganization revealed by optical recording of cat primary visual cortex. Nature, 375, 780-784. • Erdos, P. 
and Renyi A. (1960). On the evolution of random graphs, Publ. Math. Inst. Hung. Acad. Sci. 5: 17-61. • Freeman, W.J. (1975) Mass Action in the Nervous System. Academic Press, New York. • Freeman, W.J. How Brains Make up Their Minds, Columbia University Press, 2001. • Freeman, W.J., Kozma, R., and Werbos, P. J., (2001). Biocomplexity - Adaptive Behavior in Complex Stochastic Dynamical Systems, BioSystems, 59, 109-123. • Freund T.F., Buzsaki G. (1996) Interneurons of the hippocampus. Hippocampus 6:347-470. • Gravner, J. and McDonald, E., (1997) Bootstrap percolation in a polluted environment, J. Stat. Phys., 87 (3-4), 915-927. • Grimmett, G. (1999) Percolation in Fundamental Principles of Mathematical Sciences, Spinger-Verlag, Berlin. • Grossberg, S. (1988), Nonlinear Neural Networks: Principles, Mechanisms, and Architectures, Neural Networks, 1, 17-61. • Harth, E.M., Csermely, T., Beek, B., Lindsay, R.P. (1970) Brain functions and neural dynamics, J.Theor.Biol. 26: 93-100. • Holley, R., T.M. Liggett (1995) Ann. Probability 5, 613–636. • Hopfield, J.J., (1982) Neural networks and physical systems with emergent collective computational abilities, Proc. National Academy of Sciences USA, 79, 2554-2558. • Hoppensteadt F.C., Izhkevich E.M. (1998) Thalamo-cortical interactions modeled by weakly connected oscillators: could the brain use FM radio principles? BioSystems, 48: 85-94. • Ilin, R., Kozma, R. (2006) Stability of coupled excitatory–inhibitory neural populations and application to control of multi-stable systems, Phys. Lett. A 360, 66–83. • Ishii, S., Fukumizu K., Watanabe S., (1996), A network of chaotic elements for information processing, Neur. Netw. 9(1), 25-40. • Jirsa, V. K.; McIntosh, A.R. (Eds.) (2007) Handbook of Brain Connectivity, Understanding Complex Systems, Springer Verlag, Heidelberg. ISBN: 978-3-540-71462-0 • Kaneko K, Tsuda I. Complex Systems: Chaos and Beyond. A Constructive Approach with Applications in Life Sciences, 2001. • Kauffman, S. A. (1990), Requirements for evolvability in complex systems: orderly dynamics and frozen components, Phys. D, 42, 135-152. • Kelso, J. A. S. (1995) Dynamic Patterns: The Self-Organization of Brain and Behavior. MIT Press, Cambridge, MA. • Kelso, J.A.S., Engstrom, D.(2006) The Complementary Nature. MIT Press, Cambridge, MA. • Kelso, J.A.S, Tognoli, E., (2007) Toward a Complementary Neuroscience: Metastable Coordination Dynamics of the Brain, in: “Neurodynamics of Cognition and Consciousness,” Perlovsky, L. and Kozma, R. (eds), Understanding Complex Systems, Springer Verlag, Heidelberg. • Kozma, R. and Freeman, W.J. (2001), Chaotic Resonance - Methods and applications for robust classification of noisy and variable patterns, Int. J. Bifurcation and Chaos, 11(6), 2307-2322. • Kozma R, Freeman WJ, Erdi P. (2003) The KIV model – nonlinear spatio-temporal dynamics of the primordial vertebrate forebrain. Neurocomputing, 52: 819-826. • Kozma, R., (2003) On the Constructive Role of Noise in Stabilizing Itinerant Trajectories, Chaos, 13(3), 1078-1090. • Kozma, R., and Freeman, W.J., (2003) Basic Principles of the KIV Model and its application to the Navigation Problem, Int. J. Integrat. Neurosci., 2, 125-139. • Kozma, R., Puljic, M., Balister, P., Bollobas, B., and Freeman, W. J. (2004). Neuropercolation: A random cellular automata approach to spatio-temporal neurodynamics. Lecture Notes in Computer Science, 3305, 435-443. http://repositories.cdlib.org/postprints/1013/ • Kozma, R., Puljic, M., Balister, P., Bollobas, B., and Freeman, W. J. 
(2005). Phase transitions in the neuropercolation model of neural populations with mixed local and non-local interactions. Biological Cybernetics, 92(6), 367-379. http://repositories.cdlib.org/postprints/999/ • Makowiec, D. (1999) Stationary states for Toom cellular automata in simulations, Phys. Rev. E 60, 3787-3796. • Marcq, P., Chate, H., Manneville, P. (1997) Universality in Ising-like phase transitions of lattices of coupled chaotic maps, Phys. Rev. E, 55(3), 2606-2627. • Moss, F. and Pei, X., (1995) Stochastic resonance - Neurons in parallel, Nature, 376, 211-212 • Newman, M.E.J., Jensen, I., Ziff, R.M. (2002) Percolation and epidemics in a two-dimensional small world, Phys. Rev. E, 65, 021904, 1-7. • Puljic, M. and Kozma, R. (2005). Activation clustering in neural and social networks. Complexity, 10(4), 42-50. • Puljic, M., Kozma, R. (2006) Noise mediate Intermittent Synchronization of behaviors in the Random Cellular Automaton Model of Neural Populations, Proc. ALIFEX, MIT Press. • Schiff, S.J. et al, (1994). Controlling chaos in the brain, Nature, 370, 615-620. • Schonmann, R. (1992) On the behavior of some cellular automata related to bootstrap percolation, Ann. Probability, 20(1), 174-193 • Stam, C.J., et al. (2005) Nonlinear dynamical analysis of EEG and MEG: Review of an emerging field, Clinical Neurophysiology 116 (2005) 2266-2301. • Sporns, O. (2006) Small-world connectivity, motif composition, and complexity of fractal neuronal connections, BioSystems, 85, 55-64. • Steyn-Ross, D.A., Steyn-Ross, M.L., Sleigh, J.W., Wilson, M.T., Gillies, I.P., Wright, J.J. The Sleep Cycle Modeled as a Cortical Phase Transition, Journal of Biological Physics 31: 547-569, 2005. • Strogatz, S. H. (2001) Nature, 410 (6825) 268–276. • Szentagothai, J. (1978) Specificity versus (quasi-) randomness in cortical connectivity; in: Architectonics of the Cerebral Cortex Connectivity, Brazier, M.A.B., and Petsche, H. (Eds.), New York, Raven Press, pp.77-97. • Szentagothai, J. (1990) "Specificity versus (quasi-) randomness" revisited, Acta Morphologica Hungarica, 38:159-167. • Watts DJ, Strogatz SH. Collective dynamics of “small-world” networks. Nature 1998, 393: 440-442. • Wolfram, S. (2002) A New Kind of Science, Wolfram Media Inc., Champaign, IL. • Xu, D., J.C. Principe, IEEE Trans. Neural Networks 15 (2004) 1053. Internal references • John W. Milnor (2006) Attractor. Scholarpedia, 1(11):1815. • Valentino Braitenberg (2007) Brain. Scholarpedia, 2(11):2918. • James Meiss (2007) Dynamical systems. Scholarpedia, 2(2):1629. • James M. Bower and David Beeman (2007) GENESIS. Scholarpedia, 2(3):1383. • Walter J. Freeman (2007) Hilbert transform for brain waves. Scholarpedia, 2(1):1338. • John J. Hopfield (2007) Hopfield network. Scholarpedia, 2(5):1977. • Peter Jonas and Gyorgy Buzsaki (2007) Neural inhibition. Scholarpedia, 2(9):3286. • John M. Beggs (2007) Neuronal avalanche. Scholarpedia, 2(1):1344. • Jeff Moehlis, Kresimir Josic, Eric T. Shea-Brown (2006) Periodic orbit. Scholarpedia, 1(7):1358. • Walter J. Freeman (2007) Scale-free neocortical dynamics. Scholarpedia, 2(2):1357. • Philip Holmes and Eric T. Shea-Brown (2006) Stability. Scholarpedia, 1(10):1838. • Catherine Rouvas-Nicolis and Gregoire Nicolis (2007) Stochastic resonance. Scholarpedia, 2(11):1474. • Arkady Pikovsky and Michael Rosenblum (2007) Synchronization. Scholarpedia, 2(12):1459.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 6, "mathjax_display_tex": 167, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8533755540847778, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/tagged/classical-physics+homework
# Tagged Questions

1 answer, 69 views
### Standing Waves: finding the number of antinodes
A string with a fixed frequency vibrator at one end forms a standing wave with 4 antinodes when under tension T1. When the tension is slowly increased, the standing wave disappears until tension T2 is ...

2 answers, 87 views
### Calculating phase difference of sound waves
An observer stands 3 m from speaker A and 5 m from speaker B. Both speakers, oscillating in phase, produce waves with a frequency of 250 Hz. The speed of sound in air is 340 m/s. What is the phase ... (a worked numeric check appears below this list)

2 answers, 48 views
### Effect of vertical collision on kinetic friction and subsequent change in horizontal velocity
Suppose somehow a block of mass $m$ is moving on the ground, and the coefficient of kinetic friction between the block and the ground is $\mu_k$. If I drop a tennis ball (of the same mass) on it from a ...

1 answer, 130 views
### Solution of a partial differential heat equation with derivative and boundary conditions
I want to solve the following partial differential equation. Find $u(x, t)$, satisfying $u_t = u_{xx}$, $u(x, 0) = x − x^2$, $u(0, t) = T_0$, $u_x (1, t) = 0$ and $|u|$ is bounded. Using separation ...

3 answers, 442 views
### Rotational speed of a coil in a uniform magnetic field at equilibrium
I'm looking at the following problem from "Physics 3" by Halliday, Resnick and Krane (4th edition): The armature of a motor has 97 turns each of area 190 cm² and rotates in a uniform magnetic ...

0 answers, 82 views
### Aerodynamic drag on a cannonball?
I'm trying to build a ballistics simulation where I shoot a cannonball. I want to allow for drag and am trying to work out the math to do so. I can work the drag out using $F = Cd\times S\times ...$
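For the phase-difference question above (two in-phase speakers 3 m and 5 m from the observer, 250 Hz, sound speed 340 m/s), the standard approach is $\Delta\phi = 2\pi\,\Delta r/\lambda$ with $\lambda = v/f$. The sketch below simply evaluates that; it is not an answer taken from the linked thread.

```python
import math

f, v = 250.0, 340.0          # frequency (Hz) and speed of sound (m/s) from the question
r_a, r_b = 3.0, 5.0          # distances to the two in-phase speakers (m)

wavelength = v / f           # lambda = v / f  ->  1.36 m
delta_r = abs(r_b - r_a)     # path difference  ->  2 m
delta_phi = 2 * math.pi * delta_r / wavelength
print(f"wavelength = {wavelength:.3f} m")
print(f"phase difference = {delta_phi:.3f} rad "
      f"({math.degrees(delta_phi) % 360:.1f} deg modulo 360)")
```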
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 8, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9095101356506348, "perplexity_flag": "middle"}
http://mathhelpforum.com/advanced-algebra/37237-express-curve-standard-form.html
# Thread:

1. ## Express curve in standard form

Use an orthogonal transformation and a translation to express in standard form the curve given by the equation

2. Originally Posted by matty888
Use an orthogonal transformation and a translation to express in standard form the curve given by the equation

You will want to rotate the axes here.

$x = x'\cos(\theta) - y'\sin(\theta)$

$y = x'\sin(\theta) + y'\cos(\theta)$

Then you want to pick a value for $\theta$ such that the $x'y'$ term has a coefficient of 0. Then it's just a matter of completing the square on $x'$ and $y'$.

-Dan
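(The equation itself was posted as an image and is not reproduced above, so the following is a general note under the assumption that the curve is a conic, not something stated in the thread.) For a conic $Ax^2 + Bxy + Cy^2 + Dx + Ey + F = 0$, substituting the rotation above turns the cross-term coefficient into $B' = B\cos(2\theta) - (A - C)\sin(2\theta)$, so the standard choice that kills the $x'y'$ term is

$\cot(2\theta) = \frac{A - C}{B}, \qquad \text{equivalently} \qquad \tan(2\theta) = \frac{B}{A - C},$

after which the equation has only $x'^2$, $y'^2$ and linear terms, and completing the square gives the translation to standard form.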
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 3, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9072474837303162, "perplexity_flag": "head"}
http://physics.stackexchange.com/questions/43303/relationship-between-hierarchy-problem-and-higgs-fine-tuning?answertab=votes
# Relationship between hierarchy problem and Higgs fine tuning?

I often hear of the hierarchy problem being used synonymously with Higgs fine tuning (especially with regard to motivations for SUSY). What exactly is the relationship between the two problems?

As I understand it, the quadratic divergence from the Higgs self coupling means you need a lot of fine tuning to get a low Higgs mass. However, the hierarchy problem is the following: why is the electroweak scale (where W/Z physics is important, roughly 1 TeV) SO much less than the Planck scale?

So why is this, in effect, the same problem as the Higgs fine tuning?

-

## 1 Answer

It's the same problem because the low scale matches in both definitions; and the high scale matches in both definitions, too. Both problems are the puzzle why the two scales are so much different.

First, the low scale. In the Higgs fine-tuning, you define the low scale as the Higgs mass. But the Higgs mass can't be parametrically greater than the Z-boson or W-boson mass. If it were much greater – assume the Standard Model – then the quartic coupling for the Higgs would have to be much greater than a number of order one and the perturbative series for the Higgs self-coupling would break down. In fact, at a slightly higher energy scale, due to running, one would run into the Landau pole and the coupling would diverge. So as an order-of-magnitude estimate, the Z-boson mass, the W-boson mass, and the (lightest) Higgs boson (and vev) have to be of the same order and we call it the electroweak scale.

Now, the high scale. In your definition of the hierarchy problem, you just define it as the Planck scale. In the Higgs fine-tuning problem, you don't define the high scale explicitly but it's the scale of new particles or effects whose masses affect the Higgs mass via the quadratic divergence. So whenever you have something like a particle of mass $\Lambda$, its loops connected to the Higgs in some way shift the squared Higgs mass by terms of order $\Lambda^2$. Clearly, the effects connected with the highest value of $\Lambda$ are the most important, dominant ones.

The Planck scale, or slightly beneath the Planck scale, is the highest energy scale at which quantum field theory of some sort should hold. That's why it's legitimate to substitute the effects from this scale to $\Lambda^2$ and say that they contribute $m_{Pl}^2$ to the squared Higgs mass. Other effects contribute as well and the question is why the total Higgs mass is so much smaller – the squared Higgs mass is $10^{30}+$ times smaller than the squared Planck mass.

One can't believe any quantum field theory at energy scales exceeding the Planck scale because that's where gravity becomes strong and one needs a full theory of quantum gravity – probably synonymous with string/M-theory – which is strictly speaking not just a quantum field theory and the naive "addition of $\Lambda^2$" and similar QFT wisdom can't be relied upon.

-
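As a quick, hedged illustration of the size of the cancellation the answer describes (my own arithmetic, using the standard values $m_H \approx 125\ \mathrm{GeV}$ and $M_{\mathrm{Pl}} \approx 1.22\times 10^{19}\ \mathrm{GeV}$, which are not taken from the thread):

```python
# Back-of-the-envelope arithmetic for the cancellation described above.
# The inputs are standard textbook values (m_H ~ 125 GeV, M_Planck ~ 1.22e19
# GeV), not numbers taken from the question or the answer.
m_higgs = 125.0        # GeV
m_planck = 1.22e19     # GeV

ratio = (m_planck / m_higgs) ** 2
print(f"m_Pl^2 / m_H^2 ~ {ratio:.1e}")
# ~ 1e34: if the bare (mass)^2 and the Lambda^2 loop contributions are each of
# order M_Planck^2, they have to cancel to roughly 1 part in 10^34 to leave the
# observed squared Higgs mass behind -- the "10^30+" statement in the answer.
```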
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 7, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9468032717704773, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/tagged/standard-model?page=3&sort=active&pagesize=15
# Tagged Questions

A model of the basic particles and forces featuring six quarks, three charged leptons, three massless neutral leptons and four fundamental force carrying bosons. The twelve fermions are arranged into three generations, while the bosons serve to explain the electromagnetic interaction plus the strong ...

4 answers, 803 views
### Why is Neutron Heavier than Proton?
This is Neutron decay: $$n^o \to p^+ + e^- + \overline {\nu_e}.$$ and this is proton one: $$p^+ \to n^o + e^+ + \nu_e$$ so when the $e^+ =e^-$ and $\nu_e=\overline {\nu_e}$ why $n \not= p$? my ...

4 answers, 2k views
### Why do electron and proton have the same but opposite electric charge?
What is the explanation between equality of proton and electron charges (up to a sign)? This is connected to the gauge invariance and renormalization of charge is connected to the renormalization of ...

1 answer, 216 views
### Higgs potential
The potential for the Higgs field is standard a quartic one (Mexican hat). Is this done for simplicity or are there fundamental reasons for this choice? I can imagine further contributions to this ...

1 answer, 236 views
### Why are all observable gauge theories not vector-like?
Why are all observable gauge theories not vector-like? Will this imply that the electron and/or fermions do not have mass? How is this issue resolved? Background: The Standard Model is a ...

2 answers, 380 views
### Is the Higgs a quantum field or a particle?
The Higgs is not detected in the asymptotic data, so it is possible that there is no particle interpretation for the Higgs quantum field. Indeed, the Higgs potential is only positive definite if the ...

0 answers, 95 views
### What would the universe be like if Electroweak symmetry were unbroken? [duplicate]
Possible Duplicate: What happens to matter in a standard model with zero Higgs VEV? What if the Higgs did not have a "Mexican hat" potential and the therefore it's vacuum expectation value ...

2 answers, 263 views
### Some very basic questions on the Higgs Boson
What exactly is a boson? Is the Higgs boson the cause of gravity or a result of it? Does the collision of particles at the LHC create a gravity field or waves or somehow interact with the gravity ...

1 answer, 116 views
### Lepton masses in the Standard Model
Some simple questions regarding leptonic masses in the Standard Model (SM): Why there is not an explicit mass term in addition to the effective mass term that arises from the Yukawa terms after ...

0 answers, 109 views
### Relation among anomaly, unitarity bound and renormalizability
There is something I'm not sure about that has come up in a comment to other question: Why do we not have spin greater than 2? It's a good question--- the violation of renormalizability is linked ...

0 answers, 50 views
### Is there mathematical proof of the vectorial character of the strong and em forces?
In a old paper, http://arxiv.org/abs/hep-th/9509163 Becca Asquith argues that it is possible to prove that if the SU(2)xU(1) sector of the standard model is chiral, then the SU(3)xU(1) sector is ...

2 answers, 97 views
### Charges of quarks and leptons
Are there any theoretical restrictions within the framework of QFT that fix the relative sign between charged leptons and up-type quarks? We know that in our universe, they have opposite signs -- ...

1 answer, 283 views
### Quark Radius Upper Bound
If quarks had internal structure (contradicting current beliefs), what is the lowest upper bound on their "radius" based on current experimental results? If possible, I'd prefer to only consider ...

5 answers, 1k views
### How does Higgs field relate to Aether theories?
I am an amateur learning about the Higgs because I was interested in what the LHC's purpose is. I read that as a particle passes through space, it is actually passing through a Higgs field and there ...

1 answer, 456 views
### What is the difference between 'running' and 'current' quark mass?
When looking at the PDG, there is a difference between the 'running' and the 'current' quark masses. Does anyone know which is the difference between these two?

1 answer, 186 views
### What is meant by the rest energy of non-composite particle?
When talking about the rest energy of a composite particle such as a proton, part of the rest energy is accounted for by the internal kinetic energy of its constituent quarks. But what is physically ...

3 answers, 386 views
### Does every elementary particle have its own separate field?
Higgs field is pretty simple for me to understand, you have one field that creates one particle (Higgs boson). So I continue to assume one field one particle. Up field creates a up quark. Down field ...

0 answers, 267 views
### On the naturalness problem
I know that there are several questions about the naturalness (or hierarchy or fine-tunning) problem of scalars masses in physics.stackexcange.com, but I have not found answers to any of the following ...

0 answers, 105 views
### What is the rate of B violation expected in the standard model during high energy collisions?
In a recent question Can colliders detect B violation? I asked about detecting B violation in collisions. Here I am interested in the theory aspect. (I asked both questions originally in the same ...

2 answers, 228 views
### How does Annihilation work?
I'm wondering why matter and antimatter actually annihilates if they come into contact. What exactly happens? Is that a known process? Is it just because of their different charges? Then what about ...

2 answers, 407 views
### Is the Higgs 3/4 detected already?
Can someone provide an expanded explanation on the statement that the Higgs field is already 3/4 detected? Link to ref (@nic, sorry I left it off, do a quick search on Higgs to find the right ...

2 answers, 179 views
### What has been measured at the Higgs experiment and what do we know now?
Explained at the level of a 5$^{\text {th}}$ semester physics student (i.e. pre QFT, but far beyond the level of a news article for non-physicists, which avoids all details and only deals in ...

3 answers, 167 views
### Neutron decay and electron anti neutrino
$n\to p + e + \bar{\nu}_e$ Why do we need neutrino to explain neutron decay? Is there any evidence regarding existence neutrinos in the context of $n\to p + e + \bar{\nu}_e$?

1 answer, 152 views
### How to calculate Rest Mass practically with Standard Model?
With relativistic physics, we can apply force to see resistance against acceleration. It'd give us relativistic mass and we have well established formula to get to the Rest Mass as long as we know the ...

3 answers, 256 views
### Does Standard Model confirm that mass assigned by Higgs Mechanism creates gravitational field?
I am not comparing passive gravitational mass with rest inertial mass. Is there an evidence in Standard Model which says that active gravitational mass is essentially mass assigned by Higgs mechanism. ...

2 answers, 495 views
### Does Dark Matter interact with Higgs Field?
Dark matter does have gravitational mass as we know from its discovery. Does it have inertial mass?

1 answer, 88 views
### Would the Standard Model allow two energetic photons to form a particle-like, zero-spin resonance?
The title is the question: Would the Standard Model allow two energetic photons to form a particle-like, zero-spin resonance?

4 answers, 265 views
### AQFT and the Standard Model
The German physicist Rudolf Haag presented a new approach to QFT that centralizes the role of an algebra of observables in his book "Local Quantum Physics". The mathematical objects known as operator ...

0 answers, 50 views
### Which higgless models still predict a higgs-like resonance below the TeV scale? [closed]
Given today's announcement, I assume a bunch of wikipedia pages will need editing! The question is, which ones? Which higgless models still predict a resonance similar to the one observed by the ATLAS ...

1 answer, 170 views
### Why not accurate masses of elementary particles?
In the standard model of particle accuracy in calculating mass is very low. And you can not predict the upper limit of Higgs particle mass accurately. Why not accurate masses of elementary particles?

1 answer, 75 views
### Future of colliders and technical limitations
Are there any technical limitations (theoretical or technological) that prevent quark based colliders? ie. Colliding two quarks together.

1 answer, 177 views
### Lepton Number Conservation
What is the global symmetry of the electroweak Lagrangian that gives rise to lepton number conservation? As I understand it, electric charge is some linear combination of the conserved quantities ...

3 answers, 656 views
### Left and Right-handed fermions
Is there a simple intuitive way to understand the difference between left-handed and right-handed fermions (electrons say)? How to experimentally distinguish between them?

5 answers, 1k views
### Could gravity be an emergent property of nature?
Sorry if this question is naive. It is just a curiosity that I have. Are there theoretical or experimental reasons why gravity should not be an emergent property of nature? Assume a standard model ...

1 answer, 357 views
### Origin of the Higgs field
Are there any attempts in the literature at addressing the origin of the Higgs field? And, which lines of research that find it inevitable to address this question?

1 answer, 129 views
### Particle mixing and indistinguishability
Neutral kaons have two flavor combinations: $\mathrm{d}\bar{\mathrm{s}}$ and $\mathrm{s}\bar{\mathrm{d}}$. They can also be weak eigenstates: $\mathrm{\frac{d\bar{s} \pm s\bar{d}}{\sqrt{2}}}$. But ...

1 answer, 408 views
### Introduction to Physical Content from Adjoint Representations
In particle Physics it's usual to write the physical content of a Theory in adjoint representations of the Gauge group. For example: $24\rightarrow (8,1)_0\oplus (1,3)_0\oplus (1,1)_0\oplus \dots$ ...

0 answers, 274 views
### A dictionary of string - standard physics correspondences
Motivated by the (for me very useful) remark ''Standard model generations in string theory are the Euler number of the Calabi Yau, and it is actually reasonably doable to get 4,6,8, or 3 ...

2 answers, 362 views
### Quarks as preons for the whole standard model
This is a sequel to an earlier question about Alejandro Rivero's correspondence, the "super-bootstrap". The correspondence itself was introduced in his "Supersymmetry with composite bosons"; see the ...

3 answers, 401 views
### Hilbert space and Lie algebra in quantum mechanics
We are looking for a publication or website that explains the Standard Model in terms of Hilbert space and Lie algebra. We are reading Debnath's Introduction to Hilbert Spaces and Applications and ...

0 answers, 58 views
### How can one activate the decay of the quark b with PYTHIA event generator?
This is my problem and I hope finding a solution. In the simplest alternative, MSTJ(22) = 2, the comparison is based on the average lifetime, or rather (c*tau "time life"), measured in mm. Thus ...

2 answers, 1k views
### Why is the Higgs boson spin 0?
Why is the Higgs boson spin 0? Detailed equation-form answers would be great, but if possible, some explanation of the original logic behind this feature of the Higgs mechanism (e.g., "to provide ...

1 answer, 220 views
### Why are WW gg ττ branching ratios so similar for a 115 GeV SM Higgs?
In a previous question on Higgs branching ratios, I find this image (originally from page 15 here). I am VERY intrigued by the fact that decays to WW, gg, and ττ are almost equally probable, for ...

1 answer, 118 views
### Fine Tuned Universe
Is the fine tuning that cosmologists talk about (that our Universe is fine tuned for intelligent life) is the same as the fine tuning of the squared mass parameter of the Higgs in the Standard Model? ...

2 answers, 331 views
### More questions on string theory and the standard model
This is a followup question to How does string theory reduce to the standard model? Ron Maimon's answer there clarified to some extent what can be expected from string theory, but left details open ...

2 answers, 997 views
### How does string theory reduce to the standard model?
It is said that string theory is a unification of particle physics and gravitation. Is there a reasonably simple explanation for how the standard model arises as a limit of string theory? How does ...

2 answers, 321 views
### Neutrino oscillations versus CMK quark mixing
I wish to describe in simple but correct terms the analogy between the Cabibbo–Kobayashi–Maskawa (CMK) and Pontecorvo–Maki–Nakagawa–Sakata (PMNS) matrices. The CMK matrix describes the rotation ...

3 answers, 423 views
### Building the meson octet and singlet
I am very lost in this topic. I understand that there are $3\times 3$ possible combinations of a quark and an anti-quark, but why should one decide arbitrarily (that's how it appears to me) that one ...

3 answers, 568 views
### Why are there 4 Dimensions and 4 Fundamental Forces?
Is it a coincidence that there are four fundamental forces and four spacetime dimensions ? Does a universe with three spacetime dimension contain four fundamental forces? Can magnetism be realized in ...

1 answer, 185 views
### In SUSY why does electroweak symmetry breaking only happen in the SM sector?
This is a difficult question to phrase succinctly, so I hope the title makes sense. What I want to understand is what seems like a lack of symmetry (besides SUSY-breaking) between the SM sector and ...

3 answers, 147 views
### What barriers exist to prevent us from turning a baryon into a anti-baryon?
At present the only way we can produce anti-matter is through high powered collisions. New matter is created from the energy produced in these collisions and some of them are anti-matter particles ...
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 10, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9143463373184204, "perplexity_flag": "middle"}
http://cs.stackexchange.com/questions/7098/is-1-in-k-sat-np-complete-for-k-3
# Is 1-in-k SAT NP-complete for k > 3

It is well-known that 1-in-k SAT is NP-complete for k=3. What about for k > 3?

-

3 Indeed it is NP-Complete. – Geekster Dec 1 '12 at 3:08

## 1 Answer

It's still in NP by using a truth assignment as a proof. You can reduce 1-in-3 SAT to 1-in-$k$ SAT to show hardness as follows.

Let $\phi$ be an instance of 1-in-3 SAT with clauses $C_i$ for $1 \le i \le m$. For each $C_i$ we define $C_i'$ by adding $k - 3$ instances of the literal $y$ (where $y$ is a new variable that does not appear in any $C_i$). We also construct the new clause $C_{m+1}'$ which has $k-1$ occurrences of the literal $y$, and one occurrence of the literal $\neg y$. Let $\phi'$ be the instance of 1-in-$k$ SAT with clauses $C_i'$ for $1 \le i \le m+1$. I claim $\phi'$ is a YES-instance of 1-in-$k$ SAT if and only if $\phi$ is a YES-instance of 1-in-3 SAT.

First, assume $\phi'$ is a YES-instance, and choose a satisfying assignment. $C_{m+1}'$ is satisfied only when $y$ is false, so this assignment has $y$ false. Therefore, the padding literals $y$ are not satisfied, and so exactly one of the first 3 literals of $C_i'$ is satisfied for each $i$. These are exactly the literals of $C_i$, so exactly one of the literals of $C_i$ is satisfied for each $C_i$ and so $\phi$ is a YES-instance.

In the other direction, suppose $\phi$ is a YES-instance, and choose a satisfying assignment. Extend this assignment to give $y$ the value falsehood. This assignment shows that $\phi'$ is a YES-instance.

To illustrate the reduction, here is an example (going to 1-in-5 SAT):

$\phi = (x_1 \vee \neg x_1 \vee x_2) \wedge (x_3 \vee \neg x_2 \vee \neg x_2)$

$\phi' = (x_1 \vee \neg x_1 \vee x_2 \vee y \vee y) \wedge (x_3 \vee \neg x_2 \vee \neg x_2 \vee y \vee y) \wedge (y \vee y \vee y \vee y \vee \neg y)$

-
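Here is a small, self-contained sketch (mine, not the answerer's) of the padding reduction just described, together with a brute-force exactly-one-satisfiability check; repeated literals, as in $(x_3 \vee \neg x_2 \vee \neg x_2)$, are counted with multiplicity, which is one possible convention.

```python
# Sketch of the padding reduction described above, plus a brute-force checker.
# Literals are nonzero ints (+v = variable v, -v = its negation); repeated
# literals in a clause are counted with multiplicity.
from itertools import product

def reduce_1in3_to_1ink(clauses, k):
    assert k >= 3
    y = max(abs(l) for c in clauses for l in c) + 1        # fresh variable y
    padded = [tuple(c) + (y,) * (k - 3) for c in clauses]  # pad each C_i
    padded.append((y,) * (k - 1) + (-y,))                  # the clause C_{m+1}'
    return padded, y

def exactly_one_satisfied(clauses, assignment):
    def val(lit):
        return assignment[abs(lit)] if lit > 0 else not assignment[abs(lit)]
    return all(sum(val(l) for l in c) == 1 for c in clauses)

def has_1in_solution(clauses):
    variables = sorted({abs(l) for c in clauses for l in c})
    return any(
        exactly_one_satisfied(clauses, dict(zip(variables, bits)))
        for bits in product([False, True], repeat=len(variables))
    )

# The example from the answer: phi = (x1 v ~x1 v x2) ^ (x3 v ~x2 v ~x2)
phi = [(1, -1, 2), (3, -2, -2)]
phi_prime, y = reduce_1in3_to_1ink(phi, k=5)
print(phi_prime)
# The two brute-force answers agree (with multiplicity counting, both are NO).
print(has_1in_solution(phi), has_1in_solution(phi_prime))
```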
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 37, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9319182634353638, "perplexity_flag": "head"}
http://physics.stackexchange.com/questions/16353/why-is-time-order-invariant-in-timelike-interval/16355
# Why is time order invariant in a timelike interval?

Why do two observers measure the same order of events if we are inside the light cone? (E.g., if $ds^2 > 0$, time order is preserved according to the classical mechanics book I am reading, but it doesn't give any proof of this.) I assume there is some simple geometrical argument I am missing. And analogously, why do two observers measure possibly different orders of events if we are outside the light cone?

## 3 Answers

For a geometrical argument, you're looking for basically what Ron posted. But you can also argue this one mathematically: as you may know, the difference between two spacetime events is represented by a time difference $\Delta t$ and a spatial difference $\Delta x$. Under a Lorentz boost, these quantities transform like this:

$$\begin{align}c\Delta t' &= \gamma(c\Delta t - \beta\Delta x) \\ \Delta x' &= \gamma(\Delta x - \beta c\Delta t)\end{align}$$

Now, the spacetime interval is $\Delta s^2 = c^2\Delta t^2 - \Delta x^2$. For a timelike interval, $\Delta s^2 > 0$, this means $c\Delta t > \Delta x$, assuming that both differences are positive (and you can always arrange for that to be the case). Using the Lorentz boost equations, you can see that in this case, $c\Delta t'$ has to be positive. So for two events separated by a timelike interval, if one observer (in the unprimed reference frame) sees event 2 later than event 1, any other observer (in the primed reference frame) will also see event 2 later than event 1.

On the other hand, suppose you have a spacelike interval, $\Delta s^2 < 0$. In this case, $\Delta x > c\Delta t$, so it is possible to get $c\Delta t' < 0$ for a specific velocity (namely $\beta > \frac{c\Delta t}{\Delta x}$). So if one observer (in the unprimed reference frame) sees event 2 later than event 1, it's still possible for another observer (in the primed reference frame) to see them in the reverse order.

- in this sense time-order is associated with the plus or minus sign of t. interesting thanks, david... ron's answer is a little over my head... I need to look at that some more... – Bozostein Oct 30 '11 at 13:31
- My argument is no less mathematical because it doesn't use symbols – Ron Maimon Oct 30 '11 at 19:48

To get a feel for Lorentz 'rotations' in spacetime, you might want to have a look at this GIF: notice how the events outside the light cone move up and down in response to the accelerations of the reference frame, and as a result, these can end up at both sides of the 'now' of the observer at the origin. This is not the case for events within the light cone. It is these latter events that can have an influence on the observer at the origin.

- i don't understand this diagram at all... – Bozostein Oct 30 '11 at 19:50
- The animation shows the changing views of spacetime along the world line of a rapidly accelerating observer. In this 1+1 dimensional animation, the light cone takes the shape of two diagonal lines. The dashed curve is the spacetime trajectory ("world line") of the observer. Note that the view of spacetime changes when the observer accelerates (the slope of the world line at the apex of the light cone denotes his instantaneous velocity). The order of events within the light cone, and in particular those along the world line of the observer, do not change. – Johannes Oct 30 '11 at 22:22
- Thanks David, for your assistance in embedding the gif.
– Johannes Oct 30 '11 at 22:24

The circles in geometry are the curves with

$$x^2 + y^2 = C$$

In relativity, the analogs of circles are hyperbolas:

$$t^2 - x^2 - y^2 - z^2 = C$$

These curves, unlike circles, are disconnected hyperbolas. For any $x, y, z$ and positive $C$, there are two solutions for $t$, positive and negative, and they are never closer than $2\sqrt{C}$ in $t$. The two branches of the hyperbola go up in time, and down, and define the forward and backward branch of the hyperbola.

Much as a rotation takes points around a circle, a Lorentz transformation takes points along the hyperbola. Those Lorentz transformations which rotate the point continuously cannot move points from the upper hyperbola to the lower hyperbola. Any timelike interval is either in the forward or backward hyperbola, and is either strictly to the future, or to the past. Null intervals too, by continuity.

- ron i'm a little confused... I understand the concept of a circle and I believe that indeed that is the equation for a hyperbola... what I do not understand is what you mean by "rotate a point continuously cannot move points from the upper hyperbola to the lower hyperbola". To rotate a point on the circle you just spin it... how can you rotate a point on a hyperbola? – Bozostein Oct 30 '11 at 13:41
- Perhaps I do not understand what is meant by "rotating a point". That is the problem. About what axis? – Bozostein Oct 30 '11 at 13:42
- Elementary rotations in higher dimensions are not "about an axis". They are "in a plane". So you can ask "rotation in what plane?". The rotation in relativity is in the plane defined by the time axis and the velocity vector. The relativistic rotation tilts the time-path of a stationary observer, which is just parallel to the time axis, to be the tilted path for a moving observer. – Ron Maimon Oct 30 '11 at 19:52
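A quick numerical version of the boost argument in the first answer (my own sketch; the sample separations and the grid of boost velocities are made up, and units with $c = 1$ are used):

```python
# Numerical version of the boost argument above (c = 1): for |beta| < 1 the
# sign of dt' is fixed when dt > dx (timelike) but flips when dx > dt
# (spacelike).  The separations and the velocity grid are made up.
import numpy as np

def boosted_dt(dt, dx, beta):
    gamma = 1.0 / np.sqrt(1.0 - beta**2)
    return gamma * (dt - beta * dx)

betas = np.linspace(-0.99, 0.99, 199)

timelike = (2.0, 1.0)     # dt > dx, so ds^2 > 0
spacelike = (1.0, 2.0)    # dx > dt, so ds^2 < 0

print("timelike:  dt' always positive?",
      bool(np.all(boosted_dt(*timelike, betas) > 0)))
print("spacelike: dt' changes sign?",
      bool(np.any(boosted_dt(*spacelike, betas) < 0)
           and np.any(boosted_dt(*spacelike, betas) > 0)))
# For the spacelike pair the sign flips once beta > dt/dx = 0.5, as in the answer.
```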
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 10, "mathjax_display_tex": 3, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9174306988716125, "perplexity_flag": "middle"}
http://mathoverflow.net/revisions/83691/list
## Return to Answer

This is more of a longish comment explaining why I believe that the answer should be yes, and that a confirmation should be within reach using current technology.

Let $K$ be the field of $n$-th roots of unity, and $\zeta_p$ a primitive $p$-th root of unity for some prime factor $p \mid n$. The question is whether there is a nonsquare unit $\omega \in {\mathcal O}_K^\times$ such that $\omega \equiv 1 - \zeta \bmod 4$. I can see no reason why such units should not exist, even in the special case $n = p$. But one may have to look for a while before stumbling over an example. The related problem of finding a nonsquare unit $\omega \equiv 1 \bmod 4$, for example, is not solvable for $n = p < 29$ since the corresponding cyclotomic fields have odd class number; ${\mathbb Q}(\zeta_{29})$, on the other hand, has a class group of type $(2,2,2)$ and a good chance of containing such a unit. This can probably be verified by looking only at cyclotomic units, which are known explicitly.

In your case, you should look at products of units of the form $$\omega = (1+\xi)^{a_1}(1+\xi+\xi^2)^{a_2}(1+\xi+\xi^2+\xi^3)^{a_3} \cdots$$ with not all $a_j$ even, and check whether one of these lies in the residue class $1 - \xi^j \bmod 4$. Using sage or pari, this should actually be doable. Perhaps some linear algebra and the Chinese remainder theorem can be used to speed up the calculations.

Edit. I was so convinced that there would be a solution of the problem for a small $n$ that I did not do what I should have done: the problem in question is equivalent to the congruence $\omega \equiv \alpha^2 (1 - \zeta) \bmod 4$ for some cyclotomic integer $\alpha$ coprime to $2$. I guess Dror's code can easily be adapted to the more general congruence.
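The computation suggested in the answer ("Using sage or pari, this should actually be doable") can also be sketched in plain Python. The following is only a crude first pass of my own: it restricts the exponents $a_j$ to $\{0,1\}$, tests only the congruence $\omega \equiv 1-\zeta^j \bmod 4$ (not whether $\omega$ is a nonsquare, and not the more general congruence from the Edit), and it is not the code referred to as Dror's.

```python
# Crude brute-force version of the suggested search, for a single prime p.
# Elements of Z[zeta_p] are coefficient vectors over 1, zeta, ..., zeta^{p-2};
# products are reduced using zeta^p = 1 and zeta^{p-1} = -(1 + ... + zeta^{p-2}),
# and coefficients are kept mod 4.  Exponents a_j are restricted to {0, 1}.
from itertools import product as cartesian

def mulmod(a, b, p):
    prod = [0] * (2 * p)
    for i, ai in enumerate(a):
        if ai:
            for j, bj in enumerate(b):
                prod[i + j] += ai * bj
    folded = [0] * p
    for e, c in enumerate(prod):           # fold exponents with zeta^p = 1
        folded[e % p] += c
    top = folded[p - 1]                    # eliminate zeta^{p-1}
    return [(folded[i] - top) % 4 for i in range(p - 1)]

def search(p):
    d = p - 1
    one = [1] + [0] * (d - 1)
    # cyclotomic units 1 + z, 1 + z + z^2, ..., 1 + ... + z^{p-2}
    units = [[1 if i <= j else 0 for i in range(d)] for j in range(1, p - 1)]
    targets = {}
    for j in range(1, p):                  # residues 1 - zeta^j mod 4
        if j <= d - 1:
            t = list(one)
            t[j] = 3                       # -1 mod 4
        else:                              # j = p - 1
            t = [(c + 1) % 4 for c in one]
        targets[j] = t
    for exps in cartesian([0, 1], repeat=len(units)):
        if not any(exps):                  # need "not all a_j even"
            continue
        w = one
        for u, e in zip(units, exps):
            if e:
                w = mulmod(w, u, p)
        for j, t in targets.items():
            if w == t:
                return exps, j
    return None

print(search(7))   # small smoke test; a hit here is not guaranteed
```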
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 32, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9451099634170532, "perplexity_flag": "head"}
http://mathoverflow.net/questions/58732?sort=oldest
## Torus based cryptography

In cryptography one needs finite groups $G$ in which the discrete logarithm problem is infeasible. Often they use the multiplicative group $\mathbb{G}_m(\mathbb{F}_p)$ where $p$ is a prime number of bit length $500$, say. Rubin and Silverberg suggested (cf. [1]) to use certain tori instead, if the goal is to minimize the key size. In the easiest case, this comes down to using the group $$T_2(p)=\ker(\mathrm{Norm}\colon \mathbb{F}_{p^2}^\times\to \mathbb{F}_p^\times).$$

If I understood correctly, then the underlying philosophy seems to be: *The group $T_2(p)$ should be as secure as $\mathbb{F}_{p^2}^\times$, but its size is only $p+1$.* (So, if you use groups of type $T_2(p)$ instead of groups of type $\mathbb{G}_m(\mathbb{F}_p)$, then you can achieve the same security with half the key size.)

Question. What are the reasons, be they heuristic or strictly provable, to believe in this philosophy?

Denote by $\mathbb{G}'_m$ the quadratic twist of the algebraic group $\mathbb{G}_m$. It is easy to see that $T_2(p)$ is isomorphic to $\mathbb{G}'_m(\mathbb{F}_p)$. (This isomorphism is easy to compute.) The philosophy predicts: The quadratic twist of the multiplicative group should be better than the multiplicative group itself. (Compare with elliptic curves: If $E/\mathbb{F}_p$ is an elliptic curve, then I would certainly not expect its quadratic twist to be better than $E$ itself.)

Remark: I concentrated on the simplest case above. One also considers certain groups $T_n(p)$ which are expected to be as secure as $\mathbb{F}_{p^n}^\times$, while their size is only $\approx p^{\varphi(n)}$. Lemma 7 in [1] is meant to explain this. However, I would be keen on a more detailed explanation.

[1] Lect. Notes in Comp. Sci. 2729 (2003) 349-365. (available at http://math.stanford.edu/~rubin/)

- Interpreted strictly, the philosophy does not make sense since nothing prevents you from taking a subgroup of order 2 or 3. – Franz Lemmermeyer Mar 17 2011 at 11:05
- Did Rubin and Silverberg make their suggestion in writing? Maybe if we had something to look at, we could tease out the reasons. – Gerry Myerson Mar 17 2011 at 11:42
- @Gerry: I added a reference. – Sebastian Petersen Mar 17 2011 at 12:34
- At Franz: One argument in favour of the security of $T_2(p)$ seems to be that it does not lie in a proper subfield of $\mathbb{F}_{p^2}$. But still, I do not see why one should expect that it is in fact as secure as $\mathbb{F}_{p^2}$. – Sebastian Petersen Mar 17 2011 at 12:36

## 3 Answers

The discrete log problem in the multiplicative group of a finite field may be solved using the index calculus, not the number field sieve (although sieves are used to speed the process of checking numbers for smoothness during the index calculus algorithm). Anyway, the idea is as follows. Using index calculus on a field $\mathbb{F}_q^*$ has running time $L(q,1/3)$, which is a subexponential function of $q$ that is approximately $\exp((\log q)^{1/3})$. Now suppose that we work in a subgroup $G\subset \mathbb{F}_q^*$ of order $N$. Then operations in $G$ may be much faster to compute, so we get a more efficient system. And to solve the discrete log problem in $G$, we have two options. We can use a collision algorithm such as Pollard's $\rho$ method, which has running time proportional to $\sqrt{N}$, or we can use the index calculus.
But the index calculus doesn't work directly on $G$, so even though we're using elements of $G$, no one knows how to do the index calculus faster than $L(q,1/3)$. So balancing $N$ and $q$ appropriately, one can get a secure cryptosystem that is more efficient than if one worked with arbitrary elements of $\mathbb{F}_q^*$. Note that I'm not claiming that one can't do the DLP in $G$ faster than the minimum of $O(\sqrt{N})$ and $L(q,1/3)$, I'm simply saying that at present, no one knows how to do it. (But that's true of the security of all the problems being used in cryptography, we have no proofs that they are actually difficult.)

- Thanks a lot for the precise answer! Best regards, Basti – Sebastian Petersen Mar 19 2011 at 7:42
- This is not entirely correct, since native DLP algorithms for algebraic tori are now known; see my answer below. – Granger Mar 20 2012 at 15:36

The philosophy, as stated, seems off: The multiplicative groups of finite fields have discrete logarithm problems that are vulnerable to the number field sieve. On the other hand, the best known method in an 'abstract' cyclic group is a variant of the Shanks baby-step-giant-step method -- this is much slower than the number field sieve, since its worst-case (and average) running time in a cyclic group of order $n$ is about the size of $\sqrt{n}$ (and this worst case includes the assumption that $n$ is prime). What applies to the multiplicative groups of finite fields carries over to their large quotients as long as computing lifts is easy, and carries over to their large subgroups. (Here, "large" means 'large enough that the number field sieve is faster than any variant of baby-step-giant-step'.)

How can baby-step-giant-step be sped up when $n$ is not prime? Let $n = p_{1}^{e_{1}}\ldots p_{k}^{e_{k}}$ be the prime factorization of $n$. Then the idea is to break up the discrete logarithm problem in the cyclic group of order $n$ into $e_{1}$ problems in cyclic groups of order $p_{1}$, $\ldots$, and $e_{k}$ problems in cyclic groups of order $p_{k}$.

In more detail: $g^{x} = m$ implies $(g^{L})^{x} = g^{Lx} = m^{L}$ for any integer $L$. Choose $L_{i}$, where $1 \leq i \leq k$, so that $L_{i} \equiv 1 \mod{p_{i}^{e_{i}}}$ and $L_{i} \equiv 0 \mod{p_{j}^{e_{j}}}$ when $j \neq i$. Then $g$ such that $g^{x} = m$ is obtained as $g^{L_{1} + \ldots + L_{k}} = g^{L_{1}}\ldots g^{L_{k}}$. Each $g^{L_{i}}$ solves $( g^{ L_{i} } )^{x} = g^{L_{i} x} = m^{L_{i}}$, which is a discrete logarithm problem in the cyclic subgroup of order $p_{i}^{e_{i}}$ in our cyclic group of order $n$.

As for solving the discrete logarithm problem in a cyclic group of order $p_{i}^{e_{i}}$ (here we write $g^{L_{i}} = h$ and $m^{L_{i}} = M$): If $e_{i} > 1$, then $h^{x} = M$ implies $(h^{L})^{x} = h^{Lx} = M^{L}$, where this time $L = p_{i}^{e_{i}-1}$. This is a discrete logarithm problem in a cyclic group of order $p_{i}$, and its solution (call it $x^{'}$) is the solution to $h^{x} = M$, reduced modulo $p_{i}$. Then $h^{x} = M$ implies $h^{x-x^{'}} = M h^{-x^{'}}$, which implies $(h^{p_{i}})^{\frac{x-x^{'}}{p_{i}}} = M h^{-x^{'}}$. This is a discrete logarithm problem in a cyclic group of order $p_{i}^{e_{i}-1}$.
Finally, the variant of the basic baby-step-giant-step method when $e_{i} = 1$ (still write this discrete logarithm problem as $h^{x} = M$ for simplicity): Compute $h^{1}, h^{2}, \ldots, h^{S}$ and store that as a sorted list (so that searches can be made using a logarithmic time binary search). Then compute $M, Mh^{-S}, Mh^{-2S}, \ldots$ until a $t$ is found such that $Mh^{-tS}$ is on the list. Then one obtains $Mh^{-tS} = h^{u}$ for some $u$ with $1 \leq u \leq S$, which gives $M = h^{u + tS}$, so that $x = u + tS$. Here, for maximum speed, choose $S$ to be the result of rounding $\sqrt{p_{i}}$ to the nearest integer.

-

Can I refer you to my paper `On the Discrete Logarithm Problem on Algebraic Tori', Advances in Cryptology – CRYPTO 2005, Lecture Notes in Computer Science, 2005, Volume 3621/2005, 66-85, in which Frederik Vercauteren and I studied this very problem. In particular, we showed that the compression mechanism afforded by the birationality of some algebraic tori may be exploited to obtain a faster discrete logarithm algorithm for some cryptographically practical field sizes. In these instances, attacking the discrete logarithm in $\mathbb{F}_{p^n}^{\times}$ via its decomposition $\prod_{d \mid n} T_d(\mathbb{F}_p)$ is faster than using L[1/3] index calculus techniques. Since then, other work has improved the L[1/3] index calculus techniques. However, our work demonstrates that it is naive to argue that the DLP in algebraic tori must be hard purely because the DLP in the multiplicative group of the extension field is hard, precisely because an attack on the former provides an attack on the latter.

- Thanks for the references correcting my incomplete answer. – Joe Silverman Mar 20 2012 at 15:48
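For concreteness, here is a short sketch (mine, not from any of the answers) of the baby-step-giant-step routine described in the second answer, using a Python dictionary in place of the sorted list with binary search; the modulus and exponent below are illustrative only.

```python
# Sketch of the baby-step-giant-step routine from the second answer, for an
# element h of prime order p in (Z/qZ)^*; a dict stands in for the sorted
# list + binary search.  The modulus and exponent below are illustrative only.
from math import isqrt

def bsgs(h, M, p, q):
    """Return x in [0, p) with h^x = M (mod q), where h has order p."""
    S = isqrt(p) + 1
    baby = {pow(h, u, q): u for u in range(S)}     # h^0, ..., h^{S-1}
    giant = pow(h, (p - S) % p, q)                 # h^{-S}, since h^p = 1
    cur = M % q
    for t in range(S + 1):
        if cur in baby:
            return (baby[cur] + t * S) % p
        cur = (cur * giant) % q
    return None

q = 1019                      # prime, q - 1 = 2 * 509
g = pow(2, 2, q)              # lies in the subgroup of order 509
x = 321
assert bsgs(g, pow(g, x, q), 509, q) == x
print("recovered exponent:", bsgs(g, pow(g, x, q), 509, q))
```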
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 96, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9365859031677246, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/190709/graph-theory-adjacency-vs-incident/190712
# Graph theory: adjacency vs incident Okay, so I think if 2 vertices are adjacent to each other, they are incident to each other....or do I have it wrong? Is this just different terminology. I thought I was totally clear on this for my class, but now I am doubting myself reading the book and looking at my notes. I just want to know if I have it correct, and if I don't could someone explain to me what the difference is between the two. I found several wiki's and different university definitions, but none ever said that the two are alike and I'm confused and would like some reassurance. Thanks in advance. - ## 3 Answers Usually one speaks of adjacent vertices, but of incident edges. Two vertices are called adjacent if they are connected by an edge. Two edges are called incident, if they share a vertex. Also, a vertex and an edge are called incident, if the vertex is one of the two vertices the edge connects. - 3 I would go so far as to say that vertex-edge incidence is the more common usage. – Erick Wong Sep 4 '12 at 0:13 Okay, thank you so much. I am now reviewing what I have and I had thought they were both referring to the vertices for both cases. This makes more sense now. – pqsk Sep 4 '12 at 0:19 1 @ErickWong: that seems right, considering objects like the incidence matrix. Thank you for the insight, I will modify my sentence. – Gregor Bruns Sep 4 '12 at 0:27 If for two vertices $A$ and $B$ there is an edge $e$ joining them, we say that $A$ and $B$ are adjacent. If two edges $e$ and $f$ have a common vertex $A$, the edges are called incident. If the vertex $A$ is on edge $e$, the vertex $A$ is often said to be incident on $e$. There is unfortunately some variation in usage. So you need to check the particular book or notes for the definition being used. - Thank you for your answer. It makes it more clear. I'm going to mark Bruns' as the answer, since I feel that it was more clear to me, but thank you so much for your input. Very useful as well. – pqsk Sep 4 '12 at 0:20 Excerpted from wikipedia: • Two edges of a graph are called adjacent (sometimes coincident) if they share a common vertex. • Similarly, two vertices are called adjacent if they share a common edge. • An edge and a vertex on that edge are called incident. This terminology seems very sensible to my ear. - 1 It does to me too now. Lol....my eyes were seeing vertex for everything. I did not realize that for an incident it was referring to the edges. The problem when you are working non-stop day in and day out and then going to school on your off time. :-S – pqsk Sep 4 '12 at 0:24
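A tiny code illustration of the convention the answers converge on (the graph below is made up):

```python
# Tiny illustration of the terminology settled on above; the graph is made up.
edges = [("A", "B"), ("B", "C"), ("C", "D")]

def adjacent(u, v):                 # two vertices sharing an edge
    return any({u, v} == {a, b} for a, b in edges)

def incident(v, e):                 # a vertex lying on an edge
    return v in e

def edges_incident(e, f):           # two edges sharing a vertex
    return e != f and bool(set(e) & set(f))

print(adjacent("A", "B"))                        # True
print(incident("B", ("A", "B")))                 # True
print(edges_incident(("A", "B"), ("B", "C")))    # True (they share B)
print(adjacent("A", "C"))                        # False
```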
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 12, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9795286655426025, "perplexity_flag": "head"}
http://mathhelpforum.com/discrete-math/159193-writing-up-recurrence-relation-problem.html
# Thread:

1. ## Writing up a Recurrence Relation Problem...

I know how to solve Recurrence Relation problems, I just need help writing up the equation:

Assume that the deer population of Rustic County is 200 at time n = 0 and 220 at time n = 1 and that the increase from time n-1 to time n is twice the increase from time n-2 to time n-1. Write a recurrence relation and an initial condition that define the deer population at time n and then solve the recurrence relation.

2. Let the population be $x_n$ at time $n$, $x_{n-1}$ at time $n-1$ and $x_{n-2}$ at time $n-2$. Using this notation, you know how to write the increase in population from time $n-1$ to $n$, don't you?

3. Hello, aamiri!

Assume that the deer population of Rustic County is 200 at time 0, and 220 at time 1. And that the increase from time $\,n-1$ to time $\,n$ is twice the increase from time $\,n-2$ to time $\,n-1.$

(a) Write a recurrence relation and an initial condition that define the deer population at time $\,n.$
(b) Solve the recurrence relation.

Let $P(t)$ = deer population at time $\,t.$

We are given: $P(0) = 200,\;P(1) = 220$

We are told that: $P(n) \;=\;P(n-1) + 2\!\cdot\!\text{(previous difference)}$

$P(n) \;=\;P(n-1) + 2\!\cdot\![P(n-1) - P(n-2)]$

$P(n) \;=\;P(n-1) + 2\!\cdot\!P(n-1) - 2\!\cdot\!P(n-2)$

$P(n) \;=\;3\!\cdot\!P(n-1) - 2\!\cdot\!P(n-2)$ (a)

Hence: $P(n) - 3\cdot P(n-1) + 2\cdot P(n-2) \;=\;0$

Let $X^n = P(n)\!:\;\;X^n - 3X^{n-1} + 2X^{n-2} \;=\;0$

Divide by $X^{n-2}\!:\;\;X^2 - 3X + 2 \:=\:0 \quad\Rightarrow\quad (X-1)(X-2) \:=\:0$

Hence: $X \:=\:1,\,2$

The function is of the form: $P(n) \;=\;A\!\cdot\!1^n + B\!\cdot\!2^n$

We know the first two terms of the sequence:

$\begin{array}{ccccc}P(0) = 200\!: & A + B &=& 200 & [1] \\ P(1) = 220\!: & A + 2B &=& 220 & [2] \end{array}$

Subtract [2] - [1]: $B \,=\,20$

Substitute into [1]: $A + 20 \:=\:200 \quad\Rightarrow\quad A \:=\:180$

Therefore: $P(n) \;=\;180 + 20\!\cdot\!2^n$ (b)
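A quick check (not part of the original thread) that the closed form found in post 3 really does satisfy the recurrence and the initial conditions:

```python
# Quick check (not part of the thread) that the closed form found in post 3
# satisfies the recurrence P(n) = 3 P(n-1) - 2 P(n-2) with P(0)=200, P(1)=220.
def P(n):
    return 180 + 20 * 2**n

assert P(0) == 200 and P(1) == 220
for n in range(2, 20):
    assert P(n) == 3 * P(n - 1) - 2 * P(n - 2)
print([P(n) for n in range(6)])   # [200, 220, 260, 340, 500, 820]
```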
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 29, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8738763332366943, "perplexity_flag": "middle"}
http://mathhelpforum.com/differential-equations/55789-question-what-kind-diff-eq.html
# Thread:

1. ## Question: what kind of diff. eq is that?

What kind of diff. eq. is that? I don't have any idea where to begin.

2. Originally Posted by JrShohin
What kind of diff. eq. is that? I don't have any idea where to begin.

The solution to the equation $y'' + y = 0$ is given by $y=c_1\sin x + c_2\cos x$.

Here you have the equation $y''+y = x$, so you need to find a particular solution. By inspection it is easy to see that $y=x$ works. Thus, the full solution is $y=c_1\sin x + c_2\cos x + x$.

3. This is a second-order ordinary differential equation, and you solve it by getting the characteristic equation, which in this case is $m^2+1=0$, so $m = \pm i$. Then you use the standard solutions for complex roots and you will get $y=c_1\sin x + c_2\cos x$, but this is the general solution of the homogeneous equation, so you need to apply the variation of parameters method or the method of undetermined coefficients to get a particular solution, or find it by inspection, as the mod said.
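A short verification sketch (my own) of the solution discussed above, using sympy; the final line also checks the particular solution $y = x$ directly.

```python
# Verification sketch using sympy: dsolve should reproduce the solution above,
# and the last line checks the particular solution y = x directly.
import sympy as sp

x = sp.symbols("x")
y = sp.Function("y")

ode = sp.Eq(y(x).diff(x, 2) + y(x), x)
print(sp.dsolve(ode, y(x)))       # y(x) = C1*sin(x) + C2*cos(x) + x (up to labels)
print(sp.simplify(x.diff(x, 2) + x - x))   # 0, so y = x solves y'' + y = x
```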
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 8, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9611421823501587, "perplexity_flag": "middle"}
http://terrytao.wordpress.com/tag/weak-turbulence/
## Weakly turbulent solutions for the cubic defocusing nonlinear Schrödinger equation

14 August, 2008 in math.AP, math.DS, math.NT, paper | Tags: Arnold diffusion, frequency cascade, Gigliola Staffilani, Hideo Takaoka, Jim Colliander, Larry Bird, Mark Keel, McDonalds, Michael Jordan, NLS, Pythagorean triples, Super Bowl commercial, weak turbulence | by Terence Tao | 13 comments

Jim Colliander, Mark Keel, Gigliola Staffilani, Hideo Takaoka, and I have just uploaded to the arXiv the paper "Weakly turbulent solutions for the cubic defocusing nonlinear Schrödinger equation", which we have submitted to Inventiones Mathematicae. This paper concerns the numerically observed phenomenon of weak turbulence for the periodic defocusing cubic non-linear Schrödinger equation

$-i u_t + \Delta u = |u|^2 u$ (1)

in two spatial dimensions, thus u is a function from ${\Bbb R} \times {\Bbb T}^2$ to ${\Bbb C}$. This equation has three important conserved quantities: the mass

$M(u) = M(u(t)) := \int_{{\Bbb T}^2} |u(t,x)|^2\ dx$

the momentum

$\vec p(u) = \vec p(u(t)) = \int_{{\Bbb T}^2} \hbox{Im}( \nabla u(t,x) \overline{u(t,x)} )\ dx$

and the energy

$E(u) = E(u(t)) := \int_{{\Bbb T}^2} \frac{1}{2} |\nabla u(t,x)|^2 + \frac{1}{4} |u(t,x)|^4\ dx$.

(These conservation laws, incidentally, are related to the basic symmetries of phase rotation, spatial translation, and time translation, via Noether's theorem.)

Using these conservation laws and some standard PDE technology (specifically, some Strichartz estimates for the periodic Schrödinger equation), one can establish global wellposedness for the initial value problem for this equation in (say) the smooth category; thus for every smooth $u_0: {\Bbb T}^2 \to {\Bbb C}$ there is a unique global smooth solution $u: {\Bbb R} \times {\Bbb T}^2 \to {\Bbb C}$ to (1) with initial data $u(0,x) = u_0(x)$, whose mass, momentum, and energy remain constant for all time.

However, the mass, momentum, and energy only control three of the infinitely many degrees of freedom available to a function on the torus, and so the above result does not fully describe the dynamics of solutions over time. In particular, the three conserved quantities inhibit, but do not fully prevent the possibility of a low-to-high frequency cascade, in which the mass, momentum, and energy of the solution remain conserved, but shift to increasingly higher frequencies (or equivalently, to finer spatial scales) as time goes to infinity. This phenomenon has been observed numerically, and is sometimes referred to as weak turbulence (in contrast to strong turbulence, which is similar but happens within a finite time span rather than asymptotically).

To illustrate how this can happen, let us normalise the torus as ${\Bbb T}^2 = ({\Bbb R}/2\pi {\Bbb Z})^2$. A simple example of a frequency cascade would be a scenario in which solution $u(t,x) = u(t,x_1,x_2)$ starts off at a low frequency at time zero, e.g. $u(0,x) = A e^{i x_1}$ for some constant amplitude A, and ends up at a high frequency at a later time T, e.g. $u(T,x) = A e^{i N x_1}$ for some large frequency N. This scenario is consistent with conservation of mass, but not conservation of energy or momentum and thus does not actually occur for solutions to (1).
A more complicated example would be a solution supported on two low frequencies at time zero, e.g. $u(0,x) = A e^{ix_1} + A e^{-ix_1}$, and ends up at two high frequencies later, e.g. $u(T,x) = A e^{iNx_1} + A e^{-iNx_1}$. This scenario is consistent with conservation of mass and momentum, but not energy.

Finally, consider the scenario which starts off at $u(0,x) = A e^{i Nx_1} + A e^{iNx_2}$ and ends up at $u(T,x) = A + A e^{i(N x_1 + N x_2)}$. This scenario is consistent with all three conservation laws, and exhibits a mild example of a low-to-high frequency cascade, in which the solution starts off at frequency N and ends up with half of its mass at the slightly higher frequency $\sqrt{2} N$, with the other half of its mass at the zero frequency. More generally, given four frequencies $n_1, n_2, n_3, n_4 \in {\Bbb Z}^2$ which form the four vertices of a rectangle in order, one can concoct a similar scenario, compatible with all conservation laws, in which the solution starts off at frequencies $n_1, n_3$ and propagates to frequencies $n_2, n_4$.

One way to measure a frequency cascade quantitatively is to use the Sobolev norms $H^s({\Bbb T}^2)$ for $s > 1$; roughly speaking, a low-to-high frequency cascade occurs precisely when these Sobolev norms get large. (Note that mass and energy conservation ensure that the $H^s({\Bbb T}^2)$ norms stay bounded for $0 \leq s \leq 1$.) For instance, in the cascade from $u(0,x) = A e^{i Nx_1} + A e^{iNx_2}$ to $u(T,x) = A + A e^{i(N x_1 + N x_2)}$, the $H^s({\Bbb T}^2)$ norm is roughly $2^{1/2} A N^s$ at time zero and $2^{s/2} A N^s$ at time T, leading to a slight increase in that norm for $s > 1$. Numerical evidence then suggests the following

Conjecture. (Weak turbulence) There exist smooth solutions $u(t,x)$ to (1) such that $\|u(t)\|_{H^s({\Bbb T}^2)}$ goes to infinity as $t \to \infty$ for any $s > 1$.

We were not able to establish this conjecture, but we have the following partial result ("weak weak turbulence", if you will):

Theorem. Given any $\varepsilon > 0, K > 0, s > 1$, there exists a smooth solution $u(t,x)$ to (1) such that $\|u(0)\|_{H^s({\Bbb T}^2)} \leq \epsilon$ and $\|u(T)\|_{H^s({\Bbb T}^2)} > K$ for some time T.

This is in marked contrast to (1) in one spatial dimension ${\Bbb T}$, which is completely integrable and has an infinite number of conservation laws beyond the mass, energy, and momentum which serve to keep all $H^s({\Bbb T})$ norms bounded in time. It is also in contrast to the linear Schrödinger equation, in which all Sobolev norms are preserved, and to the non-periodic analogue of (1), which is conjectured to disperse to a linear solution (i.e. to scatter) from any finite mass data (see this earlier post for the current status of that conjecture). Thus our theorem can be viewed as evidence that the 2D periodic cubic NLS does not behave at all like a completely integrable system or a linear solution, even for small data. (An earlier result of Kuksin gives (in our notation) the weaker result that the ratio $\|u(T)\|_{H^s({\Bbb T}^2)} / \|u(0)\|_{H^s({\Bbb T}^2)}$ can be made arbitrarily large when $s > 1$, thus showing that large initial data can exhibit movement to higher frequencies; the point of our paper is that we can achieve the same for arbitrarily small data.)

Intuitively, the problem is that the torus is compact and so there is no place for the solution to disperse its mass; instead, it must continually interact nonlinearly with itself, which is what eventually causes the weak turbulence.
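As a sanity check on the last cascade example (my own computation, not from the paper), one can verify numerically on a grid that $u(0,x) = A e^{iNx_1} + A e^{iNx_2}$ and $u(T,x) = A + A e^{i(Nx_1+Nx_2)}$ carry the same mass, momentum, and energy; the amplitude $A$, frequency $N$, and grid size below are chosen arbitrarily.

```python
# Grid check (mine, not from the paper) that the last cascade example is
# consistent with all three conservation laws; A, N, and the grid size are
# arbitrary choices.
import numpy as np

A, N, n = 1.0, 3, 128
xs = np.linspace(0, 2 * np.pi, n, endpoint=False)
X1, X2 = np.meshgrid(xs, xs, indexing="ij")
dA = (2 * np.pi / n) ** 2

u0 = A * np.exp(1j * N * X1) + A * np.exp(1j * N * X2)        # u(0, x)
uT = A + A * np.exp(1j * (N * X1 + N * X2))                   # u(T, x)

k = np.fft.fftfreq(n, d=1.0 / n)                              # integer modes
K1, K2 = np.meshgrid(k, k, indexing="ij")

def grad(u):
    uh = np.fft.fft2(u)
    return np.fft.ifft2(1j * K1 * uh), np.fft.ifft2(1j * K2 * uh)

def mass(u):
    return np.sum(np.abs(u) ** 2) * dA

def momentum(u):
    ux, uy = grad(u)
    return np.array([np.sum(np.imag(ux * np.conj(u))),
                     np.sum(np.imag(uy * np.conj(u)))]) * dA

def energy(u):
    ux, uy = grad(u)
    return np.sum(0.5 * (np.abs(ux) ** 2 + np.abs(uy) ** 2)
                  + 0.25 * np.abs(u) ** 4) * dA

for name, f in [("mass", mass), ("momentum", momentum), ("energy", energy)]:
    print(name, np.round(f(u0) - f(uT), 8))     # all (numerically) zero
```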
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 43, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8978872895240784, "perplexity_flag": "middle"}
http://en.wikipedia.org/wiki/Integers
# Integer

Symbol often used to denote the set of integers (see List of mathematical symbols).

An integer is a number that can be written without a fractional or decimal component. For example, 21, 4, and −2048 are integers; 9.75, 5½, and √2 are not integers. The set of integers is a subset of the real numbers, and consists of the natural numbers (0, 1, 2, 3, ...) and the negatives of the non-zero natural numbers (−1, −2, −3, ...). The name derives from the Latin integer (meaning literally "untouched," hence "whole": the word entire comes from the same origin, but via French[1]). The set of all integers is often denoted by a boldface Z (or blackboard bold $\mathbb{Z}$, Unicode U+2124 ℤ), which stands for Zahlen (German for "numbers").[2]

The integers (with addition as operation) form the smallest group containing the additive monoid of the natural numbers. Like the natural numbers, the integers form a countably infinite set. In algebraic number theory, these commonly understood integers, embedded in the field of rational numbers, are referred to as rational integers to distinguish them from the more broadly defined algebraic integers. Integers can be thought of as discrete, equally spaced points on an infinitely long number line (nonnegative integers in purple and negative integers in red in the figure).

## Algebraic properties

Like the natural numbers, Z is closed under the operations of addition and multiplication, that is, the sum and product of any two integers is an integer. However, with the inclusion of the negative natural numbers, and, importantly, 0, Z (unlike the natural numbers) is also closed under subtraction. Z is not closed under division, since the quotient of two integers (e.g., 1 divided by 2) need not be an integer. Although the natural numbers are closed under exponentiation, the integers are not (since the result can be a fraction when the exponent is negative).

The following table lists some of the basic properties of addition and multiplication for any integers a, b and c.[citation needed]

| Property | Addition | Multiplication |
| --- | --- | --- |
| Closure | a + b is an integer | a × b is an integer |
| Associativity | a + (b + c) = (a + b) + c | a × (b × c) = (a × b) × c |
| Commutativity | a + b = b + a | a × b = b × a |
| Identity element | a + 0 = a | a × 1 = a |
| Inverse element | a + (−a) = 0 | An inverse element usually does not exist at all. |
| Distributivity | a × (b + c) = (a × b) + (a × c) and (a + b) × c = (a × c) + (b × c) | |
| No zero divisors (*) | | If a × b = 0, then a = 0 or b = 0 (or both) |

In the language of abstract algebra, the first five properties listed above for addition say that Z under addition is an abelian group. As a group under addition, Z is a cyclic group, since every nonzero integer can be written as a finite sum 1 + 1 + ... + 1 or (−1) + (−1) + ... + (−1). In fact, Z under addition is the only infinite cyclic group, in the sense that any infinite cyclic group is isomorphic to Z.[citation needed] The first four properties listed above for multiplication say that Z under multiplication is a commutative monoid. However, not every integer has a multiplicative inverse; e.g. there is no integer x such that 2x = 1, because the left hand side is even, while the right hand side is odd.
This means that Z under multiplication is not a group.[citation needed]

All the rules from the above property table, except for the last, taken together say that Z together with addition and multiplication is a commutative ring with unity. It is the prototype of all such algebraic structures. The only equalities of expressions that are true in Z for all values of the variables are those that are true in any unital commutative ring. Finally, the property (*) says that the commutative ring Z is an integral domain. In fact, Z provides the motivation for defining such a structure.[citation needed]

The ring Z is the initial ring with unity, which means that it maps homomorphically to any such ring. Every integer has an image in any unital ring, with all arithmetic equalities between integers preserved, although certain non-zero integers may map to zero in certain rings. The lack of multiplicative inverses, which is equivalent to the fact that Z is not closed under division, means that Z is not a field. The smallest field containing the integers with the usual operations is the field of rational numbers. The process of constructing the rationals from the integers can be mimicked to form the field of fractions of any integral domain.[citation needed] Conversely, starting from an algebraic number field (an extension of the rational numbers), its ring of integers can be extracted, which includes Z as a subring.

Although ordinary division is not defined on Z, division "with remainder" is. It is called Euclidean division and possesses the following important property: given two integers a and b with b ≠ 0, there exist unique integers q and r such that a = q × b + r and 0 ≤ r < |b|, where |b| denotes the absolute value of b. The integer q is called the quotient and r is called the remainder of the division of a by b. The Euclidean algorithm for computing greatest common divisors works by a sequence of Euclidean divisions. Again, in the language of abstract algebra, the above says that Z is a Euclidean domain. This implies that Z is a principal ideal domain and any positive integer can be written as a product of primes in an essentially unique way. This is the fundamental theorem of arithmetic.[citation needed]

## Order-theoretic properties

Z is a totally ordered set without upper or lower bound. The ordering of Z is given by:[citation needed]

... −3 < −2 < −1 < 0 < 1 < 2 < 3 < ...

An integer is positive if it is greater than zero and negative if it is less than zero. Zero is defined as neither negative nor positive. The ordering of integers is compatible with the algebraic operations in the following way:

1. if a < b and c < d, then a + c < b + d
2. if a < b and 0 < c, then ac < bc.

It follows that Z together with the above ordering is an ordered ring.[citation needed] The integers are the only integral domain whose positive elements are well-ordered, and in which order is preserved by addition.[citation needed]

## Construction

Red points represent ordered pairs of natural numbers. Linked red points are equivalence classes representing the blue integers at the end of the line.

The integers can be formally constructed as the equivalence classes of ordered pairs of natural numbers (a, b).[3] The intuition is that (a, b) stands for the result of subtracting b from a.[3] To confirm our expectation that 1 − 2 and 4 − 5 denote the same number, we define an equivalence relation ~ on these pairs with the following rule: $(a,b) \sim (c,d)$ precisely when $a + d = b + c.$
Addition and multiplication of integers can be defined in terms of the equivalent operations on the natural numbers;[3] denoting by [(a,b)] the equivalence class having (a,b) as a member, one has:

$[(a,b)] + [(c,d)] := [(a+c,b+d)].$

$[(a,b)]\cdot[(c,d)] := [(ac+bd,ad+bc)].$

The negation (or additive inverse) of an integer is obtained by reversing the order of the pair: $-[(a,b)] := [(b,a)].$ Hence subtraction can be defined as the addition of the additive inverse: $[(a,b)] - [(c,d)] := [(a+d,b+c)].$ The standard ordering on the integers is given by: $[(a,b)] < [(c,d)]$ iff $a+d < b+c.$ It is easily verified that these definitions are independent of the choice of representatives of the equivalence classes.

Every equivalence class has a unique member that is of the form (n,0) or (0,n) (or both at once). The natural number n is identified with the class [(n,0)] (in other words, the natural numbers are embedded into the integers by the map sending n to [(n,0)]), and the class [(0,n)] is denoted −n (this covers all remaining classes, and gives the class [(0,0)] a second time since −0 = 0).[citation needed] Thus, [(a,b)] is denoted by[citation needed]

$\begin{cases} a - b, & \mbox{if } a \ge b \\ -(b-a), & \mbox{if } a < b. \end{cases}$

If the natural numbers are identified with the corresponding integers (using the embedding mentioned above), this convention creates no ambiguity.[citation needed] This notation recovers the familiar representation of the integers as {... −3, −2, −1, 0, 1, 2, 3, ...}. Some examples are:

$\begin{align} 0 &= [(0,0)] &= [(1,1)] &= \cdots & &= [(k,k)] \\ 1 &= [(1,0)] &= [(2,1)] &= \cdots & &= [(k+1,k)] \\ -1 &= [(0,1)] &= [(1,2)] &= \cdots & &= [(k,k+1)] \\ 2 &= [(2,0)] &= [(3,1)] &= \cdots & &= [(k+2,k)] \\ -2 &= [(0,2)] &= [(1,3)] &= \cdots & &= [(k,k+2)]. \end{align}$

## Integers in computing

Main article: Integer (computer science)

An integer is often a primitive data type in computer languages. However, integer data types can only represent a subset of all integers, since practical computers are of finite capacity. Also, in the common two's complement representation, the inherent definition of sign distinguishes between "negative" and "non-negative" rather than "negative, positive, and 0". (It is, however, certainly possible for a computer to determine whether an integer value is truly positive.) Fixed-length integer approximation data types (or subsets) are denoted int or Integer in several programming languages (such as Algol68, C, Java, Delphi, etc.).[citation needed] Variable-length representations of integers, such as bignums, can store any integer that fits in the computer's memory. Other integer data types are implemented with a fixed size, usually a number of bits which is a power of 2 (4, 8, 16, etc.) or a memorable number of decimal digits (e.g., 9 or 10).[citation needed]

## Cardinality

The cardinality of the set of integers is equal to $\aleph_0$ (aleph-null). This is readily demonstrated by the construction of a bijection, that is, a function that is both injective and surjective, from Z to N. If N = {0, 1, 2, ...} then consider the function:

$f(x) = \begin{cases} 2|x|, & \mbox{if } x < 0 \\ 0, & \mbox{if } x = 0 \\ 2x-1, & \mbox{if } x > 0. \end{cases}$

Its graph is {... (-4,8) (-3,6) (-2,4) (-1,2) (0,0) (1,1) (2,3) (3,5) ...}. If N = {1, 2, 3, ...} then consider the function:

$g(x) = \begin{cases} 2|x|, & \mbox{if } x < 0 \\ 2x+1, & \mbox{if } x \ge 0. \end{cases}$

Its graph is {...
(-4,8) (-3,6) (-2,4) (-1,2) (0,1) (1,3) (2,5) (3,7) ...}. If the domain is restricted to Z, then each member of Z corresponds to exactly one member of N, and by the definition of cardinal equality the two sets have equal cardinality.

## Notes

1. Evans, Nick (1995). "A-Quantifiers and Scope". In Bach, Emmon W. Quantification in Natural Languages. Dordrecht, The Netherlands; Boston, MA: Kluwer Academic Publishers. p. 262. ISBN 0-7923-3352-7.
2. Miller, Jeff (2010-08-29). "Earliest Uses of Symbols of Number Theory". Retrieved 2010-09-20.
3. Campbell, Howard E. (1970). The structure of arithmetic. Appleton-Century-Crofts. p. 83. ISBN 0-390-16895-5.
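The pair construction and the explicit bijection above are easy to check mechanically. The following sketch is an editorial illustration, not part of the article: it implements addition, multiplication and negation on canonical representatives (n,0) or (0,n) of the equivalence classes, and spot-checks the bijection f from Z to N = {0, 1, 2, ...} from the Cardinality section.

```python
# Integers as equivalence classes of pairs (a, b) of naturals, with (a, b) ~ (c, d) iff a + d = b + c.
def canon(p):
    """Canonical representative of the class of (a, b): either (n, 0) or (0, n)."""
    a, b = p
    return (a - b, 0) if a >= b else (0, b - a)

def add(p, q):      # [(a,b)] + [(c,d)] = [(a+c, b+d)]
    return canon((p[0] + q[0], p[1] + q[1]))

def mul(p, q):      # [(a,b)] * [(c,d)] = [(ac+bd, ad+bc)]
    return canon((p[0]*q[0] + p[1]*q[1], p[0]*q[1] + p[1]*q[0]))

def neg(p):         # -[(a,b)] = [(b,a)]
    return canon((p[1], p[0]))

def from_int(n):    # embed an ordinary Python integer, for testing only
    return (n, 0) if n >= 0 else (0, -n)

# Spot-check the arithmetic against ordinary integer arithmetic.
for m in range(-5, 6):
    for n in range(-5, 6):
        assert add(from_int(m), from_int(n)) == from_int(m + n)
        assert mul(from_int(m), from_int(n)) == from_int(m * n)
        assert neg(from_int(m)) == from_int(-m)

# The bijection f: Z -> N = {0, 1, 2, ...} from the Cardinality section.
def f(x):
    return 2*abs(x) if x < 0 else (0 if x == 0 else 2*x - 1)

assert sorted(f(x) for x in range(-50, 51)) == list(range(101))  # hits 0..100 exactly once each
```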
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 23, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8803691864013672, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/34803/can-two-positive-integers-be-uniquely-recovered-from-their-difference-and-xor?answertab=votes
# Can two positive integers be uniquely recovered from their difference and XOR?

As part of an answer to a Stack Overflow question I made the assumption that if I choose two distinct positive integers $m$ and $n$, then give you $m - n$ and $m$ XOR $n$, then you can uniquely determine what $m$ and $n$ were. For all the examples I've tried this seems to work correctly, though I have no reason to believe that this should work in general. Moreover, I'm not familiar enough with the interactions of differences (or sums, for that matter) and XOR to devise a proof or counterexample. Is my claim true? If so, how would you go about proving it? If not, is there a nice counterexample? Thanks so much! - ## 2 Answers I believe this is false. Let $2^r \gt m \gt n$. Then $2^r + m$ and $2^r + n$ have the same difference and XOR as $m, n$. - How about (2,3) and (16,17)? -
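The counterexample pattern in the answer is easy to confirm by brute force. The sketch below is an editorial addition, not from the thread; it searches small pairs for collisions of the (difference, XOR) pair.

```python
# Search for distinct pairs (m, n) with m > n > 0 that share both m - n and m XOR n.
from collections import defaultdict

seen = defaultdict(list)
for m in range(1, 33):
    for n in range(1, m):
        seen[(m - n, m ^ n)].append((m, n))

collisions = {key: pairs for key, pairs in seen.items() if len(pairs) > 1}
print(collisions[(1, 1)])   # e.g. [(3, 2), (5, 4), (7, 6), ..., (17, 16), ...]
# (3, 2) and (17, 16) both give difference 1 and XOR 1, matching the 2^r + m, 2^r + n pattern.
```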
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 11, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9430542588233948, "perplexity_flag": "head"}
http://crypto.stackexchange.com/questions/6668/how-to-obtain-a-one-value-share-in-shamirs-secret-sharing
How to obtain a one-value share in Shamir's secret sharing This is a trivial question, but I had to ask: since each generated share in a Shamir's secret sharing scheme initially consists of a pair of values (representing the coordinates of a point on the plane), how do I reduce the two values to one, so that the share that I eventually hand out will consist of only one value? The way this is normally done is: Share #1: (1, 65428) Share #2: (2, 935747) Share #3: (3, 3524) But this does not seem very satisfactory to me, because the share still consists of two values, as the share's number (which could be seen as metadata) is still one of the two values. This way of doing it is also inconvenient in the sense that if a share is marked as Share #7, this normally implies that there are at least 7 shares, possibly more. The solution I am looking for is a way to have shares with totally random x values, such as (324,4634634) (23, 945) (8, 45634) (944, 356345) and no serial number on a share (e.g. Share #8), so that by intercepting one share, an adversary cannot even try to guess how many shares there are. Of course, the share generating mechanism is exactly the same. The only difference is that the x values, instead of being 1, 2, 3, 4... are random numbers. So, now the question is: assuming I decide to use random x numbers, how can I conflate the two values of each share into one value, also assuming that no external (metadata) markings of the share (e.g. share #3) will be used? - I had already thought of concatenation, but it does not seem a very elegant solution. – Penn Mar 12 at 14:33 If shares are bound to some ID, you could consider using a hash function to map that ID to an x-value. It is fine for the x-values to be predictable, so there's no problem there, and the hash's collision resistance should guarantee that you choose distinct x-values. The holder of the share can then simply recompute x. – Maeher Mar 12 at 15:58 Shamir's secret-sharing scheme does not need to have share #$3$ be the value of that polynomial $f(x)$ at $x=3$; the $n$ share-values are the values of $f(x)$ at $x_1, x_2, \ldots, x_n$ where the $x_i$ are $n$ distinct nonzero elements of the field. One does not have to choose $x_i = i$. But, of course, what each share holder is given is $(x_i, f(x_i))$, or, possibly, $(g(x_i), f(x_i))$ where $g(x_i)$ is the result of encrypting $x_i$ via an encryption scheme known to the trusted authority who will reconstruct the secret. – Dilip Sarwate Mar 12 at 23:13 1 Answer Well, Shamir Secret Sharing is done using a field $GF(p^k)$, for some prime $p$ and some integer $k$. A share consists of two integers $(x, y)$, where $0 \le x, y < p^k$. So, the obvious way to express a single share $(x, y)$ as a single value would be to use the value $x p^k + y$ (using integer arithmetic, not field operations); each potential share would map to a distinct value, and during reconstruction, each value can easily be mapped back to the $(x, y)$ format. Yes, this is just another way of doing concatenation; you might find this somewhat more elegant. - Thanks, Poncho. This is what I was looking for. One more thing: you said that by using xp^k+y each share would map to a distinct value. So, how would I get back the two values of x and y from the 'unified' share? That is, what is the inverse operation? – Penn Mar 12 at 16:01 @Penn: the inverse operation mapping $value = xp^k + y$ back to $(x,y)$ would be $x = value / p^k$ (round down), and $y = value \bmod p^k$.
– poncho Mar 12 at 16:05 Thanks again, Poncho! – Penn Mar 12 at 16:06
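The packing scheme from the accepted answer is just a positional encoding, so the round trip is a single divmod. A minimal editorial sketch follows; the particular prime and the sample share values are arbitrary choices for illustration.

```python
# Pack a Shamir share (x, y), with 0 <= x, y < p**k, into one integer and unpack it again.
p, k = 2_147_483_647, 1          # illustrative field size GF(p^k); any prime power would do

def pack(x, y):
    return x * p**k + y

def unpack(value):
    x, y = divmod(value, p**k)   # x = value // p^k (round down), y = value mod p^k
    return x, y

share = (324, 4634634)
assert unpack(pack(*share)) == share
print(pack(*share))
```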
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 25, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9402061104774475, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/76483/limit-of-a-prime-sequence?answertab=votes
# limit of a prime sequence

Let $p[n]$ be the $n$-th prime. Let $0 \leq m < k$. Prove $$\lim_{n\rightarrow\infty}\frac{ p[(n+k)^2] - p[(n+m)^2] }{ p[n]} = 4(k-m)\;.$$ This is a generalization of something I looked at a while ago. I have some empirical evidence for it but cannot prove it. I think it is hard (and interesting). I do not think the PNT helps. - I might be completely wrong on this, but isn't it a consequence of the PNT that $p(n)\sim n\ln(n)$? Could this not give an easy proof? – Olivier Bégassat Oct 27 '11 at 21:57 @SivaramAmbikasaran I don't understand, $p(n)$ must grow quicker than $n$, and thus quicker than $\frac{n}{\ln(n)}$ – Olivier Bégassat Oct 27 '11 at 22:01 @SivaramAmbikasaran You must have meant $\pi(n)$ :) – Olivier Bégassat Oct 27 '11 at 22:02 @Oliver: Right. I thought the op meant $\pi(n)$. Sorry for the previous comment. – user17762 Oct 27 '11 at 22:03 ## 1 Answer This would follow from unproved hypotheses on the distribution of prime numbers in short intervals. Therefore it's surely correct, but nobody knows how to prove it. It is conjectured that the number of primes in a short interval of the form $(x,x+x^\theta)$ is asymptotic to $x^\theta/\ln x$, for any fixed $1\ge\theta>0$. (This is only known for $\theta>0.55$ or something like that, I can't remember. The Baker-Harman-Pintz result related to $\theta=0.525$ is not an asymptotic but only a lower bound.) This short-interval conjecture is equivalent to saying that $p[n+f(n)] - p[n]$ is asymptotic to $f(n) \ln n$ for any nice function $f(n)$ satisfying $n \ge f(n) > n^\epsilon$ for some $\epsilon>0$. That in turn implies that $p[n^2+f(n)] - p[n^2+g(n)]$ is asymptotic to $(f(n)-g(n))\ln n^2$ for two such nice functions. Your quotient is the case $f(n) = 2kn+k^2$, $g(n) = 2mn+m^2$. If we knew the short interval conjecture for some $\theta<\frac12$, say (since $f$ and $g$ are about the square root of $n^2$), it would follow that the numerator of your quotient is asymptotic to $4(k-m)n\ln n$, while the denominator is known to be asymptotic to $n\ln n$. - I was wondering about the $0.525$ result: Wikipedia has this as an asymptotic result, but the paper only contains the lower bound; the same for Huxley's result. Perhaps you (or someone else knowledgeable enough) could clean up that section? – joriki Oct 28 '11 at 0:41 I just plucked the 0.55 number out of the air - I remember the 0.525 offhand, but not the exponent for the asymptotic formula. – Greg Martin Oct 28 '11 at 6:41
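For what it's worth, the conjectured limit can be probed numerically; convergence is only logarithmic, so moderate n only gets within several percent of 4(k − m). A rough editorial sketch (not from the thread), using sympy's prime(n) for the n-th prime:

```python
# Numerically probe (p[(n+k)^2] - p[(n+m)^2]) / p[n] -> 4(k - m) for a few values of n.
from sympy import prime

k, m = 2, 0
for n in (50, 100, 200):
    ratio = (prime((n + k)**2) - prime((n + m)**2)) / prime(n)
    print(n, ratio)          # should drift slowly toward 4*(k - m) = 8 as n grows
```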
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 30, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9308140873908997, "perplexity_flag": "head"}
http://quant.stackexchange.com/questions/tagged/black-scholes?sort=votes&pagesize=50
# Tagged Questions Black-Scholes is a mathematical model used for pricing options. 6answers 2k views ### Paradoxes in quantitative finance Everyone seems to agree that the option prices predicted by the Black-Merton-Scholes model are inconsistent with what is observed in reality. Still, many people rely on the model by using "the wrong ... 8answers 1k views ### Option pricing before Black-Scholes According to the Wikipedia article, Contracts similar to options are believed to have been used since ancient times. In London, puts and "refusals" (calls) first became well-known trading ... 3answers 873 views ### Are there any new Option pricing models? Back in the mid 90's I used the Black-Scholes Model and the Cox-Ross-Rubenstein (Binomial) Model's to price Options. That was nearly 15 years ago and I was wondering if there are any new models being ... 3answers 1k views ### Why hold options when you can dynamically replicate their payoff? When holding vanilla options, you can cancel out, theoretically, all risk with dynamic (delta) hedging. Then you earn the "risk free rate of return". Why would you make such a portfolio when you can ... 1answer 1k views ### Transformation from the Black-Scholes differential equation to the diffusion equation - and back I know the derivation of the Black-Scholes differential equation and I understand (most of) the solution of the diffusion equation. What I am missing is the transformation from the Black-Scholes ... 2answers 549 views ### How do we use option price models (like Black-Scholes Model) to make money in practice? In quantitative finance, we know we have a lot of option price models such as geometric Brownian motion model (Black-Scholes models), stochastic volatility model (Heston), jump diffusion models and so ... 3answers 1k views ### Is there an all Java options-pricing library (preferably open source) besides jquantlib? I am looking for an all-java implementation of black scholes, preferably open source. I found jquantlib and quantlib (C++). Any other recommendations? The jquantlib site seems to be down. I'd prefer ... 5answers 837 views ### How to conduct Monte Carlo simulations to test validity of Black Scholes for a specific option? In reference to the original Black Scholes model, what approach is best to test the model in a rigorous way? Is there a standard approach that can accomplish this in a reasonable amount of time? ... 3answers 2k views ### What are the main limitations of Black Scholes? Pls explain and discuss these limitations, and explain which models can I use to overcome these limitations. Alternatively, provide examples of how to modify the original Black Scholes to overcome ... 2answers 1k views ### Why a self-financing replicating portfolio should always exist? According to my understanding the derivation of the Black-Scholes PDE is based on the assumption that the price of the option should change in time in such a way that it should be possible to ... 1answer 248 views ### Appropriate measure of Volatility for economic returns from an asset? I am doing research on uncertainty analysis and risk assessment for oil field development. For doing economic forecast and valuation I use Real Options theory, which is almost similar to theory used ... 3answers 712 views ### What tools are used to numerically solve differential equations in Quantitative Finance? There are a lot of Quantitative Finance models (e.g. Black-Scholes) which are formulated in terms of partial differential equations. 
What is a standard approach in Quantitative Finance to solve these ... 1answer 289 views ### What are the main differences in Jump Volatility and Local Volatility Is a JV model simply Local Vol + Jump Diffusion? If so, it seems logical that an existing JV model be able to be used for valuation of both Vanilla and Exotic options. Is this true? Does a Local ... 4answers 950 views ### Methods for pricing options I'm looking at doing some research drawing comparisons between various methods of approaching option pricing. I'm aware of the Monte Carlo simulation for option pricing, Black-Scholes, and that ... 2answers 349 views ### Convexity of BS Equation for Call and Put I have a simple question. Is the Black-Scholes Formula convex with respect to Implied volatility parameter $\sigma$ (for calls or put) ? When I say Black-Scholes I mean for a call the following one ... 2answers 655 views ### Why doesn't Black-Scholes work in discrete time? I have a question considering Financial markets in discrete Time: One of the main theorems in discrete time is: In finite discrete Time with trading times t={1,...,T} the following are equivallent: ... 1answer 310 views ### Simulating the joint dynamics of a stock and an option I want to know the joint dynamics of a stock and it's option for a finite number of moments between now and $T$ the expiration date of the option for a number of possible paths. Let $r_{\mathrm{s}}$ ... 1answer 1k views ### Easiest and most accessible derivation of Black-Scholes formula I am preparing a QuantFinance lecture and I am looking for the easiest and most accessible derivation of the Black-Scholes formula (NB: the actual formula, not the differential equation). My favorite ... 1answer 219 views ### When pricing options, what precision should I work with? I'm wondering if there's any point at all in double-precision calculations, or whether it's ok to just do everything in single-precision, seeing how the difference on non-Tesla GPUs for single and ... 1answer 652 views ### How should I estimate the implied volatility skew term when calculating the skew-adjusted delta? I'm trying to come up with the implied volatility skew adjusted delta for SPY options. I'm working with the following formula: Skew Adjusted Delta = Black Scholes Delta + Vega * Vol Skew Slope. I ... 10answers 1k views ### Using Black-Scholes equations to “buy” stocks From what I understand, Black-Scholes equation in finance is used to price options which are a contract between a potential buyer and a seller. Can I use this mathematical framework to "buy" a stock? ... 2answers 1k views ### What causes the call and put volatility surface to differ? I currently have a local volatility model that uses the standard Black Scholes assumptions. When calculating the volatility surface, what causes the difference between the call volatility surface, ... 2answers 386 views ### Vanilla European options: Monte carlo vs BS formula I have implemented a monte carlo simulation for a plain vanilla European Option and I am trying to compare it to the analytical result obtained from the BS formula. Assuming my monte carlo pricer is ... 1answer 173 views ### Prove or disprove “If at least 10% of an option's value is time value, it has a delta less than 90” "If at least 10% of an option's value is time value (ie. time value >= 0.1*call price), it has a delta less than 90". In practice and after doing many tests with an option pricing calculator, this ... 1answer 1k views ### What is a self-financing and replicating portfolio? 
I try to understand the derivation of the Black-Scholes equation based on the "constructing a replicating portfolio". From mathematical point of view it looks simple. We assume that: Stock prices ... 1answer 416 views ### How to 'calibrate' simple pricing models for equity index options and equity options? I am interested in doing some research on plain vanilla equity options and equity index options. I have historical data for these options. I also happen to have market maker 'fair price' (bid and ask) ... 5answers 2k views ### How do you explain the volatility smile in the Black-Scholes framework? Does anyone have an explanation for the currently naturally forming volatility smile (and the variations) in the market? 2answers 786 views ### How to extrapolate implied volatility for out of the money options? Estimation of model-free implied volatility is highly dependent upon the extrapolation procedure for non-traded options at extreme out-of-the-money points. Jiang and Tian (2007) propose that the ... 2answers 290 views ### A few questions about signs of the Greek letters Rho is the partial derivative of the value of call option, $C$, w.r.t the riskfree interest rate $r$: $$\rho \equiv \frac{\partial C}{\partial r}$$ In the standard B-S formula this term is positive, ... 3answers 444 views ### Black-Scholes No Dividends assumption I am doing some research involving black-scholes model and got stuck with dividend-paying stocks when evaluating options. What is the real-world approach on handling the situations when an underlying ... 1answer 336 views ### Extensions of Black-Scholes model For the Black-Scholes model my feeling is that the volatility parameter is like sweeping stuff under the rug. Are there models which improve on the volatility aspect of Black-Scholes by adding other ... 5answers 342 views ### In Black-Scholes, why is $\log{\frac{S_{t+\triangle t}}{S_t}} \sim \phi{((\mu - \frac{1}{2}\sigma^2)\triangle t, \sigma^2 \triangle t)}$? Namely, I dont understand why the mean is $(\mu - \frac{1}{2}\sigma^2)\triangle t$ and not just $\mu \triangle t$. I am aware that it is supposed to represent a lognormal distribution, but I guess I'm ... 3answers 300 views ### Is it possible to demonstrate that one pricing model is better than another? Take the classic GBM (geometric Brownian motion) model for equities as an example: ds = mu * S * dt + sigma * S * dW. It is the basis for the classic ... 4answers 357 views ### Expected Growth The model assumption of the Black-Scholes formula has two parameters for the geometric Brownian motion, the volatility $\sigma$ and the expected growth $\mu$ (which disappears in the option formulae). ... 5answers 699 views ### How to improve the Black-Scholes framework? Since the distribution of daily returns are obviously not lognormal, my bottom line question is has BS been reworked for a better fitting distribution? Google searches give me nada. The best dist ... 1answer 252 views ### Taylor series expansion (Volatility Trading book) explanation sought I am currently reading Volatility Trading, I have only just started, but I am trying to understand a "derivation from first principles" of the BSM pricing model. I understand how the value of a long ... 2answers 318 views ### Basket option pricing: step by step tutorial for beginners I would like to learn how to price options written on basket of several underlyings. I've never tried to do it and I would appreciate if you can provide some documents, papers, web sites and so on in ... 
1answer 169 views ### Can we explain physical similarities between Black Scholes PDE and the Mass Balance PDE (e.g. Advection-Diffusion equation)? Both the Black-Scholes PDE and the Mass/Material Balance PDE have similar mathematical form of the PDE which is evident from the fact that on change of variables from Black-Scholes PDE we derive the ... 1answer 409 views ### Better understanding of the Datar Mathews Method - Real Option Pricing in their paper "European Real Options: An intuitive algorithm for the Black and Scholes Formula" Datar and Mathews provide a proof in the appendix on page 50, which is not really clear to me. It's ... 1answer 253 views ### Black-Scholes American Put Option Here is my question: This is a question about Black-Scholes model, but it may be applicable to more complicated models. Throughout the discussion, the strike price $K$, interest rate $r$ and ... 2answers 193 views ### Black-Scholes and Fundamentals So basically $dS_t=\mu S_t dt+\sigma S_t dW_t$ and $\mu=r-\frac12\sigma^2$ I have just been thinking about this latter equation. This is very interesting because it ties together risk-free ... 4answers 544 views ### Ways of treating time in the BS formula The Black-scholes formula typically has time as $\sqrt{T-t}$ or some such. My questions: What is the granularity of this? If we treat $t$ as the number of days, then logically on the day of expiry, ... 1answer 335 views ### Can American options with no dividends and zero risk-free rate be treated as European? Let's say you've got American options on a future of a stock index. There are no dividends, and no risk-free rate either (assume $r=0$). Can these options then be treated as European from the ... 1answer 288 views ### price of a “Cash-or-nothing binary call option” I'm stuck with one homework problem here: Assume there is a geometric Brownian motion \begin{equation} dS_t=\mu S_t dt + \sigma S_t dW_t \end{equation} Assume the stock pays dividend, with the ... 2answers 137 views ### Is vega of Black-Scholes European type option always positive? We assume we work in the risk-neutral measure with a stock which pays no dividend and a continuous discount rate. For PUT and CALL only: can someone please clarify if what I said is correct? The ... 2answers 137 views ### Trading days or calendar days for Black-Scholes parameters? Black-Scholes requires volatility estimated in trading days. How does this affect other parameters? Specifically, should the time-to-expiration also be in trading days? And how does this affect the ... 1answer 194 views ### What are $d_1$ and $d_2$ for Laplace? What are the formulae for d1 & d2 using a Laplace distribution? 3answers 271 views ### Basic question about Black Scholes derivation In the derivation of the Black Scholes equation, the value of the portfolio at time $t$ is given by $$P_t = -D_t + \frac{{\partial D_t}}{{\partial S_t}}S_t$$ where $P_t$ is the value of the ... 0answers 62 views ### Changes to option valuation for dollar-pegged underlying In Russia, options on futures on the RTS index are priced in points instead of currency, with points being directly related to the value of the US dollar such that, for example, if the dollar rises, ... 0answers 259 views ### Can the Heston model be shown to reduce to the original Black Scholes model if appropriate parameters are chosen? Summary For Heston model parameters that render the variance process constant, the solution should revert to plain Black-Scholes. Closed form solutions to the Heston model don't seem to do this, even ...
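Several of the questions above refer to the Black-Scholes formula itself; for reference, here is a minimal editorial sketch of the standard closed-form European call price (no dividends), in the usual notation S, K, r, sigma, T. It is included for illustration only and is not taken from any of the listed questions.

```python
# Standard Black-Scholes price of a European call option (no dividends).
from math import log, sqrt, exp, erf

def norm_cdf(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(S, K, r, sigma, T):
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

print(bs_call(S=100, K=100, r=0.05, sigma=0.2, T=1.0))   # roughly 10.45
```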
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 21, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8954079747200012, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/70289?sort=newest
## laplacian for metrics on $S^n$

It is true that the restriction of the Laplace operator on $\mathbb R^n$ to functions on the sphere is the Laplacian for the round metric on the sphere. Is this true for any Riemannian metric $g$ on $\mathbb R^n$? I mean, is it true that the restriction of $\Delta_g$ to functions on the sphere is the Laplacian on $S^{n-1}$ of the metric induced by $g$? Thanks in advance. - 2 This seems like a nice exercise for you to do yourself. – Deane Yang Jul 14 2011 at 3:17 3 Your first sentence is not correct. The Laplace operator on $R^n$ does not act on functions on the sphere, so I don't see what you mean by restriction. Once you reformulate the sentence you'll probably find that it's false, although something is true for homogeneous functions. For a general metric $g$ I don't quite see what simple statement could be correct. – Jean-Marc Schlenker Jul 14 2011 at 6:18 First, as Jean-Marc points out, you need to restate your question more precisely (and correctly). Second, your question applies to any hypersurface in a Riemannian manifold, and you can compare the two Laplacians by writing them with respect to local co-ordinates, where the hypersurface is the level set of one of the co-ordinates. – Deane Yang Jul 14 2011 at 11:45 3 Here's one possible statement to prove or refute: Given a hypersurface in a Riemannian manifold, consider a function that is constant along each geodesic normal to the hypersurface. The manifold Laplacian of this function, restricted to the hypersurface, is equal to the hypersurface Laplacian of the function restricted to the hypersurface. – Deane Yang Jul 14 2011 at 13:11 Dear Yang, I was able to prove your statement. I'm very grateful for your help. I'll try to post the solution in the following days. Thank you again. Yaiza. – unknown (google) Jul 15 2011 at 5:08 ## 1 Answer In $\mathbb{R}^n$, in terms of polar coordinates $(r,\theta)$ where $r>0$ and $\theta\in S^{n-1}$, we have the following formula: $$\Delta_{\mathbb{R}^n}=\frac{\partial^2}{\partial r^2}+\frac{n-1}{r}\frac{\partial}{\partial r}+\frac{1}{r^2}\Delta_{S^{n-1}}.$$ To prove it, you can first try to prove it when $n=2$: When $n=2$, $(x,y)=(r\cos\theta, r\sin\theta)$...I think you can fill out the details. So the answer to your question is yes when $g$ is Euclidean. - Hi Paul, thank you for your quick answer. I knew this already, it's what I wrote in the first line. What I wanted to know is if it would hold for any other Riemannian metric on $R^n$. – unknown (google) Jul 14 2011 at 5:19
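As a sanity check of the polar-coordinate formula in the $n=2$ case suggested in the answer, one can verify it symbolically on a concrete function. The sketch below is an editorial addition using sympy; the test function $u = x^2 y$ is an arbitrary choice.

```python
# Check Delta u = u_rr + (1/r) u_r + (1/r^2) u_tt in R^2 for the sample function u = x^2 * y.
import sympy as sp

x, y, r, t = sp.symbols('x y r theta', positive=True)
u = x**2 * y
lap_cart = sp.diff(u, x, 2) + sp.diff(u, y, 2)                 # Cartesian Laplacian: 2*y

u_pol = u.subs({x: r*sp.cos(t), y: r*sp.sin(t)})
lap_pol = sp.diff(u_pol, r, 2) + sp.diff(u_pol, r)/r + sp.diff(u_pol, t, 2)/r**2

difference = sp.simplify(lap_pol - lap_cart.subs({x: r*sp.cos(t), y: r*sp.sin(t)}))
print(difference)   # 0, so both expressions agree
```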
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 18, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9360844492912292, "perplexity_flag": "head"}
http://mathhelpforum.com/calculus/206099-dense-itself-set-print.html
# Dense-in-itself set

• October 25th 2012, 05:40 PM xinglongdada Dense-in-itself set Let $E\subset \mathbb{R}^1$ be a non-empty countable set. Suppose $E$ has no isolated points, prove that $\bar E\backslash E$ is dense in $\bar E.$ It is difficult, isn't it?
• October 25th 2012, 08:51 PM hollywood Re: Dense-in-itself set I think the best approach might be to suppose that $\bar E\backslash E$ is not dense in $\bar E$. Start unraveling the definitions and see what you can prove about the set E. - Hollywood
• October 26th 2012, 12:59 AM xinglongdada Re: Dense-in-itself set I could not see any property of $E$, if I assume $\bar E'\backslash E$ is not dense in $E$. I argue as follows. We need only prove the closure of $\bar E'\backslash E$ contains $\bar E.$ Suppose then $x$ not in the closure of $\bar E'\backslash E$, then there exists a $\tilde \delta>0$ , such that for any $\delta\in (0,\tilde \delta)$, the ball $B(x,\delta)\subset (E'\backslash E)^c=E^{'c}\cup E$. Since $E$ is countable, we see $B(x,\delta)\cap E^{'c}\neq \emptyset.$ This, by definition, means $x\in E^{'c-}$ (the closure of $E^{'c}$). Here, I could only prove $x\in E^{'c-}$, but not $x\in E^{'c}.$ Would you help me out? Thank you.
• October 26th 2012, 01:25 PM Plato Re: Dense-in-itself set Quote: Originally Posted by xinglongdada Let $E\subset \mathbb{R}^1$ be a non-empty countable set. Suppose $E$ has no isolated points, prove that $\bar E\backslash E$ is dense in $\bar E.$ Another way to define being dense-in-itself is to say that the set contains no isolated points. Thus, is $E$ dense-in-itself? Can $\overline{E}\setminus E$ be dense-in-itself?
• October 26th 2012, 02:20 PM xinglongdada Re: Dense-in-itself set Yes, indeed they are equivalent. $E\subset E'$ is equivalent to saying $E$ has no isolated points. I do not see how to prove the statement.
• October 26th 2012, 03:06 PM Plato Re: Dense-in-itself set Quote: Originally Posted by xinglongdada Prove that $\overline{E}\backslash E$ is dense in $\overline{E}.$ Well frankly I was confused by the title of the thread and the question above. If you want to prove the above question, can you show that $\overline{\overline{E}\setminus E}=\overline{E}~?$
• October 26th 2012, 03:13 PM hedi Re: Dense-in-itself set E has no isolated points, so every point x in cl(E) is the limit of a sequence xn of distinct elements of E. E is countable, so in the neighborhood of each xn there is yn in cl(E)-E. yn converges to x, so we have our claim.
• October 26th 2012, 03:18 PM xinglongdada Re: Dense-in-itself set Yes, I want to prove this.
• October 26th 2012, 03:29 PM xinglongdada Re: Dense-in-itself set Quote: Originally Posted by hedi E has no isolated points, so every point x in cl(E) is the limit of a sequence xn of distinct elements of E. E is countable, so in the neighborhood of each xn there is yn in cl(E)-E. yn converges to x, so we have our claim. Why "in the neighborhood of each xn there is yn in cl(E)-E"?
• October 26th 2012, 06:11 PM Plato Re: Dense-in-itself set Quote: Originally Posted by xinglongdada Why "in the neighborhood of each xn there is yn in cl(E)-E"?
I know only $B(x_n-1/n,x_n+1/n)\subset E^{-c}\cup (\bar E\backslash E)\cup E.$ Thus, by the countability of $E$, I could only see that $B(x_n-1/n,x_n+1/n)$ contains an element $y_n$ which lies in $\bar E^c$ or $\bar E\backslash E$? I HATE THESE TRICKY PROOFS. Notation: $\mathcal{B}(x;\delta)=\left( {x - \delta ,x + \delta } \right)$ where $\delta>0$. Note that if $t\in E$ then $\mathcal{B}(t;1)$ contains a point $x_1\in E\setminus\{t\}$ because $E$ has no isolated points. But $\mathcal{B}(t;1)$ is uncountable so $\left( {\exists y_1 \notin E} \right)\left[ {y_1 \in \mathcal{B}(t;1)} \right]$. So if $\delta_1=1$ then $\exists\delta_2$ such that $\mathcal{B}(t;\delta_2)\subseteq\mathcal{B}(t; \delta_1)$ so that $x_1\notin \mathcal{B}(t;\delta_2)$ AND $y_1\notin \mathcal{B}(t;\delta_2)$. Can you see why I HATE this? There is a sequence of points $\left( {x_n } \right)$ in $E$ such that $\left( {x_n } \right) \to t$. There is a sequence of points $\left( {y_n } \right)$ in $E^c$ such that $\left( {y_n } \right) \to t$.
• October 26th 2012, 08:37 PM xinglongdada Re: Dense-in-itself set Then you prove that $E\subset \partial E.$ How to prove the statement then?
• October 27th 2012, 07:34 AM Plato Re: Dense-in-itself set Quote: Originally Posted by xinglongdada Then you prove that $E\subset \partial E.$ How to prove the statement then?
• October 27th 2012, 08:39 AM hedi Re: Dense-in-itself set Since E has no isolated points, the open sets in cl(E) are uncountable, so they must intersect cl(E)-E, because E is countable. (Notice that cl(E) is uncountable.)
• October 27th 2012, 11:35 AM hollywood Re: Dense-in-itself set A friend of mine came up with this proof having to do with perfect sets. By definition, a set E is perfect if E=E', the set of limit points of E. Equivalently, a set is perfect if it is closed and has no isolated points. And if a set is perfect, it is uncountable. This Wikipedia article has some background: Derived set (mathematics) - Wikipedia, the free encyclopedia You can also find a proof that a perfect set is uncountable on the web. Ok. Suppose that $\bar E\backslash E$ is not dense in $\bar E$. Then there exists $x\in\bar{E}$ and $\epsilon>0$ such that $B(x,\epsilon)\cap(\bar E\backslash E)$ is empty. So $\bar{B}(x,\frac{\epsilon}{2})\cap(\bar E\backslash E)$ is empty. This means that in $\bar{B}(x,\frac{\epsilon}{2})$ every point of $\bar{E}$ is also a point of E. So $\bar{B}(x,\frac{\epsilon}{2})\cap\bar{E}$ is a subset of E. But $\bar{B}(x,\frac{\epsilon}{2})\cap\bar{E}$ is closed, and since it is a subset of E, contains no isolated points. So $\bar{B}(x,\frac{\epsilon}{2})\cap\bar{E}$ is perfect, and therefore uncountable. This is a contradiction, so $\bar E\backslash E$ is dense in $\bar E$. - Hollywood
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 79, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9307360053062439, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/23279/why-does-a-coin-falls-faster-when-its-flipping-as-well
# Why does a coin fall faster when it's flipping as well?

From my experiments with measuring how fast a coin falls, I have consistently measured a faster falling rate for a coin that flips as it falls. As an example, a coin dropping on its edge from a height of $45 \:\rm{cm}$ hits the ground $20 \:\rm{ms}$ later than a flipping coin falling from the same height. Now here's the catch: I use a microphone to mark the events. I drop the coin off the edge of a table letting it slightly brush off it. The bang noise of this event combined with the noise the coin makes as it hits the hard ground lets me measure the fall duration accurately (I hope). I also take into account the time it takes the sound of the coin hitting the ground to come back up to the mic. Using $\approx340\:\rm{m/s}$ for the speed of sound and $9.806\:\rm{m/s^2}$ for the acceleration due to gravity, my measurement of height is dead accurate, BUT only for a coin dropped on its edge. A flipping coin consistently gives me a measurement less than the correct value. First I suspected air resistance, but if that were the case, shouldn't the coin falling on its edge fall quicker? Any ideas? - 2 1) How are you flipping the coin?? Note that most processes that flip a coin will impart some extra translational energy to it as well. Also, if the coin has not yet begun flipping at the time it brushes the mic, then there will be a time discrepancy. – Manishearth♦ Apr 5 '12 at 7:18 Thanks for replacing my numbers with proper ones (I'm not fluent in TeX). Well, I drop the coin from about half a cm above the edge of the table while holding it horizontally. When it hits the edge, it starts flipping as well as making the noise for the mic to pick up. – Mansour Apr 5 '12 at 7:26 1 Can you comment a bit on the accuracy/reproducibility of your set-up? Is this 20 ms based on the average of 100 realizations with a standard deviation of 100 ms, for example, or is $\sigma$ 0.1 ms? – Bernhard Apr 5 '12 at 7:39 I must warn you that I'm not a physics student, so my method may not be proper. I consistently (100% of the time) get a faster falling rate for the flipping coin. The difference changes, however; for the 45 cm fall, I recorded it between 15 ms and 22 ms over 20 tries. – Mansour Apr 5 '12 at 8:01 1 To extend on my remark about the difference in the fall distance of the center of mass: A difference of only 1.1 millimeters in the fall distance would change the fall time by 15 milliseconds. That is on the order of the thickness of a typical small coin. – Benjamin Franz Apr 6 '12 at 14:38
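The timing model the question describes (free fall from height h plus the sound's return trip to the mic) is easy to write down; the sketch below is an editorial illustration using the question's numbers and is not part of the thread.

```python
# Expected impact-to-mic delay: free-fall time from height h plus the sound travelling back up.
import math

g, c = 9.806, 340.0                        # m/s^2 and m/s, the values used in the question

def measured_time(h):
    return math.sqrt(2*h/g) + h/c          # fall time + sound return time

h = 0.45
print(measured_time(h))                                    # ~0.304 s for the 45 cm drop
print(measured_time(h) - measured_time(h - 0.0045))        # ~1.5 ms shorter if the drop is 4.5 mm less
```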
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 5, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9406985640525818, "perplexity_flag": "middle"}
http://mathhelpforum.com/calculus/188842-implicit-differentiation-word-problem.html
Thread: 1. Implicit differentiation word problem I have a written assignment in which one specific word problem is confusing me greatly. I just recently learned implicit differentiation but haven't gotten much into depth with it. The question is as follows: "Each time we use implicit differentiation, we are able to rewrite the resulting equation in the form: f(x,y)=y′ ·g(x,y) for some expressions f and g. Explain why this can always be done; that is, why doesn’t the chain rule ever produce a term like: (y′)^2 or 1/y′ " I really am not grasping the form of the initial equation given let alone the logic behind what it is asking. I don't want anyone to give me the answer to the problem, but any help in understanding what it is actually asking, the concepts behind the question and the thought process required to answer the question would be of great help. 2. Re: Implicit differentiation word problem $F(x,y(x))=0\Rightarrow \frac {\partial F}{\partial x}+\frac {\partial F}{\partial y}y'(x)=0\Rightarrow \ldots$ 3. Re: Implicit differentiation word problem Originally Posted by elrpsu I have a written assignment in which one specific word problem is confusing me greatly. I just recently learned implicit differentiation but haven't gotten much into depth with it. The question is as follows: "Each time we use implicit differentiation, we are able to rewrite the resulting equation in the form: f(x,y)=y′ ·g(x,y) for some expressions f and g. Explain why this can always be done; that is, why doesn’t the chain rule ever produce a term like: (y′)^2 or 1/y′ " I really am not grasping the form of the initial equation given let alone the logic behind what it is asking. I don't want anyone to give me the answer to the problem, but any help in understanding what it is actually asking, the concepts behind the question and the thought process required to answer the question would be of great help. Every function of y is a function of x. So if you wanted to differentiate something like $\displaystyle z= y^2$ (in other words, $\displaystyle z = \left[y(x)\right]^2$ )with respect to x, you need to use the chain rule. The "inner" function is $\displaystyle y(x)$, and the "outer" function is $\displaystyle y^2$. So using the chain rule, if $\displaystyle z = y^2$ and we let $\displaystyle u = y \implies z = u^2$, then $\displaystyle \frac{du}{dx} = \frac{dy}{dx}$ and $\displaystyle \frac{dz}{du} = 2u = 2y$. Therefore $\displaystyle \frac{dz}{dx}= \frac{dz}{du} \cdot \frac{du}{dx} = 2y\,\frac{dy}{dx}$. So whenever you want to differentiate a function of y, since y is the inner function, you are always going to get $\displaystyle \frac{dy}{dx}$ as a factor because of the chain rule. 4. Re: Implicit differentiation word problem Originally Posted by Prove It Every function of y is a function of x. So if you wanted to differentiate something like $\displaystyle z= y^2$ (in other words, $\displaystyle z = \left[y(x)\right]^2$ )with respect to x, you need to use the chain rule. The "inner" function is $\displaystyle y(x)$, and the "outer" function is $\displaystyle y^2$. So using the chain rule, if $\displaystyle z = y^2$ and we let $\displaystyle u = y \implies z = u^2$, then $\displaystyle \frac{du}{dx} = \frac{dy}{dx}$ and $\displaystyle \frac{dz}{du} = 2u = 2y$. Therefore $\displaystyle \frac{dz}{dx}= \frac{dz}{du} \cdot \frac{du}{dx} = 2y\,\frac{dy}{dx}$. 
So whenever you want to differentiate a function of y, since y is the inner function, you are always going to get $\displaystyle \frac{dy}{dx}$ as a factor because of the chain rule. Thank you! That helped me understand this all quite a bit better than I was able to before. 5. Re: Implicit differentiation word problem Another way of expressing it is that differentiation is a linear operation, (f + g)' = f' + g', and that the product rule, (fg)' = f'g + fg', involves each derivative only to the first power. You never get "squares" or more complicated functions of the derivative.
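The point of the exercise can also be seen mechanically: differentiating any relation F(x, y(x)) = 0 with a computer algebra system always produces an expression that is linear in y'. The sketch below is an editorial illustration using sympy, with an arbitrary example relation.

```python
# Implicit differentiation of an arbitrary relation F(x, y) = 0 is always linear in y'.
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')(x)
F = x**3 + x*y**2 + sp.sin(y) - 7            # an arbitrary relation F(x, y) = 0

dF = sp.diff(F, x)                           # the chain rule is applied automatically
yp = sp.Symbol("yp")                         # stand-in symbol for y'
expr = dF.subs(sp.Derivative(y, x), yp)

print(sp.collect(sp.expand(expr), yp))       # of the form f(x,y) + g(x,y)*yp: linear in yp
assert sp.diff(expr, yp, 2) == 0             # so no (y')^2 or higher power ever appears
print(sp.solve(sp.Eq(dF, 0), sp.Derivative(y, x)))   # and y' can be solved for directly
```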
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 21, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9518436193466187, "perplexity_flag": "head"}
http://mathoverflow.net/questions/12829/convex-polyhedra
## Convex Polyhedra

Exactly what set of mathematical tools (meaning: set of areas of mathematical knowledge) is appropriate to begin with to analyse the convex polyhedra (to enumerate face vectors associated with a polyhedron, to calculate the combinatorial types), starting with simple polyhedra and moving to the general case. - 2 -1: This question is prohibitively vague. What kind of "analysis" do you want to do? Your tag suggests that you want to somehow enumerate them, but the set of convex polyhedra is uncountable. – Pete L. Clark Jan 24 2010 at 12:53 Yes, Pete is right about its vagueness; my purpose is to count face vectors. – Ali Dino Jumani Jan 24 2010 at 13:19 Assuming you have 3-dimensional convex polyhedra in mind, look at Branko Grunbaum's book Convex Polytopes. For k-valent polyhedra (k = 3, 4, or 5), if I understand what you want, then there are some partial results going under the name of what are called "Eberhard Theorems." – Joseph Malkevitch Jan 24 2010 at 14:12 Joseph Malkevitch's own AMS Monthly column "Euler's Polyhedral Formula: Part II" is also a nice elaboration in this respect; but naming the explicit set of tools would be helpful. – Ali Dino Jumani Jan 24 2010 at 14:51 2 -1. I am not sure that "finding the exact set of mathematical tools" is a good way to learn things: that isn't how mathematics works. I mean, maybe you should learn homology theory, but maybe you don't need to bother... There are several books on convex polytopes and polyhedra, so if a list of such books is what you were after, perhaps you should edit the question? – Yemon Choi Jan 24 2010 at 22:02

## 2 Answers

Dear Ali, Well, there are various tools which are useful in the study of convex polytopes. The following list is perhaps not complete and it certainly should not be frightening. (I don't know very well various of these tools.)

1) Basic tools of linear algebra and convexity. The notions of supporting hyperplanes, separation theorems, the Caratheodory, Helly and Radon theorems, etc.

2) Combinatorics. Some of the study of convex polytopes translates geometric questions into purely combinatorial questions. So familiarity with combinatorial notions and techniques is useful.

3) Graph theory. As Joe mentioned, the study of polytopes in 3 dimensions is closely related to the study of planar graphs. There are a few other connections to graph theory, so it is useful to be familiar with some graph theory.

4) Gale duality. The notion of Gale duality is a linear-algebra concept which provides an important technique in the study of convex polytopes.

5) Some basic algebraic topology. Euler's theorem and its higher-dimensional analogues are of central importance, and this theorem is closely related to algebraic topology. Another example: there is a result by Perles that the [d/2]-dimensional skeleton of a simplicial polytope determines the entire combinatorial structure. (See this paper by Jerome Dancis.) The proof is based on an elementary topological argument. The Borsuk-Ulam theorem also has various nice applications to the study of polytopes.

6) Some functional analysis. There is a result by Figiel, Lindenstrauss, and Milman that a centrally symmetric convex polytope in $d$ dimensions satisfies $$\log f_0(P) \cdot \log f_{d-1}(P) \ge \gamma d$$ for some absolute positive constant $\gamma$.
7) Some commutative algebra. Several notions and results from commutative algebra play a role in the study of convex polytopes and related objects. Especially important is the notion of Cohen-Macaulay rings and results about these rings.

8) Toric varieties. Understanding the topology of certain varieties called "toric varieties" turned out to be quite important for the study of convex polytopes.

All these items refer to general polytopes. There is also a (related) rich study of polytopes arising in combinatorial optimization. Here is a link to a paper entitled "Polyhedral combinatorics: an annotated bibliography" by Karen Aardal and Robert Weismantel.

### references

Here are some relevant references: Ziegler's book Lectures on Polytopes and the second edition of Grunbaum's book "Convex Polytopes" will give a very nice introduction to topics 1) - 4). The connection with commutative algebra and some comments and references on the connection with toric varieties (topics 7 and 8) can be found in (chapters 2 and 3 of) the second edition of Stanley's book "Combinatorics and Commutative Algebra". Relations with algebraic topology and with functional analysis can be found in various papers. This wikipedia article can also be useful. - 2 Wouldn't the more interesting question be, really, what tools are useless to study polytopes? :) – Mariano Suárez-Alvarez Sep 2 2010 at 12:45

If you are looking for software, I recommend POLYMAKE, an open source program which can take a polyhedron specified either by inequalities or by vertices and return (among many things) the $f$-vector. - I am writing the terms "explicit set of tools" or "mathematical tools" to mean the mathematical machinery needed to enumerate face vectors associated with a polyhedron and to calculate the combinatorial types. – Ali Dino Jumani Jan 24 2010 at 18:27 Since planar 3-connected graphs correspond to 3-polytopal graphs by Steinitz's Theorem, one typically starts with an equation, say (*), derived from Euler's polyhedral formula. This formula can involve only numbers of faces, or it can involve numbers of faces and vertices. If one successfully constructs a polyhedron whose values satisfy the equation (*), one tries to find another solution that satisfies (*) by using graph theory ideas to make some "local change" in the graph that solved the original case. Perhaps if you are more specific about the problem that interests you, I can be more specific. – Joseph Malkevitch Jan 24 2010 at 19:22
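As a concrete illustration of the f-vector bookkeeping discussed in this thread (this snippet is mine, not from the thread; the numbers are the standard ones for the Platonic solids), here is a quick check of Euler's polyhedral formula $V - E + F = 2$, the starting point Joseph Malkevitch mentions:

```
# f-vectors (V, E, F) of the five Platonic solids; Euler's relation V - E + F = 2
platonic = {
    "tetrahedron":  (4, 6, 4),
    "cube":         (8, 12, 6),
    "octahedron":   (6, 12, 8),
    "dodecahedron": (20, 30, 12),
    "icosahedron":  (12, 30, 20),
}
for name, (V, E, F) in platonic.items():
    assert V - E + F == 2, name
    print(f"{name:13s} f-vector = ({V}, {E}, {F}),  V - E + F = {V - E + F}")
```

For anything beyond hand-sized examples, polymake (recommended in the answer above) computes f-vectors directly from an inequality or vertex description.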
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9165825247764587, "perplexity_flag": "head"}
http://mathoverflow.net/questions/58718?sort=oldest
Degree reduction argument in Guth-Katz's proof of Erdos distinct distance problem in the plane

In the middle of page 9 of http://arxiv.org/PS_cache/arxiv/pdf/1011/1011.4105v1.pdf they said: "Now we select a random subset... choosing lines independently with probability $\frac{Q}{100}$. With positive probability..." I cannot see why there is positive probability... Could anyone explain a bit about what is going on there? I feel they are applying the law of large numbers, but I cannot see it clearly; for example, what is the probability measure space, what are the random variables, how is the law used?.. -

1 Answer

Not a complete answer but a quick explanation of how I read page 9 in that paper.

1. The underlying probabilistic model is that for every line $l\in\mathfrak L'$ you throw a coin that shows head with probability $100/Q$ and tail with probability $(Q-100)/Q$, and then $\mathfrak L''$ is the set of all lines whose coin showed head. In other words, you choose a random subset $\mathfrak L''$ of $\mathfrak L'$ and the probability for choosing a particular set $\mathfrak L_0\subseteq \mathfrak L'$ equals $$\mathbf{P}[\mathfrak L''=\mathfrak L_0]=\left(\frac{100}{Q}\right)^{|\mathfrak L_0|}\left(1-\frac{100}{Q}\right)^{|\mathfrak L'\setminus\mathfrak L_0|}.$$

2. By linearity of expectation the expected cardinality of $\mathfrak L''$ is $\mathbf{E}(|\mathfrak L''|)=\frac{100\alpha N^2}{Q}$, and this implies $$\mathbf{P}\left[|\mathfrak L''|\leqslant\frac{200\alpha N^2}{Q}\right]\geqslant\frac12.$$

3. We are done when we can show that the event "every line in $\mathfrak L'$ intersects $N/20$ lines in $\mathfrak L''$" has probability at least $1/2+\varepsilon$ for some positive $\varepsilon$, since then the event "$|\mathfrak L''|\leqslant\frac{200\alpha N^2}{Q}$ and every line in $\mathfrak L'$ intersects $N/20$ lines in $\mathfrak L''$" has probability at least $\varepsilon$. As I understand it, the intuition is that a typical line in $\mathfrak L'$ intersects a quadratic number of lines in $\mathfrak L'$, so it is highly unlikely that fewer than $N/20$ of these are chosen for $\mathfrak L''$. At the moment I don't see how to make that rigorous.

I haven't read the rest of the paper yet, but my impression is that it might be convenient (or even necessary) to replace $\mathfrak L'$ by something slightly smaller, throwing away some rubbish:

• I don't see why $\mathfrak L'$, as it is defined, cannot contain a few exceptional lines that have almost all their intersections with lines outside $\mathfrak L'$, so they intersect fewer than $N/20$ lines in $\mathfrak L'$. If this is the case, these lines have no chance to intersect $N/20$ lines in $\mathfrak L''$.

• It looks easier to show that with high probability almost every line in $\mathfrak L'$ intersects $\mathfrak L''$ at least $N/20$ times (instead of "every line in $\mathfrak L'$"). Maybe it is sufficient to replace $\mathfrak L'$ by this big subset.

Let's hope for a better answer by someone who understands what's going on. - Thanks Kali, I think you are right. Then one can take large N to ensure 3 by the law of large numbers. I agree that one should work on a smaller set. – rrrq Mar 17 2011 at 15:11 Hmm. I'm not really convinced by my own answer. I have to think about it some more.
– Thomas Kalinowski Mar 17 2011 at 21:45 in fact one can do estimate directly to see the probability is positive – rrrq Mar 18 2011 at 1:39 one can do this as following Claim: suppose both $L_{1}$ and $L_{2}$ has $O(N^{2})$ lines, each line in $L_{1}$ intersects with at least almost $QN$ lines in $L_{2}$, now choosing line in $L_{2}$ independently with probability $\frac{1}{Q}$, the resulting subset of $L_{2}$ is denoted by $L_{3}$ then one has similar statement in 2: with probability bigger than 1/2 $L_{3}$ contains $\frac{O(N^{2})}{Q}$ lines for 3, the probability that a line in $L_{1}$ intersects more than $\frac{N}{2}$ lines in $L_{3}$ is bigger than $1-e^{-N/100}$ – rrrq Mar 18 2011 at 3:21 this can be done by using estimate in en.wikipedia.org/wiki/Binomial_distribution. then the probability that every line in $L_{2}$ intersects with at least $\frac{N}{2}$ lines in $L_{3}$ is bigger than $(1-e^{-N/100})^{N^{2}}$, which goes to 1 when $N$ very large. – rrrq Mar 18 2011 at 3:25
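The binomial-tail estimate sketched in the comments above can be checked numerically. The sketch below is not from the paper: the numbers N, Q, and the intersection count m are made up for illustration, and scipy's exact binomial CDF stands in for the Chernoff-type bound cited from the Wikipedia article.

```
from scipy.stats import binom

# Illustrative numbers only (not the constants from the paper): a given line
# meets m of the lines in L', each of which is kept independently with
# probability p, so the number of kept lines it meets is Binomial(m, p).
N = 1000
Q = 50
m = N * Q // 10        # assumed number of lines of L' that our line meets
p = 1.0 / Q            # keep-probability
mean = m * p           # expected number of kept lines it meets

# Probability of seeing fewer than half the mean -- exponentially small,
# which is what lets a union bound over all ~N^2 lines go through.
tail = binom.cdf(mean / 2, m, p)
print(mean, tail)
print("union bound over N^2 lines:", N**2 * tail)
```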
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 57, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9595860838890076, "perplexity_flag": "head"}
http://mathoverflow.net/questions/107530?sort=oldest
## Fusion category and Hopf algebra

Let $H$ be a semisimple Hopf algebra over an algebraically closed field of characteristic zero. Further, let $K\subseteq H$ be a normal Hopf subalgebra. As we all know, $H$ can then be reconstructed from $K$ by some compatible data [1]. I want to know if there exists a similar result on Rep($H$) and Rep($K$), where Rep($H$) is the fusion category of finite-dimensional representations of $H$. If not, what can be said about Rep($H$) and Rep($K$)? Thank you!

[1] N. Andruskiewitsch, Notes on extensions of Hopf algebras, Canad. J. Math. 48 (1996), 3-42.

## 2 Answers

The following article may interest you: C. Pinzari and J. Roberts, A Duality Theorem for Ergodic Actions of Compact Quantum Groups on C*-Algebras, Communications in Mathematical Physics 277 (2008), no. 2, 385-421. I hope this helps... - Thanks, I will read that paper! -
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 10, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8443677425384521, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/14436/how-do-you-prove-s-sum-p-ln-p?answertab=votes
# How do you prove $S=-\sum p\ln p$?

How does one prove the formula for entropy $S=-\sum p\ln p$? Obviously systems on the microscopic level are fully determined by the microscopic equations of motion. So if you want to introduce a law on top of that, you have to prove consistency, i.e. entropy cannot be a postulate. I can imagine that it is derived from probability theory for a general system. Do you know such a line? Once you have such a reasoning, what are the assumptions to it? Can these assumptions be invalid for special systems? Would these systems not obey thermodynamics, statistical mechanics and not have any sort of temperature no matter how general? If thermodynamics/statmech are completely general, how would you apply them to the system where one point particle orbits another? - 4 You'll probably want to research information theory. This is the Shannon entropy. Interestingly, it's a constant of motion for Hamiltonian systems! You have a very interesting question, yet the answer could fill books. – Kasper Meerts Sep 7 '11 at 18:44 Shannon entropy exists, but it's still no answer why it is used in physics. Shannon entropy probably has some presuppositions. So why does physics satisfy these presuppositions? – Gerenuk Sep 7 '11 at 18:55 How do you define $S$? – Gerben Sep 7 '11 at 20:19 I guess the correct outline of an answer would be to start with classical thermodynamics and get to Carnot/reversible heat engine, find entropy as a state function, before delving into stat mech to give it a microscopic interpretation... Seems like a big job... – genneth Sep 7 '11 at 23:02 The common Carnot argument wouldn't help. The question how it is supposed to be connected with all microscopic processes in general would still be as open as before. Someone must have tried that before? I know common literature but it's not in there :(; S is whatever people use for showing irreversibility. – Gerenuk Sep 7 '11 at 23:36

## 3 Answers

The theorem is called the noiseless coding theorem, and it is often proven in clunky ways in information theory books. The point of the theorem is to calculate the minimum number of bits per variable you need to encode the values of N identical random variables chosen from $1...K$ whose probability of having a value $i$ between $1$ and $K$ is $p_i$. The minimum number of bits you need on average per variable in the large N limit is defined to be the information in the random variable. It is the minimum number of bits of information per variable you need to record in a computer so as to remember the values of the N copies with perfect fidelity.

If the variables are uniformly distributed, the answer is obvious: there are $K^N$ possibilities for N throws, and $2^{CN}$ possibilities for $CN$ bits, so $C=\log_2(K)$ for large N. Any less than CN bits, and you will not be able to encode the values of the random variables, because they are all equally likely. Any more than this, and you will have extra room. This is the information in a uniform random variable.

For a general distribution, you can get the answer with a little bit of law of large numbers. If you have many copies of the random variable, the probability of observing the values $n_1, n_2, \ldots, n_N$ is $$P(n_1, n_2, \ldots , n_N) = \prod_{j=1}^N p_{n_j}$$ This probability is dominated for large N by those configurations where the number of values of type i is equal to $Np_i$, since this is the mean number of the type i's.
So the value of P on any typical configuration is: $$P(n_1,\ldots,n_N) = \prod_{i=1}^K p_i^{Np_i} = e^{N\sum p_i \log(p_i)}$$ So for those possibilities where the probability is not extremely small, the probability is more or less constant and equal to the above value. The total number M(N) of these not-exceedingly-unlikely possibilities is what is required to make the sum of probabilities equal to 1: $$M(N) \propto e^{ - N \sum p_i \log(p_i)}$$ To encode which of the M(N) possibilities is realized in each run of N picks, you therefore need a number of bits B(N) which is enough to encode all these possibilities: $$2^{B(N)} \propto e^{ - N \sum p_i \log(p_i)}$$ which means that $${B(N)\over N} = - \sum p_i \log_2(p_i)$$ And all subleading constants are washed out by the large N limit. This is the information, and the asymptotic equality above is the Shannon noiseless coding theorem. To make it rigorous, all you need are some careful bounds on the large number estimates.

### Replica coincidences

There is another interpretation of the Shannon entropy in terms of coincidences which is interesting. Consider the probability that you pick two values of the random variable, and you get the same value twice: $$P_2 = \sum p_i^2$$ This is clearly an estimate of how many different values there are to select from. If you ask what is the probability that you get the same value k times in k throws, it is $$P_k = \sum p_i p_i^{k-1}$$ If you ask what is the probability of a coincidence after $k=1+\epsilon$ throws, you get the Shannon entropy. This is like the replica trick, so I think it is good to keep in mind.

### Entropy from information

To recover statistical mechanics from the Shannon information, you are given:

• the values of the macroscopic conserved quantities (or their thermodynamic conjugates): energy, momentum, angular momentum, charge, and particle number

• the macroscopic constraints (or their thermodynamic conjugates): volume, positions of macroscopic objects, etc.

Then the statistical distribution of the microscopic configuration is the maximum entropy distribution (as little information known to you as possible) on phase space satisfying the constraint that the quantities match the macroscopic quantities. - 4 The last section is about equilibrium stat. mech, and it would be nice to explicitly acknowledge that, because there's a lot of literature on using information theory for non-equilibrium stat. mech. I used to be very confused on how the latter works, because it seemed like trying to get something for nothing --- truly non-equilibrium states can be infinitely complex. I finally realised (via Jaynes) that one can replace the equilibrium condition with "reproducible", and in fact one always means the latter anyway. (cont) – genneth Sep 8 '11 at 7:18 1 (cont) This pushes stat. mech to have a more inference-driven flavour, which probably aligns better with the OP's question. The point is that we do experiments and find out that some experimental controls are sufficient for some outcomes --- it is then simply logic that there are sufficient relationships between them to specify the macroscopic behaviour. If we then know the microscopic behaviour we can then play this game of maximum entropy and derive the statistical mechanics of the experiment. – genneth Sep 8 '11 at 7:20 @genneth: I would do that if I thought there was a single example where this description worked. Do you know any system? The only maximal entropy distributions I know are in equilibrium stat-mech.
Everywhere else, it's just a terrible zeroeth approximation. – Ron Maimon Sep 8 '11 at 15:49 1 @Gerenuk: Your intuition about this is faulty, because you are used to the situation where you can see the particle, and therefore know where it is and how fast its going at all times. If you don't know where the particle is, there is an entropy associated with the particle, and the p_i are a p(x,v) to find it at any position and velocity. The laws of black hole entropy are different, and life has nothing to do with entropy (but it doesn't violate it). – Ron Maimon Sep 9 '11 at 14:44 2 @Gerenuk: I know where you are wrong, and the answer is very easy and well known. It is treated completely in the Jaynes reference, and I have nothing more to add to this. The thing I put above is something which doesn't appear in many places, namely a good simple explanation of noiseless coding, because this justifies $p\log(p)$. Everything else is philosophy, and Jaynes explains it well (in the link above). – Ron Maimon Sep 10 '11 at 3:35 show 7 more comments The best (IMHO) derivation of the $\sum p \log p$ formula from basic postulates is the one given originally by Shannon: Shannon (1948) A Mathematical Theory of Communication. Bell System Technical Journal. http://cm.bell-labs.com/cm/ms/what/shannonday/shannon1948.pdf However, Shannon was concerned not with physics but with telegraphy, so his proof appears in the context of information transmission rather than statistical mechanics. To see the relevance of Shannon's work to physics, the best references are papers by Edwin Jaynes. He wrote dozens of papers on the subject. My favorite is the admittedly rather long Jaynes, E. T., 1979, `Where do we Stand on Maximum Entropy?' in The Maximum Entropy Formalism, R. D. Levine and M. Tribus (eds.), M. I. T. Press, Cambridge, MA, p. 15; http://bayes.wustl.edu/etj/articles/stand.on.entropy.pdf - The functional form of the entropy $S = - \sum p \ln p$ can be understood if one requires that entropy is extensive, and depends on the microscopic state probabilities $p$. Consider a system $S_{AB}$ composed of two independent subsystems A and B. Then $S_{AB} = S_A +S_B$ and $p_{AB} = p_A p_B$ since A and B are decoupled. $$S_{AB} = - \sum p_{AB} \ln p_{AB} = -\sum p_{A} \sum p_B \ln p_A -\sum p_{A} \sum p_B \ln p_B$$ $$= -\sum p_{A} \ln p_A - \sum p_B \ln p_B = S_A + S_B$$ This argument is valid up to a factor, which turns out to be the Boltzmann constant $k_B$ in statistical mechanics: $S = - k_B \sum p \ln p$ which is due to Gibbs, long before Shannon. -
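Both the additivity argument just given and the counting argument in the first answer are easy to check numerically. The following sketch is not from the answers; the distributions pA and pB are arbitrary examples, and entropies are in nats (natural log):

```
import numpy as np
from math import lgamma

def shannon_entropy(p):
    """S = -sum p ln p over the nonzero probabilities (in nats)."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return -np.sum(p * np.log(p))

# Additivity for independent subsystems: p_AB = p_A * p_B  =>  S_AB = S_A + S_B.
pA = np.array([0.5, 0.3, 0.2])
pB = np.array([0.7, 0.2, 0.1])
pAB = np.outer(pA, pB).ravel()
print(shannon_entropy(pAB), shannon_entropy(pA) + shannon_entropy(pB))

# Counting check of the coding argument: the number of length-N strings whose
# letter frequencies are N*p_i is a multinomial coefficient, and
# (1/N) * ln(multinomial) approaches S as N grows.
def log_multinomial(counts):
    return lgamma(sum(counts) + 1) - sum(lgamma(c + 1) for c in counts)

N = 100000
counts = np.round(pA * N).astype(int)
print(log_multinomial(counts) / N, shannon_entropy(pA))
```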
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 23, "mathjax_display_tex": 9, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9334155321121216, "perplexity_flag": "head"}
http://physics.stackexchange.com/questions/tagged/forces
Tagged Questions This tag is for the classical concept of forces, i.e. the quantities causing an acceleration of a body. It expands to the strong/electroweak force only insofar as they act comparable to ‘classical’ forces. Use [tag:particle-physics] for decay channels due to forces and [tag:newtonian-mechanics] or ... 2answers 61 views How can a car's engine move the car? Newton's First Law of Motion states that an object at rest or uniform motion tends to stay in that state of motion unless an unbalanced, external force acts on it. Say if I were in a car and I push ... 1answer 32 views If An Object Explodes With A Force, What Force Are Fragments Given? So let's say for example I have an object with 5kg of mass. It explodes with a force of 500N. The object fragments into four fragments: a 0.5kg, a 1kg, a 1.5kg and a 2kg object. What force does ... 1answer 34 views Another Inclined plane question I did the FBD, and I found too many variables which are not eliminating...Moreover, I believe this question is based on kinetic and static friction. But, $\mu$ here is ambiguously defined...How Do I ... 1answer 47 views Comparing Static Frictions In this figure, which of the static frictional forces will be more? My aim isn't to solve this particular problem but to learn how is static friction distributed . Since each of the rough-surfaces ... 2answers 55 views Massless string Paradox If we introduce the notion of a massless string to denote the fact that net force on a massless string will always be $0$, since it is massless . How can these massless strings ever accelerate when ... 1answer 41 views How large of a solar sail would be needed to travel to mars in under a year? I'm attempting to approach this using the identity $$F/A = I/c$$ I can solve for Area easily enough $$A = F(c/I)$$ and I know the distance $d$ is $$d=1/2(at^2)$$ But I'm having difficulty trying to ... 1answer 32 views Falling through the ground [duplicate] I do not know much about physics but I know that according to Newtons third law of motion when we walk we are pushing the ground down but the ground is pushing us up. What force is making the ground ... 1answer 44 views A theoretical problem on Mechanics [closed] Two particles with masses $m_1$ and $m_2$ are moving in 3D space with some Cartesian coordinate system. There are known the laws of motion of these particles, i.e. the position vectors $\vec{r_1}(t)$ ... 2answers 92 views Repulsion does not exist; Gravity assist slingshot as a repulsive force [closed] unification theory might say that there is only one force in universe that is called gravity(attraction). So at atomic level for same charges that repel eachother(electrons etc.), if thought of as a ... 1answer 56 views Why body starts moving when force is applied? [duplicate] The recent question by m.buettner regarding self-inductance and its resistance to EMF Faraday's law - does the induced current's magnetic field affect the change in flux?, recalled me the ... 0answers 33 views Does throwing a football harder/faster mean someone has more strength? [closed] I have played football with another individual and noticed that they are able to propel this ball faster and harder than me, despite me being able to out lift them in virtually almost every major ... 0answers 15 views Acceleration by spherical particles (micron-scale) by an external force I am looking for an expression for the velocity of a micron sized (1 - 10 micron diameter) sized particles under accelerating forces. I have aerosols in mind. 
This is what I have in mind The ... 0answers 49 views Does the slinky base stay perfectly level during the initial free fall [duplicate] In this related question: Slinky base does not immediately fall due to gravity It is observed that the base does not fall immediately. Obviously the center of mass is in free fall, and the tension ... 2answers 47 views Clarification regarding Newton's Third Law of Motion and why movement is possible [duplicate] Newton's third law states that to every action, there is an equal and opposite reaction. If that's the case, then how do things move at all? Shouldn't all applied forces be canceled by the equal and ... 1answer 48 views Air pressure relative to a force on a bag? Assume an airtight bag occupied by air such that the pressure inside the bag is equal to the atmospheric pressure. Assume the surface tension of the bag is negligible. What is the change in air ... 2answers 44 views Electrical force as a replacement for Gravitational force [closed] Suppose the force between the Earth and Moon were electrical instead of gravitational, with the Earth having a positive charge and the Moon having a negative one. If the magnitude of each charge ... 3answers 88 views Man in elevator, holding it, on a scale This is the scenario where my mass is $60 kg$, the mass of the elevator is $30kg$, and due to a malfunction, I have to hold myself and the elevator at rest. The question is, if there is a weighing ... 3answers 97 views Rotating a stone with a string Suppose we are given a stick and a stone tied to the stick by a string. Now if we rotate the stone around the stick the stone rises in height (see picture below). My question is which force accounts ... 1answer 27 views Terminal velocity and force pull [closed] I can't figure out this problem . Buoyancy force and gravity remain constant and viscous force by is $-kv$. And these forces all balance, but data isn't given according to that, or I am not able to ... 2answers 48 views How would you determine whether an object is at equilibrium? How would you determine whether an object is at equilibrium or not? What is the definition of equilibrium? 0answers 72 views How to calculate mechanical advantage of a worm gear? How to calculate mechanical advantage of a worm gear? My textbook simply use the turn ratio as the mechanical advantage, but I'm not sure how that works. My thinking: If the worm has a radius of ... 2answers 77 views Forces as One-Forms and Magnetism Well, some time ago I've asked here if we should consider representing forces by one-forms. Indeed the idea as, we work with a manifold $M$ and we represent a force by some one-form \$F \in ... 1answer 73 views Given a potential energy function, find expression of the force of a particle? This comes from an AP review packet. I'm given a potential energy functon, $$U(r)=br^{-3/2} + c,$$ where $b$ and $c$ are constants, and need to find the expression for the force on the particle. ... 3answers 72 views Smallest force to move a brick Having a brick lying on a table, I can exert horizontal force equal to $\mu m g$ to a middle of it's side, and it will start moving (assume $\mu$ is the friction coefficient). However, can I make the ... 1answer 47 views Problem about moments and equilibrium. [closed] A plank is on top of 2 branches. It is in contact at 2 points, A and B. The plank is 90 cm long. A is on the left of B. A is 15 cm from the left end of the plank. B is 20 cm from the right end of the ... 2answers 654 views Why can't Humans run any faster? 
If you wanted to at least semi-realistically model the key components of Human running, what are the factors that determine the top running speed of an individual? The primary things to consider would ... 4answers 410 views Why do we still need to think of gravity as a force? Firstly I think shades of this question have appeared elsewhere (like here, or here). Hopefully mine is a slightly different take on it. If I'm just being thick please correct me. We always hear ... 1answer 37 views Mass of a jumper, given height jumped and force exerted [closed] This is the question: An exceptional vertical jump from rest would raise a person 0.83m off the ground. To do this, a constant force of 2593N would need to be exerted against the ground. ... 0answers 20 views Some basic questions about electric field & nucleus [duplicate] I am not good in physics.You can say I am beginner in this field. I have some basic questions. I ju st want to know that [1] If there is repulsive force between same charges proton-proton then why ... 0answers 27 views Let F(x,y,z)=−c(r/||r||3) be the force resulting from the inverse square law… [closed] c is a constant and r=(x,y,z). Show that f(x,y,z)=c/sqrt(x2+y2+z2) is a potential function for F. What can be concluded from any path from point A to point B in F? What can be concluded about a simple ... 0answers 46 views Robot controling pouring process from a bottle I need to solve a problem within mechanic of fluids for a part of my thesis. Robot will pick up a bottle of beer, cola, julebrus or any other kind of beverage. And then it has to bring it to the glass ... 1answer 79 views Energy needed to lift and bring down an object A mass of 0.5 Kg needs to be moved from point A to another point (B) which is 1 meters above point A. The time for this movement should be 0.2 seconds, then the mass is kept at position B for another ... 3answers 124 views Why does a rod rotate? I'm a physics tutor tutoring High School students. A question confused me a lot. Question is: Suppose a mass less rod length $l$ has a particle of mass $m$ attached at its end and the rod is ... 2answers 116 views Physics behind the flow of gas coming out of a balloon I'm working with stratospheric balloons (latex ones) and I want to put a valve on it so it can float for a longer time. I'm trying to define which valve I should use, which demands I estimate the flow ... 1answer 62 views Convert a 200mm linear stroke into 90 degrees motion Can anyone help me Convert a 200 mm linear stroke into 90 degrees motion with as much mechanical advantage as possible or into two 90 degrees motions with as much mechanical advantage as possible? ... 3answers 162 views How do forces work Is there a mechanistic-type explanation for how forces work? For example, two electrons repel each other. How does that happen? Other than saying that there are force fields that exert forces, how ... 1answer 248 views Is acceleration an average? Background I'm new to physics and math. I stopped studying both of them in high-school, and I wish I hadn't. I'm pursuing study in both topics for personal interest. Today, I'm learning about ... 2answers 46 views Force applied in a body moving at high speed Consider a rod of length $l$ and uniform density is moving at high speed. I want to deflect the rod where should I need to apply the minimum force, so that the rod is deflected..? 1answer 46 views Forces on a particle moving in a vertical circle In the diagrma, a particle (A, mass 0.6kg) is moving in a vertical circle. 
The question is: When it gets to the lowest point, what is the tension in the light rod that is between the center of the ... 1answer 86 views Problem solving moments in equilibrium Two straight ladders, AC and BC, have equal weights $W$. They stand in equilibrium with their feet, A and B, on rough horizontal ground and their tops freely hinged at C. Angle CAB = 60$^o$, angle ... 1answer 31 views Find the bending moment of a pole attached to a moving block I'm having trouble with the following problem. What I've done so far: x-y is the usual coordinate system. $a=\frac{F}{m}=\frac{800}{60}$ and the y component of this is $a_y=a\sin{60^\circ}$. To ... 0answers 72 views Tension $T$ in cable [closed] Calculate the tension $T$ in each cable, as well as the magnitude and direction of the force exerted on the strut by the pivot in each case mostrados.En systems, w is the weight of the suspended box, ... 2answers 62 views Resolution of vectors What is the fundamental basis of resolution of vector. Suppose we have a vector $\vec{mg}$, now we resolve it into two components, horizontal and vertical. My question is what is the basis for telling ... 3answers 135 views Non-SHM oscillatory motion How to solve these kind of questions , where $|F| \propto x^2$? How to find time period and velocity type related things to the oscillatory motion? ... 3answers 130 views What needs to be integrated to solve this problem? An object is placed on a frictionless table with its one end attached to a cord which is connected to a pulley and the tension is maintained constant at 25 N. what is the change in kinetic energy ... 1answer 44 views Measuring vibration and converting to force (N) The test is: To have a rotating machine, bolted into a factory floor. To measure the vibration on 3 axis (output of accelerometers can be acceleration or velocity in $\mathrm{m/s}$ or ... 1answer 61 views Find the dielectric constant of the medium? Two point charges a distance $d$ apart in free space exert a force of $1.4\times10^{-4}N$. When the free space is replaced by a homogeneous dielectric medium, the force becomes $0.9\times10^{-4}N$. ... 0answers 42 views Do all the 4 forces of nature act at the same speed? [duplicate] It is believed that gravity, the weakest of the four forces propagates at the speed of light, cf. e.g. this Phys.SE post. One would expect (perhaps erroneously) that the other, stronger, forces acted ... 1answer 82 views Shear Flow corresponding to Eccentric Shear Force of a Closed-section Beam (Structural Analysis - Mechanics) Been stumped with this question for way too long... its a beam with a thin-walled rectangular cross-section, and a shear force is acting at a distance from the shear center. I know my decomposition of ... 1answer 57 views How does the buoyant force on a cube at the bottom of a tank of water manifest itself? Let's say a 10N cube (in air, on Earth) rests flat on a scale at the bottom of a tank of water, and the scale reads 8N, so there is 2N of buoyant force on the cube. How does the buoyant force ...
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 29, "mathjax_display_tex": 4, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9371271729469299, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/17202/sum-of-the-first-k-binomial-coefficients-for-fixed-n/69153
## Sum of ‘the first k’ binomial coefficients for fixed n

I am interested in the function $\sum_{i=0}^{k} {N \choose i}$ for fixed $N$ and $0 \leq k \leq N$. Obviously it equals 1 for $k = 0$ and $2^{N}$ for $k = N$, but are there any other notable properties? Any literature references? In particular, does it have a closed form or notable algorithm for computing it efficiently? In case you are curious, this function comes up in information theory as the number of bit-strings of length $N$ with Hamming weight less than or equal to $k$.

Edit: I've come across a useful upper bound: $(N+1)^{\underline{k}}$ where the underlined $k$ denotes falling factorial. Combinatorially, this means listing the bits of $N$ which are set (in an arbitrary order) and tacking on a 'done' symbol at the end. Any better bounds?

## 9 Answers

I'm going to give two families of bounds, one for when $k = N/2 + \alpha \sqrt{N}$ and one for when $k$ is fixed. The sequence of binomial coefficients ${N \choose 0}, {N \choose 1}, \ldots, {N \choose N}$ is symmetric. So you have $\sum_{i=0}^{(N-1)/2} {N \choose i} = {2^N \over 2} = 2^{N-1}$ when $N$ is odd. (When $N$ is even something similar is true, but you have to correct for whether you include the term ${N \choose N/2}$ or not.) Also, let $f(N,k) = \sum_{i=0}^k {N \choose i}$. Then you'll have, for real constant $\alpha$, $\lim_{N \to \infty} {f(N,\lfloor N/2+\alpha \sqrt{N} \rfloor) \over 2^N} = g(\alpha)$ for some function $g$. This is essentially a rewriting of a special case of the central limit theorem. The Hamming weight of a word chosen uniformly at random is a sum of Bernoulli(1/2) random variables. For fixed $k$ and $N \to \infty$, note that $${{n \choose k} + {n \choose k-1} + {n \choose k-2} + \cdots \over {n \choose k}} = {1 + {k \over n-k+1} + {k(k-1) \over (n-k+1)(n-k+2)} + \cdots}$$ and we can bound the right side from above by the geometric series $${1 + {k \over n-k+1} + \left( {k \over n-k+1} \right)^2 + \cdots}$$ which equals ${n-(k-1) \over n - (2k-1)}$. Therefore we have $$f(n,k) \le {n \choose k} {n-(k-1) \over n-(2k-1)}.$$ - Using the summation formula for Pascal's triangle, you get a shorter geometric series approximation which may work well for k less than but not too close to N/2. This is (N+1) choose k + (N+1) choose (k-2) + ..., which has about half as many terms and a ratio that is bounded from above by (k^2-k)/((N+1-k)(N+2-k)), giving [((N+1-k)(N+2-k))/((N+1-k)(N+2-k) -k^2 +k)]*[(N+1) choose k] as an uglier but hopefully tighter upper bound. Gerhard "Ask Me About System Design" Paseman, 2010.03.06 – Gerhard Paseman Mar 6 2010 at 8:03 One can take this a step further. In addition to combining pairs of terms of the original sum N choose i to get a sum of terms of the form N+1 choose 2j+c, where c is always 0 or always 1, one can now take the top two or three or k terms, combine them, and use them as a base for a "pseudo-geometric" sequence with common ratio a square, cube, or kth power of the initial common ratio. This will give more accuracy at the cost of computing small sums of binomial coefficients. Gerhard "Ask Me About System Design" Paseman, 2010.03.27 – Gerhard Paseman Mar 27 2010 at 17:00 When k is so close to N/2 that the above is not effective, one can then consider using 2^(N-1) - c (N choose N/2), where c = N/2 - k.
Gerhard "Ask Me About System Design" Paseman, 2010.03.27 – Gerhard Paseman Mar 27 2010 at 17:04 ### You can accept an answer to one of your own questions by clicking the check mark next to it. This awards 15 reputation points to the person who answered and 2 reputation points to you. Jean Gallier gives this bound (Proposition 4.16 in Ch.4 of "Discrete Math" preprint) $$f(n,k) < 2^{n-1} \frac{{n \choose k+1}}{n \choose n/2}$$ where $f(N,k)=\sum_{i=0}^k {N\choose i}$, and $k\le n/2-1$ for even $n$ It seems to be worse than Michael's bound except for large values of k Here's a plot of f(50,k) (blue circles), Michael Lugo's bound (brown diamonds) and Gallier's (magenta squares) ```n = 50; bisum[k_] := Total[Table[Binomial[n, x], {x, 0, k}]]; bibound[k_] := Binomial[n, k + 1]/Binomial[n, n/2] 2^(n - 1); lugobound[k_] := Binomial[n, k] (n - (k - 1))/(n - (2 k - 1)); ListPlot[Transpose[{bisum[#], bibound[#], lugobound[#]} & /@ Range[0, n/2 - 1]], PlotRange -> All, PlotMarkers -> Automatic] ``` Edit The proof, Proposition 3.8.2 from Lovasz "Discrete Math". Lovasz gives another bound (Theorem 5.3.2) in terms of exponential which seems fairly close to previous one $$f(n,k)\le 2^{n-1} \exp (\frac{(n-2k-2)^2}{4(1+k-n)}$$ Lovasz bound is the top one. ```n = 50; gallier[k_] := Binomial[n, k + 1]/Binomial[n, n/2] 2^(n - 1); lovasz[k_] := 2^(n - 1) Exp[(n - 2 k - 2)^2/(4 (1 + k - n))]; ListPlot[Transpose[{gallier[#], lovasz[#]} & /@ Range[0, n/2 - 1]], PlotRange -> All, PlotMarkers -> Automatic] ``` - I like this plot. It's a shame that Gallier doesn't include the proof. – Michael Lugo Aug 31 2010 at 22:15 Yeah, the proof he refers to is actually for a different bound (although it seems numerically close) – Yaroslav Bulatov Aug 31 2010 at 23:07 Here's Lovasz proof, turns out it's in Chapter 3, not Chapter 5 yaroslavvb.com/upload/lovasz-proof2.pdf – Yaroslav Bulatov Sep 1 2010 at 2:19 There is no useful closed-form for this. You can write it down as `$$2^N - \binom{N}{k+1} {}_2F_{1}(1, k+1-N, k+2; -1)$$` but that's really just a rewrite of the sum in a different form. - 1 I would not be so harsh in saying that the hypergeometric form is "not useful"; for instance, one can apply a Pfaff transformation, dlmf.nist.gov/15.8.E1 , to yield the identity $${}_2 F_1\left({{1 \quad m-n+1}\atop{m+2}}\mid-1\right)=\frac12 {}_2 F_1\left({{1 \quad n+1}\atop{m+2}}\mid\frac12\right)$$ – J. M. Oct 4 2011 at 0:57 1 The second bit has an argument that is nearer the expansion center 0 for the Gaussian hypergeometric series, so it stands to reason that the convergence is a bit faster. Also, one no longer needs to add terms of different signs... – J. M. Oct 4 2011 at 0:59 One standard estimate when the sum includes about half of the terms is the Chernoff bound, one form of which gives $$\sum_{k=0}^{(N-a)/2} {N\choose k} \le 2^N \exp\bigg(\frac{-a^2}{2N}\bigg)$$ This isn't so sharp. It's weaker than the geometric series bound Michael Lugo gave. However, the simpler form can be useful. - See A008949 "Triangle of partial sums of binomial coefficients." $T(n,k) = \sum_{i-0}^k {N\choose i}$ is the maximal number of regions into which $n$ hyperplanes of co-dimension $1$ divide $\mathbb R^k$ (the Cake-Without-Icing numbers) $2 ~T(n-1,k-1)$ is the number of orthants intersecting a generic linear subspace of $\mathbb R^n$ of dimension $k$. 
This tells you that if you choose $a$ independent random points on the unit sphere in $\mathbb R^d$, the probability that the origin is contained in the convex hull is $T(a-1,a-d-1)/2^{a-1}$. Complementarily, no hemisphere contains all of the points. The null space of the map by linear combinations of the points $\mathbb R^a \to \mathbb R^d$ generically has a kernel of dimension $a-d$, and this intersects the positive orthant iff $0$ is in the convex hull of the points. By symmetry, all orthants are equally likely. - There's a generating function there too: (1 - x*y)/((1 - y - x*y)*(1 - 2*x*y)). Also, for k=2,3,...,10 it's given by Sloane's A000124, A000125, A000127, A006261, A008859, A008860, A008861, A008862, A008863. – Douglas S. Stones Mar 6 2010 at 3:32

The sum without the $i=0$ term arises in the "egg drop" problem -- see Michael Boardman's article, "The Egg-Drop Numbers," in Mathematics Magazine, Vol. 77, No. 5 (December, 2004), pp. 368-372, which concludes saying, "it is well known that there is no closed form (that is, direct formula) for the partial sum of binomial coefficients" with a reference to the book A=B by Petkovsek, Wilf, and Zeilberger (but unfortunately no page reference). -

Each binomial coefficient satisfies $\left(\frac{N}{i}\right)^i \le {N \choose i} < \left(\frac{eN}{i}\right)^i$, so if $k \le N/2$, you can upper bound the sum by $k\left(\frac{eN}{k}\right)^k$. -

If you are interested in some back-of-the-hand order of magnitude estimates, you might consider looking at how $\binom{n}{k}$ behaves when $k=k(n)$ has a certain size. The idea I have in mind is to break down $\sum_{k=0}^m\binom{n}{k}$ into a sum over intervals of $k$ satisfying a certain regime. For example, look at terms where $k=\Theta(n)$, $k=\Theta(n^{1/2})$, etc. In general, using Stirling's approximation, you'll get: $\binom{n}{k}=\frac{n^ke^k}{k^k\sqrt{2\pi k}} A$ where $A:=\frac{n_{k}}{n^k}=\prod_{i=0}^{k-1}\left(1-\frac{i}{n}\right)$ and $n_k$ is the falling factorial. In particular, it's nicer to work with $B:=\ln(A) = \sum_{i=0}^{k-1} \ln\left(1-\frac{i}{n}\right)$. Now the idea is that each of the logarithm terms in $B$ can be Taylor expanded up to "sufficient" order depending on the size of $k$ compared to $n$. For example if $k=o(1)$, then $B\approx \sum_{i=0}^{k-1}-\frac{i}{n}\approx -\frac{k^2}{2n}$, so you get $A=e^{-\frac{k^2}{2n}(1+o(1))}$. In fact, you can do better than this if you expand $B$ to higher orders. In particular, if $k=o(n^{2/3})$, then $B=\sum_{i=0}^{k-1}-\frac{i}{n}+O(i^2n^{-2})=-\frac{k^2}{2n}+o(1)$ which gives $A=e^{-\frac{k^2}{2n}}(1+o(1))$ where now the $o(1)$ is no longer exponentiated. For other sizes of $k$, the exact same procedure works as long as you expand $B$ to sufficiently high order. -

All of the above regarded estimating an upper bound for the sum. I am interested in estimating a lower bound for the sum. Any advice? Dag - Please post new questions as separate questions – Yemon Choi Jun 30 2011 at 0:12 See mathoverflow.net/questions/55585/… . – Emil Jeřábek Jun 30 2011 at 10:57
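For readers who want to sanity-check the bounds in this thread numerically, here is a small Python sketch (mine, not from the thread, alongside the Mathematica plots above) comparing the exact partial sum with Michael Lugo's geometric-series bound and the exponential bound quoted from Lovasz, at n = 50:

```
from math import comb, exp

def f(n, k):
    """Exact partial sum of binomial coefficients, sum_{i <= k} C(n, i)."""
    return sum(comb(n, i) for i in range(k + 1))

def lugo_bound(n, k):
    # geometric-series bound from the accepted answer, valid when n - (2k - 1) > 0
    return comb(n, k) * (n - (k - 1)) / (n - (2 * k - 1))

def lovasz_bound(n, k):
    # 2^(n-1) * exp((n - 2k - 2)^2 / (4*(1 + k - n))), as quoted above
    return 2 ** (n - 1) * exp((n - 2 * k - 2) ** 2 / (4 * (1 + k - n)))

n = 50
for k in (5, 10, 15, 20, 24):
    print(k, f(n, k), round(lugo_bound(n, k)), round(lovasz_bound(n, k)))
```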
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 69, "mathjax_display_tex": 7, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9041635990142822, "perplexity_flag": "head"}
http://physics.stackexchange.com/questions/22548/21-dimensional-physics-theory-of-our-universe?answertab=votes
# $2+1$ dimensional physics theory of our universe?

Is there any physics theory that depicts our universe as $2+1$ dimensional? I heard that black holes seem to suggest that the world might be $2+1$ dimensional, so I am curious whether such a theory exists? Just for curiosity. - 2 – David Zaslavsky♦ Mar 19 '12 at 5:47 1 Note to OP: $2+1\neq 3$. Qmechanic has edited that in to signify that you mean 2 space dimensions and 1 time dimension. – Manishearth♦ Mar 19 '12 at 9:03

## 2 Answers

You've probably come across the holographic principle: see http://en.wikipedia.org/wiki/Holographic_principle for details. The idea is that because the entropy of a black hole is proportional to the area of the event horizon, all the information about the black hole is present on the event horizon, and this has dimension 2+1D. However, I don't think this should be taken to mean that our universe is 2+1 dimensional. Having said that, there are ideas from the more fringe areas of physics that at a very small scale/high energy the universe may be 1+1 dimensional. For example, causal dynamical triangulation (http://en.wikipedia.org/wiki/Causal_dynamical_triangulation) seems to show 2D behaviour at very small scales. There is an idea from string theory called the AdS/CFT correspondence (http://en.wikipedia.org/wiki/AdS/CFT_correspondence) that physics in an n dimensional gravity theory can be encoded by an n-1 dimensional bounding surface. However, this doesn't mean our world is 2+1D, but rather that our physics can be represented by a theory in 4+1D. -

I personally haven't heard of one, but then again I haven't heard much. First thing, are you sure that it's two you want? Or four? Because by relativity et al., our universe is 4D. Go into string theory, and it becomes 10/11/26 D. I think you may have misinterpreted the rubber-sheet explanation of gravity in general relativity as a theory. The 2D rubber sheet is only a way for us to imagine gravity acting, since it bends space. We can easily imagine a 2D sheet bending into the third dimension, as we live in a 3D world (3 macroscopic spatial dimensions, to be exact). What's actually happening, a 3D bit of space bending into the fourth dimension, is not that easy to imagine. Possibly impossible to imagine. So the 2D-rubber-sheet-and-rock is just a visual aid. On a side note, the reason I doubt that there is any such theory is this:

• We perceive 3 spatial dimensions, so there are at least 3.

• Electric/magnetic fields vary inversely with the square of distance, which indicates 3 macroscopic spatial dimensions. This is because they are mediated by EM waves, and intensity is proportional to $r^{1-N}$ for $N$ dimensions (a quick numerical check of this scaling follows this post). Then again, if our universe were 2D I guess the nature of EM waves could be different.

• We have 3 translational degrees of freedom in thermodynamics. -
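The inverse-square bullet point above can be made quantitative: if a fixed power spreads uniformly over a sphere in N spatial dimensions, the intensity falls off like $r^{1-N}$ because the sphere's surface measure grows like $r^{N-1}$. A small sketch, not from the answers; the unit-sphere area formula $2\pi^{N/2}/\Gamma(N/2)$ is the standard one:

```
import numpy as np
from scipy.special import gamma

def sphere_surface(N, r):
    """Surface measure of the (N-1)-sphere of radius r in N spatial dimensions."""
    return 2 * np.pi ** (N / 2) / gamma(N / 2) * r ** (N - 1)

P = 1.0  # total radiated power, spread uniformly over the sphere
for N in (2, 3, 4):
    r = np.array([1.0, 2.0, 4.0])
    intensity = P / sphere_surface(N, r)
    print(N, intensity / intensity[0])   # falls off like r**(1 - N)
```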
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 6, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9262081384658813, "perplexity_flag": "middle"}
http://mathhelpforum.com/discrete-math/199323-scheduling-integer-programming-question-desperate.html
# Thread: 1. ## Scheduling - integer programming question [desperate]

Question: Vicky is attending a summer school where she must study four units. She must do a one-hour lecture for each of the units every day. There are 6 one-hour time slots in a day. As there are way too many students, repeating lectures for each unit are offered at every time slot of the day and are taught by different lecturers. Vicky's decision on the class of a unit she likes to take is purely influenced by how much the lecturer looks like Keanu Reeves. [Hint: the decision variables should help you decide which class Vicky will take for each unit—i.e., Vicky to take Class i of Unit k. Furthermore, the score for how much the lecturer for Class i of Unit k looks like Keanu Reeves can be represented by the score $p_{ik}$.] a. Now, formulate an integer programming model to help Vicky choose her classes so as to maximize the total Keanu Reeves lookalike score. b. Derive a constraint so that Vicky never has to do more than two consecutive classes without a break. c. Modify the objective function, if Vicky's objective is now to start her day as late as possible.

My Answer:

Part A. Let $x_{ik} = 1$ if she takes class i of unit k, otherwise 0. max $z = \sum_{i=1}^{6}\sum_{k=1}^{4} x_{ik}p_{ik}$ s.t. $\sum_{i=1}^{6} x_{ik} = 1$ for $k=1,2,3,4$, and $x_{ik} \in \{0,1\}$ for $i = 1,\dots,6$, $k=1,2,3,4$.

Part B. $x_{i} + x_{i+1} \leq M(1-y)$ and $x_{i+2} \leq My$, where $y \in \{0,1\}$ and $M$ is a large integer.

Part C. Unfortunately I can't think of how to change the objective function for part c. I'm assuming it has something to do with the 'i' values, as these relate to the time of the class, but I can't figure out how to translate that into an answer. Any help would be greatly appreciated.
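For parts (a) and (b), here is one way the model could be prototyped in code. This is only a sketch and not a solution to the assignment: the scores $p_{ik}$ are random placeholders, the one-class-per-slot constraint is an extra assumption I added, and the no-three-consecutive-slots constraint is a common alternative linearization rather than the big-M form above. It assumes the PuLP library is available.

```
import random
import pulp

slots = range(1, 7)   # i = 1..6 one-hour time slots
units = range(1, 5)   # k = 1..4 units
random.seed(0)
p = {(i, k): random.randint(1, 10) for i in slots for k in units}  # placeholder scores

prob = pulp.LpProblem("keanu_lookalike", pulp.LpMaximize)
x = pulp.LpVariable.dicts("x", (slots, units), cat="Binary")

# (a) maximize the total lookalike score
prob += pulp.lpSum(p[i, k] * x[i][k] for i in slots for k in units)

# each unit is taken in exactly one slot
for k in units:
    prob += pulp.lpSum(x[i][k] for i in slots) == 1

# she can attend at most one class per slot (assumed, not stated in the model above)
for i in slots:
    prob += pulp.lpSum(x[i][k] for k in units) <= 1

# (b) in any window of three consecutive slots, at most two are used
for i in range(1, 5):
    prob += pulp.lpSum(x[j][k] for j in (i, i + 1, i + 2) for k in units) <= 2

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print(sorted((i, k) for i in slots for k in units if x[i][k].value() > 0.5))
```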
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 8, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.948219895362854, "perplexity_flag": "middle"}
http://mathoverflow.net/revisions/22903/list
## Return to Answer

As Robin has pointed out, for all primes $p$, $\mathbb{Q}_p$ is rigid, i.e., has no nontrivial automorphisms. It is sort of a coincidence that you ask, since I spent much of the last $12$ hours writing up some material on multiply complete fields which has applications here:

Theorem (Schmidt): Let $K$ be a field which is complete with respect to two inequivalent nontrivial norms (i.e., the two norms induce distinct nondiscrete topologies). Then $K$ is algebraically closed.

Corollary: Let $K$ be a field which is complete with respect to a nontrivial norm and not algebraically closed. Then every automorphism of $K$ is continuous with respect to the norm topology. (Proof: To say that $\sigma$ is a discontinuous automorphism is to say that the pulled back norm $\sigma^*|| \ ||: x \mapsto ||\sigma(x)||$ is inequivalent to $|| \ ||$. Thus Schmidt's theorem applies.)

In particular this applies to show that $\mathbb{Q}_p$ and $\mathbb{R}$ are rigid, since every continuous automorphism is determined by its values on the dense subspace $\mathbb{Q}$, hence the identity is the only possibility. (It is possible to give a much more elementary proof of these facts, e.g. using the Ostrowski classification of absolute values on $\mathbb{Q}$.)

At the other extreme, each algebraically closed field $K$ has the largest conceivable automorphism group: $\# \operatorname{Aut}(K) = 2^{\# K}$; see e.g. Theorem 80 of http://math.uga.edu/~pete/FieldTheory.pdf.

There is a very nice theorem of Bjorn Poonen which is reminiscent of, though does not directly answer, your other question. For any field $K$ whatsoever, and any $g \geq 3$, there exists a genus $g$ function field $K(C)$ over $K$ such that $\operatorname{Aut}(K(C)/K)$ is trivial. However there may be other automorphisms which do not fix $K$ pointwise.

There is also a sense in which for each $d \geq 3$, if you pick a degree $d$ polynomial $P$ with $\mathbb{Q}$-coefficients at random, then with probability $1$ it is irreducible and $\mathbb{Q}[t]/(P)$ is rigid. By Galois theory this happens whenever $P$ is irreducible with Galois group $S_d$, and by Hilbert Irreducibility the complement of this set is small: e.g. it is "thin" in the sense of Serre.

Addendum: Recall also Cassels' embedding theorem (J.W.S. Cassels, An embedding theorem for fields, Bull. Austral. Math. Soc. 14 (1976), 193-198): every finitely generated field of characteristic $0$ can be embedded in $\mathbb{Q}_p$ for infinitely many primes $p$. It would be nice to know some positive characteristic analogue that would allow us to deduce that a finitely generated field of positive characteristic can be embedded in a rigid field (so far as I know it is conceivable that every finitely generated field of positive characteristic can be embedded in some Laurent series field $\mathbb{F}_q((t))$, but even if this is true it does not have the same consequence, since Laurent series fields certainly have nontrivial automorphisms).
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 93, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9439372420310974, "perplexity_flag": "head"}
http://physics.stackexchange.com/questions/31264/what-is-the-principle-behind-centrifugation?answertab=votes
What is the principle behind centrifugation?

I understand the idea that when you spin something around, the centripetal acceleration shows up as an apparent force in the spinning system. However, I don't quite grasp how particles (in the non-subatomic sense) with different densities should be affected differently. Quite coarsely, I would expect to write down Newton's second law, but then the mass would cancel and the acceleration of every particle would be the same, regardless of mass. Is friction the answer? Or am I missing something silly?

3 Answers

You are missing the buoyant force. Let us take a simple example: when you pour some dirt into water and shake it, after some time the denser particles settle. Why? Because of the buoyant force $$F=mg-V_{obj}d_{fluid}g$$ $$a=g-\frac{d_{fluid}}{d_{obj}}g$$ Hence the less dense particles remain suspended for a long time. In the same way, in a centrifuge there is an outward acceleration playing the role of $g$ (due to the centrifugal force) but much greater in magnitude, and a buoyant force that goes with it. Hence denser particles move away from the axis and less dense particles move towards the axis. Although to an "inertial observer" the spinning centrifuge does not produce a "real force", it does increase the pressure within the fluid; but to an observer "in the test tube" it is the same as increasing the gravitational force.

You've got particles suspended in a liquid. There is Brownian motion due to the temperature of the liquid. Each particle feels gravity, but it also feels the Brownian motion (and also the viscosity of the liquid). The smaller the particle, the smaller its mass, so the smaller the force of gravity on it, to the point where the particles come down too slowly to wait for (like years). You can increase the "gravity" by spinning the sample in a centrifuge. Then the particles will "settle", but at rates determined by their size and/or density. That makes them segregate into layers.

The acceleration for different masses is not the same, and neither is the velocity. Newton's 2nd law in the rotating frame would include the viscous force along with the centrifugal force, and hence mass would not be immaterial. Moreover, all particles in the rotating liquid quickly attain their critical velocities, which depend strongly on the densities.
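To make the buoyancy argument in the first answer concrete, here is a minimal sketch (my own addition, not from the thread) that applies the answer's formula $a = a_c\,(1 - d_{fluid}/d_{obj})$ with the centrifugal acceleration $a_c=\omega^2 r$ standing in for $g$; the rotor speed, radius and densities are made-up illustrative numbers.

```python
# Minimal sketch: buoyancy-corrected acceleration of a particle in a spinning fluid,
# using a = a_c * (1 - d_fluid / d_obj) with a_c = omega**2 * r in place of g.
# All numerical values below are illustrative assumptions, not measured data.
import math

def effective_acceleration(d_obj, d_fluid, omega, r):
    a_c = omega**2 * r                      # centrifugal acceleration in the rotating frame
    return a_c * (1.0 - d_fluid / d_obj)    # buoyancy-corrected, as in the first answer

omega = 2 * math.pi * 100.0                 # 100 revolutions per second
r = 0.1                                     # 10 cm from the rotation axis
for d_obj in (800.0, 1200.0, 2500.0):       # particle densities in kg/m^3; the fluid is ~1000
    a = effective_acceleration(d_obj, 1000.0, omega, r)
    direction = "outward" if a > 0 else "inward"
    print(f"density {d_obj:6.0f} kg/m^3: a = {a:10.1f} m/s^2 ({direction})")
```

Particles denser than the fluid come out with a positive (outward) acceleration and less dense ones with a negative (inward) one, which is exactly the segregation described in the first answer.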
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.934167206287384, "perplexity_flag": "head"}
http://mathhelpforum.com/pre-calculus/187167-finding-interval-expression-having-two-quadratic-equations-print.html
Finding the interval of expression having two quadratic equations.
• September 2nd 2011, 06:59 PM sumedh
Finding the interval of expression having two quadratic equations.
What will be the values of 'm' so that the range of the equation $y= \frac{mx^2+3x-4}{-4x^2+3x+m}$ will be all real values (i.e. $y\in(-\infty,\infty)$)? Given: x can take all real values. Any help or hint will be appreciated.
• September 2nd 2011, 07:02 PM TKHunny
Re: Finding the interval of expression having two quadratic equations.
You might try the Quadratic Formula. Parts of it may tell you where the denominator has no Real Solution.
• September 2nd 2011, 07:29 PM sumedh
Re: Finding the interval of expression having two quadratic equations.
But even if the denominator has no real solution, any real value of x will still give some value of the denominator and hence some value of the whole expression, so I think that we don't have to find the solutions or anything related to the solutions.
• September 2nd 2011, 07:35 PM TKHunny
Re: Finding the interval of expression having two quadratic equations.
Whoops! I read "Domain", somehow, where it says "Range". Given that the Domain IS the Real Numbers, 3^2 + 4*4*m < 0, or 16m < -9, or m < -9/16. In the basic rule list of rational functions, the ONLY way to get ALL Real Numbers in the Range is to avoid Horizontal Asymptotes. With equal or lesser degree in the numerator than in the denominator, how can we do that? We can't. Well, we can also get two vertical asymptotes and pick up the last value in the middle section. Too bad we don't get any vertical asymptotes on this assignment. Where does that leave us?
• September 2nd 2011, 08:01 PM sumedh
Re: Finding the interval of expression having two quadratic equations.
But the answer to this is $m\in[1,7]$, so I am confused. (Worried)
• September 3rd 2011, 08:45 AM TKHunny
Re: Finding the interval of expression having two quadratic equations.
No, that's not it. It is as I stated. Are you SURE it says "Given: x can take on all Real values"? If it says that, then [1,7] is most definitely NOT a solution set to this question. [1,7] does do what I said earlier, "...we can also get two vertical asymptotes and pick up the last value in the middle section." However, like I also said earlier, "Too bad we don't get any vertical asymptotes on this assignment." Vertical asymptotes are inconsistent with "Given: x can take on all Real values." I should also point out that x = 7 and x = 1 are not solutions even if we get vertical asymptotes. You do not need to be confused. You just have to get used to arguing with the answer key when it is wrong.
• September 3rd 2011, 08:01 PM sumedh
Re: Finding the interval of expression having two quadratic equations.
OK, thank you. (Happy)
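For anyone who wants to experiment with the question numerically, here is a small probe (my own addition, not part of the thread): for a chosen $m$ it checks, over a grid of target values $y$, whether $(m+4y)x^2+3(1-y)x-(4+my)=0$ has a real root, i.e. whether that value of $y$ is attained. It only samples finitely many values, so it is a rough sanity check rather than a proof of anything.

```python
# Numerical probe: for a given m, is every sampled target value y attained by
# y = (m x^2 + 3x - 4) / (-4 x^2 + 3x + m) for some real x?
# Cross-multiplying gives (m + 4y) x^2 + 3(1 - y) x - (4 + m y) = 0.
import numpy as np

def every_sampled_y_attained(m, ys):
    a = m + 4 * ys
    b = 3 * (1 - ys)
    c = -(4 + m * ys)
    disc = b**2 - 4 * a * c
    # degenerate (linear) case a == 0 is solvable as long as b != 0
    ok = np.where(np.abs(a) < 1e-12, np.abs(b) > 1e-12, disc >= 0)
    return bool(ok.all())

ys = np.linspace(-50, 50, 100001)
for m in (0.5, 1, 4, 7, 8):
    print(m, every_sampled_y_attained(m, ys))
```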
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 3, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9152123332023621, "perplexity_flag": "middle"}
http://physics.aps.org/articles/v2/57
# Viewpoint: Electrons in graphene: an interacting fluid par excellence , Department of Physics, Simon Fraser University, Burnaby, British Columbia, Canada V5A 1S6 Published July 6, 2009  |  Physics 2, 57 (2009)  |  DOI: 10.1103/Physics.2.57 Along with the quark gluon plasma and cold atom gasses, graphene is establishing its place as a perfect liquid. #### Graphene: A Nearly Perfect Fluid Markus Müller, Jörg Schmalian, and Lars Fritz Published July 6, 2009 | PDF (free) Ever since it was shown that graphene—a single layer of carbon atoms—could be isolated from graphite, it has occupied a center stage of condensed matter physics. The popularity of graphene is rooted in the unusual nature of its low-energy excitations: near the Fermi level, the electron energies scale linearly with their momenta. This means that the electrons can be described as “massless” fermions, though with a velocity of about 300 times less than the velocity of light. The linear dispersion relation also implies a vanishing density of single-particle states at the Fermi level, which should make the effects of the Coulomb interaction between electrons weak. This usual mantra, however, may sometimes be quite misleading, as argued by Markus Müller at the ICTP in Trieste, Italy, Jörg Schmalian at Ames Lab and Iowa State University, US, and Lars Fritz at Harvard University, US, in a paper appearing in Physical Review Letters [1]. They show that a particularly suitable measure of how strongly the excitations in a given quantum fluid interact is given by the dimensionless ratio between the fluid’s shear viscosity and entropy density. They find that the value of this ratio in graphene is surprisingly close to its likely lower bound [2]. Such a low viscosity-to-entropy ratio, somewhat paradoxically, means that the electrons in graphene form a quantum liquid that is, in fact, strongly interacting. By this criterion graphene comes closer to being a “perfect fluid” than several other quantum systems that have often been labeled as strongly correlated. Landau’s notion of a Fermi liquid as a system of interacting fermions that, at low energies, effectively behave as noninteracting quasiparticles is the central paradigm of many-body physics. Our modern way of thinking about a Fermi liquid is to use the language of renormalization group theory—a theory that extracts the essential physics of many-body systems by zooming out from the microscopic details. In this framework, one would say that although the Coulomb interaction between electrons in a typical metal in an absolute sense is not weak, its effective strength depends on the energies at which the system is probed [3]. In a Fermi liquid, the effective interaction parameters decrease at lower temperatures and frequencies until they reach saturation. This idea receives its simplest realization precisely in graphene: The Fermi surface is shrunk to just two points (“Dirac” points) in momentum space, near which the energy depends linearly on the quasiparticle’s momentum and the density of quasiparticle states also vanishes linearly. The scarcity of low-energy excitations renders all the short-range components of the Coulomb electron-electron interactions irrelevant [4]. Using a term from renormalization group theory, these interactions “flow” towards zero as fast as the first power of temperature. From this perspective, graphene appears to be a perfect example of a weakly interacting Fermi liquid. Or so it would seem. What about the fact that the Coulomb interaction is a long-ranged force? 
In metals, this does not matter much, since the quasiparticles screen the interaction and make it effectively short ranged. But in graphene there are not enough low-energy quasiparticles to screen effectively, and the Coulomb interaction remains long ranged [5]. As a result, the Coulomb interaction does not change with the energy scale. Or, in the parlance of renormalization group theory, one would say that the coupling $g$ in the Coulomb interaction $V(r)=g/r$ represents an exactly marginal coupling, which does not flow at all with the change in energy scale. Its main effect, it turns out, is to produce a shift in the Fermi velocity that diverges at low temperatures, albeit only as a logarithm: $v(T)\sim g\log(T_0/T)$, where the high-energy scale $T_0\sim 10^5\,\mathrm{K}$ is set by the width of the conduction energy band [6]. If one defines the dimensionless strength of the Coulomb interaction $\alpha(T)=g/(\hbar v(T))$, this coupling constant would slowly approach zero as the system is probed at progressively lower temperatures (Fig. 1). So theorists can paint a picture of graphene with only a few strokes: At temperature or frequency scales much below $T_0$ any physical quantity is, to leading order, given by its value for the noninteracting system of quasirelativistic particles with an effective velocity $v(T)$. The leading corrections are small at low temperature and proportional to $\alpha(T)\sim 1/\log(T_0/T)$ [7]. As long as the electron-electron interaction is not strong enough to turn graphene's ground state into an insulator [4], the effect of the interaction on the low-energy properties is fairly small. In this respect graphene resembles some of our most cherished physical theories: quantum electrodynamics (at low energies) and quantum chromodynamics (at high energies).

There are, however, quantities that do diverge in the noninteracting regime; namely, some response functions, which measure how quickly the system restores the thermodynamic equilibrium. In fact, in the complete absence of interactions, the relaxation time would be infinite. In graphene the relaxation time $\tau$ is inversely proportional to temperature and $\alpha^2$. This suggests that the viscosity of the electronic fluid in graphene (which, like in any fluid, is a measure of resistance to a shear force and for graphene is given by $\eta\sim n\langle\varepsilon\rangle\tau$, where $n\sim(k_B T/\hbar v)^2$ is the density of thermally excited quasiparticles and $\langle\varepsilon\rangle\sim k_B T$ is their average energy) can to leading order be written as $\eta=(C_\eta/\hbar)(k_B T/v\alpha)^2$, where $C_\eta$ is a numerical constant of proportionality. Normalizing this result by graphene's entropy density then yields $$\eta/s=\frac{C_\eta\pi}{9\zeta(3)}\,\frac{1}{\alpha(T)}\,\frac{\hbar}{k_B},\qquad(1)$$ with all the numerical constants fully displayed. The last expression for the viscosity-to-entropy ratio shows it is a dimensionless number in the units of nature's constants $\hbar/k_B$. If the low-temperature limit of the flowing coupling constant $\alpha(T)$ were finite, this number would be a characteristic of a "fixed point" of the renormalization group flow, akin to other universal numbers that characterize fixed points, critical exponents and amplitude ratios as prime examples. (Fixed points are the special points in the coupling constant space where the flow stops.) Since $\alpha(T)$, however, in graphene approaches zero at low temperatures, the ratio $\eta/s$ ultimately diverges, but only very slowly. Building on their previous calculation of graphene's dc conductivity [8], in a technical tour de force, Müller et al. determined the constant of proportionality $C_\eta$ to ultimately find $\eta/s\sim 0.00815\,(\log(T_0/T))^2\,\hbar/k_B$. At room temperatures, this number is only $\sim 0.3\,\hbar/k_B$.

To appreciate the above result one obviously needs a useful point of reference. Let us ask what the same ratio would be, this time in a truly strongly interacting system. Kovtun, Son, and Starinets [2] studied a number of strongly interacting field theories, some of which were dual to those describing black holes, and found that the ratio is finite, universal, and in fact not much lower than the result in graphene: $\eta/s=(1/4\pi)(\hbar/k_B)$. One expects that this number may provide a natural lower bound, and indeed the result for graphene conforms to this conjecture. The dimensionless viscosity of graphene, however, is significantly lower than in several other systems that would undoubtedly deserve to be characterized as strongly coupled. For example, in cold atoms with a diverging scattering length, $\eta/s\sim 0.5\,(\hbar/k_B)$, whereas for helium at the superfluid critical point, $\eta/s\sim 0.7\,(\hbar/k_B)$ [9]. It should be noted, though, that if the chemical potential $\mu$ of graphene is tuned to lie away from the two Dirac points, it will behave as a conventional metal with a Fermi surface. In this case, the viscosity increases at low temperatures at a much faster rate, as $\eta/s\sim(|\mu|/T)^3$.

The fact that the viscosity-to-entropy ratio in graphene is so low and almost temperature independent is another example that this material is what would be called a "quantum critical" system: there is no length scale other than the ones provided externally by the temperature, frequency of the measurement probe, or finite size of the sample. One of the main preoccupations of condensed matter physicists for many years has been understanding quantum critical points in various systems. Graphene appears to be a ready-made, particularly gentle example of fermionic quantum criticality, with comparatively few gapless fermions appearing only near special points in momentum space. One may expect many lessons about the nature of quantum transport, response to magnetic field, or the effects of disorder in critical systems to be learned from this deceptively simple-looking system. The main lesson of the work of Müller, Schmalian, and Fritz, however, may be that graphene is, in a certain well-defined sense, rather far from being weakly interacting. In fact, with a possible exception of the ultrarelativistic quark-gluon plasma [9], from temperatures as low as $50\,\mathrm{K}$ to temperatures as high as $300\,\mathrm{K}$ graphene may be closer to the notion of a perfect strongly interacting fluid than any other quantum system we currently know. One cannot help but wonder what other surprises this fascinating material has in store.

### References

1. M. Müller, J. Schmalian, and L. Fritz, Phys. Rev. Lett. 103, 025301 (2009).
2. P. Kovtun, D. T. Son, and A. O. Starinets, Phys. Rev. Lett. 94, 111601 (2005).
3. R. Shankar, Rev. Mod. Phys. 66, 129 (1994).
4. I. F. Herbut, Phys. Rev. Lett. 97, 146401 (2006).
5. D. V. Khveshchenko, Phys. Rev. Lett. 87, 246802 (2001).
6. J. González, F. Guinea, and M. A. H. Vozmediano, Phys. Rev. B 59, R2474 (1999).
7. I. F. Herbut, V. Juričić, and O. Vafek, Phys. Rev. Lett. 100, 046403 (2008).
8. L. Fritz, J. Schmalian, M. Müller, and S. Sachdev, Phys. Rev. B 78, 085416 (2008).
9. T. Schäfer, Phys. Rev. A 76, 063618 (2007).
### About the Author

Igor F. Herbut is a professor of theoretical physics at Simon Fraser University, in Burnaby, British Columbia, in Canada. He studied physics in Belgrade and Baltimore, and received his Ph.D. from Johns Hopkins University in 1995. He has held visiting appointments at the Tokyo Institute of Technology, Kavli Institute for Theoretical Physics, Max Planck Institute, and University of Tokyo. Herbut's recent research has been focused on gauge theories of high-temperature superconductivity, and the effects of electron correlations and disorder in graphene. He has also authored a graduate textbook on the theory of phase transitions for Cambridge University Press.
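As a quick arithmetic check on the numbers quoted in the article, the short script below (my own illustration, not part of the Viewpoint) evaluates the quoted estimate $\eta/s\sim 0.00815\,(\log(T_0/T))^2\,\hbar/k_B$ with $T_0 = 10^5\,\mathrm{K}$; reading the logarithm as the natural log is my assumption, and it reproduces the stated room-temperature value of roughly $0.3\,\hbar/k_B$.

```python
# Evaluate eta/s ~ 0.00815 * (ln(T0/T))^2 in units of hbar/k_B.
# T0 ~ 1e5 K is the bandwidth scale quoted in the article; treating "log" as the
# natural logarithm is my assumption (it matches the quoted ~0.3 at 300 K).
import math

def eta_over_s(T, T0=1e5):
    return 0.00815 * math.log(T0 / T) ** 2

for T in (50.0, 300.0):
    print(f"T = {T:5.0f} K:  eta/s ~ {eta_over_s(T):.2f} hbar/k_B")
```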
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 50, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9222939014434814, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/77778?sort=votes
Bound of dimension of $H^1$ of certain line bundle

Let $X$ be a smooth variety over a field $k=\bar k$, $\mathrm{char}\,k=0$. Assume that $\mathcal{L}$ is a nef and big line bundle. Is there some bound saying that $$h^1(\mathcal{L}^{\otimes n}) \le f(n)$$ where $f(n)$ is a function only depending on $n$? For example, a polynomial. And how about the case when $\mathrm{char}\,k>0$?

1 I remember this kind of thing was studied by Alex Kuronya. Did you check his papers? – Hailong Dao Oct 11 2011 at 2:25
I have not yet. But I will try to check it soon. Thanks! – Michael Zhang Oct 11 2011 at 2:32

1 Answer

In general, for any line bundle $L$ over $X$, you have $h^i(X, L^{\otimes n})\leqslant C n^m$ where $m=\dim X$. (Write $L$ as the difference of two very ample line bundles, and use the usual restriction exact sequence.) In the case where $L$ is nef, one can say more: $h^i(X,L^{\otimes n})\leqslant C n^{m-i}$, using Fujita's theorem (cf. Lazarsfeld, Positivity in Algebraic Geometry, 1.2.29 "growth of cohomology"). But in the case $i=1$ you don't need Fujita, and this can be done more basically (see e.g. Debarre's "Higher dimensional Algebraic Geometry", Proposition 1.31, p. 21).

Is this result independent of the characteristic of the base field? – Michael Zhang Oct 11 2011 at 16:12
As this is a formal consequence of the restriction exact sequence and the definition of intersection numbers, I would tend to say yes. However, I almost never work over algebraically closed fields with positive characteristic, so I might be missing something. – Henri Oct 11 2011 at 22:02
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 18, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9171395301818848, "perplexity_flag": "head"}
http://mathoverflow.net/questions/93069/when-are-conformal-maps-holomorphic/93070
## When are conformal maps holomorphic?

It is a standard fact from elementary complex analysis that a holomorphic function $f:\mathbb{C}\to \mathbb{C}$ is a conformal mapping. Now, suppose I have a map $f':\mathbb{R}^2\to \mathbb{R}^2$ which is a conformal mapping of the plane onto itself. Write $$f'(x,y) = (f_1(x,y),f_2(x,y)).$$ Is $f_1 + if_2$ holomorphic?

3 Depends if you allow conformal maps to change the orientation. In any case, you get either just holomorphic or both holomorphic and anti-holomorphic maps. This is a part of any standard complex analysis course. By the way, in the "standard fact" you should also assume that $f$ is nonconstant. – Misha Apr 4 2012 at 4:45
3 The fact that it's conformal is equivalent to a condition on the matrix of first derivatives. If the determinant is nonnegative the condition is the same as the Cauchy-Riemann equations, otherwise the Cauchy-Riemann equations are off by a factor of -1. So if you stipulate the determinant of the Jacobian is nonnegative (orientation is preserved) then it implies holomorphicity (as long as $f$ is $C^1$). – Michael Greenblatt Apr 4 2012 at 4:57

## 1 Answer

If we define $f' : \mathbb{R}^2 \rightarrow \mathbb{R}^2$ by $f'(x,y) = (x, -y)$, then it is conformal, but the corresponding map $f_1 + i f_2$ is not holomorphic.
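For the concrete example in the answer, a short symbolic check (my own, not part of the thread) confirms both halves of the claim: the Jacobian of $(x,y)\mapsto(x,-y)$ satisfies the conformality condition $J^TJ=\lambda I$ (with $\det J<0$, so it is orientation-reversing), while the Cauchy-Riemann equations fail for $f_1+if_2 = x - iy$.

```python
# Check that f(x, y) = (x, -y) is angle-preserving but not holomorphic as x - i*y.
import sympy as sp

x, y = sp.symbols('x y', real=True)
f1, f2 = x, -y

J = sp.Matrix([[sp.diff(f1, x), sp.diff(f1, y)],
               [sp.diff(f2, x), sp.diff(f2, y)]])

print(J.T * J)          # identity matrix, so angles are preserved (lambda = 1)
print(J.det())          # -1, so orientation is reversed
# Cauchy-Riemann: u_x == v_y and u_y == -v_x, with u = f1 and v = f2
print(sp.Eq(sp.diff(f1, x), sp.diff(f2, y)))   # False: 1 != -1
print(sp.diff(f1, y) + sp.diff(f2, x))         # 0: this half does hold
```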
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 9, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9053061008453369, "perplexity_flag": "head"}
http://unapologetic.wordpress.com/2011/06/18/brackets-and-flows/
# The Unapologetic Mathematician

## Brackets and Flows

Now, what does the Lie bracket of two vector fields really measure? We've gone through all this time defining and manipulating a bunch of algebraic expressions, but this is supposed to be geometry! What does the bracket actually mean?

It turns out that the bracket of two vector fields measures the extent to which their flows fail to commute. We won't work this all out today, but we'll start with an important first step: the bracket of two vector fields vanishes if and only if their flows commute. That is, if $X$ and $Y$ are vector fields with flows $\Phi_s$ and $\Psi_t$, respectively, then $[X,Y]=0$ if and only if $\Phi_s\circ\Psi_t=\Psi_t\circ\Phi_s$ for all $s$ and $t$.

First we assume that the flows commute. As we just saw last time, the fact that $\Phi_s\circ\Psi_t=\Psi_t\circ\Phi_s$ for all $t$ means that $Y$ is $\Phi_s$-invariant. That is, $\Phi_{-s*}Y\circ\Phi_s=Y$. But this implies that the Lie derivative $L_XY$ vanishes, and we know that $L_XY=[X,Y]$.

Conversely, let's assume that $[X,Y]=0$. For any $p\in M$ we can define the curve $c_p$ in the tangent space $\mathcal{T}_pM$ by $c_p(s)=\Phi_{-s*}Y\circ\Phi_s(p)$. Since the Lie derivative vanishes, we know that $c_p'(0)=0$, and I say that $c_p'(s)=0$ for all $s$, or (equivalently) that $c_p(s)=Y_p$. Fixing any $s$ we can set $q=\Phi_s(p)$. Then we calculate

$\displaystyle\begin{aligned}c_p'(s)&=\lim\limits_{\Delta s\to0}\frac{c_p(s+\Delta s)-c_p(s)}{\Delta s}\\&=\lim\limits_{\Delta s\to0}\frac{1}{\Delta s}\left[\Phi_{-(s+\Delta s)*}\circ Y\circ\Phi_{s+\Delta s}(p)-\Phi_{-s*}\circ Y\circ\Phi_s(p)\right]\\&=\lim\limits_{\Delta s\to0}\frac{1}{\Delta s}\Phi_{-s*}\left[\left[\Phi_{-\Delta s*}\circ Y\circ\Phi_{\Delta s}\right]\left(\Phi_s(p)\right)-Y\left(\Phi_s(p)\right)\right]\\&=\Phi_{-s*}\lim\limits_{\Delta s\to0}\frac{1}{\Delta s}\left[\left[\Phi_{-\Delta s*}\circ Y\circ\Phi_{\Delta s}\right](q)-Y(q)\right]\\&=\Phi_{-s*}c_q'(0)=\Phi_{-s*}0=0\end{aligned}$

Now this means that $Y$ is $\Phi_s$-invariant for all $s$, meaning that $\Phi_s$ and $\Psi_t$ commute for all $s$ and $t$, as asserted.

As a special case, if $(U,x)$ is a coordinate patch then we have the coordinate vector fields $\frac{\partial}{\partial x^i}$. The fact that partial derivatives commute means that the brackets disappear:

$\displaystyle\left[\frac{\partial}{\partial x^i},\frac{\partial}{\partial x^j}\right]=0$

This corresponds to the fact that adding $s$ to the $i$th coordinate and $t$ to the $j$th coordinate can be done in either order. That is, their flows commute.

Posted by John Armstrong | Differential Topology, Topology
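As a concrete illustration of the coordinate computation (my own addition, using the standard coordinate formula for the bracket rather than anything from the post), one can check symbolically that coordinate fields commute while, say, $\partial_x$ and $x\,\partial_y$ do not.

```python
# Lie bracket of vector fields on R^2 in coordinates:
#   [X, Y]^i = sum_j ( X^j * dY^i/dx_j - Y^j * dX^i/dx_j )
import sympy as sp

x, y = sp.symbols('x y')
coords = (x, y)

def bracket(X, Y):
    return tuple(
        sum(X[j] * sp.diff(Y[i], coords[j]) - Y[j] * sp.diff(X[i], coords[j])
            for j in range(2))
        for i in range(2)
    )

d_dx = (sp.Integer(1), sp.Integer(0))
d_dy = (sp.Integer(0), sp.Integer(1))
print(bracket(d_dx, d_dy))                 # (0, 0): coordinate fields commute
print(bracket(d_dx, (sp.Integer(0), x)))   # (0, 1): [d/dx, x d/dy] = d/dy, a nonzero bracket
```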
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 41, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9268752336502075, "perplexity_flag": "head"}
http://mathhelpforum.com/advanced-algebra/202412-algorithm-finding-arrangement-5-variables-eqn-b-w-satisfy.html
# Thread:

1. ## Algorithm for finding arrangement of 5 variables in eqn with +/- b/w to satisfy it

Hi all
My friend gave me a riddle (a very well known one) in which I had to break a 40 kg block into 4 pieces (of integral weight) such that, using those pieces, one can measure every weight in the range 1-40 on a physical balance thingy (I hope that's what it's called). Then we decided to make a Python program for it, but we can't find any algorithm to arrange the 4 weights (i, j, k, l) and a weight w in an equation. We might be able to do it using regular expressions and the built-in eval() ... (we just started with the idea, but I feel very confident about it) but that feels like cheating ... or more like a walkthrough ...
It'll be great if someone could point me in the right direction by telling me what kind of algorithm does stuff like this (not too complex maths please :P)
Thanks in Advance

2. ## Re: Algorithm for finding arrangement of 5 variables in eqn with +/- b/w to satisfy it

It's impossible. With four pieces you can make at most $2^4 = 16$ different weights. However, if you are allowed to "subtract" weights (e.g. by using a balance scale and putting one weight on the opposite pan) then it's possible. Use 1, 3, 9, 27 kg. This turns out to be a problem involving base 3. Also, 1+3+9+27 = 40.

3. ## Re: Algorithm for finding arrangement of 5 variables in eqn with +/- b/w to satisfy it

Hi
I did mention "with +/- between them" in my title (I am very bad at explaining my ideas to other people :P).
- Can you please explain to me how to solve this problem using base 3?
- And also, how did you decide to use base 3?
- And also, some explanation of base 3: I've only read about bases 10, 2, 8 and 16.
I know it's a pretty long list, so we should start by ME studying a bit more about all the points above. Could you give me links to pages where I can understand all this stuff better, and then I'll come back here for more help?

4. ## Re: Algorithm for finding arrangement of 5 variables in eqn with +/- b/w to satisfy it

Base 3 works because, for a weight X, there are three choices:
* add X
* not use X
* subtract X
If you don't know what base 3 is, it works just like any other base.
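Since the thread mentions wanting a Python program, here is a minimal brute-force sketch (my own, written under the assumption that each piece can sit on either pan or be left off entirely) that verifies the weights 1, 3, 9, 27 cover every integer from 1 to 40:

```python
# Brute-force check that 1, 3, 9, 27 measure every integer weight 1..40 on a
# two-pan balance: each piece is either unused (0), on the object's pan (-1),
# or on the opposite pan (+1). These are exactly the three "base 3" choices above.
from itertools import product

weights = [1, 3, 9, 27]

def representation(target):
    for signs in product((-1, 0, 1), repeat=len(weights)):
        if sum(s * w for s, w in zip(signs, weights)) == target:
            return signs
    return None

assert all(representation(t) is not None for t in range(1, 41))
print(representation(5))   # one valid assignment: 5 = 9 - 3 - 1
```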
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 1, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9435443878173828, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/tagged/newtonian-gravity+star
# Tagged Questions 1answer 130 views ### How to calculate gravity inside the star? Gravity must decrease due to less effective mass when going inside the object but also must increase with depth inside the star due to its higher density. Is there a model or formula approximating ... 0answers 367 views ### Calculating semi-major axis of binary stars from velocity, position and mass I'm trying to calculate the 'instantaneous' semi-major axis of a binary system with two equal (known) mass stars for an $N$-body simulation. I know their velocities and positions at a given time, but ...
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9272373914718628, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/124509/how-do-i-prove-the-following-statement-about-a-summation-of-a-series/124521
# How do I prove the following statement about a summation of a series? I have not been able to completely solve this problem and it's driving me crazy. Could you please help. The question is to show that, $$\sum_{n=1}^N \frac{\sin n\theta}{2^n} =\frac{2^{N+1}\sin\theta+\sin N\theta-2\sin(N+1)\theta }{2^N(5-4\cos\theta)}$$ Where do I start? I tried solving this using de Moivre's Theorem but I don't know where I am going wrong. Could you please help me or if possible show other ways to tackle this particular problem. Any Help is much appreciated! Thanks in Advance! - 4 Well your summation is the imaginary part of $\sum_{n = 1}^{N}\frac{e^{in\theta}}{2^{n}} = \sum_{n = 1}^{N}(e^{i\theta}/2)^{n}$. Then it is just a finite summation of a geometric series. – ADF Mar 26 '12 at 4:06 Please edit your question as the displayed equation is seriously messed up. – Gerry Myerson Mar 26 '12 at 5:09 @GerryMyerson Oh, sorry about that. Probably forgot to press the Shift key along with the = sign :) – Bidit Acharya Mar 26 '12 at 5:34 ## 2 Answers If you follow one of the suggestions the summation is the imaginary part of $$\begin{align*} \sum_{n = 1}^{N}\frac{e^{in\theta}}{2^{n}} &= \sum_{n = 1}^{N}(e^{i\theta}/2)^{n}\\ &= \frac{e^{i\theta}}{2} \frac{\left(1-\frac{e^{Ni\theta}}{2^N}\right)}{\left(1-\frac{e^{i\theta}}{2}\right)}\\ &= \frac{e^{i\theta}(2^N-e^{Ni\theta})}{2^N(2-e^{i\theta})}\\ &= \frac{e^{i\theta}(2^N-e^{Ni\theta})(2-e^{-i\theta})}{2^N(2-e^{i\theta})(2-e^{-i\theta})}\\ &= \frac{(2^Ne^{i\theta}-e^{(N+1)i\theta})(2-e^{-i\theta})}{2^N(4-2(e^{i\theta}+e^{-i\theta})+1)}\\ &= \frac{2^{(N+1)}e^{i\theta}-2e^{(N+1)i\theta}-2^N+e^{Ni\theta}}{2^N(4-2(e^{i\theta}+e^{-i\theta})+1)} \end{align*}$$ The imaginary part of this is $$\frac{2^{(N+1)}\sin \theta - 2\sin (N+1)\theta + \sin N\theta}{2^N (5-4\cos \theta)}$$ - Hint: Write $\sin(n\theta)=\dfrac{e^{in\theta}-e^{-in\theta}}{2i}$, then, use the formula for the sum of a geometric series. - 1 @ADF's suggestion nearly halves the task, don't you think? – Did Mar 26 '12 at 6:03 1 @Dider: After computing the first sum, one would probably realize that the second sum is the conjugate of the first, and the work is cut in half. – robjohn♦ Mar 26 '12 at 10:30 Your suggestion (in the comment, not in the post) amounts to using the fact that the sine is the imaginary part of the complex exponential. I agree; thus it would be more logical to use this fact and to base the proof on imaginary parts from its onset, wouldn't it? – Did Mar 26 '12 at 10:49
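A quick numerical check of the closed form (my own addition, not from the thread) can be reassuring before attempting the proof; it compares the partial sum with the stated right-hand side for a few values of $\theta$ and $N$:

```python
# Numerically compare sum_{n=1}^N sin(n*theta)/2^n with the claimed closed form.
import math

def lhs(theta, N):
    return sum(math.sin(n * theta) / 2**n for n in range(1, N + 1))

def rhs(theta, N):
    num = 2**(N + 1) * math.sin(theta) + math.sin(N * theta) - 2 * math.sin((N + 1) * theta)
    return num / (2**N * (5 - 4 * math.cos(theta)))

for theta in (0.3, 1.1, 2.7):
    for N in (1, 5, 20):
        assert abs(lhs(theta, N) - rhs(theta, N)) < 1e-12
print("identity verified numerically for the sampled values")
```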
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 3, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9471830725669861, "perplexity_flag": "head"}
http://stats.stackexchange.com/questions/30176/statistic-hypothesis-testing-standard-deviation-less-than-0-4
# Statistic hypothesis testing - Standard deviation less than 0.4 There was too much snow on the highways, so the mayor of the town sent snowplows to spread some chemicals on them. There is a standard of how much of one specific substance should be present in the compound that is used for spreading... We measured how much of the substance was present in the compound in 30 different places of the town. These are the results: ````0.91 1.08 0.72 1.07 1.14 0.62 1.06 1.20 0.76 1.19 0.96 0.73 0.83 0.55 0.79 1.34 0.60 1.19 1.35 1.13 0.67 0.77 0.48 0.83 1.78 2.25 1.21 0.89 0.83 1.07 ```` We expect that the values have normal distribution. Verify with a reliability of 99% that the standard deviation is less than 0.4. [Result: r = 24.546. Hypothesis H0 is not denied.] I calculated a) $\mu$ = 1.00.....and.....b) $\sigma$ = 0.367 Now I set ...H0: $\sigma^{2} = \sigma^{2}_{o}$... versus...H1: $\sigma^{2} < \sigma^{2}_{o}$ I used this test: $\frac {(n-1) s_{n}^{2}} { \sigma_{0}^{2} } \leq \chi^{2} _{ \alpha } (n-1)$ Then, I calculated $\frac {(n-1) s_{n}^{2}} { \sigma_{0}^{2} }$ = 24.54 and $\chi^{2} _{ \alpha } (n-1)$ = 49.58 Now, we see that the inequality holds good, so H0 should be denied! However the result in the book says the opposite... - The associated p-value is greater than 1%, therefore there is not enough evidence to reject the null hypothesis. Note that the statistic is smaller than the critical value. I have noticed that you got very good answers in the past and you seem to be satisfied, consider accepting some of them to motivate people to continue to help you. It is just one click ;) – user10525 Jun 10 '12 at 12:03 2 I didn't know there was something like Accepting Answers. From now on, I will always accept the best answer to my question which has at least 1 answer. Thank you Procrastinator for letting me know! – user1111261 Jun 10 '12 at 12:59 ## 1 Answer As Procrastinator pointed out the test statistic is not significantly large. Don't just look at the number and assume that it is large enough to reject! The chi square statistic has 29 degrees of freedom. It has a mean of 29 and a variance of 58. So the value of the test statistic being 24.54 is not large at all and with the estimate so close to 0.4, this is what we would expect. - So when do I use p-value as a test and when Normal Distribution Parameters Testing? – user1111261 Jun 10 '12 at 12:13 When doing hypothesis testing you compute the test statistic and compare it to the critical value to determine whether or not to reject the null hypothesis. The p-value add information by telling how extreme your result would be if the null hypothesis were true. A p-value greater than 0.05 is customarily taken as not significant. In this case Procrastinator checked to find the p-value to be greater than 0.2 – Michael Chernick Jun 10 '12 at 12:23 So, is it like this? -> .... When I get the result from the test that I should reject H0 and H1 is true, I am obliged to calculate p-value and then according to the p-value decide if the result is significant or if it's not. On the contrary, when the result of the test says that I cannot reject H0 because the inequality doesn't hold good, I don't have to calculate p-value. Or do I have to calculate p-value all the time? – user1111261 Jun 10 '12 at 12:53 To test a null hypothesis you only need to compare the test statistic to the critical value. You never have to compute the p-value. The p-value just provides more information than just a statement of reject/ don't reject. 
You can always compute a p-value. There is no inconsistency. In your case the p-value was much higher than 0.05 and your test statistic was well below the critical value. The p-value is more interesting when you reject because a p-value of 0.001 provides a clearer sign that you should reject than say a p-value of 0.03. – Michael Chernick Jun 10 '12 at 12:53 1 My last question: How do I calculate p-value in this case? – user1111261 Jun 10 '12 at 13:30
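In answer to the last comment, here is a short SciPy sketch (my own, not from the thread) of the left-tailed test $H_0\colon \sigma = 0.4$ versus $H_1\colon \sigma < 0.4$ at the 1% level; the sample standard deviation is the rounded value quoted in the question, so the statistic comes out close to the book's 24.546.

```python
# Left-tailed chi-square test for a variance: reject H0 when the statistic falls
# below the lower alpha-quantile. Inputs are the (rounded) values from the post.
from scipy import stats

n, s, sigma0, alpha = 30, 0.367, 0.4, 0.01
test_stat = (n - 1) * s**2 / sigma0**2          # about 24.4 with these rounded inputs
crit = stats.chi2.ppf(alpha, df=n - 1)          # lower 1% quantile, roughly 14.26
p_value = stats.chi2.cdf(test_stat, df=n - 1)   # P(chi2_{29} <= observed), roughly 0.3

print(test_stat, crit, p_value)
# Since test_stat > crit (equivalently, p_value > 0.01), H0 is not rejected.
```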
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 7, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9313187599182129, "perplexity_flag": "head"}
http://mathhelpforum.com/algebra/202397-geometric-series-easy.html
# Thread: 1. ## Geometric series [easy] Hey guys, Just wondering if anybody could help me identify the r value in the following geometric series (sorry it's in image form, I'm no good at tex): Anyway, I had one similar just before where the coefficient was 1, meaning my a and r terms were both x - which made it easy - but this is for an extra little part on an assignment and I haven't done APs and GPs in about a year so I need a little refresh. Thanks in advance. 2. ## Re: Geometric series [easy] Originally Posted by manoistheman Hey guys, Just wondering if anybody could help me identify the r value in the following geometric series (sorry it's in image form, I'm no good at tex): Anyway, I had one similar just before where the coefficient was 1, meaning my a and r terms were both x - which made it easy - but this is for an extra little part on an assignment and I haven't done APs and GPs in about a year so I need a little refresh. Thanks in advance. Is this a geometric series or a geometric sequence? 3. ## Re: Geometric series [easy] Originally Posted by Prove It Is this a geometric series or a geometric sequence? Series - I will be using the a and r values to calculate the sum to infinity. Thanks for the speedy reply by the way. 4. ## Re: Geometric series [easy] Originally Posted by manoistheman Hey guys, Just wondering if anybody could help me identify the r value in the following geometric series (sorry it's in image form, I'm no good at tex): Anyway, I had one similar just before where the coefficient was 1, meaning my a and r terms were both x - which made it easy - but this is for an extra little part on an assignment and I haven't done APs and GPs in about a year so I need a little refresh. Thanks in advance. $\displaystyle \begin{align*} a &= S_1 \\ \\ a + r\,a &= S_2 \\ x + r\,x &= 1.6x + 1.6x^2 \\ \\ a + r\,a + r^2a &= S_3 \\ x + r\,x + r^2x &= 1.6x + 1.6x^2 + x^3 \end{align*}$ You should be able to solve for r now. 5. ## Re: Geometric series [easy] Originally Posted by Prove It $\displaystyle \begin{align*} a &= S_1 \\ \\ a + r\,a &= S_2 \\ x + r\,x &= 1.6x + 1.6x^2 \\ \\ a + r\,a + r^2a &= S_3 \\ x + r\,x + r^2x &= 1.6x + 1.6x^2 + x^3 \end{align*}$ You should be able to solve for r now. Oh my god, a=1.6x and r=x - just subbed in the values into the sum to infinity and got it right! Thank you good sir, though I feel silly for not seeing this earlier haha. EDIT: Actually, I apologise; the initial value I had listed as x SHOULD have been 1.6x - that would've made it a whole lot simpler in terms of the a value. Silly mistake on my part, and thanks for the help; I guess that's what tripped me up.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 2, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9735991954803467, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/80027?sort=oldest
Obstructions to being a hyperplane section or a fibre of a Lefschetz pencil

Given a smooth projective variety $X$, when could $X$ fail to be a hyperplane section in some other variety $Y$, or fail to be the fibre of some Lefschetz pencil $\widetilde{Y} \rightarrow \mathbb{P}^{1}$? Here, the variety $Y$ is not fixed, but simply required to exist.

2 Answers

Your question is related to the problem of the existence of non-trivial extensions of subvarieties $X \subset \mathbb P^N$ to $\mathbb P^{N+1}$. An extension of $X$ is just a subvariety $Y$ of $\mathbb P^{N+1}$ such that $X= Y \cap \mathbb P^{N}$. It is called trivial if $Y$ is the join of $X$ and a point outside of $\mathbb P^N$. This is a classical question that was studied by the Italian school of algebraic geometry. For instance, Scorza proved that the Veronese surface in $\mathbb P^5$ does not admit non-trivial extensions. More recently, the problem has been studied by Zak, S. L'vovsky, L. Badescu, among many others. In Extensions of projective varieties and deformations by S. L'vovsky you will find the following result:

Theorem. Suppose $X$ is not $\mathbb P^N$ nor a quadric. If $\dim X \ge 2$ and $H^1(X,TX\otimes \mathcal O_{\mathbb P^N}(-1))=0$ then every extension of $X$ is trivial.

For a very nice introduction to this circle of ideas see the first chapter of the book Projective geometry and formal geometry by L. Badescu. Unfortunately, the relevant Chapter doesn't seem to be available from Google books. Of course this does not answer your question as the embedding of $X$ into $\mathbb P^N$ is fixed.

A generic curve over $\mathbb{C}$ of large genus, say at least $24$, will not be a fibre of a Lefschetz pencil or even a hyperplane section. The reason is that by the theorems of Harris-Mumford and Eisenbud-Harris, the moduli space of curves of large genus is of general type, so there can be no rational curve passing through a generic point. To use this one needs to know that all the smooth fibres of the pencil are not isomorphic. Since the local monodromy around a singular fibre is infinite by the Picard-Lefschetz formula, it suffices to show that there must be at least one singular fibre. But if all fibres are smooth, then $\tilde{Y}$ (the total space of the Lefschetz pencil) must be isomorphic to $C \times \mathbb{P}^1$. This cannot happen since $C \times \mathbb{P}^1$ is not a blow up of any other surface. Over a field of characteristic zero any very ample linear system contains a Lefschetz pencil, so it follows from the above that the smooth members of the linear system cannot all be isomorphic. But this would give rise to a unirational variety containing a generic point of the moduli space and this is not possible.

Since OP did not specify that Y be smooth, this is technically false. Every X is a hyperplane section of a projective cone over X. – Jason Starr Nov 4 2011 at 17:03
Sandor -- I am afraid I do not understand the point you are trying to make. The OP stated that X is smooth. The projective cone over X is singular, but not along the generic hyperplane section X. Rather it is singular at the vertex of the cone.
So I am afraid I do not understand your comment. – Jason Starr Nov 4 2011 at 18:18 Jason, sorry. I had something else in mind. :( – Sándor Kovács Nov 4 2011 at 18:25 Thanks. This is a very good answer. Perhaps I should have asked the converse: given a smooth projective variety $X$, when can and how can I hope to realize it as the fibre of a Lefschetz pencil? – A. Pascal Nov 4 2011 at 18:30
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 28, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9331156015396118, "perplexity_flag": "head"}
http://unapologetic.wordpress.com/2009/03/05/uniqueness-of-jordan-normal-forms/?like=1&_wpnonce=84af3d0b4c
# The Unapologetic Mathematician

## Uniqueness of Jordan Normal Forms

So we've got a Jordan normal form for every linear endomorphism $T:V\rightarrow V$ on a vector space $V$ of finite dimension $d$ over an algebraically closed base field $\mathbb{F}$. That is, we can always pick a basis with respect to which the matrix of $T$ is block-diagonal, and each block is a "Jordan block" $J_n(\lambda)$. This is an $n\times n$ matrix

$\displaystyle\begin{pmatrix}\lambda&1&&&{0}\\&\lambda&1&&\\&&\ddots&\ddots&\\&&&\lambda&1\\{0}&&&&\lambda\end{pmatrix}$

with the eigenvalue $\lambda$ down the diagonal and ${1}$ just above the diagonal.

Abstractly, this is a decomposition of $V$ as a direct sum of various subspaces, each of which is invariant under the action of $T$. And, in fact, it's the only "complete" decomposition of the sort. Decomposing into generalized eigenspaces can really be done in only one way, and breaking each eigenspace into Jordan blocks is also essentially unique.

The biggest Jordan block comes from picking one vector $v$ that lives through as many applications of $T-\lambda1_V$ as possible. As we apply $T$ over and over to $v$, we expand until (after no more than $d$ iterations) we fill out an invariant subspace. Not only that, but we know that we can break up our generalized eigenspace as the direct sum of this block and another subspace, which is also invariant. And that lets us continue the process, splitting off blocks until we've used up the whole generalized eigenspace.

Now, there is one sense in which this process is not unique. Direct sums are commutative (up to isomorphism), so we can rearrange the Jordan blocks for a given endomorphism $T$, and the result is still a Jordan normal form. But that's the only way that two Jordan normal forms for the same endomorphism can differ.

Posted by John Armstrong | Algebra, Linear Algebra
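A small illustration (my own addition, not from the post): SymPy can compute a Jordan form directly, which makes it easy to experiment with the uniqueness-up-to-block-reordering described above.

```python
# Compute a Jordan normal form with SymPy. The matrix below has characteristic
# polynomial (lambda - 2)^2 but is not diagonalizable, so it yields one block J_2(2).
from sympy import Matrix

A = Matrix([[3, 1],
            [-1, 1]])

P, J = A.jordan_form()      # A = P * J * P**(-1)
print(J)                    # Matrix([[2, 1], [0, 2]])
```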
http://mathhelpforum.com/differential-geometry/187543-inequality-prove-involving-convex-notions.html
# Thread:

1. ## Inequality to prove involving convex notions

Hi everyone,

Here's an exercise I have trouble with. Let $a = (a_1,\dots,a_n)\in\mathbb{R}^n_+$ and define $G(a) = (a_1\cdots a_n)^{\frac{1}{n}}$. I have to show that, assuming $a,b\in\mathbb{R}^n_+$, we have $G(a+b) \geqslant G(a) + G(b)$.

I have already tried a lot of things. I think I'm supposed to use some convexity inequalities: my first thought was to take the log of $G(a+b)$, but that didn't seem to lead anywhere. I've also tried to use other known inequalities, such as $ab \leqslant \frac{a^2+b^2}{2}$, but I don't think that works either. It has been almost an hour now, so I would like a small hint: I do not want the whole answer, just a tip that can help me make progress.

Thank you for your always-so-good answers,
Hugo.

2. ## Re: Inequality to prove involving convex notions

We have to show that for $0\leq x_j \leq 1$ we have
$$\left(\prod_{j=1}^n x_j\right)^{\frac 1n}+\left(\prod_{j=1}^n(1-x_j)\right)^{\frac 1n}\leq 1.$$
(To reduce to this, divide the original inequality by $G(a+b)$ and set $x_j = a_j/(a_j+b_j)$, so that $1-x_j = b_j/(a_j+b_j)$.) Take the $n$-th power, and use $m:=\min_{1\leq j\leq n}x_j$.
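As a quick numerical sanity check of the statement (an illustration only, certainly not a proof; the helper function `G` below is my own), one can test the inequality on random positive vectors:

```python
# Numerical check of G(a+b) >= G(a) + G(b) on random positive vectors.
import numpy as np

rng = np.random.default_rng(0)

def G(v):
    """Geometric mean of a positive vector."""
    return np.exp(np.mean(np.log(v)))

worst = np.inf
for _ in range(10_000):
    n = rng.integers(2, 8)
    a = rng.exponential(size=n)
    b = rng.exponential(size=n)
    worst = min(worst, G(a + b) - (G(a) + G(b)))

print(worst)  # stays >= 0 (up to floating-point noise), as the inequality predicts
```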
http://physics.stackexchange.com/questions/tagged/commutator+measurement-problem
# Tagged Questions

### Compatible Observables

My QM book says that when two observables are compatible, then the order in which we carry out measurements is irrelevant. When you carry out a measurement corresponding to an operator $A$, the ...
http://www.all-science-fair-projects.com/science_fair_projects_encyclopedia/Number_theory
# Number theory

Traditionally, number theory is that branch of pure mathematics concerned with the properties of integers. It contains many results and open problems that are easily understood, even by non-mathematicians. More generally, the field has come to be concerned with wider classes of problems that have arisen naturally from the study of integers. Number theory may be subdivided into several fields, according to the methods used and the type of questions investigated. See for example the list of number theory topics. Mathematicians working in the field of number theory are called number theorists.

The term "arithmetic" is also used to refer to number theory. This is a somewhat older term, which is no longer as popular as it once was. Number theory used to be called the higher arithmetic, but this is dropping out of use. Nevertheless, it still shows up in the names of mathematical fields (arithmetic algebraic geometry, arithmetic of elliptic curves). This sense of the term arithmetic should not be confused either with elementary arithmetic, or with the branch of logic which studies Peano arithmetic as a formal system.

## Fields

### Elementary number theory

In elementary number theory, the integers are studied without use of techniques from other mathematical fields. Questions of divisibility, the Euclidean algorithm to compute greatest common divisors, factorization of integers into prime numbers, investigation of perfect numbers and congruences belong here. Typical statements are Fermat's little theorem and Euler's theorem extending it, the Chinese remainder theorem and the law of quadratic reciprocity. The properties of multiplicative functions such as the Möbius function and Euler's φ function are investigated; so are integer sequences such as factorials and Fibonacci numbers.

Many questions in elementary number theory appear simple but may require very deep consideration and new approaches. Examples are:

• The Goldbach conjecture concerning the expression of even numbers as sums of two primes,
• Catalan's conjecture regarding successive integer powers,
• The twin prime conjecture about the infinitude of prime pairs, and
• The Collatz conjecture concerning a simple iteration.

The theory of Diophantine equations has even been shown to be undecidable (see Hilbert's tenth problem).

### Analytic number theory

Analytic number theory employs the machinery of calculus and complex analysis to tackle questions about integers. The prime number theorem and the related Riemann hypothesis are examples. Waring's problem (representing a given integer as a sum of squares, cubes, etc.), the twin prime conjecture (finding infinitely many prime pairs with difference 2) and Goldbach's conjecture (writing even integers as sums of two primes) are being attacked with analytical methods as well. Proofs of the transcendence of mathematical constants, such as π or e, are also classified as analytic number theory.
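The prime number theorem mentioned in the previous paragraph is easy to probe numerically. The following short Python sketch (added as an illustration; it is not part of the encyclopedia text) compares the prime-counting function π(x) with the approximation x/ln x:

```python
# Illustration of the prime number theorem: pi(x) compared with x / ln(x).
import math

def primes_up_to(n):
    """Simple sieve of Eratosthenes returning all primes <= n."""
    sieve = [True] * (n + 1)
    sieve[0:2] = [False, False]
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = [False] * len(sieve[p * p :: p])
    return [i for i, is_prime in enumerate(sieve) if is_prime]

for x in (10**3, 10**4, 10**5):
    pi_x = len(primes_up_to(x))
    approx = x / math.log(x)
    print(x, pi_x, round(approx), round(pi_x / approx, 3))  # ratio slowly approaches 1
```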
While statements about transcendental numbers may seem to be removed from the study of integers, they really study the possible values of polynomials with integer coefficients evaluated at, say, e; they are also closely linked to the field of Diophantine approximation, where one investigates "how well" a given real number may be approximated by a rational one.

### Algebraic number theory

In algebraic number theory, the concept of number is expanded to the algebraic numbers, which are roots of polynomials with rational coefficients. These domains contain elements analogous to the integers, the so-called algebraic integers. In this setting, the familiar features of the integers (e.g. unique factorization) need not hold. The virtue of the machinery employed -- Galois theory, group cohomology, class field theory, group representations and L-functions -- is that it allows one to recover that order partly for this new class of numbers.

Many number-theoretical questions are best attacked by studying them modulo p for all primes p (see finite fields). This is called localization and it leads to the construction of the p-adic numbers; this field of study is called local analysis and it arises from algebraic number theory.

### Geometric number theory

Geometric number theory (traditionally called geometry of numbers) incorporates all forms of geometry. It starts with Minkowski's theorem about lattice points in convex sets and investigations of sphere packings. Algebraic geometry, especially the theory of elliptic curves, may also be employed. The famous Fermat's last theorem was proved with these techniques.

### Combinatorial number theory

Combinatorial number theory deals with number-theoretic problems which involve combinatorial ideas in their formulations or solutions. Paul Erdős is the main founder of this branch of number theory. Typical topics include covering systems, zero-sum problems, various restricted sumsets, and arithmetic progressions in a set of integers. Algebraic or analytic methods are powerful in this field.

### Computational number theory

Computational number theory studies algorithms relevant in number theory. Fast algorithms for prime testing and integer factorization have important applications in cryptography.

## History

Number theory was a favorite study among the Ancient Greeks. It revived in the sixteenth and seventeenth centuries, in Europe, with Viète, Bachet de Meziriac, and especially Fermat. In the eighteenth century Euler and Lagrange made major contributions, and books of Legendre (1798) and Gauss put together the first systematic theories. Gauss's Disquisitiones Arithmeticae (1801) may be said to begin the modern theory of numbers.

The formulation of the theory of congruences starts with Gauss's Disquisitiones. He introduced the symbolism

$a \equiv b \pmod c,$

and explored most of the field. Chebyshev published in 1847 a work in Russian on the subject, and in France Serret popularised it.

Besides summarizing previous work, Legendre stated the law of quadratic reciprocity. This law, discovered by induction and enunciated by Euler, was first proved by Legendre in his Théorie des Nombres (1798) for special cases. Independently of Euler and Legendre, Gauss discovered the law about 1795, and was the first to give a general proof. To the subject have also contributed: Cauchy; Dirichlet, whose Vorlesungen über Zahlentheorie is a classic; Jacobi, who introduced the Jacobi symbol; Liouville, Zeller (?), Eisenstein, Kummer, and Kronecker.
The theory extends to include cubic and biquadratic reciprocity (Gauss; Jacobi, who first proved the law of cubic reciprocity; and Kummer). To Gauss is also due the representation of numbers by binary quadratic forms. Cauchy, Poinsot (1845), Lebesgue (?) (1859, 1868), and notably Hermite have added to the subject. In the theory of ternary forms Eisenstein has been a leader, and to him and H. J. S. Smith is also due a noteworthy advance in the theory of forms in general. Smith gave a complete classification of ternary quadratic forms, and extended Gauss's researches concerning real quadratic forms to complex forms. The investigations concerning the representation of numbers by the sum of 4, 5, 6, 7, 8 squares were advanced by Eisenstein and the theory was completed by Smith.

Dirichlet was the first to lecture upon the subject in a German university. Among his contributions is the extension of Fermat's theorem on

$x^n+y^n \neq z^n,$

which Euler and Legendre had proved for n = 3, 4, Dirichlet showing that $x^5+y^5 \neq az^5$. Among the later French writers are Borel; Poincaré, whose memoirs are numerous and valuable; Tannery, and Stieltjes. Among the leading contributors in Germany are Kronecker, Kummer, Schering, Bachmann, and Dedekind. In Austria Stolz's Vorlesungen über allgemeine Arithmetik (1885-86), and in England Mathews' Theory of Numbers (Part I, 1892) are among the most scholarly of general works. Genocchi, Sylvester, and J. W. L. Glaisher have also added to the theory.

A recurring and productive theme in number theory is the study of the distribution of prime numbers. Gauss conjectured the limit of the number of primes not exceeding a given number (the prime number theorem) as a teenager. Chebyshev (1850) gave useful bounds for the number of primes between two given limits. Riemann introduced complex analysis into the theory of the Riemann zeta function. This led to a relation between the zeros of the zeta function and the distribution of primes, eventually leading to a proof of the prime number theorem independently by Hadamard and de la Vallée Poussin in 1896. However, an elementary proof was given later by Paul Erdős and Atle Selberg in 1949. Here "elementary" means that it does not use techniques of complex analysis; the proof is nonetheless very ingenious and difficult.

## Quotations

"Mathematics is the queen of the sciences and number theory is the queen of mathematics." (Gauss)

"God invented the integers; all else is the work of man." (Kronecker)

## References

• History of Modern Mathematics by David Eugene Smith, 1906 (adapted public domain text)
• Essays on the Theory of Numbers, Richard Dedekind, Dover Publications, Inc., 1963. ISBN 0-486-21010-3
• Number Theory and Its History, Oystein Ore, Dover Publications, Inc., 1948, 1976. ISBN 0-486-65620-9
• Unsolved Problems in Number Theory, Richard K. Guy, Springer-Verlag, 1981. ISBN 0-387-90593-6, ISBN 3-540-90593-6
• Important publications in number theory
http://mathematica.stackexchange.com/questions/tagged/complex+polynomials
# Tagged Questions

### Solving cubic equation for real roots

I'm looking to solve the following cubic equation for x: $\beta\, x^3 - \gamma \,x = c$. I have plugged in some sample values ($\beta = 2$, $\gamma = 5$ and $c = 2$). When I try to solve this ...

### Finding real roots of negative numbers (for example, $\sqrt[3]{-8}$)

Say I want to quickly calculate $\sqrt[3]{-8}$, to which the most obvious solution is $-2$. When I input $\sqrt[3]{-8}$ or Power[-8, 3^-1], Mathematica gives the ...
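The issue behind the second excerpt is not specific to Mathematica: in most systems a fractional power of a negative number yields the principal complex root rather than the real cube root. A small Python aside (my own example; it assumes NumPy is available) makes the point:

```python
# Fractional powers of negative numbers return the principal complex root,
# not the real root (illustrative aside; NumPy assumed available).
import numpy as np

print((-8) ** (1 / 3))   # a complex principal root, roughly 1 + 1.732j
print(np.cbrt(-8.0))     # -2.0, the real cube root
```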
http://physics.stackexchange.com/questions/55318/where-does-energy-go-in-destructive-interference/55326
# Where does energy go in destructive interference? [duplicate]

This question already has an answer here: "What happens to the energy when waves perfectly cancel each other?"

I have read that when two light waves interfere destructively, the energy contained within is transferred to other parts of the wave which have interfered constructively. However, I am having some trouble grasping this. While in experiments such as Young's double-slit experiment there are visible bright bands of higher energy, I would imagine that it should be possible to configure light waves to propagate linearly such that the waves interfere only destructively and not at all constructively. Is such an arrangement possible? And if so, to where is the energy in the wave transferred?

Similarly, how does the energy transfer from one part of a wave which is interfering destructively to another part which is interfering constructively? These regions may be several meters apart for long-wavelength light, and I find it strange that energy can travel between these potentially distant and non-interacting regions.

- This question may be a duplicate of "What happens to the energy when waves perfectly cancel each other?" No answer was ever accepted for that one. My answer there, which someone (I have no idea who) gave a +100, is located at physics.stackexchange.com/a/23953/7670 – Terry Bollinger

## 2 Answers

When the electromagnetic waves propagate without energy losses, e.g. in the vacuum, it is easy to prove that the total energy is conserved. See e.g. Section 1.8 here. In fact, not only is the total energy conserved: the energy is conserved locally, via the continuity equation

$$\frac{\partial \rho_{\rm energy}}{\partial t}+\nabla\cdot \vec J = 0$$

This says that whenever the energy decreases in a small volume $dV$, the decrease is accompanied by a flow of the same energy through the boundary of the small volume $dV$, and the current $\vec J$ ensures that the energy will increase elsewhere. The continuity equation above is easily proven if one substitutes the right expressions for the energy density and the Poynting vector:

$$\rho_{\rm energy} = \frac{1}{2}\left(\epsilon_0 E^2+ \frac{B^2}{\mu_0} \right), \quad \vec J = \vec E\times \vec H$$

After the substitution, the left-hand side of the continuity equation becomes a combination of multiples of Maxwell's equations and their derivatives: it is zero.

These considerations work even in the presence of reflective surfaces, e.g. the metals one uses to build a double-slit experiment. It follows that if an electromagnetic pulse has some energy at the beginning, the total energy obtained as the integral $\int d^3x\, \rho_{\rm energy}$ will be the same at the end of the experiment regardless of the detailed arrangement of the interference experiment. If there are interference minima, they are always accompanied by interference maxima, too. The conservation law we have proved above guarantees that. In fact, one may trace, via the energy density and the current (the Poynting vector), how the energy gets transferred from the minima towards the maxima.
Imagine that at the beginning we have two packets of a certain cross-section area, which will be kept fixed, and the only nonzero component of $\vec E$ goes like $\exp(ik_1 x)$ (and is localized within a rectangle in the $yz$ plane). It interferes with another packet that goes like $\exp(ik_2 x)$. Because the absolute value is the same, the energy density proportional to $|E|^2$ is $x$-independent in both initial waves. When they interfere, we get

$$\exp(ik_1 x)+\exp(ik_2 x) = \exp(ik_1 x) \left(1 + \exp(i(k_2-k_1)x)\right)$$

The overall phase is irrelevant. The second factor may be written as

$$1 + \exp(i(k_2-k_1)x) = 2\cos ((k_2-k_1)x/2) \exp(i(k_2-k_1)x/2)$$

The final phase (exponential) may be ignored again as it doesn't affect the absolute value. You see that the interfered wave composed of the two ordinary waves goes like

$$2\cos ((k_2-k_1)x/2)$$

and its square goes like $4\cos^2(\phi)$ with the same argument. Now, the funny thing about the squared cosine is that its average value over space is $1/2$, because $\cos^2\phi$ harmonically oscillates between $0$ and $1$. So the average value of $4\cos^2\phi$ is $2$, exactly what is expected from adding the energy of two initial beams each of which has unit energy density in the same normalization. (The total energy should be multiplied by $A_{yz} L_x\epsilon_0/2$: the usual factor of $1/2$, the permittivity, the area in the $yz$-plane, and the length of the packet in the $x$-direction, but these factors are the same for the initial and final states.)

Finally, let me add a few words intuitively explaining why you can't arrange an experiment that would only have interference minima (or only interference maxima, if you wanted to double the energy instead of destroying it – which could be more useful). To make the interference purely destructive everywhere, the initial interfering beams would have to have highly synchronized phases pretty much at every place of the photographic plate (or strictly). But that's only possible if the beams are coming from nearly the same direction. But if they're coming from (nearly) the same direction, they couldn't have been split just a short moment earlier, so it couldn't have been an experiment with the interference of two independent beams. The beams could have been independent and separated a longer time before that. But if the beams started a longer time before that, they would still spread to a larger area on the photographic plate, and in this larger area the phases from the two beams would again refuse to be synchronized, and somewhere on the plate you would find both minima and maxima anyway.

The argument from the previous paragraph has a simple interpretation in the analogous problem of quantum mechanics. If there are two wave packets of the wave function for the same particle that are spatially isolated and ready to interfere, these two terms $\psi_1,\psi_2$ in the wave function are orthogonal to each other because their supports are non-overlapping. The evolution of wave functions in quantum mechanics is "unitary", so it preserves inner products. So whatever evolves out of $\psi_1,\psi_2$ will be orthogonal too, even if the evolved wave packets are no longer spatially non-overlapping. But this orthogonality is exactly the condition for $\int|\psi_1+\psi_2|^2$ to have no mixed terms and be simply equal to $\int|\psi_1|^2+|\psi_2|^2$.
The case of classical Maxwell's equations has a different interpretation – it's the energy density and not the probability density – but it is mathematically analogous. The properly defined "orthogonality" between the two packets is guaranteed by the evolution, and it is equivalent to the condition that the total strength of the destructive interference is the same as the total strength of the constructive interference.

> I would imagine that it should be possible to configure light waves to propagate linearly such that the waves interfere only destructively and not at all constructively.

This is possible if the two light waves are exactly out of phase with each other, so the peaks of one correspond to the troughs of the other. But if the two waves were produced in such a manner, and you superposed them at the source, that would be the same as creating a wave with zero amplitude. But if you superposed them at some later time, then there will always be constructive and destructive interference.

The energy isn't literally transported from the dark to the bright fringes. Rather, since the amplitude of the bright fringes is double that of the original waves, there is no violation of energy conservation. Thinking of "energy transport" is just a useful visualization to assure oneself that energy is really being conserved.

- Yeah. Generally it's not particularly useful to try and "locate" where in space the energy of some sort of wave is, and this makes less and less sense the more the wave looks like a plane wave, i.e. $\sin(kx-\omega t)$. – alexarvanitakis
- Sorry alex... but for electromagnetic waves, it makes perfect sense to ask where the energy is. The energy density is just $\rho=\frac{1}{2}(\epsilon E^2 + B^2/\mu)$. – Luboš Motl
- @Kitchi: the energy is literally transferred from the directions/places that become interference minima to the directions/places that become the interference maxima. One may literally and precisely study how this energy gets redistributed during the interference. Incidentally, this is also used in the Bohmian interpretation of quantum mechanics, where the interfering wave literally pushes a particle so that it's more likely to find it near the maxima. – Luboš Motl
- @LubošMotl - What I was trying to convey is that the energy isn't transported at the screen or wherever the interference is occurring. In his question, the OP implied that he thought the energy was moving at the location of interference, hence my clarification to not think of it that way. Also, I suspect alexarvanitakis meant asking (for a plane wave) where locally in space the energy is doesn't really make that much sense for an EM wave. – Kitchi
- Thanks for your explanation of your motives, Kitchi, but frankly speaking, I don't see any indication that the OP thinks that the energy has to be moving inside the plane of the screen only. – Luboš Motl
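As a quick numerical check of the averaging argument in the first answer (my own aside; the wavenumbers are arbitrary example values), the spatial average of $|e^{ik_1x}+e^{ik_2x}|^2$ over many beat periods comes out as 2, i.e. the two unit energy densities simply add:

```python
# Numerical check: the spatial average of |exp(i k1 x) + exp(i k2 x)|^2 is 2,
# so the energies of the two unit-amplitude waves add on average.
import numpy as np

k1, k2 = 5.0, 7.3                          # arbitrary example wavenumbers
x = np.linspace(0.0, 2000.0, 2_000_001)    # many periods of the beat pattern
intensity = np.abs(np.exp(1j * k1 * x) + np.exp(1j * k2 * x)) ** 2

print(intensity.mean())                    # close to 2.0
```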
http://physics.stackexchange.com/questions/21351/neutrino-oscillations-and-conservation-of-momentum
# Neutrino Oscillations and Conservation of Momentum

I would like to better understand how neutrino oscillations are consistent with conservation of momentum, because I'm encountering some conceptual difficulties when thinking about it. I do have a background in standard QM but only rudimentary knowledge of particle physics.

If the velocity expectation value of a neutrino in transit is constant, then it would appear to me that conservation of momentum could be violated when the flavor eigenstate at the location of the neutrino source is different from that at the location of the interaction, since they are associated with different masses. For this reason I would think that the velocity expectation value changes in transit (for instance, in such a way as to keep the momentum expectation value constant as the neutrino oscillates), but then it seems to me that the neutrino is in effect "accelerating" without a "force" acting on it (of course, since the momentum expectation value is presumed constant, there may not be a real problem here, but it still seems strange). Any comments?

## 1 Answer

If by "they are associated with different masses" you mean that the flavor eigenstates have different masses, then you are working from a misconception. Those states are not eigenstates of the free Hamiltonian, so they don't have a mass as such. (They do have a mean expectation if you could weigh a bunch of them, but it does not apply to any given neutrino.)

Update 30 April 2012

I had a talk with Fermilab theorist Boris Kayser today after he gave a colloquium and he squared me away on a few things.

1. This question is one that has been considered many times by many people in many ways.
2. Not only is what I had written originally not rigorous, but attempts to make it rigorous run into real trouble and get a result at odds with the conventional formalism and inconsistent with experiment.
3. There is a way to make a rigorous analysis (whole thing at arXiv:1110.3047), and it ends up agreeing with the usual formulation at first order in $\Delta m^2_{i,j}$.

It requires that you consider an experiment in the rest frame of the particle that decays to produce the neutrino (and a charged lepton). You define that decay as occurring at space-time point $(0,0)$ and compute the amplitude for a neutrino in mass state $i$ to be detected at space-time point $(x_\nu, t_\nu)$ in coincidence with the charged lepton being detected at space-time point $(x_l, t_l)$ (both also written in the rest frame of the decaying particle). Then you notice that the propagators for the two leptons are kinematically entangled. Sum the amplitudes coherently (because the mass state of the neutrino is unobserved). Using the fact that the neutrinos are ultra-relativistic, approximate to first order in $\Delta m^2_{i,j}$ and drop all terms that don't affect the phase differences (because neutrino mixing only depends on the phase differences). Somewhere in there was a boost back to the lab frame and a cute calculation of how L-over-E is invariant under the boost: $\frac{L^0}{E^0} = \frac{L}{E}$.

The result should be the one we usually give, only now we've dealt with this cute little puzzle. (He also showed an image from an earlier talk on the same subject illustrating the whole process on which the calculation is performed; the figure is not reproduced here.)

So, long story short: good question, the usual formalism doesn't seem to have a good answer, but a rigorous calculation can be made and to leading order it agrees with the usual formalism.
- I deleted an answer which was along the lines of the K0-long/K0-short explanation when I realized the "equal masses" also stated in the question above. I feel this entanglement business is hand-waving. The statement that at the production vertex and the interaction vertex the neutrino is not in a mass eigenstate, only on its path, means that if I had an accurate experiment of the pi to mu nu decay and plotted the missing mass, I would get three masses? The mind boggles. – anna v
- @dmckee: You ask for an experiment taking place in the rest frame of the decaying particle. Isn't that necessarily a massless $W^\pm$ boson, so that the rest frame moves at the speed of light? Or am I doing something silly? – Emilio Pisanty
- @episanty: The decaying pion or muon. The intermediate $W$ is necessarily very far off-shell. I think that Boris must have published a proper paper on this, and I ought to dig through his INSPIRE entries some more to find it, as I haven't done the description proper justice. – dmckee♦
- @dmckee: ah, yes. Or, if I read you right, it can even be the associated lepton itself (i.e. $\mu^-\rightarrow \overline{\nu}_\mu e^-\overline{\nu}_e$ and track the $\overline{\nu}_\mu$)? – Emilio Pisanty
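For orientation, the "usual formalism" mentioned in the answer reduces in the two-flavour vacuum case to a textbook one-line formula, which is easy to evaluate; the following sketch is my own and the parameter values are merely illustrative:

```python
# Standard two-flavour vacuum oscillation probability (textbook formula, shown
# here only for orientation; the parameter values are illustrative examples).
import numpy as np

def p_oscillation(L_km, E_GeV, sin2_2theta, dm2_eV2):
    """P(nu_a -> nu_b) = sin^2(2*theta) * sin^2(1.27 * dm^2[eV^2] * L[km] / E[GeV])."""
    return sin2_2theta * np.sin(1.27 * dm2_eV2 * L_km / E_GeV) ** 2

# Roughly atmospheric-scale example numbers
print(p_oscillation(L_km=735.0, E_GeV=3.0, sin2_2theta=1.0, dm2_eV2=2.5e-3))
```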
http://math.stackexchange.com/questions/93930/meaning-of-affine-transformation?answertab=oldest
# Meaning of affine transformation

From Wikipedia, I learned that an affine transformation between two vector spaces is a linear mapping followed by a translation. But in the book Multiple View Geometry in Computer Vision by Hartley and Zisserman:

> An affine transformation (or more simply an affinity) is a non-singular linear transformation followed by a translation.

I wonder if these are two different concepts, given that one does not require the linear transformation to be non-singular while the other does? Thanks!

- Sure they are different concepts (and by asking it the way you ask it you have answered your question yourself already)... The map that takes everything to $0$ is an affine transformation in the Wikipedia sense while it isn't in the second sense. – t.b.
- You could think of it this way: an affine transformation in the sense of Wikipedia may or may not be a bijective (or invertible) function. However the book reserves the term for the bijective ones. [As t.b. points out, the Wikipedia definition is strictly more general.] – Srivatsan
- An affine transformation preserves affine combinations, i.e. linear combinations in which the sum of the coefficients is $1$. Those are precisely the ones whose value does not depend on which point in the space is chosen to be the origin. If it's non-singular, then the image of a set of points will not have any affine relations not already present in the original set; otherwise it will. (But as to whether one or the other definition is correct, I have no opinion right now.) – Michael Hardy
- I think the definition of an affine transformation between two vector spaces is as follows. A map $f:V\to W$ is affine if there exists a $w$ in $W$ such that the map $v\mapsto f(v)-w$ is linear. In words, an affine transformation is a linear transformation up to a translation. – Ohdur
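A small numerical illustration of Michael Hardy's remark (my own sketch, not from the thread): an affine map $x\mapsto Ax+b$ preserves affine combinations even when the linear part is singular.

```python
# An affine map x -> A x + b, singular or not, preserves affine combinations,
# i.e. linear combinations whose coefficients sum to 1 (illustrative check).
import numpy as np

A = np.array([[1.0, 2.0],
              [2.0, 4.0]])       # singular linear part (rank 1)
b = np.array([3.0, -1.0])

def f(x):
    return A @ x + b             # affine in the Wikipedia sense

x, y = np.array([1.0, 0.0]), np.array([0.0, 2.0])
t = 0.3                          # coefficients t and 1 - t sum to 1

lhs = f(t * x + (1 - t) * y)
rhs = t * f(x) + (1 - t) * f(y)
print(np.allclose(lhs, rhs))     # True: affine combinations are preserved
```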
http://mathoverflow.net/questions/98515/what-is-the-optimal-growth-of-the-constant-in-bdg
## What is the optimal growth of the constant in BDG?

Let $X$ be a continuous local martingale, and $\langle X \rangle$ be its quadratic variation process. The "standard" proof of the Burkholder-Davis-Gundy inequalities found in books yields $(\mathsf{E} |X|^{p})^{1/p} \le O(p) \cdot (\mathsf{E} \langle X \rangle ^{p/2})^{1/p}$ for large $p$. Can the growth rate be improved to, say, $O(p^{1/2})$?

For example, if $\langle X \rangle$ is bounded, this estimate gives exponential tails for $|X|$, which is clearly suboptimal, since they should be Gaussian.

- What is $\langle X \rangle$? – Bill Johnson
- Quadratic variation. Updated the post to clarify this. – Alexander Shamov
- The best constants are known, and you can't do better than $p-1$ for $p > 2$. This was proven by Davis I think, but I'm not sure if that applies specifically to continuous martingales. – George Lowther
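The Gaussian-tail intuition in the question is easy to illustrate in the simplest case where the quadratic variation is deterministic, namely $X = B_1$ for a standard Brownian motion: then $(\mathsf{E}|X|^p)^{1/p}$ is the $p$-th absolute moment of a standard Gaussian, which grows like $\sqrt{p}$. A short sketch (my own aside, using the exact Gaussian moment formula $\mathsf{E}|Z|^p = 2^{p/2}\Gamma((p+1)/2)/\sqrt{\pi}$):

```python
# (E|Z|^p)^(1/p) for Z ~ N(0,1) grows like sqrt(p); computed from the exact
# formula E|Z|^p = 2^(p/2) * Gamma((p+1)/2) / sqrt(pi)  (illustrative aside).
import numpy as np
from scipy.special import gammaln

def abs_moment_root(p):
    log_moment = (p / 2) * np.log(2.0) + gammaln((p + 1) / 2) - 0.5 * np.log(np.pi)
    return np.exp(log_moment / p)

for p in (2, 8, 32, 128):
    print(p, abs_moment_root(p), abs_moment_root(p) / np.sqrt(p))  # ratio stabilizes
```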
http://mathhelpforum.com/advanced-statistics/138478-poisson-s-distribution-problem.html
# Thread:

1. ## Poisson distribution problem

If a random variable is Poisson distributed with a mean of 5.2, determine (a) the standard deviation, (b) the probability that the random variable will have a value of three or less, and (c) the probability that the random variable will have a value of more than 6.

2. The standard deviation is $\sqrt{5.2}$.

$P(X\le 3)= P(X=0)+...+P(X=3)$ where $P(X=x)={e^{-5.2}(5.2)^x\over x!}$

$P(X>6)= 1-\biggl(P(X=0)+...+P(X=6)\biggr)$

3. Thank you so much! Hey, I have just one more problem, on control charts. Can I post it in this forum?
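For anyone who wants the numerical values behind post 2, they are quick to compute with SciPy (an illustrative check, not part of the original thread):

```python
# Numerical values for the Poisson(5.2) questions above.
from math import sqrt
from scipy.stats import poisson

lam = 5.2
print(sqrt(lam))             # (a) standard deviation, about 2.28
print(poisson.cdf(3, lam))   # (b) P(X <= 3)
print(poisson.sf(6, lam))    # (c) P(X > 6) = 1 - P(X <= 6)
```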
http://mathoverflow.net/questions/20740/is-there-an-introduction-to-probability-theory-from-a-structuralist-categorical-p/25647
## Is there an introduction to probability theory from a structuralist/categorical perspective?

The title really is the question, but allow me to explain. I am a pure mathematician working outside of probability theory, but the concepts and techniques of probability theory (in the sense of Kolmogorov, i.e., probability measures) are appealing and potentially useful to me. It seems to me that, perhaps more than most other areas of mathematics, there are many, many nice introductory (as well as not so introductory) texts on this subject. However, I haven't found any that are written from what is arguably the dominant school of thought of contemporary mainstream mathematics, i.e., from a structuralist (think Bourbaki) sensibility. E.g., when I started writing notes on the texts I was reading, I soon found that I was asking questions and setting things up in a somewhat different way. Here are some basic questions I couldn't stop from asking myself:

[0) Define a Borel space to be a set $X$ equipped with a $\sigma$-algebra of subsets of $X$. This is already not universally done (explicitly) in standard texts, but from a structuralist approach one should gain some understanding of such spaces before one considers the richer structure of a probability space.]

1) What is the category of Borel spaces, i.e., what are the morphisms? Does it have products, coproducts, initial/final objects, etc.? As a significant example here I found the notion of the product Borel space -- which is exactly what you think if you know about the product topology -- but it seemed underemphasized in the standard treatments.

2) What is the category of probability spaces, or is this not a fruitful concept (and why?)? For instance, a subspace of a probability space is, apparently, not a probability space: is that a problem? Is the right notion of morphism of probability spaces a measure-preserving function?

3) What are the functorial properties of probability measures? E.g., what are basic results on pushing them forward, pulling them back, passing to products and quotients, etc.? Here again I will mention that the product of an arbitrary family of probability spaces -- which is a very useful-looking concept! -- seems not to be treated in most texts. Not that it's hard to do: see e.g. http://www.math.uga.edu/~pete/saeki.pdf

I am not a category theorist, and my taste for how much categorical language to use is probably towards the middle of the spectrum: that is, I like to use a very small categorical vocabulary (morphisms, functors, products, coproducts, etc.) as often as seems relevant (which is very often!). It would be a somewhat different question to develop a truly categorical take on probability theory. There is definitely some nice mathematics here, e.g. I recall an arxiv article (unfortunately I cannot put my hands on it at this moment) which discussed independence of events in terms of tensor categories in a very persuasive way. So answers which are more explicitly categorical are also welcome, although I wish to be clear that I'm not asking for a categorification of probability theory per se (at least, not so far as I am aware!).

- I am certainly not an expert, but I was looking for a similar thing, and found Dudley's book (books.google.com/…) promising. He doesn't mention categories at all, but it seems that he has them in mind. In particular, he defined "measurable function" between any two measurable spaces (p.
116), [which is different from the definition in Rudin]. Also, while he proves the existence of countable products of probability spaces, he does remark on converting the proof to an arbitrary product (p. 259). – unknown (google)
- This is not developed enough to be a (partial) answer rather than a comment, but see perhaps: golem.ph.utexas.edu/category/2007/02/… (and other google/Mathscinet results for "Giry monad") – Yemon Choi
- One thing I thought I'd mention - as a probabilist manqué - is a comment at the beginning of Williams' Probability with Martingales, where he says something along the lines of "it would be nice if we could think of random variables as equivalence classes of functions rather than functions, so that we don't need to keep inserting 'a.e.' everywhere; but this point of view runs into trouble when dealing with continuous-time stochastic processes". Which implies he is not keen on the 'structuralist POV', although it doesn't rule out the possibility. – Yemon Choi
- Something that you may want to consider is the fact that probability spaces are not the essential objects in probability, for at least two reasons. First, it is very common to change the underlying probability space, as long as the distributions of the relevant random variables remain the same. This allows one to consider new events along the way. As suggested in Neel's answer, this may have a categorical formulation. But worse than that is the fact that often (every time martingales appear, at least) you want to leave the space unchanged and vary the sigma-algebra. – Andrea Ferretti
- Indeed, one of the major differences between measure theory and probability theory (besides the perspective being completely different) is that in measure theory one fixes one sigma-algebra, and in probability one considers relationships between multiple sigma-algebras. – Mark Meckes

## 9 Answers

One can argue that an object of the right category of spaces in measure theory is not a set equipped with a σ-algebra of measurable sets, but rather a set S equipped with a σ-algebra M of measurable sets and a σ-ideal N of M consisting of sets of measure 0. The reason for this is that you can hardly state any theorem of measure theory or probability theory without referring to sets of measure 0. However, objects of this category contain less data than the usual measured spaces, because they are not equipped with a measure. Therefore I prefer to call them measurable spaces.

A morphism of measurable spaces (S,M,N)→(T,P,Q) is a map S→T such that the preimage of every element of P is a union of an element of M and a subset of an element of N, and the preimage of every element of Q is a subset of an element of N.

Irving Segal proved that for a measurable space the following properties are equivalent:

(1) The Boolean algebra M/N of equivalence classes of measurable sets is complete;
(2) The space of equivalence classes of all bounded (or unbounded) real-valued functions on S is Dedekind-complete;
(3) The Radon-Nikodym theorem is true for (S,M,N);
(4) The Riesz theorem is true for (S,M,N);
(5) Equivalence classes of bounded functions on S form a von Neumann algebra (aka W*-algebra);
(6) (S,M,N) is a coproduct (disjoint union) of points and real lines.

A measurable space that satisfies these conditions is called localizable.
This theorem tells us that if we want to prove anything nontrivial about measurable spaces, we had better restrict ourselves to localizable measurable spaces. We also have a nice illustration of the claim I made in the first paragraph: none of these statements would be true without identifying objects that differ on a set of measure 0. For example, take a non-measurable set G and a family of one-element subsets of G indexed by themselves. This family of measurable sets does not have a supremum in the Boolean algebra of measurable sets, thus disproving a naïve version of (1).

Another argument for restricting to localizable spaces is the following version of the Gelfand-Neumark theorem: the category of localizable measurable spaces is equivalent to the category of commutative von Neumann algebras (aka W*-algebras) and their morphisms (normal unital homomorphisms of *-algebras).

I actually prefer to define the category of localizable measurable spaces as the opposite category of the category of commutative W*-algebras. The reason for this is that the classical definition of measurable space exhibits immediate connections only to descriptive set theory (and with additional effort to Boolean algebras), which are mostly irrelevant for the central core of mathematics, whereas the description in terms of operator algebras immediately connects measure theory to other areas of the central core (noncommutative geometry, algebraic geometry, complex geometry, differential geometry, etc.). Also it is easier to use in practice. Let me illustrate this statement with just one example: when we try to define measurable bundles of Hilbert spaces on a localizable measurable space set-theoretically, we run into all sorts of problems if the fibers can be non-separable, and I do not know how to fix this problem in the set-theoretic framework. On the other hand, in the algebraic framework we can simply say that a bundle of Hilbert spaces is a Hilbert module over the corresponding W*-algebra.

Categorical properties of W*-algebras (hence of localizable measurable spaces) were investigated by Guichardet. An electronic version of this paper is available here. Let me mention some of his results. The category of localizable measurable spaces admits equalizers and coequalizers, arbitrary coproducts, and hence arbitrary colimits. It also admits products, although they are quite different from what one might think. For example, the product of two real lines is not R^2 with the two obvious projections. The product contains R^2, but it also has a lot of other stuff, for example the diagonal of R^2, which is needed to satisfy the universal property for the two identity maps on R. The more intuitive product of measurable spaces (R×R=R^2) corresponds to the spatial tensor product of von Neumann algebras and forms a part of a symmetric monoidal structure on the category of measurable spaces. See Guichardet's paper for other categorical properties (monoidal structures on measurable spaces, flatness, existence of filtered limits, etc.).

Finally let me mention pushforward and pullback properties of measures on measurable spaces. I will talk about the more general case of L^p spaces instead of just measures (i.e., L^1 spaces). For the sake of convenience let L_p(M) := L^{1/p}(M), where M is a measurable space. Here p can be an arbitrary complex number with a nonnegative real part. Note that you don't need a measure on M to define L_p(M).
In particular, L_0 is the space of all bounded functions (i.e., the W*-algebra itself), L_1 is the space of finite complex-valued measures (the dual of L_0 in the σ-weak topology), and L_{1/2} is the Hilbert space of half-densities. I will also talk about the extended positive part E^+L_p of L_p for real p. In particular, E^+L_1 is the space of all (not necessarily finite) positive measures.

Pushforward for L_p spaces: Suppose we have a morphism of measurable spaces M→N. If p=1, then we have a canonical map L_1(M)→L_1(N), which is just the dual of L_0(N)→L_0(M) in the σ-weak topology. Geometrically, this is the fiberwise integration map. If p≠1, then we only have a pushforward map of the extended positive parts: E^+L_p(M)→E^+L_p(N), which is non-additive unless p=1. Geometrically, this is the fiberwise L_p norm. Thus L_1 is a functor from the category of measurable spaces to the category of Banach spaces, and E^+L_p is a functor to the category of "positive homogeneous p-cones". The pushforward map preserves the trace on L_1 and hence sends a probability measure to a probability measure.

To define the pullback of L_p spaces (in particular, L_1 spaces) one needs to pass to a different category of measurable spaces. In algebraic language, if we have two W*-algebras A and B, then a morphism from A to B is a usual morphism of W*-algebras f: A→B together with an operator valued weight T: E^+(B)→E^+(A) associated to f. Here E^+(A) denotes the extended positive part of A (think of positive functions on Spec A that can take infinite values). Geometrically, this is a morphism Spec f: Spec B→Spec A between the corresponding measurable spaces and a choice of measure on each fiber of Spec f. Now we have a canonical additive map E^+L_p(Spec A)→E^+L_p(Spec B), which makes E^+L_p into a contravariant functor from the category of measurable spaces equipped with a fiberwise measure to the category of "positive homogeneous additive cones".

If we want to have a pullback of the L_p spaces themselves and not just their extended positive parts, we need to replace operator valued weights in the above definition by finite complex-valued operator valued weights T: B→A (think of a fiberwise complex-valued measure). Then L_p becomes a functor from the category of measurable spaces to the category of Banach spaces (if the real part of p is at most 1) or quasi-Banach spaces (if the real part of p is greater than 1). Here p is an arbitrary complex number with a nonnegative real part. Notice that for p=0 we get the original map f: A→B, and in this (and only this) case we don't need T. Finally, if we restrict ourselves to an even smaller subcategory of measurable spaces equipped with a finite operator valued weight T such that T(1)=1 (i.e., T is a conditional expectation; think of a fiberwise probability measure), then the pullback map preserves the trace on L_1 and in this case the pullback of a probability measure is a probability measure.

There is also a smooth analog of the theory described above: the category of measurable spaces and their morphisms is replaced by the category of smooth manifolds and submersions, L_p spaces are replaced by bundles of p-densities, operator valued weights are replaced by sections of the bundle of relative 1-densities, the integration map on 1-densities is defined via Poincaré duality (to avoid circular dependence on measure theory), etc. The forgetful functor that sends a smooth manifold to its underlying measurable space commutes with everything and preserves everything.
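For orientation (my own aside, not part of the answer), the most down-to-earth special case of the $L_1$ pushforward described above is the ordinary pushforward of a measure along a measurable map $f:M\to N$:

```latex
% Elementary special case of the L_1 pushforward (illustrative aside):
(f_*\mu)(B) \;=\; \mu\bigl(f^{-1}(B)\bigr)
  \qquad \text{for every measurable } B \subseteq N,
% with the change-of-variables formula for bounded measurable g : N -> R:
\int_N g \, \mathrm{d}(f_*\mu) \;=\; \int_M (g \circ f) \, \mathrm{d}\mu .
% If mu is a probability measure, then so is f_* mu, matching the statement
% that the pushforward preserves the trace on L_1.
```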
- This is great! I wish I could vote you up multiple times. – Neel Krishnaswami
- I agree. Great answer! – George Lowther

In the spirit of this answer to a different question, I'll offer a contrarian answer. How to understand probability theory from a structuralist perspective:

Don't.

To put it less provocatively, what I really mean is that probabilists don't think about probability theory that way, which is why they don't write their introductory books that way. The reason probabilists don't think that way is that probability theory is not about probability spaces. Probability theory is about families of random variables. Probability spaces are the mathematical formalism used to talk about random variables, but most probabilists keep the probability spaces in the background as much as possible. Doing probability theory while dwelling on probability spaces is a little like doing number theory while dwelling on a definition of 1 as `$\{\{\}\}$` etc. (That last sentence is definitely an overstatement, but I can't think of a more apt analogy offhand.)

That said, multiple perspectives are always good to have, so I'm very happy you asked this question and that you've gotten some very nice noncontrarian answers that I hope to digest better myself.

Added: Here is something which is perhaps more similar to dwelling on probability spaces. To set the stage for graph theory carefully, one may start by defining a graph as a pair $(V,E)$ in which $V$ is a (finite, nonempty) set and $E$ is a set of cardinality 2 subsets of $V$. You need to start tweaking this in various ways to allow loops, directed graphs, multigraphs, infinite graphs, etc. But worrying about the details of how you do this is a distraction from actually doing graph theory.

- Indeed, I saw a quote from somebody famous (if I think of the author I'll edit) to the effect that "one could say that probability theory is the study of measure spaces with measure one, but this is like saying that number theory is the study of finite strings of the digits {0,...,9}." – Nate Eldredge
- Another great quote along the same lines, from Rudin (Real and Complex Analysis, page 18 in my edition): "For instance, the real line may be described as a quadruple $(R^1, +, \cdot, <)$ where $+$, $\cdot$, and $<$ satisfy the axioms of a complete archimedean ordered field. But it is a safe bet that very few mathematicians think of the real field as an ordered quadruple." – Carl Offner
- That is a great quote, but it doesn't make all the points Nate's quote does. If you think of the reals as a quadruple, you have the formalism necessary to understand and prove theorems about real numbers, although you may lack the intuition needed to appreciate the theorems. But if you think of natural numbers as strings of digits, you're missing not only intuition but also interesting algebraic structure. Likewise, a measure space with measure one is insufficient structure for probability; you need some additional algebraic or geometric structure before you can even talk about expectations. – Mark Meckes
- You can't talk about expectations if all you have is a probability space.
You need to look at a measurable function (random variable) from your probability space into $\mathbb{R}$ or a similar algebraic structure; or equivalently you need your probability space itself to have some algebraic structure. – Mark Meckes May 31 2010 at 3:16 9 Just for the record, that quote is my own, though the general sentiment that probability is not about measure spaces is certainly very widely held among probabilists. – Terry Tao Sep 3 2010 at 19:08

A few months ago, Terry Tao had a really insightful post about "the probabilistic way of thinking", in which he suggested that a nice category of probability spaces was one in which the objects were probability spaces and the morphisms were extensions (i.e., measurable surjections which are probability preserving). By avoiding looking at the details of the sample space, you can elegantly capture the style of probabilistic arguments in which you introduce new sources of randomness as needed.

- This is a very interesting (and impressively long!) set of notes; thanks for linking to them. I may make more comments after I've digested them. – Pete L. Clark Apr 9 2010 at 3:11

As already noted, most probabilists identify random variables essentially with their distribution. The problem is that the kinds of operations one can do with random variables often depend on the spaces they are defined on. The probability spaces random variables are usually defined on, such as the unit interval with Lebesgue measure, do not allow for all the constructions one wants to make (an uncountable family of independent random variables, for example). In order to make possible all the constructions one wants to work with, one needs to work with more esoteric tools from measure theory. The problem is even larger when one turns to stochastic processes or adapted stochastic processes. For this reason, people have worked on probability theory from the model-theoretic point of view, which gives answers to existence questions much closer to the categorical view. A relatively readable introduction to this field is given in the book "Model Theory of Stochastic Processes" by Fajardo and Keisler. Their paper Existence Theorems in Probability might also be of interest.

- 1 I should have added a caveat to my answer that working with stochastic processes forces one to grapple with probability spaces more. But I've never heard of anyone actually wanting to consider an uncountable family of independent random variables. – Mark Meckes Apr 9 2010 at 13:19 1 They actually occur in mathematical economics. One wants to have a continuum of agents to apply analysis, and one wants their independent actions to cancel out in the aggregate. One wants a law of large numbers for such cases. Finding spaces on which one can make this work turned out to be hard but possible. – Michael Greinecker Apr 9 2010 at 13:36

A category consists of a class of objects together with a class of morphisms. Measure theory together with morphisms between measure spaces is the topic of ergodic theory. So if you are interested in a categorical viewpoint on measure theory, just take a look at advanced books on ergodic theory. Now some references. Glasner's book "ergodic theory via joinings" is probably the closest to a full-blown categorical account of some basic concepts in ergodic theory. Rudolph's "Fundamentals of measurable dynamics: ergodic theory on Lebesgue spaces" is also pretty geared toward such an account.
If you are interested in applications of ergodic theory to Lie group actions and Diophantine approximation, you should consult the appendices in the book of R. Zimmer, "ergodic theory and semisimple Lie groups". These appendices summarize the categorical results relevant to these questions. Note however that most books on ergodic theory are pretty quick on the categorical stuff. Ergodic theory is a subject which is of interest to group theorists, dynamics people, probabilists, combinatorialists, physicists, computer scientists,... So, really, it makes no sense to spend too much time on foundational material that is irrelevant to most of these people, and to most applications. In contrast to algebraic geometry, which is built like a cathedral, and for which category theory is very interesting foundational material, ergodic theory is more like a bazaar. Its structure is definitely transverse to the usual classification of mathematics (algebra, analysis, geometry), and even transverse to the classification of science (math, physics, computer science, biology) you may be accustomed to. Much of the steam in ergodic theory comes from the many interactions between these communities. It is absolutely crucial to keep the entrance level as low as possible to get as many people as possible on the boat. Putting forward a categorical approach in the textbooks or in conferences would do much harm to the field.

The references I provide should answer your four questions. Let me just add a comment. If you define a Borel space as a set endowed with a $\sigma$-algebra, you will soon run into many problems (e.g. a morphism at the level of the algebras does not necessarily come from a map between the sets; also, a non-Borel non-Lebesgue-measurable subset of $[0,1]$ endowed with the Lebesgue measure is a perfectly well defined measure space, and you definitely don't want it), so that's why people don't usually define it that way. There are two choices in use at the moment: standard Borel spaces and Lebesgue spaces. I am in the second camp, but it would be too long to explain why.

- 1 Pete, I think you were too quick to dismiss coudy's answer (and frankly, I don't see how you found it disrespectful). Many results and methods in probability theory can be rephrased in terms of ergodic theory, which means this is a perfectly on-topic response to your question. – Tom LaGatta May 24 2010 at 1:14 1 @Tom LaGatta: I agree with you, and a while ago I deleted these comments and removed my downvote. In some sense coudy's answer is the closest I have received so far to one of the aspects of the question, although I still maintain that it is not quite dead-on. I have explained this in more detail in my CW answer below. – Pete L. Clark Dec 19 2010 at 10:44

I want to post the following as a comment on many of the answers and comments already given. Several people have said, "Well, watch out -- probability theory is not really the study of probability measures, but rather the study of certain quantities preserved under certain equivalence relations on probability measures, like distribution functions." I certainly accept this point. In fact, I had more or less accepted it before I asked the question, although I admittedly didn't give much indication of this in the question itself. To be clear, I am aware that the "rewriting" impulses I have when reading about basic measure-theoretic probability are taking me in a direction away from the material of mainstream probability theory.
I have two responses to this:

1) Okay, let's agree that the definition and study of a category of probability spaces is not the domain of probability theory per se. But this does not mean that it's not useful or worth studying.

1a) If this branch of mathematics is not probability theory, what is it? [User "coudy" gave an answer saying that this is ergodic theory. I was unduly dismissive of this answer at first, and I apologize for that. I still don't think that "ergodic theory" is exactly the answer to my question, for instance because so far as I understand the subject it focuses almost exclusively on the dynamical aspects of iterating a measure-preserving transformation of a probability space. (By way of analogy, the branch of mathematics that studies the category of finite type schemes over a field $K$ is arithmetic geometry, not arithmetic dynamics.)]

1b) While I agree that probability theory is at present not concerned with such structuralist questions, is it clear that it shouldn't be? Or, in less polemical terms, is there no advantage or insight to be gained by studying the structural aspects of probability spaces?

2) I think an outsider to probability theory has a right to ask: "Okay, if probability spaces are really not the point of probability theory, why do they appear so prominently in all (so far as I know) modern foundations of the subject? Wouldn't -- or couldn't -- it be better to isolate exactly the structure that probability theory actually does care about and study this structure explicitly from the outset?"

By way of analogy, consider the notion of a "differentiable atlas" in the study of smooth manifold theory. Gian-Carlo Rota referred to atlases as a polite fiction, meaning (I think) that they are present in the foundations of the subject but do not really exist in the sense that the practitioners of the subject do not think about them and ask questions about them. They don't do any harm so long as you don't take them very seriously, but I have seen students get caught up on this point and "ask too many questions". The more modern approach of a structure sheaf seems like an improvement here -- it does the same work as an atlas but is something that the practitioners of the subject actually care about, so it is not at all a waste of time to "think deeply about structure sheaves". Indeed, the concept of "structure sheaf" is incredibly prevalent in other areas of mathematics, to the extent that if you are founding a new branch of geometry, knowing about structure sheaves will ease the birthing process. So the dual question to 1) here is "What is the kind of mathematical structure that probability theorists are interested in studying?" (Happily, many of the very nice answers above do in fact address this question.)

- Unfortunately I didn't see this nice answer/commentary until just now. A propos of your question 2), you might be interested to look at the classic two-volume probability text by Feller, in which probability spaces play a surprisingly small role and are not even introduced until well into volume 2. – Mark Meckes Sep 4 at 15:27

There is an early paper by Victor Bogdan called "A new approach to the theory of probability via algebraic categories" (#54 here) which may be of interest. -

Last year Voevodsky gave a talk at MIAN about his approach to probability theory; a video recording in Russian is available online. I do not know if anything is written on this.
There was also an old Russian book (in Russian, afaik not translated, from the 70s) developing a somewhat similar approach, but I do not quite remember the reference. I could look for it, though, if there is interest... - 3 I don't know what is possible after 2 years - but if you're still here, I am interested – Ilya Mar 31 2012 at 20:08

I found this: http://etd.library.pitt.edu/ETD/available/etd-04202006-065320/unrestricted/Matthew_Jackson_Thesis_2006.pdf. I also found "Bichteler: Integration" (Springer LNM 315); it is about the foundations of the theory, the style is similar to Bourbaki, and it may be adaptable to a categorical view. -
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 15, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9516206383705139, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/295545/sam-has-to-go-to-work-from-home-sam-moves-either-south-or-east-when-going-to-wo
# Sam has to go to work from home. Sam moves either SOUTH or EAST when going to work. How many distinct routes are there for Sam to go to work?

The following figure depicts the paths from home to work. Sam never travels through the park when going to work.

- Manual counting gives 8 ways. Is there a mathematical process which can give the answer without manual counting? – Rajesh K Singh Feb 5 at 16:11 2 Counting is a mathematical process. – Henning Makholm Feb 5 at 16:28

## 3 Answers

He has to pass through exactly one of E, F, and C. If he passes through E or C then there's trivially only one path (for each) he can take. If he passes through F, there are $\binom{2}{1} = 2$ ways he can get from A to F and $\binom{3}{2} = 3$ ways he can get from F to K, for a total of $2\times 3 = 6$. So in all there are $1+1+6 = 8$ ways he can get to work. -

Write the number of possibilities of reaching a vertex $X$ from $A$. It starts with $A:1$, as getting to $A$ is unique; both $B$ and $D$ also have only $1$ possibility. They simply add up: for example, the possibilities of arriving at $I$ are the possibilities of arriving at either $H$ or $G$ (followed by the unique $HI$ path, resp. $GI$ path). $$A:1,\ B:1,\ D:1,\ F:2,\ E:1,\ G:3,\ H:2,\ I:5,\ C:1,\ J:3,\ K:8$$ -

If the park were not there, there would be ${3+2 \choose 2}=10$ possible paths. Adding the park cuts off one possible line segment Sam could walk along (from the $BC$ midpoint to $H$). There is only one way he could get to the top of that line segment, and $2$ possible continuations once he reaches $H$. So the total number of paths that don't go through the park is $10 - 1 \cdot 2 = 8$. -
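As a small illustration of the last answer's count (not part of the original answers), here is a sketch in Haskell. Since the figure is not reproduced here, it assumes, as that answer indicates, a grid requiring 3 east and 2 south steps, with one blocked segment that exactly 1 partial path reaches and exactly 2 partial paths leave.

```haskell
-- Count monotone grid paths and subtract those forced through the blocked
-- segment (a sketch; the grid dimensions are taken from the last answer).
choose :: Integer -> Integer -> Integer
choose n k = product [n - k + 1 .. n] `div` product [1 .. k]

unrestricted :: Integer
unrestricted = choose (3 + 2) 2      -- 10 paths if the park were not there

throughPark :: Integer
throughPark = 1 * 2                  -- 1 way to reach the segment, 2 ways onward

distinctRoutes :: Integer
distinctRoutes = unrestricted - throughPark   -- evaluates to 8
```

The vertex-labelling answer amounts to the same computation: each label is the sum of the labels of its admissible predecessors, which is exactly how the binomial coefficients arise.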
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 22, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9711205959320068, "perplexity_flag": "middle"}
http://stats.stackexchange.com/questions/28573/blocking-and-confounding-in-replicated-2k-factorial-design
# Blocking and confounding in replicated $2^k$ factorial design

Consider the $2^6$ factorial in 8 blocks of eight runs each, with ABCD, ACE and ABEF as the independent effects chosen to be confounded with blocks. Generate a design. Find the other effects confounded with blocks.

I can solve the second part of the question. I found the solution of the first part in a solutions manual (question 7-11) for Montgomery's textbook on the Design and Analysis of Experiments (Wiley, 2001), but I could not understand this table. I would like someone to explain this table. If we change the number of blocks, how do we construct the design?

- 3 What have you done (other than finding a solutions manual)? Where did you get stuck? Could you please provide more detail? This looks like a cut-and-paste homework assignment with no context. – David May 16 '12 at 3:04 I have reviewed the material from "Statistical Principles of Research Design and Analysis" by Kuehl, especially the confounding and aliasing structure of designs. I am not doing homework, just trying to understand factorial designs and their confounding structure by using this kind of problem. When the design is only a $2^3$ factorial, the highest-order interaction is ABC, and in this case we can use the even-odd rule to construct the blocks; but if the number of factors is more than three, there is a large number of possible factor-effect combinations. In such a case, how can we construct the blocks? – David May 16 '12 at 9:18
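No answer was posted, but the mechanical part can be spelled out: the effects confounded with blocks are the three chosen generators together with all of their generalized interactions (multiply generators and reduce exponents mod 2), and the block assignment generalizes the even-odd rule mentioned in the comment. The sketch below is illustrative only and is not taken from Montgomery's solutions manual; the function names are mine.

```haskell
import Data.Char (toLower)
import Data.List (sort, subsequences)

generators :: [String]
generators = ["ABCD", "ACE", "ABEF"]

-- Generalized interaction: letters appearing an odd number of times,
-- i.e. the symmetric difference of the letter sets (exponents mod 2).
symDiff :: String -> String -> String
symDiff a b = sort (filter (`notElem` b) a ++ filter (`notElem` a) b)

-- All effects confounded with blocks: every non-empty product of generators.
confounded :: [String]
confounded = [ foldr symDiff "" g | g <- subsequences generators, not (null g) ]
-- yields ABCD, ACE, BDE, ABEF, CDEF, BCF, ADF (seven effects for 8 blocks)

-- Block assignment, generalizing the even-odd rule: a treatment combination
-- (written in lower case, e.g. "abce") gets one parity per generator; the
-- 2^3 = 8 distinct parity vectors are the 8 blocks.
blockIndex :: String -> [Int]
blockIndex treatment =
  [ length (filter (`elem` map toLower g) treatment) `mod` 2 | g <- generators ]
```

Changing the number of blocks just changes the number of independent generators: $2^q$ blocks require $q$ generators, and the confounded effects are again all of their generalized interactions.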
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 3, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9400739073753357, "perplexity_flag": "middle"}
http://cogsci.stackexchange.com/questions/tagged/linguistics+reading
# Tagged Questions

### Judgments of similarity between samples of writing (1 answer, 99 views)
I was thinking last night about the possibility of an experiment that investigates the factors contributing to people's judgments of 'stylistic similarity' between two samples of writing. For example, ...

### How long does it take to read X number of characters? (5 answers, 296 views)
How does the time needed to read a sentence scale with the number of characters? Or does this time scaling depend on something more than just character count? For example, let $X$ be the number of ...

### How to get rid of subvocalization? (1 answer, 815 views)
When I read a text written in the Latin alphabet and I want to understand what it means, I usually transform each word into a spoken word (internal speech) and then I transform it into meaning. I can't ...
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9537374377250671, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/18491/watercooling-performance-and-amount-of-water-or-liquid-in-the-loop
# Watercooling performance and amount of water or liquid in the loop

Imagine a watercooling system. It consists of a reservoir, pump, tubes and a radiator. We all know how it works. My question is: will merely increasing the reservoir's volume, say from 1 liter to 2, actually increase the performance? I do remember some thermodynamic laws from school: will the delta-T of the water to ambient after a given time be the same? Or will it perform better because there is more water in the system?

-

## 1 Answer

The volume of the reservoir does not affect the performance directly. The performance (heat taken from the device being cooled per second) is determined mainly by the temperature of the water. This temperature must not increase, and this should be ensured with minimal waste of energy. The water in the cooling system goes through the following loop:

1. The water of temperature $T_\text{min}$ comes to the radiator.
2. In the radiator the water takes energy from the device being cooled and its temperature rises to $T_\text{max}$. Let's denote the integral thermal conductivity of the radiator as $\varkappa_\text{rad}$.
3. The water of temperature $T_\text{max}$ leaves the radiator and comes into contact with the environment (usually with the atmosphere) of temperature $T_\text{env}$. The integral thermal conductivity of this part of the system is $\varkappa_\text{env}$. The temperature of the water changes to $T_\text{res}$ during this process.
4. The water of temperature $T_\text{res}$ comes to the reservoir. If the reservoir has some additional cooling system (characterized by $\varkappa_\text{res}$) then the temperature of the water decreases to $T_\text{min}$, else $T_\text{min} = T_\text{res}$.

The thermal conductivity values here are integral values that take into account the fact that the temperature of the water is not constant inside each radiator. The thermal balance equation is $$Q_\text{rad} = Q_\text{env} + Q_\text{res}$$ or $$(T_\text{max} - T_\text{min}) = (T_\text{max} - T_\text{env}) + (T_\text{res} - T_\text{min}).$$ The efficiency of the main radiator, $\varkappa_\text{rad}$, should always be maximal. For $\varkappa_\text{env}$ and $\varkappa_\text{res}$ we have to consider different cases.

### Water is cooled by the environment

$$T_\text{min} = T_\text{res} = T_\text{env} < T_\text{max}$$ The temperature of the environment is low enough for effective cooling. We don't need to waste energy for additional cooling and can make $Q_\text{res} = 0$ and $\varkappa_\text{res} = 0$. The value of $\varkappa_\text{env}$ should be maximized as well as $\varkappa_\text{rad}$. The volume is not important here. We need maximal radiator surface area. If the volume is held constant, increasing the surface leads to high flow resistance, decreasing the flow through the main radiator and causing overheating. Hence the volume should be large enough, but adding a barrel in the middle of the tube will not help much. One can make the external radiator as large as he wants or even use a nearby lake.

### Water is cooled inside the reservoir

$$T_\text{min} < T_\text{res} = T_\text{max} < T_\text{env}$$ The environment is too hot for the device and we need to exclude it from the system: $Q_\text{env} = 0$ and $\varkappa_\text{env} = 0$. The value of $\varkappa_\text{res}$ should be maximized as well as $\varkappa_\text{rad}$. But if the reservoir is big, the heat exchange with the surroundings is also high. The reservoir should be big enough to contain a proper cooling system, but not larger.
### Water is cooled by everything

$$T_\text{min} < T_\text{res} = T_\text{env} < T_\text{max}$$ The environment is colder than the device but is not cold enough to provide proper cooling. The water will be cooled in two stages:

1. cool it to the environment's temperature, to get better starting conditions for the next stage of cooling and to save energy;
2. cool the water inside the reservoir.

Here both $\varkappa_\text{env}$ and $\varkappa_\text{res}$ should be maximized and two reservoirs are required. The first one can be as large as we want (for the exchange with the environment), but no smaller than the minimal size needed for the heat exchange. The second one should be as small as possible, just to provide enough cooling.

### Conclusion

The volume of the reservoir does not affect the performance directly. The only requirement is: there should be enough water for the flow we want.

- wow thanks a lot for complete answer! – Sean87 Dec 19 '11 at 21:16
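As a back-of-the-envelope complement (not part of the answer above): at steady state, the temperature rise of the coolant across the loop depends only on the heat load and the mass flow rate, not on how much water sits in the reservoir. The sketch below assumes plain water with a specific heat of roughly 4186 J/(kg·K).

```haskell
-- Steady-state coolant temperature rise: deltaT = Q / (mdot * c_p).
-- Illustrative only; assumes plain water, c_p ~ 4186 J/(kg*K).
specificHeatWater :: Double
specificHeatWater = 4186

deltaT :: Double   -- heat load, in watts
       -> Double   -- mass flow rate, in kg/s
       -> Double   -- temperature rise, in kelvin
deltaT heatLoad massFlow = heatLoad / (massFlow * specificHeatWater)

-- e.g. a 200 W load at roughly 1 litre per minute (~0.0167 kg/s):
-- deltaT 200 0.0167 ~= 2.9 K, whether the reservoir holds 1 litre or 2 litres.
-- A larger reservoir only lengthens the warm-up transient before steady state.
```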
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 24, "mathjax_display_tex": 5, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9407049417495728, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/246144/what-is-the-nth-derivative-of-dfrac1-sqrt1-x2
# What is the nth derivative of $\dfrac{1}{\sqrt{1 + x^2}}$

I'm trying to find a general formula for the $n$th derivative of $$\dfrac{1}{\sqrt{1 + x^2}}$$ I got up to, \begin{eqnarray*} g^{(0)}(x) &=& g(x) \\ g^{(1)}(x) &=& \dfrac{1}{(1 + x^2)^{1/2}} \\ g^{(2)}(x) &=& \dfrac{-x}{(1 + x^2)^{3/2}} \\ g^{(3)}(x) &=& \dfrac{2x^2 - 1}{(x^2 + 1)^{5/2}} \\ g^{(4)}(x) &=& \dfrac{-6x^3 + 9x}{(x^2 + 1)^{7/2}} \\ g^{(5)}(x) &=& \dfrac{24x^4 - 72x^2 + 9}{(x^2 + 1)^{9/2}} \\ g^{(6)}(x) &=& \dfrac{-120x^5 + 600x^3 - 225x}{(x^2 + 1)^{11/2}} \\ g^{(7)}(x) &=& \dfrac{720x^6 - 5400x^4 + 4050x^2 -225}{(x^2 + 1)^{13/2}} \\ g^{(8)}(x) &=& \dfrac{-5040x^7 + 52920x^5 - 66150x^3 + 11025x}{(x^2 + 1)^{15/2}} \\ \end{eqnarray*} Except for the first term in the numerator ($n!$), and the power in the denominator, I couldn't find a general pattern for the rest of the coefficients in the numerator. Could anyone shed some light on this problem? Any idea would be greatly appreciated. Thanks.

- Is this part of a Taylor Series question, I presume? Also, it seems like your derivatives are off by order $1$. What I mean is, the $0$th order derivative is just your original, $\frac{1}{\sqrt{1+x^2}}$. – Joe Nov 28 '12 at 2:36 If this is part of finding the Maclaurin Series for your given function $g(x)$, you can use the well-known fact for the Maclaurin Series for a binomial expansion: $$(1+x)^k = \sum_{n=0}^{\infty} {k\choose n}x^n = 1 + kx + \frac{k(k-1)}{2!}x^2 + \frac{k(k-1)(k-2)}{3!}x^3 + \cdots$$ In your case, $$g(x) = \frac{1}{\sqrt{1+x^2}} = (1+x^2)^{-1/2}$$ So, $k = -\frac{1}{2}$ and $x = x^2$ per the $(1+x)^k$ part. – Joe Nov 28 '12 at 2:45 @Joe: Thanks a lot. Yes, it's part of the Taylor series question. – Chan Nov 28 '12 at 3:51

## 1 Answer

Since this is part of finding the Maclaurin Series for your given function $g(x)$, you can use the well-known fact for the Maclaurin Series for a binomial expansion: $$(1+x)^k = \sum_{n=0}^{\infty} {k\choose n}x^n = 1 + kx + \frac{k(k-1)}{2!}x^2 + \frac{k(k-1)(k-2)}{3!}x^3 + \cdots$$ $$g(x) = \frac{1}{\sqrt{1+x^2}} = (1+x^2)^{-1/2}$$ So, $k = -\frac{1}{2}$ and we substitute $x^2$ for $x$ in the expansion of $(1+x)^k$. If you need further help writing out the binomial expansion for your function, let me know and I'll fill in some more details. Note that this only holds when our Taylor Series is about a point $a = 0$. That is, it is a Maclaurin Series. If $a \ne 0$, only then do we need to compute the nth order derivative about $a$. For a given Taylor Series, we can express a function $f$ as: $$f(x) = \sum_{n=0}^{\infty} \frac{f^{(n)}(a) (x-a)^n}{n!}$$

- Thanks a lot. However, I can't use the Maclaurin Series fact. I still believe there must be a general formula for the $n$th derivative. – Chan Nov 28 '12 at 5:12 I'll try working out a formula for the $n$th order derivative then. Give me a few. What point $a$ is your series about? – Joe Nov 28 '12 at 5:22 1 $a = 0$, I've just figured it out based on your answer. Just plug in $a = 0$, then the remaining sequence is: -1, 1, 9, -225, 11025. Once again, thanks a bunch Joe. – Chan Nov 28 '12 at 5:27 1 For odd $n$, $g^{(n)}(0)=0$. For even $n=2k$, $g^{(n)}(0)=\frac{(-1)^{k}((2k)!)^2}{4^{k}(k!)^2}$. This is found by expanding $\binom{-1/2}{k}$ and using the fact that $1\cdot3\cdot\cdots\cdot(2k-1)=\frac{(2k)!}{2\cdot4\cdot6\cdot\cdots\cdot(2k)}$. – alex.jordan Nov 28 '12 at 5:45 Ah, nice catch, Alex. Good work. – Joe Nov 28 '12 at 5:46
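As a quick numerical check of the closed form in the last comments (not part of the accepted answer), the sketch below compares it with the coefficient read off the generalized binomial series; both give 1, -1, 9, -225, 11025 for the even-order derivatives at 0.

```haskell
-- Compare g^(2k)(0) computed from the closed form in the comments with the
-- value (2k)! * C(-1/2, k) read off the binomial series (odd orders vanish at 0).
factorial :: Integer -> Integer
factorial n = product [1 .. n]

closedForm :: Integer -> Rational
closedForm k =
  fromInteger ((-1) ^ k * factorial (2 * k) ^ 2)
    / fromInteger (4 ^ k * factorial k ^ 2)

-- generalized binomial coefficient C(a, k) for fractional a
genBinom :: Rational -> Integer -> Rational
genBinom a k = product [ (a - fromInteger j) / fromInteger (j + 1) | j <- [0 .. k - 1] ]

binomialForm :: Integer -> Rational
binomialForm k = fromInteger (factorial (2 * k)) * genBinom (-1 / 2) k

-- map closedForm [0 .. 4]   gives [1, -1, 9, -225, 11025] (as Rationals)
-- map binomialForm [0 .. 4] gives the same list
```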
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 29, "mathjax_display_tex": 6, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8438680768013, "perplexity_flag": "middle"}
http://quant.stackexchange.com/questions/2032/is-duration-really-the-slope-of-the-price-yield-curve
# Is Duration really the slope of the Price-Yield curve?

When looking at the Price-vs-Yield graph for a fixed-rate instrument, we are often told that the duration is the slope of that curve. But is that really right? Duration is (change in price) divided by (price times change in yield). That's hardly the slope of the curve, which would be (change in price) divided by (change in yield). Yield is expressed in percentage terms, which makes it look relative, but going from 1% to 2% is a relative increase of 100%; it's a 1% increase only in absolute terms. That added factor of price is not constant, and so the slope and duration differ by different ratios for different prices!?

-

## 2 Answers

The Macaulay duration is a measure of how sensitive a bond's price is to changes in interest rates. Duration is related to, but differs from, the slope of the plot of bond price against yield-to-maturity. The slope of the price-yield curve is $-\frac{D}{1+r}P,$ where $D$ is Macaulay duration, $P$ is bond price, and $r$ is yield. Here's how the definition of duration arises. Let's expand the price of a bond, $P$, in terms of the yield-to-maturity, $r$, using Taylor's theorem: $$\Delta P=P(r+\Delta r)-P(r)\approx\frac{\partial P(r)}{\partial r}\Delta r+\frac{1}{2}\frac{\partial^2 P(r)}{\partial r^2}(\Delta r)^2.$$ Since $$P(r)=\sum_{t=1}^{T}\frac{C_t}{(1+r)^t},$$ where $C_t$ are the cash flows, we have that $$\Delta P\approx -\frac{\Delta r}{1+r}\sum_{t=1}^{T}\frac{t\ C_t}{(1+r)^t}+\frac{(\Delta r)^2}{2(1+r)^2}\sum_{t=1}^{T}\frac{t(t+1)C_t}{(1+r)^t},$$ and dividing both sides by $P$, we arrive at the expression $$\frac{\Delta P}{P}\approx -\frac{D}{1+r}\Delta r+\frac{\mathcal C}{2}(\Delta r)^2.$$ Here $$D=\frac{1}{P} \sum_{t=1}^{T}\frac{t\ C_t}{(1+r)^t}$$ is the Macaulay duration, and $$\mathcal C= \frac{1}{P(1+r)^2}\sum_{t=1}^{T}\frac{t(t+1)C_t}{(1+r)^t}$$ is a measure of curvature, or convexity, in the plot of bond price against yield-to-maturity.

-

You are correct: none of the durations are the slope of (the tangent to) the price/yield curve. Rather, the slope is the "dollar duration" = modified duration * Price * -1. This tends to be a rather large number; e.g., under continuous compounding the modified/Macaulay duration of a 100-par 10-year zero-coupon bond is 10.0 years. The slope (of the tangent) at a yield of 5% is -P * D = -(100 * exp(-5% * 10)) * 10 ≈ -606. As a linear approximation, the price change is 606 for a 1 unit change on the x-axis, where 1.0 unit = 100% change (10,000 basis points). In this way, fwiw, the slope (the dollar duration) is also DV01 (aka PVBP) * 10,000, since modified duration * Price / 10,000 = DV01. -
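To make the relationship concrete, here is a small numeric sketch (not taken from either answer) under the usual textbook assumptions of annual cash flows and annually compounded yield; the function names are mine.

```haskell
-- Price, Macaulay duration, and the slope of the price-yield curve
-- ("dollar duration") for a bond with annual cash flows cfs = [c_1 .. c_T]
-- and annually compounded yield r.
price :: Double -> [Double] -> Double
price r cfs = sum [ c / (1 + r) ^ t | (t, c) <- zip [1 :: Int ..] cfs ]

macaulay :: Double -> [Double] -> Double
macaulay r cfs =
  sum [ fromIntegral t * c / (1 + r) ^ t | (t, c) <- zip [1 :: Int ..] cfs ]
    / price r cfs

-- slope of the tangent to the price-yield curve: -D/(1+r) * P = -(modified duration) * P
dollarDuration :: Double -> [Double] -> Double
dollarDuration r cfs = negate (macaulay r cfs / (1 + r)) * price r cfs

-- e.g. a 5-year 6% annual-coupon bond with face 100 at a 5% yield:
--   let cfs = [6, 6, 6, 6, 106]
--   price 0.05 cfs          ~= 104.33
--   macaulay 0.05 cfs       ~= 4.48 years
--   dollarDuration 0.05 cfs ~= -445  (price change per 100% change in yield)
```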
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 8, "mathjax_display_tex": 6, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9513183832168579, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/3441/how-can-i-calculate-the-number-of-potential-combinations-in-a-password
# How can I calculate the number of potential combinations in a password?

If I create a 10-character password with the following requirements:

• At least one uppercase letter A-Z - 26
• At least one lowercase letter a-z - 26
• At least one digit 0-9 - 10
• At least one common symbol (#, $, %, etc.) - 32

By inclusion-exclusion, I can calculate that I have ~3.2333E+19 possible combinations. However, if I change one of the requirements to at least TWO digits 0-9, how can I calculate the possible combinations?

- 4 By inclusion-exclusion again. There are just more terms. – Qiaochu Yuan Aug 27 '10 at 4:04 You can take your previous answer, compute the number of passwords that had exactly one digit, and subtract it. – Arturo Magidin Aug 27 '10 at 16:09 So since there are 3.2333E+19 possible combinations remaining, and each of those has at least 1 digit, and the most digits it can have is 7, wouldn't the answer be 6/7's of the 3.2333E+19 = 2.77143E+19? – user1524 Aug 27 '10 at 19:11

## 1 Answer

You have to choose 10 characters, and 2 of them must be digits. Furthermore, there must be one each of a lowercase letter, an uppercase letter, and a common symbol. For the others, there are 5 choices to be made, and these are to be made from $26+26+10+32 = 94$ characters. This gives $94^5 \approx 7.339e9$ choices for the 5 other characters that do not have to be digits. And for the digits, there are 2 choices from 10 characters. So this gives $10^2 = 100$ choices. And for the one each of lowercase letters, uppercase letters, and common symbols there are $26 * 26 * 32 = 21632$ choices. Now lastly, there are $10!$ permutations of these 10 characters, so the total number of combinations of these characters is: $26*26*32*100*94^5 * 10! \approx 5.76e22$

- I'm confused by this method. I started with 94 possible characters with no restrictions as 94^10 = 5.38615E+19 total combinations possible, and then subtracted the restricted sets. This number is higher than my start point for some reason. – user1524 Aug 27 '10 at 19:14
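Since the comments show some confusion about the method in the answer, here is a direct count (a sketch of my own, not the method in the answer above): sum over how many characters come from each class and multiply by the number of arrangements. It assumes the four classes are disjoint; with the original "at least one of each" requirement it agrees with the ~3.2333E+19 figure from the question, and switching the digit requirement to "at least two" is a one-character change.

```haskell
-- Count length-10 passwords built from disjoint character classes, where each
-- class has classSize characters and must appear at least minCount times.
choose :: Integer -> Integer -> Integer
choose n k = product [n - k + 1 .. n] `div` product [1 .. k]

countPasswords :: Integer -> [(Integer, Integer)] -> Integer
countPasswords slots [] = if slots == 0 then 1 else 0
countPasswords slots ((classSize, minCount) : rest) =
  sum [ choose slots k * classSize ^ k * countPasswords (slots - k) rest
      | k <- [minCount .. slots] ]

-- At least one upper, one lower, one digit, one symbol (the original question):
--   countPasswords 10 [(26,1), (26,1), (10,1), (32,1)]
-- At least two digits instead of one:
--   countPasswords 10 [(26,1), (26,1), (10,2), (32,1)]
```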
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 6, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8911526203155518, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/98563/distance-between-points-in-two-disjoint-compact-sets-closed
# Distance between points in two disjoint compact sets [closed]

Let $S$ and $T$ be two disjoint compact nonempty sets. Show that there are a point $x_0$ in $S$ and a point $y_0$ in $T$ such that $|x-y|\ge|x_0-y_0|$ whenever $x$ is in $S$ and $y$ is in $T$.

- Hi, please see our FAQ for the scope of our forum, and also for some other websites where your question would be more appropriately posed. In particular, I invite you to consider asking it over at math.stackexchange.com – Willie Wong Jun 1 at 11:24
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 14, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9446709156036377, "perplexity_flag": "head"}
http://www.haskell.org/haskellwiki/index.php?title=User:Michiexile/MATH198/Lecture_4&diff=30783&oldid=30782
# User:Michiexile/MATH198/Lecture 4

IMPORTANT NOTE: THESE NOTES ARE STILL UNDER DEVELOPMENT. PLEASE WAIT UNTIL AFTER THE LECTURE WITH HANDING ANYTHING IN, OR TREATING THE NOTES AS READY TO READ.

### 1 Product

Recall the construction of a cartesian product of two sets: $A\times B=\{(a,b) : a\in A, b\in B\}$. We have functions $p_A:A\times B\to A$ and $p_B:A\times B\to B$ extracting the two sets from the product, and we can take any two functions $f:A\to A'$ and $g:B\to B'$ and take them together to form a function $f\times g:A\times B\to A'\times B'$.
Similarly, we can form the type of pairs of Haskell types: Pair s t = (s,t). For the pair type, we have canonical functions fst :: (s,t) -> s and snd :: (s,t) -> t extracting the components. And given two functions f :: s -> s' and g :: t -> t', there is a function f *** g :: (s,t) -> (s',t').

An element of the pair is completely determined by the two elements included in it. Hence, if we have a pair of generalized elements $q_1:V\to A$ and $q_2:V\to B$, we can find a unique generalized element $q:V\to A\times B$ such that composing it with the projection arrows gives us the original elements back. This argument indicates to us a possible definition that avoids talking about elements in sets in the first place, and we are led to the

Definition A product of two objects A,B in a category C is an object $A\times B$ equipped with arrows $A \leftarrow^{p_1} A\times B\rightarrow^{p_2} B$ such that for any other object V with arrows $A \leftarrow^{q_1} V \rightarrow^{q_2} B$, there is a unique arrow $V\to A\times B$ such that the diagram commutes. The diagram $A \leftarrow^{p_1} A\times B\rightarrow^{p_2} B$ is called a product cone if it is a diagram of a product with the projection arrows from its definition.

In the category of sets, the unique map is given by $q(v) = (q_1(v),q_2(v))$. In the Haskell category, it is given by the combinator (&&&) :: (a -> b) -> (a -> c) -> a -> (b,c).

We tend to talk about the product. The justification for this lies in the first interesting

Proposition If P and P' are both products for A,B, then they are isomorphic.

Proof Consider the diagram. Both vertical arrows are given by the product property of the two product cones involved. Their compositions are endo-arrows of P,P', such that in each case, we get a diagram like the product diagram above, with $V=A\times B=P$ (or P'), and $q_1 = p_1, q_2 = p_2$. There is, by the product property, only one endoarrow that can make the diagram work - but both the composition of the two arrows, and the identity arrow itself, make the diagram commute. Therefore, the composition has to be the identity. QED.

We can expand the binary product to higher-order products easily - instead of pairs of arrows, we have families of arrows, and all the diagrams carry over to the larger case.

#### 1.1 Binary functions

Functions into a product help define the product in the first place, and function as elements of the product. Functions from a product, on the other hand, allow us to put a formalism around the idea of functions of several variables.

So a function of two variables, of types A and B, is a function f :: (A,B) -> C. The Haskell idiom for the same thing, A -> B -> C, views it as a function taking one argument and returning a function of a single variable; this, together with the curry / uncurry procedure, is tightly connected to this viewpoint, and will reemerge when we talk about adjunctions in a few lectures' time.

### 2 Coproduct

The product came, in part, out of considering the pair construction. One alternative way to write the Pair a b type is:

`data Pair a b = Pair a b`

and the resulting type is isomorphic, in Hask, to the product type we discussed above. This is one of two basic things we can do in a data type declaration, and corresponds to the record types in Computer Science jargon.

The other thing we can do is to form a union type, by something like

`data Union a b = Left a | Right b`

which takes on either a value of type a or of type b, depending on what constructor we use.
This type guarantees the existence of two functions

```
Left :: a -> Union a b
Right :: b -> Union a b
```

Similarly, in the category of sets we have the disjoint union $S\coprod T = S\times 0 \cup T \times 1$, which also comes with functions $i_S: S\to S\coprod T, i_T: T\to S\coprod T$.

We can use all this to mimic the product definition. The directions of the inclusions indicate that we may well want the dualization of the definition. Thus we define:

Definition A coproduct A + B of objects A,B in a category C is an object equipped with arrows $A \rightarrow^{i_1} A+B \leftarrow^{i_2} B$ such that for any other object V with arrows $A\rightarrow^{q_1} V\leftarrow^{q_2} B$, there is a unique arrow $A+B\to V$ such that the diagram commutes. The diagram $A \rightarrow^{i_1} A+B \leftarrow^{i_2} B$ is called a coproduct cocone, and the arrows are inclusion arrows.

The other thing you can do in a Haskell data type declaration looks like this:

`data Coproduct a b = A a | B b`

and the corresponding library type is Either a b = Left a | Right b. This type provides us with functions

```
A :: a -> Coproduct a b
B :: b -> Coproduct a b
```

and hence looks quite like a dual to the product construction, in that the guaranteed functions the type brings point in the reverse direction from the product projection arrows. So, maybe what we want to do is to simply dualize the entire definition?

Definition Let C be a category. The coproduct of two objects A,B is an object A + B equipped with maps $i_1:A\to A+B$ and $i_2:B\to A+B$ such that any other object V with maps $A\rightarrow_{v_1} V \leftarrow_{v_2} B$ has a unique map $v:A+B\to V$ such that $v_1 = v\circ i_1$ and $v_2 = v\circ i_2$.

In the Haskell case, the maps $i_1, i_2$ are the data constructors A, B. And indeed, this Coproduct, the union type construction, is the type which guarantees inclusion of source types, but with minimal additional assumptions on the type. In the category of sets, the coproduct construction is one where we can embed both sets into the coproduct, faithfully, and the result has no additional structure beyond that. Thus, the coproduct in Set is the disjoint union of the included sets: both sets are included without identifications made, and no extra elements are introduced.

Proposition If C,C' are both coproducts for some A,B, then they are isomorphic. The proof is almost exactly the same as the proof for the product case.

• Diagram definition
• Disjoint union in Set
• Coproduct of categories construction
• Union types

### 3 Algebra of datatypes

Recall from Lecture 3 that we can consider endofunctors as container datatypes. Some of the more obvious such container datatypes include:

```
data 1 a = Empty
data T a = T a
```

These are the data type that has only one single element and the data type that contains exactly one value. Using these, we can generate a whole slew of further datatypes. First off, we can generate a data type with any finite number of elements by $n = 1 + 1 + \dots + 1$ (n times). Remember that the coproduct construction for data types allows us to know which summand of the coproduct a given part is in, so the single elements in all the 1s in the definition of n here are all distinguishable, thus giving the final type the required number of elements. Of note among these is the data type Bool = 2 - the Boolean data type, characterized by having exactly two elements.
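As a small illustration of the claim that a type with n elements is a coproduct of n copies of the one-element type, here is a sketch in present-day Haskell, using () and Either in place of the 1 and coproduct constructions above and ignoring the type parameter; the names Three, toThree and fromThree are mine.

```haskell
-- Two ways of saying "a type with exactly three elements": a direct
-- declaration, and the coproduct 1 + (1 + 1) built from () and Either.
data Three = T1 | T2 | T3
  deriving (Show, Eq)

type Three' = Either () (Either () ())

toThree :: Three' -> Three
toThree (Left ())          = T1
toThree (Right (Left ()))  = T2
toThree (Right (Right ())) = T3

fromThree :: Three -> Three'
fromThree T1 = Left ()
fromThree T2 = Right (Left ())
fromThree T3 = Right (Right ())

-- toThree . fromThree and fromThree . toThree are both identities,
-- so Three and 1 + 1 + 1 are isomorphic, as the text claims.
```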
Furthermore, we can note that $1\times T = T$, with the isomorphism given by the maps

```
f (Empty, T x) = T x
g (T x) = (Empty, T x)
```

Thus we have the capacity to add and multiply types with each other. We can verify, for any types A,B,C, that $A\times(B+C) = A\times B + A\times C$.

We can thus make sense of types like $T^3 + 2T^2$ (either a triple of single values, or one out of two tagged pairs of single values). This allows us to start working out a calculus of data types with versatile expression power.

We can produce recursive data type definitions by using equations to define data types, that then allow a direct translation back into Haskell data type definitions, such as:

$List = 1 + T\times List$

$BinaryTree = T\times (1+BinaryTree\times BinaryTree)$

$TernaryTree = T\times (1+TernaryTree\times TernaryTree\times TernaryTree)$

$GenericTree = T\times (1+List\circ GenericTree)$

The real power of this way of rewriting types comes in the recognition that we can use algebraic methods to reason about our data types. For instance:

```
List = 1 + T * List
     = 1 + T * (1 + T * List)
     = 1 + T * 1 + T * T * List
     = 1 + T + T * T * List
```

so a list is either empty, contains one element, or contains at least two elements. Using, though, ideas from the theory of power series, or from continued fractions, we can start analyzing the data types using steps on the way that seem completely bizarre, but arriving at important properties. Again, an easy example for illustration:

```
List = 1 + T * List              -- and thus
List - T * List = 1              -- even though (-) doesn't make sense for data types
(1 - T) * List = 1               -- still ignoring that (-)...
List = 1 / (1 - T)               -- even though (/) doesn't make sense for data types
     = 1 + T + T*T + T*T*T + ... -- by the geometric series identity
```

and hence, we can conclude - using formally algebraic steps in between - that a list by the given definition consists of either an empty list, a single value, a pair of values, three values, etc.

At this point, I'd recommend anyone interested in more perspectives on this approach to data types, and things one may do with them, to read the following references:

#### 3.2 Research papers

• d for data types
• 7 trees into 1

### 4 Homework

1. What are the products in the category C(P) of a poset P? What are the coproducts?
2. Prove that any two coproducts are isomorphic.
3. Write down the type declaration for at least two of the example data types from the section on the algebra of datatypes, and write a Functor implementation for each.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 32, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8803150653839111, "perplexity_flag": "middle"}