http://mathforum.org/mathimages/index.php/Pigeonhole_Principle
# Introduction

Pi

The symbol π is the sixteenth letter of the Greek alphabet, yet it has gained fame because of its designation in mathematics. π has some intriguing properties. For one, π is an irrational number[1], meaning that it cannot be written as the ratio of two integers and that its decimal representation has an infinite number of digits with no repeating pattern. The first proof that π is irrational was given by Johann Heinrich Lambert in 1761. Lambert used a continued fraction expansion of tan(x) to prove π's irrationality. His proof yielded the theorem that if x is a non-zero rational number, then tan(x) is irrational; equivalently, if tan(x) is rational, then x must be irrational (or zero). Since tan(π/4) = 1 is rational, π/4 must be irrational, and hence π must be irrational. Lambert's analysis of the continued fraction of tan(x) established this rigorously. [2]

Lambert's proof seemed too simple to settle such a difficult problem. In 1794, however, Legendre proved the irrationality of π by a more rigorous procedure. In his book "Éléments de géométrie" Legendre gave a proof of π's irrationality, and also proved that π² is irrational. Legendre's work supported Lambert's proof and put to rest the question of π's irrationality. Toward the end of his book, Legendre wrote: "It is probable that the number π is not even contained among the algebraic irrationalities, and that it cannot be the root of an algebraic equation with a finite number of terms, whose coefficients are rational. But it seems to be very difficult to prove this strictly." [3]

π is also a transcendental number[1], which means that it is not the root of any non-constant polynomial with rational coefficients. The transcendence and irrationality of π have many important consequences. It is extremely difficult to prove that a number is transcendental. π was proven to be transcendental in 1882 by Ferdinand von Lindemann, building upon the work of Charles Hermite and Euler. In 1873, Charles Hermite proved that the number e is transcendental. His proof showed that the finite equation $ae^r + be^s + ce^t + \cdots = 0$ cannot be satisfied if r, s, t, … are natural numbers and a, b, c, … are rational numbers not all equal to zero. Lindemann extended Hermite's theorem to the case where r, s, t, … and a, b, c, … are algebraic numbers, not necessarily real. Lindemann's theorem states that if r, s, t, …, z are distinct real or complex algebraic numbers, and a, b, c, …, n are real or complex algebraic numbers, at least one of which differs from zero, then the finite sum $ae^r + be^s + ce^t + \cdots + ne^z$ cannot equal zero. Once Lindemann proved this, the transcendence of π followed quickly. Lindemann used Euler's identity in the form $e^{i\pi} + 1 = 0$, i.e. $1\cdot e^{i\pi} + 1\cdot e^{0} = 0$: an expression with a = b = 1 algebraic, all further coefficients equal to zero, and s = 0 algebraic. By the theorem, the only way such a sum can vanish is if r = iπ fails to be algebraic. Therefore iπ must be transcendental, and since i is algebraic, π must be transcendental. [3]

In this article, however, we will touch upon the more geometrical side of π. We will go back in time and retrace the steps of the ancient Egyptians, Archimedes, Euclid, and Cusanus in their arduous yet rewarding journey to estimate the value of the mysterious constant. We will learn each of their approximation methods, and actually calculate the value of π they would have obtained with today's technology (calculators, Matlab, Mathematica, etc.).
Then we shall explore many of π's applications, even in fields seemingly unrelated to geometry, such as probability. We will explore the Reuleaux triangle, a figure analogous to the circle that later served as the prototype of the renowned Wankel engine. We shall also find a trace of π in the work of the Cambridge geologist Hans-Henrik Stolum, who calculated the ratio of the full length of a river to the straight-line distance between its source and its mouth. The ubiquitous nature of π makes it one of the most widely known mathematical constants, both inside and outside the scientific community: several books devoted to it have been published; the number is celebrated on Pi Day; and news headlines often report record-setting calculations of the digits of π. Several people have endeavored to memorize the value of π with increasing precision, leading to records of over 67,000 digits.[1]

# Definition

The number π is a mathematical constant that is the ratio of a circle's circumference to its diameter. As stated in the first sentence of this article, we shall in the following sections deal with the calculation and usage of π in Euclidean geometry. We should also be aware that this definition of π is not universal, because it is not valid in curved (non-Euclidean) geometries.[4] For this reason, some mathematicians prefer definitions of π based on calculus or trigonometry that do not rely on the circle. One such definition is: π is twice the smallest positive x for which cos(x) equals 0.[4]

# Calculating Pi

Throughout history, scholars have been trying to pin down the value of π. The polygon-approximation era, infinite series, the computer era, and iterative algorithms have brought scholars closer and closer to the true value of the mysterious constant. We may wonder about their motivation for such research. For most numerical calculations involving π, a handful of digits provide sufficient precision. According to Jörg Arndt and Christoph Haenel, thirty-nine digits are sufficient to perform most cosmological calculations, because that is the accuracy necessary to calculate the volume of the universe with a precision of one atom.[4] Despite this, people have worked strenuously to compute π to thousands and millions of digits.[4] This effort may be partly ascribed to the human compulsion to break records. Such computations also have practical benefits: testing supercomputers, testing numerical analysis algorithms (including high-precision multiplication algorithms), and, within pure mathematics itself, providing data for evaluating the randomness of the digits of π.

In the following sub-sections of this article, we shall explore approximations of π with a strongly geometric flavor. Such approximations include the methods of the ancient Egyptians, Archimedes, Cusanus, and Euclid. We have also added an algebraic touch by occasionally taking a detour along our ancestors' steps and using tools such as infinite series or Matlab to calculate results. Due to the orientation of this article, a large portion of π's approximations have been left out.

## Ancient Egyptians

Figure 3-1: The Ancient Egyptian Method of Approximating π

An Egyptian scribe named Ahmes wrote the oldest known text to imply an approximate value for π. The Rhind Papyrus, written by Ahmes, states that if we construct a square whose side is eight-ninths of the diameter of a circle, then the two areas will be equal.
It was the effort to construct a square whose area equals that of a circle that generated the early approximations of π.[5] By inspecting the Rhind Papyrus, we can replicate the work of the ancient Egyptians and find out how close they came to the true value of π. We denote the diameter of the circle by d and the length of the side of the square by a. Then we can calculate both areas with the corresponding formulas and equate them to obtain our (the ancient Egyptians') approximation. We know a and d are related in the following way:

$a = \frac{8}{9}d$

The area of the square, $A_1$, is:

$A_1 = a^2 = \frac{64}{81}d^2$

We know from today's knowledge that the area of the circle, $A_2$, is:

$A_2 = \pi\left(\frac{d}{2}\right)^2 = \frac{\pi}{4}d^2$

We shall briefly stop here to comment on two things. First, one might argue that the circle-area formula was not available at the time of the ancient Egyptians. However, the purpose of our derivation is merely to find out how close they were to the true value of π. Second, although the symbol "π" was not introduced to represent the ratio between the circumference and diameter of a circle until much later (some three thousand years), we use it here for convenience and to avoid confusion. We now set $A_1$ equal to $A_2$:

$\frac{64}{81}d^2 = \frac{\pi}{4}d^2 \quad\Rightarrow\quad \pi \approx \frac{256}{81} \approx 3.1605$

This is a reasonably close approximation of what we know the value of π to be by modern methods.

## Euclid's Influence

Euclid also made a contribution to the history of π. Although Euclid himself never postulated an approximation for the value of π, his work hints at a possible awareness of such a constant. We can see this by following Proposition 2, Book XII of the Elements. We first state Proposition 2, Book XII[6]: Circles are to one another as the squares on the diameters. Since the language may seem confusing at first read, let us appeal to symbols instead. Let Circle 1 have area $A_1$ and diameter $d_1$; let Circle 2 have area $A_2$ and diameter $d_2$. Proposition 2 is telling us the following relationship:

$\frac{A_1}{A_2} = \frac{d_1^2}{d_2^2}$

Some simple algebraic manipulation gives us the following:

$A = C\,d^2$, where C is some constant value.

This says that the area of a circle is equal to the diameter squared times some constant C, eventually leading us to the formula for the area of the circle. Euclid's work is actually hinting at the existence of the constant which is known today as π.

## Archimedes' Method

Figure 3-3-1: A variety of inscribed regular polygons

The first recorded algorithm for rigorously calculating the value of π was a geometrical approach using polygons, devised around 250 BC by the Greek mathematician Archimedes.[7] Since π is the ratio of any circle's circumference to its diameter, the circumference is greater than the perimeter of any inscribed regular polygon and less than that of any circumscribed regular polygon. In Figure 3-3-1 we see circles with inscribed regular polygons of 3, 4, 5, 6, 8, and 17 sides. The more sides such a polygon has, the more closely it resembles the circle (notice how in Figure 3-3-1, when n = 17, the polygon is visually almost indistinguishable from the circle), and thus the closer the length of its perimeter is to that of the circle.

Figure 3-3-2: Inscribed hexagon inside circle

We will now retrace Archimedes' work. But instead of using both inscribed and circumscribed polygons to "sandwich" the circle, we will only use inscribed polygons for our approximation. With a regular hexagon as our visual guide (as shown in Figure 3-3-2), we wish to derive a general formula that we can apply to any n-sided regular polygon to estimate the value of π.
Since we know the circumference of the circle to be C = πd, where d is the diameter of the circle, by setting the diameter to 1 the perimeter of the inscribed polygon approximates the value of π directly. This is done without loss of generality:

$C = \pi d = \pi \quad (d = 1)$

Now we will start our derivation of a general formula which will allow us to calculate the perimeter (C) of any n-sided polygon. Since the polygon has n sides, the central angle subtended by each side is 360°/n, and half of that angle is 180°/n. Applying trigonometry to the right triangle whose hypotenuse is the radius 1/2,

$AB = \frac{1}{2}\sin\frac{180°}{n}$

Because the polygon has n sides and the length of AB is half that of one side,

$C = 2n\cdot AB = n\sin\frac{180°}{n}$

We can then plug in various values of n and compute the perimeter of the regular polygon whose circumscribed circle has a diameter of 1. The process is easily done with a calculator (or MATLAB). We will point out that when n = 10,000, the approximation for the value of π is:

$10{,}000\,\sin\frac{180°}{10{,}000} \approx 3.1415926$

If we look at the known value of π for comparison:

$\pi = 3.14159265358979\ldots$

We find that up to the seventh decimal place, the approximation with a 10,000-sided regular polygon is perfectly accurate. Yet Archimedes himself obviously did not enjoy the luxury of calculating devices: he used the perimeters of many-sided polygons to approximate π and did an amazing job even by today's standards. He managed his approximations through purely geometric techniques, and for him it certainly did not come down to calculating 96 sin(180°/96). But in the following paragraphs I will take a detour while retracing Archimedes' route and actually tackle sin(180°/96) through the double-angle and half-angle formulas:

$\sin^2\frac{\theta}{2} = \frac{1-\cos\theta}{2} \qquad (1)$

$\cos^2\frac{\theta}{2} = \frac{1+\cos\theta}{2} \qquad (2)$

When we try to take the square roots of Equa (1) and Equa (2), we have to consider the possibility that the left-hand side might be negative. (It is obvious that the right-hand side of both equations is positive.) Yet since θ becomes small as the number of sides of the regular polygon gets large, we are confident that cos(θ/2) and sin(θ/2) stay positive. Thus, after some simple manipulation, Equa (1) and Equa (2) can be rewritten in the following way:

$\cos\frac{\theta}{2} = \sqrt{\frac{1+\cos\theta}{2}} \qquad (3)$

$\sin\frac{\theta}{2} = \sqrt{\frac{1-\cos\theta}{2}} \qquad (4)$

We notice how we can generalize Equa (3):

$\cos\frac{\theta}{2^{k}} = \sqrt{\frac{1+\cos\frac{\theta}{2^{k-1}}}{2}} \qquad (5)$

We then realize that if we start with sin(θ/2), the left-hand side of Equa (4), we can keep substituting Equa (5) into the right-hand side for however many iterations we like, producing nested radicals. The larger k gets, the more sides the regular polygon has, and the more accurate our approximation of π becomes. In fact we are using an infinite process (see Series for more) to estimate π. The calculation of π was revolutionized by the development of infinite series techniques in the 16th and 17th centuries.[7] An infinite series is the sum of the terms of an infinite sequence. Infinite series allowed mathematicians to compute π with much greater precision than Archimedes and others who used geometrical techniques. A quick numerical check of the inscribed-polygon formula derived above is given below.
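The article itself relies on a calculator or MATLAB for this step; the following is a minimal Python sketch of ours that evaluates the formula $C = n\sin(180°/n)$ for a few values of n:

```python
import math

# Perimeter of a regular n-gon inscribed in a circle of diameter 1:
# each side has length sin(180 deg / n), so the perimeter is n * sin(pi / n).
def inscribed_polygon_pi(n):
    return n * math.sin(math.pi / n)

for n in (6, 96, 10_000):
    print(f"n = {n:>6}: pi is approximately {inscribed_polygon_pi(n):.10f}")
# n = 10000 matches pi = 3.14159265358979... to about seven decimal places.
```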
## Cusanus' Method

Archimedes used inscribed and circumscribed regular polygons within and about a given circle, successively increasing the number of sides of the polygons. He "sandwiched in" the circle to obtain an upper and a lower bound for the value of π. An analogous method developed by Cusanus has us "sandwiching" regular polygons with increasing numbers of sides (the number of sides doubles after each iteration, as can be seen in Figure 3-4-1) between inscribed and circumscribed circles.[8] As with Archimedes' method, we will only work with circumscribed circles; inscribed circles are treated similarly. We will start with squares to derive a general formula that relates the perimeter of an n-sided regular polygon to that of a 2n-sided polygon. That way, starting from any initial regular polygon, we can increase the number of iterations with the help of some programming and get ever closer to the actual value of π.

As shown in Figure 3-4-1, we start with a square inscribed in a unit circle, so the square's diagonal has length 2. If we denote the length of the square's side by $a_n$, then the perimeter of the polygon (the square) generates our first approximation of the value of π. Without loss of generality, I pick the midpoint D of line segment AB and extend OD to intersect the circle at point C. CB is the side of the second polygon used in our approximation, whose length we shall denote $a_{2n}$. Notice that these steps could equally well be carried out on a pentagon, a hexagon, or any other inscribed regular polygon. We shall now derive a general formula that gives $a_{2n}$ in terms of $a_n$, which can then be iterated in Matlab (along with an initial value) for however many loops we wish:

Figure 3-4-1: Click to enlarge

We know that

$DB = \frac{a_n}{2}, \qquad OB = OC = 1.$

According to the Pythagorean Theorem,

$OD = \sqrt{1 - \frac{a_n^2}{4}}.$

Thus,

$DC = OC - OD = 1 - \sqrt{1 - \frac{a_n^2}{4}}.$

And since triangle CDB has a right angle at D,

$a_{2n}^2 = DC^2 + DB^2.$

After some manipulation,

$a_{2n} = \sqrt{2 - \sqrt{4 - a_n^2}}.$

Figure 3-4-2: Click to enlarge

With this progressive relationship between $a_n$ and $a_{2n}$ and the initial value $a_4 = \sqrt{2} \approx 1.414$, we can write a Matlab script that calculates the value of π for as large an n as we like. After experimenting, I found that 10 iterations get us close enough to the "actual" value of π. Figure 3-4-2 is a screenshot of the Matlab script and the plot it generated. While the program itself is trivial, what the plot reveals is that after just 5 approximations we already have a decent value of π. In other words, a regular polygon with 32 (2^5 = 32) sides is, for the purpose of calculating π, practically as good as a circle. An equivalent iteration is sketched below.
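The article presents its Matlab script only as a screenshot (Figure 3-4-2); here is a minimal equivalent sketch of ours in Python, assuming the chord-doubling relation derived above and the starting value $a_4 = \sqrt{2}$:

```python
import math

# Chord-doubling iteration a_{2n} = sqrt(2 - sqrt(4 - a_n^2)) in a unit circle,
# starting from the inscribed square (n = 4, a_4 = sqrt(2)).
a, n = math.sqrt(2.0), 4
for _ in range(10):
    print(f"n = {n:5d}: pi is approximately {n * a / 2:.10f}")  # perimeter n*a approximates 2*pi
    a = math.sqrt(2.0 - math.sqrt(4.0 - a * a))
    n *= 2
```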
## A Brief Timeline

Up to this point we have described the works of the ancient Egyptians, Euclid, Archimedes, and Cusanus, all of whom took a geometric approach to calculating the value of π. We shall now provide a general outline of the history of π.

• The ancient Egyptians, as early as 1650 BCE, are recorded to have equated (although approximately) the area of a circle and the area of a square whose side is 8/9 of the circle's diameter.
• Taking a leap forward in time, we come to the Babylonians, whose civilization spans from about 2000 BCE to about 600 BCE. One tablet unearthed at Susa (not far from Babylon) compares the perimeter of a regular hexagon to the circumference of its circumscribed circle. The resulting approximation is in fact a little closer to the true value than the Egyptians'.
• The books of Kings and Chronicles in the Bible (Old Testament), written about 550 BCE, describe King Solomon's water basin and hint at a possible value of π, namely 3.
• Then we come across the work of Archimedes. By both inscribing and circumscribing regular polygons around circles, he produced an upper limit as well as a lower limit for the value of π, between which our known value of π is neatly squeezed. He also envisioned the circle as the limit of regular polygons of a fixed perimeter as the number of sides grows without bound.
• Meanwhile in China, the works of Liu Hui and Zu Chongzhi paralleled those of the West. Liu's approximation, 3.1416, was perhaps the most accurate approximation of π until Zu came up with his, 355/113 ≈ 3.1415929.
• As we enter the Renaissance, we should note the work of Fibonacci. In his Liber abaci, published in 1202, he used a regular polygon of ninety-six sides to compute an approximate value of π. Although his approximation is not as accurate as some we have previously introduced, his contributions to mathematics are legendary, especially considering that they took place after the Dark Ages.
• Starting in the 16th century, new algorithms based on infinite series revolutionized the computation of π, and were used by mathematicians including Mādhava of Sañgamāgrama, Isaac Newton, Leonhard Euler, Srinivasa Ramanujan, and Carl Friedrich Gauss.
• In the 20th century, mathematicians and computer scientists discovered new approaches that, when combined with increasing computer speeds, extended the decimal representation of π to over 10 trillion digits. Scientific applications generally require no more than 40 digits of π, so the primary motivation for these computations is the human desire to break records; but the extensive calculations involved have been used to test supercomputers and high-precision multiplication algorithms.

# Applications of π

Because π is closely related to the circle, it is found in many formulae from geometry and trigonometry, particularly those concerning circles, spheres, or ellipses. π also appears in important formulae from other branches of science, such as statistics, fractals, thermodynamics, mechanics, cosmology, number theory, and electromagnetism.[1]

## Alternative Circle Area Formulation

Figure 4-1-1: Click to enlarge

Figure 4-1-2: Click to enlarge

Let's consider a relatively simple "derivation" of the formula $A = \pi r^2$.[1] We divide the circle into 16 arcs of 22.5° each, as in Figure 4-1-1. Then we cut the circle apart into the 16 pieces and regroup them in the manner shown in Figure 4-1-2. We see that Figure 4-1-2 roughly resembles a parallelogram; if the circle were divided into more equal sections, the figure would look even more like a true parallelogram. Let us assume it is a parallelogram. We have now transformed the task of finding the area of the circle into finding the area of the parallelogram, which we know to be base × height. By observation, the height of the parallelogram is the radius of the circle, r. Also, since half of the circle's arcs are used for each of the two long sides of the approximate parallelogram, the base has a length equal to half of the circumference of the original circle. To make the following derivation clearer, let us denote the base of the parallelogram by b, the height by h, the area of the circle by $A_{circ}$, the circumference of the circle by C, and the area of the parallelogram by $A_p$. Since half of the circle's arcs are used for each of the two sides of the approximate parallelogram:

$b = \frac{C}{2} = \pi r$

And by observation:

$h = r$

Thus the area of the circle is:

$A_{circ} = A_p = b\,h = \pi r \cdot r = \pi r^2$

## π in Probability

Figure 4-2-1: Buffon's Needle

Figure 4-2-2: Zooming into One Particular Square

π shows up in areas that seemingly have nothing to do with geometry, such as probability. The French naturalist Buffon is primarily remembered for his work popularizing the natural sciences in France. Yet in mathematics he is remembered for two things: his French translation of Newton's Method of Fluxions, the forerunner of today's calculus, and, even more so, the "Buffon needle problem."[9] We are primarily interested in the latter.
The Buffon needle problem goes like this: suppose you have a piece of paper ruled throughout with parallel lines, equally spaced a distance d apart, and a thin needle of length l (where l < d). You then toss the needle onto the paper many times. Buffon claimed that the probability that the needle will touch one of the ruled lines is

$P = \frac{2l}{\pi d}.$

Let's find out why.[9]

Without loss of generality, we experiment with needles that are shorter than the spacing between the lines (l < d). We want to express mathematically the condition under which the needle crosses a line. In Figure 4-2-2, we zoom into one particular box. l is the length of the needle, x is the distance from the midpoint of the needle (point A) to its closest line, and θ is the angle formed by the needle and the direction of the ruled lines. The vertical distance from A to the needle's upper endpoint is then, by trigonometry, $\frac{l}{2}\sin\theta$. By observation, the needle crosses its closest line exactly when this vertical reach is at least x. Thus we arrive at the mathematical condition for the needle to cross a line:

$x \le \frac{l}{2}\sin\theta.$

In probability theory and statistics, the continuous uniform distribution or rectangular distribution is a family of probability distributions such that for each member of the family, all intervals of the same length on the distribution's support are equally probable. The support is defined by the two parameters, a and b, which are its minimum and maximum values. The distribution is often abbreviated U(a,b).[10]

Figure 4-2-3: Geometric Representation of the Continuous Uniform Distribution

By this definition, θ and x are both continuous uniform (rectangular) random variables satisfying:

$\theta \sim U(0, \pi), \qquad x \sim U\!\left(0, \tfrac{d}{2}\right).$

In Figure 4-2-3, Region D (the rectangle with width π and height d/2) represents every possible toss of the needle, with θ ranging from 0 to 180° and x going from 0 to d/2. x cannot be larger than d/2, since it is defined as the distance from the midpoint of the needle to the closest of the parallel lines. Region A is the area under the curve $x = \frac{l}{2}\sin\theta$, which is the geometric representation of the condition under which a needle crosses a line, as derived above. The area of A, $A_1$, is computed by integration:

$A_1 = \int_0^{\pi} \frac{l}{2}\sin\theta \, d\theta = l.$

The area of D, $A_2$, is:

$A_2 = \pi \cdot \frac{d}{2}.$

With these two values we can now calculate the probability of a needle crossing the parallel lines:

$P = \frac{A_1}{A_2} = \frac{2l}{\pi d}.$

This is how the equation for Buffon's experiment came about. So much for the theoretical manipulation. Say we had some spare time and wanted to try Buffon's experiment by actually tossing needles onto a grid of parallel lines repeatedly. Without loss of generality, we can set l equal to d, so that the probability P of the needle touching one of the lines is 2/π. This equation actually gives us another method to approximate the value of π:

$\pi \approx \frac{2\cdot(\text{number of tosses})}{\text{number of crossings}}.$

Theoretically, the more you toss, the more accurate the estimate of π should be. In 1901 the Italian mathematician Mario Lazzarini tossed the needle 3408 times and got π = 3.1415929[9], which is quite impressive: it differs from π by no more than $3\times10^{-7}$. While this method is not the most "accurate" way to measure π by any means, it is novel. Just think for a moment how the probability of a tossed needle intersecting a line relates to the value of a transcendental number, π. A small simulation of the experiment is sketched below.
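Here is a minimal Python sketch of ours for such an experiment, with $l = d = 1$ as suggested above, estimating π from the crossing frequency:

```python
import math
import random

def estimate_pi(tosses, l=1.0, d=1.0):
    """Monte Carlo version of Buffon's experiment with needle length l and line spacing d (l <= d)."""
    crossings = 0
    for _ in range(tosses):
        x = random.uniform(0.0, d / 2.0)        # distance from the needle's midpoint to the nearest line
        theta = random.uniform(0.0, math.pi)    # angle between the needle and the lines
        if x <= (l / 2.0) * math.sin(theta):    # crossing condition derived above
            crossings += 1
    # P(cross) = 2l/(pi*d), so pi is approximately 2*l*tosses/(d*crossings)
    return 2.0 * l * tosses / (d * crossings)

print(estimate_pi(1_000_000))
```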
It is quite curious that π is related to probability at all. Another such example: the probability that a number chosen at random from the set of natural numbers has no repeated prime divisors (i.e., is squarefree) is $6/\pi^2$. This value also represents the probability that two natural numbers selected at random will be relatively prime. This is quite astonishing, since π comes from a geometric setting.[9]

## Using π to Measure River Lengths

Figure 4-3: Meandering into semi-circles

By now you should be fully convinced that π is not confined to geometry, or even to mathematics. For example, π also appears as the average ratio of the actual length of a meandering river to the direct distance between its source and its mouth. Hans-Henrik Stolum, a geologist at Cambridge University, calculated the ratio of the total length of a river to the direct distance between its source and end point.[6] He found the average ratio to be a bit larger than 3: it is actually around 3.14, which we recognize as an approximation of π. Coincidence? Rivers have a tendency toward a loopy path. A slight bend leads to faster currents on the outside shore, and the river begins to erode and create a curved path. The sharper the bend, the more strongly the water flows to the outside, and consequently the faster the erosion. The meanders become increasingly circular, and the river turns around and around in semi-circles. Eventually a meander is cut off, the river runs straight ahead again, and the old loop is left behind as a dead branch. Between these two opposing effects a balance develops. The process is demonstrated in Figure 4-3, where we have created a model in which a fictitious river is overlaid with semicircles representing the eventual curve of the river's flow. Now, labeling the source point of the river A, the end point B, the midpoints of the semicircles $M_i$ (i = 1, 2, 3, 4), and the radius of each semicircle $r_i$, we can check whether the ratio of the length of the river's path (C) to that of line segment AB (l) is actually π.

$C=r_1\pi+r_2\pi+r_3\pi+r_4\pi$

$l=r_1+r_2+r_3+r_4$

Thus, $\frac{C}{l}=\pi$

## Reuleaux Triangle

Figure 4-4: Reuleaux Triangle

We will introduce the Reuleaux triangle by constructing one. The process is as follows: start by drawing a circle with center A and an arbitrary radius. Then pick a point B on the perimeter of circle A. With B as the center, draw a second circle whose radius is equal to that of circle A. Label one of the two intersection points of circles A and B "C". With C as the center, draw a third circle with a radius equal to that of the first two. The shaded figure is called a Reuleaux triangle, named after the German engineer Franz Reuleaux (1829-1905), who taught at the Royal Technical University of Berlin.[7]

Figure 4-4-1: Breadth of the Reuleaux Triangle

### Breadth of the Reuleaux

The ratio of the perimeter of the Reuleaux triangle to its "distance across" (the length of segment AB, BC, or AC) is equal to π. While the Reuleaux triangle has many interesting properties, we shall start by examining its breadth. We refer to the distance between two parallel lines tangent to the curve (see Figure 4-4-1) as the breadth of the curve.[7] Notice that no matter where we place these parallel tangents, they will always be the same distance apart: the radius of the arcs comprising the triangle, r. With this definition it is easy to see that in the case of a circle, the breadth is always its diameter. Therefore, both the Reuleaux triangle and the circle have constant breadth; this property alone makes them analogous to each other. Now look at the original three circles that constructed our Reuleaux triangle (namely circles A, B, and C in Figure 4-4): arcs AB, AC, and BC each correspond to a central angle of 60°.
The lengths of these arcs (each of which we shall denote l) are all:

$l = \frac{60°}{360°}\cdot 2\pi r = \frac{\pi r}{3}$

Thus the perimeter of our given Reuleaux triangle, $C_1$, is:

$C_1 = 3l = \pi r$

Figure 4-4-2: Calculating the Area

### Same Perimeter, Different Areas

Yet we notice that the circumference of a circle of diameter r is also πr. Thus, the circle with diameter r has the same circumference as the Reuleaux triangle of breadth r. Having the same perimeter makes us wonder how their areas are related. Let us start with the more difficult task of calculating the area of the Reuleaux triangle (we shall denote it $A_R$). As shown in Figure 4-4-2, we have divided the Reuleaux triangle into 4 parts, labelled 1 through 4, with areas $A_i$ (i = 1, 2, 3, 4); part 1 is the central equilateral triangle and parts 2-4 are the three circular segments. The area of the Reuleaux triangle, $A_R$, is:

$A_R = A_1 + A_2 + A_3 + A_4$

By symmetry,

$A_2 = A_3 = A_4$

By observation, each segment is a 60° circular sector of radius r minus the equilateral triangle:

$A_2 = \frac{60°}{360°}\pi r^2 - A_1 = \frac{\pi r^2}{6} - A_1$

Using an alternative equation for the area of a triangle, $\text{Area} = \tfrac{1}{2}ab\sin C$, where a and b are two sides of the triangle that form the angle C, the area of the equilateral triangle, $A_1$, is:

$A_1 = \frac{1}{2}r\cdot r\,\sin 60° = \frac{\sqrt{3}}{4}r^2$

Thus,

$A_R = A_1 + 3\left(\frac{\pi r^2}{6} - A_1\right) = \frac{\pi r^2}{2} - 2A_1$

After some scratch work we can calculate the area of the Reuleaux triangle:

$A_R = \frac{1}{2}\left(\pi - \sqrt{3}\right)r^2 \approx 0.705\,r^2$

Meanwhile the area of a circle with diameter r (which we shall denote $A_{Circ}$) is:

$A_{Circ} = \pi\left(\frac{r}{2}\right)^2 = \frac{\pi r^2}{4} \approx 0.785\,r^2$

Comparing the areas of two figures with the same perimeter, a circle of diameter r and a Reuleaux triangle with a "distance across" of r, we find that the area of the Reuleaux triangle is smaller than that of the circle. In fact, it can be proven that among all closed curves of a given perimeter, the circle encloses the largest area. The Austrian mathematician Wilhelm Blaschke (1885-1962) proved that among all figures of a given constant breadth, the Reuleaux triangle always has the smallest area and the circle the greatest.[7]

### The Reuleaux and the Harry Watt Drill

Figure 4-4-3-1: Harry Watt Drill[11]

Another astonishing property of the Reuleaux triangle is that a drill bit in the shape of a Reuleaux triangle can bore a (nearly) square hole rather than the expected round hole.[12] To paraphrase, the Reuleaux triangle is always in contact with each side of a square of appropriate size. Figure 4-4-3-1 gives such an example. The center of a Reuleaux triangle rotating in the square almost describes a circle; more exactly, its path consists of four elliptical arcs. (The circle is the only curve of constant breadth that has a central point of symmetry.) Harry James Watt, an English engineer, recognized this in 1914 and received a patent (no. 1,241,175) enabling such drills to be produced.[7] Felix Wankel (1902-1988), a German engineer, built an internal combustion engine for cars whose rotor had the shape of a Reuleaux triangle and rotated inside a chamber.[13] It had fewer moving parts and gave more horsepower for its size than the usual piston engines. The Wankel engine was first tried in 1957 and then put into production in the 1964 Mazda.[13] Again, the unusual properties of the Reuleaux triangle made this type of engine possible. As we look at the envelope of the Reuleaux triangle in Figure 4-4-3-1, we notice it is almost a square, except for the four corners. What is the shape of those four corners? And while we may be tempted to think the path of the geometric center (indicated by the black dot) is a circle, it is not. So what is it? We answer these two questions through the parametric equations of the curves.
Figure 4-4-3-2: Left: the first DKM Wankel engine, DKM 54 (Drehkolbenmotor), at the Deutsches Museum in Bonn, Germany. Right: a Wankel engine at the Deutsches Museum in Munich, Germany[13]

Figure 4-4-3-3: Corners of the Reuleaux Triangle[12]

Figure 4-4-3-4: The Center of the Reuleaux Triangle[12]

We start by placing the four corners of the envelope of the Reuleaux triangle in the Cartesian coordinate system at (±1, ±1), so the length of its side is 2. At the corner (-1, -1), the envelope of the boundary is given by a segment of an ellipse; its parametric equations are given in [12]. As shown in Figure 4-4-3-3, this ellipse is centered at (1, 1), and the lengths of its semimajor and semiminor axes are likewise given in [12]. In the same coordinate system, the path of the Reuleaux triangle's geometric center is a curve composed of four arcs of an ellipse (Wagon 1991). For a bounding square of side length 2, the ellipse in the lower-left quadrant has the parametric equations given in [12]. As shown in Figure 4-4-3-4, this ellipse is also centered at (1, 1), with semimajor and semiminor axis lengths as given in [12].

# Recent History of π

Ramanujan's Formula

The more recent history of π includes the first electronic computation of π, which occurred in 1949 on the original ENIAC. It took ENIAC 70 hours to compute the first 2037 decimal places of π. As technology continued to improve, so did the number of digits of π we were able to compute. In 1965, it was found that the fast Fourier transform (FFT) could be used to perform high-precision multiplications much faster than conventional schemes. [14] This advance dramatically lowered the time it took a computer to calculate the digits of π. Despite all of these advances, until the 1970s all computer computations of the value of π still relied on classical formulas. Some new infinite-series formulas had been discovered by Ramanujan around 1910, but they were not well known until recently, when his writings were widely published. [14] One of them is a series for 1/π, shown in the image to the right; each term of this series produces an additional eight correct digits of π. In 1985, Bill Gosper used Ramanujan's formula to calculate seventeen million digits of π.

Formula for computing the n-th digit of π.

Before 1996, mathematicians believed that to determine the n-th digit of π one would have to calculate all the digits preceding it. However, this is not true. In 1996, Borwein, Plouffe, and Bailey found an algorithm that computes individual digits of π. Their algorithm only works in hexadecimal (base 16) or binary (base 2). It directly generates a single digit without needing to compute any of the previous digits, can easily be implemented on any modern computer, requires very little memory, and does not require multiple-precision arithmetic software. Although this method is faster for finding a specific digit of π, it is not quicker than the best known methods for computing all the digits of π up to a certain position. [14] The algorithm is based on the formula in the image to the left (reproduced below). In 1997, this algorithm was put into action by Fabrice Bellard, who computed 152 binary digits of π starting at the trillionth position; Bellard's computation took twelve days to complete. [14]
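The formula the algorithm is based on is the Bailey-Borwein-Plouffe (BBP) series, $\pi = \sum_{k\ge 0} 16^{-k}\left(\tfrac{4}{8k+1}-\tfrac{2}{8k+4}-\tfrac{1}{8k+5}-\tfrac{1}{8k+6}\right)$. The genuine digit-extraction algorithm manipulates this series with modular arithmetic; the minimal Python sketch below, our own, only verifies that the series converges to π very quickly:

```python
from fractions import Fraction

# Bailey-Borwein-Plouffe series:
# pi = sum_{k>=0} 16^(-k) * ( 4/(8k+1) - 2/(8k+4) - 1/(8k+5) - 1/(8k+6) )
def bbp_partial_sum(terms):
    total = Fraction(0)
    for k in range(terms):
        total += (Fraction(4, 8 * k + 1) - Fraction(2, 8 * k + 4)
                  - Fraction(1, 8 * k + 5) - Fraction(1, 8 * k + 6)) / 16 ** k
    return total

print(float(bbp_partial_sum(12)))   # 3.141592653589793 -- already at double precision
```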
The digits of π have been studied more than those of any other constant. In December 2002, Yasumasa Kanada of the University of Tokyo, together with a team of collaborators, completed the computation of π to over 1.24 trillion decimal digits. Kanada and his team evaluated π using formulas involving the arctangent. [14] Kanada also studied the frequency of the ten decimal digits 0 through 9 among the first trillion digits of π; his findings reveal that the most frequently occurring digit is 8. The following table displays the full results of his study. [14]

# References

1. ↑ Wikipedia, http://en.wikipedia.org/wiki/Pi
2. ↑ Eymard, Pierre, and J. P. Lafon. The Number Pi. Providence, RI: American Mathematical Society, 2004. Print.
3. ↑ Beckmann, Petr. A History of Pi. Boulder: Golem, 1971. Print.
4. ↑ Arndt, Jörg; Haenel, Christoph (2006). Pi Unleashed. Springer-Verlag. ISBN 978-3-540-66572-4. English translation by Catriona and David Lischka.
5. ↑ Rossi, Corinna. Architecture and Mathematics in Ancient Egypt. Cambridge University Press, 2007. ISBN 978-0-521-69053-9.
6. ↑ Heath, Thomas L. (2002). Euclid's Elements. Green Lion Press. ISBN 978-1888009187.
7. ↑ Posamentier, Alfred; Lehmann, Ingmar (2004). Pi: A Biography of the World's Most Mysterious Number. Prometheus Books. ISBN 978-1591022008.
8. ↑ Polster, Burkard (2004). Q.E.D.: Beauty in Mathematical Proof. Walker & Company. ISBN 978-0802714312.
9. ↑ Wikipedia, http://en.wikipedia.org/wiki/Buffon's_needle
10. ↑ Wikipedia, http://en.wikipedia.org/wiki/Uniform_distribution_(continuous)
11. ↑ Weisstein, Eric W. "Reuleaux Triangle." From MathWorld--A Wolfram Web Resource. http://mathworld.wolfram.com/ReuleauxTriangle.html
12. ↑ Wolfram MathWorld, http://mathworld.wolfram.com/ReuleauxTriangle.html
13. ↑ Wikipedia, http://en.wikipedia.org/wiki/Wankel_engine
14. ↑ Berggren, Lennart; Borwein, Jonathan M.; Borwein, Peter B. Pi: A Source Book. New York: Springer, 2004. Print.
http://quant.stackexchange.com/questions/1279/debunking-risk-premium-via-hedging-argument-or-why-even-in-the-real-world?answertab=votes
# Debunking risk premium via "hedging" argument? (or why even in the real world $\mu$ should equal $r$)

Since I began thinking about portfolio optimization and option pricing, I've struggled to get an intuition for the risk premium, i.e. that investors are only willing to buy risky instruments when they are compensated by an add-on above $r$ (the risk-free interest rate). On the one hand this is understandable and backed by most empirical data. On the other hand there is this divide between the risk-neutral world of derivatives pricing and the real world with real-world probabilities. The bridge between both worlds goes via a hedging argument. I wonder (just to give one possible way of debunking the classic risk-premium argument): if there really were something like a systematic risk premium $\mu-r$, nothing would be easier than to squeeze it out by simply buying the underlying and selling the futures contract against it. The result should be a reduction of the risk premium until it becomes $0$ and the trading strategy no longer works. The consequence would be that even in a risk-averse world probabilities become risk-neutral (i.e. the growth rate $\mu$ equals $r$), just as in option pricing. Is there something wrong with this reasoning?

EDIT: OK, my original argument doesn't work. Anyway, I still feel uncomfortable with the idea. My reasoning is that the whole notion of an inherent mechanism within a random process for compensating people for taking risk nevertheless (!) seems to be built on shaky ground. If it were true, it would also have to be true for shorting the instrument, since you are taking risk there as well. But in this more or less symmetrical situation one side seems to be privileged, and the more privileged one side is, the more disadvantaged the other side must be. It all doesn't make sense... I think the only compensation that makes sense in the long run is $r$. Please feel free to comment and/or give some references on similar ideas. Thank you again.

EDIT2: After a longer quiet period on this issue, see the follow-up question and its answers.

- Maybe I'm missing some point here, but I'm not sure how you're able to "squeeze out" the risk premium by using the proposed construct. Could you elaborate a bit? – Karol Piczak Jun 5 '11 at 21:13
- @Karol: see my edit to the post. – vonjd Jun 6 '11 at 11:05
- @vonjd Just to clarify one point, the reason for the asymmetry between long and short is that the equity market in aggregate must be held long by all participants. This is not the case, for example, with commodity futures, which is why we do not find a consistent risk premium to holding commodities. – Tal Fishman Jul 9 '12 at 15:50

## 2 Answers

If you're long the underlying and short the futures contract, then you have no risk and earn the risk-free rate. You get into the position at $S_0$ and will be able to get out of the position at $F_0$ at time $T$. By a no-arbitrage argument it must be that $F_0 = S_0 \exp(r T)$. I imagine Hull has a pretty good exposition on this. The risk premium exists because of uncertainty, and in this case there's no uncertainty. But imagine that everyone knows that half the time I will not deliver the underlying at $T$. Then you'll likely only agree to $F_0 < \frac{1}{2} S_0 \exp(r T)$, and if everyone knows that half the time I won't deliver, then arbitrage won't restore $F_0 = S_0 \exp(r T)$.

-

I also find the "risk premium" idea unsatisfying, but I don't think your hedging argument works.
Because hedging implies correlation, the risk free asset needs to be well correlated with the risky one in order for you to make any spread between the two. Are you saying that someone should be able to earn the risk premium (without taking any risk!) by being long equities and short treasuries, for example? I don't think that's possible. -
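As a small numerical companion to the first answer's no-arbitrage point, here is a sketch of ours with made-up numbers (not from the thread): if the quoted futures price sits above $S_0 \exp(rT)$, buying the spot and selling the future locks in a riskless profit.

```python
import math

# Illustrative numbers only (hypothetical, not quoted anywhere in the thread).
S0, r, T = 100.0, 0.03, 1.0            # spot price, risk-free rate, time to delivery
fair = S0 * math.exp(r * T)            # no-arbitrage forward price F0 = S0 * exp(r*T)

quoted = 105.0                         # hypothetical quoted futures price
# Cash-and-carry: borrow S0 at rate r, buy the spot, sell the future at `quoted`;
# at time T deliver the asset and repay the loan of S0*exp(r*T).
riskless_profit = quoted - fair
print(f"fair forward = {fair:.2f}, locked-in profit = {riskless_profit:.2f}")
```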
http://en.wikipedia.org/wiki/Hermitian_matrix
# Hermitian matrix

In mathematics, a Hermitian matrix (or self-adjoint matrix) is a square matrix with complex entries that is equal to its own conjugate transpose: the element in the i-th row and j-th column is equal to the complex conjugate of the element in the j-th row and i-th column, for all indices i and j:

$a_{ij} = \overline{a_{ji}}\,.$

If the conjugate transpose of a matrix $A$ is denoted by $A^\dagger$, then the Hermitian property can be written concisely as

$A = A^\dagger\,.$

Hermitian matrices can be understood as the complex extension of real symmetric matrices. They are named after Charles Hermite, who demonstrated in 1855 that matrices of this form share with real symmetric matrices the property of always having real eigenvalues.

## Examples

See the following example:

$\begin{bmatrix} 2 & 2+i & 4 \\ 2-i & 3 & i \\ 4 & -i & 1 \\ \end{bmatrix}$

The diagonal elements must be real, as they must be their own complex conjugates. The well-known families of Pauli matrices, Gell-Mann matrices and their generalizations are Hermitian. In theoretical physics such Hermitian matrices are often multiplied by imaginary coefficients,[1][2] which results in skew-Hermitian matrices (see below).

## Properties

The entries on the main diagonal (top left to bottom right) of any Hermitian matrix are necessarily real, because they have to be equal to their own complex conjugates. A matrix that has only real entries is Hermitian if and only if it is a symmetric matrix, i.e., if it is symmetric with respect to the main diagonal. A real symmetric matrix is simply a special case of a Hermitian matrix.

Every Hermitian matrix is a normal matrix. The finite-dimensional spectral theorem says that any Hermitian matrix can be diagonalized by a unitary matrix, and that the resulting diagonal matrix has only real entries. This implies that all eigenvalues of a Hermitian matrix A are real, and that A has n linearly independent eigenvectors. Moreover, it is possible to find an orthonormal basis of $\mathbb{C}^n$ consisting of n eigenvectors of A.

The sum of any two Hermitian matrices is Hermitian, and the inverse of an invertible Hermitian matrix is Hermitian as well. However, the product of two Hermitian matrices A and B is Hermitian if and only if AB = BA. Thus $A^n$ is Hermitian if A is Hermitian and n is an integer.

The Hermitian complex n-by-n matrices do not form a vector space over the complex numbers, since the identity matrix $I_n$ is Hermitian but $iI_n$ is not. However, the complex Hermitian matrices do form a vector space over the real numbers R. In the $2n^2$-dimensional vector space of complex n × n matrices over R, the complex Hermitian matrices form a subspace of dimension $n^2$. If $E_{jk}$ denotes the n-by-n matrix with a 1 in the (j, k) position and zeros elsewhere, a basis can be described as follows:

$\; E_{jj}$ for $1\leq j\leq n$ (n matrices)

together with the set of matrices of the form

$\; E_{jk}+E_{kj}$ for $1\leq j<k\leq n$ ($\tfrac{n^2-n}{2}$ matrices)

and the matrices

$\; i(E_{jk}-E_{kj})$ for $1\leq j<k\leq n$ ($\tfrac{n^2-n}{2}$ matrices)

where $i$ denotes the complex number $\sqrt{-1}$, known as the imaginary unit.

If n orthonormal eigenvectors $u_1,\dots,u_n$ of a Hermitian matrix are chosen and written as the columns of the matrix U, then one eigendecomposition of A is $A = U \Lambda U^\dagger$ where $U U^\dagger = I = U^\dagger U$, and therefore $A = \sum_j \lambda_j u_j u_j^\dagger$, where $\lambda_j$ are the eigenvalues on the diagonal of the diagonal matrix $\; \Lambda$.
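As a quick illustration (a sketch of ours, not part of the article), NumPy's `eigh` routine performs exactly the unitary diagonalization just described, and the computed eigenvalues of the example matrix above come out real:

```python
import numpy as np

# The example matrix from the article: equal to its own conjugate transpose.
A = np.array([[2, 2 + 1j, 4],
              [2 - 1j, 3, 1j],
              [4, -1j, 1]])

assert np.allclose(A, A.conj().T)                 # Hermitian check

eigenvalues, U = np.linalg.eigh(A)                # spectral decomposition A = U diag(w) U^dagger
print(eigenvalues)                                # all eigenvalues are real
assert np.allclose(U.conj().T @ U, np.eye(3))     # U is unitary
assert np.allclose(A, U @ np.diag(eigenvalues) @ U.conj().T)
```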
Additional facts related to Hermitian matrices include:

• The sum of a square matrix and its conjugate transpose $(C + C^{\dagger})$ is Hermitian.
• The difference of a square matrix and its conjugate transpose $(C - C^{\dagger})$ is skew-Hermitian (also called antihermitian).
• This implies that the commutator of two Hermitian matrices is skew-Hermitian.
• An arbitrary square matrix C can be written as the sum of a Hermitian matrix A and a skew-Hermitian matrix B: $C = A+B \quad\mbox{with}\quad A = \frac{1}{2}(C + C^{\dagger}) \quad\mbox{and}\quad B = \frac{1}{2}(C - C^{\dagger}).$
• The determinant of a Hermitian matrix is real. Proof: $\det(A) = \det(A^\mathrm{T})\quad \Rightarrow \quad \det(A^\dagger) = \det(A)^*$. Therefore, if $A=A^\dagger$, then $\det(A) = \det(A)^*$, so $\det(A)$ is real. (Alternatively, the determinant is the product of the matrix's eigenvalues, and as mentioned before, the eigenvalues of a Hermitian matrix are real.)
http://physics.stackexchange.com/questions/14082/what-is-the-physical-significance-of-dot-cross-product-of-vectors-why-is-divi/14084
# What is the physical significance of dot & cross product of vectors? Why is division not defined for vectors?

I get the physical significance of vector addition & subtraction. But I don't understand what the dot & cross products mean. More specifically,

• why is it that the dot product of vectors $\vec{A}$ and $\vec{B}$ is defined as $AB \cos{\theta}$
• why is it that the cross product of vectors $\vec{A}$ and $\vec{B}$ is defined as $AB \sin{\theta}$

To me, both these formulae seem to be arbitrarily defined (although I know that is definitely not the case). If the cross product can be defined arbitrarily, why can't we define division of vectors? What's wrong with that? Why can't vectors be divided?

- Division is the inverse of multiplication. A vector space in which you can also multiply two vectors is called an algebra (over a field). The cross product is not a type of multiplication as it is not associative. The dot product also doesn't count as multiplication as it maps two vectors into a scalar. The quaternions are an example of a vector space which is also an algebra. – Olaf Aug 29 '11 at 11:15

## 6 Answers

I get the physical significance of vector addition & subtraction. But I don't understand what the dot & cross products mean.

Perhaps you would find the geometric interpretations of the dot and cross products more intuitive: The dot product of A and B is the length of the projection of A onto B multiplied by the length of B (or the other way around; it's commutative). The magnitude of the cross product is the area of the parallelogram with two sides A and B. The orientation of the cross product is orthogonal to the plane containing this parallelogram.

Why can't vectors be divided?

How would you define the inverse of a vector such that $\mathbf{v} \times \mathbf{v}^{-1} = \mathbf{1}$? What would be the "identity vector" $\mathbf{1}$? In fact, the answer is that sometimes you can. In particular, in two dimensions you can make a correspondence between vectors and complex numbers, where the real and imaginary parts of the complex number give the (x, y) coordinates of the vector. Division is well-defined for the complex numbers. The cross product only exists in 3D. Division is defined in some higher-dimensional spaces too (such as the quaternions), but only if you give up commutativity and/or associativity. Here's an illustration of the geometric meanings of dot and cross product, from the Wikipedia article for dot product and the Wikipedia article for cross product:

- To be precise, it's the length of B times the length of the projection of A onto B (or vice versa). – Ted Bunn Aug 28 '11 at 21:08
- edit: woops, fixed. – nibot Aug 28 '11 at 21:27

The best way is to ignore the garbage authors put in elementary physics books, and define it with tensors. A tensor is an object which transforms as a product of vectors under rotations. Equivalently, it can be defined by linear functions of (sets of vectors) and (linear functions of sets of vectors); all this is described on Wikipedia. There are exactly two tensors which are invariant under rotations: $\delta_{ij}$ and $\epsilon_{ijk}$. All other tensors which are invariant under rotations are products and tensor traces of these.
These tensors define the "dot product" and "cross product", neither of which is a good notion of product:

$V \cdot U = V^i U^j \delta_{ij}$

and the cross product

$(V \times U)_k = V^i U^j \epsilon_{ijk}.$

It is pointless to try to think of the cross product as a "product", because it is not associative: $(A\times B)\times C$ does not equal $A\times(B\times C)$. It is also less than useful to think of the dot product as a product in the usual sense, because it takes pairs of vectors to numbers, and $(A\cdot B)C$ does not equal $A(B\cdot C)$: the first points in the C direction, and the second points in the A direction. The best way is to get used to the invariant tensors. These generalize to arbitrary dimensions, they are much clearer, and they do not require a right-hand rule (this is taken care of by the index order convention). You will not find a single physics paper which uses the cross product, with the single exception of Feynman's 1981 paper "The qualitative behavior of Yang-Mills theory in 2+1 dimensions", and even if you do, it is trivial to translate.

- Any spherically symmetric metric is invariant under rotations. You mean rotations about an arbitrary point. – Jerry Schirmer Aug 29 '11 at 7:18
- The spherically symmetric metric in 3-d Euclidean space is $\delta_{ij}$. – Ron Maimon Aug 29 '11 at 7:37
- Sure, but you said "two tensors". I can define all sorts of spherically symmetric tensors in 3-d Euclidean space. They just aren't the 3-d Euclidean metric. – Jerry Schirmer Aug 29 '11 at 7:51
- Nice answer. What book would you recommend for reading up on this stuff? – Larry Harson Aug 29 '11 at 14:17
- @Jerry: like what? – Ron Maimon Aug 29 '11 at 14:34

If you're going to define division of vectors, you first have to decide which "multiplication" of vectors the division is supposed to invert: For ordinary numbers, I think of $\frac{x}{y}$ as the number that, when multiplied by $y$, gives $x$. So $\frac{\vec x}{\vec y}$ would have to be the vector that, when "multiplied" by $\vec y$, gives $\vec x$. If our multiplication is the dot product, we're already in trouble, because the dot product of two vectors is a scalar, and the definition above would therefore require $\frac{\vec x}{\vec y}$ to be simultaneously a vector and a scalar. Similarly, if our operation is the cross product, then we know that, for any vectors $\vec x$ and $\vec y$ and any scalar $c$, we have ${\vec x} \times {\vec y} = {\vec x}\times \left({\vec y} + c{\vec x}\right)$, so there are infinitely many vectors satisfying the property "when cross-multiplied by $\vec y$, gives $\vec x$". Therefore, division with respect to the cross product is not unique.

-

For products you have the answers. For division I recommend reading more about quaternions. Interpreting vectors in terms of quaternions allows for a richer algebra than the vector space itself. A little math right here: for a natural definition of division you need at least a division ring (one may comment that a division algebra is enough; then add the octonions to my answer). There is a theorem that the only finite-dimensional associative division algebras over the reals are the reals, the complex numbers, and the quaternions. Ordinary vectors form a three-dimensional vector space. So any division for three-dimensional vectors will be "unnatural".

-

In addition to nibot's answer: dividing something is finding a part of something. In the case of a vector, its part has the same direction but a smaller length. So it is natural to divide vectors by numbers, not by vectors.
Those dot and cross products are not simple products because they depend not only on lengths but also on orientations. They are called correspondences between a couple of vectors and numbers or vectors. - You can divide vectors with clifford ("geometric") algebra. The geometric product of vectors is associative: $$abc = (ab)c = a(bc)$$ And the geometric product of a vector with itself is a scalar. $$aa = |a|^2$$ These are all the properties required to define a unique product of vectors. All other properties can be derived. I'll sum them up, however: for two vectors, the geometric product marries the dot and cross products. $$ab = a \cdot b + a \wedge b$$ We use wedges instead of crosses because this second term is not a vector. We call it a bivector, and it represents an oriented plane. It can be instructive to introduce a basis to see this. $e_1 e_1 = e_2 e_2 = 1$ and $e_1 e_2 = -e_2 e_1$ capture the geometric product's properties for these orthonormal basis vectors. The geometric product is then, $$ab = (a^1 e_1 + a^2 e_2) (b^1 e_1 + b^2 e_2) = (a^1 b^1 + a^2 b^2) + (a^1 b^2 - a^2 b^1) e_1 e_2$$ As I said, the geometric product of two vectors is invertible in Euclidean space. This is obvious from the associativity property: $a b b^{-1} = a(b b^{-1}) = a$. That $b b^{-1} = 1$ implies that $$b^{-1} = b/|b|^2$$ It's informative to look at the quantity $a = (a b) b^{-1}$, using the grouping to decompose it a different way. $$a = (ab)b^{-1} = (a \cdot b) b^{-1} + (a \wedge b) \cdot b^{-1}$$ The first term is in the direction of $b$, the second is orthogonal to $b$. This decomposes $a$ into $a_\parallel$ and $a_\perp$. What others have said is right, you can't define just the vector cross product to be invertible. This decomposition should convince you--you cannot fully reconstruct a vector without information from both the dot and cross products. And as has been said, this product is not commutative. -
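As a small numerical companion to the answers above (a NumPy sketch of ours, not part of the thread), the projection reading of the dot product, the parallelogram reading of the cross product, and the parallel/perpendicular split of one vector along another can all be checked directly:

```python
import numpy as np

a = np.array([3.0, 1.0, 2.0])
b = np.array([1.0, 2.0, 2.0])

# dot(a, b) = |b| * (signed length of the projection of a onto b)
b_hat = b / np.linalg.norm(b)
proj_len = np.dot(a, b_hat)
print(np.dot(a, b), np.linalg.norm(b) * proj_len)   # the two numbers agree

# |a x b| = area of the parallelogram spanned by a and b
print(np.linalg.norm(np.cross(a, b)))

# Split a into parts parallel and perpendicular to b
a_par = proj_len * b_hat
a_perp = a - a_par
assert np.allclose(a_par + a_perp, a)
assert np.isclose(np.dot(a_perp, b), 0.0)
```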
http://math.stackexchange.com/questions/162989/if-am-bm-and-an-bn-for-m-n-1-does-a-b
# If $a^m=b^m$ and $a^n=b^n$ for $(m,n)=1$, does $a=b$? [duplicate]

Possible Duplicate: Prove that $a=b$, where $a$ and $b$ are elements of the integral domain $D$

Something I'm curious about: suppose $a,b$ are elements of an integral domain, such that $a^m=b^m$ and $a^n=b^n$ for $m$ and $n$ coprime positive integers. Does this imply $a=b$? Since $m,n$ are coprime, I know there exist integers $r$ and $s$ such that $rm+sn=1$. Then $$a=a^{rm+sn}=a^{rm}a^{sn}=b^{rm}b^{sn}=b^{rm+sn}=b.$$ However, I'm worried that if $r$ or $s$ happen to be negative then $a^{rm}, a^{sn}$, etc. may not make sense, and moreover, I don't see where the fact that I'm working in a domain comes into play. How can this be remedied?

-

## marked as duplicate by Arturo Magidin, tomasz, Jennifer Dylan, Alex Becker, Jyrki Lahtonen Aug 20 '12 at 7:24

This question has been asked before and already has an answer. If those answers do not fully address your question, please ask a new question.

## 4 Answers

That works as long as you pass to the fraction field. But using fractions, the proof is much simpler: excluding the trivial case $\rm\,b=0,\,$ we have $\rm\:(a/b)^m = 1 = (a/b)^n\:$ hence the order of $\rm\,a/b\,$ divides the coprime integers $\rm\,m,n,\,$ thus the order must be $1.\,$ Therefore $\rm\,a/b = 1,\,$ so $\rm\,a = b.\,$ For a proof avoiding fraction fields see this proof that I taught to a student. Conceptually, both proofs exploit the innate structure of an order ideal. Often hidden in many proofs in elementary number theory are various ideal structures, e.g. denominator/conductor ideals in irrationality proofs. Pedagogically, it is essential to bring this structure to the fore.

-

Thanks! I'm familiar with fraction fields, so I find this proof quite nice and simple. – hmIII Jun 25 '12 at 20:17

If $a=0$ or $b=0$, the conclusion follows, so we may assume $a\neq 0$ and $b\neq 0$. Suppose that $s\lt 0$ (in which case $r\gt 0$). Write $s=-t$ with $t\gt 0$. Then $rm = 1+tn$. So we have $$aa^{tn} = a^{1+tn} = a^{rm} = (a^m)^r = (b^m)^r = b^{rm} = b^{1+tn} = bb^{tn}.$$ Since $a^{tn} = (a^n)^t = (b^n)^t = b^{tn}$, we conclude from $aa^{tn}=bb^{tn}$ that $a=b$. A symmetric argument holds if $r\lt 0$. (Basically, we are going to the field of fractions and then clearing denominators "behind the scenes".)

Alternatively, say $m = qn+r$, $0\leq r\lt n$. Then $a^ra^{qn} = b^rb^{qn}=b^ra^{qn}$, which yields $a^r=b^r$; so you can replace $m$ with its remainder modulo $n$. Repeating as in the Euclidean Algorithm, we get that if $a^n=b^n$ and $a^m=b^m$, then $a^{\gcd(n,m)} = b^{\gcd(n,m)}$.

-

Thanks! One question, is it supposed to be $aa^{tn}=a^{1+tn}+\cdots$ in the first string of equalities? – hmIII Jun 25 '12 at 20:15

+1 I like this way of avoiding the field of fractions. Use the cancellation law instead! It's valid in an integral domain! Equivalent, of course, but conceptually a bit simpler IMHO. – Jyrki Lahtonen Jun 25 '12 at 20:29

@hmIII: Yes; I had $s$ and tried to fix it, but I forgot that one. – Arturo Magidin Jun 25 '12 at 20:49

Your concerns address each other :) You are worried that $r$ and $s$ may be negative, indicating you wish that inverses for $r$ and $s$ exist so that negative powers for them are defined. But if you are in a commutative domain, you can work in the field of fractions for the domain, where they are defined! So, as far as I can see, your logic is completely right, in the field of fractions of the domain.
-

Hint: Let $d$ be the least positive integer such that $a^d=b^d$. Show that $d|n$ and $d|m$. This approach will not require $R$ commutative, or even that $R$ have a multiplicative identity, only that it not have zero divisors. Specifically, use the division algorithm to write $n=dq+r$ with $0\leq r<d$; then if $r>0$, show $a^r = b^r$, contradicting that $d$ was the least such exponent.

-

@BillDubuque Depends on the level of the reader, I guess. Sometimes it is nice to emphasize the absolutely elementary nature of a thing, before piling on the abstractions. For example, this proof works for semi-groups with cancellation, even. – Thomas Andrews Jun 25 '12 at 20:36
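A brute-force sanity check of the statement, and of where the no-zero-divisors hypothesis enters (the moduli and exponent pairs below are arbitrary choices): in the fields $\mathbb{Z}/p$ the implication holds, while $\mathbb{Z}/4$, which has zero divisors, gives a counterexample.

```python
from math import gcd
from itertools import product

def counterexample(modulus, m, n):
    """Search Z/modulus for a != b with a^m = b^m and a^n = b^n."""
    for a, b in product(range(modulus), repeat=2):
        if a != b and pow(a, m, modulus) == pow(b, m, modulus) \
                  and pow(a, n, modulus) == pow(b, n, modulus):
            return (a, b)
    return None

# In the fields Z/p (integral domains) there is never a counterexample.
for p in (2, 3, 5, 7, 11, 13):
    for m, n in ((2, 3), (3, 4), (4, 9), (5, 6)):
        assert gcd(m, n) == 1 and counterexample(p, m, n) is None

# Z/4 is not a domain (2*2 = 0 there), and the implication fails:
print(counterexample(4, 2, 3))   # (0, 2): 0^2 = 2^2 = 0 and 0^3 = 2^3 = 0, but 0 != 2
```

The $\mathbb{Z}/4$ example is exactly where cancellation (used in the second answer to pass from $aa^{tn}=bb^{tn}$ to $a=b$) breaks down.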
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 67, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9242016673088074, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/131225/birth-and-death-processes
# birth and death processes

Suppose we have a system of $N$ balls, each of which can be in one of two boxes. A ball in box I stays there for a random amount of time with an exponential($\lambda$) distribution and then moves instantaneously to box II. A ball in box II stays there for a random amount of time with an exponential($\lambda$) distribution and then moves instantaneously to box I. All balls act independently of each other. Let $X_t$ be the number of balls in box I at time $t$.

a) I'm trying to show that $X$ is a birth and death process and to specify the birth and death rates.

b) How can we find the stationary distribution of the process?

For part a, if we can show it satisfies a Yule process, this is essentially what we're trying to do. And for part b, I also want some clarification on the detailed balance equations for this problem.

-

This is the Ehrenfest urn model in continuous time. At the event times of a Poisson($\lambda$) process, $X_t$ goes up or down by 1. – mike Apr 13 '12 at 11:57

Like @mike said, except the intensity of the Poisson process is $N\lambda$. (Unrelated: Yule processes are pure birth, hence they cannot model bounded populations like here.) – Did Apr 14 '12 at 9:20
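Taking the comments at face value (each of the $N$ balls switches boxes independently at rate $\lambda$), from state $k$ the chain goes up at rate $\lambda(N-k)$ and down at rate $\lambda k$; those rates are my reading of the problem, not something spelled out in the thread. A short sketch that solves the detailed balance equations numerically and compares with the binomial$(N, 1/2)$ law (the values of $N$ and $\lambda$ are arbitrary):

```python
from math import comb

N, lam = 10, 0.7                 # arbitrary illustration values

def birth(k):                    # one of the N-k balls in box II jumps to box I
    return lam * (N - k)

def death(k):                    # one of the k balls in box I jumps to box II
    return lam * k

# Detailed balance: pi[k] * birth(k) = pi[k+1] * death(k+1)
pi = [1.0]
for k in range(N):
    pi.append(pi[-1] * birth(k) / death(k + 1))
total = sum(pi)
pi = [p / total for p in pi]

binomial = [comb(N, k) / 2**N for k in range(N + 1)]
print(max(abs(p - q) for p, q in zip(pi, binomial)))   # ~0: stationary law is Binomial(N, 1/2)
```

Note that $\lambda$ cancels out of the detailed balance recursion, which is why the stationary distribution does not depend on it.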
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 3, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9174741506576538, "perplexity_flag": "head"}
http://mathoverflow.net/questions/36800?sort=oldest
## Fermat’s Bachet-Mordell Equation

Fermat once claimed that the only integral solutions to $y^2 = x^3 - 2$ are $(3, \pm 5)$. Fermat knew Bachet's duplication formulas (more precisely, Bachet had a formula for computing what we call $-2P$), which for $y^2 = x^3 + ax + b$ says $$x_{2P} = \frac{x^4-8bx}{4x^3+4b} = \frac x4 \cdot \frac{x^3 - 8b}{x^3+b}.$$ Using this formula it is easy to prove the following: Consider the point $P = (3,5)$ on the elliptic curve $y^2 = x^3 - 2$. The $x$-coordinate $x_n$ of $[-2]^nP$ has a denominator divisible by $4^n$; in particular, $[-2]^nP$ has integral coordinates only if $n = 0$. In fact, writing $x_n = p_n/q_n$ for coprime integers $p_n$, $q_n$, we find $$x_{n+1} = \frac{x_n}4 \cdot \frac{x_n^3 + 16}{x_n^3 - 2} = \frac{p_n}{4q_n} \cdot \frac{p_n^3 + 16q_n^3}{p_n^3 - 2q_n^3}.$$ Since $p_n$ is odd for $n \ge 1$ and $q_n = 4^nu$ for some odd number $u$ (use induction), we deduce that the power of $2$ dividing $q_{n+1}$ is $4$ times that dividing $q_n$.

My question is whether the general result that $kP$ has integral affine coordinates if and only if $k = \pm 1$ can be proved along similarly simple lines. The modern proofs based on the group law, if I recall correctly, use Baker's theorem on linear forms in logarithms.

-

Franz---if you factorise $y^2+2$ in the integers of $\mathbf{Q}(\sqrt{-2})$ then you can find all integer points on the curve easily and deduce what you want. Is this ekementary enougn for you? – Kevin Buzzard Aug 26 2010 at 21:36

Or even elementary enough? – Kevin Buzzard Aug 26 2010 at 21:38

@Kevin, that approach must be elementary, it's in Elementary Number Theory, by Uspensky and Heaslett, pages 398-9. – Gerry Myerson Aug 27 2010 at 0:52

@Kevin: Although Weil suggests that Fermat had a solution along these lines (with the arithmetic in the ring of integers replaced by the language of binary quadratic forms), I am certain that Fermat used the approach outlined above. I just wonder if it can be completed. – Franz Lemmermeyer Aug 27 2010 at 12:32

## 1 Answer

The above result for $p=2$ can be strengthened: the denominators of $x(kP)$ and $y(kP)$ are even if and only if $k$ is even. In non-elementary language, this follows from the fact that the Tamagawa number at $2$ is 1 (or that $P$ has good reduction at $2$) and that the reduction is additive, so the kernel of reduction has index $2$. It also follows easily from the duplication formula and the fact that $2$ cannot divide $x$.

Similarly, for any prime $p$, one could find a congruence for $k$ such that the denominator of $x(kP)$ is divisible by $p$. If $p$ is a prime of good reduction, i.e. $p>3$, then there is a number $M_p$ dividing $N_p=\vert\tilde E(\mathbb{F}_p)\vert$ such that $x(kP)$ is not $p$-integral if and only if $k$ is a multiple of $M_p$.

So the answer to your question is, I guess, a "No". Just from looking at the group law, i.e. the addition and duplication formula, without using something further, one cannot be certain that for any $k$ there is a $p$ such that $M_p$ divides $k$. E.g. the denominator of $x(5P)$ is the square of $29 \cdot 211\cdot 2069$.

As Kevin comments, there are of course other elementary ways, not along these lines.

-

I was also thinking of showing that the absolute value of the denominator increases when you go from, say, 2kP to (2k+1)P. – Franz Lemmermeyer Aug 27 2010 at 12:34

Oh, I see. Maybe that works.
Here the point $P$ has everywhere good reduction, so the sqrt $e(kP)$ of the denominator of $x(kP)$ satisfies $e(kP) = f_k(P)\cdot e(P)^{k^2} = f_k(P)$ where $f_k$ is the $k$-th division polynomial. – Chris Wuthrich Aug 27 2010 at 13:45
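The $4^n$ claim in the question is easy to watch numerically: iterate Bachet's duplication map for $b=-2$ starting from $x_0=3$ with exact rational arithmetic and read off the power of $2$ in the (reduced) denominator. A small sketch, with the number of iterations chosen arbitrarily:

```python
from fractions import Fraction

def dup(x, b):
    """Bachet's duplication: x-coordinate of -2P on y^2 = x^3 + b (the case a = 0)."""
    return (x**4 - 8*b*x) / (4*x**3 + 4*b)

def v2(n):
    """2-adic valuation of a positive integer."""
    v = 0
    while n % 2 == 0:
        n //= 2
        v += 1
    return v

x = Fraction(3)                    # P = (3, 5) on y^2 = x^3 - 2
for n in range(1, 6):
    x = dup(x, -2)
    print(n, v2(x.denominator))    # prints (n, 2n): the denominator is exactly divisible by 4^n
```

The first step gives $x_1 = 129/100$, whose denominator is $4 \cdot 25$, and each further step multiplies the power of $2$ in the denominator by $4$, as the induction in the question predicts.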
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 61, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.92153000831604, "perplexity_flag": "head"}
http://crypto.stackexchange.com/questions/6513/while-generating-a-random-elliptic-curve-what-are-the-conditions-i-have-to-consi/6566
# While generating a random elliptic curve, what are the conditions I have to consider?

1. I want to generate a random elliptic curve over a prime field. What are the conditions I should satisfy?

2. For the NIST recommended standard ECC-224 bit curve with prime $p=2^{224}-2^{96}+1$, a reduction technique is given by $(z_1+z_2+z_3-z_4-z_5) \bmod{p}$. What is the logic behind this?

-

Just checking, you want to generate a curve or a point on a curve? A curve over a finite field or infinite or either? – mikeazo♦ Feb 28 at 14:04

I want to generate a random curve over a prime field. – venkat Mar 1 at 15:43

I don't understand question #2 at all. Can you help me understand? – mikeazo♦ Mar 1 at 15:59

For the NIST recommended standard ECC-224 bit curve the prime is $2^{224}-2^{96}+1$. A 224-bit by 224-bit multiplication produces a 448-bit output, which must be reduced back to the 224-bit field; the NIST standard gives the formula $(z_1+z_2+z_3-z_4-z_5) \bmod p$. What is the mathematical logic behind this formula? – venkat Mar 2 at 14:36

I have updated the question to make #2 a little better. Can you make sure I didn't change the meaning? Also, what are the $z$ values? – mikeazo♦ Mar 3 at 13:43

## 2 Answers

There are several conditions that might need to be satisfied, depending on your needs. At a bare minimum the curve you generate needs to have a large prime subgroup. To determine this one can use any number of point counting algorithms, or alternatively use the complex multiplication method to generate a curve with the desired number of points.

Beyond a large prime subgroup you may also want the twist to have a large prime subgroup, to avoid attacks based on sending points on the twist. Point validation is another way to avoid them. Sometimes a specific cofactor is required for certain parametrizations of the curve that can lead to more efficient calculations. For details you can see DJB's analysis of his choices for Curve25519, as well as papers cited by the Explicit-Formulas Database. http://hyperelliptic.org/EFD/bib.html

-

Barring other requirements that you did not explain: an elliptic curve can be written as $y^2=x^3+ax+b$. To generate a random curve over a prime field, choose $a,b$ at random from the prime field.

-
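On question 2, which the answers above do not address: since $p = 2^{224}-2^{96}+1$, one has $2^{224} \equiv 2^{96} - 1 \pmod p$, so each 32-bit word of the 448-bit product sitting above bit 223 folds down into one added chunk and one subtracted chunk; collecting the pieces gives three positive and two negative 224-bit values, which is exactly the $(z_1+z_2+z_3-z_4-z_5) \bmod p$ shape. The word grouping below is my own working-out of that folding rather than a quote of the NIST routine (check FIPS 186 before relying on it); the script verifies it against an ordinary reduction for random inputs.

```python
import random

p = 2**224 - 2**96 + 1
MASK = 2**32 - 1

def fast_reduce(c):
    """Reduce a 448-bit integer c modulo p using 2^224 = 2^96 - 1 (mod p)."""
    w = [(c >> (32 * i)) & MASK for i in range(14)]      # 32-bit words c0..c13

    def chunk(words):                                    # integer from 7 words, lowest first
        return sum(x << (32 * i) for i, x in enumerate(words))

    z1 = chunk(w[0:7])                                   # c0..c6: the low 224 bits
    z2 = chunk([0, 0, 0, w[7], w[8], w[9], w[10]])       # c7..c10 folded down, added at 2^96
    z3 = chunk([0, 0, 0, w[11], w[12], w[13], 0])        # c11..c13 folded twice, added at 2^96
    z4 = chunk(w[7:14])                                  # c7..c13, subtracted at the bottom
    z5 = chunk([w[11], w[12], w[13], 0, 0, 0, 0])        # c11..c13, subtracted at the bottom
    return (z1 + z2 + z3 - z4 - z5) % p

for _ in range(1000):
    c = random.getrandbits(448)
    assert fast_reduce(c) == c % p
print("folded reduction agrees with c % p")
```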
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 5, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9121452569961548, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/7373?sort=newest
## Geometry Vs Arithmetic of schemes ### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points) Let's suppose we have a Scheme $X$ over the the field $k$, where such a field can be though to be either $\mathbb{C}$ or a finite field $\mathbb{F}_q$. Then having this in mind, Where do we find some representative examples where Geometry governs arithmetic? That is to say, examples where the geometry (or topology) of $X$ over $\mathbb{C}$ dictates the arithmetic behavior over $\mathbb{F}_q$. Answers along with references would be highly appreciated. - Standard example: The number of rational points on a curve is governed by the Hasse-Weil bound, which depends only on the genus and size of the base field. One could conceivably argue that the genus is governed by the complex topology, but I don't see a good reason to direct a causal arrow in any particular direction. Standard reference for the analogy: some chapter in BBD, Faisceaux Pervers, Asterisque 100. – S. Carnahan♦ Dec 1 2009 at 5:25 Scott- I would say the genus is determined by the cohomology. Can you reconstruct the genus just knowing the number of points (actually you probably can...is the Hasse bound sharp if you look at large enough finite fields of a given characteristic?) – Ben Webster♦ Dec 2 2009 at 20:41 By the way, I think that BBD is a terrible reference for this stuff. Milne's Etale Cohomology is much more accessible, and in English, and actually about this stuff. (BBD sends the reader back to SGA and Theorie de Weil a lot). – Ben Webster♦ Dec 2 2009 at 20:42 @Ben, to reconstruct the Hasse bound: see my answer below. – Ilya Nikokoshev Dec 2 2009 at 23:03 ## 5 Answers Let's start with the most elementary example: projective space $\mathbb P^n$. It's not hard to see that that the number of points on it is always $q^n + q^{n-1} + \dots + q + 1.$ Note that this is because $\mathbb P^n$ can be always decomposed into simpler pieces: $\mathbb A^n \cup \mathbb A^{n-1}\cup\dots\cup \mathbb A^0$. Interestingly, something similar applies to all $\mathbb F_q$-varieties. Specifically, the Lefschetz fixed points formula from topology applied to arithmetics gives the following statement for a variety $X/\mathbb F_q:$ There exist some algebraic numbers $\alpha_i$ with $|\alpha_i| = q^{n_i/2}$ for some $(n_i)$ such that the number of points $$\# X(\mathbb F_{q^l}) = \sum_i (-1)^{n_i}\alpha^l_i\quad \text{for}\ l > 0 .$$ Numbers $\alpha_i$ in fact come from geometry: they are eigenvalues of some operators acting on etale cohomology groups $H_{et}(X)$. In particular, the numbers $n_i$ can only occupy an interval between 0 and $\text{dim}\, X$ and there are as many of them as the dimension of this group. These groups can directly compared to the case of $\mathbb C$ whenever you construct your variety in a geometric way. To see how, consider the example of curves. Over $\mathbb C$ the cohomology have the form $\mathbb C \oplus \mathbb C^{2g} \oplus \mathbb C\$ for some $g$ called genus; the same holds over $\mathbb F_q$: • projective line $\mathbb P^1$ has genus 0, so it always has $n+1$ points • elliptic curves $x^2 = y^3 + ay +b$ have genus 1, so they must have exactly $n + 1 + \alpha + \bar\alpha$ points for some $\alpha\in \mathbb C$ with $|\alpha| = \sqrt q.$ This is exactly the Hasse bound mentioned in another post. 
These theorems, which provided an unexpected connecion between topology and arithmetics some half-century ago, were just the beginning of studying varieties over $\mathbb F_q$ using the geometric intuition that comes from the complex case. You can read more at any decent introduction to arithmetic geometry or étale cohomology. There are also some questions here about motives which are a somewhat more abstract version of the above picture. As a reply to Ben's comment above about reconstructing the genus if you know `$X_n = \#X(F_{q^n})$`: • You know with certainty that $1 + q^n - X_n = \sum \alpha_i^n\$ for some algebraic numbers $\alpha_i, i = 1, 2, \dots$ having property $|\alpha_i| = \sqrt q.$ • There cannot be two different solutions $(\alpha_i)$ and $(\beta_i)$ for a given sequence of $X_n$ because if $N$ is a number such that both $\alpha_i = \beta_i = 0$ for $i>N$ then both $\alpha$ and $\beta$ are uniquely determined from the first $N+1$ terms of the sequence. • So a given sequence uniquely determines the genus. I don't know, however, if a constructive algorithm that guarantees to terminate and return genus for a sequence $X_n$ is possible. The first idea is to loop over natural numbers testing the conjecture that genus is less then $N$, but there seem to be some nuances. - Thanks! this was really helpfull – Csar Lozano Huerta Dec 3 2009 at 5:51 ### You can accept an answer to one of your own questions by clicking the check mark next to it. This awards 15 reputation points to the person who answered and 2 reputation points to you. Look at Darmon's article on "Arithmetic of Curves". - That's an amazing article, thanks! – Ilya Nikokoshev Jan 2 2010 at 22:04 Thanks!! :D Do I not have a proper claim towards the best answer? – Anweshi Jan 2 2010 at 23:53 A brief answer is: in order to relate a variety over a finite field with a one over complex numbers, a common 'nice' model for them over some number field is needed. If such a model exists, then the varieties in question have isomorphic etale cohomology groups. Probably they also have isomorphic etale homotopy types; then l-completions of their homotopy groups are isomorphic. Note here: etale cohomology 'almost computes' singular cohomology of complex varieties, and completely computes the number of point of a variety over a finite field. - I didn't know that! sounds really amazing...I'll take a look at the references, Thks – Csar Lozano Huerta Dec 3 2009 at 4:43 E. Kowalski just published a very beautifull survey on a related issue: "My main emphasis has been to try to present some of the theory and applications surrounding the Deligne Equidistribution Theorem, for non-specialists (in particular, for readers with little experience in algebraic geometry)". Deligne's theorem and the work of Katz and others later on it are tough to enter, this survey provides a kind of bridge. - Look at Dan Abramovich's Birational geometry for number theorists - Thanks, I'm taking a look – Csar Lozano Huerta Dec 1 2009 at 7:10
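The genus-1 bullet above is easy to experiment with: count points on a fixed curve over $\mathbb F_p$, check the Hasse bound $|p+1-\#E| \le 2\sqrt p$, and use the fact that $\alpha,\bar\alpha$ are the roots of $T^2 - tT + p$ (with $t = p+1-\#E(\mathbb F_p)$) to predict $\#E(\mathbb F_{p^l}) = p^l + 1 - (\alpha^l + \bar\alpha^l)$. The curve and the primes below are arbitrary choices with good reduction.

```python
def count_points(a, b, p):
    """#E(F_p) for y^2 = x^3 + a*x + b over an odd prime p, including the point at infinity."""
    def chi(v):                                # Legendre symbol (v/p)
        if v % p == 0:
            return 0
        return 1 if pow(v, (p - 1) // 2, p) == 1 else -1
    return p + 1 + sum(chi(x**3 + a * x + b) for x in range(p))

a, b = 0, -2                                   # y^2 = x^3 - 2, good reduction away from 2 and 3
for p in (5, 7, 11, 13, 101, 1009):
    N = count_points(a, b, p)
    t = p + 1 - N                              # alpha + conj(alpha), with alpha*conj(alpha) = p
    assert t * t <= 4 * p                      # Hasse bound
    # alpha^l + conj(alpha)^l obeys s_l = t*s_(l-1) - p*s_(l-2), with s_0 = 2, s_1 = t
    s_prev, s_cur, predictions = 2, t, []
    for l in range(1, 4):
        predictions.append(p**l + 1 - s_cur)
        s_prev, s_cur = s_cur, t * s_cur - p * s_prev
    print(p, N, predictions)                   # predicted #E over F_p, F_{p^2}, F_{p^3}
```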
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 46, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.935457170009613, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/99432?sort=votes
## Matrix representation of 2F4(2)' in unitary U(27) ### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points) Hello, I am looking for matrix representation of Tits group $^2F_4(2)'$ of size 17 971 200. Atlas of finite groups offer several matrix representations but they are not embedded in U(27). They are in GL(27, C). I am looking really for embedding of $^2F_4(2)'$ in $E_6$ compact Lie group. I tried to guess the embedding but no luck so far. I have come to idea that if I have this group generators in U(27) then I will find them in $E_6$ really. Atsuyama has defined embedding of EIII symmetric space into E6 Lie group by mapping point in EIII to "reflection". The formula for such reflection can be found in Atsuyma paper. I am hoping to find 1755 points in EIII in which reflections would be 2A conjugacy class in $^2F_4(2)'$. Motivation The motivation for my research is following. Daniel Allcock has written to me that "There seems to be a friendship between $^3D_4(2)$ and $Co_0$ even though neither contains the other". That friendship can be expressed as mapping 819 reflections from 2A conjugacy class in $^3D_4(2)$ embedded in F4 into elements of 2A conjugacy class of $Co_0$. One could think that there might be such friendship between $^2F_4(2)'$ embedded in E6 Lie group and some sporadic group X. Regards, Marek - Marek, could you define EIII? Just to be sure I understand what you're asking: Are you looking for irreducible embeddings? If so the ATLAS says there are only two into GL(27,C) and if these don't work (as you say), then you have no chance. There are also two into GL(26,C); so the question is whether the image of these reps in GL(26,c) can in turn be embedded into U(27). (I'm not sure if this means that they must be embedded into U(26) but I would guess so...) – Nick Gill Jun 13 at 12:40 I presume you are familiar with this paper: ams.org/mathscinet-getitem?mr=1653177 Griess, Robert L., Jr.; Ryba, A. J. E. Finite simple groups which projectively embed in an exceptional Lie group are classified! Bull. Amer. Math. Soc. (N.S.) 36 (1999), no. 1, 75–93. This paper gives a full list of finite simple groups lying inside exceptional Lie groups. For the ${^2F_4(2)'}$ embedding in $E_6$ they refer to this paper: Arjeh Cohen and David Wales, On finite subgroups of F4 (C) and E6 (C), Proc. London Math. Soc., (3) 74 (1997), 105-150. – Nick Gill Jun 13 at 12:55 EIII is defined in Atsuyama "Projective spaces...II", 1997 as elements of complexified Jordan algebra h3O satisfying x delta x = 0 where points x and ax are identified for non zero complex number a. Paper of Cohen, Wales proves that embedding of Ree 2F4(2)' in E6 exists but does not give explicit embedding, which I am looking for. So if such embedding exists, then having E6 embedded in U(27) we must obtain 2F4(2)' in U(27). So theory say that embedding exists but in practice we don't know how to obtain matrices in U(27) generating this group. Regards, Marek – Marek Mitros Jun 13 at 15:45 No more answers ? Can somebody explain why representation theory does not give embedding of finite group into O(n) orthogonal Lie group ? BTW. I discovered on June 20th that $^2F_4(2)′$ is subgroup of Fischer $Fi_{22}$ ! Any comments on this fact ? – Marek Mitros Jun 29 at 20:43 1 A finite group representation is always unitary, and you can construct the corresponding invariant form; this amounts to some linear algebra. Whether it is orthogonal, is completely determined by its character. See e.g. 
Serre's little book on representation theory. – Dima Pasechnik Jun 30 at 0:17

## 1 Answer

I am happy to report that Robert Wilson has sent me the complex matrices generating the $^2F_4(2)'$ group inside the E6 compact Lie group. See the paper http://arxiv.org/pdf/1208.4221.pdf for details. Best regards, Marek

-
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 14, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.907043993473053, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/570/deformation-theory-of-representations-of-an-algebraic-group/14230
## Deformation theory of representations of an algebraic group ### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points) For an algebraic group G and a representation V, I think it's a standard result (but I don't have a reference) that • the obstruction to deforming V as a representation of G is an element of H2(G,V⊗V*) • if the obstruction is zero, isomorphism classes of deformations are parameterized by H1(G,V⊗V*) • automorphisms of a given deformation (as a deformation of V; i.e. restricting to the identity modulo your square-zero ideal) are parameterized by H0(G,V⊗V*) where the Hi refer to standard group cohomology (derived functors of invariants). The analogous statement, where the algebraic group G is replaced by a Lie algebra g and group cohomology is replaced by Lie algebra cohomology, is true, but the only proof I know is a big calculation. I started running the calculation for the case of an algebraic group, and it looks like it works, but it's a mess. Surely there's a long exact sequence out there, or some homological algebra cleverness, that proves this result cleanly. Does anybody know how to do this, or have a reference for these results? This feels like an application of cotangent complex ninjitsu, but I guess that's true about all deformation problems. While I'm at it, I'd also like to prove that the obstruction, isoclass, and automorphism spaces of deformations of G as a group are H3(G,Ad), H2(G,Ad), and H1(G,Ad), respectively. Again, I can prove the Lie algebra analogues of these results by an unenlightening calculation. ## Background: What's a deformation? Why do I care? I may as well explain exactly what I mean by "a deformation" and why I care about them. Last things first, why do I care? The idea is to study the moduli space of representations, which essentially means understanding how representations of a group behave in families. That is, given a representation V of G, what possible representations could appear "nearby" in a family of representations parameterized by, say, a curve? The appropriate formalization of "nearby" is to consider families over a local ring. If you're thinking of a representation as a matrix for every element of the group, you should imagine that I want to replace every matrix entry (which is a number) by a power series whose constant term is the original entry, in such a way that the matrices still compose correctly. It's useful to look "even more locally" by considering families over complete local rings (think: now I just take formal power series, ignoring convergence issues). This is a limit of families over Artin rings (think: truncated power series, where I set xn=0 for large enough n). So here's what I mean precisely. Suppose A and A' are Artin rings, where A' is a square-zero extension of A (i.e. we're given a surjection f:A'→A such that I:=ker(f) is a square-zero ideal in A'). A representation of G over A is a free module V over A together with an action of G. A deformation of V to A' is a free module V' over A' with an action of G so that when I reduce V' modulo I (tensor with A over A'), I get V (with the action I had before). An automorphism of a deformation V' of V as a deformation is an automorphism V'→V' whose reduction modulo I is the identity map on V. The "obstruction to deforming" V is something somewhere which is zero if and only if a deformation exists. I should add that the obstruction, isoclass, and automorphism spaces will of course depend on the ideal I. 
They should really be cohomology groups with coefficients in V⊗V*⊗I, but I think it's normal to omit the I in casual conversation. - ## 5 Answers A representation of G on a vector space V is a descent datum for V, viewed as a vector bundle over a point, to BG. That is, linear representations of G are "the same" as vector bundles on BG. So the question is equivalent to the analogous question about deformations of vector bundles on BG. We could just as easily ask about deformations of vector bundles on any space X. Given a vector bundle V on X, consider the category of all first-order deformations of V. An object is a vector bundle over X', where X' is an infinitesimal thickening (in the example, one may take X = BG x E where E is a local Artin ring and X' = BG x E' where E' is a square-zero extension whose ideal is isomorphic as a module to the residue field). A morphism is a morphism of vector bundles on X' that induces the identity morphism on V over X. If X is allowed to vary, this category varies contravariantly with X. Vector bundles satisfy fppf descent, so this forms a fppf stack over X. This stack is very special: locally it has a section (fppf locally a deformation exists) and any two sections are locally isomorphic. It is therefore a gerbe. Moreover, the isomorphism group between any two deformations of V is canonically a torsor under the group End(V) (this is fun to check). Gerbes banded by an abelian group H are classified by H^2(X,H) (this is also fun to check); the class is zero if and only if the gerbe has a section. If the gerbe has a section, the isomorphism classes of sections form a torsor under H^1(X,H). The isomorphisms between any two sections form a torsor under H^0(X,H). (This implies that the automorphism group of any section is H^0(X,H).) In our case, H = End(V), so we obtain a class in H^2(X,End(V)) and if this class is zero, our gerbe has a section, i.e., a deformation exists. In this case, all deformations form a torsor under H^1(X,End(V)), and the automorphism group of a deformation is H^0(X,End(V)). All of the cohomology groups above are sheaf cohomology in the fppf topology. If you are using a different definition of group cohomology, there is still something to check. - $G$ was an algebraic group. Now what is meant by $BG$ ? – Wilberd van der Kallen Jan 19 2011 at 15:36 $BG$ is the stack of $G$-torsors (principal $G$-bundles). – Keerthi Madapusi Pera Jan 19 2011 at 16:13 ### You can accept an answer to one of your own questions by clicking the check mark next to it. This awards 15 reputation points to the person who answered and 2 reputation points to you. The statements about the group and Lie algebra in the question are special cases of a more general fact. Namely, if $A$ is an associative algebra and $V$ an $A$-module, then obstructions to deformations of $V$ lie in the Hochschild cohomology group $HH^2(A,{\rm End}(V))$, freedom of deformation in $HH^1(A,{\rm End}(V))$, and infinitesimal automorphisms in $HH^0(A,{\rm End}(V))$. This is rather easy to check using the bar complex. 
Now, the statement for Lie algebras is the special case $A=U({\mathfrak g})$, recalling that for any $U({\mathfrak g})$-bimodule $M$, $$HH^\ast (U({\mathfrak g}),M)=H^\ast({\mathfrak g},M_{ad}).$$ Similarly, for affine algebraic groups, it is the special case $A=O(G)^\ast$, where $O(G)$ is the coalgebra of regular functions, recalling that for any (algebraic) $G$-bimodule $M$, $$HH^\ast(O(G)^\ast,M)=H^\ast(G,M_{ad}).$$ - Here's not a complete answer, but I think an enlightening trick. Deformations of V over the dual numbers are always in bijection with Ext1(V,V) in any abelian category. The trick is that if you have a deformation V', you have a long exact sequence: Hom(V,V) -> Hom(V',V) -> Hom(V,V) -> Ext1(V,V) -> Ext1(V',V) -> Ext1(V,V) -> Ext2(V,V) You can see that the extension splits if and only if the image of the identity under the boundary map is trivial (using Baer sum, you can extend this trick to show that two extensions are isomorphic if and only if the image of the identity is the same). I think the obstruction in Ext2(V,V) you had in mind is the image of that class under the next boundary map, by a similar argument. - I will offer a sketch of an argument, and maybe someone who knows what a stack is can make it happen for real. There is probably a non-stacky deformation theory of commutative Hopf algebras, but I don't know what it looks like. Deforming G as a group should be the same as deforming BG as a plain old geometric object. Pulling back a point in BG along a cover by a point is very roughly taking a based loop space, and the deformed loop space comes with the deformed composition law. Similarly, deforming a representation of G should be the same as deforming a sheaf on BG. I'm going to assume G is smooth. Then the tangent complex of BG mapping to a point is just the sheaf Ad, concentrated in degree 1. If we boldly assume that deformation theory of/on stacks works just like deformations of/on schemes, but maybe with some degree shifts, we should get the answers you want. For deforming G in particular, there is a canonical class in H^2(BG, Ad[-1]) that classifies obstructions, and if that vanishes, H^1(BG, Ad[-1]) classifies deformations and H^0(BG, Ad[-1]) classifies automorphisms of a deformation. When deforming the sheaf V, one usually sees the sheaf End(V) written as coefficients. Olsson wrote a paper on deformations of representable morphisms of stacks, and while the morphism BG -> S isn't representable, one might benefit from asking the author for additional details if one were, say, working in the same building as he. - I've heard this approach that "deforming BG as a stack is the same as deforming G as a group," and I like it, but I don't see why any deformation of BG must be of the form BG' for G' a deformation of G. Also, I don't understand how to do deformations of representations, but I guess the thing I have to understand is deformation theory of coherent sheaves on a scheme. I'll ask Martin about it when I get the chance. – Anton Geraschenko♦ Oct 16 2009 at 5:07 About what Anton said at the end about deformations of a group. Let $m_0$ be the standard multiplication. Then I want to consider a deformation of the form $m:(G \times \epsilon \mathfrak{g}) \times (G \times \epsilon \mathfrak{g}) \to G \times \epsilon \mathfrak{g}$ where $m(g_1, g_2) = m_{0}(g_1,g_2) + \epsilon m_1 (g_1,g_2)$. 
When you write out the associativity condition $m\circ (m \times 1) = m \circ (1 \times m)$ it seems that you find that $(g_1,g_2) \mapsto (m_{1}(g_{1},g_{2}))(g_{1}g_{2})^{-1}$ is a group cohomology cocycle for G acting on $\mathfrak{g}$ by the adjoint representation. Now one has to identify $H^{2}(G,Ad)$ with $H^{2}(BG,Ad)$ (taking care of the topology somehow). -
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 27, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9394829273223877, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/4808/why-circle-encloses-largest-area/26213
# Why Circle encloses largest Area? In this wikipedia, article http://en.wikipedia.org/wiki/Circle#Area_enclosed its stated that the circle is the closed curve which has the maximum area for a given arc length. First, of all i would like to see different proofs, for this result. (If there are any elementary ones!) One, interesting observation, which one can think while seeing this problem, is: How does one propose such type of problem? Does, anyone take all closed curves, and calculate their area to come this conclusion. I don't think thats the right intuition. - 8 – Qiaochu Yuan Sep 16 '10 at 20:07 ## 7 Answers There are relatively simple proofs in textbooks on calculus of variations. In more elementary approaches a convex figure is deformed, in discrete steps or through a continuous unbending process, toward a circle, and two things need to be proved: convergence to the circle, and increase of the isoperimetric ratio throughout the flow. Usually one step is easy and the other is difficult, requiring non-elementary methods to make rigorous. It is also necessary to make explicit what class of curves is considered: rectifiable, piecewise smooth, or something else. The simplest argument I know that is elementary and rigorous is to prove the finite-dimensional approximation, that for fixed edge lengths of a polygon, there is a maximum area (by compactness) and (by elementary geometry or Lagrange multipliers) it is the one where all vertices are on a circle. Then, use this to prove that any smooth curve, if it beats the circle, has a finite polygonal approximation that beats the inscribed polygon. - Here is a physicist's answer: Imagine a rope enclosing a two-dimensional gas, with vacuum outside the rope. The gas will expand, pushing the rope to enclose a maximal area at equilibrium. When the system is at equilibrium, the tension in the rope must be constant, because if there were a tension gradient at some point, there would be a non-zero net force at that point in the direction of the rope, but at equilibrium the net force must be zero in all directions. The gas exerts a force outward on the rope, so tension must cancel this force. Take a small section of rope, so that it can be thought of as a part of some circle, called the osculating circle. The force on this rope segment due to pressure is $P l$, with $P$ pressure and $l$ the length. The net force due to tension is $2 T \sin(l/2R)$, with $T$ tension and $R$ the radius of the osculating circle. Because the pressure is the same everywhere, and the force from pressure must be canceled by the force from tension, the net tension force must be the same for any rope segment of the same length. That means the radius of the osculating circle is the same everywhere, so the rope must be a circle. Note: I'm not sure what the general attitude is here towards these sorts of physics-based intuitive answers; this is my first answer on math.SE. Please let me know if this sort of reasoning is not what this site looks for. - 1 It's cool to see. Physicists have intuition and we need that. 
– Patrick Da Silva Jul 6 '11 at 22:59 As Qiaochu Yuan pointed out, this is a consequence of the isoperimetric inequality that relates the length $L$ and the area $A$ for any closed curve $C$: $$4\pi A \leq L^2 \ .$$ Taking a circumference of radius $r$ such that $2\pi r = L$, you obtain $$A \leq \frac{L^2}{4\pi} = \frac{4 \pi^2 r^2}{4\pi} = \pi r^2 \ .$$ That is, the area $A$ enclosed by the curve $C$ is smaller than the area enclosed by the circumference of the same length. As for the proof of the isoperimetric inequality, here is the one I've learnt as undergraduate, which is elementary and beautiful, I think. Go round your curve $C$ counterclockwise. For a plane vector field $(P,Q)$, Green's theorem says $$\oint_{\partial D}(Pdx + Qdy) = \int_D \left( \frac{\partial Q}{\partial x} - \frac{\partial P}{\partial y}\right) dxdy\ .$$ Apply it for the vector field $(P,Q) = (-y,x)$ and when $D$ is the region enclosed by your curve $C = \partial D$. You obtain $$A = \frac{1}{2} \oint_{\partial D} (-ydx + xdy) \ .$$ Now, parametrize $C= \partial D$ with arc length: $$\gamma : [0,L] \longrightarrow \mathbb{R}^2 \ ,\qquad \gamma (s) = (x(s), y(s)) \ .$$ Taking into account that $$0= xy \vert_0^L = \int_0^L x'yds + \int_0^L xy'ds \ ,$$ we get $$A = \int_0^L xy'ds = -\int_0^L x'yds \ .$$ So enough for now with our curve $C$. Let's look for a nice circumference to compare with! First of all, $[0,L]$ being compact, the function $x: [0,L] \longrightarrow \mathbb{R}$ will have a global maximum and a global minimum. Changing the origin of our parametrization if necessary, me may assume the minimum is attained at $s=0$. Let the maximum be attained at $s=s_0 \in [0,L]$. Let $q = \gamma (0)$ and $p = \gamma (s_0)$. (If there are more than one minimum and more than one maximum, we choose one of each: the ones you prefer.) Since $x'(0) = x'(s_0) = 0$, we have vertical tangent lines at both points $p,q$ of our curve $C$. Draw a circumference between these parallel lines, tangent to both of them (a little far away of $C$ to avoid making a mess). So the radius of this circumference will be $r = \frac{\| pq \|}{2}$. Let's take the origin of coordinates at the center of this circumference. We parametrize it with the same $s$, the arc length of $C$: $$\sigma (s) = (\overline{x}(s), \overline{y}(s)) \ , \quad s \in [0, L] \ .$$ Of course, $\overline{x}(s)^2 + \overline{y}(s)^2 = r^2$ for all $s$. If we choose $\overline{x}(s) = x(s)$, this forces us to take $\overline{y}(s) = \pm \sqrt{r^2 - \overline{x}(s)^2}$. In order that $\sigma (s)$ goes round all over our circumference counterclockwise too, we choose the minus sign if $0\leq s \leq s_0$ and the plus sign if $s_0 \leq s \leq L$. We are almost done, just a few computations left. Let $\overline{A}$ denote the area enclosed by our circumference. So, we have $$A = \int_0^L xy'ds = \int_0^L \overline{x}y'ds \qquad \text{and} \qquad \overline{A}= \pi r^2 = -\int_0^L\overline{y}\overline{x}'ds = -\int_0^L\overline{y} x'ds \ .$$ Hence, $$\begin{align} A + \pi r^2 &= A + \overline{A} = \int_0^L (\overline{x}y' - \overline{y}x')ds \\\ &\leq \int_0^L \vert \overline{x}y' - \overline{y}x'\vert ds \\\ &= \int_0^L \vert (\overline{x}, \overline{y})\cdot (y', -x')\vert ds \\\ &\leq \int_0^L \sqrt{\overline{x}^2 + \overline{y}^2} \cdot \sqrt{(y')^2+ (-x')^2}ds \\\ &= \int_0^L rds = rL \ . \end{align}$$ The last inequality is Cauchy-Schwarz's one and the last but one equality is due to the fact that $s$ is the arc-length of $C$. 
Summing up: $$A + \pi r^2 \leq rL \ .$$ Now, since the geometric mean is always at most the arithmetic one, $$\sqrt{A\pi r^2} \leq \frac{A + \pi r^2}{2} \leq \frac{rL}{2} \ .$$ Thus $$A \pi r^2 \leq \frac{r^2L^2}{4} \qquad \Longrightarrow \qquad 4\pi A \leq L^2 \ .$$

-

Assuming that such a closed curve exists, it is fairly straightforward to show that it must be a circle. For example, check out this page for a proof. It is interesting that the existence hypothesis is the hard part of the proof.

-

I have a slight problem with that proof. Consider the first step, where they argue that if the curve was concave anywhere, you could reflect that section to get a convex curve of the same arc length enclosing more area. But what if another part of the curve comes around and is in the way, so that reflecting the concave section produces a curve that is self-intersecting? (I've seen a similar case overlooked in many presentations of proofs of Pick's theorem, at the step where they are proving existence of a diagonal.) – tzs Oct 12 '11 at 7:34

As for the second question, the result is quite intuitive. First of all one can readily see that we can suppose WLOG that the curve is convex, so already it cannot be too far from being circle-like. Second, it's not hard to see that stretching the curve out so as to make it non-uniform causes it to enclose less area; e.g. consider the analogous problem for rectangles (or even try it for $n$-gons). The intuition here is that a kink in the boundary of a region does not create much extra area but does create extra arc length. Of course the hard part is to justify these intuitions. Probably the proof that comes closest proceeds via Steiner symmetrization; there is a link at the Wikipedia article. There is also a neat Fourier-analytic proof.

-

First you can propose this problem as: the area $A$ enclosed by any simple closed rectifiable curve $C$ of length $L$ satisfies the inequality $A\leq \frac{L^2}{4\pi}$, and equality occurs if and only if $C$ is a circle. The only proof I have done for this was using Parseval's identity (and therefore Fourier series), so it's not elementary (but it's rather simple if you know the aforementioned identity). Though if you want, I can post that proof.

-

Here is the Fourier series proof, due to Hurwitz. Let $f:[0,2\pi]\to\mathbb C$ be a $C^1$ curve parametrized by the arc length. Denote again by $f$ the corresponding $2\pi$-periodic function, and let $c_n$ be its $n$-th Fourier coefficient. By Stokes's and Parseval's Formulas, the signed area enclosed is $$\frac{1}{2i}\ \int_0^{2\pi}\ f'(t)\ \overline{f(t)}\ dt =\frac{\pi}{i}\ \sum_{n\in\mathbb Z}\ in\ |c_n|^2 =\pi\ \sum_{n\in\mathbb Z}\ n\ |c_n|^2 \le\pi\ \sum_{n\in\mathbb Z}\ n^2\ |c_n|^2=\pi,$$ with equality if and only if $c_n=0$ for $n\not=0,1$, and $|c_1|=1$.

-
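For a numerical sanity check of $4\pi A \le L^2$, one can reuse the area formula $A = \tfrac12\oint(x\,dy - y\,dx)$ from the Green's theorem proof above on a family of ellipses; only the circle ($a=b$) comes close to equality. The parametrization and step count are arbitrary.

```python
from math import cos, sin, sqrt, pi

def ellipse_length_area(a, b, steps=100000):
    """Approximate perimeter L and enclosed area A of x = a cos t, y = b sin t."""
    dt = 2 * pi / steps
    L = A = 0.0
    for i in range(steps):
        t = i * dt
        dx, dy = -a * sin(t) * dt, b * cos(t) * dt
        L += sqrt(dx * dx + dy * dy)
        A += 0.5 * (a * cos(t) * dy - b * sin(t) * dx)   # (1/2)(x dy - y dx)
    return L, A

for a, b in [(1, 1), (2, 1), (3, 1), (5, 1)]:
    L, A = ellipse_length_area(a, b)
    print(f"a={a}, b={b}:  4*pi*A = {4*pi*A:.4f}  <=  L^2 = {L*L:.4f}")
```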
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 58, "mathjax_display_tex": 14, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9388359189033508, "perplexity_flag": "head"}
http://mathhelpforum.com/advanced-algebra/150467-p-adic-number-field.html
# Thread:

1. ## P-adic number field

I've been investigating the p-adic field, and I was wondering how the p-adics form a field. The part I'm stuck at is that, from what I've learned, to qualify as a field every element except the additive identity should have a multiplicative inverse. But from what I've read, in the p-adic field any p-adic integer that ends in zero doesn't have a multiplicative inverse. So I snooped around a bit and found another definition for a field, which said that every non-zero element has a multiplicative inverse. I'm assuming this means that the p-adic numbers that end in zero would be considered as zero elements. What exactly are zero elements (or non-zero elements), and how does this apply to the p-adic numbers?

2. Non-zero p-adic integers that end in zero don't have an inverse in $\mathbb{Z}_p,$ the ring of p-adic integers, but they do have an inverse in $\mathbb{Q}_p,$ the field of p-adic numbers. That is easy to see: a non-zero p-adic integer which ends in zero is of the form $x=\sum_{i=k}^{\infty}x_i p^i,$ for some $k \geq 1$ with $x_k \neq 0.$ Then, since $p$ is prime and $1 \leq x_k \leq p-1,$ there exists some $1 \leq y \leq p-1$ such that $x_ky \equiv 1 \mod p.$ It is now easy to see that $x^{-1}=yp^{-k} + \cdots \in \mathbb{Q}_p.$

3. Ah! I think I get it, hehehe, thanks! Less formally, are you saying that basically you shift the digits so that it doesn't end in zero, get the inverse of that, and shift the digits to the appropriate place?
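The "shift, invert, shift back" reading can be checked with finite approximations: strip off the factor $p^k$, invert the remaining unit modulo $p^N$ (that gives the first $N$ base-$p$ digits of the inverse), and remember that the true inverse carries a factor $p^{-k}$. A small sketch, with $p$, the element and the precision chosen arbitrarily (`pow(u, -1, m)` needs Python 3.8+):

```python
p, N = 5, 10                  # 5-adic numbers, working to 10 digits of precision

x = 2 * p**3                  # a p-adic integer "ending in zero": x = 2 * 5^3

# Shift: strip the power of p.
k, u = 0, x
while u % p == 0:
    u //= p
    k += 1

# Invert the unit part modulo p^N (its first N base-p digits).
u_inv = pow(u, -1, p**N)      # requires Python 3.8+

# Shift back: the p-adic inverse of x is u_inv * p^(-k).  Check the claim:
assert (x * u_inv) % p**(N + k) == p**k   # i.e. x * (u_inv * p^-k) = 1 to the working precision
print(k, u_inv)
```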
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 10, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9717391729354858, "perplexity_flag": "head"}
http://mathoverflow.net/questions/72834/what-exactly-does-the-weight-filtration-in-hodge-theory-have-to-do-with-the-weil/72840
## What exactly does the weight filtration in Hodge theory have to do with the Weil conjectures? ### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points) Let $X$ be a variety over $\mathbb{C}$, say separated. According to Deligne's results, there is a "mixed Hodge structure" on the total cohomology $H^\bullet(X(\mathbb{C}), \mathbb{Z})$. One component of this is a "weight filtration" on $H^\bullet(X(\mathbb{C}), \mathbb{Q})$. I haven't read Deligne's "Theorie de Hodge" and don't really understand all this, but I believe that in the case of a smooth projective variety, this reduces to usual Hodge theory and the weight filtration is the filtration by grading, and the extension to singular varieties comes by some sort of simplicial resolution by smooth objects. Let $Y_0$ be a variety over a finite field $\kappa$. Given a mixed perverse sheaf $K_0$ on $Y_0$, there is a canonical (and functorial) weight filtration on $K_0$, such that the sucessive subquotients are pure complexes of increasing weight (in the sense of Weil II). What do these to have to do with each other? In section 6 of BBD (asterisque 100), it seems that the authors are using the functoriality of the weight filtration over finite fields to deduce results about the weight filtration over $\mathbb{C}$. Namely, I'd be interested if, given a perverse sheaf $K$ (say, of geometric origin) on a smooth, proper scheme $X$ over $\mathbb{C}$ which can be "spread out" to perverse sheaves of "reduction of $X$ mod a prime*" there is some way in which the weight filtration on the cohomology of $K$ (actually, I'm not sure that this exists, it seems to in the constant case at least) can be viewed as a completion of the weight filtrations in finite characteristic. Here is the specific result in BBD: Let $f: X \to Y$ be a separated morphism of schemes of finite type over $\mathbb{C}$. Suppose that the stalks of $R^n f_* \mathbb{Q}$ are $H^n(X_y, \mathbb{Q})$, and that these form a local system. Then the weight filtration on these stalks form a locally constant filtration of the local system $R^n f_* \mathbb{Q}$. This appears to be proved by reducing mod a prime, where one has a Frobenius and the perverse weight filtration makes sense. (One reason to think these might be related is that if $X_0$ is a proper smooth scheme over $\mathbb{\kappa}$, then the cohomologies $H^i(X, \mathbb{Q}_l)$ have weight $i$ by the Weil conjectures, and this has some correspondence with how the weight filtration was defined for projective, smooth schemes over $\mathbb{C}$.) *Which is done by reducing the field $\mathbb{C}$ of definition to some finitely generated ring over $\mathbb{Z}$, and then working from there. - 3 There is a theory of "Mixed Hodge Modules", due to Morihiko Saito, which is the precise Hodge theoretic analogue of the theory of mixed perverse shaves of BBD. It was, I think, motivated by BBD and gives alternative proofs of all the results of BBD for varieties over $\mathbb{C}$. For a survey, see: "Saito, Morihiko. Introduction to mixed Hodge modules. Actes du Colloque de Théorie de Hodge (Luminy, 1987). Astérisque No. 179-180 (1989), 10, 145–162". – ulrich Aug 13 2011 at 15:27 1 It follows from Saito's results that the cohomology of any perverse sheaf of geometric origin on a variety over $\mathbb{C}$ has a natural mixed Hodge structure, in particular, a weight filtration. 
It should be the same, after extension of scalars, as the one gotten by spreading out (as you described) but I don't know a reference where this is proved. – ulrich Aug 13 2011 at 15:34 Thanks. I'll take a look at Saito's theory. – Akhil Mathew Aug 14 2011 at 19:09 1 Dear Akhil, One thing to bear in mind with Saito's theory is that it is very subtle, and my experience is that it is hard to figure out what is really going on. (The definition of what it means to be a mixed Hodge module is very indirect, relying on an analysis of iterated vanishing cycle constructions.) I am saying this just so that you don't have your expectations raised too high. I don't mean that you shouldn't look at it --- it is quite fundamental --- but just don't expect it to be easy to understand, or to provide immediate clarification. Best wishes, Matt – Emerton Aug 14 2011 at 22:31 Dear Matt, thanks for the heads up! I'll keep that in mind when I do get around to learning the theory (and it seems I should first learn about vanishing cycles). – Akhil Mathew Aug 14 2011 at 22:56 ## 2 Answers Dear Akhil, This is a big topic, although one that has been discussed at various times here, e.g. http://mathoverflow.net/questions/31223/in-what-setting-does-one-usually-define-mixed-sheaves-and-weights-for-them/31239#31239 The idea is that for constant coeffients smooth projective varieties should be cohomologically the simplest. And by approximating more general varieties by these, via simplicial techniques etc., we get a weight filtration on cohomology which measures the deviation from the simplest case. How to make this precise? Well 1. In positive characteristic, we can say that smooth projective varieties are one on which Frobenius acts with expected bounds eigenvalues. So the weight filtration is defined via eigenspaces of these. 2. Over $\mathbb{C}$, smooth projective varietes carry classical Hodge decompositions. The weight filtration needs to be (nontrivially) inserted into this picture via mixed Hodge theory. The compatibility of the weights comes either by construction* or via the (somewhat conjectural) story of mixed motives. For perverse coefficients, the story is already much more complex. The "simplest" cases should be intersection cohomology complexes with coefficients in direct images of families of smooth projective varieties. The analogue of (1) is BBD, and of (2) is Saito's theory that Uhlirch mentions. *(Added) Perhaps I can say what I mean "compatible by construction". I'll take two examples, which give a sense of what's going behind the scenes. A) take $X$ to be the complement of two points $p,q$ in smooth projective curve $\bar X$. Then have an exact sequence $$0\to W_1= H_1(\bar X, \mathbb{Q}) \to W_2=H^1(X, \mathbb{Q})\to \mathbb{Q}(-1)\to 0$$ The last map can be thought of as sort of residue at $p$. The symbol $\mathbb{Q}(-1)$ means the one dim vector space shifted into weight $2$, so this sequence also displays the weight filtration, There is an entirely analogous sequence in the $\ell$-adic world which gives the weights there. So these are compatible (pretty much by design). B) For the second example, let us use $\bar X$ as above but with coefficients in the intersection cohomology $L=j_\ast R^i f_\ast\mathbb{Q}$, where $f:Y\to X$ is smooth projective. Then $L$ carries variation of Hodge structure of weight $i$. By Zucker [Ann. Math 1979] $H^1(\bar X, L)$ has a pure Hodge structure of weight $1+i$. In the $\ell$-adic world, the analgous statement is Deligne's purity theorem [Weil II]. 
Note that Zucker's theorem was one of the key analytic inputs in Saito's work, analogous to the role of Weil II in BBD.

Some References: Matt is correct that Saito's work isn't easy to get into. Aside from some expositions by Saito, I might suggest looking at the last few chapters of Peters and Steenbrink's book on mixed Hodge theory, which gives a pretty good introduction. I'm also linking my own, not quite successful, attempt to go through some of this: http://www.math.purdue.edu/~dvb/preprints/tifr.pdf

- Thanks for this answer! I'll have a look at Saito's theory of mixed Hodge modules (I'm still very far from understanding anything about motives). – Akhil Mathew Aug 14 2011 at 3:10 Thanks to results by myself:) and David Hebert, those ingredients of the theory of mixed motives that are needed in order to generalize the long exact sequence above (to cohomology of arbitrary Voevodsky motives, either over a field or over any excellent base scheme) are no longer conjectural. One defines a (Chow) weight structure for Voevodsky's motives; it yields certain weight spectral sequences that generalize classical ones. See arxiv.org/abs/0704.4003 and arxiv.org/abs/1007.0219 – Mikhail Bondarko Aug 15 2011 at 13:51 Mikhail, thanks for the information. – Donu Arapura Aug 15 2011 at 14:21

See Deligne's ICM 1970 address (Theorie de Hodge I) as well as his ICM 1974 address. -
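A side note for readers who want the precise sense of "weight $i$" invoked above (this remark is an editorial addition, not part of the thread): by Deligne's proof of the Weil conjectures, for $X_0$ smooth and proper over a finite field $\mathbb{F}_q$, every eigenvalue $\alpha$ of the geometric Frobenius acting on $H^i_{\mathrm{ét}}(\bar X, \mathbb{Q}_\ell)$ is an algebraic number satisfying $$|\iota(\alpha)| = q^{i/2} \quad\text{for every embedding } \iota\colon \overline{\mathbb{Q}}\to\mathbb{C},$$ and this normalization is what "the cohomologies have weight $i$" refers to in the question and in the discussion of example (1).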
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 45, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9292192459106445, "perplexity_flag": "head"}
http://www.physicsforums.com/showthread.php?t=39449
Physics Forums

## The probability distribution function of...

I've been practicing how to get the probability distribution/density functions of certain random variables by solving some questions in my book. I came across this particular problem, and though it seems easy, the answer does not comply with what I got (or simply I got the wrong answer).

Urn I and Urn II each have two red chips and two white chips. Two chips are drawn from each urn without replacement. Let $$X_1$$ be the number of red chips taken from Urn I, $$X_2$$ be the number of red chips taken from Urn II. Find $$p_{X_3}(k)$$ where $$X_3 = X_1 + X_2$$.

I got the answer when $$X_3 = 0$$, so I thought I'd go with the case where $$X_3 = 1$$, and this can happen if either $$X_1 = 1$$ and $$X_2 = 0$$ or vice versa:

$$P(X_3 = 1) = \frac{\binom{2}{1}\binom{2}{1}}{\binom{4}{2}} \cdot \frac{\binom{2}{0}\binom{2}{2}}{\binom{4}{2}} \cdot 2$$

since you can have $$\binom{2}{1}$$ ways of getting 1 red chip and $$\binom{2}{1}$$ ways of getting the white chip out of $$\binom{4}{2}$$ ways of getting 2 chips from a set of 4 chips from Urn I, and for Urn II there are 2 choose 0 ways of getting 0 red and 2 choose 2 ways of getting 2 white chips, so you multiply their probabilities, then multiply by two since the cases of Urn I and Urn II can interchange.

I got a probability of 2/9, but when I referred to the answer in the appendix of the book, the answer should be 2/90. Am I missing something, did I misinterpret the question, or is my computation wrong?

Recognitions: Gold Member

There are going to be 6 ways for the first urn and 6 for the second urn, so that gives us 36 choices. Thus probability 2/90 is impossible. To start from the beginning: If we draw a red on the first draw, the chance is 2/4; to draw a red a second time is now 1/3, thus two reds are 1/6. Similarly for two whites from the same urn. This leaves a 2/3 chance that we will draw both a red and a white from the same urn. To check the work we see that all cases must add to 1.

0 red = 1/6 from urn one, 1/6 from urn 2 = 1/36
1 red = 2/3 from one, 1/6 from the second or vice versa: 4/18 = 2/9.
2 red = 2/3 x 2/3 = 4/9.
3 red, same as 1 red = 2/9.
4 red, same as 0 red = 1/36.

Thus checking our work we have a total of 1/36 + 2/9 + 4/9 + 2/9 + 1/36 = 17/18. WHAT WENT WRONG? Well, there is another way you can draw two red, that is none from the first urn and two from the second, or vice versa; giving 2/36 to add to the case of 2 reds.

I knew it! 2/9 was the correct answer. And those were the same answers I got from solving that problem. But for some odd reason, the appendix of the book gave these answers:

0 red = 1/6 from urn one, 1/6 from urn 2 = 1/36
1 red = 2/3 from one, 1/6 from the second or vice versa: 4/18 = 2/90.
2 red = 2/3 x 2/3 = 1/20.
3 red, same as 1 red = 2/90.
4 red, same as 0 red = 1/36.

As you can see, the book added an extra zero to the probabilities from 1 red to 3 red. I can't believe it.
The book made a typo error ^_^;; thanx for the clarification on the cases btw ^_^
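As an editorial sanity check on the arithmetic above (this code is not part of the original thread; the urn counts — 2 red, 2 white, 2 draws per urn — come from the problem statement), the full distribution of $$X_3$$ can be computed exactly with a few lines of Python: the per-urn probabilities are hypergeometric, and the two urns are independent.

````
from fractions import Fraction
from math import comb

# P(X = r) for one urn: r red chips in 2 draws without replacement
# from an urn containing 2 red and 2 white chips (hypergeometric).
def p_urn(r):
    return Fraction(comb(2, r) * comb(2, 2 - r), comb(4, 2))

# X3 = X1 + X2 with the two urns independent: convolve the distributions.
p_x3 = {k: Fraction(0) for k in range(5)}
for r1 in range(3):
    for r2 in range(3):
        p_x3[r1 + r2] += p_urn(r1) * p_urn(r2)

for k, p in p_x3.items():
    print(k, p)            # 0: 1/36, 1: 2/9, 2: 1/2, 3: 2/9, 4: 1/36
print(sum(p_x3.values()))  # 1
````

This reproduces the corrected values worked out in the thread, and in particular $$P(X_3 = 1) = 2/9$$, not 2/90.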
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 11, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9151015281677246, "perplexity_flag": "middle"}
http://mathematica.stackexchange.com/questions/3797/how-to-evaluate-the-0-0-type-limit-in-mathematica/3800
# How to evaluate the 0/0 type limit in Mathematica? When I use `Limit` to evaluate the $k \to 0$ limit of ````((k + 2) (α^2 - Sqrt[α^4 + k]) + k)/(α^2 - Sqrt[α^4 + k] + 2 k) ```` If `α` and `k` are assumed to be real, `Limit` gives the answer: `2`. However, I believe the correct answer is ````(2 (-1 + α^2))/(-1 + 4 α^2) ```` So, when should I trust the answer of `Limit`? - Indeed the correct limit for real $\alpha$ is the one you said. – acl Apr 1 '12 at 15:08 @acl he's made the assumption that $\sqrt{\alpha^4}=\alpha^2$ which ignores the existence of the negative root. This jumps the branch cut. – rcollyer Apr 1 '12 at 15:49 4 This is a long standing weakness in Limit for cases where it cannot discern parametrized branchcut behavior. It is, coincidently, under current investigation. But the outlook does not look terribly promising at the moment. – Daniel Lichtblau Apr 2 '12 at 16:39 ## 3 Answers Edit The answer is "ambiguous" because you have two parameters, $\alpha$ and $k$, and in this case the limit depends on the value of $\alpha$. What you can try is the following: ````f[k_, α_] := ((k + 2) (α^2 - Sqrt[α^4 + k]) + k)/(α^2 - Sqrt[α^4 + k] + 2 k) Simplify[Limit[f[k, α^(1/4)], k -> 0] /. α -> α^4, α ∈ Reals] ```` $\frac{2 \left(\alpha ^2-1\right)}{4 \alpha ^2-1}$ What I did is to remove all powers of $\alpha$ from under the square roots, so that the $k\to 0$ limit makes them look like $\sqrt{k+\alpha}\to \sqrt{\alpha}$ which manifestly cancels with the already present $\sqrt{\alpha}$ terms. At the end, I replace $\alpha$ by $\alpha^4$ to return to the original definition. What follows below are the steps that led me to finally settle on the above approach. The upshot is that we have to avoid handing Mathematica expressions such as $\sqrt{\alpha^4}-\alpha^2 = 0$ because it doesn't simplify them at an early enough stage in the evaluation, even when the domain is real. Initial answer First assume simply that $\alpha$ is real: ````Assuming[α ∈ Reals, Limit[f[k, α], k -> 0]] ```` `2` Now say explicitly that $\alpha>0$: ````Assuming[α > 0, Limit[f[k, α], k -> 0]] ```` $\frac{2 \left(\alpha ^2-1\right)}{4 \alpha ^2-1}$ The behavior can be illustrated with a contour plot of `f[k, α]` around `{0,0}`: ````ContourPlot[f[k, α], {k, 0, .1}, {α, -.3, .3}, FrameLabel -> {"k", "α"}, PlotRange -> {1, 5}, ContourLabels -> True] ```` In the plane of $k, \alpha$, Mathematica acts as if it preferred to choose the "easiest" approach to the $k=0$ axis by taking $\alpha = 0$ in the first case. But what it should have done is to return the more general result, or a `ConditionalExpression` (not necessary in this particular case because the general result goes smoothly to 2 for $\alpha\to0$). The first result is a bug, I would say: Since $\alpha$ is a constant while the limit is taken, setting it to zero when it's allowed to be any real number is just too restrictive. This preliminary conclusion that it's a bug is strengthened below where I try to understand why it may be happening, and whether the function `f[k, α]` can be made to look less pathological before doing the limit. Work-around From the comments in the other answers, it is clear that Mathematica doesn't use the assumption of real variables at a sufficiently early stage in the calculation. You can even see that without taking any limit: ````f[0, α] ```` `2` The reason for this result is that it can't see the simplification $\sqrt{\alpha^4}-\alpha^2 = 0$ which is always true for real $\alpha$. It knows this fact, but isn't using it. 
To check this, we can do ````Refine[0 == (Sqrt[α^4] - α^2), α ∈ Reals] ```` `True` If this guess about the bug is right, then it's basically a problem of the order of two non-commuting limits. In addition to doing $k\to0$, the assumption of real $\alpha$ amounts to taking the limit $\Im(\alpha)\to 0$ (zero imaginary part). If you do the latter limit last, it gives $2$, but we are interested in the opposite order of limits. To avoid this problem, one can eliminate the variable $k$ in terms of a new variable that gets rid of the square root: First transform to variables $x=\sqrt{k}$ and $y=\alpha^2$ to end up with at most squares: ````Clear[g]; g[x, y] = Simplify[f[x^2, Sqrt[y]]] ```` $\frac{\left(x^2+2\right)\left(y-\sqrt{x^2+y^2}\right)+x^2}{-\sqrt{x^2+y^2}+2x^2+y}$ Next, define a third new variable $z\equiv y-\sqrt{x^2+y^2}$ that incorporates the unwanted square root, ````newg = g/.First@Solve[Eliminate[{g == g[x, y], z == -Sqrt[x^2 + y^2] + y}, x],g] ```` $\frac{2 y z+2 y-z^2-z-2}{4 y-2 z-1}$ Finally, observe that the limit $k\to0$ is the same as the limit $z\to0$ provided that we make the assumption $\lim_{x\to 0}(y-\sqrt{x^2+y^2})=0$ (which is true because $y\ge0$). Therefore, we can take the desired limit by setting ````newg /. z -> 0 ```` $\frac{2 y-2}{4 y-1}$ This is the correct limit if we reinstate $y=\alpha^2$. Note that I didn't have to use `Limit` at all because the function is well-behaved in this simplified form. - You already had my +1, but the most recent edit is very clever. It works even if you remove the `α ∈ Reals` condition on `Simplify`. – rcollyer Apr 2 '12 at 15:38 @rcollyer Thanks, it's OK to remove it, but I'll leave the condition in because it gets me the form I ultimately wanted. – Jens Apr 2 '12 at 15:55 Never trust anything or anyone with limits, indeed. Let's look at some plots for a variety of values of $\alpha$: ````Plot[Table[((k + 2) (α^2 - Sqrt[α^4 + k]) + k)/(α^2 - Sqrt[α^4 + k] + 2 k), {α, -10, 10, 1.5}] // Evaluate, {k, -2, 2}] ```` Applying L'Hospital's rule we get: ````(k*(α^2 - Sqrt[α^4 + k]) + k)/(α^2 - Sqrt[α^4 + k] + 2*k) // (D[Numerator[#1], k]/D[Denominator[#1], k])& // Simplify ```` $\frac{2 \left(\alpha ^2-1\right)}{4 \alpha ^2-1}$ Indeed, calculating the limit for various of $\alpha$ and comparing it to the above result we get: ````lim = ListPlot[ Limit[ Table[{α, ((k + 2) (α^2 - Sqrt[α^4 + k]) + k)/(α^2 - Sqrt[α^4 + k] + 2 k)}, {α, .001, 3, .1} ], k -> 0 ], PlotRange -> {-3, 3} ]; Show[ lim, Plot[(2 (-1 + α^2))/(-1 + 4 α^2), {α, 0, 3}, PlotRange -> {-3, 3} ] ] ```` With assumptions about $a$: ````Assuming[α ∈ Reals, Limit[((k + 2) (α^2 - Sqrt[α^4 + k]) + k)/(α^2 - Sqrt[α^4 + k] + 2 k) // Simplify, k -> 0]] (* ==> 2 *) Assuming[α > 0, Limit[((k + 2) (α^2 - Sqrt[α^4 + k]) + k)/(α^2 - Sqrt[α^4 + k] + 2 k) // Simplify, k -> 0]] (* ==> (2 (-1 + α^2))/(-1 + 4 α^2) *) ```` Same for `α < 0`. - Not a bug. There's a simplification that I'm about to post that will make it clear(er). – rcollyer Apr 1 '12 at 14:22 Show[Plot[(2 (-1 + [Alpha]^2))/(-1 + 4 [Alpha]^2), {[Alpha], .01, .5}], ListPlot[Limit[ Table[{[Alpha], ((k + 2) ([Alpha]^2 - Sqrt[[Alpha]^4 + k]) + k)/([Alpha]^2 - Sqrt[[Alpha]^4 + k] + 2 k)}, {[Alpha], .01, .5, .01}], k -> 0]]] – belisarius Apr 1 '12 at 14:55 @belisarius try this range: `{\[Alpha], .001, 1, .01}` – Sjoerd C. de Vries Apr 1 '12 at 15:01 @Sjoerd, yep. I did that earlier :) – belisarius Apr 1 '12 at 15:03 1 It's not a bug. It has to do with the fact that `Sqrt[a^4]` is not always equal to `a^2`. 
If $\sqrt{a^4}\neq a^2$ the limit is indeed 2. – Heike Apr 1 '12 at 15:28 show 17 more comments The answer does in fact seem to be `2`. This can be seen by expanding over the $k + 2$ term in the numerator $$\begin{align}\lim_{k \to 0} \frac{ (k+2)(\alpha^2 - \sqrt{\alpha^4 + k}) + k}{\alpha^2 - \sqrt{\alpha^4 + k} + 2 k} = &\lim_{k \to 0} \frac{k (\alpha^2 - \sqrt{\alpha^4 + k})}{\alpha^2 - \sqrt{\alpha^4 + k} + 2 k} \\ &+ \lim_{k \to 0} \frac{2 (\alpha^2 - \sqrt{\alpha^4 + k})}{\alpha^2 - \sqrt{\alpha^4 + k} + 2 k} \\ &+ \lim_{k \to 0}\frac{k}{\alpha^2 - \sqrt{\alpha^4 + k} + 2 k} \\ =&\lim_{k \to 0} \frac{k (\alpha^2 - \sqrt{\alpha^4 + k})}{\alpha^2 - \sqrt{\alpha^4 + k} + 2 k} + 2 +0. \end{align}$$ A cursory inspection reveals that the first term must go to zero as $k \to 0$ like the third term did. This is in contrast to what you get, though, when you break the original fraction into separate terms: ````frac = ((k + 2) (α^2 - Sqrt[α^4 + k]) + k)/(α^2 - Sqrt[α^4 + k] + 2 k) // Apart ```` $\frac{1}{4} \left(2 \alpha ^2+1\right)-\frac{\sqrt{\alpha ^4+k}}{2}+\frac{2 \alpha ^2 \sqrt{\alpha ^4+k}}{4 \alpha ^2+4 k-1}+\frac{-8 \alpha ^4+18 \alpha ^2-7}{4 \left(4 \alpha ^2+4 k-1\right)}-\frac{7 \sqrt{\alpha ^4+k}}{2 \left(4 \alpha ^2+4 k-1\right)}$ with a limit ````lim = Limit[frac, k -> 0] ```` $\frac{3 \sqrt{\alpha ^4}-5 \alpha ^2+2}{1-4 \alpha ^2}.$ The difficulty in interpretation here lies with $\sqrt{\alpha^4}$. If you specify that `α ∈ Reals`, ````Simplify[lim, α ∈ Reals] ```` you get what you found $\frac{2 \left(\alpha ^2-1\right)}{4 \alpha ^2-1}.$ However, this is choosing the positive branch, but $\sqrt{a^2} = \pm a$, as seen by ````lim /. # & /@ (Sqrt[α^4] -> # & /@ {α^2, -α^2}) // Simplify ```` {$\frac{2 \left(\alpha ^2-1\right)}{4 \alpha ^2-1}$, $2$}. The problem here is the assumption as it implies that the positive root is taken, and hence jumps to the other branch. My initial analysis, however, does not make any assumptions about which branch is used by not expanding the square root. - The OP mentioned $\alpha$ is real... – Sjoerd C. de Vries Apr 1 '12 at 15:41 @SjoerdC.deVries Yes, but the reality of α does not play a role in the initial analysis, and regardless of whether or not α is real, there's still the negative root to contend with. The bug here is that `Sqrt[q^2] == |q|`, not `Sqrt[q^2] == ±|q|` as it is supposed to be, causing the solution to jump branches. – rcollyer Apr 1 '12 at 15:47 In your first formula, if you take the positive root then the third term converges not to 0, but to $\frac{1}{2-1/(2a^2)}$. – celtschk Apr 1 '12 at 16:00 @celtschk to be clear, you're talking about $$\lim_{k \to 0}\frac{k}{\alpha^2 - \sqrt{\alpha^4 + k} + 2 k},$$ correct? If so, then I don't see how it converges to $$\frac{1}{2 - 1/(2 \alpha^2)}.$$ Would you give more details, please? – rcollyer Apr 1 '12 at 16:04 2 @rcollyer There is no problem with branches when the variables are real, because by definition of the square root: Assuming[a [Element] Reals, Simplify[Sqrt[a^4]]] i.e., the square root is always the positive root. – Jens Apr 1 '12 at 16:16 show 6 more comments lang-mma
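An elementary way to recover both answers at once, with no software and no branch-cut bookkeeping (this derivation is an editorial addition, not part of the thread), is to multiply by the conjugate: for real $\alpha$ and $k>0$,

$$\alpha^2-\sqrt{\alpha^4+k}=\frac{-k}{\alpha^2+\sqrt{\alpha^4+k}},$$

so after cancelling a common factor of $k$ from numerator and denominator the original expression equals

$$\frac{1-\dfrac{k+2}{\alpha^2+\sqrt{\alpha^4+k}}}{2-\dfrac{1}{\alpha^2+\sqrt{\alpha^4+k}}}.$$

If $\alpha\neq0$ then $\alpha^2+\sqrt{\alpha^4+k}\to2\alpha^2$ as $k\to0$, giving $\dfrac{2(\alpha^2-1)}{4\alpha^2-1}$; if $\alpha=0$ then $\alpha^2+\sqrt{\alpha^4+k}=\sqrt{k}\to0$ and the same expression tends to $2$. So the two answers discussed above are just the $\alpha\neq0$ and $\alpha=0$ cases of one computation.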
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 56, "mathjax_display_tex": 3, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8766574263572693, "perplexity_flag": "middle"}
http://www.physicsforums.com/showthread.php?t=115005&page=13
Physics Forums Thread Closed Page 13 of 14 « First < 3 10 11 12 13 14 > ## Validity of Relativity Thanks for the link, setAI, I've printed out David Deutsche's paper and will study it carefully. hurkyl, what you're saying sounds good to me. I think. I'll get back here when I've read that paper tonight. Quote by Hurkyl So does Newtonian mechanics. When I see things in motion, they tend to come to rest. But Newtonian mechanics says I'm deluded and things in motion tend to stay in motion, and that there's some mysterious external force that is causing things to come to rest! Huh? This makes it sound like you are quite clueless about Newtonian physics (which I doubt is the case). Do you really think Newton's first law says "all moving things will keep on moving no matter what"? Anyway, there is a very clear sense in which MWI insists that we are deluded about basic apparent perceptual facts. This is in contrast to every other scientific theory that is or has been widely accepted. If this sense is not clear to you, maybe you should ask about it (or, say, read the clarifying sections of David Albert's "QM and Experience") rather than parodying it. And, of course, Special Relativity. It would have me believe that when someone is walking across my room, I should think they appear thinner! Of course, it conveniently says that the difference should be to small to measure. In other words, what you actually see matches (within the relevant uncertainties) with what the theory says is actually happening. This is in stark contrast to MWI. According to MWI, the real state of the world does *not* have a person walking across your room, so your perception to the contrary is a *delusion*. Maxwellian electrodynamics said some fairly wacky things about the universe. Please. When I say that MWI requires us to accept that our perceptual experience is delusional, I'm not just saying "MWI is fairly wacky". I use language carefully and precisely. MWI is fairly wacky, yes, but that is not the point at issue here. That's what's going on here: the quantum theory suggests quantum weirdness. Some people like to believe in some strange physical mechanism that allows them to incorporate the successes of quantum mechanics into their beloved classical notion of the universe. Others adjust their notion of the universe appropriately. MWI is one way to explain why the universe appears the way we thought it was. The whole point I am making is that your last sentence, taken literally, is quite false. "The way we thought it was" surely includes things like the needles on experimental apparati in Germany in the 1920's swinging in particular directions, yes? Well, according to MWI, that (and a gazillion other things like it, including, as I've said, pretty much all of our perceptual experience of the world) never actually happened. In other words, it is a delusion. So is MWI "one way to explain" all the perceptual/empirical evidence that led to quantum mechanics? Literally speaking, no. It doesn't explain that evidence; it explains it away (so to speak). According to it, that evidence was all wrong. You have to admit, that's a very uncomfortable (because circular) position for a theory to be in. There's a big difference: your "abestos-and-uranium-sandwich theory" is not based upon empirically successful physics. MWI is. No, it isn't. At least, not in anything like the normal scientific sense. 
The "stupidity" here is the irrational clinging to some ad-hoc physical mechanism that make one's beloved notion of the universe literally true, and refusing to even entertain the notion that such mechanisms aren't necessary. So, it's stupid to believe that when I see a table in front of me, there's really, in external physical reality, a hunk of table-shaped stuff out there? Or that when I see the needle go right, that's because, really, there is a needle and it moved to the right? I would ask you to seriously consider what is left of science (including in particular the alleged empirical evidence for MWI) if you take this seriously. Recognitions: Gold Member Science Advisor Staff Emeritus No, it isn't. At least, not in anything like the normal scientific sense. You take an empirically successful principle (unitary evolution), and you push it to its logical conclusion -- in the theoretical domain, how can you get more scientific than that? To the best of my knowledge, MWI is a theory about unitary evolution. That's it. Unlike your "curative asbestos-and-uranium sandwiches" (hey, weren't you objecting to parodies? ), MWI doesn't postulate anything new: it simply studies what follows from unitary evolution. And at any point, you could reintroduce wavefunction collapse and be doing orthodox quantum mechanics. (But, you would no longer be doing MWI) I had previously been thinking that you meant "deluded" simply to refer to the fact we think we see a classical state, when the universe is in a quantum state. But I have absolutely no idea where you get things like: : According to MWI, the real state of the world does *not* have a person walking across your room : Well, according to MWI, that ... never actually happened. : According to it, that evidence was all wrong. Recognitions: Gold Member Staff Emeritus Quote by Hurkyl You take an empirically successful principle (unitary evolution), and you push it to its logical conclusion People disagree that it's logical. Unitarity is surely a useful property, but making it the be-all of everything, at the cost of either "parallel worlds" or the possibility that people I see on the street are in a different state as far as their furshlugginer consciousness is concerned, is not best described as "logical" IMHO. And it still doesn't answer the question, how can QM operate, as evidently it does, completely hidden from human consciousness, say inside the Sun? setAI, hurkyl: I read the David Deutsche paper and have to say I didn't understand it. So I read it again, and again, and I still couldn't follow its thrust or see anything that "proved" the MWI. If either of you could post a link to an alternative paper or article I'd be grateful. Recognitions: Gold Member Staff Emeritus Quote by Farsight ... see anything that "proved" the MWI There is nothing that proves MWI, it's something that those who believe in it try to persuade you of. In other words it's like philosophy or religion: "Go on and faith will come to you." Er, no thanks. setAI: can you post another link that demonstrates why MWI sounds likely or plausible? Thanks, setAI. Look, I don't mean to be rude, but I'm afraid they come over as "magicked out of the hat" leaping logic lubricated by bigword babble and psuedo-mathematics. If I missed a trick somewhere, apologies. But I am mathematically literate and I am smart. And I am in no way convinced of MWI by these links. I remain deeply interested QM matters. 
Such as the "Quantum Eraser": http://www.bottomlayer.com/bottom/ki...scully-web.htm http://en.wikipedia.org/wiki/Delayed...quantum_eraser "In terms of the conventional way of viewing the physical universe, this result seems disturbing. One possible explanation is that the causality of the second observation travels back through time to affect the outcome of the first observation. In other words, this is time travel. Oddly enough, quantum mechanics does not seem to have much of a problem with time travel. Similarly bizarre results have been shown in other experiments where we have spooky action at a distance..." Unless somebody can tell me something better about MWI, I fancy the time travel. Recognitions: Gold Member Science Advisor Staff Emeritus The only thing I'm really convinced of is that it is not necessary to assume that wavefunction collapse is a physical process. Entangled states, quantum erasers, counterfactual computation... IMO none of that seems weird at all, unless you're working in a mindset that collapse happens as a physical process. I first realized this during a (brief) introduction to quantum computing in one of my math courses: we were introduced to the CNOT gate whose action on a pair of qubits in basis states is given by: Code: ```|x> ------*------ |x> | /--+--\ |y> ---|C-NOT|--- |x+y> \-----/``` In particular, if our second qubit is in the |0> state, then this is: Code: ```|x> ------*------ |x> | /--+--\ |0> ---|C-NOT|--- |x> \-----/``` and the whole thing acts as if we had actually measured the first qubit and stored the result in the second qubit... except that a collapse didn't happen. e.g. on a superposition of |0> and |1>, we'd get have: (a|0> + b|1>) |0> on the left hand side, and: a |0>|0> + b |1>|1> on the right hand side. If you try to imagine behaviors that measurements have... such as consistency, you'll find that these CNOT gates have that property. e.g. Code: ```|x> ------*---------*------ |x> | | /--+--\ | |0> ---|C-NOT|------------- |x> \-----/ | /--+--\ |0> -------------|C-NOT|--- |x> \-----/``` if we make two different "measurements" of the firstqubit, the results are the same. Of course, I'm cheating by simply saying they're the same: I really should add another gate to this circuit to "measure" if they are the same... and if we did, we would find that the result of that measurement is always "yes", even if the original input is in a superposition of |0> and |1>. There's something very weird about the interference pattern appears at both detectors, hurkyl. http://www.joot.com/dave/writings/ar...ookiness.shtml Ah, I am but a blind man searching a thunderstorm for the lightning particle. Hurkyl: I made a post on the "Electron Energy" thread and it's disappeared. It was nothing contentious, just a link to something I found when looking up infinite energy. This sort of thing has happened a few times. Is there some kind of priesthood god damn thought-police on this forum expunging any concepts that challenge dogma? And is this Physics, or the Catholic Church circa 1450? Recognitions: Gold Member Staff Emeritus Quote by Farsight Hurkyl: I made a post on the "Electron Energy" thread and it's disappeared. It was nothing contentious, just a link to something I found when looking up infinite energy. This sort of thing has happened a few times. Is there some kind of priesthood god damn thought-police on this forum expunging any concepts that challenge dogma? And is this Physics, or the Catholic Church circa 1450? 
If you read the guidelines, and every new poster has to sign that he or she read them, then you wouldn't have to ask. If you have a theory that challenges current science then it belongs on our independent research forum, and there are strict guidelines for appearing there, mainly to make sure the theories that are present there are serious and not just random garbage. And linking to an obvious crank site (obvious to us even if not to you) is a no-no, and if you keep doing it you will be warned, and if you still don't stop, you'll be banned. Them's the rules at PF, and if you don't like them take your creative imagination elsewhere. All points noted, selfAdjoint. Sorry to interrupt the thread everybody. Quote by ttn There is only one argument showing this, and it is the same argument showing that something *inside* the future light cone of an event can't causally affect the event. The argument is: there is no such thing as backwards-in-time causation. Sorry I haven't replied earlier I don't get much chance to spend time on the forum. The idea that there is backward in time causation seems to have crept into the interpretation of the proposed Bell Local Theory on its own accord. The theory itself does not contain this element. Therefore it should not be used as an argument for refuting the idea that electromagnetism is mediated by zero proper interval paths. If we take two spatially separated quantum systems and place an observer in the vicinity of each then each observer will experience time progressing “normally” from the past to the future. For any given experimental set up and initial conditions the temporal evolution (relative to the subjective time of each observer) of the states quantum systems will be completely deterministic. The resulting state functions will provide the probabilities of measurable outcomes. If we now consider an interaction between spatially separated systems! Let the donor emit energy of excitation at an event E1 and the acceptor receive the energy at an event E2. We know if we calculate the proper interval of separation between these two events then this has zero magnitude. In space-time these events are contiguous (Touching) and according to our proposed Bell local theory can interact directly with each other with out the need of a carrier particle., Thus in space-time events E1 and E2 can be regarded as a single event but appear separated because they are viewed by observers placed at different positions and time in the universe. The ability for the two systems to interact depended on their states immediately before interaction (relative to the subjective times of the observers). These states were dependent on the local temporal evolution of the quantum systems. There was no backwards in time causal influence necessary to trigger the interaction. There is just a single event involving the direct transfer of energy between spatially separated but properly local systems and no backward in time causation. Quote by ttn "Lorentz super-positioning" is a crazy phrase you seem to have made up. I have no idea what it means, and I assume others don't either. Indeed, based on what you seem to think this phrase means, I question whether you know what (normal, quantum-mechanical) super-positioning means -- i.e., whether you know any quantum physics in the first place. 
You are correct I did make up the expression “Lorentz super-positioning", originally I called this idea “proper interval locality”, however I thought the word super-positioning might appeal more to specialist in quantum mechanics. I’m willing to accept it’s a “crazy phrase” and can cause confusion with the super-positioning of quantum states. Thanks for the advice. However I’ll try and give a definition of the concept using the origin name.. Proper Interval Locality occurs when the proper interval of separation between events on the world-lines of quantum systems has zero magnitude. This occurs when, relative to a given inertial reference frame, the square of the temporal component of the proper interval is equal to minus the square of the spatial component of the spatial component of the proper interval. Under conditions of proper interval locality it is proposed that quantum systems can interact directly without requiring an intermediating particle/wave. Using this principle a method of electrodynamics can be developed which is free from the contradictions recognised in current theory. Quote by Hans de Vries You are mixing up two very different thing: 1) Being on the light cone (s=0) $\sqrt{c^2t^2-x^2-y^2-z^2}$ 2) Separation in space time = $\sqrt{c^2t^2+x^2+y^2+z^2}$ Hi Hans Your second expression is Euclidian and is not applicable to a universe characterised by the constancy of the speed of light relative to all inertial frames of reference. The interval between a pair of events in space-time must be calculated (for flat space-time) using the Minkowski Metric. Quote by Hans de Vries Following your reasoning ANY two points in the universe would have a space-time separation of zero! Each pair of space-time points A and B has many points C which are on the light cone of both A and B, that is: AC = 0 and BC = 0 and thus AB = 0+0 = 0. Your reasoning is correct any two events in space-time can be joined by zero interval paths. I suspect your instincts are telling you this is absurd every thing must happen at once!! However it can be the basis of a Bell local theory and leads directly to the development of the wave-function, interference and the violation of Bell’s inequality for light correlation experiments. The idea of universal linkage between all events takes a little getting use to but can eventually lead to a simple elegant and beautiful theory of electromagnetism? I believe it to be worth investing a little intellectual capital to get your mind round it. Cheers UglyDuckling: this sounds interesting. Are you basically saying: a photon travels at the "speed of light" so no time passes for a photon. Therefore it "instantly" connects A and C such that trying to locate it somewhere between A and C means you locate it at B, and it therefore connected A and B? So trying to locate it is like trying to locate a rod somewhere along its length? Is a rod the right analagy? Or should I try to think about a property with no length? Like, is determining the position of a photon as much use as trying to measure the length of a gallon? Or, and utmost apologies, are we straying into "crank" territory here? Anybody? Thread Closed Page 13 of 14 « First < 3 10 11 12 13 14 > Thread Tools | | | | |---------------------------------------------|------------------------------|---------| | Similar Threads for: Validity of Relativity | | | | Thread | Forum | Replies | | | General Physics | 0 | | | Special & General Relativity | 0 | | | Linear & Abstract Algebra | 5 |
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.953517496585846, "perplexity_flag": "middle"}
http://www.physicsforums.com/showthread.php?t=154005
Physics Forums

## Glass with water

1. The problem statement, all variables and given/known data

What is the velocity of the water at the top of a glass of water? Is it really 0? For example, there are a lot of problems which ask with what velocity the water would leave the glass if I make a hole in the bottom. For these, we consider the velocity at the top to be zero. Why is that? Is it approximately zero or really zero? I am thinking now, if it is an incompressible fluid, it must be zero, because if the water is confined to that volume, it cannot have speed. Nevertheless, it is clear that the water molecules are randomly moving. Can you clarify my doubts? Thanks in advance.

Mentor

When there is no hole in the bottom of the glass, the velocity of all of the water is zero. When there is a hole suddenly made in the bottom of the glass that goes all the way across (hole diameter = glass diameter), all the water accelerates together down out of the glass, and the velocity of the top surface is the same as the velocity of the bottom surface (ignoring the wetting effects on the walls). When there is a hole in the bottom that is smaller than the diameter, then the flow rate out the hole will determine how fast the top surface goes down (through volume change calculations).

I had an exercise in my book where I should show that the velocity at the bottom of the glass, where there is a small hole, is $$\sqrt{2gh}$$, where h is the height of the water level. This is only true if I consider the velocity at the surface to be zero when applying Bernoulli's equation. Right?

Which it will nearly be, if the hole is relatively small.
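A compact way to quantify the answers above (an editorial addition, not a post from the thread): apply continuity and Bernoulli between the top surface (area $$A_{top}$$) and the hole (area $$A_{hole}$$), with h the height of water above the hole:

$$A_{top}\,v_{top}=A_{hole}\,v_{hole},\qquad \tfrac{1}{2}v_{top}^2+gh=\tfrac{1}{2}v_{hole}^2 \quad\Longrightarrow\quad v_{hole}=\sqrt{\frac{2gh}{1-(A_{hole}/A_{top})^2}}.$$

So the top surface moves at $$v_{top}=(A_{hole}/A_{top})\,v_{hole}$$, which is small but not exactly zero; taking $$A_{hole}/A_{top}\to0$$ gives the textbook result $$v_{hole}=\sqrt{2gh}$$ with the surface velocity approximately zero.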
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9131795167922974, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/35135/relation-between-statistical-mechanics-and-quantum-field-theory/35136
# Relation between statistical mechanics and quantum field theory

I was talking with a friend of mine, a student of theoretical particle physics, and he told me that lots of his topics have their foundations in statistical mechanics. However, I thought that the modern methods of statistical mechanics, for example the renormalization group or the Parisi-Sourlas theorem, come from the methods of quantum field theory or many-body techniques (Feynman diagrams and so on). I also notice that books on modern topics, such as spin glasses, don't require any knowledge other than basic calculus. Can someone explain what the relation between these subjects is? What topics in field theory (or related areas) should I study to gain a deep understanding of statistical mechanics?

- 2 This is nicely discussed in "An introduction to lattice gauge theory and spin systems - B. Kogut". You can find it for free on some site if you google search. – user10001 Aug 29 '12 at 10:02 The article @dushya points out is in Reviews of Modern Physics, by the way. I second the recommendation. The state-of-the-art has evolved only slightly in the 30+ years since it was written, and if you work through that you'll be on very firm ground. – wsc Aug 29 '12 at 13:17

## 3 Answers

Quantum statistical mechanics is usually worked out within the framework of second quantization, in which a system with a variable number of particles is described as a field theory. Much of statistical mechanics deals with the nonrelativistic case, which is far simpler than relativistic QFT as all renormalizations are finite. Therefore one can see QFT working there without having to understand the cancellation of infinities. The intuition gained from statistical mechanics is then very useful for treating problems in relativistic QFT. This is also the historical way things were worked out.

- To understand statistical mechanics at the level of the book by Reichl, say, you don't need any QFT. Some conformal field theory is needed only if you want to rigorously study critical scaling in 2D theories. Of course, for a deep understanding, one needs both. – Arnold Neumaier Sep 2 '12 at 16:24

Statistical field theory is equivalent to quantum field theory if you perform a Wick rotation in time. Inverse temperature $1/T$ is identified with (imaginary) time. Of course, the metrics are different. In QFT, it is Minkowski while in SFT, it is Euclidean.

- I think it works better the other way around (understand Statistical Mechanics to get a feel for QFTs). This is not an answer "per se", since it would take too much space, but you can find good lectures online: Perimeter Scholars - Quantum Field Theory 2 - Francois David. The first two lectures should be enough for you to get all the parallels. -
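To make the Wick-rotation remark concrete (an editorial addition, with $\hbar=k_B=1$): the canonical partition function of a quantum system is a Euclidean path integral in which imaginary time is compactified on a circle of circumference $\beta=1/T$,

$$Z=\operatorname{Tr}\,e^{-\beta H}=\int_{\phi(0)=\phi(\beta)}\mathcal{D}\phi\;e^{-S_E[\phi]},$$

with periodic boundary conditions for bosonic fields (antiperiodic for fermionic ones). Equilibrium statistical mechanics in $d$ spatial dimensions thus looks formally like Euclidean QFT on $\mathbb{R}^d\times S^1_\beta$, and the zero-temperature limit $\beta\to\infty$ recovers ordinary Euclidean field theory.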
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9493514895439148, "perplexity_flag": "head"}
http://mathoverflow.net/questions/56431?sort=newest
# Hurewicz theorem related to Galois group (or Tannakian categories)?

Is there a proof of the Hurewicz theorem $\pi_1(X)^{ab} = H_1(X, \mathbf Z)$ ($X$ a connected topological space) expressing $\pi_1(X)$ as the "Galois" group of $X$, i.e., the group of deck transformations of the universal cover? (As opposed to a proof using the construction of $\pi_1$ as the group of paths up to homotopy.) Thank you.

- I don't understand your question. It is certainly true that $\pi_1(X)$ is the group of deck-transformations, but what does this have to do with the Hurewicz theorem? – J.C. Ottem Feb 23 2011 at 19:17 1 How do you want to define $H_1(X)$? If you want to use singular homology then it seems unlikely that you can avoid some kind of paths. You could describe $H^1(X)$ as $[X,S^1]$ or using Cech cochains and then consider pairings $\pi_1(X)^{ab}\otimes H^1(X)\to\mathbb{Z}$, but you would lose information about torsion in $H_1(X)$ that way. – Neil Strickland Feb 23 2011 at 19:18 If one defines $H^1(X,\mathbb{Z})$ as the group of $\mathbb{Z}$-torsors over $X$, then one can see that $H^1(X,\mathbb{Z})$ is the group of homomorphisms from $\pi_1(X)^{\mathrm{ab}}$ to $\mathbb{Z}$ by thinking about how deck transformations act on torsors. But that is $H^1$, not $H_1$. – profilesdroxford54 Feb 23 2011 at 20:02 2 I would be tempted to define $H_1$ as the abelianisation of $\pi_1$ in the general Tannakian setting, and use Hurewicz (when it holds) to say that this new $H_1$ and the old one (simplicial homology, say) agree. – David Roberts Feb 23 2011 at 23:31

## 1 Answer

For $X$ a $0$-connected nice space (say, a CW-complex), and for any group $G$, there is a natural bijection of the following shape $$[X,BG]\simeq Hom(\pi_1(X),G)$$ which can be proved roughly as follows (if you like Tannakian-like arguments): maps from $X$ to $BG$ correspond to $G$-torsors over $X$, which correspond to maps of topoi from the topos of sheaves over $X$ to the topos of $G$-sets; but, as any $G$-torsor is locally constant, this also corresponds to the maps of topoi from the topos of locally constant sheaves over $X$ to the topos of $G$-sets. As, by Galois theory, the topos of locally constant sheaves over $X$ is canonically equivalent to the topos of $\pi_1(X)$-sets, we conclude using the fact that, given two groups $A$ and $B$, exact and colimit-preserving functors from $B$-sets to $A$-sets correspond to homomorphisms of groups from $A$ to $B$. To be precise, $[X,BG]$ means the set of homotopy classes of maps from $X$ to $BG$, while for $G$ an abelian group, $Hom(\pi_1(X),G)$ means the set of group homomorphisms (for non-abelian $G$, we have to quotient a little bit, but we won't care here). For $A$ an abelian group, we thus get bijections $$H^1(X,A)\simeq [X,BA]\simeq Hom(\pi_1(X)^{ab},A) .$$ By the Yoneda lemma, to prove that the map $\pi_1(X)^{ab}\to H_1(X,\mathbf{Z})$ is bijective, it is sufficient to prove that, for any abelian group $A$, the map $$<\star> \quad Hom(H_1(X,\mathbf{Z}),A)\to Hom(\pi_1(X)^{ab},A)$$ is bijective.
But, instead of checking this for all $A$'s, it is sufficient to prove this in the case where $A$ is an injective object in the category of abelian groups (because there are enough injectives). In this case, as $Hom(-,A)$ is an exact functor, we have a bijection $$Hom(H_1(X,\mathbf{Z}),A)\simeq H^1(X,A) .$$ Therefore, for any injective $A$, the map $<\star>$ is bijective. If you like topoi and pro-groups, you may play the same game and prove this for locally $0$-connected topoi with essentially the same argument. -
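One step the answer leaves implicit is why checking $<\star>$ only for injective $A$ suffices; here is a short standard argument (an editorial addition, not part of the original answer). Suppose $\varphi\colon M\to N$ is a homomorphism of abelian groups such that $\varphi^*\colon Hom(N,A)\to Hom(M,A)$ is bijective for every injective abelian group $A$. Embed the cokernel $C$ of $\varphi$ into an injective $I$; the composite $N\to C\to I$ pulls back to zero on $M$, hence is zero by injectivity of $\varphi^*$, and since $C\to I$ is an embedding this forces $C=0$, so $\varphi$ is surjective. Next embed the kernel $K$ of $\varphi$ into an injective $I$ and extend to $h\colon M\to I$ using injectivity of $I$; surjectivity of $\varphi^*$ gives $h=g\circ\varphi$ for some $g\colon N\to I$, so $h$ vanishes on $K$, but $h$ restricted to $K$ was an embedding, whence $K=0$ and $\varphi$ is injective. Applying this with $M=\pi_1(X)^{ab}$ and $N=H_1(X,\mathbf{Z})$ gives the Hurewicz isomorphism.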
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 64, "mathjax_display_tex": 4, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9377757906913757, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/69414/an-inequality-of-integrals
# An inequality of integrals Let $f\in C^1([a,b])$ with $f(a)=0$. How can I show that there exists a positive constant $M$ independent of $f$ such that $\int^b_a|f(x)|^2dx\leq M\int^b_a|f^\prime(x)|^2dx$? - ## 1 Answer For any $x\in[a,b]$, you have $$\begin{align*} |f(x)|^2 &= \left|\int_a^xf'(t)\,dt\right|^2\\ &\leq \int_a^x|f'(t)|^2\,dt\int_a^x1\,dt\\ &= (x-a)\int_a^x|f'(t)|^2\,dt\\ &\leq (x-a)\int_a^b|f'(t)|^2\,dt, \end{align*}$$ where the first inequality is the Cauchy-Schwarz inequality. Now integrate both sides over $[a,b]$ with respect to $x$ to get your desired inequality, with $M=(b-a)^2/2$. -
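A quick numerical sanity check of the inequality with $M=(b-a)^2/2$ (an editorial addition; the interval, grid size, and test functions below are arbitrary illustrative choices):

````
import numpy as np

a, b = 0.0, 1.0
x = np.linspace(a, b, 100001)
dx = x[1] - x[0]

def integral(y):
    # simple Riemann-sum approximation, good enough for a sanity check
    return float(np.sum(y) * dx)

M = (b - a) ** 2 / 2

# test functions f with f(a) = 0, together with their derivatives f'
tests = [
    ("x - a",        x - a,                  np.ones_like(x)),
    ("(x - a)**2",   (x - a) ** 2,           2 * (x - a)),
    ("sin(pi*x/2)",  np.sin(np.pi * x / 2),  (np.pi / 2) * np.cos(np.pi * x / 2)),
]

for name, f, df in tests:
    lhs = integral(f ** 2)
    rhs = M * integral(df ** 2)
    print(f"{name}: {lhs:.5f} <= {rhs:.5f} -> {lhs <= rhs}")
````

Each line should print `True`; for example, for $f(x)=x-a$ on $[0,1]$ the two sides are $1/3$ and $1/2$.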
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 9, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9507548213005066, "perplexity_flag": "head"}
http://unapologetic.wordpress.com/2007/11/15/neighborhoods/?like=1&source=post_flair&_wpnonce=9a71b83933
# The Unapologetic Mathematician

## Neighborhoods

As we go on, we're going to want to focus right in on a point $x$ in a topological space $(X,\tau)$. We're interested in the subsets of $X$ in which we could "wiggle $x$ around a bit" without leaving the subset. These subsets we'll call "neighborhoods" of $x$.

More formally, a neighborhood $N$ of $x$ is a subset of $X$ which contains some open subset $U\subseteq X$, which contains the point $x$. Then we can wiggle $x$ within the "nearby" set $U$ and not leave $N$. Notice here that I'm not requiring $N$ itself to be open, though if it is we call it an "open neighborhood" of $x$. In fact, any open set $U$ is an open neighborhood of any of its points $x$, since clearly it contains an open set containing $x$ — itself!

Similarly, a subset is a neighborhood of any point in its interior. But what about a point not in its interior? If we take a point $x\in S$, but $x\notin S^\circ$, then $S$ is a neighborhood of $x$ if and only if there is an open set $U$ with $x\in U\subseteq S$. But then since $U$ is an open set contained in $S$, we must have $U\subseteq S^\circ$, which would put $x$ into the interior as well. That is, a set is a neighborhood of exactly those points in its interior. In fact, some authors use this condition to define "interior" rather than the one more connected to orders.

So the only way for a set to be a neighborhood of all its points is for all of its points to be in its interior. That is, $S^\circ=S$. But, dually to the situation for the closure operator, the fixed points of the interior operator are exactly the open sets. And so we conclude that a set $S$ is open if and only if it is a neighborhood of all its points — another route to topologies! We say what the neighborhoods of each point are, and then we define an open set as one which is a neighborhood of each of its points.

But now we have to step back a moment. I can't just toss out any collection of sets and declare them to be the neighborhoods of $x$. There are certain properties that the collection of neighborhoods of a given point must satisfy, and only when we satisfy them will we be able to define a topology in this way. Let's call something which satisfies these conditions (which we'll work out) a "neighborhood system" for $x$ and write it $\mathcal{N}(x)$.

First of all (and almost trivially), each set in $\mathcal{N}(x)$ must contain $x$. We're not going to get much of anywhere if we don't at least require that. If $S\in\mathcal{N}(x)$ is a neighborhood of $x$, and $S\subseteq T$, then $T$ must be a neighborhood as well since it contains whatever open set $U$ satisfies the neighborhood condition $x\in U\subseteq S$. Also, if $S$ and $T$ are two neighborhoods of $x$ then $x\in S^\circ\cap T^\circ=(S\cap T)^\circ\subseteq S\cap T$. That is, there must be a neighborhood contained in both $S$ and $T$. We can sum up these two conditions by saying that $\mathcal{N}(x)$ is a "filter" in the partially-ordered set $P(X)$.

So, given a topology $\tau$ on $X$ we get a filter $\mathcal{N}(x)$ for each point. Conversely, if we have such a choice of a filter at each point, we can declare the open sets to be those $U$ so that $x\in U$ implies that $U\in\mathcal{N}(x)$. Trivially, $\varnothing$ satisfies this condition, as it doesn't have any points to be a neighborhood of. The total space $X$ satisfies this condition because it's above everything, so it's in every filter, and thus is a neighborhood of every point.
Now let's take two sets $U$ and $V$, which are neighborhoods of each of their points, and let's consider their intersection and a point $x\in U\cap V$. Since $x$ is in both $U$ and $V$, each of them is a neighborhood of $x$, and so since $\mathcal{N}(x)$ is a filter we see that $U\cap V$ must be a neighborhood of $x$ as well. We can extend this to cover all finite intersections.

On the other hand, let's consider an arbitrary family $U_\alpha$ of sets, each of which is a neighborhood of each of its points. Now, given any point $x$ in the union $\bigcup\limits_\alpha U_\alpha$ it must be in at least one of the sets, say $U_A$. Now $x\in U_A\subseteq\bigcup\limits_\alpha U_\alpha$ tells us first that $U_A\in\mathcal{N}(x)$ by assumption, and then that $\bigcup\limits_\alpha U_\alpha\in\mathcal{N}(x)$ by the filter property of $\mathcal{N}(x)$. Thus we can take arbitrary unions, and so we have a topology.

One caveat here: I might be missing something. Other definitions of a neighborhood system tend to include something along the lines of saying that every neighborhood of a point $x$ contains another neighborhood in its interior. I seem to have come up with a topology without using that assumption, but I'm willing to believe that there's something I've missed here. If you see it, go ahead and let me know.

Posted by John Armstrong | Point-Set Topology, Topology

## 10 Comments »

1. Yes, you do get a topology without that assumption, but you'd then lose the one-to-one correspondence between topologies as usually defined and topologies via neighborhood filters. For example, take $X = \{0, 1, 2\}$, and define a filter at each point by: $N_0 = \{\{0, 1\}, X\}$, $N_1 = \{\{1, 2\}, X\}$, $N_2 = \{\{0, 2\}, X\}$. Then the only open sets are the empty set and $X$; in particular, you can't retrieve the $N_i$ from the topology in this example. Comment by Todd Trimble | November 15, 2007 | Reply

2. Okay, so many different neighborhood systems will give the exact same topology, but only the one that satisfies the compatibility condition between the different neighborhood filters will be the system we can get from a topology, right? Can we get the "right" filter by just throwing out all the "bad" neighborhoods, which don't contain a neighborhood in their interiors? Comment by | November 16, 2007 | Reply

3. Yes to the first question, and I don't see anything wrong with the formulation of the second (i.e., I don't think there are fatal objections on grounds of impredicativity or anything like that). Anyway, I think it's clear to both of us category wonks what's going on: there's an "underlying" functor which maps neighborhood systems (in the sense of your post) to topologies, and this has a left adjoint which maps a topology to the neighborhood system it generates, and throwing out the bad neighborhoods amounts to stabilizing, i.e., taking the coclosure (wrt the induced comonad) of the neighborhood system. All this reminds me of the true story of an undergraduate student of the high-powered Australian categorist Ross Street who, when asked in an oral exam if he could define the notion of topological space, answered accurately but bewilderingly, "Why yes, it's just a relational $\beta$-module!" There is some such ultra-fancy definition, which I believe was worked out in the mid-sixties by Michael Barr; I've never worked out what it's saying but I think it must be somewhere in the neighborhood (ha ha) of what we're talking about now.
(This $\beta$ presumably refers to the monad on Set whose algebras are compact Hausdorff spaces, viz., $\beta(X)$ is the set of ultrafilters on $X$). Might be fun to work out some time. Comment by Todd Trimble | November 16, 2007 | Reply

4. Oh, it was Ross Street’s student who had that topological space definition! I’ve had the ghost of that anecdote running in my head for almost half a year now. Comment by | November 16, 2007 | Reply

5. That you don’t need the usual assumption that neighbourhoods contain other neighbourhoods in their `interiors’, but that the `bad’ neighbourhood systems are not in the image of the adjoint functor is very similar to what happens for non-idempotent closure operators. And it seems that going back and forth between `bad’ neighbourhood systems and `bad’ closure operators, one doesn’t throw out anything. Comment by | November 18, 2007 | Reply

6. Benoît, that’s a great observation. I may have mentioned it before, but I haven’t really looked at the basics of topology in so long precisely because it was always just glossed over swiftly as another commenter suggested it should be. Now that these posts bring me back around to look at it directly, there’s a lot of really fascinating stuff in here. It’s doubly bizarre to find it all over again because I’m a topologist myself! Comment by | November 18, 2007 | Reply

7. [...] is closer to the subset than is to .” We’ll do this with a technique similar to neighborhoods. But there we just defined a collection of neighborhoods for each point. Here we will define the [...] Pingback by | November 23, 2007 | Reply

8. [...] are “topologically distinguishable” if they don’t have the same collection of neighborhoods — if . Now maybe one of the collections of neighborhoods strictly contains the other: . In [...] Pingback by | January 10, 2008 | Reply

9. [...] let’s equip with a topology defined by a neighborhood system. We say that a net converges to a point if for every neighborhood , the net is eventually in . In [...] Pingback by | January 29, 2008 | Reply

10. [...] answer is to use the categorical definition of a limit. Given a point the collection of open neighborhoods of form a directed set, and we can take the limit [...] Pingback by | March 22, 2011 | Reply
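The finite case of this construction is easy to check directly. Below is a small Python sketch (not part of the original post or its comments) that takes neighborhood filters on a finite set, declares a set open exactly when it belongs to the filter of each of its points, and confirms the result is a topology. The filters are the three-point example from the first comment, and the only open sets that come out are the empty set and $X$, as Todd Trimble says.

```python
from itertools import chain, combinations

def powerset(xs):
    """All subsets of xs, as frozensets."""
    xs = list(xs)
    return [frozenset(c) for c in chain.from_iterable(
        combinations(xs, r) for r in range(len(xs) + 1))]

def open_sets(X, nbhd):
    """U is declared open when U belongs to the filter nbhd[x] for every x in U."""
    return {U for U in powerset(X) if all(U in nbhd[x] for x in U)}

def is_topology(X, opens):
    """On a finite set it suffices to check the empty set, X, and closure
    under pairwise unions and intersections."""
    if frozenset() not in opens or frozenset(X) not in opens:
        return False
    return all((U & V) in opens and (U | V) in opens for U in opens for V in opens)

# The three-point example: each filter is generated by one 2-element set and X.
X = {0, 1, 2}
up = lambda gen: {S for S in powerset(X) if gen <= S}   # all supersets of gen
nbhd = {0: up(frozenset({0, 1})),
        1: up(frozenset({1, 2})),
        2: up(frozenset({0, 2}))}

opens = open_sets(X, nbhd)
print([set(U) for U in sorted(opens, key=len)])   # [set(), {0, 1, 2}]
print(is_topology(X, opens))                      # True
```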
http://mathhelpforum.com/advanced-algebra/187681-times-b-lcm-b.html
Thread: |a\times b| = LCM[|a|, |b|]

1. $A$ and $B$ are groups with binary operations $*$ and $\star$. Prove that the order of $a\times b$ is $\text{LCM}[|a|,|b|]$.

The binary operation of $A\times B$ is defined as follows: $(a\times b)(c\times d)=(a*c)\times (b\star d)$.

Let $|a|=n$ and $|b|=m$. Do I need to define the gcd as well? Regardless of whether I do or not, I am not sure where to proceed next.

2. Re: |a\times b| = LCM[|a|, |b|]

$\text{LCM}(n, m)$ is the smallest positive integer that is a multiple of both $n$ and $m$. By definition, the order of $a\times b$ is the least positive integer $k$ with $(a \times b)^k = (a*\cdots *a) \times (b\star\cdots \star b) = a^k \times b^k = e \times e$. By definition of the orders of $a$ and $b$, we know that $a^k = e \Leftrightarrow n \mid k$ and $b^k = e \Leftrightarrow m \mid k$. Therefore, $k$ is the least integer such that $k = l_a n$ and $k = l_b m$ for some positive integers $l_a, l_b$ (these expressions come from the definition of "$n$ divides $k$", or "$n \mid k$"). This means, overall, that $k$ is the least positive integer that is a multiple of both $n$ and $m$.
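As a quick sanity check of this argument (not part of the original thread), here is a short Python sketch using the additive groups $\mathbb{Z}/n\mathbb{Z}$: the order of a pair in the product group, found by direct search, agrees with the least common multiple of the orders of its components.

```python
from math import gcd, lcm   # math.lcm requires Python 3.9+

def order_mod(a, n):
    """Order of a in the additive group Z/nZ: least k > 0 with k*a = 0 (mod n)."""
    return n // gcd(a, n)

def order_in_product(a, n, b, m):
    """Order of the pair (a, b) in Z/nZ x Z/mZ, by direct search."""
    k = 1
    while (k * a) % n != 0 or (k * b) % m != 0:
        k += 1
    return k

# Check |(a, b)| = lcm(|a|, |b|) on a few small examples.
for (a, n, b, m) in [(2, 8, 3, 9), (1, 4, 1, 6), (5, 12, 4, 10)]:
    direct = order_in_product(a, n, b, m)
    predicted = lcm(order_mod(a, n), order_mod(b, m))
    print((a, n, b, m), direct, predicted, direct == predicted)
```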
http://en.wikipedia.org/wiki/Loop_transformation
# Loop optimization

In compiler theory, loop optimization is the process of increasing execution speed and reducing the overheads associated with loops. It plays an important role in improving cache performance and making effective use of parallel processing capabilities. Most of the execution time of a scientific program is spent on loops, so a lot of compiler optimization techniques have been developed to make them faster.

## Representation of computation and transformations

Since instructions inside loops can be executed repeatedly, it is frequently not possible to give a bound on the number of instruction executions that will be impacted by a loop optimization. This presents challenges when reasoning about the correctness and benefits of a loop optimization, specifically the representations of the computation being optimized and the optimization(s) being performed.[1]

## Optimization via a sequence of loop transformations

Loop optimization can be viewed as the application of a sequence of specific loop transformations (listed below or in [2]) to the source code or intermediate representation, with each transformation having an associated test for legality. A transformation (or sequence of transformations) generally must preserve the temporal sequence of all dependencies if it is to preserve the result of the program (i.e., be a legal transformation). Evaluating the benefit of a transformation or sequence of transformations can be quite difficult within this approach, as the application of one beneficial transformation may require the prior use of one or more other transformations that, by themselves, would result in reduced performance.

### Common loop transformations

• fission/distribution: Loop fission attempts to break a loop into multiple loops over the same index range but each taking only a part of the loop's body. This can improve locality of reference, both of the data being accessed in the loop and the code in the loop's body.
• fusion/combining: Another technique which attempts to reduce loop overhead. When two adjacent loops would iterate the same number of times (whether or not that number is known at compile time), their bodies can be combined as long as they make no reference to each other's data.
• interchange/permutation: These optimizations exchange inner loops with outer loops. When the loop variables index into an array, such a transformation can improve locality of reference, depending on the array's layout.
• inversion: This technique changes a standard while loop into a do/while (a.k.a. repeat/until) loop wrapped in an if conditional, reducing the number of jumps by two for cases where the loop is executed. Doing so duplicates the condition check (increasing the size of the code) but is more efficient because jumps usually cause a pipeline stall. Additionally, if the initial condition is known at compile-time and is known to be side-effect-free, the if guard can be skipped.
• loop-invariant code motion: If a quantity is computed inside a loop during every iteration, and its value is the same for each iteration, it can vastly improve efficiency to hoist it outside the loop and compute its value just once before the loop begins. This is particularly important with the address-calculation expressions generated by loops over arrays. For correct implementation, this technique must be used with loop inversion, because not all code is safe to be hoisted outside the loop.
• parallelization: A special case of automatic parallelization focusing on loops, restructuring them to run efficiently on multiprocessor systems. It can be done automatically by compilers (automatic parallelization) or manually (inserting parallel directives like OpenMP).
• reversal: Loop reversal reverses the order in which values are assigned to the index variable. This is a subtle optimization which can help eliminate dependencies and thus enable other optimizations. Also, certain architectures utilize looping constructs at the assembly-language level that count in a single direction only (e.g. decrement-jump-if-not-zero (DJNZ)).
• scheduling: Loop scheduling divides a loop into multiple parts that may be run concurrently on multiple processors.
• skewing: Loop skewing takes a nested loop iterating over a multidimensional array, where each iteration of the inner loop depends on previous iterations, and rearranges its array accesses so that the only dependencies are between iterations of the outer loop.
• software pipelining: A type of out-of-order execution of loop iterations to hide the latencies of processor function units.
• splitting/peeling: Loop splitting attempts to simplify a loop or eliminate dependencies by breaking it into multiple loops which have the same bodies but iterate over different contiguous portions of the index range. A useful special case is loop peeling, which can simplify a loop with a problematic first iteration by performing that iteration separately before entering the loop.
• tiling/blocking: Loop tiling reorganizes a loop to iterate over blocks of data sized to fit in the cache.
• vectorization: Vectorization attempts to run as many of the loop iterations as possible at the same time on a multiple-processor system.
• unrolling: Duplicates the body of the loop multiple times, in order to decrease the number of times the loop condition is tested and the number of jumps, which hurt performance by impairing the instruction pipeline. Completely unrolling a loop eliminates all overhead (except multiple instruction fetches and increased program load time), but requires that the number of iterations be known at compile time (except in the case of JIT compilers). Care must also be taken to ensure that multiple re-calculation of indexed variables is not a greater overhead than advancing pointers within the original loop.
• unswitching: Unswitching moves a conditional inside a loop outside of it by duplicating the loop's body, and placing a version of it inside each of the if and else clauses of the conditional.

#### Other loop optimizations

• sectioning: First introduced for vectorizers, loop-sectioning (also known as strip-mining) is a loop-transformation technique for enabling SIMD-encodings of loops and improving memory performance. This technique involves each vector operation being done for a size less than or equal to the maximum vector length on a given vector machine. [2] [3]

### The unimodular transformation framework

The unimodular transformation approach [3] uses a single unimodular matrix to describe the combined result of a sequence of many of the above transformations. Central to this approach is the view of the set of all executions of a statement within n loops as a set of integer points in an n-dimensional space, with the points being executed in lexicographical order.
For example, the executions of a statement nested inside an outer loop with index i and an inner loop with index j can be associated with the pairs of integers (i, j). The application of a unimodular transformation corresponds to the multiplication of the points within this space by the matrix. For example, the interchange of two loops corresponds to the matrix $\left[\begin{array}{cc}0&1\\1&0\end{array}\right]$.

A unimodular transformation is legal if it preserves the temporal sequence of all dependencies; measuring the performance impact of a unimodular transformation is more difficult. Imperfectly nested loops and some transformations (such as tiling) do not fit easily into this framework.

### The polyhedral or constraint-based framework

The polyhedral model [4] handles a wider class of programs and transformations than the unimodular framework. The set of executions of a set of statements within a possibly imperfectly nested set of loops is seen as the union of a set of polytopes representing the executions of the statements. Affine transformations are applied to these polytopes, producing a description of a new execution order. The boundaries of the polytopes, the data dependencies, and the transformations are often described using systems of constraints, and this approach is often referred to as a constraint-based approach to loop optimization. For example, a single statement within an outer loop 'for i := 0 to n' and an inner loop 'for j := 0 to i+2' is executed once for each (i, j) pair such that 0 <= i <= n and 0 <= j <= i+2.

Once again, a transformation is legal if it preserves the temporal sequence of all dependencies. Estimating the benefits of a transformation, or finding the best transformation for a given code on a given computer, remain the subject of ongoing research as of the time of this writing (2010).

## References

1. Jean-Francois Collard, Reasoning About Program Transformations, Springer-Verlag, 2003. Discusses in depth the general question of representing executions of programs rather than program text in the context of static optimization.
2. David F. Bacon, Susan L. Graham, and Oliver J. Sharp. Compiler transformations for high-performance computing. Report No. UCB/CSD 93/781, Computer Science Division-EECS, University of California, Berkeley, Berkeley, California 94720, November 1993 (available at CiteSeer). Introduces compiler analysis such as data dependence analysis and interprocedural analysis, as well as a very complete list of loop transformations.
3. Steven S. Muchnick, Advanced Compiler Design and Implementation, Morgan Kaufmann, 1997. Section 20.4.2 discusses loop optimization.
4. R. Allen and K. Kennedy. Optimizing Compilers for Modern Architectures. Morgan Kaufmann, 2002.
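As a toy illustration of the interchange example from the unimodular framework section above (not part of the article, and with made-up 3×3 loop bounds), the following Python/NumPy sketch applies the unimodular matrix to the iteration points of a small loop nest and checks that a dependence with distance vector (0, 1) stays lexicographically positive after the transformation, which is the legality condition described in this section.

```python
import numpy as np

# Interchange of two loops as the unimodular matrix from the article.
U = np.array([[0, 1],
              [1, 0]])

n = 3
points = [(i, j) for i in range(n) for j in range(n)]   # original lexicographic order

# Map each iteration point (i, j) to its new coordinates U @ (i, j).
mapped = [tuple(int(v) for v in U @ np.array(p)) for p in points]

# Executing the transformed loop nest means visiting the new coordinates in
# lexicographic order, i.e. the old inner index j now varies in the outer loop.
print(sorted(mapped)[:5])               # [(0, 0), (0, 1), (0, 2), (1, 0), (1, 1)]

# A dependence from (i, j) to (i, j+1) has distance vector (0, 1); after the
# transformation its distance is U @ (0, 1) = (1, 0), still lexicographically
# positive, so the temporal order of that dependence is preserved.
print((U @ np.array([0, 1])).tolist())  # [1, 0]
```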
http://mathoverflow.net/questions/21265/how-many-groups-of-size-at-most-n-are-there-what-is-the-asymptotic-growth-rate
## How many groups of size at most n are there? What is the asymptotic growth rate? And what of rings, fields, graphs, partial orders, etc.?

Question. How many (isomorphism types of) finite groups of size at most n are there? What is the asymptotic growth rate? And the same question for rings, fields, graphs, partial orders, etc.

Motivation. This question arises in the context of a certain finite analogue of Borel equivalence relation theory. I explained in this answer that the purpose of Borel equivalence relation theory is to analyze the complexity of various naturally occurring equivalence relations in mathematics, such as the isomorphism relations on various types of structures. It turns out that many of the most natural equivalence relations arising in mathematics are Borel relations on a standard Borel space, and these fit into a hierarchy under Borel reducibility. Thus, this subject allows us to make precise the idea that some classification problems are wild and others tame, by fitting them into a precise hierarchy where they can be compared with one another under reducibility.

Recently, there has been some work adapting this research project to other contexts. Last Friday, for example, Sy Friedman gave a talk for our seminar on an effective analogue of the Borel theory. Part of his analysis provided a way to think about very fine distinctions in the relative difficulty even of the various problems of classifying finite structures, using methods from complexity theory, such as considering NP equivalence relations under polytime reductions. For a part of his application, it turned out that fruitful conclusions could be made when one knows something about the asymptotic growth rate of the number of isomorphism classes, for the kinds of objects under consideration.

This is where MathOverflow comes in. I find it likely that there are MO people who know about the number of groups. Therefore, please feel free to ignore all the motivation above, and kindly tell us all about the values or asymptotics of the following functions, where n is a natural number:

• G(n) = the number of groups of size at most n.
• R(n) = the number of rings of size at most n.
• F(n) = the number of fields of size at most n.
• Γ(n) = the number of graphs of size at most n.
• P(n) = the number of partial orders of size at most n.

Of course, in each case, I mean the number of isomorphism types of such objects. These particular functions are representative, though of course, there are numerous variations. Basically I am interested in the number of isomorphism classes of any kind of natural finite structure, limited by size. For example, one could modify Γ for various specific kinds of graphs, or modify P for various kinds of partial orders, such as trees, lattices or orders with height or width bounds. And so on. Therefore, please answer with other natural classes of finite structures, but I shall plan to accept the answer for my favored functions above.

In many of these other cases, there are easy answers. For example, the number of equivalence relations with n points is the intensely studied partition number of n. The number of Boolean algebras of size at most n is just $\lfloor\log_2 n\rfloor$, since all finite Boolean algebras are finite power sets.
-

7 From OEIS: Number of groups of order n: research.att.com/~njas/sequences/A000001 – Joel David Hamkins Apr 13 2010 at 22:08

1 For graphs, see research.att.com/~njas/sequences/A000088 . – Qiaochu Yuan Apr 13 2010 at 22:43

8 "Most finite groups are 2-groups", so G(n) will be a very bumpy function. I almost want to say that it's not advisable to try and approximate it by some "simple" function like x^alpha or e^(x^alpha). To give an example: there are 11759892 groups of order at most 1023, and 49487365422 of order 1024: this is about 4000 times bigger! The number of groups of size p^e is about p^((2/27)e^3) and presumably the p=2 terms dominate. This should be enough information to see how G(n) is growing. – Kevin Buzzard Apr 14 2010 at 0:10

8 Wouldn't it make a lot more sense to weight things by the size of their automorphism group? (I know, it's not the same question, but is quite natural, and may have a chance at being less "bumpy".) I wonder if anyone has attacked it. Certainly for the count of finite extensions of a $p$-adic field, the answer is much more elegant with such weighting factors. – BCnrd Apr 14 2010 at 0:29

3 Brian, that is a very interesting question (and it reminds me of the nice enumeration of ss elliptic curves). There aren't any orders less than 16 with more than one weighted group, and I'm unwilling to work out more by hand. – S. Carnahan♦ Apr 14 2010 at 0:40

## 9 Answers

For groups: you can check out this recent paper of John Conway, Heiko Dietrich, and E.A. O'Brien (DOI) for results and conjectures on counting the number of groups of a given order (I also seem to remember a recent article of Conway's in the Notices of the AMS (or maybe the Bulletin) on this subject).

For fields: there is a unique isomorphism class of fields of size $p^n$ for each prime $p$ and each positive integer $n$, so one can figure out the asymptotic from the prime number theorem.

For rings: the OEIS has information on this sequence here.

- The exact formula for fields is $F(n) = \pi(n)+\pi(n^{1/2})+\pi(n^{1/3})+\cdots$ which is about $n/\log(n)$. – François G. Dorais♦ Apr 13 2010 at 23:27

Roughly speaking, the more high powers of primes divide $n$, the more groups of order $n$ there should be. In fact, if $f(n)$ is the number of isomorphism classes of groups of order $n$, then $$f(n)\leq n^{(\frac{2}{27}+o(1))e(n)^{2}}$$ where $e(n)$ is the largest exponent of a prime dividing $n$ and $o(1)\rightarrow0$ as $e(n)\rightarrow\infty$ (see Pyber, L. Enumerating finite groups of given order. Ann. of Math. (2) 137 (1993), no. 1, 203--220. MR1200081). From my Group Theory notes page 12.

- Community wiki of resources from the Online Encyclopedia of Integer Sequences:

• Number of groups of size n A000001
• Number of graphs of size n A000088
• Number of posets of size n A000112
• Number of abelian groups of size n A000688
• Number of rings of size n A027623

- The asymptotics for groups are strictly speaking still open, since extensions of nonsolvable groups are apparently rather thorny. Edit: It seems my information is somewhat out of date (see Milne's answer). I'm not sure how bad the $o(1)$ can be. It is expected that 2-groups dominate by a lot, although one could reasonably argue that the numerical evidence gathered to date samples the very small end of the nonsolvable family.
By a 1965 result of Higman and Sims, the number of isomorphism types of groups of order $2^n$ (and conjecturally, groups of order at most $2^n$) grows as $2^{\frac{2}{27}n^3 + O(n^{8/3})}$. In other words, your function $G(n)$ grows very roughly like `$2^{\frac{2}{27}(\log_2 n)^3}$`. More specifically, `$\overline{\operatorname{lim}} \, \frac{\log G(n)}{(\log_2 n)^3} = 2/27$`. Addendum: I did a bit of GAP computation following Brian Conrad's comment. If we weight by dividing by the order of the automorphism group, none of the orders up to 70 contribute more than 1 (including 64, which contributes 48611383/78744960), and the average contribution from non-highly-divisible orders drops pretty quickly. The cumulative sums by 10s are roughly: 0, 5.3, 7.5, 8.9, 10.3, 11.4, 12.1, 13.1. Due to the jumpiness, I can only say that the growth looks very sub-linear. Given the growth rate of isomorphism types, I suspect we'll eventually get an explosion of mass for large powers of two even with the weighting. - As observed by Rob, there is exactly one field for each prime power order. The exact formula for the number of fields is then $$F(n) = \pi(n) + \pi(n^{1/2}) + \pi(n^{1/3}) + \cdots$$ where $\pi(x)$ counts the number of primes up to $x$. There are $O(\log n)$ nonzero lower order terms each of which is $O(\sqrt{n})$. So the leading term $\pi(n)$ dominates and the Prime Number Theorem gives the asymptotic $F(n) \sim \mathrm{Li}(n) \sim n/\log(n)$. - The number of finite abelian groups (Sloane A000688) is a multiplicative function, so the asymptotics are known. - Mathworld gives the asymptotic formula A(n) ~ 2.3n (eqn. 4) mathworld.wolfram.com/AbelianGroup.html – François G. Dorais♦ Apr 14 2010 at 1:03 1 Just to clarify (for those who were perplexed, like me): A(n) is the number of abelian groups of order at most n, not of order exactly n. That is, on average there are 2.3 abelian groups of a given order. (I'm marginally surprised this turns out to be finite.) The constant ~2.3 is the product zeta(2)*zeta(3)*zeta(4)*... – Michael Lugo Apr 14 2010 at 1:17 Here's an article with more precise information: emis.de/journals/PIMB/088/n088p057.ps.gz – Kevin O'Bryant Apr 14 2010 at 1:51 Posets are A000112 in Sloane. The asymptotics aren't given there, but are known. See D. J. Kleitman and B. L. Rothschild, The number of finite topologies, Proc. AMS 25 (1970) 276-282. This paper shows that $\log_2 P_n = n^2/4 + o(n^2)$, where $P_n$ is the number of posets on $n$ elements. The full asymptotic formula is given in Kleitman and Rothschild, Asymptotic enumeration of partial orders on a finite set, Transactions of the American Mathematical Society 205 (1975) 205-220. This paper gives $\log P_n = n^2/4 + 3n/2 + o(\log n)$, and an explicit (but messy) asymptotic formula for $P_n$. Edited to add: Richard Stanley, in Enumerative Combinatorics volume 1, exercise 3.3(e) (rated [3+]), gives $$P_n \sim C \cdot 2^{n^2/4+3n/2} e^n n^{-n-1}$$ where $C = {2 \over \pi} \sum_{i \ge 0} 2^{-i(i+1)}$; he states this is a simplification of the formula from Kleitman-Rothschild (1975) that I haven't written out here. - For groups there is the book Enumeration of Finite Groups by Simon R. Blackburn, Peter M. Neumann, and Geetha Venkataraman. - In a more general vein, spectra of equational classes have been studied by Ralph McKenzie and others. A jumping off point is his "Locally finite varieties with large free spectra" . 
I believe his work on tame congruence theory shed some light on the growth rates for certain classes of finite algebras.
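As a small supplement to the answers above (not part of the thread), here is a self-contained Python sketch of the field count $F(n) = \pi(n) + \pi(n^{1/2}) + \pi(n^{1/3}) + \cdots$: it tabulates $F(n)$ with a simple sieve and compares it with the $n/\log n$ asymptotic.

```python
import math

def prime_pi_table(limit):
    """pi[m] = number of primes <= m, via a sieve of Eratosthenes."""
    is_prime = bytearray([0, 0]) + bytearray([1]) * (limit - 1)
    for p in range(2, int(limit ** 0.5) + 1):
        if is_prime[p]:
            is_prime[p * p::p] = bytearray(len(is_prime[p * p::p]))
    pi, count = [0] * (limit + 1), 0
    for m in range(limit + 1):
        count += is_prime[m]
        pi[m] = count
    return pi

def iroot(n, k):
    """Largest integer r with r**k <= n (avoids floating-point rounding issues)."""
    r = int(round(n ** (1.0 / k)))
    while r ** k > n:
        r -= 1
    while (r + 1) ** k <= n:
        r += 1
    return r

def field_count(n, pi):
    """Number of finite fields of order <= n: one for each prime power p^k <= n."""
    total, k = 0, 1
    while 2 ** k <= n:
        total += pi[iroot(n, k)]
        k += 1
    return total

pi = prime_pi_table(10 ** 6)
for n in (10 ** 3, 10 ** 4, 10 ** 5, 10 ** 6):
    print(n, field_count(n, pi), round(n / math.log(n)))
```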
http://math.stackexchange.com/questions/210194/quotient-of-ring-of-integers/210444
# Quotient of ring of integers

Let $R=\mathcal{O}(K)$ be the ring of integers of $K=\mathbb{Q}[\zeta_8]$, where $\zeta_8=e^{2\pi i/8}=\sqrt{2}/2(1+i)$ is a primitive eighth root of unity in $\mathbb{C}$. It can be shown that $R$ is a P.I.D. Let $\mathscr{P}$ be the ideal $\langle \zeta_8-1\rangle$ and let $$\mathscr{P}^{-2}=\{x\in K\mid (\zeta_8-1)^2x\in R\}.$$ Claim: $\mathscr{P}^{-2}/R\cong R/\mathscr{P}^2\cong \mathbb{Z}/4\mathbb{Z}$. I have absolutely no idea how to prove this!

- Which isomorphism do you have trouble with? Can you put words to what is giving you difficulty? – Hurkyl Oct 10 '12 at 2:54

I don't understand either isomorphism. What are the four cosets of $\mathscr{P}^{2}$ in $R$? The four cosets of $R$ in $\mathscr{P}^{-2}$? – Clinton Boys Oct 10 '12 at 3:16

Would also like to know how to show $R/\mathcal{P}\cong \mathbb{F}_2$. – Clinton Boys Oct 10 '12 at 3:36

1 It might help you to get started to show that 2 is in $\mathscr{P}$. – David Loeffler Oct 10 '12 at 10:02

## 1 Answer

Well, ${\zeta_8}^2$ is a fourth root of unity, namely $i$ (at angle $\pi/2$), so we prefer to write $\zeta_8=\sqrt i$. This field $\Bbb Q(\sqrt i)$ is then a 4-dimensional vector space over $\Bbb Q$ with a standard basis $1,\sqrt i,\,i,\,i\sqrt i$. (The next power, ${\zeta_8}^4= -1$, is already dependent on them. Note also that $\Bbb Q(\sqrt i)=\Bbb Q(i,\sqrt 2)$ as a field extension.) So $\Bbb Z[\sqrt i] =\{a+b\sqrt i+ci+di\sqrt i \mid a,b,c,d\in\Bbb Z\}$.

1. What are the integers in $\Bbb Q(\sqrt i)$? It is going to be $\Bbb Z[\sqrt i]$, but it needs to be thought over.

2. As David mentioned in a comment, $2\in\mathscr P=\left((1-\sqrt i)\right)$, because (writing '$a\equiv b \pmod{\mathscr P}$' for $b-a\in\mathscr P$), we have $$1\equiv\sqrt i \overset{()^2}\implies 1\equiv i \overset{()^2}\implies 1\equiv -1 \overset{+1}\implies 2\equiv 0 \pmod{\mathscr P}$$

3. Then, for $\mathscr P^2$, it is the ideal generated by $(\sqrt i -1)^2=i-2\sqrt i+1$. So, basically the relation $$1+i \equiv 2\sqrt i \pmod{\mathscr P^2}$$ generates it. From this, we also have $2i\equiv 4i$, implying $0\equiv 2i$, then (multiplying by $-i$) $0\equiv 2 \pmod{\mathscr P^2}$, so again $-1\equiv 1$. Also, $1+i\equiv 2\sqrt i\equiv 0\sqrt i=0$, that is, $1\equiv -i=(-1)i\equiv i$. And, because $\sqrt i-1\notin\mathscr P^2$, we also have $\sqrt i\not\equiv 1 \pmod{\mathscr P^2}$.

So, finally, we can conclude that $R/\mathscr P^2$ is represented by the following set: $$\{ 0,1,\sqrt i,1-\sqrt i \}$$ And it is not $\Bbb Z/4\Bbb Z$, but still a ring with 4 elements. I'm sure there is a more sophisticated and simpler solution, but until we find it, you can play around with $\sqrt i$.

About the other statement, $\mathscr P^{-2}/R$: the problem is that $R$ cannot be an ideal in there, because $1\in R$. Edit: But, as Hurkyl noted, they are both $R$-modules, so it can make sense anyway.

- On your last sentence, while they aren't a ring and ideal, they are two abelian groups -- they're even two $R$-modules. So the quotient should be interpreted as a quotient of those structures. – Hurkyl Oct 10 '12 at 17:11

Ah, yes, you're right, $R$-modules, thanks. – Berci Oct 10 '12 at 20:08

Thanks for this. One final question: why is $\mathscr{P}^4=\langle 2\rangle$? – Clinton Boys Oct 12 '12 at 3:30
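Some of the facts used above can be checked by computer. The following sketch (not part of the thread) assumes SymPy is available and computes field norms as resultants against the minimal polynomial $\Phi_8(x)=x^4+1$ of $\zeta_8$: the norm of $\zeta_8-1$ is 2, so $R/\mathscr{P}$ has two elements and $R/\mathscr{P}^2$ has four, and $(\zeta_8-1)^4$ is 2 times an element of norm 1, i.e. a unit, which is one way to answer the last comment's question about $\mathscr{P}^4=\langle 2\rangle$.

```python
from sympy import symbols, resultant, cyclotomic_poly, rem, expand

x = symbols('x')
phi8 = cyclotomic_poly(8, x)        # x**4 + 1, minimal polynomial of zeta_8

def norm(g):
    """Norm from Q(zeta_8) down to Q of the element g(zeta_8): since Phi_8 is
    monic, Res(Phi_8, g) is the product of g over the four conjugates of zeta_8."""
    return resultant(phi8, g, x)

print(norm(x - 1))                  # 2  -> |R/P|   = 2, so R/P is the field F_2
print(norm((x - 1)**2))             # 4  -> |R/P^2| = 4
print(norm((x - 1)**4))             # 16, the same as the norm of 2 (namely 2**4)

# (zeta_8 - 1)^4 written in the basis 1, zeta, zeta^2, zeta^3:
r = rem(expand((x - 1)**4), phi8, x)
print(r)                            # -4*x**3 + 6*x**2 - 4*x : all coefficients even
print(norm(r / 2))                  # 1  -> (zeta_8 - 1)^4 / 2 is a unit, so P^4 = (2)
```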
http://physics.stackexchange.com/questions/33078/what-is-a-completely-positive-map-physically/33089
What is a completely positive map *physically*?

I am sure this question is really stupid, but I could not refrain from asking it in this forum. This can be considered as a continuation of this question.

> What does it mean to be a completely positive map, from a physics point of view?

A positive map $h:\mathcal{B(H)}\rightarrow\mathcal{B(K)}$ is a map which takes states to states. However, if we put in an auxiliary space $\mathcal{B(A)}$ and take the natural extension $1\otimes h:\mathcal{B(A)}\otimes\mathcal{B(H)}\rightarrow\mathcal{B(A)}\otimes\mathcal{B(K)}$, then completely positive maps are the ones which preserve positivity whatever the dimension of $\mathcal{B(A)}$ may be. So they form what we know as a quantum channel (and all its relations with the Jamiołkowski isomorphism etc.). Obviously, positive maps which are not completely positive will not remain physical objects when extended. In a way the same thing is done by operator space theorists as well.

My question is, can we give a definition of complete positivity without involving auxiliary systems? After all, a positive map sends a state to a state. So which physical process actually prevents them from being a valid quantum operation? Looking back, are all maps that are not completely positive physically impossible to simulate? (This alone perhaps should be written as a different question altogether.)

-

3 Answers

Physically, a CP map represents evolution processes even in the presence of entanglement.

> After all, a positive map sends a state to a state.

This is false when the system is entangled with something else. The classic example is the transpose on the system of interest. (Technically, a transpose, since it's basis-dependent.) Since positivity can be characterized using determinants, transposing preserves positivity. However, if you only transpose a tensor half of a big matrix then this fails. Again, the standard example is a pair of qubits entangled in the $|\phi^+\rangle={1\over\sqrt{2}}\left(|00\rangle+|11\rangle\right)$ Bell state. Then the total density matrix is $$\rho=\frac{1}{2}\begin{pmatrix}1&0&0&1\\0&0&0&0\\0&0&0&0\\1&0&0&1\end{pmatrix}$$ and when transposed over the second qubit (done by transposing each of the four submatrices) it goes to $$\rho^{T_\textrm{B}}=\frac{1}{2}\begin{pmatrix}1&0&0&0\\0&0&1&0\\0&1&0&0\\0&0&0&1\end{pmatrix},$$ which is not positive ($(0,1,-1,0)$ is an eigenvector with a negative eigenvalue).

That's the math. The physics it implies is that while you can ensure lone-system positivity of a map with purely local conditions, it does not imply that it works as a physical evolution for a larger, entangled system.
- Completely positive maps can be characterized without involving external systems by means of the Stinespring factorization theorem, which reduces to Choi's theorem for the case of finite dimensional Hilbert spaces: $\Phi(a) = \sum_{i=1}^{mn}V_i^\dagger a V_i$. Completely positive maps are considered to represent the most general quantum evolutions, however, Shaji and Sudarshan: "ftp://79.110.128.93/books/physics,%20math/%D0%96%D1%83%D1%80%D0%BD%D0%B0%D0%BB%D1%8B/Phys%20Letters%20A/Volume%20341,%20Issues%201-4,%20pp.%201-356%20(20%20June%202005)/Anil%20Shaji,%20E.C.G.%20Sudarshan%20-%20Who's%20afraid%20of%20not%20completely%20positive%20maps%3F.pdf" (Who’s afraid of not completely positive maps?) gave arguments that they do not exhaust all physical evolution possibilities. - 3 Shaji and Sudarshan's paper is completely bogus. Also the Stinespring theorem doesn't get rid of the auxiliary system, it's always there. – Ron Maimon Jul 30 '12 at 15:12
http://mathforum.org/mathimages/index.php?title=Fibonacci_Numbers&diff=32655&oldid=13862
# Fibonacci Numbers ### From Math Images (Difference between revisions) | | | | | |----------------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|------------------------------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | | | Current revision (12:40, 25 June 2012) (edit) (undo) | | | (35 intermediate revisions not shown.) | | | | | Line 1: | | Line 1: | | | - | {{Image Description | + | {{Image Description Ready | | - | |ImageName=Fibonacci Spiral | + | |ImageName=Fibonacci numbers in a sea shell | | | |Image=NAUTILUS.jpg | | |Image=NAUTILUS.jpg | | - | |ImageIntro=The spiral curve of the Nautilus sea shell follows the pattern of the spiral drawn in a Fibonacci rectangle, a collection of squares with sides that have the length of Fibonacci numbers. | + | |ImageIntro=The spiral curve of the Nautilus sea shell follows the pattern of a spiral drawn in a Fibonacci rectangle, a collection of squares with sides that have the length of Fibonacci numbers | | - | |ImageDescElem=The Fibonacci sequence is the sequence <math>1, 1, 2, 3, 5, 8, 13, 21, 34, 55, \ldots,</math> where the first two numbers are 1s and every later number is the sum of the two previous numbers. So, given two <math>1</math>'s as the first two terms, the next terms of the sequence follows as : <math>1+1=2, 1+2=3, 2+3=5, 3+5=8, \dots</math> | + | . | | | | + | |ImageDescElem=The Fibonacci sequence is the sequence where the first two numbers are 1s and every later number is the sum of the two previous numbers. So, given two <math>1</math>'s as the first two terms, the next terms of the sequence follows as : <math>1+1=2, 1+2=3, 2+3=5, 3+5=8, \dots</math> | | | | | | | - | [[image:sunflower.jpg|Image 1|frame]] | + | {{Anchor|Reference=1|Link=[[Image:sunflower.jpg|Image 1|thumb|500px|left]]}} | | - | The Fibonacci numbers can be discovered in nature, such as the spiral of the Nautilus sea shell, the petals of the flowers, the seed head of a sunflower, and many other parts of the nature. The seeds at the head of the sunflower, for instance, are arranged so that one can find a collection of spirals in both clockwise and counterclockwise ways. 
The number of spirals differs depending on whether one counts in a clockwise or a counterclockwise way because different patterns of spirals are formed depending on the counting direction, as shown by <i>Image 1</i>. The two numbers of spirals are always consecutive numbers in the Fibonacci sequence. | + | | | | | | | | - | Nature prefers this way of arranging the seeds because it seems to allow the seeds to be uniformly distributed. For more information about Fibonacci patterns in nature, see [[Fibonacci_Numbers#Fibonacci_Numbers_in_Nature| Fibonacci Numbers in Nature]] | + | The Fibonacci numbers can be discovered in nature, such as the spiral of the Nautilus sea shell, the petals of the flowers, the seed head of a sunflower, and many other parts. The seeds at the head of the sunflower, for instance, are arranged so that one can find a collection of spirals in both clockwise and counterclockwise ways. Different patterns of spirals are formed depending on whether one is looking at a clockwise or counterclockwise way; thus, the number of spirals also differ depending on the counting direction, as shown by [[#1|Image 1]]. The two numbers of spirals are always consecutive numbers in the Fibonacci sequence. | | | | + | | | | | + | Nature prefers this way of arranging seeds because it seems to allow the seeds to be uniformly distributed. For more information about Fibonacci patterns in nature, see [[Fibonacci_Numbers#Fibonacci_Numbers_in_Nature| Fibonacci Numbers in Nature]] | | | | | | | | | | | | Line 20: | | Line 22: | | | | The problem was to find out how many pairs of rabbits there will be after one year. | | The problem was to find out how many pairs of rabbits there will be after one year. | | | | | | | - | [[image:Rabbit.png|thumb|300px|Image 2]] | + | {{Anchor|Reference=2|Link=[[Image:Rabbit.png|Image 2|thumb|300px|right]]}} | | - | On January 1st, there is only 1 pair. On February 1st, the baby rabbits matured to be grown up rabbits, but they have not reproduced, so there will only be the original pair present. | + | On January 1st, there is only 1 pair. On February 1st, this baby rabbits matured to be grown up rabbits, but they have not reproduced, so there will only be the original pair present. | | | | | | | - | Now look at any later month. June is a good example. As you can see in <i>Image 2</i>, all 5 pairs of rabbits that were alive in May continue to be alive in June. Furthermore, all 3 pairs of rabbits that were ''also'' alive on April 1st, which all became or were adult rabbit pairs on May 1st, reproduce, creating 3 new pairs of rabbits born in June. | + | Now look at any later month. June is a good example. As you can see in [[#2|Image 2]], all 5 pairs of rabbits that were alive in May continue to be alive in June. Furthermore, there are 3 new pairs of rabbits born in June, one for each pair that was alive in April (and are therefore old enough to reproduce in June). | | | | | | | - | This means that on June 1st, there are 8 pairs of rabbits. This is equal to the 5 pairs from May 1st plus the 3 new pairs, which is the number of pairs from April 1st. This same reasoning can be applied to any month, March or later, so the number of rabbits pairs at a certain point is the same as the sum of the number of rabbit pairs in the two previous months. | + | This means that on June 1st, there are 5 + 3 = 8 pairs of rabbits. 
This same reasoning can be applied to any month, March or later, so the number of rabbits pairs in any month is the same as the sum of the number of rabbit pairs in the two previous months. | | | | | | | | This is exactly the rule that defines the Fibonacci sequence. As you can see in the image, the population by month begins: 1, 1, 2, 3, 5, 8, ..., which is the same as the beginning of the Fibonacci sequence. The population continues to match the Fibonacci sequence no matter how many months out you go. | | This is exactly the rule that defines the Fibonacci sequence. As you can see in the image, the population by month begins: 1, 1, 2, 3, 5, 8, ..., which is the same as the beginning of the Fibonacci sequence. The population continues to match the Fibonacci sequence no matter how many months out you go. | | | | | | | | An interesting fact is that this problem of rabbit population was not intended to explain the Fibonacci numbers. This problem was originally intended to introduce the Hindu-Arabic numerals to Western Europe, where people were still using Roman numerals, and to help people practice addition. It was coincidence that the number of rabbits followed a certain pattern which people later named as the Fibonacci sequence. | | An interesting fact is that this problem of rabbit population was not intended to explain the Fibonacci numbers. This problem was originally intended to introduce the Hindu-Arabic numerals to Western Europe, where people were still using Roman numerals, and to help people practice addition. It was coincidence that the number of rabbits followed a certain pattern which people later named as the Fibonacci sequence. | | - | | | | | | | | | | | =Fibonacci Numbers in Nature= | | =Fibonacci Numbers in Nature= | | Line 36: | | Line 37: | | | | ===Leaf Arrangement=== | | ===Leaf Arrangement=== | | | | | | | - | Fibonacci numbers appear in the arrangement of leaves in certain plants. Take a plant, locate the lowest leaf and number that leaf as 0. Number the leaves by order of creation starting from 0, as shown in <i> Image 3</i>. Then, count the number of leaves you encounter until you reach the next leaf that is directly above and pointing in the same direction as the lowest leaf, which is the leaf with number 8 in this image. The number of leaves you pass, in this case, 8, will be a Fibonacci number. | + | Fibonacci numbers appear in the arrangement of leaves in certain plants. Take a plant, locate the lowest leaf and number that leaf as 0. Number the leaves by order of creation starting from 0, as shown in [[#3|Image 3]]. Then, count the number of leaves you encounter until you reach the next leaf that is directly above and pointing in the same direction as the lowest leaf, which is the leaf with number 8 in this image. The number of leaves you pass, in this case, 8, will be a Fibonacci number. | | - | [[image:fibonacileaf.png|right|150px|Image 3|thumb]] | + | {{Anchor|Reference=3|Link=[[Image:fibonacileaf.png|Image 3|thumb|150px|right]]}} | | - | | + | | | | Moreover, the number of rotations you make around the stem until you reach that leaf will also be a Fibonacci number. You make rotations up the stem by following ascending order of the leaf's number. In the image, if you follow the red arrows, the number of rotations you make until you reach 8 will be 5, which is a Fibonacci number. | | Moreover, the number of rotations you make around the stem until you reach that leaf will also be a Fibonacci number. 
You make rotations up the stem by following ascending order of the leaf's number. In the image, if you follow the red arrows, the number of rotations you make until you reach 8 will be 5, which is a Fibonacci number. | | | | | | | - | In <i>Image 4</i>, the leaf that is pointing in the same direction as the lowest leaf 0 is the leaf number 13. The number of leaves in between these two leaves is 13, which is a Fibonacci number. Moreover, going up the stem in a clockwise direction, such that we follow leaves 0, 1, 2, ..., 13, we make 8 rotations, and going up the stem in a counterclockwise direction, we make 5 rotations. The number of clockwise rotations and the number of counterclockwise rotations are always consecutive Fibonacci numbers. | + | In [[#4|Image 4]], the leaf that is pointing in the same direction as the lowest leaf 0 is the leaf number 13. The number of leaves in between these two leaves is 13, which is a Fibonacci number. Moreover, going up the stem in a clockwise direction, such that we follow leaves 0, 1, 2, ..., 13, we make 8 rotations, and going up the stem in a counterclockwise direction, we make 5 rotations. The number of clockwise rotations and the number of counterclockwise rotations are always consecutive Fibonacci numbers. | | - | [[image:leave.png|none|thumb|Image 4|250px]] | + | {{Anchor|Reference=4|Link=[[Image:leave.png|Image 4|thumb|250px|none]]}} | | | | | | | | ===Spirals=== | | ===Spirals=== | | - | [[image:goldenrectangle copy.jpg|300px|right|Image 5|thumb]] | + | {{Anchor|Reference=5|Link=[[Image:goldenrectangle copy.jpg|Image 5|thumb|300px|right]]}} | | - | Fibonacci numbers can be seen in nature through spiral forms that can be constructed by Fibonacci rectangles as shown in <i>Image 5</i>. Fibonacci rectangles are rectangles that are built so that the ratio of the length to the width is the proportion of two consecutive Fibonacci numbers. | + | Fibonacci numbers can be seen in nature through spiral forms that can be constructed by Fibonacci rectangles as shown in [[#5|Image 5]]. Fibonacci rectangles are rectangles that are built so that the ratio of the length to the width is the proportion of two consecutive Fibonacci numbers. | | | | | | | - | [[image:shell.jpg|200px|thumb|Image 6|left]] | | | | | We can build Fibonacci rectangles first by drawing two squares with length 1 next to each other. Then, we draw a new square with length 2 that is touching the sides of the original two squares. We draw another square with length 3 that is touching one unit square and the latest square with length 2. We can build Fibonacci rectangles by continuing to draw new squares that have the same length as the sum of the length of the latest two squares. | | We can build Fibonacci rectangles first by drawing two squares with length 1 next to each other. Then, we draw a new square with length 2 that is touching the sides of the original two squares. We draw another square with length 3 that is touching one unit square and the latest square with length 2. We can build Fibonacci rectangles by continuing to draw new squares that have the same length as the sum of the length of the latest two squares. | | | | | | | - | After building Fibonacci rectangles, we can draw a spiral in the squares, each square containing a quarter of a circle. Such spirals are called the Fibonacci spirals, and they can be seen in sea shells, snails, the spirals of the galaxy, and other parts of nature, as shown in <i>Image 6</i> and <i>Image 7</i>. 
## Current revision

Fibonacci numbers in a sea shell

Field: Algebra
Image Created By: Unknown
Website: [1]

Fibonacci numbers in a sea shell

The spiral curve of the Nautilus sea shell follows the pattern of a spiral drawn in a Fibonacci rectangle, a collection of squares with sides that have the length of Fibonacci numbers.

# Basic Description

The Fibonacci sequence is the sequence where the first two numbers are 1s and every later number is the sum of the two previous numbers. So, given two $1$'s as the first two terms, the next terms of the sequence follow as : $1+1=2, 1+2=3, 2+3=5, 3+5=8, \dots$

Image 1

The Fibonacci numbers can be discovered in nature, such as the spiral of the Nautilus sea shell, the petals of flowers, the seed head of a sunflower, and many other parts. The seeds at the head of the sunflower, for instance, are arranged so that one can find a collection of spirals in both clockwise and counterclockwise directions. Different patterns of spirals are formed depending on whether one looks clockwise or counterclockwise; thus, the number of spirals also differs depending on the counting direction, as shown by Image 1. The two numbers of spirals are always consecutive numbers in the Fibonacci sequence. Nature prefers this way of arranging seeds because it seems to allow the seeds to be uniformly distributed. For more information about Fibonacci patterns in nature, see Fibonacci Numbers in Nature.

## Origin

The Fibonacci sequence was studied by Leonardo of Pisa, or Fibonacci (c. 1170 - c. 1250). In his work Liber Abaci, he introduced a problem involving the growth of the rabbit population. The assumptions were

• there is one pair of baby rabbits placed in an enclosed place on the first day of January
• this pair will grow for one month before reproducing and will produce a new pair of baby rabbits on the first day of March
• each new pair will mature for one month and produce a new pair of rabbits on the first day of their third month
• the rabbits never die, so after they mature, the rabbits produce a new pair of baby rabbits every month.

The problem was to find out how many pairs of rabbits there will be after one year.

Image 2

On January 1st, there is only 1 pair. On February 1st, this pair of baby rabbits has matured into grown-up rabbits, but they have not reproduced, so there will only be the original pair present. Now look at any later month. June is a good example. As you can see in Image 2, all 5 pairs of rabbits that were alive in May continue to be alive in June. Furthermore, there are 3 new pairs of rabbits born in June, one for each pair that was alive in April (and is therefore old enough to reproduce in June). This means that on June 1st, there are 5 + 3 = 8 pairs of rabbits. This same reasoning can be applied to any month, March or later, so the number of rabbit pairs in any month is the same as the sum of the number of rabbit pairs in the two previous months.
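This monthly bookkeeping is easy to simulate. Here is a rough Python sketch of the rules above (the variable names are just illustrative):

```python
# Month-by-month bookkeeping of the rabbit problem: a pair spends one month as a
# "baby" pair, then becomes an adult pair; every pair that was already adult in
# the previous month produces one new baby pair.
adults, babies = 0, 1          # January 1st: one baby pair
counts = []
for month in range(1, 13):     # January .. December
    counts.append(adults + babies)
    adults, babies = adults + babies, adults   # babies mature; old adult pairs breed

print(counts)   # [1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144]
```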
This is exactly the rule that defines the Fibonacci sequence. As you can see in the image, the population by month begins: 1, 1, 2, 3, 5, 8, ..., which is the same as the beginning of the Fibonacci sequence. The population continues to match the Fibonacci sequence no matter how many months out you go.

An interesting fact is that this rabbit population problem was not intended to explain the Fibonacci numbers. The problem was originally intended to introduce the Hindu-Arabic numerals to Western Europe, where people were still using Roman numerals, and to help people practice addition. It was a coincidence that the number of rabbits followed a certain pattern, which people later named the Fibonacci sequence.

# Fibonacci Numbers in Nature

### Leaf Arrangement

Fibonacci numbers appear in the arrangement of leaves in certain plants. Take a plant, locate the lowest leaf, and number that leaf as 0. Number the leaves by order of creation starting from 0, as shown in Image 3. Then, count the number of leaves you encounter until you reach the next leaf that is directly above and pointing in the same direction as the lowest leaf, which is the leaf with number 8 in this image. The number of leaves you pass, in this case 8, will be a Fibonacci number.

Image 3

Moreover, the number of rotations you make around the stem until you reach that leaf will also be a Fibonacci number. You make rotations up the stem by following the ascending order of the leaf numbers. In the image, if you follow the red arrows, the number of rotations you make until you reach 8 will be 5, which is a Fibonacci number.

In Image 4, the leaf that is pointing in the same direction as the lowest leaf 0 is leaf number 13. The number of leaves in between these two leaves is 13, which is a Fibonacci number. Moreover, going up the stem in a clockwise direction, such that we follow leaves 0, 1, 2, ..., 13, we make 8 rotations, and going up the stem in a counterclockwise direction, we make 5 rotations. The number of clockwise rotations and the number of counterclockwise rotations are always consecutive Fibonacci numbers.

Image 4

### Spirals

Image 5

Fibonacci numbers can be seen in nature through spiral forms that can be constructed from Fibonacci rectangles, as shown in Image 5. Fibonacci rectangles are rectangles that are built so that the ratio of the length to the width is the proportion of two consecutive Fibonacci numbers. We can build Fibonacci rectangles first by drawing two squares with length 1 next to each other. Then, we draw a new square with length 2 that is touching the sides of the original two squares. We draw another square with length 3 that is touching one unit square and the latest square with length 2. We can keep building Fibonacci rectangles by continuing to draw new squares that have the same length as the sum of the lengths of the latest two squares.

After building Fibonacci rectangles, we can draw a spiral in the squares, each square containing a quarter of a circle. Such a spiral is called the Fibonacci spiral, and it can be seen in sea shells, snails, the spirals of the galaxy, and other parts of nature, as shown in Image 6 and Image 7.

Image 6 Image 7

### Ancestry of Bees

Fibonacci numbers also appear when studying the ancestry of bees.
Bees reproduce according to the following rules:

• male bees hatch from an unfertilized egg, and have only a mother and no father,
• female bees hatch from a fertilized egg, and require both a mother and a father.

The table below starts with a male bee, and tracks the ancestors of the male bee. Only one female was needed to produce the male bee. This female bee, on the other hand, must have had both a mother and a father to be hatched; thus, the third row of the bee family tree has one male and one female. For each male and female, this pattern repeats. When we count the number of bees in each generation, we get a Fibonacci sequence as we go up the generations, similar to the way we got Fibonacci numbers in the rabbit population problem.

Image 8

# A More Mathematical Explanation

## Symbolic Definition of Fibonacci Sequence

The Fibonacci sequence is the sequence $F_1, F_2, F_3, \ldots, F_n, \ldots$ where

$F_n = F_{n-1} + F_{n-2} \quad \hbox{ for } n>2$, and $F_1 = 1,\ F_2 = 1$.

The Fibonacci sequence is recursively defined (a recursively defined sequence is one in which each term is defined by preceding terms in the sequence; for instance, $x_n=(x_{n-1})^2-3x_{n-2}$ is recursively defined) because each term is defined in terms of its two immediately preceding terms.

## Identities and Properties

### Identities

There are some interesting identities, including formulas for the sum of the first $n$ Fibonacci numbers, the sum of Fibonacci numbers with odd indices, and the sum of Fibonacci numbers with even indices. Note that all the identities and properties in this section can be proven in a more rigorous way through mathematical induction.

#### Sum of first $n$ Fibonacci numbers

The sum of the first $n$ Fibonacci numbers is one less than the value of the ${(n+2)}^{\rm th}$ Fibonacci number:

Eq. (1) $F_1+F_2+\dots+F_n=F_{n+2}-1$

For example, the sum of the first $5$ Fibonacci numbers is :

$F_1+F_2+F_3+F_4+F_5= 1 + 1 + 2 + 3 + 5=F_7-1=12$

The example is demonstrated below. The total length of the red bars that correspond to $F_1, F_2, F_3, F_4, F_5$ is one unit less than the length of $F_7$.

Image 9

$F_1=F_3-F_2$
$F_2=F_4-F_3$
$F_3=F_5-F_4$
$\dots$
$F_{n-1}=F_{n+1}-F_n$
$F_n=F_{n+2}-F_{n+1}$

Adding up all the equations, we get :

$F_1+F_2+\dots+F_n=-F_2+(F_3-F_3)+(F_4-F_4)+ \dots +(F_{n+1}-F_{n+1})+F_{n+2}$
$=F_{n+2}-F_2$

Except for $F_{n+2}$ and $-F_2$, all terms on the right side of the equation are canceled out by another term that has the opposite sign and the same magnitude. Because $F_2=1$, we get :

$F_1+F_2+\dots+F_n=F_{n+2}-1$

#### Sum of Fibonacci numbers with odd indices

The sum of the first $n$ Fibonacci numbers with odd indices is equal to the ${(2n)}^{\rm th}$ Fibonacci number:

Eq. (2) $F_1+F_3+F_5+\dots+F_{2n-1}=F_{2n}$

For instance, the sum of the first $4$ Fibonacci numbers with odd indices is:

$F_1+F_3+F_5+F_7=1+2+5+13=21=F_8$

This example is shown below.
Image 10

$F_1=F_2$
$F_3=F_4-F_2$
$F_5=F_6-F_4$
$\dots$
$F_{2n-1}=F_{2n}-F_{2n-2}$

Adding all the equations, we get :

$F_1+F_3+F_5+\dots+F_{2n-1}=(F_2-F_2)+(F_4-F_4)+(F_6-F_6)+\dots+(F_{2n-2}-F_{2n-2})+F_{2n}$
$=F_{2n}$

Except for $F_{2n}$, all the terms on the right side of the equation disappear because each term is canceled out by another term that has the opposite sign and the same magnitude.

#### Sum of Fibonacci numbers with even indices

The sum of the first $n$ Fibonacci numbers with even indices is one less than the ${(2n+1)}^{\rm th}$ Fibonacci number:

$F_2+F_4+\dots+F_{2n}=F_{2n+1}-1$

For example, the sum of the first $3$ Fibonacci numbers with even indices is :

$F_2+F_4+F_6= 1+3+8=F_7-1=13-1=12$

This example is shown below.

Image 11

To see the proof, click below.

Subtracting Eq. (2), the sum of Fibonacci numbers with odd indices, from the sum of the first $2n$ Fibonacci numbers, we get the identity of the sum of Fibonacci numbers with even indices.

First, when we find the sum of the first $2n$ Fibonacci numbers through Eq. (1), we get:

$F_1+F_2+\dots+F_{2n}=F_{2n+2}-1$

Now, subtract Eq. (2) from the above equation, and we get:

$F_2+F_4+F_6+\dots+F_{2n}=F_{2n+2}-F_{2n}-1$

By definition of Fibonacci numbers, $F_{2n+2}-F_{2n}=F_{2n+1}$. Thus,

$F_2+F_4+F_6+\dots+F_{2n}=F_{2n+1}-1$

#### Sum of the squares of Fibonacci numbers

The sum of the squares of the first $n$ Fibonacci numbers is the product of the $n^{\rm th}$ and the ${(n+1)}^{\rm th}$ Fibonacci numbers.

Image 12

$\sum_{i=1}^n {F_i}^2=F_n F_{n+1}$

This identity can be proved by studying the area of the rectangles in Image 12.

The rectangle is called a Fibonacci rectangle, which is further described in Fibonacci Numbers in Nature. The numbers inside each square indicate the length of one side of the square. Notice that the lengths of the squares are all Fibonacci numbers. Any rectangle in the picture is composed of squares with lengths that are Fibonacci numbers. In fact, any rectangle is composed of every square with side lengths $F_1$ through $F_n$, with the value of $n$ depending on the rectangle. Moreover, the dimension of this rectangle is $F_n$ by $F_{n+1}$.

With this information in mind, we can prove the identity $\sum_{i=1}^n {F_i}^2=F_n F_{n+1}$ by computing the area of the rectangle in two different ways. The first way of finding the area is to add the areas of the squares. That is, the area of the rectangle will be : ${F_1}^2+{F_2}^2+{F_3}^2+\dots+{F_n}^2$. Another way of computing the area is by multiplying the width by the height. Using this method, the area will be : $F_n F_{n+1}$. Because we are computing the area of the same rectangle, the two methods should give the same result. Thus, ${F_1}^2+{F_2}^2+{F_3}^2+\dots+{F_n}^2=F_n F_{n+1}$.

For example, for the red rectangle, the width is $5$ and the height is $8$. Since $5$ is the $5^{\rm th}$ Fibonacci number and $8$ is the $6^{\rm th}$ Fibonacci number, let $n=5$. The area of the rectangle is : $1^2+1^2+2^2+3^2+5^2={F_1}^2+{F_2}^2+{F_3}^2+{F_4}^2+{F_5}^2=\sum_{i=1}^5 {F_i}^2=40$, or $5 \times 8 = F_5 F_{5+1} = F_5 F_6 = 40$. Thus, $\sum_{i=1}^5 {F_i}^2=F_5 F_{5+1}$.
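These sum identities are easy to check numerically. Below is a small Python sketch that tests all four of them for the first several values of $n$ (the helper function and the loop bound are just illustrative choices):

```python
# Quick numerical check of the four sum identities above.
# fib(k) returns F_1 ... F_k as a list (F_1 = F_2 = 1).

def fib(k):
    f = [1, 1]
    while len(f) < k:
        f.append(f[-1] + f[-2])
    return f[:k]

for n in range(1, 15):
    F = fib(2 * n + 2)                     # enough terms for every identity below

    # Eq. (1): F_1 + ... + F_n = F_{n+2} - 1
    assert sum(F[:n]) == F[n + 1] - 1

    # Eq. (2): F_1 + F_3 + ... + F_{2n-1} = F_{2n}
    assert sum(F[0:2 * n:2]) == F[2 * n - 1]

    # Even indices: F_2 + F_4 + ... + F_{2n} = F_{2n+1} - 1
    assert sum(F[1:2 * n:2]) == F[2 * n] - 1

    # Sum of squares: F_1^2 + ... + F_n^2 = F_n * F_{n+1}
    assert sum(x * x for x in F[:n]) == F[n - 1] * F[n]

print("all four identities hold for n = 1, ..., 14")
```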
### Properties

#### Greatest Common Divisor

The greatest common divisor of two Fibonacci numbers is the Fibonacci number whose index is the greatest common divisor of the indices of the original two Fibonacci numbers. In other words,

$\gcd(F_n,F_m) = F_{\gcd(n,m)}$.

For instance, $\gcd(F_9,F_6)=\gcd(34,8)=2=F_3=F_{\gcd(9,6)}$.

In the special case where $F_n$ and $F_m$ are consecutive Fibonacci numbers, this property says that

$\gcd(F_n, F_{n+1})=F_{\gcd(n,n+1)}=F_1=1$.

That is, $F_n$ and $F_{n+1}$, or two consecutive Fibonacci numbers, are always relatively prime (two integers are relatively prime if their greatest common divisor is 1).

To see the proof for this special case, click below.

Assume that $F_n$ and $F_{n+1}$ have some integer $k$ as their common divisor. Then, both $F_{n+1}$ and $F_n$ are multiples of $k$:

$F_{n+1}=ka$
$F_n=kb$

Subtracting the second equation from the first, we get :

$F_{n-1}=k(a-b)$,

which means that if two consecutive Fibonacci numbers, $F_n$ and $F_{n+1}$, have $k$ as their common divisor, then the previous Fibonacci number, $F_{n-1}$, must also be a multiple of $k$. In that case, $F_{n-1}$ and $F_n$, which are also two consecutive Fibonacci numbers, will have $k$ as a common divisor. Then, it follows that $F_{n-2}$ must also be a multiple of $k$. Repeating the subtraction of consecutive Fibonacci numbers, we can conclude that the very first Fibonacci number, $F_1 = 1$, must also be a multiple of $k$. So $k=1$, and the only common divisor between two consecutive Fibonacci numbers is 1. Thus, two consecutive Fibonacci numbers are relatively prime.

#### Finite Difference of Fibonacci Numbers

One of the interesting properties of Fibonacci numbers is that the sequence of differences between consecutive Fibonacci numbers also forms a Fibonacci sequence, as shown in the table below. For more information about the difference table, see Difference Tables.

Because the first sequence of differences of the Fibonacci sequence also includes a Fibonacci sequence, the second difference (the sequence of differences between consecutive numbers of the first sequence of differences) also includes a Fibonacci sequence. The Fibonacci sequence is thus reproduced in every sequence of differences.

We can see that the sequence of differences is composed of Fibonacci numbers by looking at the definition of Fibonacci numbers :

$F_n = F_{n-1} + F_{n-2}$.

The difference between two consecutive Fibonacci numbers is :

$F_n - F_{n-1} = F_{n-2}$.

Thus, the difference between two consecutive Fibonacci numbers, $F_n$ and $F_{n-1}$, is equal to the value of the previous Fibonacci number, $F_{n-2}$.

## Golden Ratio

Image 13

The golden ratio appears in paintings, architecture, and in various forms of nature. Two numbers are said to be in the golden ratio if the ratio of the smaller number to the larger number is equal to the ratio of the larger number to the sum of the two numbers. In Image 13, the widths of A and B are in the golden ratio if $a : b = (a+b) : a$.

The golden ratio is represented by the Greek lowercase phi, $\varphi$, and the exact value is

$\varphi=\frac{1 + \sqrt{5}}{2} \approx 1.61803\,39887\dots\,$

This value can be found from the definition of the golden ratio. To see an algebraic derivation of the exact value of the golden ratio, go to Golden Ratio : An Algebraic Representation.

An interesting fact about the golden ratio is that the ratio of two consecutive Fibonacci numbers approaches the golden ratio as the numbers get larger, as shown by the table below.
$\frac{F_{n+1}}{F_n}$ : $\tfrac{1}{1}=1$, $\tfrac{2}{1}=2$, $\tfrac{3}{2}=1.5$, $\tfrac{5}{3}=1.66667$, $\tfrac{8}{5}=1.6$, $\tfrac{13}{8}=1.625$, $\tfrac{21}{13}=1.61538$, $\tfrac{34}{21}=1.61905$, $\tfrac{55}{34}=1.61765$, $\tfrac{89}{55}=1.61818$

Let's assume that the ratio of two consecutive Fibonacci numbers has a limit and verify that this limit is, in fact, the golden ratio.

Let $r_n$ denote the ratio of two consecutive Fibonacci numbers, that is, $r_n=\frac{F_{n+1}}{F_n}$. Then, $r_{n-1}=\frac{F_n}{F_{n-1}}$. $r_n$ and $r_{n-1}$ are related by :

$r_n=\frac{F_{n+1}}{F_n}=\frac{F_n+F_{n-1}}{F_n}=1+\frac{F_{n-1}}{F_n}=1+\cfrac{1}{{F_n}/{F_{n-1}}}=1+\frac{1}{r_{n-1}}$.

Assuming that the ratio $r_n$ has a limit, let $r$ be that limit:

$\lim_{n \to \infty} r_n=\lim_{n \to \infty}\frac{F_{n+1}}{F_n}=r$.

Then, $\lim_{n \to \infty} r_n = \lim_{n \to \infty} r_{n-1} = r$. Taking the limit of $r_n=1+\frac{1}{r_{n-1}}$ we get :

$r=1+\frac{1}{r}$

Multiplying both sides by $r$, we get ${r}^2=r+1$, which can be written as:

$r^2 - r - 1 = 0$.

Applying the quadratic formula (the solutions of an equation of the form $ax^2+bx+c=0$ are given by $x=\frac{-b \pm \sqrt {b^2-4ac}}{2a}$), we get

$r = \frac{1 \pm \sqrt{5}} {2}$.

Because the ratio has to be a positive value,

$r=\frac{1 + \sqrt{5}}{2}$

which is the golden ratio. Thus, if $r_n$ has a limit, then this limit is the golden ratio. That is, as we go farther out in the sequence, the ratio of two consecutive Fibonacci numbers approaches the golden ratio. In fact, it can be proved that $r_n$ does have a limit; one way is to use Binet's formula in the next section. For a different proof using an infinite continued fraction (a continued fraction is a fraction in which the denominator is composed of a whole number and a fraction; an infinite continued fraction of the golden ratio has the form $\varphi = 1 + \cfrac{1}{1 + \cfrac{1}{1 + \cfrac{1}{1 + \cfrac{1} {1 + \ddots\,}}}}$), go to Continued Fraction Representation and Fibonacci Sequences.

Image 14

Many people find the golden ratio in various parts of nature, art, architecture, and even music. However, there are some people who criticize this viewpoint. They claim that many mathematicians are wishfully trying to make a connection between the golden ratio and other parts of the world even though there is no real connection.

One example of the golden ratio that mathematicians found in nature is the human body. According to many, an ideal human body has proportions that show the golden ratio, such as:

• distance between the foot and the navel : distance between the navel and the head
• distance between the finger tip and the elbow : distance between the wrist and the elbow
• distance between the shoulder line and the top of the head : length of the head.

Leonardo da Vinci's drawing Vitruvian man, shown in Image 14, emphasizes the proportions of the human body. This drawing shows the proportions of an ideal human body that were studied by the Roman architect Vitruvius in his book De Architectura. In the drawing, a man is simultaneously inscribed in a circle and a square. The ratio of the square side to the radius of the circle in the drawing reflects the golden ratio, although the drawing deviates from the real value of the golden ratio by 1.7 percent. The proportions of the body of the man are also known to show the golden ratio.

Although people later found the golden ratio in the painting, there is no evidence whether Leonardo da Vinci was trying to show the golden ratio in his painting or not.
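The convergence of $F_{n+1}/F_n$ to $\varphi$ shown in the table earlier in this section is also easy to reproduce numerically. A short Python sketch (variable names are illustrative) that prints each ratio and its distance from $\varphi$:

```python
from math import sqrt

phi = (1 + sqrt(5)) / 2          # the golden ratio

a, b = 1, 1                      # F_1, F_2
for n in range(1, 16):
    ratio = b / a                # F_{n+1} / F_n
    print(f"n = {n:2d}   F(n+1)/F(n) = {ratio:.8f}   |ratio - phi| = {abs(ratio - phi):.2e}")
    a, b = b, a + b              # advance to the next pair of Fibonacci numbers
```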
For more information about the golden ratio, go to Golden Ratio.

## Binet's Formula for Fibonacci Numbers

Binet's Formula gives a formula for the $n^{\rm th}$ Fibonacci number as :

$F_n=\frac{{\varphi}^n-{\bar{\varphi}}^n}{\sqrt5}$,

where $\varphi$ and $\bar{\varphi}$ are the two roots of the quadratic equation $r^2-r-1=0$ from the previous section, that is,

$\varphi=\frac{1 + \sqrt{5}}{2},\quad \bar{\varphi}=\frac{1-\sqrt{5}}{2}$.

Here is one way of verifying Binet's formula through mathematical induction, but it gives no clue about how to discover the formula.

Let $F_n=\frac{{\varphi}^n-{\bar{\varphi}}^n}{\sqrt5}$ as defined above. We want to verify Binet's formula by showing that the definition of Fibonacci numbers holds true even when we use Binet's formula. First, we will show through the inductive step (the part of mathematical induction where one shows that if a statement holds true for some $n$, then it also holds true for $n+1$) that:

$F_n=F_{n-1}+F_{n-2}\quad\hbox{ for } n>2$

and then we will show the base case (the part of mathematical induction where one shows that a statement holds true for the lowest value of $n$, usually $n=0$ or $n=1$, depending on the situation) that:

$F_1=1,\quad F_2=1$.

First, according to Binet's formula,

$F_{n-1}+F_{n-2} = \frac{{\varphi}^{n-1}-{\bar{\varphi}}^{n-1}}{\sqrt5}+ \frac{{\varphi}^{n-2}-{\bar{\varphi}}^{n-2}}{\sqrt5}$
$=\frac{({\varphi}^{n-1}+{\varphi}^{n-2})-({\bar{\varphi}}^{n-1}+{\bar{\varphi}}^{n-2})}{\sqrt5}$
$=\frac{({\varphi}+1){\varphi}^{n-2}-(\bar{\varphi}+1){\bar{\varphi}}^{n-2}}{\sqrt5}$.

Because $\varphi$ and $\bar{\varphi}$ are the two roots of $r^2-r-1=0$, so that $\varphi+1=\varphi^2$ and $\bar{\varphi}+1=\bar{\varphi}^2$, the above equation becomes :

$F_{n-1}+F_{n-2}=\frac{{{\varphi}^2}{\varphi}^{n-2}-{{\bar{\varphi}}^2}{\bar{\varphi}}^{n-2}}{\sqrt5}$
$=\frac{{\varphi}^n-{\bar{\varphi}}^n}{\sqrt5}$
$=F_n$,

as desired. Now, because $\varphi=\frac{1 + \sqrt{5}}{2},\quad \bar{\varphi}=\frac{1-\sqrt{5}}{2}$,

$F_1=\frac{\varphi-\bar{\varphi}}{\sqrt5}=\frac{1}{\sqrt5}\left (\frac{1 + \sqrt{5}}{2}-\frac{1-\sqrt{5}}{2}\right)=\frac{1}{\sqrt5} {\sqrt5} = 1$
$F_2=\frac{{\varphi}^2-{\bar{\varphi}}^2}{\sqrt5}=\frac{(\varphi+\bar{\varphi})(\varphi-\bar{\varphi})}{\sqrt5}=\frac{1\cdot{\sqrt5}}{\sqrt5}=1$.

Binet's formula thus is a correct formula for the Fibonacci numbers.

## Fibonacci Numbers and Fractals

### Fibonacci Numbers and the Mandelbrot Set

Image 15

The Mandelbrot set is a set of points in which the boundary forms a fractal. It is the set of all complex numbers $c$ for which the sequence

$z_{n+1}=(z_n)^2+c \quad \hbox{ for } n=0,1,2,\dots$

does not go to infinity, starting with $z_0=0$.

For instance, $c=0$ is included in the Mandelbrot set because

$z_1=(z_0)^2+c=0^2+0 = 0$
$z_2=(z_1)^2+c=0^2+0=0$
$\dots$
$z_n=0^2+0=0$ for any $n$.

Thus, the sequence defined by $c=0$ is bounded and $0$ is included in the Mandelbrot set.

On the other hand, when we test $c=1$,

Image 16

$z_1=(z_0)^2+c=1$
$z_2=(z_1)^2+c=2$
$z_3=(z_2)^2+c=5$
$\dots$

The terms of this sequence will increase to infinity. Thus, $c=1$ is not included in the Mandelbrot set.

Image 17

People have been drawn to study the Mandelbrot set because of its aesthetic beauty; it is known as one of the most beautiful and complicated illustrations of a fractal. It is surprising to many people how a simple formula like $z_{n+1}=(z_n)^2+c$ can generate the complex structure of the Mandelbrot set.
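The escape behavior used in these two examples can be tested with a short iteration. Below is a rough Python sketch; the escape radius 2 and the iteration cap are standard choices for this kind of test, and the sample values of $c$ are our own:

```python
def in_mandelbrot(c, max_iter=100):
    """Iterate z -> z^2 + c from z = 0; report False as soon as |z| > 2,
    since such orbits are known to escape to infinity."""
    z = 0
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:
            return False
    return True      # did not escape within max_iter steps: treat as inside

for c in [0, 1, -1, 0.25, 0.26, complex(-0.5, 0.5)]:
    print(c, "inside" if in_mandelbrot(c) else "outside")
```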
The Fibonacci sequence is related to the Mandelbrot set through the period of the main cardioid (the heart-shaped figure in the picture) and some large primary bulbs (the bulbs that are directly connected to the main cardioid). For each bulb, there are many antennas, and the largest antenna is called the main antenna. The number of spokes in the main antenna is the period of the bulb.

The period of the main cardioid is considered to be 1. In Image 17, the main antenna has five spokes, including the one connecting the primary bulb and the junction point of the antenna. The period of this bulb is five.

Now, we will consider the period of the largest primary bulbs that are attached to the main cardioid and are in between two larger bulbs. In Image 18, the largest bulb between the bulb of period 1 and the bulb of period 2 is the bulb of period 3, and this bulb was found by looking for the largest bulb on the periphery of the main cardioid. The largest bulb between the bulbs of period 2 and period 3 is the bulb of period 5, and the one between the bulbs of period 3 and period 5 is the bulb of period 8. The sequence generated in this way proceeds as 1, 2, 3, 5, 8, 13, ..., following the pattern of the Fibonacci sequence.

Image 18

# Teaching Materials

There are currently no teaching materials for this page.

# References

Maurer, Stephen B. & Ralston, Anthony. (2004). Discrete Algorithmic Mathematics. Massachusetts: A K Peters.

Posamentier, Alfred S. & Lehmann, Ingmar. (2007). The Fabulous Fibonacci Numbers. New York: Prometheus Books.

Vorob'ev, N. N. (1961). Fibonacci Numbers. New York: Blaisdell Publishing Company.

Hoggatt, Verner E., Jr. (1969). Fibonacci and Lucas Numbers. Boston: Houghton Mifflin Company.

Knott, Ron. (n.d.). The Fibonacci Numbers and Golden Section in Nature. Retrieved from http://www.maths.surrey.ac.uk/hosted-sites/R.Knott/Fibonacci/fibnat.html

Fibonacci Numbers in Nature & the Golden Ratio. (n.d.). In World-Mysteries.com. Retrieved from http://www.world-mysteries.com/sci_17.htm

Weisstein, Eric W. (n.d.). Mandelbrot Set. In MathWorld--A Wolfram Web Resource. Retrieved from http://mathworld.wolfram.com/MandelbrotSet.html

# Future Directions for this Page

## Things to add (possible ideas for the future)

• Fibonacci numbers and Pascal's triangle
• A helper page for recursively defined sequences
• A section describing the Fibonacci numbers with negative subscripts; this appears in the Finite Difference of Fibonacci Numbers section

## Things to 'not' add

• A derivation of the exact value of the golden ratio. The derivation is redundant with the information in the golden ratio page.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 194, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8939759135246277, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/97282/why-does-using-the-probability-of-the-complementary-event-in-the-following-quest/97287
# Why does using the probability of the complementary event in the following question yield incorrect results

I was working on a probability question when I got stuck. The question goes as follows:

On any day, the probability that a boy eats his prepared lunch is 0.5. The probability that his sister eats her lunch is 0.6. The probability that the girl eats her lunch given that the boy eats his is 0.9. Determine the probability that:

a) both eat their lunch
b) the boy eats his lunch given that the girl eats hers
c) at least one of them eats their lunch.

The first one I got right, but I am not completely sure why it is right. Apparently the answer to a) is $9\over 20$ because, since the boy ate his lunch, the girl's probability of eating is 0.9, and if you multiply the two probabilities you get $9\over 20$. However, this does not seem to take into account the possibility that the girl ate her lunch first, in which case the probability of both eating is $3\over 10$. As for question b), I did understand it, but c) made me rather confused. My reasoning was that the probability that at least one of them ate their lunch was complementary to neither eating their lunch, which thus leads to the answer 0.8. However, the answer which the book gives is 0.65. I therefore thought that $P(\text{at least one eats})=P(\text{both})+P(B|G')+P(G|B')$, but this gave me some strange number. Does anyone have any suggestions on how to continue?

-

## 2 Answers

For (a), $P(A \cap B) = P(A|B)P(B) = P(B|A)P(A) = P(B \cap A)$. For (b), the answer is $P(B|A) = 0.45/0.6$. For (c), the answer is $P(A \cup B) = P(A)+P(B)-P(A \cap B)$.

-

I believe that your first point is incorrect as $P(A|B)P(B)=9/20$ and $P(B|A)P(A)=3/10$ – E.O. Jan 8 '12 at 3:03

@EmileOkada: How did you get 3/10? – Thomas Jan 8 '12 at 3:08

I see my mistake now. Srry :P – E.O. Jan 8 '12 at 3:10

c) $P(\text{at least one of them eats their lunch})=1-P(A^c \cap B^c)$

-

Not quite. The event that both eat their lunch would be $A\cap B$ while what is asked is the probability that at least one of them eats their lunch, that is, $P(A \cup B)$. Your expression on the right is in fact equal to $P(A\cup B)$, not $P(A \cap B) = P(\text{both eat their lunch})$. – Dilip Sarwate Jan 8 '12 at 3:56

this is the ans to part c, which is at least one of them will eat their lunch – johnny Jan 8 '12 at 6:59

Is there any difference between what you have written on the left side of the equation, viz. "both of them eat their lunch" and "at least one of them will eat their lunch"? I have already agreed that the right side of your equation is (one way of correctly expressing) the answer to part c, that at least one of them will eat their lunch. What I am complaining about is that the left side says that the right side stands for something different. – Dilip Sarwate Jan 8 '12 at 12:41

o, i c, i will change it – johnny Jan 9 '12 at 4:04
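A quick numerical cross-check of the numbers discussed above, sketched in Python (variable names are ours):

```python
# Numbers taken from the question.
p_boy = 0.5                 # P(boy eats his lunch)
p_girl = 0.6                # P(girl eats her lunch)
p_girl_given_boy = 0.9      # P(girl eats | boy eats)

p_both = p_girl_given_boy * p_boy            # (a) P(both) = 0.45
p_boy_given_girl = p_both / p_girl           # (b) P(boy | girl) = 0.75
p_at_least_one = p_boy + p_girl - p_both     # (c) inclusion-exclusion = 0.65

# The asker's 0.8 presumably came from 1 - 0.5 * 0.4, which assumes the two
# events are independent; the correct complement uses the joint probability:
p_neither = 1 - p_at_least_one               # 0.35, not 0.5 * 0.4 = 0.2

print(p_both, p_boy_given_girl, p_at_least_one, p_neither)
```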
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 15, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9858263731002808, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/272079/isomorphism-of-multiplicative-group-modulo-n/272086
# Isomorphism of multiplicative group modulo n

Let's take the groups $\mathbb{Z}^*_{12}, \mathbb{Z}^*_{10}$ and $\mathbb{Z}^*_{8}$ (multiplicative groups modulo $12, 10$ and $8$). The order of all these groups is $4$ (since $\varphi(12) = \varphi(10) = \varphi(8) = 4$). A multiplicative group modulo n with order $4$ is known to be cyclic. Therefore, there must exist an isomorphism between every one of these groups and the additive group $\mathbb{Z}_4^+$. Is that correct?

-

3 Are you sure that any group of order $4$ is cyclic? How about $\frac{\mathbb{Z}}{2\mathbb{Z}}\times \frac{\mathbb{Z}}{2\mathbb{Z}}$? – Louis La Brocante Jan 7 at 11:28

3 It is at least abelian, but not always cyclic. – André Jan 7 at 11:30

Find an element of order $4$ in the group @Rolando gave. – Babak S. Jan 7 at 11:35

## 2 Answers

Another approach to this is using the structure theorem for the multiplicative group of units. First, by CRT you factor the groups into prime powers

$$(\mathbb Z /12 \mathbb Z)^\times = (\mathbb Z /2^2 \mathbb Z)^\times \times (\mathbb Z /3 \mathbb Z)^\times$$
$$(\mathbb Z /10 \mathbb Z)^\times = (\mathbb Z /2 \mathbb Z)^\times \times (\mathbb Z /5 \mathbb Z)^\times$$
$$(\mathbb Z /8 \mathbb Z)^\times = (\mathbb Z /2^3 \mathbb Z)^\times$$

Now use the structure theorem, which says

• $(\mathbb Z /2^r \mathbb Z)^\times = C_2 \times C_{2^{r-2}}$ for $r \ge 2$
• $(\mathbb Z /p^r \mathbb Z)^\times = C_{(p-1)p^{r-1}}$ for odd primes $p$

This shows that the first and last groups are isomorphic (to $C_2 \times C_2$) while the middle one is different (it's $C_{2^2}$).

-

The groups are

$$(\mathbb Z /12 \mathbb Z)^\times = \{1,5,7,11\}$$
$$(\mathbb Z /10 \mathbb Z)^\times = \{1,3,7,9\}$$
$$(\mathbb Z /8 \mathbb Z)^\times = \{1,3,5,7\}$$

As you can see, all the groups contain the order 2 element $-1$ ($11$, $9$, $7$ respectively). But $5^2 = 1$ and $7^2 = 1$ in the first group, whereas $3^2 = -1$ and $7^2 = -1$ in the second. That shows these two groups are not isomorphic.

-
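Both answers can also be confirmed by brute force. The Python sketch below lists each unit group and the orders of its elements; a group of order 4 is cyclic exactly when some element has order 4:

```python
from math import gcd

def unit_group(n):
    """Elements of (Z/nZ)^x, i.e. residues coprime to n."""
    return [a for a in range(1, n) if gcd(a, n) == 1]

def element_order(a, n):
    """Multiplicative order of a modulo n (assumes gcd(a, n) = 1)."""
    k, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        k += 1
    return k

for n in (12, 10, 8):
    g = unit_group(n)
    orders = [element_order(a, n) for a in g]
    cyclic = max(orders) == len(g)
    print(f"n = {n:2d}  units = {g}  element orders = {orders}  cyclic: {cyclic}")
```

Running this shows an element of order 4 modulo 10 but only elements of order 1 and 2 modulo 12 and modulo 8, matching the two answers.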
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 19, "mathjax_display_tex": 6, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9168004989624023, "perplexity_flag": "middle"}
http://unapologetic.wordpress.com/2012/08/09/orthogonal-and-symplectic-lie-algebras/?like=1&source=post_flair&_wpnonce=9a71b83933
# The Unapologetic Mathematician

## Orthogonal and Symplectic Lie Algebras

For the next three families of linear Lie algebras we equip our vector space $V$ with a bilinear form $B$. We're going to consider the endomorphisms $f\in\mathfrak{gl}(V)$ such that

$\displaystyle B(f(x),y)=-B(x,f(y))$

If we pick a basis $\{e_i\}$ of $V$, then we have a matrix for the bilinear form

$\displaystyle B_{ij}=B(e_i,e_j)$

and one for the endomorphism

$\displaystyle f(e_i)=\sum\limits_jf_i^je_j$

So the condition in terms of matrices in $\mathfrak{gl}(n,\mathbb{F})$ comes down to

$\displaystyle\sum\limits_kB_{kj}f_i^k=-\sum_kf_j^kB_{ik}$

or, more abstractly, $Bf=-f^TB$.

So do these form a subalgebra of $\mathfrak{gl}(V)$? Linearity is easy; we must check that this condition is closed under the bracket. That is, if $f$ and $g$ both satisfy this condition, what about their commutator $[f,g]$?

$\displaystyle\begin{aligned}B([f,g](x),y)&=B(f(g(x))-g(f(x)),y)\\&=B(f(g(x)),y)-B(g(f(x)),y)\\&=-B(g(x),f(y))+B(f(x),g(y))\\&=B(x,g(f(y)))-B(x,f(g(y)))\\&=-B(x,f(g(y))-g(f(y)))\\&=-B(x,[f,g](y))\end{aligned}$

So this condition will always give us a linear Lie algebra.

We have three different families of these algebras. First, we consider the case where $\mathrm{dim}(V)=2l+1$ is odd, and we let $B$ be the symmetric, nondegenerate bilinear form with matrix

$\displaystyle\begin{pmatrix}1&0&0\\ 0&0&I_l\\ 0&I_l&0\end{pmatrix}$

where $I_l$ is the $l\times l$ identity matrix. If we write the matrix of our endomorphism in a similar form

$\displaystyle\begin{pmatrix}a&b_1&b_2\\c_1&m&n\\c_2&p&q\end{pmatrix}$

our matrix conditions turn into

$\displaystyle\begin{aligned}a&=0\\c_1&=-b_2^T\\c_2&=-b_1^T\\q&=-m^T\\n&=-n^T\\p&=-p^T\end{aligned}$

From here it's straightforward to count out $2l$ basis elements that satisfy the conditions on the first row and column, $\frac{1}{2}(l^2-l)$ that satisfy the antisymmetry for $p$, another $\frac{1}{2}(l^2-l)$ that satisfy the antisymmetry for $n$, and $l^2$ that satisfy the condition between $m$ and $q$, for a total of $2l^2+l$ basis elements. We call this Lie algebra the orthogonal algebra of $V$, and write $\mathfrak{o}(V)$ or $\mathfrak{o}(2l+1,\mathbb{F})$. Sometimes we refer to the isomorphism class of this algebra as $B_l$.

Next up, in the case where $\mathrm{dim}(V)=2l$ is even we let the matrix of $B$ look like

$\displaystyle\begin{pmatrix}0&I_l\\I_l&0\end{pmatrix}$

A similar approach to that above gives a basis with $2l^2-l$ elements. We also call this the orthogonal algebra of $V$, and write $\mathfrak{o}(V)$ or $\mathfrak{o}(2l,\mathbb{F})$. Sometimes we refer to the isomorphism class of this algebra as $D_l$.

Finally, we again take an even-dimensional $V$, but this time we use the skew-symmetric form

$\displaystyle\begin{pmatrix}0&I_l\\-I_l&0\end{pmatrix}$

This time we get a basis with $2l^2+l$ elements. We call this the symplectic algebra of $V$, and write $\mathfrak{sp}(V)$ or $\mathfrak{sp}(2l,\mathbb{F})$. Sometimes we refer to the isomorphism class of this algebra as $C_l$.

Along with the special linear Lie algebras, these form the "classical" Lie algebras. It's a tedious but straightforward exercise to check that for any classical Lie algebra $L$, each basis element $e$ of $L$ can be written as a bracket of two other elements of $L$. That is, we have $[L,L]=L$. Since $L\subseteq\mathfrak{gl}(V)$ for some $V$, and since we know that $[\mathfrak{gl}(V),\mathfrak{gl}(V)]=\mathfrak{sl}(V)$, this establishes that $L\subseteq\mathfrak{sl}(V)$ for all classical $L$.
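The closure computation above is easy to spot-check numerically. The NumPy sketch below builds random matrices satisfying $Bf=-f^TB$ for the skew-symmetric form used for $\mathfrak{sp}(2l,\mathbb{F})$ (the projection $M\mapsto M-B^{-1}M^TB$ is one convenient way to produce such matrices; it is our choice, not taken from the post) and confirms that their commutator satisfies the same condition:

```python
import numpy as np

rng = np.random.default_rng(0)
l = 3
I = np.eye(l)
Z = np.zeros((l, l))

# the skew-symmetric form used for sp(2l, F)
B = np.block([[Z, I], [-I, Z]])
Binv = np.linalg.inv(B)

def project(M):
    """Return f = M - B^{-1} M^T B, which satisfies B f = -f^T B."""
    return M - Binv @ M.T @ B

def in_algebra(f, tol=1e-10):
    return np.allclose(B @ f, -f.T @ B, atol=tol)

f = project(rng.standard_normal((2 * l, 2 * l)))
g = project(rng.standard_normal((2 * l, 2 * l)))
bracket = f @ g - g @ f

print(in_algebra(f), in_algebra(g), in_algebra(bracket))   # all True
print(np.trace(f))   # numerically zero, consistent with L sitting inside sl(V)
```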
Posted by John Armstrong | Algebra, Lie Algebras
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 61, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.909278154373169, "perplexity_flag": "head"}
http://en.wikipedia.org/wiki/Resonator
Resonator For other uses, see Resonator (disambiguation). A standing wave in a rectangular cavity resonator. A resonator is a device or system that exhibits resonance or resonant behavior, that is, it naturally oscillates at some frequencies, called its resonant frequencies, with greater amplitude than at others. The oscillations in a resonator can be either electromagnetic or mechanical (including acoustic). Resonators are used to either generate waves of specific frequencies or to select specific frequencies from a signal. Musical instruments use acoustic resonators that produce sound waves of specific tones. A cavity resonator, usually used in reference to electromagnetic resonators, is one in which waves exist in a hollow space inside the device. Acoustic cavity resonators, in which sound is produced by air vibrating in a cavity with one opening, are known as Helmholtz resonators. Explanation A physical system can have as many resonant frequencies as it has degrees of freedom; each degree of freedom can vibrate as a harmonic oscillator. Systems with one degree of freedom, such as a mass on a spring, pendulums, balance wheels, and LC tuned circuits have one resonant frequency. Systems with two degrees of freedom, such as coupled pendulums and resonant transformers can have two resonant frequencies. As the number of coupled harmonic oscillators grows, the time it takes to transfer energy from one to the next becomes significant. The vibrations in them begin to travel through the coupled harmonic oscillators in waves, from one oscillator to the next. The term resonator is most often used for a homogeneous object in which vibrations travel as waves, at an approximately constant velocity, bouncing back and forth between the sides of the resonator. Resonators can be viewed as being made of millions of coupled moving parts (such as atoms). Therefore they can have millions of resonant frequencies, although only a few may be used in practical resonators. The oppositely moving waves interfere with each other to create a pattern of standing waves in the resonator. If the distance between the sides is $d\,$, the length of a round trip is $2d\,$. To cause resonance, the phase of a sinusoidal wave after a round trip must be equal to the initial phase so the waves self-reinforce. The condition for resonance in a resonator is that the round trip distance, $2d\,$, is equal to an integral number of wavelengths $\lambda\,$ of the wave: $2d = N\lambda,\qquad\qquad N \in \{1,2,3,\dots\}$ If the velocity of a wave is $c\,$, the frequency is $f = c / \lambda\,$ so the resonant frequencies are: $f = \frac{Nc}{2d}\qquad\qquad N \in \{1,2,3,\dots\}$ So the resonant frequencies of resonators, called normal modes, are equally spaced multiples (harmonics) of a lowest frequency called the fundamental frequency. The above analysis assumes the medium inside the resonator is homogeneous, so the waves travel at a constant speed, and that the shape of the resonator is rectilinear. If the resonator is inhomogeneous or has a nonrectilinear shape, like a circular drumhead or a cylindrical microwave cavity, the resonant frequencies may not occur at equally spaced multiples of the fundamental frequency. They are then called overtones instead of harmonics. There may be several such series of resonant frequencies in a single resonator, corresponding to different modes of vibration. 
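As a quick worked example of the relation $f = \frac{Nc}{2d}$ above (the numbers are our own choices, not from the article): for sound traveling at about 343 m/s between reflecting walls 0.5 m apart, a short Python sketch gives the first few resonant frequencies.

```python
c = 343.0    # wave speed in m/s (sound in air at roughly 20 degrees C)
d = 0.5      # distance between the reflecting sides, in metres

# resonant frequencies f = N * c / (2 * d) for the first few modes N
for N in range(1, 6):
    f = N * c / (2 * d)
    print(f"N = {N}: f = {f:.0f} Hz")
```

This prints 343, 686, 1029, 1372, and 1715 Hz, i.e. the equally spaced harmonics described above.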
Electromagnetic Electromagnetism Scientists An electrical circuit composed of discrete components can act as a resonator when both an inductor and capacitor are included. Oscillations are limited by the inclusion of resistance, either via a specific resistor component, or due to resistance of the inductor windings. Such resonant circuits are also called RLC circuits after the circuit symbols for the components. A distributed-parameter resonator has capacitance, inductance, and resistance that cannot be isolated into separate lumped capacitors, inductors, or resistors. An example of this, much used in filtering, is the helical resonator. A single layer coil (or solenoid) that is used as a secondary or tertiary winding in a Tesla coil or magnifying transmitter is also a distributed resonator. Cavity resonators A cavity resonator is a hollow conductor blocked at both ends and along which an electromagnetic wave can be supported. It can be viewed as a waveguide short-circuited at both ends (see Microwave cavity). The cavity's interior surfaces reflect a wave of a specific frequency. When a wave that is resonant with the cavity enters, it bounces back and forth within the cavity, with low loss (see standing wave). As more wave energy enters the cavity, it combines with and reinforces the standing wave, increasing its intensity. Examples RF cavities in the linac of the Australian Synchrotron are used to accelerate and bunch beams of electrons; the linac is the tube passing through the middle of the cavity. An illustration of the electric and magnetic field of one of the possible modes in a cavity resonator. The cavity magnetron is a vacuum tube with a filament in the center of an evacuated, lobed, circular cavity resonator. A perpendicular magnetic field is imposed by a permanent magnet. The magnetic field causes the electrons, attracted to the (relatively) positive outer part of the chamber, to spiral outward in a circular path rather than moving directly to this anode. Spaced about the rim of the chamber are cylindrical cavities. The cavities are open along their length and so they connect with the common cavity space. As electrons sweep past these openings they induce a resonant high frequency radio field in the cavity, which in turn causes the electrons to bunch into groups. A portion of this field is extracted with a short antenna that is connected to a waveguide (a metal tube usually of rectangular cross section). The waveguide directs the extracted RF energy to the load, which may be a cooking chamber in a microwave oven or a high gain antenna in the case of radar. The klystron, tube waveguide, is a beam tube including at least two apertured cavity resonators. The beam of charged particles passes through the apertures of the resonators, often tunable wave reflection grids, in succession. A collector electrode is provided to intercept the beam after passing through the resonators. The first resonator causes bunching of the particles passing through it. The bunched particles travel in a field-free region where further bunching occurs, then the bunched particles enter the second resonator giving up their energy to excite it into oscillations. It is a particle accelerator that works in conjunction with a specifically tuned cavity by the configuration of the structures. On the beamline of an accelerator system, there are specific sections that are cavity resonators for RF. 
The reflex klystron is a klystron utilizing only a single apertured cavity resonator through which the beam of charged particles passes, first in one direction. A repeller electrode is provided to repel (or redirect) the beam after passage through the resonator back through the resonator in the other direction and in proper phase to reinforce the oscillations set up in the resonator. In a laser, light is amplified in a cavity resonator that is usually composed of two or more mirrors. Thus an optical cavity, also known as a resonator, is a cavity with walls that reflect electromagnetic waves (light). This allows standing wave modes to exist with little loss outside the cavity. Mechanical Mechanical resonators are used in electronic circuits to generate signals of a precise frequency. These are called piezoelectric resonators, the most common of which is the quartz crystal. They are made of a thin plate of quartz with metal plates attached to each side, or in low frequency clock applications a tuning fork shape. The quartz material performs two functions. Its high dimensional stability and low temperature coefficient makes it a good resonator, keeping the resonant frequency constant. Second, the quartz's piezoelectric property converts the mechanical vibrations into an oscillating voltage, which is picked up by the plates on its surface, which are electrically attached to the circuit. These crystal oscillators are used in quartz clocks and watches, to create the clock signal that runs computers, and to stabilize the output signal from radio transmitters. Mechanical resonators can also be used to induce a standing wave in other medium. For example a multiple degree of freedom system can be created by imposing a base excitation on a cantilever beam. In this case the standing wave is imposed on the beam.[1] This type of system can be used as a sensor to track changes in frequency or phase of the resonance of the fiber. One application is as a measurement device for dimensional metrology.[2] Acoustic The most familiar examples of acoustic resonators are in musical instruments. Every musical instrument has resonators. Some generate the sound directly, such as the wooden bars in a xylophone, the head of a drum, the strings in stringed instruments, and the pipes in an organ. Some modify the sound by enhancing particular frequencies, such as the sound box of a guitar or violin. Organ pipes, the bodies of woodwinds, and the sound boxes of stringed instruments are examples of acoustic cavity resonators. Automobiles A sport motorcycle, equipped with exhaust resonator, designed for performance. The exhaust pipes in automobile exhaust systems are designed as acoustic resonators that work with the muffler to reduce noise, by making sound waves "cancel each other out"[1]. The "exhaust note" is an important feature for many vehicle owners, so both the original manufacturers and the after-market suppliers use the resonator to enhance the sound. In 'tuned exhaust' systems designed for performance, the resonance of the exhaust pipes can also be used to 'suck' the combustion products out of the combustion chamber at a particular engine speed or range of speeds. Percussion instruments In many keyboard percussion instruments, below the centre of each note is a tube, which is an acoustic cavity resonator, referred to simply as the resonator. The length of the tube varies according to the pitch of the note, with higher notes having shorter resonators. 
The tube is open at the top end and closed at the bottom end, creating a column of air that resonates when the note is struck. This adds depth and volume to the note. In string instruments, the body of the instrument is a resonator. The tremolo effect of a vibraphone is achieved via a mechanism that opens and shuts the resonators. Stringed instruments String instruments such as the bluegrass banjo may also have resonators. Many five-string banjos have removable resonators, so players can use the instrument with a resonator in bluegrass style, or without it in folk music style. The term resonator, used by itself, may also refer to the resonator guitar. The modern ten-string guitar, invented by Narciso Yepes, adds four string resonators to the traditional classical guitar. By tuning these resonators in a very specific way (C, Bb, Ab, Gb) and making use of their strongest partials (corresponding to the octaves and fifths of the strings' fundamental tones), the bass strings of the guitar now resonate equally with any of the 12 tones of the chromatic octave. The guitar resonator is a device for driving guitar string harmonics by an electromagnetic field. This resonance effect is caused by a feedback loop and is applied to drive the fundamental tones, octaves, 5th, 3rd to an infinite sustain. References and notes 1. M.B. Bauza, R.J Hocken, S.T Smith, S.C Woody, (2005), The development of a virtual probe tip with application to high aspect ratio microscale features, Rev. Sci Instrum, 76 (9) 095112  .
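Tying this to the percussion-instrument resonator tubes described above (closed at the bottom, open at the top): such a tube resonates near its quarter-wave frequency, so the required length scales roughly as $v/4f$. A small sketch, with illustrative pitches and ignoring end corrections:

```python
# Approximate length of a closed-bottom, open-top resonator tube tuned to a note,
# using the quarter-wave relation L ~ v / (4 f) and ignoring end corrections.
SPEED_OF_SOUND = 343.0  # m/s in air at ~20 C

def tube_length(frequency_hz):
    return SPEED_OF_SOUND / (4.0 * frequency_hz)

# A few example pitches: higher notes need shorter tubes, as the article says.
for note, f in [("A4", 440.0), ("A5", 880.0), ("A6", 1760.0)]:
    print(f"{note} ({f:.0f} Hz): tube length ~ {tube_length(f)*100:.1f} cm")
```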
http://mathoverflow.net/questions/61985?sort=newest
## Finding the degree of minimal polynomials Let $x = \sqrt[a_1]{p_1} + \sqrt[a_2]{p_2} + \cdots + \sqrt[a_n]{p_n}$ be a number such that all $a_n$ are integers and all $p_n$ are rational. I've been noticing that for every such number $x$, the degree of its minimal polynomial is seemingly always equal to $\prod_{1}^n a_n$. Is that valid for all values of $a_n$? If so, is there a proof? - On a related topic, see exercises 18--22 on pp. 290--291 of Lang's Undergraduate Algebra (3rd edition), especially the remark after exercise 18 for context. If you want to find this on Google books, search for the phrase "most people" in the book, which is part of the remark. Those exercises concern the field degree of a field extension obtained by adjoining to a field $F$ several $n$-th roots of elements of $F$, and in practice one often finds a "random" sum of numbers algebraic over a field is a primitive element for the extension generated by all of those numbers. – KConrad Apr 17 2011 at 3:57 The question at mathoverflow.net/questions/26832/… is relevant to the question here. – KConrad Apr 17 2011 at 4:02 ## 3 Answers No. Some conditions are needed on the $a_i$ and $p_i$. For instance, take $n=2$, $a_1 = a_2 = 2$, $p_1 = p_2 = 2$. Then $x = 2 \sqrt{2}$, which has minimal polynomial $x^2 - 8$. As an even simpler example, take $n=1$, $a_1 = 2$, $p_1 = 4$; then $x$ is rational. For a less trivial example, take $a_1= 4$, $a_2 = 6$, $p_1=p_2=2$. Check that this has a minimal polynomial of degree 12. In fact, this isn't really true at all. One can, however, prove that the degree of the minimal polynomial is at most $\prod a_n$, which is an easy exercise in field theory. Any graduate algebra textbook covering Galois theory will be more than sufficient to prove this; just remember the degree of the minimal polynomial is the same as the dimension of the extension field viewed as a vector space over the base field. EDIT: After much miscommunication on my part, we've reached the following results: Suppose $a_1,\ldots,a_n$ are pairwise relatively prime positive integers and $p_1, \ldots, p_n$ are integers such that $\sqrt[a_i]{p_i}$ is of degree $a_i$ for each $i$. Then $\sqrt[a_1]{p_1} + \cdots + \sqrt[a_n]{p_n}$ is of degree $\displaystyle \prod_{i=1}^n a_i$. The condition that each $\sqrt[a_i]{p_i}$ has degree $a_i$ is met (by the Eisenstein criterion) whenever there is a prime $q_i$ such that $q_i \mid p_i$ and $q_i^2 \nmid p_i$. - What if $\gcd(a_n, p_n) = 1$, and no pairs $\{a_n, p_n\}$ are equal? Then is it satisfied? – Victor Apr 17 2011 at 2:47 I'm not totally sure what having $\gcd(a_n,p_n)=1$ gives you (or conditions regarding pairs $(a_n,p_n)$), but if you have $\gcd(a_1,...,a_n)=1$ then your statement holds. If $\gcd(p_1, \ldots ,p_n)=1$ and $a_1= \cdots = a_n$, it should also hold. I'm not sure about the general case for $\gcd(p_1, \ldots ,p_n)=1$, though I wouldn't be surprised if your statement held then. – The Cheese Stands Alone Apr 17 2011 at 2:58 There are a good number of cases where your statement will hold, but it's not true in general. If the $a_i$ and the $p_i$ aren't related in any obvious way, it's probably true for small values of $n$, but I'd still check. Wolfram alpha (www.wolframalpha.com) can compute the minimal polynomials in sufficiently small examples, and most computational algebra engines can do it for arbitrary numbers of the form you want.
– The Cheese Stands Alone Apr 17 2011 at 3:00 Scratch what I said above; it's not true unless each $p_i^{1/a_i}$ has minimal polynomial of degree $a_i$. This holds in the case that some prime $q$ divides $p_i$ but $q^2$ doesn't divide $p_i$, by the Eisenstein Criterion. With more advanced arguments, slightly stronger statements can be made. It fails in the case where we take $\sqrt{4}$. Another similar case is that the $k$th root of unity $e^{2\pi i/k}$ satisfies the polynomial $x^{k-1}+\cdots+x+1$, which is of degree $k-1$. But when $p_i^{1/a_i}$ is of degree $a_i$ over $\mathbb{Q}$, the above should hold. – The Cheese Stands Alone Apr 17 2011 at 6:57 1 Dear Logan, your statement after "...not true unless..." is not correct: the sum $\sqrt{2\cdot 3}+ \sqrt{2\cdot 3^3}$ does not have degree 4 although all your hypotheses are verified: the numbers $\sqrt{2\cdot 3}$ and $\sqrt{2\cdot 3^3}$ have minimal polynomials of degree 2 and the prime $q=2$ divides $2\cdot 3$ and $2\cdot 3^3$ whereas $q^2=2^2$ divides neither. In view of this and your "scratch what I said above", could you please sum up your claim in a new statement and provide a proof of that new statement or a reference? You are welcome to use "more advanced arguments". – Georges Elencwajg Apr 17 2011 at 17:23 The canonical references for this are: MR0818878 (87b:68058) Zippel, Richard (1-MIT-C) Simplification of expressions involving radicals. J. Symbolic Comput. 1 (1985), no. 2, 189–210. MR1148819 (92k:12008) Landau, Susan (1-MA-C) Simplification of nested radicals. SIAM J. Comput. 21 (1992), no. 1, 85–110. and more recently MR1776235 (2001g:12004) Blömer, J. (D-PDRB) Denesting by bounded degree radicals. (English summary) Fifth European Symposium on Algorithms (Graz, 1997). Algorithmica 28 (2000), no. 1, 2–15. - Besicovitch has proved the following related interesting result: Consider an integer $n\gt 1$ and distinct prime numbers $p_1,p_2,\ldots ,p_k.$ Then the field $F=\mathbb Q (\sqrt[n]{p_1},\ldots ,\sqrt[n]{p_k})$ has dimension $n^k$ over $\mathbb Q$. More precisely, a $\mathbb Q$-basis of that field $F$ is given by the radicals $$\sqrt[n]{p_1^{m_1}\ldots p_i^{m_i} \ldots p_k^{m_k} } \quad (\; 0\leq m_i \lt n \quad , \quad 1\leq i\leq k )$$ (The case $n=2$ is a classical chestnut in Galois theory.) This does not answer the OP's question but at least assures us that, for example, $$\sqrt[3]{900}+\sqrt[3]{36}+ \sqrt[3]{15}+\sqrt[3]{150} \notin \mathbb Q$$ which is not so simple to check directly. I have the pessimistic feeling that there is no very satisfactory general answer to the question "when does the sum $\sqrt[n_1]{a_1}+ \sqrt[n_2]{a_2}+\cdots+\sqrt[n_k]{a_k}$ have degree $n_1 n_2 \cdots n_k$", but I'd love to be shown wrong. Bibliography: Besicovitch's original article is: Abram S. Besicovitch, "On the linear independence of fractional powers of integers", Journal of the London Mathematical Society 15 (1940), 3-6. Here is a more recent and accessible proof: Ian Richards, "An application of Galois theory to elementary arithmetic", Advances in Mathematics 13 (1974), 268-273. -
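If a computer algebra system is at hand, both the degree-12 counterexample from the accepted answer and the smallest case of Besicovitch's statement can be checked directly; a minimal sketch using sympy (the specific numbers are just the ones discussed above):

```python
from sympy import Rational, sqrt, symbols, minimal_polynomial

x = symbols('x')

# Counterexample from the accepted answer: 2**(1/4) + 2**(1/6).
# The naive product of root orders would predict degree 4*6 = 24,
# but the minimal polynomial actually has degree 12.
alpha = 2**Rational(1, 4) + 2**Rational(1, 6)
p = minimal_polynomial(alpha, x)
print(p.as_poly(x).degree())          # 12

# Smallest case (n = 2, primes 2 and 3) of Besicovitch's result:
# [Q(sqrt(2), sqrt(3)) : Q] = 2**2 = 4, and sqrt(2) + sqrt(3) already has degree 4.
beta = sqrt(2) + sqrt(3)
print(minimal_polynomial(beta, x))    # x**4 - 10*x**2 + 1
```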
http://math.stackexchange.com/questions/300758/rouches-theorem-problem-for-a-general-polynomial
Rouché's Theorem problem for a general polynomial Prove that for any $a\in \mathbb{C}$ and $n\geq 2$, the polynomial $az^n+z+1$ has at least one root in the disk $|z| < 2$. My instinct is to use Rouché's Theorem for this problem; however, I have only been able to prove the case when $|a| > 3/2^n$. In this case we let $f = az^n$ and $g = az^n+z+1$, and Rouché's Theorem works great. When we have $|a| \leq 3/2^n$, we run into problems. No matter what we choose for $f$ we haven't been able to get Rouché's Theorem to help us. Is this the right approach, or is there a theorem I'm forgetting that would be more useful here? - Can you help me by any chance with my problem please? – Carpediem Feb 12 at 0:54 2 Answers The claim is in general not correct for $n=2$. Consider for example $a= \frac{1}{4}$: $$a \cdot z^n +z+1 = \frac{1}{4} z^2 + z+1 = \frac{(z+2)^2}{4}$$ ... and this polynomial clearly has no roots in the disk $|z|<2$. I found proofs (using Rouché's theorem) for some more special cases: • $|a| < \frac{1}{2^n}$: In this case let $f(z) := z$; then, for $|z|=2$, $$|f(z)-g(z)| = |a \cdot z^n+1| \leq |a| \cdot 2^n+1 < 1+1 = 2 = |f(z)| \leq |f(z)|+|g(z)|$$ • $|a| < \frac{3}{2^n}$, $a \in \mathbb{R}$: Let $c:= \frac{1}{2} \left(a + \frac{3}{2^n} \right)$ and $f(z) :=c \cdot z$; then, for $|z|=2$, $$|f(z)-g(z)| = |(a-c) \cdot z^n+z+1| \leq |a-c| \cdot 2^n+2+1 = \bigg| \underbrace{\frac{1}{2} a - \frac{1}{2} \cdot \frac{3}{2^n}}_{>0} \bigg| \cdot 2^n+3 \\ = c \cdot 2^n = |f(z)| \leq |f(z)|+|g(z)|$$ - We just discussed this problem and figured out a solution with Elise and other fellow grad students yesterday, so I'll write it here too. As saz noted, the claim as stated is not correct. It should say "at least one root in the closed disk $|z| \leq 2$". And again as noted in saz's answer, we need only consider the case $|a| \geq \frac{1}{2^n}$. If the roots of the polynomial are $z_1, \dots, z_n$, we have $$|z_1| \dots |z_n| = \frac{1}{|a|} \leq 2^n \, .$$ Therefore at least one of the $z_i$ should satisfy $|z_i| \leq 2$. -
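A numerical spot check of the corrected statement ("closed disk $|z| \le 2$") is easy if numpy is available; the particular values of $a$ and $n$ below are arbitrary, and $a = 1/4$, $n = 2$ is the boundary case from saz's answer:

```python
import numpy as np

def min_root_modulus(a, n):
    """Smallest |root| of a*z^n + z + 1 (numpy wants coefficients from z^n down to 1)."""
    coeffs = [a] + [0] * (n - 2) + [1, 1]
    return min(abs(r) for r in np.roots(coeffs))

# a = 1/4, n = 2 gives the double root z = -2, exactly on the boundary of |z| <= 2;
# the other (a, n) pairs are arbitrary examples.
for a, n in [(0.25, 2), (1e-6, 5), (3 + 4j, 7)]:
    print(a, n, round(min_root_modulus(a, n), 4))
```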
http://stats.stackexchange.com/questions/17581/significance-testing-or-cross-validation
# Significance testing or cross validation? Two common approaches for selecting correlated variables are significance tests and cross validation. What problem does each try to solve and when would I prefer one over the other? - ## 2 Answers First, lets be explicit and put the question into the context of multiple linear regression where we regress a response variable, $y$, on several different variables $x_1, \ldots, x_p$ (correlated or not), with parameter vector $\beta = (\beta_0, \beta_1, \ldots, \beta_p)$ and regression function $$f(x_1, \ldots, x_p) = \beta_0 + \beta_1 x_1 + \ldots + \beta_p x_p,$$ which could be a model of the mean of the response variable for a given observation of $x_1, \ldots, x_p$. The question is how to select a subset of the $\beta_i$'s to be non-zero, and, in particular, a comparison of significance testing versus cross validation. To be crystal clear about the terminology, significance testing is a general concept, which is carried out differently in different contexts. It depends, for instance, on the choice of a test statistic. Cross validation is really an algorithm for estimation of the expected generalization error, which is the important general concept, and which depends on the choice of a loss function. The expected generalization error is a little technical to define formally, but in words it is the expected loss of a fitted model when used for prediction on an independent data set, where expectation is over the data used for the estimation as well as the independent data set used for prediction. To make a reasonable comparison lets focus on whether $\beta_1$ could be taken equal to 0 or not. • For significance testing of the null hypothesis that $\beta_1 = 0$ the main procedure is to compute a $p$-value, which is the probability that the chosen test-statistic is larger than observed for our data set under the null hypothesis, that is, when assuming that $\beta_1 = 0$. The interpretation is that a small $p$-value is evidence against the null hypothesis. There are commonly used rules for what "small" means in an absolute sense such as the famous 0.05 or 0.01 significance levels. • For the expected generalization error we compute, perhaps using cross-validation, an estimate of the expected generalization error under the assumption that $\beta_1 = 0$. This quantity tells us how well models fitted by the method we use, and with $\beta_1 = 0$, will perform on average when used for prediction on independent data. A large expected generalization error is bad, but there are no rules in terms of its absolute value on how large it needs to be to be bad. We will have to estimate the expected generalization error for the model where $\beta_1$ is allowed to be different from 0 as well, and then we can compare the two estimated errors. Whichever is the smallest corresponds to the model we choose. Using significance testing we are not directly concerned with the "performance" of the model under the null hypothesis versus other models, but we are concerned with documenting that the null is wrong. This makes most sense (to me) in a confirmatory setup where the main objective is to confirm and document an a priory well specified scientific hypothesis, which can be formulated as $\beta_1 \neq 0$. 
The expected generalization error is, on the other hand, only concerned with average "performance" in terms of expected prediction loss, and concluding that it is best to allow $\beta_1$ to be different from 0 in terms of prediction is not an attempt to document that $\beta_1$ is "really" different from 0 $-$ whatever that means. I have personally never worked on a problem where I formally needed significance testing, yet $p$-values find their way into my work and do provide sensible guides and first impressions for variable selection. I am, however, mostly using penalization methods like lasso in combination with the generalization error for any formal model selection, and I am slowly trying to suppress my inclination to even compute $p$-values. For exploratory analysis I see no argument in favor of significance testing and $p$-values, and I will definitely recommend focusing on a concept like expected generalization error for variable selection. In other contexts where one might consider using a $p$-value for documenting that $\beta_1$ is not 0, I would say that it is almost always a better idea to report an estimate of $\beta_1$ and a confidence interval instead. - Simply using significance tests and a stepwise procedure to perform model selection can lead you to believe that you have a very strong model with significant predictors when you, in fact, do not; you may get strong correlations by chance and these correlations can seemingly be enhanced as you remove other unnecessary predictors. The selection procedure, of course, keeps only those variables with the strongest correlations with the outcome and, as the stepwise procedure moves forward, the probability of committing a Type I error becomes larger than you would imagine. This is because the standard errors (and thus p-values) are not adjusted to take into account the fact that the variables were not selected for inclusion in the model randomly and multiple hypothesis tests were conducted to choose that set. David Freedman has a cute paper in which he demonstrates these points called "A Note on Screening Regression Equations." The abstract: Consider developing a regression model in a context where substantive theory is weak. To focus on an extreme case, suppose that in fact there is no relationship between the dependent variable and the explanatory variables. Even so, if there are many explanatory variables, the $R^2$ will be high. If explanatory variables with small t statistics are dropped and the equation refitted, the $R^2$ will stay high and the overall F will become highly significant. This is demonstrated by simulation and by asymptotic calculation. One potential solution to this problem, as you mentioned, is using a variant of cross validation. When I don't have a good economic (my area of research) or statistical reason to believe my model, this is my preferred approach to selecting an appropriate model and performing inference. Other respondents might mention that stepwise procedures using the AIC or BIC are asympotically equivalent to cross validation. This only works as the number of observations relative to the number of predictors gets large, however. In the context of having many variables relative to the number of observations (Freedman says 1 variable per 10 or fewer observations), selection in this manner can exhibit the poor properties discussed above. In an age of powerful computers, I don't see any reason not to use cross validation as a model selection procedure over stepwise selection. -
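The screening effect described in Freedman's abstract is easy to reproduce in a small simulation. The sketch below is only illustrative (pure-noise data, an arbitrary 100-by-50 design, and a crude keep-if-$|t|>1$ rule rather than Freedman's exact procedure, assuming statsmodels and scikit-learn are available): the screened model will typically look "significant" in-sample, while cross-validation with the selection step redone inside each fold reports the near-zero predictive value one should expect from noise.

```python
import numpy as np
import statsmodels.api as sm
from sklearn.feature_selection import SelectKBest, f_regression
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
n, p = 100, 50                      # arbitrary sizes; y is pure noise, unrelated to X
X = rng.standard_normal((n, p))
y = rng.standard_normal(n)

# Freedman-style screening: fit everything, drop predictors with |t| <= 1, refit.
full = sm.OLS(y, sm.add_constant(X)).fit()
keep = [j for j in range(p) if abs(full.tvalues[j + 1]) > 1.0]
screened = sm.OLS(y, sm.add_constant(X[:, keep])).fit()
print("kept", len(keep), "of", p, "pure-noise predictors")
print("screened in-sample R^2:", round(screened.rsquared, 2),
      "overall F p-value:", screened.f_pvalue)   # typically 'significant' despite pure noise

# Honest alternative: put a (univariate) screening step *inside* cross-validation,
# so selection is redone on each training fold; the estimated R^2 is then near zero
# or negative, as it should be for noise.
model = make_pipeline(SelectKBest(f_regression, k=len(keep) or 1), LinearRegression())
print("cross-validated R^2:", cross_val_score(model, X, y, cv=5, scoring="r2").mean())
```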
http://physics.stackexchange.com/questions/53912/scalar-field-lagrangian-and-potential?answertab=active
# Scalar field Lagrangian and potential This question is a continuation of this Phys.SE post. Scalar field theory does not have gauge symmetry, and in particular, $\phi\to\phi-1$ is not a gauge transformation. But why? I would also like to see the mathematics that represents the interaction potential in the Lagrangian for a scalar field. Details are in the paper: see equation (3). - ## 1 Answer Because there is no physical redundancy in the description. A gauge theory (and related gauge transformations) only occurs if there are different field configurations corresponding to the same physical configuration. Such redundant configurations can be related by local gauge transformations (see for example the vector potential $A_\mu$ in electrodynamics). In your case (which the other question referred to), the Lagrangian is not invariant under your transformation, since the quantity $$U(\phi)= \frac{1}{8} \phi^2 (\phi -2)^2$$ changes under $\phi\rightarrow\phi-1$ to $$U(\phi-1)= \frac{1}{8} (\phi-1)^2 (\phi -3)^2.$$ - Where does the equation $U(\phi)= \frac{1}{8} \phi^2 (\phi -2)^2$ come from, or more precisely, why is it written in this form? – Unlimited Dreamer Feb 14 at 11:51 My curiosity is why he wrote the equation like this. – Unlimited Dreamer Feb 14 at 11:52 Such a term (i.e. one of a higher order than 2) is necessary in order to get interactions in your theory. – Frederic Brünner Feb 14 at 12:01
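A tiny symbolic check of the answer's computation (assuming sympy; it adds nothing beyond expanding the two expressions) makes the non-invariance explicit:

```python
from sympy import symbols, Rational, expand, simplify

phi = symbols('phi')

U = Rational(1, 8) * phi**2 * (phi - 2)**2   # potential from the answer
U_shifted = U.subs(phi, phi - 1)             # effect of the shift phi -> phi - 1

print(expand(U))                             # phi**4/8 - phi**3/2 + phi**2/2
print(expand(U_shifted))                     # phi**4/8 - phi**3 + 11*phi**2/4 - 3*phi + 9/8
print(simplify(U_shifted - U) == 0)          # False: U is not invariant under the shift
```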
http://mathoverflow.net/questions/53498?sort=oldest
## Nontrivial circular arguments? ### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points) There is a famous circular argument for the Prime Number Theorem (PNT). It turns out that there exists an infinite sequence of elementary-to-prove Chebyshev-type estimates that taken together imply PNT. Unfortunately, the collective existence of all these proofs seems to require the PNT, so one must work hard a la Selberg and Erdos for an elementary proof. See Harold Diamond, Elementary methods in the study of the distribution of prime numbers, Bull. Amer. Math. Soc. N. S. 7 (1982), 553-589. Now on the traditional view, circular proofs simply have no value at all. Yet one feels perhaps that the example in the previous paragraph has something to say. Just an illusion? Or does there exist a foundational framework where circular proofs of this special sort enjoy bona fide status? Just to riff a little more, imagine that the Goldbach conjecture turns out independent of PA (or some other, perhaps weaker, axiom system for arithmetic). The truth of the Goldbach conjecture (required for independence!) would then imply the existence of a proof (trivial!) for any given even number that that one even number equals the sum of two primes. Now that obviously isn't very interesting compared to the example in the first paragraph, and perhaps the difference has something to do with the greater quantifier complexity of PNT? As a side question, are there any similar stories in the lore of the Riemann Hypothesis? For example, does RH imply the existence of an infinite sequence of zero-free region proofs of a known form that collectively amount to RH? EDIT: Lest this come up repeatedly, let me expand upon a remark I made in a comment below: I would be interested to know if RH implies the existence of proofs of a known form in a known system within which one does not assume RH, such that the conclusions of all these proofs conjunctively yield RH. Actually the answer to my question is "yes" though I find my own example unsatisfying (rather like the Goldbach example): We can check zeros up to a given magnitude rigorously by known (non-trivial) numerical techniques. RH predicts these tests will come out positive, but of course the tests don't rely on RH. If we know they all come out positive, that's RH. I find this unsatisfying because the little proofs approximate the whole of RH so badly (compared to how the Chebyshev estimates really do make one feel one has PNT for all practical purposes). Now a family of zero-free regions that union up to $\sigma >1/2$ where each new zero-free region had strong arithmetic consequences, that would seem interesting. - 3 Perhaps psychologically we feel that the infinite sequence of Chebyshev-type estimates could, in principle, be attacked by other methods. I don't think anyone has ruled out this possibility, in any case. – Qiaochu Yuan Jan 27 2011 at 15:53 2 Every circular proof is also a proof of equivalence. That is, the unsatisfying circular proof, using an assumption that ends up depending on the thing to be proved, is also a proof the assumption is equivalent to the conclusion (they both imply each other). The hard part is to extract from the disappointment the bidirection or circle of implication. – Mitch Harris Jan 27 2011 at 18:13 2 The estimate $\psi(x)=x+O_{\varepsilon}(x^{1/2+\varepsilon})$ implies RH, which implies $\psi(x)=x+O(x^{1/2}\log^2{x})$. I've always found this rather amusing. 
– David Hansen Jan 28 2011 at 1:33 1 Given that $\sqrt{2}$ is irrational we get that for every rational with $\frac{p}{q} \le \sqrt{2}$ we have $\frac{p}{q} <\frac{p}{q}+\frac{1}{3q^2} <\sqrt{2}$ with something similar from above. – Aaron Meyerowitz Jan 28 2011 at 6:12 4 tricki.org/article/… – Terry Tao Jan 28 2011 at 11:03 show 1 more comment ## 1 Answer Perhaps an example of the kind of circularity you mention arises with the self-reference phenomenon that arises in connection with the incompleteness theorems and related applications. Specifically, Gödel proved the fixed-point lemma that for any assertion $\varphi(x)$ in arithmetic, there is a sentence $\psi$ such that PA, or any sufficiently powerful and expressible theory, proves that $\psi$ is equivalent to $\varphi(\ulcorner\psi\urcorner)$. In other words, $\psi$ is equivalent to the statement "$\psi$ has property $\varphi$." Thus, statements in the language of arithmetic can refer to themselves, and so self-reference, the stuff of paradox and nonsense, enters our beautiful number theory. One famous example, used by Gödel to prove the first incompleteness theorem, occurs when $\varphi(x)$ asserts, "$x$ is the code of a statement having no proof in PA", for then the resulting fixed point $\psi$ effectively asserts "this statement is not provable". It follows now that it cannot be provable, for then it would be a false provable statement, and so it is true. Thus, it is a true unprovable statement, establishing the first incompleteness theorem. A dual version of this, however, exhibits your circularity property in a stronger way. Namely, let us apply the fixed point lemma to the formula $p(x)$ asserting "$x$ is the code of a statement provable in PA." In this case, the resulting fixed point $\psi$ asserts "this statement IS provable." Consider now the following theorem of Löb: Theorem.(Löb) If the Peano Axioms (PA) prove the implication (PA proves $\varphi$)$\to\varphi$, then PA proves $\varphi$ directly. (And the converse is immediate, so PA proves that (PA proves $\varphi$)$\to \varphi$ if and only if PA proves $\varphi$.) In the case of $\psi$ asserting "$\psi$ is provable," we have that the hypothesis of Löb's theorem holds, and so we may make the conclusion that yes, indeed, $\psi$ really is provable! In other words, the statement "this statement is provable" really is provable, although no naive argument will establish this. The proof of Löb's theorem is itself a surprising exercise in circularity, something like the following: Theorem. Santa exists. Proof. Let $S$ be the statement, "If S holds, then Santa exists." Now, we claim that $S$ is true. Since it is an implication, we assume the hypothesis, and argue for the conclusion. So assume that the hypothesis of S is true; that is, assume $S$ holds. But then the implication expressed by $S$ is true. So the conclusion that Santa exists is also true. So we have shown under the assumption of the hypothesis of $S$ that the conclusion is true. So we have established that $S$ holds. Now, by $S$, it follows that Santa exists. QED (Those who know the proof of Löb's theorem will agree that the proof is fundamentally the same as the above, except that it is fully rigorous nonsense instead of silly nonsense!) - Thanks! Löb's theorem is new to me. Did you you mean to vary the typeface of "S" that once? – David Feldman Jan 27 2011 at 19:33 1 This is a really nice answer Joel. – Adam Hughes Jan 27 2011 at 19:33 David, that different-font S was a typo. 
Here is a link to Loeb's theorem on Wikipedia: en.wikipedia.org/wiki/L%C3%B6b's_theorem (watch out for the apostraphe, which somehow causes a problem). And thank you, Adam, I'm glad you like it! – Joel David Hamkins Jan 27 2011 at 19:43 yudkowsky.net/rational/lobs-theorem – Andres Caicedo Jan 27 2011 at 21:16
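The self-reference used in the fixed-point lemma has a well-known computational cousin (Kleene's recursion theorem): a program can be written that refers to, and reproduces, its own source. The following two-line Python quine is included only as an illustration of that diagonal construction, not as anything from the discussion above:

```python
# The two executable lines below print themselves exactly (the comments are not part
# of the quine). This mirrors the diagonal trick: a description applied to its own code.
s = 's = {!r}\nprint(s.format(s))'
print(s.format(s))
```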
http://physics.stackexchange.com/questions/5277/what-happens-when-we-connect-a-metal-wire-between-the-2-poles-of-a-battery
# What happens when we connect a metal wire between the 2 poles of a battery? As I remembered, at the 2 poles of a battery, positive or negative electric charges are gathered. So there'll be electric field existing within the battery. This filed is neutralized by the chemical power of the battery so the electric charges will stay at the poles. Since there are electric charges at both poles, there must also be electric fields outside the battery. What happens when we connect a metal wire between the 2 poles of a battery? I vaguely remembered that the wire has the ability to restrain and reshape the electric field and make it within the wire, maybe like a electric field tube. But is that true? - 1 @all Should one trat such a question seriously? I smell some "made up" question. – Georg Feb 16 '11 at 15:03 Why call it made up? – smwikipedia Feb 16 '11 at 16:27 As long as I have this suspicion, I wont tell You how to improve such questions. :=( – Georg Feb 16 '11 at 17:00 Thanks for being frank. – smwikipedia Feb 17 '11 at 2:28 ## 3 Answers Yes Sam, there definitely is electric field reshaping in the wire. Strangely, it is not talked about in hardly any physics texts, but there are surface charge accumulations along the wire which maintain the electric field in the direction of the wire. (Note: it is a surface charge distribution since any extra charge on a conductor will reside on the surface.) It is the change in, or gradient of, the surface charge distribution on the wire that creates, and determines the direction of, the electric field within a wire or resistor. For instance, the surface charge density on the wire near the negative terminal of the battery will be more negative than the surface charge density on the wire near the positive terminal. The surface charge density, as you go around the circuit, will change only slightly along a good conducting wire (Hence the gradient is small, and there is only a small electric field). Corners or bends in the wire will also cause surface charge accumulations that make the electrons flow around in the direction of the wire instead of flowing into a dead end. Resistors inserted into the circuit will have a more negative surface charge density on one side of the resistor as compared to the other side of the resistor. This larger gradient in surface charge distribution near the resistor causes the relatively larger electric field in the resistor (as compared to the wire). The direction of the gradients for all the aforementioned surface charge densities determine the direction of the electric fields. This question is very fundamental, and is often misinterpreted or disregarded by people. We are all indoctrinated to just assume that a battery creates an electric field in the wire. However, when someone asks "how does the field get into the wire and how does the field know which way to go?" they are rarely given a straight answer. A follow up question might be, "If nonzero surface charge accumulations are responsible for the size and direction of the electric field in a wire, why doesn't a normal circuit with a resistor exert an electric force on a nearby pith ball from all the built up charge in the circuit?" The answer is that it does exert a force, but the surface charge and force are so small for normal voltages and operating conditions that you don't notice it. If you hook up a 100,000V source to a resistor you would be able to measure the surface charge accumulation and the force it could exert. 
Here's one more way to think about all this (excuse the length of this post, but there is so much confusion on this question it deserves appropriate detail). We all know there is an electric field in a wire connected to a battery. But the wire could be as long as desired, and so as far away from the battery terminals as desired. The charge on the battery terminals can't be directly and solely responsible for the size and direction of the electric field in the part of the wire miles away since the field would have died off and become too small there. (Yes, an infinite plane of charge, or other suitably exotic configurations, can create a field that does not decrease with distance, but we are not talking about anything like that.) If the charge near the terminals does not directly and solely determine the size and direction of the electric field in the part of the wire miles away, some other charge must be creating the field there (Yes, you can create an electric field with a changing magnetic field instead of a charge, but we can assume we have a steady current and non-varying magnetic field). The physical mechanism that creates the electric field in the part of the wire miles away is a small gradient of the nonzero surface charge distribution on the wire. And the direction of the gradient of that charge distribution is what determines the direction of the electric field there. For a rare and absolutely beautiful description of how and why surface charge creates and shapes the electric field in a wire refer to the textbook: "Matter and Interactions: Volume 2 Electric and Magnetic Interactions" by Chabay and Sherwood, Chapter 18 "A Microscopic View of Electric Circuits" pg 631-640. - Excellent answer! There are so few who truly understand this issue. Thank you. – Joe Oct 6 '11 at 15:53 – Joe Oct 23 '11 at 9:34 1 Good call on the Jackson paper. Good to hear it straight from the man himself. – David Santo Pietro Mar 12 '12 at 3:39 I was looking for a clear explanation of how exactly electric fields appear along a circuit connected to a battery. This explanation is exactly what I was looking for. – anonymous Jan 2 at 14:18 When the 2 electrodes are at different potential an electric field will be established. The electric charges will gather at the two poles. Positive charges at the cathode and negative charges at the anode. If the two electrodes are not connected by an external conductor they will not be able to leave the surface of the electrodes and they simply accumulate over there producing an open circuit voltage. As soon as the two electrodes are connected by a conductor the charges will flow by the forces of the electric field in the appropriate direction. If the connecting wire has no resistance or almost zero resistance then it will be a short circuit and a huge current will flow only limited by the internal resistance of the battery. If the electrodes are connected by a conductor through a resistance then the current will be limited according to the Ohm's law. $I = \frac{V}{R+r}$ where $I$ is the Current, $V$ is the voltage between the electrodes, $R$ is the external resistance and $r$ is the internal resistance of the battery. (Note: Electric field is not neutralized by the chemical reaction rather it is maintained by the reaction. No reshaping of the field takes place. The charges move through the conductor since it is the path of least resistance. It is similar to the flow of water through a pipe from a tank. The water flows since there exists a pressure difference. 
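To put rough numbers on the claim that the field is much larger inside a resistor than along the connecting wire, here is a small back-of-the-envelope script. The circuit values are invented for illustration, and the "average field" is simply (voltage drop)/(length of the element):

```python
# A 1.5 V ideal battery driving one resistor through two copper wire leads.
# Invented illustrative values: each lead is 1.0 m of wire with 0.01 ohm resistance,
# and the resistor is 5 mm long with resistance 100 ohm.
V = 1.5
R_wire, L_wire = 0.01, 1.0      # per lead
R_res, L_res = 100.0, 0.005

R_total = R_res + 2 * R_wire
I = V / R_total                  # series current from Ohm's law

# Voltage drop across each element, and the average field E = (drop)/(length).
for name, R, L in [("wire lead", R_wire, L_wire), ("resistor", R_res, L_res)]:
    dV = I * R
    print(f"{name}: drop = {dV:.4f} V, average field = {dV / L:.3g} V/m")
```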
The flow of water does not depend upon the orientation of the pipe. it only depends on the pressure difference at the two ends. Gravitational field does not reshape itself through the pipe. Similarly no electric field reshaping takes place. Only the voltage difference matters between the two terminals.) Additional comments for those who think electric field can reshape in a conductor along the conductor: These are simply delta and star connected networks. Now if electric field could reshape then the lines of forces will intersect at three points for the delta and one point for the star networks. But we all know that lines of forces can not intersect. Hence no reshaping of field lines take place. - 1 sb1, the E field always points along the direction of the wire. If you bend a wire the E field has to change directions to follow the new direction of the wire. Why would this not be considered "reshaping" of the E field in the wire? – David Santo Pietro Mar 12 '11 at 16:24 1 @sb1, I am not sure what you mean by "it never does". You correctly say that the field only has components along the wire. However, that means that if you change the direction of the wire, you change the direction of the field. For example, bend the wire into an S shape, and you will get S shaped field lines. Bend the wire into a spiral and you will get spiral shaped field lines. Why do you not consider that reshaping the field? BTW, I have read J.D. Jackson's Classical Electrodynamics cover to cover, so I know all about Maxwell's laws and what they imply. We can go as deep as you prefer. – David Santo Pietro Mar 12 '11 at 17:54 1 The structure of the electric field is determined by the distribution of charges and variation of magnetic fields, not by orientation of a conductor. This is clearly evident from Maxwell's laws. Is it so difficult to understand? – user1355 Mar 13 '11 at 3:51 1 @sb1: You are very mistaken. You keep bringing up things that are obvious, but do not address the question asked. Everyone knows fields can't cross. If you bend a wire into a figure 8 or circle, one part of the wire obviously has to be displaced on top of the other, not actually running through the point of intersection. i.e The metal of the wire can't touch or you have a short. If you did actually bend part of the wire into a figure 8 and actually ran the metal through the point of intersection the current would just skip the loop and there would be no electric field in the pinched off loop. – David Santo Pietro Mar 13 '11 at 6:47 1 Electric fields are a conservative field, but that also has no relevance here. Yes, an electron moving from the negative terminal to the positive terminal will have the same amount of work done regardless of the path or shape of the wire, but this does not mean the direction of or size of the field does not change when you change the length or direction of the wire. If you increase the length of wire connecting the terminals, the size of the electric field in the wire diminishes (E=-dV/dx). But the force will act over a larger distance and so, voila, same amount of work. – David Santo Pietro Mar 13 '11 at 6:48 show 12 more comments Before you connect the wire, the electric field (at least theoretically, this requires that you ignore everything else in the universe, including the contents of the battery) consists of a + charge and a - charge. With a typical car battery these points are separated by about 10 or 20cm. In reality, the charge is distributed in a very complicated manner. 
For example, if your car battery is rated for 1000 amp-hours, you can compute the "charge" by noting that an amp-second is a coulomb, so an amp-hour is 3600 coulombs and 1000 amp-hours is 3600 x 1000 coulombs. But actually these charges are held inside the battery chemically and do not appear at the electrodes until you take some of the charge that is there away. So to compute the initial charge you need to know only the voltage and the configuration of the leads. Then it's a matter of electrostatics to compute the charge distribution. After you connect the wire, current begins flowing according to the resistance of the wire. The wire will have a steadily decreasing voltage from one end to the other. Since the electric field is given by the gradient of the voltage, this means that the electric field along the wire will point down the wire. By the way, this happened accidentally to a battery in the back of his SUV. Unfortunately it happened next to a plastic can of gasoline. The resulting fire destroyed his vehicle seconds after he drove to the side of the road and jumped out of it. - 1 An amp-second is a coulomb, not a farad. – Keenan Pepper Feb 17 '11 at 1:31 Fixed. Oooooooops. – Carl Brannen Feb 17 '11 at 1:38 You have repeated what was said in the question, but you didn't really answer it. – Joe Oct 6 '11 at 15:02 So you think that the answer stated that the wire will have a steadily decreasing voltage? The question could be abbreviated as "what happens to the electric field", which I answered. – Carl Brannen Oct 6 '11 at 18:57
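For concreteness, a couple of the numbers mentioned in this answer can be computed directly (a minimal sketch; the 1000 Ah rating is the answer's own example, while the 12 V terminal voltage and the internal resistance are assumed values):

```python
# Charge stored by a 1000 amp-hour battery, and the short-circuit current of a
# 12 V battery with an assumed internal resistance of 0.02 ohm.
amp_hours = 1000.0
charge_coulombs = amp_hours * 3600.0       # 1 A*s = 1 C, so 1 A*h = 3600 C
print(f"stored charge: {charge_coulombs:.2e} C")

V, r = 12.0, 0.02                           # terminal voltage and internal resistance (assumed)
print(f"short-circuit current ~ {V / r:.0f} A")   # I = V / r when the external R is ~0
```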
http://mathhelpforum.com/advanced-algebra/121277-injectivity-scalar-product.html
# Thread: 1. ## Injectivity of the scalar product I want to show that $\hat{A}: V^* \times V \rightarrow \mathbb{R}$ (where $V$ is a vector space and $V^*$ is its dual space) defined by: $\hat{A}(v^*,v) \equiv <v^*, Av>$ is injective. I have a proof from a book, but I don't quite understand what it's doing. It starts by: Let $\hat{A}=0$. Then $<v^*,Av>=0 \ \forall v^* \in V^*, \ v \in V$. Now since the scalar product is definite, this implies $Av=0 \ \forall v \in V$ and thus $A=0$. Haven't they just proven it for just 0? Also, what does "scalar product is definite" mean? 2. Originally Posted by Showcase_22 I want to show that $\hat{A}: V^* \times V \rightarrow \mathbb{R}$ (where $V$ is a vector space and $V^*$ is its dual space) defined by: $\hat{A}(v^*,v) \equiv <v^*, Av>$ is injective. What is $A$ here? An invertible operator (matrix), perhaps? Otherwise the redaction of the lemma/proposition/theorem is incomplete. Tonio I have a proof from a book, but I don't quite understand what it's doing. It starts by: Haven't they just proven it for just 0? Also, what does "scalar product is definite" mean? . 3. Oh sorry yes! A is an endomorphism of V. It probably would have helped if I posted that earlier =S 4. i don't think the question is what you gave us. my guess is that the question is to prove that the map $\varphi : \text{End}_{\mathbb{R}}V \longrightarrow (V^* \times V)^*$ defined by $\varphi (A)=\hat{A}$ is injective. 5. Originally Posted by NonCommAlg i don't think the question is what you gave us. my guess is that the question is to prove that the map $\varphi : \text{End}_{\mathbb{R}}V \longrightarrow (V^* \times V)^*$ defined by $\varphi (A)=\hat{A}$ is injective. I agree: this seems to be way sounder. Anyway, to the OP: can we know what book you took this question from? Tonio 6. Wow! I may mean that, but I have never seen that notation before! The book is "Introduction to Vectors and Tensors: Volume 1". The authors are Ray M Bowen and C C Wang. I'm trying to show that an isomorphism exists between $\vartheta_1^1(V)$ (the set of tensors of order (1,1)) and $L(V;V)$ (the set of linear maps from V to V). The first part works quite well. We know that $\dim \vartheta_1^1(V)=N^2$ where $\dim V = N$. Now we have to show that $\dim L(V;V)=N^2$. To do this let $e_1, \ldots , e_N$ be a basis for V. Define $N^2$ linear transformations $A^k_{\alpha} :V \rightarrow V$ by $A^k_{\alpha}e_k=e_{\alpha}$ for $k, \alpha=1, \ldots , N$ and $A^k_{\alpha} e_p=0$ for $k \neq p$. If $A$ is an arbitrary member of $L(V;V)$ then $Ae_k \in V$ so $Ae_k=\sum_{\alpha=1}^N A^{\alpha}_k e_{\alpha}$ where $k=1, \ldots ,N$. But we also know that $A^k_{\alpha}e_k=e_{\alpha}$. This gives that: $Ae_k=\sum_{\alpha=1}^N A^{\alpha}_k e_{\alpha}=\sum_{\alpha=1}^N A^{\alpha}_k A^k_{\alpha}e_k=\sum_{\alpha=1}^N \sum_{s=1}^N A^{\alpha}_s A^s_{\alpha}e_k$ We can rearrange this to get: $\left( A-\sum_{\alpha=1}^N \sum_{s=1}^N A^{\alpha}_s A^s_{\alpha} \right)e_k=0$ valid $\forall e_k \in \{e_1, \ldots ,e_N \}$ But we know that $A$ is a linear map so the above statement is valid for any $v \in V$, i.e. $\left( A-\sum_{\alpha=1}^N \sum_{s=1}^N A^{\alpha}_s A^s_{\alpha} \right)v =0$ valid $\forall v \in V$ This implies that $A= \sum_{\alpha=1}^N \sum_{s=1}^N A^{\alpha}_s A^s_{\alpha}$ meaning that the $N^2$ linear transformations we defined at the start generate $L(V;V)$. Now we have to prove that these transformations are linearly independent. We do this by setting $\sum_{\alpha=1}^N \sum_{s=1}^N A^{\alpha}_s A^s_{\alpha}=0$.
From here we can use the $N^2$ linear transformations from before to get: $\sum_{\alpha=1}^N \sum_{s=1}^N A^{\alpha}_s A^s_{\alpha}(e_p)=\sum_{\alpha=1}^N A^{\alpha}_p e_{\alpha}=0$. But since the $e_{\alpha}$ are basis vectors we know that $e_{\alpha} \neq 0$. Therefore $A^{\alpha}_p=0$ where $p, \alpha = 1, \ldots ,N$. Therefore the set of $A^s_{\alpha}$ is a basis of $L(V;V)$. This gives that $\dim L(V;V) =\left( \dim V \right)^2=N^2$. There is also a theorem in the book stating that if two vector spaces have the same dimension then an isomorphism exists between the two. In this case $\dim L(V;V)= \dim \vartheta_1^1(V)=N^2$ so an isomorphism exists by the theorem. Finally, I actually want to find an example of an isomorphism between $L(V;V)$ and $\vartheta_1^1(V)$. The book then defines the function $\hat{A}: V^* \times V \rightarrow \mathbb{R}$ where $\hat{A}(v^*,v) \equiv <v^*, Av>$ and $A$ is an endomorphism of V. Since the two vector spaces are of the same dimension, by the pigeonhole principle we have to show that $\hat{A}$ is injective (thus implying that it's a bijection). The book does this by setting $\hat{A}=0 \Rightarrow <v^*,Av>=0 \ \forall v^* \in V^*$ and $v \in V$ (I thought this was confusing, how does this prove that $\hat{A}$ is injective?) Since the scalar product is definite (?) this gives $Av=0 \ \forall v \in V$ and thus $A=0$. Consequently the operation "hat" is an isomorphism. The last part is what i'm particularly confused about. I don't see how setting it to 0 will show that it's injective. Sorry for the lengthy post, I figured it would be better if I showed exactly what i've done so far! 7. Originally Posted by Showcase_22 Wow! I may mean that, but I have never seen that notation before! The book is "Introduction to Vectors and Tensors: Volume 1". The authors are Ray M Bowen and C C Wang. I'm trying to show that an isomorphism exists between $\vartheta_1^1(V)$ (the set of tensors of order (1,1)) and $L(V;V)$ (the set of linear maps from V to V). The first part works quite well. We know that $\dim \vartheta_1^1(V)=N^2$ where $\dim V = N$. Now we have to show that $\dim L(V;V)=N^2$. To do this let $e_1, \ldots , e_N$ be a basis for V. Define $N^2$ linear transformations $A^k_{\alpha} :V \rightarrow V$ by $A^k_{\alpha}e_k=e_{\alpha}$ for $k, \alpha=1, \ldots , N$ and $A^k_{\alpha} e_p=0$ for $k \neq p$. If $A$ is an arbitrary member of $L(V;V)$ then $Ae_k \in V$ so $Ae_k=\sum_{\alpha=1}^N A^{\alpha}_k e_{\alpha}$ where $k=1, \ldots ,N$. But we also know that $A^k_{\alpha}e_k=e_{\alpha}$. This gives that: $Ae_k=\sum_{\alpha=1}^N A^{\alpha}_k e_{\alpha}=\sum_{\alpha=1}^N A^{\alpha}_k A^k_{\alpha}e_k=\sum_{\alpha=1}^N \sum_{s=1}^N A^{\alpha}_s A^s_{\alpha}e_k$ We can rearrange this to get: $\left( A-\sum_{\alpha=1}^N \sum_{s=1}^N A^{\alpha}_s A^s_{\alpha} \right)e_k=0$ valid $\forall e_k \in \{e_1, \ldots ,e_N \}$ But we know that $A$ is a linear map so the above statement is valid for any $v \in V$, i.e. $\left( A-\sum_{\alpha=1}^N \sum_{s=1}^N A^{\alpha}_s A^s_{\alpha} \right)v =0$ valid $\forall v \in V$ This implies that $A= \sum_{\alpha=1}^N \sum_{s=1}^N A^{\alpha}_s A^s_{\alpha}$ meaning that the $N^2$ linear transformations we defined at the start generate $L(V;V)$. Now we have to prove that these transformations are linearly independent. We do this by setting $\sum_{\alpha=1}^N \sum_{s=1}^N A^{\alpha}_s A^s_{\alpha}=0$.
From here we can use the $N^2$ linear transformations from before to get: $\sum_{\alpha=1}^N \sum_{s=1}^NA^{\alpha}_s A^s_{\alpha}(e_p)=\sum_{\alpha=1}^MA^{\alpha}_p e_{\alpha}=0$. But since $e_{\alpha}$ are basis vectors we know that $e_{\alpha} \neq 0$. Therefore $A^{\alpha}_p=0$ where $p, alpha = 1, \ldots ,M$. Therefore the set of $A^s_{\alpha}$ is a basis of $L(V;V)$. This gives that $\dim L(V;V) =\left( \dim V \right)^2=N^2$. I find it weird that you went through all this trouble to prove a very elementary fact from linear algebra which, I presume, must be assumed when you reach the necessary level to mess with tensors... There is also a theorem in the book stating that if two vector spaces have the same dimension then an isomorphism exists between the two. In this case $\dim L(V;V)= \dim \vartheta_1^1(V)=N^2$ so an isomorphism exists by the theorem. "...two vector spaces OVER THE SAME FIELD..." Finally, I actually want to find an example of an isomorphism between $L(V;V)$ and $\vartheta_1^1(V)$. The book then defines the function $<br /> <br /> \hat{A}: V^* \times V \rightarrow \mathbb{R}<br />$ where $<br /> <br /> \hat{A}(v^*,v) \equiv <v^*, Av><br />$ and $A$ is an endomorphism of V. Since the two vector spaces of the same dimensions, by the pigeonhole principle we have to show that $\hat{A}$ is injective (thus implying that it's a bijection). The book does this by setting $\hat{A}=0 \Rightarrow <v^*,Av>=0 \ \forall v^* \in V^*$ and $v \in V$ (I thought this was confusing, how does this prove that $\hat{A}$ is injective?) Since the scalar product is definite (?) this gives $Av=0 \ \forall v \in V$ and thus $A=0$. Consequently the operation "hat" is an isomorphism. The last part is what i'm particularly confused about. I don't see how setting it to 0 will show that it's injective. Sorry for the lengthy post, I figured it would be better if I showed exactly what i've done so far! Wow! Well, I have the book by Bowen and oh my god! Nothing as physicists, engineers and "mathematics scientists"(??!?) to mess up big time with notation and make cumbersome and horrible VERY SIMPLE and beautiful stuff...reminded me why I chose, I choose and I will choose mathematics over physics/engineering/whatever forever! In page 213 they actually defined $<v^{*},u>:=v^{*}(u)=$ the action of the map $v^{*}\in V^{*}$ on the vector $u\in V$. This is a rather standard notation, and then we get: $<v^{*},Av>:=v^{*}(Av)$ , so if $<v^{*},Av>=0$ for all $v^{*}\in V^{*}\,,\,v\in V$, then it must be that $Av=0$ for all $v\in V$ (since this is the only possibility for a vector that is mapped by ALL the linear functionals in $V^{*}$ to zero...!), and then $A=0$ and we're done. Tonio 8. lol, half the battle in maths is deciphering the notation!! This figure does climb to much higher values depending on the lecturer and the subject!! I'm really sorry but I don't see how that shows it's injective. We know that $\hat{A} : V^* \times V \rightarrow \mathbb{R}$. For this to be injective we need to associate each linear map in $V^*$ and an element in $V$ to one real number (as in each combination of linear map and a vector to a single real number). However, setting $\hat{A}=0$ and getting $A=0 \ \forall v \in V$ is then a contradiction since you can have two elements in $V^* \times V$ mapped to the same number. For example, $A \begin{pmatrix} 1 \\ 0 \end{pmatrix}=0$ and $A \begin{pmatrix} 0 \\ 1 \end{pmatrix}=0$ so $\hat{A}$ isn't injective (if we're working over $\mathbb{R}^2$). 
I'm pretty sure i've got the wrong end of the stick, and i'd really appreciate if you could tell me what's wrong with how i'm thinking about this. 9. Originally Posted by Showcase_22 lol, half the battle in maths is deciphering the notation!! This figure does climb to much higher values depending on the lecturer and the subject!! I'm really sorry but I don't see how that shows it's injective. We know that $\hat{A} : V^* \times V \rightarrow \mathbb{R}$. For this to be injective we need to associate each linear map in $V^*$ and an element in $V$ to one real number (as in each combination of linear map and a vector to a single real number). However, setting $\hat{A}=0$ and getting $A=0 \ \forall v \in V$ is then a contradiction since you can have two elements in $V^* \times V$ mapped to the same number. For example, $A \begin{pmatrix} 1 \\ 0 \end{pmatrix}=0$ and $A \begin{pmatrix} 0 \\ 1 \end{pmatrix}=0$ so $\hat{A}$ isn't injective (if we're working over $\mathbb{R}^2$). I'm pretty sure i've got the wrong end of the stick, and i'd really appreciate if you could tell me what's wrong with how i'm thinking about this. Ok, let us try to make some order here, shall we? First, the operation $\hat{}$ is, as shown at top of page 219, a tensor in $\vartheta_1^1(V)$, and it's thus a map from the cartesian product (in this case, exterior direct prodduct) $V^{*}\times V$ into the definition field $\mathbb{R}$ We want an isomorphism $\Phi:L(V,V)\rightarrow \vartheta^1_1(V)$ , i.e.: we want to associate with every endomorphism of $V$ a unique tensor in $\vartheta^1_1(V)$ in such a way that this association is 1-1 and onto AND a vector space homomorphism, aka linear transformation...so far so good? Cool... Since the involved lin. spaces are isomorphic AND of finite dimension, it is enough to define a linear transformation $\Phi$ as above and show that it is 1-1 $\Longleftrightarrow Ker(\Phi)=\{0\}$. So let us define our map: let $A\in L(V,V)$ be any element, and we define $\Phi(A):=\hat{A}$ , where $\hat{A}$ is the tensor defined by $\hat{A}<v^{*},v>:=<v^{*},Av>$...ok? I know, they did all this in a rather sloppy and cumbersome way in the book...your bad! You should be studying maths and not all this nonsense. Anyway...we have now to prove that $Ker(\Phi)=\{0\}\Longleftrightarrow \left(\Phi(A)=0 \Longrightarrow A=0\right)\Longleftrightarrow \left(\hat{A}=0\Longrightarrow A=0\right)$ , and this is why in the book they assume $\hat{A}=0$ in order to conclude $A=0$ . Now, how did they achieve this? Well, $\hat{A}=0\Longrightarrow <v^{*},Av>=0\,\,\forall v^{*}\in V^{*}\,\,\forall v\in V$ . But remember that this notation merely means $0=<v^{*},Av>:=v^{*}(Av)\Longrightarrow$ for any $v\in V$ and for any $v^{*}\in V^{*}$, the linear functional $v^{*}$ maps the vector $Av$ to zero...Again, this is true FOR ANY VECTOR $v\in V$ , and from here that it MUST BE that $A=0$ , i.e. $A$ is the zero linear transformation (or in other words: if for some vector $v\in V$, which a fortiori must be non-zero, we had that $Av=u\neq 0$, then there exists some $w^{*}\in V^{*}$ s.t. 
$w^{*}(u)=<w^{*},u>=<w^{*},Av>\neq 0$ , contradicting $<v^{*},Av>=0$ for ALL functionals in $V^{*}$ and ALL vectors in $V$ (Please note that there is no a priori relation between $v^{*}$ and $v$: this is just another clumsy, cumbersome and confusing notation these guys use in their book, instead of the much clearer and non-confusing $\phi\,,\,f$ or something like that to denote elements in $V^{*}$...this is explained on page 203, 8 lines from the bottom.) Hope the above clears out most of the fog in this... Tonio 10. Thank you Tonio! I was unaware that a linear map between two finite dimensional vector spaces (over the same field) is 1-1 iff the kernel of the map contains only the 0 vector. I do understand what it was talking about now! 11. Originally Posted by Showcase_22 Thank you Tonio! I was unaware that a linear map between two finite dimensional vector spaces (over the same field) is 1-1 iff the kernel of the map contains only the 0 vector. I do understand what it was talking about now! In fact this is true even without the finite dimension assumption and over any field whatsoever. Tonio
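To make the "trivial kernel" step above concrete: evaluating the pairing $<v^*, Av>$ on basis and dual-basis pairs recovers every matrix entry of $A$, so the pairing can only vanish identically when $A=0$. The following is a minimal numerical sketch of my own (not from the book); it assumes $V=\mathbb{R}^3$, numpy, and the standard identification of dual vectors with row vectors.

```python
import numpy as np

# Work in V = R^3 with the standard basis; a dual vector v* is a row vector,
# and the pairing <v*, Av> is simply the number (v*) @ A @ v.
A = np.array([[0.0, 2.0, 0.0],
              [1.0, 0.0, 0.0],
              [0.0, 0.0, 3.0]])

n = A.shape[0]
E = np.eye(n)

# Evaluating the pairing on dual-basis / basis pairs recovers every entry of A.
recovered = np.array([[E[i] @ A @ E[j] for j in range(n)] for i in range(n)])
print(np.allclose(recovered, A))   # True: <e_i^*, A e_j> = A[i, j]

# Hence, if <v*, Av> = 0 for ALL v* and v, every entry of A is 0, i.e. A = 0;
# this is exactly the kernel argument used in the thread.
```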
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 178, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9550346732139587, "perplexity_flag": "head"}
http://mathoverflow.net/questions/19119?sort=oldest
## Why must one sheafify quotients of sheaves? Let $\mathcal{F}$ and $\mathcal{G}$ be two sheaves (of abelian groups) on a topological space $X$ such that $\mathcal{G}(U)$ is a subgroup of $\mathcal{F}(U)$ for every open set $U$ in $X$. The sheaf associated to the presheaf $P(\mathcal{F}/\mathcal{G})$ defined by $$ U\mapsto \mathcal{F}(U)/\mathcal{G}(U) $$ is called the quotient sheaf $\mathcal{F}/\mathcal{G}$. The associated sheaf functor is left adjoint to the inclusion functor, so it commutes with colimits and in particular with quotients. My question is: Why must one sheafify the presheaf $P(\mathcal{F}/\mathcal{G})$ then? - 2 Maybe an example where sheafification is needed is what you need? – Mariano Suárez-Alvarez Mar 23 2010 at 16:56 ## 2 Answers For presheaves (of sets or groups) we know what this particular (or any) colimit operation is: apply the operation objectwise (for each $U$). Now the sheafification preserves colimits, hence we apply sheafification to a colimit cocone on presheaves to obtain a colimit cocone in sheaves. Doing sheafification to the presheaves which are already sheaves does nothing to them, but, by right exactness, it does the correct thing to the colimit. This proves that the sheafification following the colimit in presheaves is the correct way to compute the colimit, and in that we did use the right exactness of the sheafification essentially. The fact that it is necessary does not follow from the general nonsense, as there are both examples where we accidentally do not need a sheafification step and those where we do. For the limit constructions on sheaves we never need the sheafification, because the embedding of the sheaves into presheaves is left exact, hence we can simply compute the limits in presheaves. It seems you had somehow an opposite impression. - Ah, I see! One applies implicitly the (right adjoint) inclusion $Shv\to PShv$ which does NOT commute with colimits. Thanks, Zoran. – roger123 Mar 24 2010 at 10:03 You ask: "Why must one sheafify the presheaf $P(\mathcal{F}/\mathcal{G})$ then?". The answer is: "Because it is not a sheaf!" Here is an example. Let $X=\mathbb P^1_k, \mathcal F = \mathcal O, \mathcal G=\mathcal O (-2.O)$, where $O$ denotes the origin $O=(0:1)$. Define $\infty=(1:0)$ and let $z$ be the coordinate on $\mathbb P^1\setminus \infty$. Consider the covering of X by the open subsets $U_0=X\setminus \infty$ and $U_\infty =X\setminus O$. Let me denote the presheaf $P(\mathcal{F}/\mathcal{G})$ just by $P$. Then $class(z)\in P(U_0)$ and $class(0)\in P(U_\infty)$ are sections of $P$ over the two open sets $U_0$ and $U_\infty$ of our covering which coincide on their intersection $U_{0\infty}$ for the excellent reason that $P(U_{0\infty})=0$! [Actually $P(U)=0$ for any open subset $U\subset X$ not containing $O$] But these compatible sections cannot be glued to a global section of $P$ on $X$. Indeed a section of $P$ on $X$ is just a constant $c\in k$, since $\mathcal O(X)=k$ and $\mathcal O(-2.O)(X)=0$. But that constant $c$ cannot be the glued global section, because its restriction to $P(U_0)$ is $class(c)$ and in $P(U_0)$ we have $class(c) \neq class(z)$ since $z-c$ doesn't vanish with order two at $O$. 
- I think the question really is: Why is it not a sheaf if sheafification commutes with colimits? – Andrea Ferretti Mar 23 2010 at 18:13 5 Commutes means that sheafification of a (colimit in PShv) is (colimit in Shv) of a sheafification (the latter step is empty operation on sheaves of course) and NOT that sheafification of a colimit in PShv is just a colimit in PShv. – Zoran Škoda Mar 23 2010 at 18:20 Yes, I know, I was just trying to point out what the original problem was. I guess roger was aware of examples of quotient presheaves which are not sheaves. – Andrea Ferretti Mar 24 2010 at 1:23
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 47, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9289811253547668, "perplexity_flag": "head"}
http://mathoverflow.net/questions/tagged/analytic-geometry
## Tagged Questions 0answers 108 views ### Asymptotics vs Puiseux series Define asymptotic as a class of sequences {$x_i$},$_{i\in\mathbb N}$ modulo equivalence {$x_i$}={$y_i$} if $\lim_{i\to\infty} (x_i/y_i)=c\in\mathbb R,c\ne 0$. More, we define \$X= … 0answers 275 views ### On Determinants of Laplacians on Riemann Surfaces History of the Formula: In their famous paper "On Determinants of Laplacians on Riemann Surfaces" (1986), D'Hoker and Phong computed the determinant of the Laplacian $\Delta_n^+$ o … 2answers 114 views ### How to tell if a second-order curve goes below the $x$ axis? Suppose we have a second-order curve in general form: (1) $a_{11}x^{2}+2a_{12}xy+a_{22}y^{2}+2a_{13}x+2a_{23}y+a_{33}=0$. I'd like to know if there is a simple condition that ens … 3answers 459 views ### Geometric realization of Hochschild complex Let $A$ be a commutative $\mathbb{C}$-algebra, and consider $C_{\bullet}(A,A)$ the simplicial Hochschild homology module of $A$ with respect to itself (i.e. \$C_{n}(A,A)=A^{\otimes … 0answers 97 views ### Asymptotes of hyperbolic sections of a given cone A book I'm reading (Companion to Concrete Math Vol. I by Melzak) mentions, "...any ellipse occurs as a plane section of any given cone. This is not the case with hyperbolas: for a … 0answers 168 views ### Topology of theta nulls Siegel upper half-space, $\mathfrak{h}_g$, consists of symmetric $g\times g$ complex matrices with positive-definite imaginary part. From an element $Z\in \mathfrak{h}_g$ we can co … 0answers 422 views ### Cohesive ∞-toposes for analytic geometry There is a class of big ∞-toposes that come with a good supply of intrinsic notions of differential geometry and differential cohomology: called cohesive ∞-toposes (after Lawvere's … 3answers 403 views ### Lattice points close to a line Take a sheet of grid paper and draw a straight line in any direction from the origin. What is the closest non-zero grid point $\boldsymbol{p}\in\mathbb{Z}^2$ within a distance \$\ep … 1answer 333 views ### Pathologies of analytic (non-algebraic) varieties. Note: By an "analytic non-algebraic" surface below I mean a two dimensional compact analytic variety $X$ (over $\mathbb{C}$) which is not an algebraic variety. A property of Nag … 1answer 192 views ### Is a compact subset of a Stein space admitting a fundamental system of Stein neighbourhoods necessarily holomorphically convex? Let X be a Stein manifold and let K be a compact subset of X. Suppose that K possesses in X a fundamental system of neighbourhoods which are Stein spaces. Then, it is a result by R … 1answer 605 views ### Are flat morphisms of analytic spaces open? Let $f:X\to Y$ be a morphism of complex analytic spaces. Assume $f$ is flat (or, more generally, that there is a coherent sheaf on $X$ with support $X$ which is $f$-flat). Is $f$ a … 2answers 277 views ### If $K$ and $L$ are compact convex sets with smooth boundary, does their union have piecewise-smooth boundary? Clarification: by "piecewise", I mean a finite number of pieces. I'm sure this must be true, but my search for a citation was in vain (although I did learn the new term "polyconve … 2answers 550 views ### Embeddings and triangulations of real analytic varieties This is a follow up question to my answer here http://mathoverflow.net/questions/35156/how-do-you-define-the-euler-characteristic-of-a-scheme/36038#36038 A real analytic space is … 1answer 241 views ### Is the closure of an open holomorphically convex subset of a Stein space holomorphically convex? 
Let X be a Stein manifold and U an open, connected, relatively compact, holomorphically convex subset of X. Is the closure of U in X holomorphically convex? Also, if X is a Stein … 0answers 591 views ### Generalized GAGA So, I have heard GAGA works for Rigid Analytic spaces. I know next to nothing about this, but it made me curious as to whether there are any other contexts in which GAGA "works". …
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 26, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8752257823944092, "perplexity_flag": "middle"}
http://mathhelpforum.com/pre-calculus/181728-presumabley-quadratic-word-problem.html
# Thread: 1. ## Presumably a Quadratic word problem A rectangular field adjacent to a river must be fenced on 3 sides but not on the river bank. What is the largest area that can be enclosed if 50 yards of fencing is used? The chapter is on Quadratic Functions; Graphing them using intercepts and turning points specifically. Given this I assume that the question wants me to place it into an x^2 format. First off I can't figure out how to set it up as a quadratic. Secondly, the best result I can get is when the sides are all equal to 50/3, which gives an area of approximately 277. The answer is 312. 2. Originally Posted by bkbowser A rectangular field adjacent to a river must be fenced on 3 sides but not on the river bank. What is the largest area that can be enclosed if 50 yards of fencing is used? The chapter is on Quadratic Functions; Graphing them using intercepts and turning points specifically. Given this I assume that the question wants me to place it into an x^2 format. First off I can't figure out how to set it up as a quadratic. Secondly, the best result I can get is when the sides are all equal to 50/3, which gives an area of approximately 277. The answer is 312. Since the pen is next to a river we only need fencing on 3 sides. Let the side parallel to the river be l and the side perpendicular to the river be w. Then using the area and perimeter formulas for a rectangle we get $P=l+2w \iff 50=l+2w$ $A=lw$ Now you can use the first equation to eliminate a variable from the 2nd, then find the maximum of the resulting quadratic. 3. Start by saying the 50 yards is equal to the sum of the two perpendicular sides (x) and the side that is parallel to the river (y). You get x+x+y=50 or 2x+y = 50. Now the area is just length times width, or x times y. So in the above equation solve for y, so area can be a function of just x. 4. Ah, so it should be something like x(50-2x). 5. Yep, A = x(50-2x). Now, where does this have a maximum? Or, where is the turning point? Hint: halfway between the zeros.
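For completeness, here is the turning-point calculation the last two posts are hinting at, plus a quick numerical check. This is a sketch of my own (assuming Python with numpy); the "answer is 312" quoted above is consistent with the exact maximum of 312.5 up to rounding.

```python
import numpy as np

# A(x) = x(50 - 2x): area as a function of the side x perpendicular to the river.
A = lambda x: x * (50 - 2 * x)

# The zeros are x = 0 and x = 25, so the turning point is halfway, at x = 12.5.
x_star = 12.5
print(A(x_star))                            # 312.5 square yards

# Numerical confirmation that this really is the maximum on [0, 25]:
xs = np.linspace(0, 25, 100001)
print(xs[np.argmax(A(xs))], A(xs).max())    # ≈ 12.5, ≈ 312.5
```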
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 2, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9344891905784607, "perplexity_flag": "middle"}
http://mathhelpforum.com/advanced-algebra/91508-inverse-matrix.html
# Thread: 1. ## the inverse of a matrix Is the inverse of a matrix unique? If yes, how do we prove it? 2. Assume that a nonsingular matrix has two inverse matrices and then show that the two matrices are equal to each other. 3. The same proof that shows inverses in a group are unique works here. Let $A_1^{-1}$ and $A_2^{-1}$ be inverses of A. $A_1^{-1}=A_1^{-1}I_n=A_1^{-1}(AA_2^{-1})=(A_1^{-1}A)A_2^{-1}=I_nA_2^{-1}=A_2^{-1}$
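Not a proof, just a numerical illustration of the statement (a sketch of my own, assuming numpy): two independent procedures for computing an inverse must return the same matrix, precisely because inverses are unique.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))          # almost surely nonsingular

# Two different ways of producing "an inverse" of A:
B1 = np.linalg.inv(A)                    # library inverse
B2 = np.linalg.solve(A, np.eye(4))       # column-by-column solve of A X = I

# Both are inverses, and -- as the proof above guarantees -- they coincide:
print(np.allclose(A @ B1, np.eye(4)), np.allclose(A @ B2, np.eye(4)))
print(np.allclose(B1, B2))               # True: the inverse is unique
```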
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 3, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8837262988090515, "perplexity_flag": "head"}
http://gilkalai.wordpress.com/2008/11/09/detrimental-noise/?like=1&source=post_flair&_wpnonce=e5382928c8
Gil Kalai’s blog ## Detrimental Noise Posted on November 9, 2008 by ### John Lennon Disclaimer: It is a reasonable belief  (look here, and here), and an extremely reasonable working assumption (look  here) that computationally superior quantum computers can be built. (This post and the draft will be freely updated) I am struggling to meet the deadline of a week ago for a chapter regarding adversarial noise models for quantum error correction. (Update Nov 20: here is the draft; comments are welcomed. Update April 23: Here is the arxived paper, comments are welcome! ) My hypothetical model is called “detrimental.” (This is a reason substantial math postings are a bit slow recently; but I hope a few will come soon.) This project is quite central to my research in the last three years, and it often feels like running, over my head, after my own tail that might not be there. So this effort may well be a CDM (“career damaging move”) but I like it nevertheless. It is related to various exciting conceptual and foundational issues. I do have occasionally a sense of progress, (often followed by a backtrack) and for this chapter, rather than describing detrimental noise by various (counterintuitive) properties as I always did, I think I have an honest definition of detrimental noise. Let me tell you about it. (Here is a recent useful guide: a zoo of quantum algorithms, produced by Stephen Jordan.) ## Detrimental noise Consider a quantum memory with $n$ qubits at a state $\rho_0$. Suppose that $\rho_0$ is a tensor product state. The noise affecting the memory in a short time interval can be described by a quantum operation $E_0$. Lets suppose that $E_0$ acts independently on different qubits and, for qubit $i$ with some small probability $p_i$, $E_0$ changes it state to the maximum entropy state $\tau_i$. This is a very simple form of noise that can be regarded as basic to understanding the standard models of noise as well as of detrimental noise. In the standard model of noise, $E_0$ describes the noise of the quantum memory regardless of the state $\rho$ stored in the memory. This is a quite natural and indeed expected form of noise. A detrimental noise will correspond to a scenario in which, when the quantum memory is at a state $\rho$ and $\rho= U \rho_0$, the noise $E$ will be $U E_0 U^{-1}$. Such noise is the effect of first applying $E_0$ to $\rho_0$ and then applying $U$ to the outcome noiselessly. Of course, in reality we cannot perform $U$ instantly and noiselessly and the most we can hope for is that $\rho$ will be the result of a process. The conjecture is that a noisy process leading to $\rho$ will be subject to noise of the form we have just described. A weaker weaker conjecture is that detrimental noise is present in every natural noisy quantum process. I also conjecture that damaging effects of the detrimental noise cannot be canceled or healed by other components of the overall noise.When we model a noisy quantum system either by a the qubits/gates description or in other ways we make a distinction between “fresh” errors which are introduced in a single computer cycle (or infinitesimally when the evolution is described by a continuous model) and the cumulative errors along the process. The basic insight of fault tolerant quantum computing is that if the incremental errors are standard and sufficiently small then we can make sure that the cumulated errors are as well. The conjecture applies to fresh errors. 
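To see what the conjugated noise $U E_0 U^{-1}$ does in the simplest non-trivial case, here is a small numerical sketch of my own (it is not from the post or the paper). It assumes a two-qubit memory, numpy, the "reset a qubit to the maximally mixed state with probability $p$" noise $E_0$ described above, and a unitary $U$ that prepares a Bell state from the product state $|00\rangle$.

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1, -1])
PAULIS = [I2, X, Y, Z]

def reset_qubit0(rho, p):
    """E_0: with probability p, replace qubit 0 by the maximally mixed state.
    (Averaging over the four Paulis on a qubit is the same as resetting it.)"""
    twirl = sum(np.kron(P, I2) @ rho @ np.kron(P, I2).conj().T for P in PAULIS) / 4
    return (1 - p) * rho + p * twirl

# Product start state |00><00| and a unitary U = CNOT (H x I) preparing a Bell state.
ket00 = np.zeros(4); ket00[0] = 1.0
rho0 = np.outer(ket00, ket00)
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]])
U = CNOT @ np.kron(H, I2)
rho = U @ rho0 @ U.conj().T              # the entangled state actually stored

p = 0.1
standard    = reset_qubit0(rho, p)                      # noise independent of how rho was prepared
detrimental = U @ reset_qubit0(rho0, p) @ U.conj().T    # the conjugated noise U E_0 U^{-1}

print(np.allclose(standard, detrimental))  # False: on entangled states the two models differ
# The detrimental output contains the correlated term (|00><00| + |11><11|)/2:
# both qubits are hit together, rather than qubit 0 alone.
```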
(Updated: Nov 19; sorry guys, the blue part is over-simplified and incorrect, but an emergency quantifier replacement seemed to have helped; it seems ok now)  The definition of detrimental noise for general quantum systems that we propose is as follows: A detrimental noise of a quantum system at a state $\rho$ commutes with some non-identity quantum operation which stabilizes $\rho$. Just like for the standard model of noise, we do not specify a single noise operation but rather give an envelope for a family of noise operations. In the standard model of noise the envelope $\cal D_{\rho}$ of noise operations when the computer is at state $\rho$ does not depend on $\rho$. For detrimental noise there is a systematic relation between the envelope of noise operations ${\cal D}_\rho$ and the state $\rho$ of the computer. Namely, ${\cal D}_{U\rho} = U {\cal D}_\rho U^{-1}$. ## Why is it detrimental? Detrimental noise leads to highly correlated errors when the state of the quantum memory is highly entangled. This is quite bad for quantum error-correction, but an even more devastating property of detrimental noise is that the notion of “expected number of qubit errors” becomes sharply different from the rate of noise as measured by fidelity or trace distance. Since conjugation by a unitary operator preserves the fidelity metric, the expected number of qubit errors increases linearly with the number of qubits for highly entangled states. Here is another little thing from my paper that I’d like to try on you: ## A riddle: Can noise remember the future? Suppose we plan a process and carry it out up to a small amount of errors. Can there be a systematic relation between the errors at some time and the planned process at a later time? Context: In the context of quantum error-correction and fault-tolerant quantum computing, the noise depending on the past process is regarded as a possibility: the environment can “remember” traces of the process leading to the present state. But I did not encounter the possibility that the noise at a certain time can systematically depend on the planned process at a later time. Can it? My answer: yes it can! The assumption that the entire process was carried out up to a few errors implies a statistical relation between the entire evolution of the performed process, and, in particular, the incremental errors at a given time, and the entire planned process. Example: A gymnast performs a complicated routine. Her errors are observed and are taken into account and her overall score is 9.7525. Is it true that the nature of the errors at a specified moment of the routine may depend on the entire planned routine? This entry was posted in Computer Science and Optimization, Controversies and debates, Physics and tagged Fault-tolerance, Noise, Quantum computers, Quantum error-correction. Bookmark the permalink. ### One Response to Detrimental Noise 1. Gil Kalai says: I added a link to an arxived version of the paper.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 29, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9229846000671387, "perplexity_flag": "middle"}
http://en.wikiversity.org/wiki/Primary_mathematics/Proportions
# Primary mathematics/Proportions From Wikiversity Educational level: this is a primary education resource. Educational level: this is a secondary education resource. Ratios are related to fractions: in a ratio $x:y$, the first part makes up $\frac{x}{x+y}$ of the whole. Remember, the ratio x:y does not equal the fraction $\frac{x}{y}$. For example, if the ratio of boys to girls in a classroom is 1:2, that does not mean half of the class is boys and the other half is girls. It means that $\frac{1}{3}$ of the class is boys and $\frac{2}{3}$ of the class is girls. (But that means that the number of boys is half the number of girls.) But remember that the fraction only applies to the sum of the two sides $\left(x+y\right)$, and can be wrong if there are more than two kinds of things. For example, if there is a fruit basket that has 16 fruits, and the ratio of apples to oranges is 3:1, that doesn't always mean that there are 12 apples $\left(\frac{3}{4} \times 16\right)$ and 4 oranges $\left(\frac{1}{4} \times 16\right)$. There could be 12 bananas in the fruit basket, which means that there are only 3 apples and one orange.
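A tiny program makes the same point (a sketch of my own, assuming Python). It only covers the two groups named in the ratio, exactly as the fruit-basket example above warns.

```python
from fractions import Fraction

def share_of_total(x, y):
    """Fraction of the whole made up by the first part of a ratio x:y
    (valid only when the x-part and the y-part together make up everything)."""
    return Fraction(x, x + y)

print(share_of_total(1, 2))   # 1/3 of the class is boys when boys:girls = 1:2
print(share_of_total(2, 1))   # 2/3 of the class is girls
```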
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 7, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9570141434669495, "perplexity_flag": "head"}
http://unapologetic.wordpress.com/2008/07/22/the-index-of-a-linear-map/?like=1&_wpnonce=401286028f
# The Unapologetic Mathematician ## The Index of a Linear Map Today I want to talk about the index of a linear map $A:V\rightarrow W$, which I’ll motivate in terms of a linear system $Ax=b$ for some linear map $A:\mathbb{F}^m\rightarrow\mathbb{F}^n$. One important quantity is the dimension of the kernel of $A$. In terms of the linear system, this is the dimension of the associated homogenous system $Ax=0$. If there are any solutions of the system under consideration, they will form an affine space of this dimension. But the system is inhomogenous in general, and as such it might not have any solutions. Since every short exact sequence of vector spaces splits we can write $\mathbb{F}^n=\mathrm{Im}(A)\oplus U$. Then the vector $b$ will have some component in the image of $A$, and some component in the complementary subspace $U$. Note that $U$ is not canonical here. I’m just asserting that some such subspace exists so we can make this decomposition. Now the condition that the system have any solutions is that the component of $b$ in $U$ vanishes. But we can pick some basis for $U$ and think of each component of $b$ with respect to each of those basis elements vanishing. That is, we must apply a number of linear conditions equal to the dimension of $U$ in order to know that the system has any solutions at all. And once it does, it will have $\dim(\mathrm{Ker}(A))$ of them. But what happens if we quotient out $\mathbb{F}^n$ by $\mathrm{Im}(A)$? We get the cokernel $\mathrm{Cok}(A)$, which must then be isomorphic to $U$! So the number of conditions we need to apply to $b$ is the dimension of the cokernel. Now I’ll define the index $\mathrm{Ind}(A)$ of the linear map $A$ to be the difference between the dimension of the kernel and the dimension of the cokernel. In the more general case, if one of these dimensions is infinite we may get an infinite index. But often we’ll restrict to the case of “finite index” operators, where the difference works out to be a well-defined finite number. Of course, when dealing with linear systems it’s guaranteed to be finite, but there are a number of results out there for the finite index case. In fact, this is pretty much what a “Fredholm operator” is, if we ever get to that. Anyhow, we’ve got this definition: $\mathrm{Ind}(A)=\dim(\mathrm{Ker}(A))-\dim(\mathrm{Cok}(A))$ Now let’s add and subtract the dimension of the image of $A$: $\mathrm{Ind}(A)=\left(\dim(\mathrm{Ker}(A))+\dim(\mathrm{Im}(A))\right)-\left(\dim(\mathrm{Cok}(A))+\dim(\mathrm{Im}(A))\right)$ Clearly the dimension of the image and the dimension of the cokernel add up to the dimension of the target space. But notice also that the rank-nullity theorem tells us that the dimension of the kernel and the dimension of the image add up to the dimension of the source space! That is, we have the equality $\mathrm{Ind}(A)=\dim(V)-\dim(W)$ What happened here? We started with an analytic definition in terms of describing solutions to a system of equations, and we ended up with a geometric formula in terms of the dimensions of two vector spaces. What’s more, the index doesn’t really depend much on the particulars of $A$ at all. Any two linear maps between the same pair of vector spaces will have the same index! And this gives us a simple tradeoff: for every dimension we add to the space of solutions, we have to add a linear condition to restrict the vectors $b$ for which there are any solutions at all. Alternately, what happens when we add a new equation to a system? 
With a new equation the dimension of the target space goes up, and so the index goes down. One very common way for this to occur is for the dimension of the solution space to drop. This gives rise to our intuition that each new equation reduces the number of independent solutions by one, until we have exactly as many equations as variables. Posted by John Armstrong | Algebra, Linear Algebra ## 3 Comments » 1. [...] really interested in is an isomorphism. Such a map has no kernel and no cokernel, and so its index is definitely zero. If it weren’t clear enough already, this shows that isomorphic vector [...] Pingback by | July 23, 2008 | Reply 2. [...] , and we couldn’t tell where in to send them under the inverse map. This tells us that the index of an isomorphism must be zero, and thus that the vector spaces must have the same dimension. This [...] Pingback by | October 17, 2008 | Reply 3. [...] to be invertible? Its image must miss some vectors in . That is, we have a nontrivial kernel. The index of is zero, so a trivial kernel would mean a trivial cokernel. We would then have a one-to-one and [...] Pingback by | January 14, 2009 | Reply
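A quick numerical check of the post's formula $\mathrm{Ind}(A)=\dim(V)-\dim(W)$, via the rank-nullity theorem. This is my own sketch, not part of the post; it assumes numpy and identifies a linear map with its matrix.

```python
import numpy as np

def index(A):
    """Ind(A) = dim ker(A) - dim coker(A) for a linear map A: R^m -> R^n."""
    n, m = A.shape                      # A maps R^m (source) to R^n (target)
    rank = np.linalg.matrix_rank(A)
    dim_ker = m - rank                  # rank-nullity
    dim_coker = n - rank                # codimension of the image
    return dim_ker - dim_coker

rng = np.random.default_rng(1)
for n, m in [(3, 5), (5, 3), (4, 4)]:
    A = rng.standard_normal((n, m))
    print(index(A), m - n)              # the two numbers agree: Ind(A) = dim V - dim W
```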
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 29, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9259470105171204, "perplexity_flag": "head"}
http://www.math.uah.edu/stat/urn/MultiHypergeometric.html
$$\newcommand{\P}{\mathbb{P}}$$ $$\newcommand{\E}{\mathbb{E}}$$ $$\newcommand{\R}{\mathbb{R}}$$ $$\newcommand{\N}{\mathbb{N}}$$ $$\newcommand{\bs}{\boldsymbol}$$ $$\newcommand{\var}{\text{var}}$$ $$\newcommand{\cov}{\text{cov}}$$ $$\newcommand{\cor}{\text{cor}}$$ ## 3. The Multivariate Hypergeometric Distribution ### Basic Theory As in the basic sampling model, we start with a finite population $$D$$ consisting of $$m$$ objects. In this section, we suppose in addition that each object is one of $$k$$ types; that is, we have a multi-type population. For example, we could have an urn with balls of several different colors, or a population of voters who are either democrat, republican, or independent. Let $$D_i$$ denote the subset of all type $$i$$ objects and let $$m_i = \#(D_i)$$ for $$i \in \{1, 2, \ldots, k\}$$. Thus $$D = \bigcup_{i=1}^k D_i$$ and $$m = \sum_{i=1}^k m_i$$. The dichotomous model considered earlier is clearly a special case, with $$k = 2$$. As in the basic sampling model, we sample $$n$$ objects at random from $$D$$. Thus the outcome of the experiment is $$\bs{X} = (X_1, X_2, \ldots, X_n)$$ where $$X_i \in D$$ is the $$i$$th object chosen. Now let $$Y_i$$ denote the number of type $$i$$ objects in the sample, for $$i \in \{1, 2, \ldots, k\}$$. Note that $$\sum_{i=1}^k Y_i = n$$ so if we know the values of $$k - 1$$ of the counting variables, we can find the value of the remaining counting variable. As with any counting variable, we can express $$Y_i$$ as a sum of indicator variables: For $$i \in \{1, 2, \ldots, k\}$$ $Y_i = \sum_{j=1}^n \bs{1}(X_j \in D_i)$ We assume initially that the sampling is without replacement, since this is the realistic case in most applications. #### The Joint Distribution Basic combinatorial arguments can be used to derive the probability density function of the random vector of counting variables. Recall that since the sampling is without replacement, the unordered sample is uniformly distributed over the combinations of size $$n$$ chosen from $$D$$. The probability density funtion of $$(Y_1, Y_2, \ldots, Y_k)$$ is given by $\P(Y_1 = y_1, Y_2 = y_2, \ldots, Y_k = y_k) = \frac{\binom{m_1}{y_1} \binom{m_2}{y_2} \cdots \binom{m_k}{y_k}}{\binom{m}{n}}, \quad (y_1, y_2, \ldots, y_k) \in \N^k \text{ with } \sum_{i=1}^k y_i = n$ Proof: The binomial coefficient $$\binom{m_i}{y_i}$$ is the number of unordered subsets of $$D_i$$ (the type $$i$$ objects) of size $$y_i$$. The binomial coefficient $$\binom{m}{n}$$ is the number of unordered samples of size $$n$$ chosen from $$D$$. Thus the result follows from the multiplication principle of combinatorics and the uniform distribution of the unordered sample The distribution of $$(Y_1, Y_2, \ldots, Y_k)$$ is called the multivariate hypergeometric distribution with parameters $$m$$, $$(m_1, m_2, \ldots, m_k)$$, and $$n$$. We also say that $$(Y_1, Y_2, \ldots, Y_{k-1})$$ has this distribution (recall again that the values of any $$k - 1$$ of the variables determines the value of the remaining variable). Usually it is clear from context which meaning is intended. The ordinary hypergeometric distribution corresponds to $$k = 2$$. 
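The joint probability density function above translates directly into code. The following is a small sketch of my own (assuming Python 3.8+ for `math.comb` and `math.prod`); the urn numbers are made up purely for illustration.

```python
from math import comb, prod

def mv_hypergeom_pmf(y, m_types, n):
    """P(Y_1 = y_1, ..., Y_k = y_k) for sampling n objects without replacement
    from a population with m_types[i] objects of type i."""
    if sum(y) != n or any(yi > mi for yi, mi in zip(y, m_types)):
        return 0.0
    m = sum(m_types)
    return prod(comb(mi, yi) for mi, yi in zip(m_types, y)) / comb(m, n)

# Example: an urn with 5 red, 3 green, 2 blue balls; draw 4 without replacement.
print(mv_hypergeom_pmf((2, 1, 1), (5, 3, 2), 4))   # C(5,2)C(3,1)C(2,1)/C(10,4) = 60/210

# Sanity check: the probabilities over all possible counts sum to 1.
total = sum(mv_hypergeom_pmf((a, b, 4 - a - b), (5, 3, 2), 4)
            for a in range(5) for b in range(4) if 0 <= 4 - a - b)
print(total)   # 1.0 (up to floating point)
```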
An alternate form of the probability density function of $$Y_1, Y_2, \ldots, Y_k)$$ is $\P(Y_1 = y_1, Y_2 = y_2, \ldots, Y_k = y_k) = \binom{n}{y_1, y_2, \ldots, y_k} \frac{m_1^{(y_1)} m_2^{(y_2)} \cdots m_k^{(y_k)}}{m^{(n)}}, \quad (y_1, y_2, \ldots, y_k) \in \N_k \text{ with } \sum_{i=1}^k y_i = n$ Proof: The combinatorial proof is to consider the ordered sample, which is uniformly distributed on the set of permutations of size $$n$$ from $$D$$. The multinomial coefficient on the right is the number of ways to partition the index set $$\{1, 2, \ldots, n\}$$ into $$k$$ groups where group $$i$$ has $$y_i$$ elements (these are the coordinates of the type $$i$$ objects). The number of (ordered) ways to select the type $$i$$ objects is $$m_i^{(y_i)}$$. The denominator $$m^{(n)}$$ is the number of ordered samples of size $$n$$ chosen from $$D$$. There is also a simple algebraic proof, starting from the probability density function in Exercise 2. Write each binomial coefficient $$\binom{a}{j} = a^{(j)}/j!$$ and rearrange a bit. #### The Marginal Distributions For $$i \in \{1, 2, \ldots, k\}$$, $$Y_i$$ has the hypergeometric distribution with parameters $$m$$, $$m_i$$, and $$n$$. $\P(Y_i = y) = \frac{\binom{m_i}{y} \binom{m - m_i}{n - y}}{\binom{m}{n}}, \quad y \in \{0, 1, \ldots, n\}$ Proof: An analytic proof is possible, by starting with the joint PDF in Exercise 2 or 3 and summing over the unwanted variables. However, a probabilistic proof is much better: $$Y_i$$ is the number of type $$i$$ objects in a sample of size $$n$$ chosen at random (and without replacement) from a population of $$m$$ objects, with $$m_i$$ of type $$i$$ and the remaining $$m - m_i$$ not of this type. #### Grouping The multivariate hypergeometric distribution is preserved when the counting variables are combined. Specifically, suppose that $$(A_1, A_2, \ldots, A_l)$$ is a partition of the index set $$\{1, 2, \ldots, k\}$$ into nonempty, disjoint subsets. Let $$W_j = \sum_{i \in A_j} Y_i$$ and $$r_j = \sum_{i \in A_j} m_i$$ for $$j \in \{1, 2, \ldots, l\}$$ $$(W_1, W_2, \ldots, W_l)$$ has the multivariate hypergeometric distribution with parameters $$m$$, $$(r_1, r_2, \ldots, r_l)$$, and $$n$$. Proof: Again, an analytic proof is possible, but a probabilistic proof is much better. Effectively, we now have a population of $$m$$ objects with $$l$$ types, and $$r_i$$ is the number of objects of the new type $$i$$. As before we sample $$n$$ objects without replacement, and $$W_i$$ is the number of objects in the sample of the new type $$i$$. Note that the marginal distribution of $$Y_i$$ in Exercise 4 is a special case of grouping. We have two types: type $$i$$ and not type $$i$$. More generally, the marginal distribution of any subsequence of $$(Y_1, Y_2, \ldots, Y_n)$$ is hypergeometric, with the appropriate parameters. #### Conditioning The multivariate hypergeometric distribution is also preserved when some of the counting variables are observed. Specifically, suppose that $$(A, B)$$ is a partition of the index set $$\{1, 2, \ldots, k\}$$ into nonempty, disjoint subsets. Suppose that we observe $$Y_j = y_j$$ for $$j \in B$$. Let $$z = n - \sum_{j \in B} y_j$$ and $$r = \sum_{i \in A} m_i$$. The conditional distribution of $$(Y_i: i \in A)$$ given $$(Y_j = y_j: j \in B)$$ is multivariate hypergeometric with parameters $$r$$, $$(m_i: i \in A)$$, and $$z$$. Proof: Once again, an analytic argument is possible using the definition of conditional probability and the appropriate joint distributions. 
A probabilistic argument is much better. Effectively, we are selecting a sample of size $$z$$ from a population of size $$r$$, with $$m_i$$ objects of type $$i$$ for each $$i \in A$$. Combinations of the basic results in Exercise 5 and Exercise 6 can be used to compute any marginal or conditional distributions of the counting variables. #### Moments We will compute the mean, variance, covariance, and correlation of the counting variables. Results from the hypergeometric distribution and the representation in terms of indicator variables in Exercise 1 are the main tools. For $$i \in \{1, 2, \ldots, k\}$$, 1. $$\E(Y_i) = n \frac{m_i}{m}$$ 2. $$\var(Y_i) = n \frac{m_i}{m}\frac{m - m_i}{m} \frac{m-n}{m-1}$$ Proof: This follows immediately, since $$Y_i$$ has the hypergeometric distribution with parameters $$m$$, $$m_i$$, and $$n$$. Now let $$I_{t,i} = \bs{1}(X_t \in D_i)$$, the indicator variable of the event that the $$t$$th object selected is type $$i$$, for $$t \in \{1, 2, \ldots, n\}$$ and $$i \in \{1, 2, \ldots, k\}$$. Suppose that $$r$$ and $$s$$ are distinct elements of $$\{1, 2, \ldots, n\}$$, and $$i$$ and $$j$$ are distinct elements of $$\{1, 2, \ldots, k\}$$. Then $\begin{align} \cov(I_{r,i}, I_{r,j}) & = -\frac{m_i}{m} \frac{m_j}{m}\\ \cov(I_{r,i}, I_{s,j}) & = \frac{1}{m - 1} \frac{m_i}{m} \frac{m_j}{m} \end{align}$ Proof: Recall that if $$A$$ and $$B$$ are events, then $$\cov(A, B) = \P(A \cap B) - \P(A) \P(B)$$. In the first case the events are that sample item $$r$$ is type $$i$$ and that sample item $$r$$ is type $$j$$. These events are disjoint, and the individual probabilities are $$\frac{m_i}{m}$$ and $$\frac{m_j}{m}$$. In the second case, the events are that sample item $$r$$ is type $$i$$ and that sample item $$s$$ is type $$j$$. The probability that both events occur is $$\frac{m_i}{m} \frac{m_j}{m-1}$$ while the individual probabilities are the same as in the first case. Suppose again that $$r$$ and $$s$$ are distinct elements of $$\{1, 2, \ldots, n\}$$, and $$i$$ and $$j$$ are distinct elements of $$\{1, 2, \ldots, k\}$$. Then $\begin{align} \cor(I_{r,i}, I_{r,j}) & = -\sqrt{\frac{m_i}{m - m_i} \frac{m_j}{m - m_j}} \\ \cor(I_{r,i}, I_{s,j}) & = \frac{1}{m - 1} \sqrt{\frac{m_i}{m - m_i} \frac{m_j}{m - m_j}} \end{align}$ Proof: This follows from the previous result and the definition of correlation. Recall that if $$I$$ is an indicator variable with parameter $$p$$ then $$\var(I) = p (1 - p)$$. In particular, $$I_{r,i}$$ and $$I_{r,j}$$ are negatively correlated while $$I_{r,i}$$ and $$I_{s,j}$$ are positively correlated. For distinct $$i, \; j \in \{1, 2, \ldots, k\}$$, $\begin{align} \cov(Y_i, Y_j) = & -n \frac{m_i}{m} \frac{m_j}{m} \frac{m - n}{m - 1}\\ \cor(Y_i, Y_j) = & -\sqrt{\frac{m_i}{m - m_i} \frac{m_j}{m - m_j}} \end{align}$ #### Sampling with Replacement Suppose now that the sampling is with replacement, even though this is usually not realistic in applications. The types of the objects in the sample form a sequence of $$n$$ multinomial trials with parameters $$(m_1 / m, m_2 / m, \ldots, m_k / m)$$. The following results now follow immediately from the general theory of multinomial trials, although modifications of the arguments above could also be used. 
$$(Y_1, Y_2, \ldots, Y_k)$$ has the multinomial distribution with parameters $$n$$ and $$(m_1 / m, m_2, / m, \ldots, m_k / m)$$: $\P(Y_1 = y_1, Y_2 = y_2, \ldots, Y_k = y_k) = \binom{n}{y_1, y_2, \ldots, y_k} \frac{m_1^{y_1} m_2^{y_2} \cdots m_k^{y_k}}{m^n}, \quad (y_1, y_2, \ldots, y_k) \in \N^k \text{ with } \sum_{i=1}^k y_i = n$ For distinct $$i, \; j \in \{1, 2, \ldots, k\}$$, 1. $$\E(Y_i) = n \frac{m_i}{m}$$ 2. $$\var(Y_i) = n \frac{m_i}{m} \frac{m - m_i}{m}$$ 3. $$\cov(Y_i, Y_j) = -n \frac{m_i}{m} \frac{m_j}{m}$$ 4. $$\cor(Y_i, Y_j) = -\sqrt{\frac{m_i}{m - m_i} \frac{m_j}{m - m_j}}$$ Comparing with our previous results, note that the means and correlations are the same, whether sampling with or without replacement. The variances and covariances are smaller when sampling without replacement, by a factor of the finite population correction factor $$(m - n) / (m - 1)$$ #### Convergence to the Multinomial Distribution Suppose that the population size $$m$$ is very large compared to the sample size $$n$$. In this case, it seems reasonable that sampling without replacement is not too much different than sampling with replacement, and hence the multivariate hypergeometric distribution should be well approximated by the multinomial. The following exercise makes this observation precise. Practically, it is a valuable result, since in many cases we do not know the population size exactly. For the approximate multinomial distribution, we do not need to know $$m_i$$ and $$m$$ individually, but only in the ratio $$m_i / m$$. Suppose that $$m_i$$ depends on $$m$$ and that $$m_i / m \to p_i$$ as $$m \to \infty$$ for $$i \in \{1, 2, \ldots, k\}$$. For fixed $$n$$, the multivariate hypergeometric probability density function with parameters $$m$$, $$(m_1, m_2, \ldots, m_k)$$, and $$n$$ converges to the multinomial probability density function with parameters $$n$$ and $$(p_1, p_2, \ldots, p_k)$$. Proof: Consider the hypergeometric probability density function in Exercise 3. In the fraction, there are $$n$$ factors in the denominator and $$n$$ in the numerator. If we group the factors to form a product of $$n$$ fractions, then each fraction in group $$i$$ converges to $$p_i$$. ### Examples and Applications A population of 100 voters consists of 40 republicans, 35 democrats and 25 independents. A random sample of 10 voters is chosen. Find each of the following: 1. The joint density function of the number of republicans, number of democrats, and number of independents in the sample 2. The mean of each variable in (a). 3. The variance of each variable in (a). 4. The covariance of each pair of variables in (a). 5. The probability that the sample contains at least 4 republicans, at least 3 democrats, and at least 2 independents. Answer: 1. $$\P(X = x, Y = y, Z = z) = \frac{\binom{40}{x} \binom{35}{y} \binom{25}{z}}{\binom{100}{10}}$$ for $$x, \; y, \; z \in \N$$ with $$x + y + z = 10$$ 2. $$\E(X) = 4$$, $$\E(Y) = 3.5$$, $$\E(Z) = 2.5$$ 3. $$\var(X) = 2.1818$$, $$\var(Y) = 2.0682$$, $$\var(Z) = 1.7045$$ 4. $$\cov(X, Y) = -1.6346$$, $$\cov(X, Z) = -0.9091$$, $$\cov(Y, Z) = -0.7955$$ 5. 0.2474 #### Cards Recall that the general card experiment is to select $$n$$ cards at random and without replacement from a standard deck of 52 cards. The special case $$n = 5$$ is the poker experiment and the special case $$n = 13$$ is the bridge experiment. In a bridge hand, find the probability density function of 1. The number of spades, number of hearts, and number of diamonds. 2. 
The number of spades and number of hearts. 3. The number of spades. 4. The number of red cards and the number of black cards. Answer: Let $$X$$, $$Y$$, $$Z$$, $$U$$, and $$V$$ denote the number of spades, hearts, diamonds, red cards, and black cards, respectively, in the hand. 1. $$\P(X = x, Y = y, Z = z) = \frac{\binom{13}{x} \binom{13}{y} \binom{13}{z}\binom{13}{13 - x - y - z}}{\binom{52}{13}}$$ for $$x, \; y, \; z \in \N$$ with $$x + y + z \le 13$$ 2. $$\P(X = x, Y = y) = \frac{\binom{13}{x} \binom{13}{y} \binom{26}{13-x-y}}{\binom{52}{13}}$$ for $$x, \; y \in \N$$ with $$x + y \le 13$$ 3. $$\P(X = x) = \frac{\binom{13}{x} \binom{39}{13-x}}{\binom{52}{13}}$$ for $$x \in \{0, 1, \ldots 13\}$$ 4. $$\P(U = u, V = v) = \frac{\binom{26}{u} \binom{26}{v}}{\binom{52}{13}}$$ for $$u, \; v \in \N$$ with $$u + v = 13$$ In a bridge hand, find each of the following: 1. The mean and variance of the number of spades. 2. The covariance and correlation between the number of spades and the number of hearts. 3. The mean and variance of the number of red cards. Answer: Let $$X$$, $$Y$$, and $$U$$ denote the number of spades, hearts, and red cards, respectively, in the hand. 1. $$\E(X) = \frac{13}{4}$$, $$\var(X) = \frac{507}{272}$$ 2. $$\cov(X, Y) = -\frac{169}{272}$$ 3. $$\E(U) = \frac{13}{2}$$, $$\var(U) = \frac{169}{272}$$ In a bridge hand, find each of the following: 1. The conditional probability density function of the number of spades and the number of hearts, given that the hand has 4 diamonds. 2. The conditional probability density function of the number of spades given that the hand has 3 hearts and 2 diamonds. Answer: Let $$X$$, $$Y$$ and $$Z$$ denote the number of spades, hearts, and diamonds respectively, in the hand. 1. $$\P(X = x, Y = y, \mid Z = 4) = \frac{\binom{13}{x} \binom{13}{y} \binom{22}{9-x-y}}{\binom{48}{9}}$$ for $$x, \; y \in \N$$ with $$x + y \le 9$$ 2. $$\P(X = x \mid Y = 3, Z = 2) = \frac{\binom{13}{x} \binom{34}{8-x}}{\binom{47}{8}}$$ for $$x \in \{0, 1, \ldots, 8\}$$ In the card experiment, a hand that does not contain any cards of a particular suit is said to be void in that suit. Use the inclusion-exclusion rule to show that the probability that a poker hand is void in at least one suit is $\frac{1913496}{2598960} \approx 0.736$ In the card experiment, set $$n = 5$$. Run the simulation 1000 times and compute the relative frequency of the event that the hand is void in at least one suit. Compare the relative frequency with the true probability given in the previous exercise. Use the inclusion-exclusion rule to show that the probability that a bridge hand is void in at least one suit is $\frac{32427298180}{635013559600} \approx 0.051$
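The two inclusion-exclusion exercises above are easy to check by machine. Here is a short sketch of my own (assuming Python 3.8+ for `math.comb`); the printed values reproduce the two fractions quoted in the exercises.

```python
from math import comb

def prob_void_in_some_suit(hand_size):
    """P(a hand of the given size misses at least one suit), by inclusion-exclusion."""
    total = comb(52, hand_size)
    missing = sum((-1) ** (j + 1) * comb(4, j) * comb(52 - 13 * j, hand_size)
                  for j in range(1, 4))
    return missing / total

print(prob_void_in_some_suit(5))    # ≈ 0.736  (poker: 1913496 / 2598960)
print(prob_void_in_some_suit(13))   # ≈ 0.051  (bridge: 32427298180 / 635013559600)
```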
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 10, "mathjax_display_tex": 235, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9030240178108215, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/118667/prove-that-n2003n1-is-composite-for-every-n-in-mathbbn-backslash-1?answertab=oldest
# Prove that $n^{2003}+n+1$ is composite for every $n\in \mathbb{N} \backslash\{1\}$ Prove that $n^{2003}+n+1$ is composite for every $n\in \mathbb{N} \backslash\{1\}$. I tried expanding $n^{2003}+1$, but I got nothing useful. I also couldn't get any improvement, let alone a contradiction, from assuming $n^{2003}+n+1=pq$ where $p,q\not= 1$. How should I do this, and are there general tips on how to approach these problems, what to think about? - The fact that $2003$ is a prime makes this question harder. I tried for $n=2$ and the last two digits are $11$. Not sure if you could just focus on the last few digits and show they are composite – Kirthi Raman Mar 10 '12 at 21:39 The numbers are too huge to focus on last digits only, I believe. – Lazar Ljubenović Mar 10 '12 at 21:50 What is the source of the problem? – Aryabhata Mar 10 '12 at 22:03 Serbian sub-regional competition 2004. – Lazar Ljubenović Mar 10 '12 at 22:06 All I know is if $p$ is prime then $(1+x)^{p} \equiv 1+x^{p} \pmod p$ – Kirthi Raman Mar 10 '12 at 22:06 show 1 more comment ## 2 Answers Let $w=e^{i2\pi/3}$. It's easy to see that $w$ and $w^2$ are the roots of $x^2+x+1$ and are also roots of $x^{2003}+x+1$, therefore $x^2+x+1|x^{2003}+x+1$. So we have that $x^{2003}+x+1=(x^2+x+1)P(x)$, where $P(x)$ is some polynomial with integer coefficients. For $x\ge 2$, $x^{2003}+x+1$ is much bigger than $x^2+x+1$, so $P(x)$ is an integer greater than $2$, from which the conclusion follows. - 2 Can you give some insight on how you found this. (Interesting approach) – Kirthi Raman Mar 10 '12 at 22:16 @KirthiRaman I noticed that $2003\equiv 2\pmod{3}$ and that the polynomial $n^{2003}+n+1$ has only 1's and 0's as coefficients. This approach can be generalized to polynomials of the form $x^m+x^{m-1}+...+x+1$, for example $1+x+x^2+x^3|x^{23}+x^6+x+1$ – xD13G0x Mar 10 '12 at 22:21 @Kirthi You can find a couple algebraic derivations in my answer. – Gone Mar 10 '12 at 23:38 Thanks to both Bill Dubuque and Diego S. (I am catching up with what I have forgotten all these years) – Kirthi Raman Mar 11 '12 at 1:05 Hint $\rm\ f = x^{3n+2}+x+1\ = \ x^2\:(x^{3n}-1) + x^2+x+1\$ therefore $\rm\:x^2+x+1\ |\ x^3-1\ |\ x^{3n}-1\:\Rightarrow\: x^2+x+1\ |\ f$ Or, $\rm\ mod\ x^2+x+1\!:\ x^3\equiv 1\ \Rightarrow\ x^{3n+2}+x+1\equiv (x^3)^n x^2 + x + 1 \equiv x^2+x+1\equiv 0$ -
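The algebraic identity is easy to sanity-check numerically, since Python handles the large integers directly. A small sketch of my own, not part of the original thread:

```python
# Check the factor n^2 + n + 1 numerically for small n (plain Python integers).
for n in range(2, 12):
    value = n**2003 + n + 1
    divisor = n**2 + n + 1
    assert value % divisor == 0 and 1 < divisor < value
print("n^2003 + n + 1 is divisible by the proper factor n^2 + n + 1 for n = 2, ..., 11")
```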
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 32, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9397120475769043, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/71537/derivation-of-chi-squared-pdf-with-one-degree-of-freedom-from-normal-distributio
# Derivation of chi-squared pdf with one degree of freedom from the normal distribution pdf

How can we derive the chi-squared probability density function (pdf) using the pdf of the normal distribution? I mean, I need to show that $$f(x)=\frac{1}{2^{r/2}\Gamma(r/2)}x^{r/2-1}e^{-x/2} \>, \qquad x > 0\>.$$

- The way you wrote this makes no sense. Did you mean $f_Y(x)$ rather than $f(Y)$? And it makes no sense to say "where X is..." when the foregoing statement doesn't mention anything called X. – Michael Hardy Oct 10 '11 at 21:14
- @Harald: Please do not deface your question, I have undone your edits. There is still value in the question, perhaps someone else with the same confusion will come along and be helped by this post. – Zev Chonoles♦ Oct 10 '11 at 22:49
- I have tried to restate some things in the question. However, the OP is still encouraged to edit it further to clarify, as only they know what precisely they are trying to ask. – cardinal Oct 11 '11 at 1:37

## 2 Answers

The way the question is expressed is a mess, but I'll assume it means this: if $X\sim N(0,1)$, how do you find the pdf of $X^2$? Here's one way. Remember that the pdf of $X$ is $$\varphi(x) = \frac{1}{\sqrt{2\pi}} e^{-x^2/2}.$$ Let $f$ be the pdf of $X^2$. Then $$\begin{align} f(x) & = \frac{d}{dx} \Pr(X^2 \le x) = \frac{d}{dx} \Pr(-\sqrt{x}\le X\le\sqrt{x}) \\ & = \frac{d}{dx} \frac{1}{\sqrt{2\pi}} \int_{-\sqrt{x}}^\sqrt{x} e^{-u^2/2} \;du = \frac{2}{\sqrt{2\pi}}\frac{d}{dx} \int_0^\sqrt{x} e^{-u^2/2} \;du \\ & = \frac{2}{\sqrt{2\pi}} e^{-(\sqrt{x})^2/2} \frac{d}{dx} \sqrt{x} = \frac{2}{\sqrt{2\pi}} e^{-x/2} \frac{1}{2\sqrt{x}} \\ & = \frac{e^{-x/2}}{\sqrt{2\pi x}}. \end{align}$$ Sometimes it might be written as $\dfrac{1}{\sqrt{2\pi}} x^{\frac12 - 1}e^{-x/2}$ so that you can see how it resembles the function involved in defining the Gamma function. Your title said 1 degree of freedom. But what you write seems to allow $r$ to be some number other than 1. If you want to do that, then there's more work to do.

If $X \sim N(\mu, \Sigma)$ with $(\mu, \Sigma) \neq (0, I)$, the result you wish to prove does not hold: even if the random variables are independent but have nonzero means, you get a non-central $\chi^2$ pdf, which is not what you are trying to show. If $X_1, \ldots, X_n$ are independent standard normal random variables, then $X_i^2$ has a Gamma distribution with scale parameter $\frac{1}{2}$ and order parameter $\frac{1}{2}$. Then $\sum_{i=1}^n X_i^2$ is a sum of $n$ independent Gamma random variables, each with scale parameter $\frac{1}{2}$ and order parameter $\frac{1}{2}$, and is thus a Gamma random variable with scale parameter $\frac{1}{2}$ and order parameter $\frac{n}{2}$.

- Sorry I mean X, not Y. And in the sense that X is normally distributed. I see your point, but I need a more mathematically rigorous derivation, I'm afraid. – Harald Oct 10 '11 at 20:59
- @Harald What exactly do you find nonrigorous about the answer I provided? Or do you mean that you need to have full and complete details about how the two assertions in the two sentences are to be proved individually? These are detailed, for example, in the wikipedia link that leonbloy provided in response to your question, and can also be found in most textbooks on probability and statistics. – Dilip Sarwate Oct 10 '11 at 21:39
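A quick way to convince yourself of the result (an illustrative check added here, not from the original thread) is to simulate: square standard normal draws and compare their empirical CDF with $\Pr(X^2 \le x) = \operatorname{erf}(\sqrt{x/2})$, which is what integrating the derived density gives. Using only the Python standard library:

```python
import bisect
import math
import random

random.seed(0)
# 200,000 simulated values of X^2 with X ~ N(0, 1), sorted for fast CDF lookups
squares = sorted(random.gauss(0.0, 1.0) ** 2 for _ in range(200_000))

def empirical_cdf(x):
    # fraction of simulated X^2 values that are <= x
    return bisect.bisect_right(squares, x) / len(squares)

def chi2_df1_cdf(x):
    # P(X^2 <= x) = P(-sqrt(x) <= X <= sqrt(x)) = erf(sqrt(x/2)) for X ~ N(0, 1)
    return math.erf(math.sqrt(x / 2.0))

for x in (0.1, 0.5, 1.0, 2.0, 4.0):
    print(f"x = {x}: empirical {empirical_cdf(x):.4f} vs exact {chi2_df1_cdf(x):.4f}")
```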
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 21, "mathjax_display_tex": 3, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9278734922409058, "perplexity_flag": "head"}
http://physics.stackexchange.com/questions/23811/whats-wrong-with-this-equation-for-harmonic-oscillation
# What's wrong with this equation for harmonic oscillation? The question: A particle moving along the x axis in simple harmonic motion starts from its equilibrium position, the origin, at t = 0 and moves to the right. The amplitude of its motion is 1.70 cm, and the frequency is 1.10 Hz. Find an expression for the position of the particle as a function of time. (Use the following as necessary: t, and π.) Using the equations: $$x(t) = A \cos(\omega t + \Phi)$$ $$\omega = 2\pi f$$ I get A = 1.7cm or 0.017m, and $$\omega = 6.91$$ I know that t = 0, x = 0. Thus, $$0 = 0.017 \cos(\Phi )$$ And therefore, $$\Phi = \pi / 2$$ From all of this, it seems to me that the equation for position with respect to time should be: $$x = 0.017 \cos(6.91t + \pi/2)$$ Am I doing something wrong, because the above is not getting checked as the right answer (it's an online homework) - Is there some concept in here that you're not sure of, which you think might be responsible for the error? – David Zaslavsky♦ Apr 15 '12 at 22:28 @DavidZaslavsky: I got it, finally. My phase was off by Pi, and the answer required the unit of Amplitude to be in cm. I was using m. – xbonez Apr 15 '12 at 22:38 ## 1 Answer The cosine has more than one zero. And the text specifies that the particle goes to the right (I assume that the x axis also goes to the right). Now in which direction does the cosine go at $\pi/2$? And where's another zero? - I see your point, and so I tried $$x = 0.017 cos(6.91t + 3 \pi / 2)$$ as well, but it's still wrong! – xbonez Apr 15 '12 at 22:24 Aha!! Got it! They wanted the amplitude in the equation expressed in cm (why?? I assumed they'd want the SI unit). Anyways, the hint about the phase also helped. – xbonez Apr 15 '12 at 22:28
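To see numerically that the resolved answer behaves as required (an illustrative check added here, not part of the original thread): with the phase $-\pi/2$ (equivalently $+3\pi/2$) and the amplitude kept in centimetres, the particle starts at the origin moving in the $+x$ direction.

```python
import math

A = 1.70                      # amplitude in cm
f = 1.10                      # frequency in Hz
omega = 2 * math.pi * f       # about 6.91 rad/s
phi = -math.pi / 2            # same motion as a phase of +3*pi/2

def x(t):
    return A * math.cos(omega * t + phi)            # position in cm

def v(t):
    return -A * omega * math.sin(omega * t + phi)   # velocity in cm/s

print(x(0.0))          # ~0 (starts at the origin, up to floating-point rounding)
print(v(0.0))          # +11.75 cm/s: positive, so it initially moves to the right
print(x(1 / (4 * f)))  # 1.70 cm: a quarter period later it is at maximum displacement
```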
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 7, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9613721966743469, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/31811?sort=oldest
# Feit-Thompson Theorem: The Odd Order Paper

For reference, the Feit-Thompson Theorem states that every finite group of odd order is necessarily solvable. Equivalently, the theorem states that there exist no non-abelian finite simple groups of odd order.

I am well aware of the complexity and length of the proof. However, would it be possible to provide a rough outline of the ideas and techniques in the proof? More specifically, the sub-questions of this question are:

• Are the techniques in this proof purely group-theoretic, or are techniques from other areas of mathematics borrowed? (Such as, for example, other branches of algebra.) In the same vein, how great an influence do the techniques (if any) from number theory and combinatorics have on the proof? (Here "combinatorics" is of course not very specific. I should emphasize that I mean "tools from combinatorics that are pure and solely derived from techniques within the area of combinatorics and that do not require "deep" group theory to derive". Similarly for "number theory".)
• What sorts of "character-free" techniques and ideas exist in the proof? Does a character-free proof of this result exist? (Since I suspect the answer to the latter is in the negative, I am primarily interested in an answer to the former.)
• What are the underlying "intuitions" behind the proof? That is, how does one come up with such a proof, or at least, certain parts of it? This is a rough question of course; "coming up" with things in mathematics is very difficult to describe. However, since the argument is so long, I suspect some sort of inspiration must have driven the proof.
• I have observed in group theory that many arguments naturally divide into "cases", and often the individual cases are easy to tackle and the arguments naturally "flow". Of course, here I speak of arguments whose lengths are no more than a few pages. Does the proof of the Feit-Thompson theorem share the same "structure" as smaller proofs, or is the proof structurally unique?
• How often do explicit "elementwise computations" arise in the proof?
• Is there any hope that one day someone might discover a considerably shorter proof of the Feit-Thompson Theorem? For example, would the existence of a proof of this theorem of fewer than 50 or so pages be likely? (A proof making strong use of the classification of finite simple groups, or any other non-trivial consequence of the Feit-Thompson Theorem, does not count.) If not, why is it so difficult in group theory to provide more concise arguments?

While I have Gorenstein's excellent book entitled Finite Groups at hand, I did not go far enough (when I was reading it) to actually get into the "real meat" of the discussion of the Feit-Thompson theorem; that is, to actually get a sense of the mathematics used to prove the theorem. Nor do I intend to do so in the near future. (Don't get me wrong, I would be really interested to see this proof, but it seems too much unless you intend to research finite group theory or a related area.)

Thank you very much for any answers. I am aware that some aspects of this question are imprecise; I have tried my best to be as clear as possible in some cases, but there might still be possible sources of ambiguity, and I apologize if there are. (If there are, I would appreciate it if you could try to look for the "obvious interpretation".)
Also, I have a relatively strong background in finite group theory (but not a "research-level" background in the area), so feel free to use more complex group-theoretic terminology and ideas if necessary, but if possible, try to give an exposition of the proof that is as elementary as possible. Thanks again!

- The original proof contained lots of character theory and lots of case-by-case analysis. With the general material in Gorenstein presupposed, a shorter and more relaxed proof can be found in the two books "Local Analysis for the Odd Order Theorem" and "Character Theory for the Odd Order Theorem". – Steve D Jul 14 2010 at 9:05
- The algebra textbook by Dummit and Foote has a series of exercises somewhere in it that are supposed to be designed to give one a feeling for the proof. However, I don't think there's any character theory in these exercises, and I can't say how much they really convey about the proof. Still, you might try to find them... – Dan Ramras Jul 14 2010 at 16:27
- The two books cited by Steve D are of course still technical, well beyond Glauberman's short survey: MR1311244 (96h:20036), Bender, Helmut (D-KIEL); Glauberman, George (1-CHI), Local analysis for the odd order theorem. London Mathematical Society Lecture Note Series, 188. Cambridge University Press, Cambridge, 1994. MR1747393, Peterfalvi, Thomas (F-PARIS7), Character theory for the odd order theorem. Translated from the 1986 French original by Robert Sandling and revised by the author. London Mathematical Society Lecture Note Series, 272. Cambridge University Press, Cambridge, 2000. – Jim Humphreys Jul 15 2010 at 13:03
- During my stay at the mathematics department at Kiel, some professor claimed that H. Bender considered quite a lot of the theory in the book he coauthored with G. Glauberman unnecessary for the proof of the odd order theorem. But I didn't ask H. Bender directly to confirm this statement and don't know any details. – Someone Jul 23 2010 at 14:05
- The proof could be simplified if number theorists were able to prove the Feit-Thompson conjecture (en.wikipedia.org/wiki/…). – Someone Jul 23 2010 at 14:06

## 3 Answers

During a discussion at the n-category theory cafe, Stephen Harris sent me this excellent expository article by Glauberman, which goes into a bit more depth than wikipedia.

- It's good to have this online, since the publication occurred in an out-of-the-way conference volume: MR1756828 (2001b:20027) 20D10, Glauberman, George (1-CHI), A new look at the Feit-Thompson odd order theorem. 15th School of Algebra (Portuguese) (Canela, 1998). Mat. Contemp. 16 (1999), 73–92. Glauberman is a reliable source for this area of finite group theory. Another of his distinctions is having as his first Ph.D. student a prominent player in recent Iraqi politics, Ahmad Chalabi (less prominent in mathematics). – Jim Humphreys Jul 14 2010 at 18:55
- This is an extremely helpful comment. Thank you very much! I have printed out the expository article by Glauberman and really like the way it is written, based on my reading of the first few pages. I look forward to reading it in its entirety in the near future! – Amitesh Datta Jul 15 2010 at 14:53

http://en.wikipedia.org/wiki/Odd_order_theorem is worth reading.

- Thanks for your answer!
I did scan through that Wikipedia article prior to asking this question and should probably read it more thoroughly now. It is indeed a very detailed article and does address some aspects of my question. However, there are certain aspects of my question that the article does not address, and certain aspects that the article does not discuss in complete detail. But thanks anyway! – Amitesh Datta Jul 14 2010 at 12:13
- Its validity aside, I pity the student of math sociology whose views would be inordinately coloured by "Perhaps the most revolutionary aspect of the proof was its length: before the Feit-Thompson paper, few arguments in group theory were more than a few pages long and most could be read in a day. Once group theorists realized that such long arguments could work, a series of papers that were several hundred pages long started to appear." – Junkie May 3 2011 at 16:38

I won't presume to attempt a précis of the Feit-Thompson proof. But I would suggest that your question about the hope of finding a much shorter proof is impossible to answer meaningfully. The current answer, backed up by almost 50 years of recent history, is probably "with currently available techniques, there appears to be little prospect of any dramatic shortening of the length of the proof of the odd order theorem." It should also be remembered that many of the currently available accepted techniques of finite group theory were developed to attack this problem, and proved later to be very powerful in a wider context. Many of the techniques are such an integral part of the weaponry of many modern group theorists that they implicitly impose an inevitability and naturality on the structure of the proof of the odd order theorem, complex and forbidding though the details are.

But had the question been asked, say in 1955, "Is there any prospect of proving the solvability of finite groups of odd order in the near future?", the answer likely to be given at the time can only be a matter of speculation (for most of us at any rate), but with the benefit of hindsight we can see at present that to make the prospect of such a proof a reality, many new and innovative techniques had to be developed, and profound new insights brought to bear.

However, it would be a rash mathematician (and one who took little account of the history of the subject) who would pronounce it impossible to find a significantly shorter proof at some point in the future. It might be a safer bet to suggest that a significantly shorter proof would require some genuinely new insights and ideas, but even a statement such as that might eventually be proved to be presumptuous.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9512625932693481, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/tagged/kinematics?page=3&sort=votes&pagesize=15
# Tagged Questions The description of the movement of bodies by their position, velocity, acceleration (and possibly higher time derivatives, such as, jerk) without concern for the underlying dynamics/forces/causes. 2answers 299 views ### Calculating force required to stop bungee jumper Given that: bungee jumper weighs 700N jumps off from a height of 36m needs to stop safely at 32m (4m above ground) unstretched length of bungee cord is 25m Whats the force required to stop the ... 1answer 134 views ### Intercept a moving object Object A can move at 50km/h, wants to intercept object B (currently $15^{\circ}$, east of north from A) moving at 26km/h, $40^{\circ}$ east of north. What angle should A take to intercept B? AB is ... 1answer 114 views ### why does perpendicular motion to the direction of someone' s approach does not affect the distance between them I was reading about the the four turtles/bugs math puzzle Four bugs are at the four corners of a square of side length D. They start walking at constant speed in an anticlockwise direction at all ... 2answers 123 views ### Trains travelling at different speeds towards a station C Here's the question: Consider two stations A and B located 100 kilometer apart. There is a station C, located between A and B. Now trains from station A and B start moving towards station C at ... 2answers 368 views ### Sign of Velocity for a Falling Object I'm working on a homework problem in Mathematica. We have to graph the height and the velocity of a function given an initial height and initial velocity. However, when I do the graph for the velocity ... 2answers 74 views ### Acceleration: Value Disparity? If we consider a ball moving at an acceleration of $5ms^{-2}$, over a time of 4 seconds, the distance covered by the ball in the first second is $5m$. In the 2nd second will $5 + 5 = 10m$. In the ... 2answers 377 views ### 2d simulation shooting projectiles that inherit the gun's veocity at a moving target while also moving Overview: I'm working on programming a simulation that requires 'shooting' projectile-type objects at other moving objects. How can I calculate the angle at which to shoot the object to hit? Details: ... 3answers 4k views ### Find radius of curvature, given a velocity vector and acceleration magnitude? The particle P moves along a space curve. At one instant it has velocity $v = (4i-2j-k)$ $m/s$. The magnitude of the acceleration is 8 $m/s^2$. The angle between the acceleration and the velocity ... 2answers 215 views ### Running: Determine how much more energy is needed per extra kilogram of weight (I recently asked this on maths but was directed here) I have recently become a runner and having a keen interest in kinematics I'm very interested in the maths/physics of my running. Can someone ... 1answer 1k views ### Solving straight-line motion question for time I apologise in advance if this question doesn't appeal to the advanced questions being asked in this Physics forum, but I'm a great fan of the Stack Exchange software and would trust the answers ... 2answers 139 views ### Projectile Motion with Drag The overall goal is to write a Mathematica program that will compute the launch angle that will yield the greatest range with using [RandomInt] function, but I was having trouble with the physics. In ... 
2answers 110 views ### Calculating Average Velocity I understand that the concept of an average of a data list means finding a certain value 'x', which ensures that the sum of the deviations of the numbers on the left of 'x' and on the right of 'x' ... 2answers 90 views ### Why does gravity assist transfer twice the planet's velocity? In orbital mechanics and aerospace engineering a gravitational slingshot (also known as gravity assist manoeuver or swing-by) is the use of the relative movement and gravity of a planet or other ... 1answer 173 views ### Monkey and tree - projectile motion The famous scenario: A hunter is trying to shoot a Monkey hanging from a tree. However, this question doesn't mention the monkey jumping down from the tree or trying to escape. (The hunter uses a ... 1answer 175 views ### Non-commutative property of rotation Addition of angles are non-commutative in three dimensions. Hence some other angular vector quantities like angular velocity, momentum become non-commutative. What is the physical significance of this ... 1answer 59 views ### Am I making the right assumption about a jump discontinuity in the acceleration? A train heads from Station A to Station B, 4 km away. If the train begins at rest and ends at rest, and its maximum acceleration is 1.5 m/s^2 and maximum deceleration is -6 m/s^2, what's the least ... 1answer 233 views ### Kinematics - Find theta with Coefficient of Friction? I recently found a problem that looked like this: A box sits on a horizontal wooden ramp. The coefficient of static friction between the box and the ramp is ... 3answers 80 views ### Estimate quarter mile time I need to estimate a drag race quarter mile time given the car's weight, bhp and preferably the drive (FWD, RWD, 4WD). I know $v(t) = ds/dt$ and $a(t) = dv/dt = d^2s/dt^2$, but how can I get the ... 2answers 102 views ### Looking for the curve traced by a moving bicycle when its steering bar is fully rotated I am looking for a curve traced by a moving bicycle when its steering bar is fully rotated either clockwise or anti-clockwise. How to model it mathematically? Is the curve a circle? My attempt is ... 1answer 690 views ### Trajectory of projectile thrown downhill I'm teaching myself mechanics, and set out to solve a problem determining the optimum angle to throw a projectile when standing on a hill, for maximum range. My answer seems almost plausible, except ... 1answer 627 views ### What's wrong with this equation for harmonic oscillation? The question: A particle moving along the x axis in simple harmonic motion starts from its equilibrium position, the origin, at t = 0 and moves to the right. The amplitude of its motion is ... 2answers 210 views ### Kinematics Problem The question asks me to find the angular velocity. Now I do not want you to solve my homework, I want explanation please. It states that the acceleration of point P is \$\vec{a}= -3.02 \vec{i} ... 1answer 1k views ### Ascent rate and size of balloon I am part of a school project, Project Stratos to send a balloon to the edge of space (the closer side :P) and was wondering how you would work out the accent rate of a large balloon (roughly 1m^3 of ... 2answers 787 views ### Projectiles problem solving I've only learned about to use kinematics equation when solving projectile problems but today i came across the following equations. where does they come from? Distance travelled Time of flight ... 
1answer 103 views ### Utilizing maximum acceleration $a$ for displacement $d$ with initial velocity $v_0$ and final velocity $v_1$ Problem My goal is to move an object from point a to b (displacement $d$) as fast as possible utilizing the maximum available acceleration $a_{max}$, taking into account the initial velocity $v_0$ ... 0answers 70 views ### What is the initial velocity of a projectile so that it passes through a target point in its trajectory? [closed] Let's say I have a projectile being thrown by a player in my 2-D game. I want to work backwards and find the initial velocity to apply to the projectile such that it passes through a target point in ... 0answers 63 views ### Getting pairs of angle and velocity for a projectile to a given destination I'm trying to calculate the initial velocity $v_0$ and angle $\theta$ for a given destination $(x, y)$ with a launch height of $y_0$. Obviously there will be a set of pairs of velocity and angle that ... 1answer 68 views ### Shooting a bullet at a system of blocks [closed] So, I made this question up myself.... and I'm curious about the answer. It requires only secondary-school-level knowledge of physics: You have a surface (ground) with a certain coefficient of ... 0answers 51 views ### I need help with the following Physics' questions [closed] I need to know which formulas and the step-by-step answers for the following physics questions: 1. You throw a ball upward with an initial speed of 7.0 m/s, and it returns to your hand o.92 ... 0answers 47 views ### A 0.1kg ball of dough is thrown up with a velocity of 15m/s. What is the momentum halfway up? [closed] I know that $p=mv$ and (0.1kg)(15m/s)=1.5 kg m/s and the momentum at the vertex is 0, but what is the momentum halfway up? 0answers 43 views ### How to calculate the correct coordinates from a distorted video of a projectile? I am working on a high school project that is related to projectile motion. I am exploring how exactly the position of the center of mass affects the trajectory of a long but thin, javelin-like ... 1answer 132 views ### Projectile motion problem with upward acceleration and horizontal velocity [closed] An electron in a cathode-ray tube is traveling horizontally at 2.10×10^9 cm/s when deflection plates give it an upward acceleration of 5.30×10^17 cm/s^2 . B.) What is its vertical displacement during ... 0answers 79 views ### Kinematics algebra, pT, Peskin Eq.17.59 my question concerns the kinematics of 2 to 2 particle scattering. I refer to Peskin and Schroeder eq.17.59 going from this expression ... 1answer 606 views ### How to find acceleration given position and velocity? Sorry for this very simple question but I am still very new to the laws of motion. I am dealing with 2-dimensional vectors in my programming environment and I'm following these slides to learn about ... 0answers 183 views ### How to model an accelerometer measurements on a car wheel? I am working on kinematically modelling an accelerometer on a car wheel. When working on the initial conditions, I am confused whether or not I should use the gravitational acceleration since there ... 0answers 96 views ### If a car appears in horison and within 2 seconds passes you by, whats the speed it's doing? While watching the first 4 seconds of driving at 745 km/h is ludicrous from any angle wondered 1)If we knew the curvature of the earth in a "flat" desert, what would be the speed of the car? ... 
0answers 157 views ### How to calculate the resulting velocitys and rotation speed after two concave polygons collide in 2d so I've been searching google for how to do this, but all anyone seems to care about is collision detection =p, and no results come close to teaching me how to calculate 2d elastic collision between ... 0answers 247 views ### Understanding 1D Kinematics and Uniform Acceleration? [closed] So I was given the following homework problem: You land on an unknown planet somewhere in the universe that clearly has weaker gravity than Earth. To measure g on this planet you do the following ... 0answers 236 views ### Kinematics textbook illustration I have trouble interpreting this illustration. I see why r (position) and a (acceleration) are the way they are, but what happened to v? Why is it smaller than its coordinates? Is this another error ... 3answers 121 views ### How can I solve for time without knowing the vertical velocity? A guy posted this problem on a forum: There is a bird sitting on a pole of height h. you throw a rock at it and the moment the rock leaves your hand the bird starts flying horizontally away from ... 2answers 766 views ### Why do far away objects appear to move slowly in comparison to nearby objects? When we are sitting in a moving train than nearby stationary objects appear to go backwards...in terms of physics we can use the formula velocity of object with respect ... 2answers 487 views ### Calculating Impact Velocity Given Displacement and Acceleration Assume a car has hit a wall in a right angled collision and the front bumper has been displaced 9 cm. The resulting impact is 25g. Also, it is evident by skid marks that the car braked for 5m with ... 2answers 213 views ### Simple kinematics excercise, throwing something upwards I am trying to solve this simple excercise: Question You throw a small coin upwards with $4 \frac{m}{s}$ . How much time does it need to reach the height of $0.5 m$ ? Why do we get two results? ... 3answers 581 views ### Very basic question: When to use $s=vt$, $s=1/2vt$, $s=at$ and $s=a/t^2$? Very basic question: When to use $s=vt$, $s=\frac{1}{2}vt$, $s=at$ and $s=\frac{a}{t^2}$? What was the difference between those? 5answers 172 views ### Why is momentum conserved (or rather what makes an object carry on moving infinitely)? I know this is an incredibly simple question, but I am trying to find a very simple explanation to this other than the simple logic that energy is conserved when two items impact and bounce off each ... 2answers 456 views ### What will be the relative speed of the fly? [duplicate] It has happened many times and i have ignored it everytime. Yesterday it happened again . I was travelling in a train and saw a fly (insect) flying near my seat. Train was running at a speed of ... 2answers 65 views ### Centripetal Force Acceleration Suppose you want to perform a uniform circular motion . Then a body performing uniform circular motion horizontally needs an acceleration $= \frac{v^2}{r}$ at each point on the circular path with ... 1answer 155 views ### Experiment to measure initial speed of high speed tennis ball? I want to devise a way to measure the initial speed of a tennis ball fired from a tennis ball cannon, but without using any speed-measuring devices. Just plan distance-measuring and physics formulas. ... 
2answers 714 views ### Preventing a block from sliding on a frictionless inclined plane I want to demonstrate what force $F$ you would have to exert on an inclined plane of angle $t$, mass $M$ to prevent a block on top of it with mass $m$ from sliding up or down the ramp. I worked out ... 3answers 174 views ### If my acceleration is -1 ($a=-1\:\rm{m/s^2}$) and I'm standing in the infinite ($x_0=\infty \:\rm m$), could I reach the point $x=0\:\rm m$? I'm standing in the infinite where $x_0=\infty \:\rm m$. If I have a negative acceleration, could I reach the point $x=0\:\rm m$? Would it be possible to calculate how long would take to reach the ...
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 42, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9318957924842834, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/88386/implications-of-complex-solutions-of-matiyasevich-chaitin-diophantine-polynomia
## Implications of complex solutions of Matiyasevich / Chaitin diophantine polynomials

This is a shot in the dark: In twf:202, an isomorphism $T\cong T^{7}$ between binary trees $T$ and seven-tuples of binary trees $T^{7}$ is mentioned. The argument for this isomorphism starts with the observation that a sixth root of unity is obtained from the categorified version of the statement "a planar binary tree is either the tree with one leaf or a pair of planar binary trees."

What implications would complex solutions to either Chaitin's exponential diophantine system (which is essentially a Lisp implementation) or Matiyasevich's system have? What research, if any, exists on this?

- A reference to Chaitin's system and/or Matiyasevich's system might be in order. – Gerry Myerson Feb 13 2012 at 23:14
- I'm not really sure I understand the connection between the first and second paragraphs. I assume it's nothing deeper than the idea that sometimes solutions over larger rings than one started with carry some hidden meaning? As far as I understand, complex solutions to Matiyasevich's universal exponential Diophantine equation have no special significance, and based on the construction I doubt we'll ever discover such a significance (although miracles could happen). What Chaitin did is just to work out an explicit equation. – Henry Cohn Feb 14 2012 at 0:12
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9381675720214844, "perplexity_flag": "middle"}
http://www.haskell.org/haskellwiki/index.php?title=User:Michiexile/MATH198/Lecture_6&diff=31407&oldid=31221
# User:Michiexile/MATH198/Lecture 6 ### From HaskellWiki (Difference between revisions) | | | | | |---------------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|---------------------------------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | | | Current revision (19:19, 3 November 2009) (edit) (undo) | | | (8 intermediate revisions not shown.) | | | | | Line 12: | | Line 12: | | | | * <math>C</math> has all finite limits. | | * <math>C</math> has all finite limits. | | | * <math>C</math> has all finite products and all equalizers. | | * <math>C</math> has all finite products and all equalizers. | | - | * <math>C</math> has all pullbacks and a terminal object. Also, the following dual statements are equivalent: | + | * <math>C</math> has all pullbacks and a terminal object. | | | | + | | | | | + | Also, the following dual statements are equivalent: | | | * <math>C</math> has all finite colimits. | | * <math>C</math> has all finite colimits. | | | * <math>C</math> has all finite coproducts and all coequalizers. | | * <math>C</math> has all finite coproducts and all coequalizers. | | Line 24: | | Line 26: | | | | A limit over this diagram is an object <math>C</math> and arrows to all diagram objects. The commutativity conditions for the arrows defined force for us <math>fp_A = p_B = gp_A</math>, and thus, keeping this enforced equation in mind, we can summarize the cone diagram as: | | A limit over this diagram is an object <math>C</math> and arrows to all diagram objects. The commutativity conditions for the arrows defined force for us <math>fp_A = p_B = gp_A</math>, and thus, keeping this enforced equation in mind, we can summarize the cone diagram as: | | | | | | | - | [[Image:EqualizerCone.png]] Now, the limit condition tells us that this is the least restrictive way we can map into <math>A</math> with some map <math>p</math> such that <math>fp = gp</math>, in that every other way we could map in that way will factor through this way. 
| + | [[Image:EqualizerCone.png]] | | | | + | | | | | + | Now, the limit condition tells us that this is the least restrictive way we can map into <math>A</math> with some map <math>p</math> such that <math>fp = gp</math>, in that every other way we could map in that way will factor through this way. | | | | | | | | As usual, it is helpful to consider the situation in Set to make sense of any categorical definition: and the situation there is helped by the generalized element viewpoint: the limit object <math>C</math> is one representative of a subobject of <math>A</math> that for the case of Set contains all <math>x\in A: f(x) = g(x)</math>. | | As usual, it is helpful to consider the situation in Set to make sense of any categorical definition: and the situation there is helped by the generalized element viewpoint: the limit object <math>C</math> is one representative of a subobject of <math>A</math> that for the case of Set contains all <math>x\in A: f(x) = g(x)</math>. | | Line 30: | | Line 34: | | | | Hence the word we use for this construction: the limit of the diagram above is the ''equalizer of <math>f, g</math>''. It captures the idea of a maximal subset unable to distinguish two given functions, and it introduces a categorical way to define things by equations we require them to respect. | | Hence the word we use for this construction: the limit of the diagram above is the ''equalizer of <math>f, g</math>''. It captures the idea of a maximal subset unable to distinguish two given functions, and it introduces a categorical way to define things by equations we require them to respect. | | | | | | | - | One important special case of the equalizer is the ''kernel'': in a category with a null object, we have a distinguished, unique, member <math>0</math> of any homset given by the compositions of the unique arrows to and from the null object. We define ''the kernel'' <math>Ker(f)</math> of an arrow <math>f</math> to be the equalizer of <math>f, 0</math>. Keeping in mind the arrow-centric view on categories, we tend to denot the arrow from <math>Ker(f)</math> to the source of <math>f</math> by <math>ker(f)</math>. | + | One important special case of the equalizer is the ''kernel'': in a category with a null object, we have a distinguished, unique, member <math>0</math> of any homset given by the compositions of the unique arrows to and from the null object. We define ''the kernel'' <math>Ker(f)</math> of an arrow <math>f</math> to be the equalizer of <math>f, 0</math>. Keeping in mind the arrow-centric view on categories, we tend to denote the arrow from <math>Ker(f)</math> to the source of <math>f</math> by <math>ker(f)</math>. | | | | | | | | In the category of vector spaces, and linear maps, the map <math>0</math> really is the constant map taking the value <math>0</math> everywhere. And the kernel of a linear map <math>f:U\to V</math> is the equalizer of <math>f,0</math>. Thus it is some vector space <math>W</math> with a map <math>i:W\to U</math> such that <math>fi = 0i = 0</math>, and any other map that fulfills this condition factors through <math>W</math>. Certainly the vector space <math>\{u\in U: f(u)=0\}</math> fulfills the requisite condition, nothing larger will do, since then the map composition wouldn't be 0, and nothing smaller will do, since then the maps factoring this space through the smaller candidate would not be unique. 
| | In the category of vector spaces, and linear maps, the map <math>0</math> really is the constant map taking the value <math>0</math> everywhere. And the kernel of a linear map <math>f:U\to V</math> is the equalizer of <math>f,0</math>. Thus it is some vector space <math>W</math> with a map <math>i:W\to U</math> such that <math>fi = 0i = 0</math>, and any other map that fulfills this condition factors through <math>W</math>. Certainly the vector space <math>\{u\in U: f(u)=0\}</math> fulfills the requisite condition, nothing larger will do, since then the map composition wouldn't be 0, and nothing smaller will do, since then the maps factoring this space through the smaller candidate would not be unique. | | Line 39: | | Line 43: | | | | | | | | | A coequalizer | | A coequalizer | | | | + | | | | [[Image:CoequalizerCoCone.png]] | | [[Image:CoequalizerCoCone.png]] | | | | + | | | | has to fulfill that <math>i_Bf = i_A = i_Bg</math>. Thus, writing <math>q=i_B</math>, we get an object with an arrow (actually, an epimorphism out of <math>B</math>) that identifies <math>f</math> and <math>g</math>. Hence, we can think of <math>i_B:B\to Q</math> as catching the notion of inducing equivalence classes from the functions. | | has to fulfill that <math>i_Bf = i_A = i_Bg</math>. Thus, writing <math>q=i_B</math>, we get an object with an arrow (actually, an epimorphism out of <math>B</math>) that identifies <math>f</math> and <math>g</math>. Hence, we can think of <math>i_B:B\to Q</math> as catching the notion of inducing equivalence classes from the functions. | | | | | | | Line 57: | | Line 63: | | | | [[Image:PreimageDiagram.png]] | | [[Image:PreimageDiagram.png]] | | | | | | | - | where <math>i</math> is a monomorphism representing the subobject, we need to find an object <math>V</math> with a monomorphism injecting it into <math>U</math> such that the map <math>fi: U\to S</math> factors through <math>T</math>. Thus we're looking for dotted maps making the diagram commute, in a universal manner. | + | where <math>i</math> is a monomorphism representing the subobject, we need to find an object <math>V</math> with a monomorphism injecting it into <math>U</math> such that the map <math>\bar fj: V\to S</math> factors through <math>T</math>. Thus we're looking for dotted maps making the diagram commute, in a universal manner. | | | | | | | | The maximality of the subobject means that any other subobject of <math>U</math> that can be factored through <math>T</math> should factor through <math>V</math>. | | The maximality of the subobject means that any other subobject of <math>U</math> that can be factored through <math>T</math> should factor through <math>V</math>. | | Line 77: | | Line 83: | | | | [[Image:PullbackDiagram.png]] | | [[Image:PullbackDiagram.png]] | | | | | | | - | By the definition of a limit, this means that the pullback is an object <math>D</math> with maps <math>\bar f: D\to B</math>, <math>\bar g: D\to A</math> and <math>f\bar g = g\bar f : D\to C</math>, such that any other such object factors through this. | + | By the definition of a limit, this means that the pullback is an object <math>P</math> with maps <math>\bar f: P\to B</math>, <math>\bar g: P\to A</math> and <math>f\bar g = g\bar f : P\to C</math>, such that any other such object factors through this. | | | | | | | | For the diagram <math>U\rightarrow^f S \leftarrow^i T</math>, with <math>i:T\to S</math> one representative monomorphism for the subobject, we get precisely the definition above for the inverse image. 
| | For the diagram <math>U\rightarrow^f S \leftarrow^i T</math>, with <math>i:T\to S</math> one representative monomorphism for the subobject, we get precisely the definition above for the inverse image. | | Line 99: | | Line 105: | | | | ===Free and forgetful functors=== | | ===Free and forgetful functors=== | | | | | | | | | + | Recall how we defined a free monoid as all strings of some alphabet, with concatenation of strings the monoidal operation. And recall how we defined the free category on a graph as the category of paths in the graph, with path concatenation as the operation. | | | | + | | | | | + | The reason we chose the word ''free'' to denote both these cases is far from a coincidence: by this point nobody will be surprised to hear that we can unify the idea of generating the most general object of a particular algebraic structure into a single categorical idea. | | | | + | | | | | + | The idea of the free constructions, classically, is to introduce as few additional relations as possible, while still generating a valid object of the appropriate type, given a set of generators we view as placeholders, as symbols. Having a minimal amount of relations allows us to introduce further relations later, by imposing new equalities by mapping with surjections to other structures. | | | | + | | | | | + | One of the first observations in each of the cases we can do is that such a map ends up being completely determined by where the generators go - the symbols we use to generate. And since the free structure is made to fulfill the axioms of whatever structure we're working with, these generators combine, even after mapping to some other structure, in a way compatible with all structure. | | | | + | | | | | + | To make solid categorical sense of this, however, we need to couple the construction of a free algebraic structure from a set (or a graph, or...) with another construction: we can define the ''forgetful functor'' from monoids to sets by just picking out the elements of the monoid as a set; and from categories to graph by just picking the underlying graph, and forgetting about the compositions of arrows. | | | | + | | | | | + | Now we have what we need to pinpoint just what kind of a functor the ''free widget generated by''-construction does. It's a functor <math>F: C\to D</math>, coupled with a forgetful functor <math>U: D\to C</math> such that any map <math>S\to U(N)</math> in <math>C</math> induces one unique mapping <math>F(S)\to N</math> in <math>D</math>. | | | | + | | | | | + | For the case of monoids and sets, this means that if we take our generating set, and map it into the set of elements of another monoid, this generates a unique mapping of the corresponding monoids. | | | | + | | | | | + | This is all captured by a similar kind of diagrams and uniquely existing maps argument as the previous object or morphism properties were defined with. We'll show the definition for the example of monoids. 
| | | | + | | | | | + | '''Definition''' A ''free'' monoid on a generating set <math>X</math> is a monoid <math>F(X)</math> such that there is an inclusion <math>i_X: X\to UF(X)</math> and for every function <math>f : X\to U(M)</math> for some other monoid <math>M</math>, there is a unique homomorphism <math>g:F(X)\to M</math> such that <math>f=U(g) i_X</math>, or in other words such that this diagram commutes: | | | | + | | | | | + | [[Image:FreeMonoid.png]] | | | | + | | | | | + | We can construct a map <math>\phi:Hom_{Mon}(F(X),M) \to Hom_{Set}(X,U(M))</math> by <math>\phi: g\mapsto U(g)\circ i_X</math>. The above definition says that this map is an isomorphism. | | | | | | | | ===Adjunctions=== | | ===Adjunctions=== | | | | | | | - | * Free and forgetful | + | Modeling on the way we construct free and forgetful functors, we can form a powerful categorical concept, which ends up generalizing much of what we've already seen - and also leads us on towards monads. | | - | * Curry and uncurry | + | | | | | | | | | | + | We draw on the definition above of free monoids to give a preliminary definition. This will be replaced later by an equivalent definition that gives more insight. | | | | + | | | | | + | '''Definition''' A pair of functors, | | | | + | | | | | + | [[Image:AdjointPair.png]] | | | | + | | | | | + | is called an ''adjoint pair'' or an ''adjunction'', with <math>F</math> called the ''left adjoint'' and <math>U</math> called the ''right adjoint'' if there is natural transformation <math>\eta: 1\to UF</math>, and for every <math>f:A\to U(B)</math>, there is a unique <math>g: F(A)\to B</math> such that the diagram below commutes. | | | | + | | | | | + | [[Image:AdjointDefinition.png]] | | | | + | | | | | + | The natural transformation <math>\eta</math> is called the ''unit'' of the adjunction. | | | | + | | | | | + | This definition, however, has a significant amount of asymmetry: we can start with some <math>f:A\to U(B)</math> and generate a <math>g: F(A)\to B</math>, while there are no immediate guarantees for the other direction. However, there is a proposition we can prove leading us to a more symmetric statement: | | | | + | | | | | + | '''Proposition''' For categories and functors | | | | + | | | | | + | [[Image:AdjointPair.png]] | | | | + | | | | | + | the following conditions are equivalent: | | | | + | # <math>F</math> is left adjoint to <math>U</math>. | | | | + | # For any <math>c\in C_0</math>, <math>d\in D_0</math>, there is an isomorphism <math>\phi: Hom_D(Fc, d) \to Hom_C(c,Ud)</math>, natural in both <math>c</math> and <math>d</math>. | | | | + | moreover, the two conditions are related by the formulas | | | | + | * <math>\phi(g) = U(g) \circ \eta_c</math> | | | | + | * <math>\eta_c = \phi(1_{Fc})</math> | | | | + | | | | | + | '''Proof sketch''' | | | | + | For (1 implies 2), the isomorphism is given by the end of the statement, and it is an isomorphism exactly because of the unit property - viz. that every <math>f:A\to U(B)</math> generates a unique <math>g: F(A)\to B</math>. | | | | + | | | | | + | Naturality follows by building the naturality diagrams | | | | + | | | | | + | [[Image:AdjointNaturalFirst.png]] [[Image:AdjointNaturalSecond.png]] | | | | + | | | | | + | and chasing through with a <math>f: Fc\to d</math>. | | | | + | | | | | + | For (2 implies 1), we start out with a natural isomorphism <math>\phi</math>. We find the necessary natural transformation <math>\eta_c</math> by considering <math>\phi: Hom(Fc,Fc) \to Hom(c, UFc)</math>. 
| | | | + | | | | | + | QED. | | | | + | | | | | + | By dualizing the proof, we get the following statement: | | | | + | | | | | + | '''Proposition''' For categories and functors | | | | + | | | | | + | [[Image:AdjointPair.png]] | | | | + | | | | | + | the following conditions are equivalent: | | | | + | # For any <math>c\in C_0</math>, <math>d\in D_0</math>, there is an isomorphism <math>\phi: Hom_D(Fc, d) \to Hom_C(c,Ud)</math>, natural in both <math>c</math> and <math>d</math>. | | | | + | # There is a natural transformation <math>\epsilon: FU \to 1_D</math> with the property that for any <math>g: F(c) \to d</math> there is a unique <math>f: c\to U(d)</math> such that <math>g = \epsilon_D\circ F(f)</math>, as in the diagram | | | | + | | | | | + | [[Image:AdjointDualDefinition.png]] | | | | + | | | | | + | moreover, the two conditions are related by the formulas | | | | + | * <math>\psi(f) = \epsilon_D\circ F(f)</math> | | | | + | * <math>\epsilon_d = \psi(1_{Ud})</math> | | | | + | | | | | + | where <math>\psi = \phi^{-1}</math>. | | | | + | | | | | + | Hence, we have an equivalent definition with higher generality, more symmetry and more ''horsepower'', as it were: | | | | + | | | | | + | '''Definition''' An ''adjunction'' consists of functors | | | | + | | | | | + | [[Image:AdjointPair.png]] | | | | + | | | | | + | and a natural isomorphism | | | | + | | | | | + | [[Image:AdjointIso.png]] | | | | + | | | | | + | The ''unit'' <math>\eta</math> and the ''counit'' <math>\epsilon</math> of the adjunction are natural transformations given by: | | | | + | * <math>\eta: 1_C\to UF: \eta_c = \phi(1_{Fc})</math> | | | | + | * <math>\epsilon: FU\to 1_D: \epsilon_d = \psi(1_{Ud})</math>. | | | | + | | | | | + | ---- | | | | + | | | | | + | Some of the examples we have had difficulties fitting into the limits framework show up as adjunctions: | | | | + | | | | | + | The ''free'' and ''forgetful'' functors are adjoints; and indeed, a more natural definition of what it means to be free is that it is a left adjoint to some forgetful functor. | | | | + | | | | | + | Curry and uncurry, in the definition of an exponential object are an adjoint pair. The functor <math>-\times A: X\mapsto X\times A</math> has right adjoint <math>-^A: Y\mapsto Y^A</math>. | | | | + | | | | | + | ====Notational aid==== | | | | + | | | | | + | One way to write the adjoint is as a ''bidirectional rewrite rule'': | | | | + | | | | | + | <math>\frac{F(X) \to Y}{X\to G(Y)}</math>, | | | | + | | | | | + | where the statement is that the hom sets indicated by the upper and lower arrow, respectively, are transformed into each other by the unit and counit respectively. The left adjoint is the one that has the functor application on the left hand side of this diagram, and the right adjoint is the one with the functor application to the right. | | | | | | | | | | | | Line 110: | | Line 221: | | | | | | | | | ===Homework=== | | ===Homework=== | | | | + | | | | | + | Complete homework is 6 out of 11 exercises. | | | | | | | | # Prove that an equalizer is a monomorphism. | | # Prove that an equalizer is a monomorphism. | | | # Prove that a coequalizer is an epimorphism. | | # Prove that a coequalizer is an epimorphism. 
| | - | # Prove that given any relation <math>R\subseteq X\times X</math>, its completion to an equivalence relation is the kernel of the coequalizer of the component maps of the relation | + | # Prove that given any relation <math>R\subseteq X\times X</math>, its completion to an equivalence relation is the kernel of the coequalizer of the component maps of the relation. For the purpose of this, we define kernels in Set as the equalizer of <math>f\circ p_1, f\circ p_2: A\times A\to B</math>. (A more general definition of a kernel, independent of zero objects, can be found using a definition of quotients by equivalence relations, and having the kernel be a universal object to factor the quotient through). | | | # Prove that if the right arrow in a pullback square is a mono, then so is the left arrow. Thus the intersection as a pullback really is a subobject. | | # Prove that if the right arrow in a pullback square is a mono, then so is the left arrow. Thus the intersection as a pullback really is a subobject. | | | # Prove that if both the arrows in the pullback 'corner' are mono, then the arrows of the pullback cone are all mono. | | # Prove that if both the arrows in the pullback 'corner' are mono, then the arrows of the pullback cone are all mono. | | | # What is the pullback in the category of posets? | | # What is the pullback in the category of posets? | | | # What is the pushout in the category of posets? | | # What is the pushout in the category of posets? | | | | + | # Prove that the exponential and the product functors above are adjoints. What are the unit and counit? | | | | + | # (worth 4pt) Consider the unique functor <math>!:C\to 1</math> to the terminal category. | | | | + | ## Does it have a left adjoint? What is it? | | | | + | ## Does it have a right adjoint? What is it? | | | | + | # * Prove the propositions in the text. | | | | + | # (worth 4pt) Suppose | | | | + | :[[Image:AdjointPair.png]] | | | | + | :is an adjoint pair. Find a natural transformation <math>FUF\to F</math>. Conclude that there is a natural transformation <math>\mu: UFUF\to UF</math>. Prove that this is associative, in other words that the diagram | | | | + | :[[Image:AdjointMuAssociative.png]] | | | | + | :commutes. Prove that the unit of the adjunction forms a unit for this <math>\mu</math>, in other words, that the diagram | | | | + | :[[Image:AdjointMuUnit.png]] | | | | + | :commutes. | ## Current revision IMPORTANT NOTE: THESE NOTES ARE STILL UNDER DEVELOPMENT. PLEASE WAIT UNTIL AFTER THE LECTURE WITH HANDING ANYTHING IN, OR TREATING THE NOTES AS READY TO READ. ## Contents ### 1 Useful limits and colimits With the tools of limits and colimits at hand, we can start using these to introduce more category theoretical constructions - and some of these turn out to correspond to things we've seen in other areas. Possibly among the most important are the equalizers and coequalizers (with kernel (nullspace) and images as special cases), and the pullbacks and pushouts (with which we can make explicit the idea of inverse images of functions). One useful theorem to know about is: Theorem The following are equivalent for a category C: • C has all finite limits. • C has all finite products and all equalizers. • C has all pullbacks and a terminal object. Also, the following dual statements are equivalent: • C has all finite colimits. • C has all finite coproducts and all coequalizers. • C has all pushouts and an initial object. 
For this theorem, we can replace finite with any other cardinality in every place it occurs, and we will still get a valid theorem.

#### Equalizer, coequalizer

Consider the equalizer diagram:

A limit over this diagram is an object C and arrows to all diagram objects. The commutativity conditions for the arrows defined force for us $fp_A = p_B = gp_A$, and thus, keeping this enforced equation in mind, we can summarize the cone diagram as:

Now, the limit condition tells us that this is the least restrictive way we can map into A with some map p such that fp = gp, in that every other way we could map in that way will factor through this way.

As usual, it is helpful to consider the situation in Set to make sense of any categorical definition: and the situation there is helped by the generalized element viewpoint: the limit object C is one representative of a subobject of A that for the case of Set contains all $x\in A: f(x) = g(x)$. Hence the word we use for this construction: the limit of the diagram above is the equalizer of f,g. It captures the idea of a maximal subset unable to distinguish two given functions, and it introduces a categorical way to define things by equations we require them to respect.

One important special case of the equalizer is the kernel: in a category with a null object, we have a distinguished, unique member 0 of any homset given by the compositions of the unique arrows to and from the null object. We define the kernel Ker(f) of an arrow f to be the equalizer of f,0. Keeping in mind the arrow-centric view on categories, we tend to denote the arrow from Ker(f) to the source of f by ker(f).

In the category of vector spaces and linear maps, the map 0 really is the constant map taking the value 0 everywhere. And the kernel of a linear map $f:U\to V$ is the equalizer of f,0. Thus it is some vector space W with a map $i:W\to U$ such that fi = 0i = 0, and any other map that fulfills this condition factors through W. Certainly the vector space $\{u\in U: f(u)=0\}$ fulfills the requisite condition; nothing larger will do, since then the map composition wouldn't be 0, and nothing smaller will do, since then the maps factoring this space through the smaller candidate would not be unique. Hence, $Ker(f) = \{u\in U: f(u) = 0\}$, just like we might expect.

Dually, we get the coequalizer as the colimit of the equalizer diagram. A coequalizer has to fulfill $i_B f = i_A = i_B g$. Thus, writing $q = i_B$, we get an object with an arrow (actually, an epimorphism out of B) that identifies f and g. Hence, we can think of $i_B:B\to Q$ as catching the notion of inducing equivalence classes from the functions.

This becomes clear if we pick out one specific example: let $R\subseteq X\times X$ be an equivalence relation, and consider the diagram where $r_1$ and $r_2$ are given by the projection of the inclusion of the relation into the product onto either factor. Then, the coequalizer of this setup is an object $X/R$ such that whenever $x \sim_R y$, then q(x) = q(y).

#### 1.1 Pullbacks

The preimage $f^{-1}(T)$ of a subset $T\subseteq S$ along a function $f:U\to S$ is a maximal subset $V\subseteq U$ such that for every $v\in V: f(v)\in T$. We recall that subsets are given by (equivalence classes of) monics, and thus we end up being able to frame this in purely categorical terms. Given a diagram like this:

where i is a monomorphism representing the subobject, we need to find an object V with a monomorphism injecting it into U such that the map $\bar fj: V\to S$ factors through T.
Thus we're looking for dotted maps making the diagram commute, in a universal manner. The maximality of the subobject means that any other subobject of U that can be factored through T should factor through V. Suppose U,V are subsets of some set W. Their intersection $U\cap V$ is a subset of U, a subset of V and a subset of W, maximal with this property. Translating into categorical language, we can pick representatives for all subobjects in the definition, we get a diagram with all monomorphisms: where we need the inclusion of $U\cap V$ into W over U is the same as the inclusion over V. Definition A pullback of two maps $A \rightarrow^f C \leftarrow^g B$ is the limit of these two maps, thus: By the definition of a limit, this means that the pullback is an object P with maps $\bar f: P\to B$, $\bar g: P\to A$ and $f\bar g = g\bar f : P\to C$, such that any other such object factors through this. For the diagram $U\rightarrow^f S \leftarrow^i T$, with $i:T\to S$ one representative monomorphism for the subobject, we get precisely the definition above for the inverse image. For the diagram $U\rightarrow W \leftarrow V$ with both map monomorphisms representing their subobjects, the pullback is the intersection. #### 1.2 Pushouts Often, especially in geometry and algebra, we construct new structures by gluing together old structures along substructures. Possibly the most popularly known example is the Möbius band: we take a strip of paper, twist it once and glue the ends together. Similarily, in algebraic contexts, we can form amalgamated products that do roughly the same. All these are instances of the dual to the pullback: Definition A pushout of two maps $A\leftarrow^f C\rightarrow^g B$ is the co-limit of these two maps, thus: Hence, the pushout is an object D such that C maps to the same place both ways, and so that, contingent on this, it behaves much like a coproduct. ### 2 Free and forgetful functors Recall how we defined a free monoid as all strings of some alphabet, with concatenation of strings the monoidal operation. And recall how we defined the free category on a graph as the category of paths in the graph, with path concatenation as the operation. The reason we chose the word free to denote both these cases is far from a coincidence: by this point nobody will be surprised to hear that we can unify the idea of generating the most general object of a particular algebraic structure into a single categorical idea. The idea of the free constructions, classically, is to introduce as few additional relations as possible, while still generating a valid object of the appropriate type, given a set of generators we view as placeholders, as symbols. Having a minimal amount of relations allows us to introduce further relations later, by imposing new equalities by mapping with surjections to other structures. One of the first observations in each of the cases we can do is that such a map ends up being completely determined by where the generators go - the symbols we use to generate. And since the free structure is made to fulfill the axioms of whatever structure we're working with, these generators combine, even after mapping to some other structure, in a way compatible with all structure. To make solid categorical sense of this, however, we need to couple the construction of a free algebraic structure from a set (or a graph, or...) 
with another construction: we can define the forgetful functor from monoids to sets by just picking out the elements of the monoid as a set; and from categories to graph by just picking the underlying graph, and forgetting about the compositions of arrows. Now we have what we need to pinpoint just what kind of a functor the free widget generated by-construction does. It's a functor $F: C\to D$, coupled with a forgetful functor $U: D\to C$ such that any map $S\to U(N)$ in C induces one unique mapping $F(S)\to N$ in D. For the case of monoids and sets, this means that if we take our generating set, and map it into the set of elements of another monoid, this generates a unique mapping of the corresponding monoids. This is all captured by a similar kind of diagrams and uniquely existing maps argument as the previous object or morphism properties were defined with. We'll show the definition for the example of monoids. Definition A free monoid on a generating set X is a monoid F(X) such that there is an inclusion $i_X: X\to UF(X)$ and for every function $f : X\to U(M)$ for some other monoid M, there is a unique homomorphism $g:F(X)\to M$ such that f = U(g)iX, or in other words such that this diagram commutes: We can construct a map $\phi:Hom_{Mon}(F(X),M) \to Hom_{Set}(X,U(M))$ by $\phi: g\mapsto U(g)\circ i_X$. The above definition says that this map is an isomorphism. ### 3 Adjunctions Modeling on the way we construct free and forgetful functors, we can form a powerful categorical concept, which ends up generalizing much of what we've already seen - and also leads us on towards monads. We draw on the definition above of free monoids to give a preliminary definition. This will be replaced later by an equivalent definition that gives more insight. Definition A pair of functors, is called an adjoint pair or an adjunction, with F called the left adjoint and U called the right adjoint if there is natural transformation $\eta: 1\to UF$, and for every $f:A\to U(B)$, there is a unique $g: F(A)\to B$ such that the diagram below commutes. The natural transformation η is called the unit of the adjunction. This definition, however, has a significant amount of asymmetry: we can start with some $f:A\to U(B)$ and generate a $g: F(A)\to B$, while there are no immediate guarantees for the other direction. However, there is a proposition we can prove leading us to a more symmetric statement: Proposition For categories and functors the following conditions are equivalent: 1. F is left adjoint to U. 2. For any $c\in C_0$, $d\in D_0$, there is an isomorphism $\phi: Hom_D(Fc, d) \to Hom_C(c,Ud)$, natural in both c and d. moreover, the two conditions are related by the formulas • $\phi(g) = U(g) \circ \eta_c$ • ηc = φ(1Fc) Proof sketch For (1 implies 2), the isomorphism is given by the end of the statement, and it is an isomorphism exactly because of the unit property - viz. that every $f:A\to U(B)$ generates a unique $g: F(A)\to B$. Naturality follows by building the naturality diagrams and chasing through with a $f: Fc\to d$. For (2 implies 1), we start out with a natural isomorphism φ. We find the necessary natural transformation ηc by considering $\phi: Hom(Fc,Fc) \to Hom(c, UFc)$. QED. By dualizing the proof, we get the following statement: Proposition For categories and functors the following conditions are equivalent: 1. For any $c\in C_0$, $d\in D_0$, there is an isomorphism $\phi: Hom_D(Fc, d) \to Hom_C(c,Ud)$, natural in both c and d. 2. 
There is a natural transformation $\epsilon: FU \to 1_D$ with the property that for any $g: F(c) \to d$ there is a unique $f: c\to U(d)$ such that $g = \epsilon_d\circ F(f)$, as in the diagram

moreover, the two conditions are related by the formulas
• $\psi(f) = \epsilon_d\circ F(f)$
• $\epsilon_d = \psi(1_{Ud})$
where $\psi = \phi^{-1}$.

Hence, we have an equivalent definition with higher generality, more symmetry and more horsepower, as it were:

Definition An adjunction consists of functors

and a natural isomorphism

The unit $\eta$ and the counit $\epsilon$ of the adjunction are natural transformations given by:
• $\eta: 1_C\to UF: \eta_c = \phi(1_{Fc})$
• $\epsilon: FU\to 1_D: \epsilon_d = \psi(1_{Ud})$.

Some of the examples we have had difficulties fitting into the limits framework show up as adjunctions:

The free and forgetful functors are adjoints; and indeed, a more natural definition of what it means to be free is that it is a left adjoint to some forgetful functor.

Curry and uncurry, in the definition of an exponential object, are an adjoint pair. The functor $-\times A: X\mapsto X\times A$ has right adjoint $-^A: Y\mapsto Y^A$.

#### 3.1 Notational aid

One way to write the adjoint is as a bidirectional rewrite rule:

$\frac{F(X) \to Y}{X\to G(Y)}$,

where the statement is that the hom sets indicated by the upper and lower arrow, respectively, are transformed into each other by the unit and counit respectively. The left adjoint is the one that has the functor application on the left hand side of this diagram, and the right adjoint is the one with the functor application to the right.

### 4 Homework

A complete homework consists of 6 of the 11 exercises.

1. Prove that an equalizer is a monomorphism.
2. Prove that a coequalizer is an epimorphism.
3. Prove that given any relation $R\subseteq X\times X$, its completion to an equivalence relation is the kernel of the coequalizer of the component maps of the relation. For the purpose of this, we define kernels in Set as the equalizer of $f\circ p_1, f\circ p_2: A\times A\to B$. (A more general definition of a kernel, independent of zero objects, can be found using a definition of quotients by equivalence relations, and having the kernel be a universal object to factor the quotient through).
4. Prove that if the right arrow in a pullback square is a mono, then so is the left arrow. Thus the intersection as a pullback really is a subobject.
5. Prove that if both the arrows in the pullback 'corner' are mono, then the arrows of the pullback cone are all mono.
6. What is the pullback in the category of posets?
7. What is the pushout in the category of posets?
8. Prove that the exponential and the product functors above are adjoints. What are the unit and counit?
9. (worth 4pt) Consider the unique functor $!:C\to 1$ to the terminal category.
   1. Does it have a left adjoint? What is it?
   2. Does it have a right adjoint? What is it?
10. * Prove the propositions in the text.
11. (worth 4pt) Suppose the functors $F$ and $U$ form an adjoint pair. Find a natural transformation $FUF\to F$. Conclude that there is a natural transformation $\mu: UFUF\to UF$. Prove that this is associative, in other words that the diagram commutes. Prove that the unit of the adjunction forms a unit for this $\mu$, in other words, that the diagram commutes.
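As a concrete sanity check of the curry/uncurry example above (and of exercise 8), the short Python sketch below exhibits the bijection $Hom(X\times A, Y) \cong Hom(X, Y^A)$ for small finite sets by brute force. The particular sets and helper names are illustrative choices added here, not part of the formal development.

```python
from itertools import product

# Small finite sets; a function is modelled as a dict keyed by its domain.
X, A, Y = {0, 1}, {"a", "b"}, {"p", "q", "r"}

def curry(g):
    """Hom(X x A, Y) -> Hom(X, Y^A): send g to the map x |-> (a |-> g(x, a))."""
    return {x: {a: g[(x, a)] for a in A} for x in X}

def uncurry(f):
    """Hom(X, Y^A) -> Hom(X x A, Y): send f to the map (x, a) |-> f(x)(a)."""
    return {(x, a): f[x][a] for x in X for a in A}

def all_maps(dom, cod):
    """Enumerate every function dom -> cod as a dict."""
    dom = list(dom)
    for values in product(cod, repeat=len(dom)):
        yield dict(zip(dom, values))

XxA = {(x, a) for x in X for a in A}

# curry followed by uncurry is the identity on Hom(X x A, Y) ...
assert all(uncurry(curry(g)) == g for g in all_maps(XxA, Y))

# ... and uncurry followed by curry is the identity on Hom(X, Y^A),
# so the two hom sets are in bijection, as the adjunction demands.
exponentials = list(all_maps(A, Y))              # the elements of Y^A
for hs in product(exponentials, repeat=len(X)):
    f = dict(zip(X, hs))                         # a map X -> Y^A
    assert curry(uncurry(f)) == f

print("Hom(X x A, Y) and Hom(X, Y^A) are in bijection via curry/uncurry")
```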
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 62, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8784911036491394, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/57185?sort=votes
## Nilpotent matrices related to Lie algebras of special orthogonal groups in characteristic 0

In terms of matrix theory, the question I'm led to is the following: Start with an `$n$`-dimensional vector space over an algebraically closed field of characteristic 0 such as `$\mathbb{C}$`, which has a non-degenerate symmetric bilinear form. Consider all `$n \times n$` skew-adjoint matrices `$A$` relative to this form. Given `$n\geq 5$`, what is the least power `$k$` for which all nilpotent matrices of this type satisfy `$A^k=0$`?

For any nilpotent `$n \times n$` matrix `$A$`, it follows from the Cayley-Hamilton Theorem that `$A^n=0$`. But in the special case here it seems plausible to expect a slightly smaller minimum: namely, `$n-1$` if `$n$` is odd and `$n-2$` if `$n$` is even. I also wonder what is written down in the literature along this line.

There is of course a hidden agenda, relative to simple Lie algebras attached to special orthogonal groups over a field like `$\mathbb{C}$`. In the classification of simple types `$A_\ell-D_\ell$` of rank `$\ell$`, the respective Coxeter numbers are `$\ell+1, 2\ell, 2\ell, 2(\ell-1)$`. (Types `$B, C$` share the same Weyl group.) Types `$A, C$` are realized naturally as `$n\times n$` matrices with `$n=h$`, but the other Lie algebras of orthogonal type yield `$n=2\ell+1$` and `$n=2\ell$`. So my question for the latter types means: does every nilpotent element `$e$` of this natural matrix Lie algebra satisfy `$e^h =0$` as in types `$A,C$`?

Behind this question is a related prime characteristic question for restricted Lie algebras, motivated in part by Kostant's classical 1959 paper in Amer. J. Math. (Corollary 5.4). In the general setting of simple Lie algebras he showed that regular (= principal) "nilpotent" elements `$e$` are characterized by a condition on their adjoint operators: `$(\mathrm{ad} \:e)^{2q}\neq 0$` where `$q$` is the sum of coefficients of the highest root expressed relative to simple roots. Moreover, the next power annihilates all regular nilpotents. Earlier he showed that `$q+1 = h$` is the Coxeter number of the Weyl group. (But there is a misprint in that corollary.)

ADDED: As Victor points out, except for a small decrease in type `$D$`, the four classical families of simple Lie algebras have index of nilpotence in their natural representations given by the Cayley-Hamilton approach in type `$A$`. My question arose from passing to characteristic `$p>0$` via a Chevalley basis over `$\mathbb{Z}$`, then extending scalars. For `$p \geq h$`, results from the mid-1980s on cohomology of restricted Lie algebras and support varieties of modules (Jantzen, Friedlander-Parshall, ... ) reveal that for the built-in `$[p]$` operation on such a Lie algebra one has `$\text{ad}\: e^{[p]} = 0 = (\text{ad} \:e)^p$` for all nilpotents `$e$`. But in natural matrix representations like those for types `$A-D$`, the `$[p]$` operation is the usual matrix power. Here the slightly modified Cayley-Hamilton power needed for vanishing agrees. At the extreme of `$E_8$` there is more divergence: Here the "natural" smallest faithful representation is given by the adjoint module with `$n=248$`, whereas `$h=30$`. For `$p = 31$` this power of each `$\text{ad} \:e$` vanishes, contrasting with Kostant's characteristic 0 result which requires a power at least `$59$` when `$e$` is regular.
- 2 For odd n there is an n-dimensional orthogonal irreducible representation of sl(2). Then $e\in sl(2)$ would be represented by a skew-adjoint operator with $e^{n-1}\ne 0$. This seems to contradict your statement. – Victor Ostrik Mar 3 2011 at 4:27

## 1 Answer

Let $\lambda$ be a partition of $n.$ Then there exists a skew-symmetric nilpotent matrix whose Jordan block sizes are $\lambda_i$ if and only if every even part has even multiplicity. This follows easily from the representation theory of $\mathfrak{sl}_2$ and is duly recorded in standard sources, e.g. Collingwood and McGovern. It follows that the maximum "nilpotence index" of a skew-symmetric $n\times n$ matrix is $n$ for odd $n$ and $n-1$ for even $n.$

- Thanks for reminding me of the treatment in Collingwood-McGovern. I skimmed their Chapter 5 too quickly. Initially I was looking for an explicit formulation in the matrix theory literature about the optimal estimate of nilpotence index in these cases. – Jim Humphreys Mar 3 2011 at 12:54
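A quick computational check of the classification quoted in the answer is easy to run: enumerate the partitions of n in which every even part has even multiplicity and record the largest part, i.e. the largest Jordan block and hence the nilpotence index. The short script below is only an illustration of that statement (the function names are mine), but it reproduces the claimed maxima of n for odd n and n-1 for even n.

```python
from collections import Counter

def partitions(n, max_part=None):
    """Yield all partitions of n as non-increasing tuples."""
    if max_part is None:
        max_part = n
    if n == 0:
        yield ()
        return
    for first in range(min(n, max_part), 0, -1):
        for rest in partitions(n - first, first):
            yield (first,) + rest

def max_nilpotence_index(n):
    """Largest Jordan block over nilpotent skew-symmetric n x n matrices:
    the largest part among partitions of n in which every even part
    occurs with even multiplicity (the criterion quoted in the answer)."""
    best = 0
    for p in partitions(n):
        counts = Counter(p)
        if all(mult % 2 == 0 for part, mult in counts.items() if part % 2 == 0):
            best = max(best, p[0])
    return best

for n in range(5, 13):
    expected = n if n % 2 == 1 else n - 1
    print(n, max_nilpotence_index(n), expected)
    assert max_nilpotence_index(n) == expected
```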
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 11, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9177857041358948, "perplexity_flag": "head"}
http://mathhelpforum.com/differential-equations/172058-how-many-hours-elapsed-when-body-found-print.html
# how many hours elapsed when the body was found...

• February 21st 2011, 10:13 AM
slapmaxwell1
how many hours elapsed when the body was found...
a dead body was found within a closed room of a house where the temperature was a constant 70 F. at the time of discovery the core temperature of the body was determined to be 85 F. one hour later a second measurement showed that the core temperature of the body was 80 F. assume that the time of death corresponds to t=0 and that the core temp at that time was 98.6 F, how many hours elapsed before the body was found?

ok so my equation that i want to use will be T = Tm + Ce^(kt) so can i say that t(0)= 98.6 F. if that is true then i can find my C value and have the actual problem set up. but im not sure where to go after this. I know i need to find k and then eventually t...which my book says is 1.6 hours

well any help would be appreciated..thanks in advance
• February 21st 2011, 10:31 AM
running-gag
Hi

The temperature of the body is $T(t) = T_m + Ce^{kt}$, k being negative

We know that:
- at infinite time T(t) is the temperature of the room, therefore Tm=70
- T(0)=98.6=Tm+C, therefore we know the value of C

The temperature of the body at the time it was found is $T_1 = T_m + Ce^{kt} = 85$

One hour later $T_2 = T_m + Ce^{k(t+1)} = 80$

Now you just have to solve
• February 21st 2011, 10:32 AM
Ackbeet
Don't confuse capital T with lowercase t in Newton's Law of Cooling problems! Highly dangerous. Might get a negative citation from the Citation Writing Subgroup of the Committee for the Prevention of Notational Abuse.

I would agree that $T(0) = 98.6$. The target variable is the time at which the temperature is 85 degrees, call it $t_{d},$ for the time of discovery. You know that $T(t_{d})=85,$ and you also know that $T(t_{d}+1)=80.$ Combined with $T=70+Ce^{kt},$ you should be able to find what you're looking for. Make sense?
• February 21st 2011, 01:47 PM
slapmaxwell1
yeah that does make sense, that was the trick that i missed! that was the missing link the Td and then Td+1.... thanks!
• February 21st 2011, 05:39 PM
Ackbeet
You're welcome!
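Carrying the algebra in this thread through to the number the book quotes, the small sketch below (variable names are mine, not the posters') does the arithmetic: with T(t) = 70 + 28.6*e^(kt), the two measurements force e^k = 10/15, and the time of discovery follows.

```python
import math

T_room = 70.0          # ambient temperature (F)
C = 98.6 - T_room      # from T(0) = 98.6 = T_room + C, so C = 28.6

# T(t_d) = 85 and T(t_d + 1) = 80 give
#   C * exp(k * t_d)       = 85 - 70 = 15
#   C * exp(k * (t_d + 1)) = 80 - 70 = 10
# Dividing the second equation by the first isolates exp(k).
k = math.log(10.0 / 15.0)        # k = ln(2/3), negative as running-gag noted
t_d = math.log(15.0 / C) / k     # solve C * exp(k * t_d) = 15 for t_d

print(f"k   = {k:.4f} per hour")
print(f"t_d = {t_d:.2f} hours")  # about 1.59 hours, i.e. the book's 1.6
```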
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 8, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9685733914375305, "perplexity_flag": "middle"}
http://gmatclub.com/wiki/Combinatorics
# Combinatorics ## Useful links 2. Combinatorics in the GMAT Math Book 3. Combinatorics GMAT Questions - a comprehensive list of combinatorics questions (easy, medium, hard) 4. Combinatorics DS GMAT Questions in GMAT Club Forum 5. Combinatorics PS GMAT Questions in GMAT Club Forum ## Synopsis The goal of this lesson is to get you accustomed to and comfortable with combinatorics problems on the GMAT. We have incorporated the hardest problems from GMAT that we could find, so this collection should make your experience on the final GMAT less stressful because the real problems are not as difficult. Another goal is to provide you with techniques to solve all combinations problems in time you have - 2-3 minutes. Just to start, combinatorics problems are believed to be some of the hardest ones on the GMAT - the elite division of questions that yields only to probability. Yet, even combinatorics questions have several levels of difficulty, making them available through the test: • Easy - one action problems with no complications or limits imposed • Medium - two action problems or one simple limit • Hard - either include extensive calculations or several limits or even both Still, the hardest thing with problems is to identify the solution method. The test writers, on their part, will make sure to include several almost right, but still wrong answers in order to make sure you pick a wrong answer should you mess up even a bit. Here is the bottom line: the point is to figure out how to solve, the rest is arithmetic. Below you will find several scenarios to approach combinatorics problems that, hopefully, will help you easily identify the right solution for each given problem that you meet on your test day, and at the end we have listed an approximation of a systematic approach to combinatorics problems. One word on guessing: personally, we don't like guessing. It is just as hard to guess right as to solve right, so we usually choose solving. ## Formulae There are several rules and formulae to find the number of combinations: 1. Multiplication rule - when the number of available spaces for combinations matches the number of elements (e.g. we have 5 people for 5 positions - the result will equal to $5*4*3*2*1$ or $5!$, which is 120.) or when we have several groups within one set; we will need to multiply the results for groups to find the total. (see example 1, 2, and 3). 2. Addition rule - applies in the more complex situations, primarily when we have a variable number of positions for combinations, and thus have to calculate several different number of combinations for each given number of positions (e.g. we have \$1, \$5, \$10, \$20, \$50 bills what is the number of sums we can come up with?). (see example 7). 3. Permutations formula: $\Large P_n^k = \frac{n!}{(n - k)!}$; unique members (example 7, 8). 4. Combinations formula: $\Large C_n^k = \frac{n!}{(n - k)!*k!}$; non unique members (example 9, 10, 11). Logic The first kind of the problems that we will consider is not what we usually imply under standard combinatorics problems, but still very interesting. Consider the following example from GMAT Plus: {{#x:box| EXAMPLE 1. Of the science books in a certain supply room, 50 are on botany, 65 are on zoology, 90 are on physics. 50 are on geology, and 110 are on chemistry. If science books are removed randomly from the supply room, how many must be removed to ensure that 80 of the books removed are on the same science? 
(A) 81 (B) 159 (C) 166 (D) 285 (E) 324 }}

Such problems invite us to provide a foolproof solution that would work in 100% of the cases. Thus, this means we will need to find a solution for the worst case. In our example, we can say that there is a little devil sitting in the supply room and she hands us the books so that each time we get a new book. So after 250 books, we will have removed all of the botany and geology books as well as 50 on zoology, 50 on physics, and 50 on chemistry, but we still don't have 80 of the same kind. We have 15 zoology, 40 physics, and 60 chemistry books left. So, after another 45 books, we will have removed all zoology books, 15 more physics, and 15 more chemistry books. Still not enough: we have at most 65 books of one kind. Let's remove 14 of each kind of the books we have left. So, after removing 14 more physics and 14 more chemistry books we will have a total of 323 books removed, and we have 5 stacks that are at most 79 books. Now, however, we need to remove only one more book, because we know that only two kinds of books are left (chemistry and physics), and either of them will complete a set of 80. Of course in reality it would not be that bad, but we have to take the worst situation.

It would be easier, however, if we just took a look at the number of the books that could be left in the room. We know that we need 80 of one kind, so we for sure know that those would not be botany, zoology, or geology books because there are too few of them. If we needed to guarantee that 80 of each of physics and chemistry are removed, we would have 10 physics and 30 chemistry left. However, since we need only 80 of one kind, we can leave one extra book: either 11 physics and 30 chemistry, or 10 physics and 31 chemistry, for 41 books left in either case. The trick is to see that we need only one stack to reach 80, not all. Then, we can subtract these 41 books from the total number of books (365-41=324). There are not that many of these remove/remained problems, but they are fun.

## Logic and Simple Rules

Usually, on the GMAT you can solve and find the number of combinations or permutations without using any formula or even writing out the combinations, but just by applying your logic. For practice, consider the following example from the Princeton Review:

{{#x:box| EXAMPLE 2. Katie must place five stuffed animals--a duck, a goose, a panda, a turtle and a swan in a row in the display window of a toy store. How many different displays can she make if the duck and the goose must be either first or last?

(A) 120 (B) 60 (C) 24 (D) 12 (E) 6 }}

Here is the explanation by the same company. It works but it is not optimal. Let's simplify things. We know either the duck or goose is first or last. Ignore those for the moment. How many ways can we arrange the panda, turtle, and swan in the positions 2, 3 and 4? Be systematic, and list them out: PTS, PST, TPS, TSP, SPT, STP. 6 ways. So if the duck is first and goose is last, there are 6 ways the whole arrangement can work; if the goose is first and the duck is last, that makes 6 more for a total of 12. The answer is (D).

This is an average difficulty problem - we have 5 spots and 5 animals to fill them, so we will just need to run the factorial, but even without knowing that, we can solve the problem. Let's use the brain for a second. I don't think you need to write out all the combinations as Princeton Review suggests because it is a pain (yet sometimes it helps when you are not sure about a solution).
Anyway, just think logically: there are 3 spots available for swapping: 2, 3, and 4 (the first and the fifth one are occupied by the duck and goose). So, for the first of the three spots, we have 3 animals; for the second - 2, and for the third only one. This means that for every of the three animals in the spot #1, we have 2 animals in spot #2, and one in spot #3. Therefore, to get the total number of combinations we need to multiply 3*2*1. This falls under the multiplication rule of combinatorics. Since the duck and goose provide us with two options, again, according to the multiplication rule, we need to multiply our final result by 2 to get 12. Try solving this problem on your own. {{#x:box| EXAMPLE 3. The president of a country and 4 other dignitaries are scheduled to sit in a row on the 5 chairs represented above. If the president must sit in the center chair, how many different seating arrangements are possible for the 5 people? (A) 4 (B) 5 (C) 20 (D) 24 (E) 120 }} Let's consider another example, this time from the official guide: {{#x:box| EXAMPLE 4. In how many arrangements can a teacher seat 3 girls and 3 boys in a row of 6 if the boys are to have the first, third, and fifth seats? (A) 6 (B) 9 (C) 12 (D) 36 (E) 720 }} I don't know what approach took ETC in this problem, but the most optimal again would be just to straighten things out and then apply logic. This is clearly a unique spots/members situation because otherwise we would have only one combination (girls 2, 4, 6 and boys 1, 3, 5), but it is not in the answers and truly would be stupid to ask. So let's devise a plan. Think for a bit so that you would not waste time doing useless and incorrect calculations. First of all, we have a limitation on our group that defines the number of combinations for odd and even positions. Usually for problems like this, there are two methods of solution: find the total number of combinations and then subtract the ones that fall under a limitation or count all the possible combinations that respect the limitation and then multiply them (usually the preferable approach). In our case, however, the limitation is fairly large and it will be useless counting the total number of combinations and then subtract a very complicated condition. So, let's count the possible number of combinations under our limitation. We know that there are 3 seats for 3 boys, so similarly to the previous example, we get 3! or 3*2=6. The number of girls is the same, which gives us two groups within one set; to find the total number of combinations, we need to multiply the two results for two groups 6*6=36. (because for each combination of girls there are 6 arrangements of boys and vice versa). If you were entirely at a loss with this question, you could have guessed. First of all, 720 seems just a way too much; actually it would be the correct answer if we did not have limitation, but with the limitation it looks too much, so we are down to 36, 12, and 9. 6 is too little; you could have figured that out too. None of the answers is the product of a factorial. Actually, it is very useful to know the factorial products: 2!=2, 3!=6, 4!=24, 5!=120, 6!=720. Anyway, it would be hard to guess cause we would have 12 (6+6) or 36 (6*6). Personally I don't like guessing. I think it is much harder to guess than to solve, so why not do the easiest thing and just solve? Perhaps a pure multiplication rule will be the following that came from an unknown source: {{#x:box| EXAMPLE 5. 
If a customer makes exactly 1 selection from each of the 5 categories listed below, what is the greatest number of different ice cream sundaes that a customer can create? 12 ice cream flavors; 10 kinds of candy; 8 liquid toppings; 5 kinds of nuts; With or without whip cream; (A) 9600 (B) 4800 (C) 2400 (D) 800 (E) 400 }} According to the problem, the customer must make 1 selection out of each and it can be only one (if it were different it would much more complex). Basically, we have 5 different ingredients, so after picking one of 12 ice cream flavors, the person has 10 choices of candy, and then for each of 10 choices of candy, he/she has 8 options for toppings, and for each of those 8 toppings - 5 kinds of nuts. Moreover, the person will get either with or without whipped cream. Obviously, this is a multiplication case by all means: the number of positions to fill with combinations equals the number of different ingredients - 5, (in such cases, we can't use a permutation or combination formulae). Here is what we get after multiplying: $12*10*8*5*2 = 9,600$ (don't forget the whipped cream). This is a one-action problem. Sometimes, the math may not be very easy. Consider the following example from the Schaum's Intro to Statistics: {{#x:box| EXAMPLE 6. From a class consisting of 12 computer science majors, 10 mathematics majors, and 9 statistics majors, a committee of 4 computer science majors, 4 mathematics majors, and 3 statistics majors is to be formed. How many distinct committees are there? }} To solve the problem, we will need to find the 3 constituting elements - 4 computer majors, 4 math majors, and 3 statistics majors and then, since they are elements of one set - multiply them. Jumping a little ahead, we will use the combinations formula and will get the following results for each group: 495, 210, and 84. Now, since for each computer science major there are a good number of math and stats majors, we multiply. The result we will get is $495*210*84 = 8,731,800$. ## Permutations and Combinations Besides the two problems explained in Kaplan, that you should know by heart, about the 3 out of 5 runners and 3 out of 8 committee members, there are few variations. For example, let's take a Permutation problem from high school textbook: {{#x:box| EXAMPLE 7. Given a selected committee of 8, in how many ways, can the members of the committee divide the responsibilities of a president, vice president, and secretary? }} The solution comes both from the permutations formula and from logic. (Permutations, not combinations formula is used because the order matters since the positions are unique). Scenario 1 - Formula: $P_8^3 = \frac{8!}{5!}=8*7*6=336$ Scenario 2 - Logic: For the President's position we have 8 people and for the VP's - 7, and 6 left for the Secretarial position. Therefore, the total number of permutations equals to $8*7*6=336$. Since the position of a person matters (P - Alex, VP- Jen, and S -Sindy is different from Jen, Sindy, Alex), we do not need to divide by anything. Consider the following more advanced problem from the same textbook (it has a trick to it): {{#x:box| EXAMPLE 8. How many four-digit numbers can you form using ten numbers (0, 1, 2, 3, 4, 5, 6, 7, 8, 9) if the numbers can be used only once? }} It seems to be easy, we take 10*9*8*7 since we have 4 positions, and get 5040, but there is a trick to this problem because 5040 will include numbers that start with a 0, and in reality we don't have those. So, we need to subtract the number of the fake 4-digit numbers. 
There are two ways to arrive at that: either subtract one tenth from 5040, since all 10 digits are equally represented in the leading position ($5040 - \frac{5040}{10} = 4536$), or use the permutation formula to count the fake numbers directly: $P_9^3 = 9*8*7 = 504$, and $5040-504 = 4536$.

However, the problems get more complex by requiring a test taker to make more than one action, so, often, we will need to use the addition or multiplication rule along with a combination/permutation. Let's consider an interesting problem:

{{#x:box| EXAMPLE 9. A person has the following bills: \$1, \$5, \$10, \$20, \$50. How many unique sums can one form using any number of these bills only once? }}

First of all, let's reason (reasoning is always good!). There are 5 different bills, and we have to make unique sums of money out of them. Basically, as we figure out from the text, we can use either 1 or all 5 bills for our amounts. Good news is that none of the possible combinations seems to overlap, meaning that there is no way to come up with \$30, except by taking a \$10 and \$20 bills. The bad news is that we will need to calculate the possible sums when taking 1, 2, 3, 4, and 5 bills. Again, relying on logic and common sense, taking one bill at a time, we will get 5 unique sums that will equal to the nominal value of each bill. Taking 5 bills altogether, we will get one amount - the max, that equals to \$86. Now we need to find the number of sums when taking 2, 3, and 4 bills. Using the combination formula, we will get $C_5^2=\frac{5!}{2!*3!} = 10$, for $C_5^3 = \frac{5!}{3!*2!} = 10$, and for $C_5^4 = \frac{5!}{4!*1!}=5$. Total will be: $5+10+10+5+1=31$ possible sums. (You can check by writing all of them out).

Moving towards hard problems, let's consider a little more complex situation offered again by Princeton:

{{#x:box| EXAMPLE 10. A three-person committee must be chosen from a group of 7 professors and 10 graduate students. If at least one of the people on the committee must be a professor, how many different groups of people could be chosen for the committee?

(A) 70 (B) 560 (C) 630 (D) 1,260 (E) 1,980 }}

This is a hard combinatorics problem that again requires several actions. At first it may be confusing because it has a weird condition that at least one professor needs to be on the committee. One way to solve would be to find the total without the limit and then subtract the number of situations when there are no professors on the team. The other option will be to calculate the number of combinations with 1 professor and 2 students, 2 professors and 1 student, and 3 professors.

Scenario 1. We get $C_{17}^3 = \frac{17!}{14! * 3!} = 680$ total possible committees. Now, the number of teams with students only is, using the same formula, $C_{10}^3 = \frac{10!}{7! * 3!} = 120$. Now, $680-120=560$.

Scenario 2. For a committee with 1 professor and 2 students, we will get $7*C_{10}^2 = 7 * \frac{10!}{8! * 2!} = 7*45=315$ (we multiply by 7 because any one of the 7 professors can take the single professor seat). For 2 professors and 1 student, we will get $C_7^2*10 = 21*10=210$, and for 3 professors, we will get $C_7^3 = 35$. Adding up the combinations we will get: $315+210+35=560$.

Princeton Review suggests using Scenario 2 because it is supposedly simpler to understand; however, taking into consideration that you can make a mistake in the endless calculations and that it requires 3 complex operations in contrast to 2 in the first case, it has a weaker standing. In any case, both ways get the correct answer, but you need to choose the one that appeals more to you - the one easier to use.
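Because it is easy to slip in these counting arguments, a brute-force check is reassuring. The snippet below is an illustration added here (it is not part of the original article); it confirms Example 8's 4536 and Example 10's 560 by direct enumeration with itertools.

```python
from itertools import combinations, permutations

# Example 8: four-digit numbers whose digits are all distinct.
four_digit = [p for p in permutations(range(10), 4) if p[0] != 0]
print(len(four_digit))      # 4536

# Example 10: 3-person committees from 7 professors and 10 graduate students
# with at least one professor on the committee.
people = [("prof", i) for i in range(7)] + [("stud", i) for i in range(10)]
committees = [c for c in combinations(people, 3)
              if any(role == "prof" for role, _ in c)]
print(len(committees))      # 560, answer (B)
```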
Here is another hard problem with some restrictions; again from an unknown source:

{{#x:box| EXAMPLE 11. There are 11 top managers that need to form a decision group. How many ways are there to form a group of 5 if the President and Vice President are not to serve on the same team? }}

Again, our options are either to find the total number of committees and then subtract the number of groups in which the VP and P end up together, or to count directly the allowed groups: those with only the P, only the VP, or neither of them. However, the second method will be lengthy and unnecessarily complicated, so the best solution is to find the total and subtract all the cases that fall under the limiting condition. Here is the best solution: $\frac{11!}{6!*5!} = \frac{11*10*9*8*7}{5*4*3*2*1} = 11*2*3*7=11*42 = 462$ (since the order does not matter, we use the combinations formula). This means that the total possible number of teams of 5 out of the pool of 11 people is 462, but we have a limiting condition imposed that says that two members of the 11 cannot be on the same team. Therefore, we need to subtract the number of teams where the Vice President and the President serve together.

The number of teams that VP and P would serve together on is perhaps the hardest thing in this problem. Anyway, the trick is to count how many teams VP and P will be on together. To do this, we need to imagine the team and the five chairs: let's assume that P is chair #1 (since the order does not really matter), VP is #2, and the three other spots are available to the rest (9 total), so for the 3rd chair, we will have 9 managers, for the 4th - 8, and the 5th place will be offered only to the remaining 7. Therefore, the total number of teams on which VP and P meet is $C_9^3 = \frac{9*8*7}{3*2} = 84$ (again we divide because the order of the people showing up on the team does not matter). Subtracting, we get $462 - 84 = 378$ possible groups.

## Practice Problems

1. There are 9 books on a shelf, 7 hard cover and 2 soft cover. How many different combinations exist in which you choose 4 books from the 9 and have at least one of them be a soft cover book? (Ans. 91)
2. There are 5 married couples and a group of three is to be formed out of them; how many arrangements are there if a husband and wife may not be in the same group? (Ans. 80)
3. How many different signals can be transmitted by hoisting 3 red, 4 yellow and 2 blue flags on a pole, assuming that in transmitting a signal all nine flags are to be used? (Ans. 1260)
4. From a group of 8 secretaries, select 3 persons for promotion. How many distinct selections are there? (Ans. 56)
5. In a certain game, a large container is filled with red, yellow, green, and blue beads worth, respectively, 7, 5, 3, and 2 points each. A number of beads are then removed from the container. If the product of the point values of the removed beads is 147,000, how many red beads were removed?
   (A) 5 (B) 4 (C) 3 (D) 2 (E) 0
6. There are between 100 and 110 cards in a collection of cards. If they are counted out 3 at a time, there are 2 left over, but if they are counted out 4 at a time, there is 1 left over. How many cards are in the collection?
   (A) 101 (B) 103 (C) 106 (D) 107 (E) 109
7. The probability that it will rain in NYC on any given day in July is 30%. What is the probability that it will rain on exactly 3 days from July 5 to July 10?
8. There are 4 Fashion magazines and 4 Car magazines. Four magazines are drawn at random; what is the probability that all fashion magazines will be drawn?
   (A) 1 (B) $\frac{1}{2}$ (C) $\frac{1}{3}$ (D) $\frac{1}{8}$ (E) $\frac{1}{70}$

## Strategy

1. Read the problem carefully, trying to see both general and specific details, but do not let the numbers confuse you; try to see the whole picture first.
2. Choose the best approach for solving the problem: either take a logical approach by drawing the number of members, seats, etc. or apply a formula.
3. If you can't find a way to solve the problem because it seems to be too complex, try to associate it with one of the problems we did. (It is recommended that you memorize at least two problems that are given in the Kaplan math review section so that you would be able to reproduce their solution on paper in a hard moment of panic). Even the most difficult combinatorics GMAT problems are solved using the simple formulae, so look for a way to apply them. There should be one.
4. If you are completely at a loss, there is a good way to estimate/guess: take the maximum possible count and then estimate how much less the answer should be. Usually, you will get down to two answer choices, and then you can try to tailor your solution to the answers and see which solution makes more sense.
5. After you have solved the problem, go back to the question and make sure you answered it and make sure you followed all the conditions.
6. It is also recommended to do a fast check on questions of such difficulty; try to use the other approach if applicable (a formula if you used logic, or logic if you used a formula) or just make sure your solution is reasonable. (e.g. you may come up with an answer that there are 150 combinations to seat 5 people in 3 seats, but, in fact, that does not make sense because it is too many: even 5! is only 120.)
7. Do not spend too much time on any of the problems; you can't afford more than 3 mins on any of them.

GMAT Study Guide - a prep wikibook
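For the practice problems above that come with answers, the same kind of quick verification works with math.comb and math.factorial. The following script is an illustrative add-on, not part of the original guide; the problem numbers refer to the practice list above.

```python
from fractions import Fraction
from math import comb, factorial

# Problem 1: choose 4 of 9 books with at least one of the 2 soft-cover books.
print(comb(9, 4) - comb(7, 4))        # 126 - 35 = 91

# Problem 2: groups of 3 from 5 married couples (10 people), no couple together.
# Subtract the groups containing a couple: 5 couples x 8 choices of third member.
print(comb(10, 3) - 5 * 8)            # 120 - 40 = 80

# Problem 3: distinguishable orderings of 3 red, 4 yellow and 2 blue flags.
print(factorial(9) // (factorial(3) * factorial(4) * factorial(2)))  # 1260

# Problem 4: promote 3 out of 8 secretaries.
print(comb(8, 3))                     # 56

# Problem 8: draw 4 of the 8 magazines; probability that all 4 are fashion ones.
print(Fraction(comb(4, 4), comb(8, 4)))   # 1/70, answer (E)
```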
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 29}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9466444253921509, "perplexity_flag": "middle"}
http://en.wikipedia.org/wiki/Reverse_mathematics
Reverse mathematics Reverse mathematics is a program in mathematical logic that seeks to determine which axioms are required to prove theorems of mathematics. Its defining method can briefly be described as "going backwards from the theorems to the axioms", in contrast to the ordinary mathematical practice of deriving theorems from axioms. The reverse mathematics program was foreshadowed by results in set theory such as the classical theorem that the axiom of choice and Zorn's lemma are equivalent over ZF set theory. The goal of reverse mathematics, however, is to study possible axioms of ordinary theorems of mathematics rather than possible axioms for set theory. Reverse mathematics is usually carried out using subsystems of second-order arithmetic, where many of its definitions and methods are inspired by previous work in constructive analysis and proof theory. The use of second-order arithmetic also allows many techniques from recursion theory to be employed; many results in reverse mathematics have corresponding results in computable analysis. The program was founded by Harvey Friedman (1975, 1976). A standard reference for the subject is (Simpson 2009). General principles In reverse mathematics, one starts with a framework language and a base theory—a core axiom system—that is too weak to prove most of the theorems one might be interested in, but still powerful enough to develop the definitions necessary to state these theorems. For example, to study the theorem “Every bounded sequence of real numbers has a supremum” it is necessary to use a base system which can speak of real numbers and sequences of real numbers. For each theorem that can be stated in the base system but is not provable in the base system, the goal is to determine the particular axiom system (stronger than the base system) that is necessary to prove that theorem. To show that a system S is required to prove a theorem T, two proofs are required. The first proof shows T is provable from S; this is an ordinary mathematical proof along with a justification that it can be carried out in the system S. The second proof, known as a reversal, shows that T itself implies S; this proof is carried out in the base system. The reversal establishes that no axiom system S′ that extends the base system can be weaker than S while still proving T. Use of second-order arithmetic Most reverse mathematics research focuses on subsystems of second-order arithmetic. The body of research in reverse mathematics has established that weak subsystems of second-order arithmetic suffice to formalize almost all undergraduate-level mathematics. In second-order arithmetic, all objects must be represented as either natural numbers or sets of natural numbers. For example, in order to prove theorems about real numbers, the real numbers must be represented as Cauchy sequences of rational numbers, each of which can be represented as a set of natural numbers.[further explanation needed] The axiom systems most often considered in reverse mathematics are defined using axiom schemes called comprehension schemes. Such a scheme states that any set of natural numbers definable by a formula of a given complexity exists. In this context, the complexity of formulas is measured using the arithmetical hierarchy and analytical hierarchy. The reason that reverse mathematics is not carried out using set theory as a base system is that the language of set theory is too expressive. 
Extremely complex sets of natural numbers can be defined by simple formulas in the language of set theory (which can quantify over arbitrary sets). In the context of second-order arithmetic, results such as Post's theorem establish a close link between the complexity of a formula and the (non)computability of the set it defines.

Another effect of using second-order arithmetic is the need to restrict general mathematical theorems to forms that can be expressed within arithmetic. For example, second-order arithmetic can express the principle "Every countable vector space has a basis" but it cannot express the principle "Every vector space has a basis". In practical terms, this means that theorems of algebra and combinatorics are restricted to countable structures, while theorems of analysis and topology are restricted to separable spaces. Many principles that imply the axiom of choice in their general form (such as "Every vector space has a basis") become provable in weak subsystems of second-order arithmetic when they are restricted. For example, "every field has an algebraic closure" is not provable in ZF set theory, but the restricted form "every countable field has an algebraic closure" is provable in RCA0, the weakest system typically employed in reverse mathematics.

The big five subsystems of second order arithmetic

Second order arithmetic is a formal theory of the natural numbers and sets of natural numbers. Many mathematical objects, such as countable rings, groups, and fields, as well as points in effective Polish spaces, can be represented as sets of natural numbers, and modulo this representation can be studied in second order arithmetic.

Reverse mathematics makes use of several subsystems of second order arithmetic. A typical reverse mathematics theorem shows that a particular mathematical theorem T is equivalent to a particular subsystem S of second order arithmetic over a weaker subsystem B. This weaker system B is known as the base system for the result; in order for the reverse mathematics result to have meaning, this system must not itself be able to prove the mathematical theorem T.

Simpson (2009) describes five particular subsystems of second order arithmetic, which he calls the Big Five, that occur frequently in reverse mathematics. In order of increasing strength, these systems are named by the initialisms RCA0, WKL0, ACA0, ATR0, and Π^1_1-CA0. The following table summarizes the "big five" systems (Simpson 2009, p. 42):

| Subsystem | Stands for | Ordinal | Corresponds roughly to | Comments |
|---|---|---|---|---|
| RCA0 | Recursive comprehension axiom | ω^ω | Constructive mathematics (Bishop) | The base system for reverse mathematics |
| WKL0 | Weak König's lemma | ω^ω | Finitistic reductionism (Hilbert) | Conservative over PRA for Π^0_2 sentences. Conservative over RCA0 for Π^1_1 sentences. |
| ACA0 | Arithmetical comprehension axiom | ε_0 | Predicativism (Weyl, Feferman) | Conservative over Peano arithmetic for arithmetical sentences |
| ATR0 | Arithmetical transfinite recursion | Γ_0 | Predicative reductionism (Friedman, Simpson) | Conservative over Feferman's system IR for Π^1_1 sentences |
| Π^1_1-CA0 | Π^1_1 comprehension axiom | Ψ_0(Ω_ω) | Impredicativism | |

The subscript 0 in these names means that the induction scheme has been restricted from the full second-order induction scheme (Simpson 2009, p. 6). For example, ACA0 includes the induction axiom (0∈X ∧ ∀n(n∈X → n+1∈X)) → ∀n n∈X.
This together with the full comprehension axiom of second order arithmetic implies the full second-order induction scheme given by the universal closure of (φ(0) ∧ ∀n(φ(n) → φ(n+1))) → ∀n φ(n) for any second order formula φ. However ACA0 does not have the full comprehension axiom, and the subscript 0 is a reminder that it does not have the full second-order induction scheme either. This restriction is important: systems with restricted induction have significantly lower proof-theoretical ordinals than systems with the full second-order induction scheme. The base system RCA0 RCA0 is the fragment of second-order arithmetic whose axioms are the axioms of Robinson arithmetic, induction for Σ0 1 formulas, and comprehension for Δ0 1 formulas. The subsystem RCA0 is the one most commonly used as a base system for reverse mathematics. The initials "RCA" stand for "recursive comprehension axiom", where "recursive" means "computable", as in recursive function. This name is used because RCA0 corresponds informally to "computable mathematics". In particular, any set of natural numbers that can be proven to exist in RCA0 is computable, and thus any theorem which implies that noncomputable sets exist is not provable in RCA0. To this extent, RCA0 is a constructive system, although it does not meet the requirements of the program of constructivism because it is a theory in classical logic including the excluded middle. Despite its seeming weakness (of not proving any noncomputable sets exist), RCA0 is sufficient to prove a number of classical theorems which, therefore, require only minimal logical strength. These theorems are, in a sense, below the reach of the reverse mathematics enterprise because they are already provable in the base system. The classical theorems provable in RCA0 include: • Basic properties of the natural numbers, integers, and rational numbers (for example, that the latter form an ordered field). • Basic properties of the real numbers (the real numbers are an Archimedean ordered field; any nested sequence of closed intervals whose lengths tend to zero has a single point in its intersection; the real numbers are not countable). • The Baire category theorem for a complete separable metric space (the separability condition is necessary to even state the theorem in the language of second-order arithmetic). • The intermediate value theorem on continuous real functions. • The Banach–Steinhaus theorem for a sequence of continuous linear operators on separable Banach spaces. • A weak version of Gödel's completeness theorem (for a set of sentences, in a countable language, that is already closed under consequence). • The existence of an algebraic closure for a countable field (but not its uniqueness). • The existence and uniqueness of the real closure of a countable ordered field. The first-order part of RCA0 (the theorems of the system that do not involve any set variables) is the set of theorems of first-order Peano arithmetic with induction limited to Σ01 formulas. It is provably consistent, as is RCA0, in full first-order Peano arithmetic. Weak König's lemma WKL0 The subsystem WKL0 consists of RCA0 plus a weak form of König's lemma, namely the statement that every infinite subtree of the full binary tree (the tree of all finite sequences of 0's and 1's) has an infinite path. This proposition, which is known as weak König's lemma, is easy to state in the language of second-order arithmetic. 
WKL0 can also be defined as the principle of Σ⁰₁ separation (given two Σ⁰₁ formulas of a free variable n which are exclusive, there is a class containing all n satisfying the one and no n satisfying the other).

The following remark on terminology is in order. The term "weak König's lemma" refers to the sentence which says that any infinite subtree of the binary tree has an infinite path. When this axiom is added to RCA0, the resulting subsystem is called WKL0. A similar distinction between particular axioms, on the one hand, and subsystems including the basic axioms and induction, on the other hand, is made for the stronger subsystems described below.

In a sense, weak König's lemma is a form of the axiom of choice (although, as stated, it can be proven in classical Zermelo–Fraenkel set theory without the axiom of choice). It is not constructively valid in some senses of the word constructive.

To show that WKL0 is actually stronger than (not provable in) RCA0, it is sufficient to exhibit a theorem of WKL0 which implies that noncomputable sets exist. This is not difficult; WKL0 implies the existence of separating sets for effectively inseparable recursively enumerable sets.

It turns out that RCA0 and WKL0 have the same first-order part, meaning that they prove the same first-order sentences. WKL0 can prove a good number of classical mathematical results which do not follow from RCA0, however. These results are not expressible as first order statements but can be expressed as second-order statements.

The following results are equivalent to weak König's lemma and thus to WKL0 over RCA0:

• The Heine–Borel theorem for the closed unit real interval, in the following sense: every covering by a sequence of open intervals has a finite subcovering.
• The Heine–Borel theorem for complete totally bounded separable metric spaces (where covering is by a sequence of open balls).
• A continuous real function on the closed unit interval (or on any compact separable metric space, as above) is bounded (or: bounded and reaches its bounds).
• A continuous real function on the closed unit interval can be uniformly approximated by polynomials (with rational coefficients).
• A continuous real function on the closed unit interval is uniformly continuous.
• A continuous real function on the closed unit interval is Riemann integrable.
• The Brouwer fixed point theorem (for continuous functions on a finite product of copies of the closed unit interval).
• The separable Hahn–Banach theorem in the form: a bounded linear form on a subspace of a separable Banach space extends to a bounded linear form on the whole space.
• The Jordan curve theorem.
• Gödel's completeness theorem (for a countable language).
• Determinacy for open (or even clopen) games on {0,1} of length ω.
• Every countable commutative ring has a prime ideal.
• Every countable formally real field is orderable.
• Uniqueness of algebraic closure (for a countable field).

Arithmetical comprehension ACA0

ACA0 is RCA0 plus the comprehension scheme for arithmetical formulas (which is sometimes called the "arithmetical comprehension axiom"). That is, ACA0 allows us to form the set of natural numbers satisfying an arbitrary arithmetical formula (one with no bound set variables, although possibly containing set parameters). Actually, it suffices to add to RCA0 the comprehension scheme for Σ⁰₁ formulas in order to obtain full arithmetical comprehension.
The first-order part of ACA0 is exactly first-order Peano arithmetic; ACA0 is a conservative extension of first-order Peano arithmetic. The two systems are provably (in a weak system) equiconsistent. ACA0 can be thought of as a framework of predicative mathematics, although there are predicatively provable theorems that are not provable in ACA0. Most of the fundamental results about the natural numbers, and many other mathematical theorems, can be proven in this system.

One way of seeing that ACA0 is stronger than WKL0 is to exhibit a model of WKL0 that doesn't contain all arithmetical sets. In fact, it is possible to build a model of WKL0 consisting entirely of low sets using the low basis theorem, since low sets relative to low sets are low.

The following assertions are equivalent to ACA0 over RCA0:

• The sequential completeness of the real numbers (every bounded increasing sequence of real numbers has a limit).
• The Bolzano–Weierstrass theorem.
• Ascoli's theorem: every bounded equicontinuous sequence of real functions on the unit interval has a uniformly convergent subsequence.
• Every countable commutative ring has a maximal ideal.
• Every countable vector space over the rationals (or over any countable field) has a basis.
• Every countable field has a transcendence basis.
• König's lemma (for arbitrary finitely branching trees, as opposed to the weak version described above).
• Various theorems in combinatorics, such as certain forms of Ramsey's theorem.

Arithmetical transfinite recursion ATR0

The system ATR0 adds to ACA0 an axiom which states, informally, that any arithmetical functional (meaning any arithmetical formula with a free number variable n and a free class variable X, seen as the operator taking X to the set of n satisfying the formula) can be iterated transfinitely along any countable well ordering starting with any set. ATR0 is equivalent over ACA0 to the principle of Σ¹₁ separation. ATR0 is impredicative, and has the proof-theoretic ordinal $\Gamma_0$, the supremum of that of predicative systems.

ATR0 proves the consistency of ACA0, and thus by Gödel's theorem it is strictly stronger.

The following assertions are equivalent to ATR0 over RCA0:

• Any two countable well orderings are comparable. That is, they are isomorphic or one is isomorphic to a proper initial segment of the other.
• Ulm's theorem for countable reduced Abelian groups.
• The perfect set theorem, which states that every uncountable closed subset of a complete separable metric space contains a perfect closed set.
• Lusin's separation theorem (essentially Σ¹₁ separation).
• Determinacy for open sets in the Baire space.

Π¹₁ comprehension Π¹₁-CA0

Π¹₁-CA0 is stronger than arithmetical transfinite recursion and is fully impredicative. It consists of RCA0 plus the comprehension scheme for Π¹₁ formulas.

In a sense, Π¹₁-CA0 comprehension is to arithmetical transfinite recursion (Σ¹₁ separation) as ACA0 is to weak König's lemma (Σ⁰₁ separation). It is equivalent to several statements of descriptive set theory whose proofs make use of strongly impredicative arguments; this equivalence shows that these impredicative arguments cannot be removed.

The following theorems are equivalent to Π¹₁-CA0 over RCA0:

• The Cantor–Bendixson theorem (every closed set of reals is the union of a perfect set and a countable set).
• Every countable abelian group is the direct sum of a divisible group and a reduced group.

Additional systems

• Weaker systems than recursive comprehension can be defined.
The weak system RCA₀* consists of elementary function arithmetic EFA (the basic axioms plus Δ⁰₀ induction in the enriched language with an exponential operation) plus Δ⁰₁ comprehension. Over RCA₀*, recursive comprehension as defined earlier (that is, with Σ⁰₁ induction) is equivalent to the statement that a polynomial (over a countable field) has only finitely many roots and to the classification theorem for finitely generated Abelian groups. The system RCA₀* has the same proof-theoretic ordinal ω³ as EFA and is conservative over EFA for Π⁰₂ sentences.

• Weak Weak König's Lemma is the statement that a subtree of the infinite binary tree having no infinite paths has an asymptotically vanishing proportion of the leaves at length n (with a uniform estimate as to how many leaves of length n exist). An equivalent formulation is that any subset of Cantor space that has positive measure is nonempty (this is not provable in RCA0). WWKL0 is obtained by adjoining this axiom to RCA0. It is equivalent to the statement that if the unit real interval is covered by a sequence of intervals then the sum of their lengths is at least one. The model theory of WWKL0 is closely connected to the theory of algorithmically random sequences. In particular, an ω-model of RCA0 satisfies weak weak König's lemma if and only if for every set X there is a set Y which is 1-random relative to X.

• DNR (short for "diagonally non-recursive") adds to RCA0 an axiom asserting the existence of a diagonally non-recursive function relative to every set. That is, DNR states that, for any set A, there exists a total function f such that for all e the e-th partial recursive function with oracle A is not equal to f. DNR is strictly weaker than WWKL (Lempp et al., 2004).

• Δ¹₁-comprehension is in certain ways analogous to arithmetical transfinite recursion as recursive comprehension is to weak König's lemma. It has the hyperarithmetical sets as minimal ω-model. Arithmetical transfinite recursion proves Δ¹₁-comprehension but not the other way around.

• Σ¹₁-choice is the statement that if η(n,X) is a Σ¹₁ formula such that for each n there exists an X satisfying η then there is a sequence of sets X_n such that η(n,X_n) holds for each n. Σ¹₁-choice also has the hyperarithmetical sets as minimal ω-model. Arithmetical transfinite recursion proves Σ¹₁-choice but not the other way around.

ω-models and β-models

The ω in ω-model stands for the set of non-negative integers (or finite ordinals). An ω-model is a model for a fragment of second-order arithmetic whose first-order part is the standard model of Peano arithmetic, but whose second-order part may be non-standard. More precisely, an ω-model is given by a choice S ⊆ 2^ω of subsets of ω. The first-order variables are interpreted in the usual way as elements of ω, and +, × have their usual meanings, while second-order variables are interpreted as elements of S. There is a standard ω-model where one just takes S to consist of all subsets of the integers. However, there are also other ω-models; for example, RCA0 has a minimal ω-model where S consists of the recursive subsets of ω.

A β-model is an ω-model that is equivalent to the standard ω-model for Π¹₁ and Σ¹₁ sentences (with parameters).

Non-ω models are also useful, especially in the proofs of conservation theorems.

References

• Ambos-Spies, K.; Kjos-Hanssen, B.; Lempp, S.; Slaman, T.A. (2004), "Comparing DNR and WWKL", Journal of Symbolic Logic 69 (4): 1089, doi:10.2178/jsl/1102022212.
• Friedman, Harvey (1975), "Some systems of second order arithmetic and their use", Proceedings of the International Congress of Mathematicians (Vancouver, B. C., 1974), Vol. 1, Canad. Math. Congress, Montreal, Que., pp. 235–242, MR0429508 • Friedman, Harvey; Martin, D. A.; Soare, R. I.; Tait, W. W. (1976), Systems of second order arithmetic with restricted induction, I, II, "Meeting of the Association for Symbolic Logic", The Journal of Symbolic Logic (Association for Symbolic Logic) 41 (2): 557–559, ISSN 0022-4812, JSTOR 2272259 • Simpson, Stephen G. (2009), Subsystems of second order arithmetic, Perspectives in Logic (2nd ed.), Cambridge University Press, ISBN 978-0-521-88439-6, MR2517689 • Solomon, Reed (1999), "Ordered groups: a case study in reverse mathematics", The Bulletin of Symbolic Logic 5 (1): 45–58, doi:10.2307/421140, ISSN 1079-8986, JSTOR 421140, MR1681895
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 1, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9109012484550476, "perplexity_flag": "head"}
http://physics.stackexchange.com/questions/23846/vector-space-of-mathbbc4-and-its-basis-the-pauli-matrices/23883
# Vector space of $\mathbb{C}^4$ and its basis, the Pauli matrices

How do I write an arbitrary $2\times 2$ matrix as a linear combination of the three Pauli matrices and the $2\times 2$ unit matrix? Any example of the same might help? - 4 – Kostya Apr 16 '12 at 13:49

## 3 Answers

The matrices $\sigma_0\equiv \boldsymbol{1}_2$, $\sigma_x$, $\sigma_y$ and $\sigma_z$ form an orthonormal basis of your vector space w.r.t. the scalar product $$(X,Y) \equiv \frac{1}{2}\operatorname{tr}(X\cdot Y),$$ where $X$ and $Y$ label any two complex $2\times 2$ matrices. The factor $1/2$ is just for convenience; you may as well normalise your Pauli matrices by dividing them by $2$. All you want to do now is to decompose an arbitrary element $$M = \begin{pmatrix} a & b \\ c & d \end{pmatrix}$$ of your vector space into the above basis and figure out the coefficients. As usual, this is done by projecting onto that basis by means of the scalar product $$M = (\sigma_0,M)\cdot\sigma_0 + (\sigma_x,M)\cdot\sigma_x + (\sigma_y,M)\cdot\sigma_y + (\sigma_z,M)\cdot\sigma_z\ .$$ This has essentially been said in the above comments, particularly in the link posted by Kostya. -

A slow construction would go... $$\begin{pmatrix}a&b\\c&d\end{pmatrix} = a\begin{pmatrix}1&0\\0&0\end{pmatrix} +b\begin{pmatrix}0&1\\0&0\end{pmatrix} +c\begin{pmatrix}0&0\\1&0\end{pmatrix} +d\begin{pmatrix}0&0\\0&1\end{pmatrix}$$ $$\begin{pmatrix}1&0\\0&0\end{pmatrix} =\frac{1}{2} \begin{pmatrix}1&0\\0&1\end{pmatrix} + \frac{1}{2} \begin{pmatrix}1&0\\0&-1\end{pmatrix} =\frac{1}{2}1_2+\frac{1}{2}\sigma_3$$ $$\begin{pmatrix}0&1\\0&0\end{pmatrix} =\ ...$$ $$\Longrightarrow \begin{pmatrix}a&b\\c&d\end{pmatrix} = \frac{a}{2}1_2+\frac{a}{2}\sigma_3+\ ...\ (\text{other combinations of the four matrices})$$ -

I like to put it this way: $$\left(\begin{array}{cc} w+z&x-iy\\ x+iy&w-z\end{array}\right)$$ So, for example: $$\left(\begin{array}{cc} 1&5\\1&2\end{array}\right) = \left(\begin{array}{cc} (1.5)+(-0.5)&(3)-i(2i)\\ (3)+i(2i)&(1.5)-(-0.5)\end{array}\right)$$ So $w=1.5, x=3, y=2i, z=-0.5$ and $$\left(\begin{array}{cc} 1&5\\1&2\end{array}\right) = 1.5 + 3\sigma_x + 2i\sigma_y -0.5\sigma_z.$$ You can solve for $w,x,y,z$ from the entries in the array easily. I.e., $x$ is the average of the top right and bottom left entries, etc. -
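To make the projection formula above concrete, here is a small NumPy check. This is my own sketch, not from the thread; it reuses the example matrix from the last answer and recovers each coefficient as (σ_i, M) = ½ tr(σ_i M).

```python
import numpy as np

# Pauli basis: identity plus the three Pauli matrices
sigma = [
    np.array([[1, 0], [0, 1]], dtype=complex),    # sigma_0 = 1_2
    np.array([[0, 1], [1, 0]], dtype=complex),    # sigma_x
    np.array([[0, -1j], [1j, 0]], dtype=complex), # sigma_y
    np.array([[1, 0], [0, -1]], dtype=complex),   # sigma_z
]

M = np.array([[1, 5], [1, 2]], dtype=complex)     # example from the last answer

# Project onto the basis: c_i = (sigma_i, M) = 0.5 * tr(sigma_i . M)
coeffs = [0.5 * np.trace(s @ M) for s in sigma]
print("coefficients:", coeffs)                    # expect 1.5, 3, 2j, -0.5

# Reconstruct M from the coefficients to confirm the decomposition
M_rebuilt = sum(c * s for c, s in zip(coeffs, sigma))
assert np.allclose(M, M_rebuilt)
```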
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 15, "mathjax_display_tex": 11, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9333003163337708, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/213701/proving-that-the-map-is-continuous
# Proving that the map is continuous Let $C[a,b]$ be the set of all continuous functions on $[a,b]$, with the $p$-norm for $p$ in $[1,\infty]$. Let $T$ be the mapping defined by: $$T:g \to g^2$$ where $g$ belongs to $C[a,b]$. Is this map continuous for all $p$? EDIT: The $p$-norm is defined as $\|g\|_p = (\int_a^b \! |g(x)|^{p} \, \mathrm{d} x)^{1/p}$ EDIT 2: Would it be correct to show that: $\|Tg-Tf\|_p \le K\|g-f\|_p$ for some constant $K$? - Do you know how to use TeX? – nikita2 Oct 14 '12 at 16:34 Unfortunately I do not. I will look over the TeX help and try to edit the question though. – Heisenberg Oct 14 '12 at 16:35 Is that better? :) – Heisenberg Oct 14 '12 at 16:38 I've just edited. – Sigur Oct 14 '12 at 16:39 Thanks Sigur. I just made the change before you did. I did not realize posting in $TeX$ was that simple. – Heisenberg Oct 14 '12 at 16:39 show 2 more comments ## 1 Answer This map is not continuous for any $p$. Hint: Consider a piecewise linear function with $f(a) = c$, $f(a+r)=0$, and $f=0$ on $[a+r, b]$. Compute the $p$ norm of $f$ and $f^2$ in terms of $c,r$. Then choose a sequence of $c_n$ and $r_n$ such that for the corresponding functions $f_n$, $\|f_n\|_p \to 0$ but $\|f_n^2\|_p \to \infty$. -
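As a numerical illustration of the hint (my own sketch, not part of the answer): take $a=0$, $b=1$, $p=2$, and the tent function $f_n$ with $f_n(0)=c_n$, $f_n(r_n)=0$, vanishing beyond $r_n$. With the assumed choices $c_n=n$ and $r_n=n^{-3}$, the 2-norm of $f_n$ tends to 0 while the 2-norm of $f_n^2$ blows up, so $T$ is not continuous at 0.

```python
from scipy.integrate import quad

def p_norm_on_support(h, p, support_end):
    # ||h||_p on [0, 1]; h vanishes on [support_end, 1], so integrate only over [0, support_end]
    val, _ = quad(lambda x: abs(h(x)) ** p, 0.0, support_end)
    return val ** (1.0 / p)

p = 2.0
for n in [1, 10, 100, 1000]:
    c, r = float(n), n ** -3.0                      # c_n = n, r_n = n^(-3)
    f = lambda x, c=c, r=r: c * (1.0 - x / r)       # tent: f(0) = c, f(r) = 0
    f2 = lambda x, f=f: f(x) ** 2                   # T(f) = f^2
    print(n, p_norm_on_support(f, p, r), p_norm_on_support(f2, p, r))

# ||f_n||_2 = n^(-1/2)/sqrt(3) -> 0 while ||f_n^2||_2 = n^(1/2)/sqrt(5) -> infinity.
```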
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 28, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9278520941734314, "perplexity_flag": "head"}
http://mathoverflow.net/questions/76205/is-the-crandall-dilcher-and-pomerance-heuristic-concerning-wall-sun-sun-primes-s
## Is the Crandall, Dilcher and Pomerance heuristic concerning Wall-Sun-Sun primes still state of the art? ### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points) This is a question about the open problem Fibonacci divisibility from the Open Problem Garden. The problem, originally stated in 1960 by D.D. Wall, has several equivalent formulations one of which is: Find a prime $p$ with $p^2|a_{p-\left(\frac{p}{5}\right)}$, where $a_n$ is the $n$-th Fibonacci number. Such primes are often called Wall-Sun-Sun primes. Since always $p|a_{p-\left(\frac{p}{5}\right)}$ Crandall, Dilcher and Pomerance assumed uniform distribution of the residues and proposed the heuristic $\log \log y -\log \log x$ for the number of such primes in the interval $[x,y]$. Several authors did computer aided searches and no such prime was found up to $9.7 \times 10^{14}$. That confuses me and therefore my question. Q: Is the heuristic of Crandall, Dilcher and Pomerance (maybe in its patched version by Klaska) still considered state of the art? If not, are there other approaches? Edit: For me the question is sufficiently answered. That is more that I could hope for. Thank you very much! - What is Klaska's patch? – Charles Sep 23 2011 at 14:07 1 $\log\log(9.7*10^{14})$ is only $3.54$, so they found about 3.5 fewer than expected - not too surprising. – Stopple Sep 23 2011 at 17:28 @ Charles: In a recent work Jiří Klaška (Short remark on Fibonacci-Wieferich primes, Acta Mathematica Universitatis Ostraviensis, Vol. 15 (2007), No. 1, 21--25) argues that one only has to consider primes = 1 or 9 mod 10. That modifies CDP to 1/2 log log. @ Stopple: The searches for Wolstenholme (2 known), Wieferich (2 known) and Wilosn (3 known) primes are on the spot. Just the Wall-Sun-Sun primes do not seem to fit. @quid, @Francois: Wow! Thx for checking the data. I need some time to digest that, especially your comment on the result of Klaska. – Uwe Stroinski Sep 25 2011 at 12:40 ## 2 Answers As quid mentioned, Klyve and I have done some computational investigations on Fibonacci-Wieferich/Wall-Sun-Sun primes. In particular, we collected all primes $p < 9.7\times10^{14}$ such that $F_{p-(p/5)} \equiv Ap \pmod{p^2}$ with $|A| < 2\times10^6$. I've just crunched our data for primes in the range from $6.5\times10^{14}$ to $9.5\times10^{14}$ to see if the incidence of small values of $A$ matches the Crandall–Dilcher–Pomerance heuristic. Here are the results: $$\begin{matrix} p \in [6.5\times10^{14},7.0\times10^{14}) & : & 2112 & 2085 & 2083 & 2155 & 2170.39 \cr p \in [7.0\times10^{14},7.5\times10^{14}) & : & 1905 & 1915 & 2021 & 1953 & 2016.36 \cr p \in [7.5\times10^{14},8.0\times10^{14}) & : & 1867 & 1854 & 1781 & 1870 & 1882.50 \cr p \in [8.0\times10^{14},8.5\times10^{14}) & : & 1768 & 1779 & 1707 & 1669 & 1765.11 \cr p \in [8.5\times10^{14},9.0\times10^{14}) & : & 1598 & 1561 & 1650 & 1686 & 1661.35 \cr p \in [9.0\times10^{14},9.5\times10^{14}) & : & 1568 & 1592 & 1519 & 1556 & 1568.96 \cr \end{matrix}$$ The first four columns report the count of primes $p$ in the given interval whose corresponding $A$ values lie in the respective intervals $(-2\times10^6,-10^6)$, $(-10^6,0)$, $(0,10^6)$, and $(10^6,2\times10^6)$. The last column represents the value predicted by the Crandall–Dilcher–Pomerance heuristic (namely $10^6\log\left(\frac{\log y}{\log x}\right)$ for the interval from $x$ to $y$). The agreement between experimental data and theoretical values is pretty good. 
If I understand Klaška's adjustment correctly, he proposes an expected count of roughly half that proposed by the Crandall–Dilcher–Pomerance heuristic. Thus, the data does not appear to support Klaška's modified heuristic. However, note that Klaška's argument is specifically for the special value $A = 0$, so the above data does not invalidate his proposed estimate. - Thank you for the detailed data! – quid Sep 24 2011 at 23:20 ### You can accept an answer to one of your own questions by clicking the check mark next to it. This awards 15 reputation points to the person who answered and 2 reputation points to you. A very recent paper on computations on this and related things is by Dorais and Klyve; they write that their computations (for the related Wieferich primes) are in line with a conjecture of Crandall, Dilcher, Pomerance. So, it does not appear there was much change in the general expectation and what you mention still seems to be state of the art. Also note that Crandall, Dilcher, Pomerance were aware of inexistence up to about $10^{12}$. Now `$$ \log \log (10^{15}) - \log log (10^{12}) = 0.22...$$` so that also in the larger range nothing was found does not seem to shake the heuristic of CDP too much; one would not expect to find something. - 2 Thanks for mentioning our paper! I don't recall checking whether our data for Wall-Sun-Sun primes matches theoretical expectations. I will crunch the data very soon and report the results here... – François G. Dorais♦ Sep 24 2011 at 3:06
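For anyone who wants to reproduce the numbers quoted above, the heuristic predictions are one-liners. This is just my own arithmetic check of the figures in the table and in the second answer; the 10^6 factor reflects the |A| < 2×10^6 window being split into four bands of width 10^6.

```python
import math

def cdp_expected(x, y, band_width=1e6):
    # Crandall-Dilcher-Pomerance heuristic: expected count of primes p in [x, y)
    # whose Fibonacci-Wieferich residue A lands in a band of width `band_width`
    return band_width * math.log(math.log(y) / math.log(x))

# Predicted column of the table in the first answer
for lo, hi in [(6.5e14, 7.0e14), (7.0e14, 7.5e14), (7.5e14, 8.0e14),
               (8.0e14, 8.5e14), (8.5e14, 9.0e14), (9.0e14, 9.5e14)]:
    print(lo, hi, round(cdp_expected(lo, hi), 2))

# Expected number of Wall-Sun-Sun primes (A = 0) between 1e12 and 1e15,
# i.e. log log y - log log x from the second answer:
print(math.log(math.log(1e15)) - math.log(math.log(1e12)))   # about 0.22
```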
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 27, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9225093126296997, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/107545/cliques-in-the-paley-graph-and-a-problem-of-sarkozy
## Cliques in the Paley graph and a problem of Sarkozy ### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points) The following question is motivated by pure curiosity; it is not a part of any research project and I do not have any applications. The question comes as an interpolation between two notoriously difficult open problems. The first problem is to show that if $p\equiv 1\pmod 4$ is prime, and a set $A\subset{\mathbb F}_p$ has the property that the difference of any two elements of $A$ is a square, then $A$ is "small". (Basic details can be found here). Notice that, letting `${\mathcal Q}:=\{x^2\colon x\in{\mathbb F}_p\}$`, one can write the assumption as $A-A\subset{\mathcal Q}$. The second problem, to my knowledge first posed by Andras Sarkozy several years ago, is to determine whether the set of all squares is as a sumset; that is, whether ${\mathcal Q}=A+B$ with `$A,B\subset{\mathbb F}_p$` and `$\min\{|A|,|B|\}\ge 2$`. The conjectural answer is, of course, negative, provided that $p$ is sufficiently large. Both problems just mentioned seem to be quite tough; but, maybe, the following combination of the two is more tractable: For a prime $p\equiv 1\pmod 4$, writing ${\mathcal Q}$ for the set of all squares in ${\mathbb F}_p$, does there exist a set $A\subset{\mathbb F}_p$ such that $A-A={\mathcal Q}$? Compared to the first of the two aforementioned problems, we now assume that every quadratic residue is representable as a difference of two elements of $A$; compared to the second problem we assume that $B=-A$. Is there a way to utilize these extra assumptions? A funny observation is that sets $A$ with the property in question do exist for $p=5$ and also for $p=13$; however, it would be very plausible to conjecture that these values of $p$ are exceptional. (In this direction, Peter Mueller has verified computationally that no other exceptions of this sort occur for $p<1000$.) -
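Not part of the question, but the two exceptional cases are easy to confirm by brute force. The sketch below takes Q to be the set of nonzero squares and compares it with the nonzero differences of A (conventions the post leaves implicit), searching only small sets A, so a "None" result is not conclusive on its own. It should find A = {0,1} for p = 5 and A = {0,1,4} for p = 13.

```python
from itertools import combinations

def nonzero_squares(p):
    return {x * x % p for x in range(1, p)}

def witness(p, max_size=5):
    # Search for a small A with {a - b mod p : a != b in A} equal to the nonzero squares
    Q = nonzero_squares(p)
    for k in range(2, max_size + 1):
        for A in combinations(range(p), k):
            diffs = {(a - b) % p for a in A for b in A if a != b}
            if diffs == Q:
                return A
    return None   # no witness among sets of size <= max_size

for p in [5, 13, 17, 29]:   # primes congruent to 1 mod 4
    print(p, witness(p))
```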
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 19, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9599578380584717, "perplexity_flag": "head"}
http://en.wikipedia.org/wiki/Inverse_temperature
# Thermodynamic beta

From Wikipedia, the free encyclopedia (redirected from Inverse temperature)

In statistical mechanics, the thermodynamic beta (or occasionally perk) is the reciprocal of the thermodynamic temperature of a system. It can be calculated in the microcanonical ensemble from the formula $\beta\triangleq\frac{1}{k_B}\left(\frac{\partial S}{\partial E}\right)_{V, N} = \frac1{k_B T} \,,$ where kB is the Boltzmann constant, S is the entropy, E is the energy, V is the volume, N is the particle number, and T is the absolute temperature. It has units reciprocal to that of energy, or, in units where kB = 1, units reciprocal to that of temperature.

Thermodynamic beta is essentially the connection between the information theoretic/statistical interpretation of a physical system through its entropy and the thermodynamics associated with its energy. It can be interpreted as the entropic response to an increase in energy. If a system is challenged with a small amount of energy, then β describes the amount by which the system will "perk up," i.e. randomize. Though completely equivalent in conceptual content to temperature, β is generally considered a more fundamental quantity than temperature owing to the phenomenon of negative temperature, in which β is continuous as it crosses zero where T has a singularity.[1]

## Details

### Statistical interpretation

From the statistical point of view, β is a numerical quantity relating two macroscopic systems in equilibrium. The exact formulation is as follows. Consider two systems, 1 and 2, in thermal contact, with respective energies E1 and E2. We assume E1 + E2 = some constant E. The number of microstates of each system will be denoted by Ω1 and Ω2. Under our assumptions Ωi depends only on Ei. Thus the number of microstates for the combined system is $\Omega = \Omega_1 (E_1) \Omega_2 (E_2) = \Omega_1 (E_1) \Omega_2 (E-E_1) . \,$

We will derive β from the fundamental assumption of statistical mechanics: When the combined system reaches equilibrium, the number Ω is maximized. (In other words, the system naturally seeks the maximum number of microstates.) Therefore, at equilibrium, $\frac{d}{d E_1} \Omega = \Omega_2 (E_2) \frac{d}{d E_1} \Omega_1 (E_1) + \Omega_1 (E_1) \frac{d}{d E_2} \Omega_2 (E_2) \cdot \frac{d E_2}{d E_1} = 0.$

But E1 + E2 = E implies $\frac{d E_2}{d E_1} = -1.$ So $\Omega_2 (E_2) \frac{d}{d E_1} \Omega_1 (E_1) - \Omega_1 (E_1) \frac{d}{d E_2} \Omega_2 (E_2) = 0$ i.e. $\frac{d}{d E_1} \ln \Omega_1 = \frac{d}{d E_2} \ln \Omega_2 \quad \mbox{at equilibrium.}$

The above relation motivates a definition of β: $\beta =\frac{d \ln \Omega}{ d E}.$

### Connection with thermodynamic view

On the other hand, when two systems are in equilibrium, they have the same thermodynamic temperature T. Thus intuitively one would expect that β be related to T in some way. This link is provided by the fundamental assumption written as $S = k_B \ln \Omega, \,$ where kB is the Boltzmann constant.
So $d \ln \Omega = \frac{1}{k_B} d S .$ Substituting into the definition of β gives $\beta = \frac{1}{k_B} \frac{d S}{d E}.$ Comparing with the thermodynamic formula $\frac{d S}{d E} = \frac{1}{T} ,$ we have $\beta = \frac{1}{k_B T} = \frac{1}{\tau}$ where $\tau$ is sometimes called the fundamental temperature of the system with units of energy. ## References 1. Kittel, Charles; Kroemer, Herbert (1980), Thermal Physics (2 ed.), United States of America: W. H. Freeman and Company, ISBN 978-0471490302
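As a small numerical illustration (my own sketch, not from the article): for a toy system of N independent two-level units, Ω(E) is a binomial coefficient, and a finite-difference estimate of β = d ln Ω / dE is positive below half filling, zero at the entropy maximum, and negative above it, which is the negative-temperature regime mentioned above. The choices N = 1000, ε = 1 and units with kB = 1 are arbitrary.

```python
import math

def ln_omega(N, n):
    # ln of the number of microstates with n excited units out of N: ln C(N, n)
    return math.lgamma(N + 1) - math.lgamma(n + 1) - math.lgamma(N - n + 1)

N = 1000          # number of two-level units
eps = 1.0         # energy per excited unit (arbitrary units, k_B = 1)

for n in [100, 300, 500, 700, 900]:
    # beta = d(ln Omega)/dE, estimated by a central difference in E = n * eps
    beta = (ln_omega(N, n + 1) - ln_omega(N, n - 1)) / (2 * eps)
    print(n, round(beta, 4))
# beta is positive for n < N/2, zero at n = N/2, and negative for n > N/2.
```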
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 13, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9052664041519165, "perplexity_flag": "middle"}
http://mathforum.org/mathimages/index.php?title=Volume_of_Revolution&diff=21266&oldid=20363
Volume of Revolution

From Math Images

Solid of revolution
Field: Calculus
Image created by: Nordhr

This image is a solid of revolution.

Basic Description

When finding the volume of revolution of solids, in many cases the problem is not with the calculus, but with actually visualizing the solid. To find the volume of a solid like a cylinder, usually we use the formula ${\pi} {r^2} h$. Alternatively we can imagine chopping up the cylinder into thin cylindrical plates, much like slicing up bread, computing the volume of each thin slice, then summing up the volumes of all the slices.
The disc method is much like slicing up bread and computing the volume of each slice (image: http://mathdemos.gcsu.edu/mathdemos/sectionmethod/sectionmethod.html).

A More Mathematical Explanation

Disk Method

In general, given a function, we can graph it and then revolve the area under the curve between two specific coordinates about a fixed axis to obtain a solid called the solid of revolution. The volume of the solid can then be computed using the disc method. Note: there are other ways of computing the volumes of complicated solids besides the disc method. In the disc method, we imagine chopping up the solid into thin cylindrical plates, calculating the volume of each plate, then summing up the volumes of all plates.

For example, let's consider the region bounded by $y=x^2$, $y=0$, $x=0$ and $x=1$. If we revolve this area about the x-axis ($y=0$), we get a solid of revolution (image of a plane area being revolved to create a solid: http://curvebank.calstatela.edu/volrev/volrev.htm).

To find the volume of the solid using the disc method:

Volume of one disc = ${\pi} y^2{\Delta x}$, where $y$ (the value of the function) is the radius of the circular cross-section and $\Delta x$ is the thickness of each disc. Using the analogy of the bread, computing the volume of one disc would correspond to computing the volume of one slice of bread. With this in mind, the area of one disc would correspond to the area of a slice of bread, while the thickness of a disc would correspond to the thickness of a slice of bread. To find the total volume of the bread, we would have to sum up the volumes of each of the slices.

Volume of all discs = ${\sum}{\pi}y^2{\Delta x}$, with $x$ ranging from 0 to 1.

If we make the slices infinitesimally thin, the Riemann sum becomes the same as: $\int_0^1 {\pi}y^2\,dx ={\pi}\int_0^1 (x^2)^2\, dx$

Evaluating this integral, ${\pi}\int_0^1 x^4\, dx = {\pi}\left[\frac{x^5}{5}\right]_0^1 = {\pi}\left[\frac{1}{5} - \frac{0}{5}\right] = \frac{\pi}{5}$, so the volume of the solid is $\frac{\pi}{5}$ cubic units.

In the example we discussed, the area is revolved about the $x$-axis. This does not always have to be the case. A function can be revolved about any fixed axis. Also, given a different function, to find the volume of revolution about the $x$-axis, we can substitute it in the place of $x^2$. Note: we would also need to change the bounds as per the given information. The method discussed in the example works for all functions that have bounds and are revolved about the $x$-axis.

Washer Method

The washer method can be used when the rotated plane region does not touch the axis around which it is being rotated. One instance in which the region isn't touching the rotational axis is when it is bounded not by just one function but by two. For now we'll assume that one function is consistently smaller than the other, so there is a 'smaller function' and a 'larger function.' The main image on this page is an example of when the washer method is used. The top curve (which we will call f(x)) is proportional to the square root of x, and the bottom curve (which we will call g(x)) is linear. The boundaries for the functions are x = 2 and x = 10. A cross section is shown to the right. The basic philosophy behind the washer method is the same as behind the disk method.
We still must integrate along the rotational axis. The difference is that we cannot just use a single radius (i.e., we cannot simply take the radius to be R - r). This wouldn't work because then two cross-sections with the same area would necessarily produce the same volume, but this is not the case: if two circles of the same radius are rotated around the same axis, the one farther from the axis creates more volume (an animation on the original page demonstrates this). Thus, instead of subtracting the radius of the smaller function from that of the bigger function, we subtract the volume the rotated smaller function would create from the volume the bigger function creates. The formula for the washer method is:

$V = \pi \times \int(f(x)^2 - g(x)^2)\,dx$

If one function is not consistently smaller than the other, we can break up the problem into two smaller problems. If the functions f(x) and g(x) cross at some arbitrary value c, we use f(x) as the larger function from our start value to c, but as the smaller function from c to the end value. If our start and end values are a and b respectively, the formula is:

$V = \pi \times \Big( \int_a^c (f(x)^2 - g(x)^2)\,dx + \int_c^b (g(x)^2 - f(x)^2)\,dx \Big)$

References

Bread image: http://mathdemos.gcsu.edu/mathdemos/sectionmethod/sectionmethod.html
Revolving image: http://mathdemos.gcsu.edu/mathdemos/sectionmethod/sectionmethod.html

About the Creator of this Image

The image was made in OpenGL.
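As a numerical cross-check of both methods (my own sketch, using SciPy): the disk-method value $\pi/5$ for $y=x^2$ on $[0,1]$ is easy to confirm, and for the washer method the specific curves $f(x)=\sqrt{x}$, $g(x)=x/10$ on $[2,10]$ are placeholder assumptions loosely matching the description of the main image, not the actual functions used to create it.

```python
import numpy as np
from scipy.integrate import quad

# Disk method: y = x^2 revolved about the x-axis on [0, 1]
disk, _ = quad(lambda x: np.pi * (x ** 2) ** 2, 0, 1)
print(disk, np.pi / 5)          # both are about 0.6283

# Washer method: region between f(x) = sqrt(x) and g(x) = x/10 on [2, 10],
# revolved about the x-axis (f >= g on this interval)
f = lambda x: np.sqrt(x)
g = lambda x: x / 10.0
washer, _ = quad(lambda x: np.pi * (f(x) ** 2 - g(x) ** 2), 2, 10)
print(washer)                   # pi * [x^2/2 - x^3/300] from 2 to 10, about 140.4
```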
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 23, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8647451400756836, "perplexity_flag": "middle"}
http://scicomp.stackexchange.com/questions/2803/inverse-of-a-10x10-antisymmetric-matrix
# Inverse of a 10X10 antisymmetric matrix I want to invert a 10X10 antisymmetric matrix in Python around 10,000 - 20,000 times. Is there a faster way to do it other than to use the built-in inverse function in Python? Thanks. - 1 Why do you want to want to perform an involutive operation more than once? – Nick Kidman Jul 13 '12 at 13:34 I am confused why the fact that $(A^{-1})^{-1}=A$ for any matrix $A$ doesn't make this a lot easier... – Benjamin Horowitz Jul 13 '12 at 13:49 2 Presumably, the poster is inverting 10000 matrices once, not one matrix 10000 times. – user1504 Jul 13 '12 at 14:00 This should be on scicomp.SE – Colin K Jul 13 '12 at 14:05 Are you inverting the same matrix repeatedly or inverting multiple $10\times10$ matrices? – Paul♦ Jul 13 '12 at 18:26 ## 3 Answers I picked this trick up from Jack Poulson when he answered this related question on antisymmetric (or skew-symmetric) matrix exponentials. An antisymmetric (more commonly called skew-symmetric matrix) $A$ is one in which $A^{T} = -A$. Since the matrix wasn't called skew-Hermitian, I'm assuming that the matrix is real. Conveniently, $(iA)^{H} = -iA^{H}$, where $H$ denotes the Hermitian transpose, so you could compute $iA$, and invert it using the LAPACK routine ZHESV (or CHESV; unless it is also positive definite, in which case you could use ZPOSV or CPOSV). At this point, you have $(iA)^{-1} = -iA^{-1} = B$. It follows that $iB = A^{-1}$. Unfortunately, NumPy and SciPy don't implement those functions (you'd have to call them from another language, like Fortran, C, C++, Java, etc.; there could be other libraries that provide a Python interface to LAPACK, but I don't know any that implement all of it). Based on the module `scipy.linalg`, your best option is probably to call `scipy.linalg.solve` (and ignore any symmetry), or if $iA$ is Hermitian positive definite, call `scipy.linalg.solveh_banded` after rearranging your data appropriately. Based on looking at the source, at a high level, it doesn't matter whether you call `scipy.linalg.solve` or `scipy.linalg.inv`; both are ultimately LAPACK calls. If any conceivable speed difference matters, you may as well test the two, but before you do so, you're probably better off making sure you use a high-quality BLAS implementation (ATLAS, Intel MKL (if appropriate), GotoBLAS) and making sure you build LAPACK, NumPy, and SciPy accordingly. Also, if there's any task parallelism with these matrices, you could potentially exploit that as well. All of this assumes that you want to invert 10,000 - 20,000 different matrices. I presume that if you wanted to invert the same matrix that many times, you know to just calculate the LU decomposition once, and use it to solve a linear system with 10,000 - 20,000 different right hand sides (each with the same coefficient matrix), in which case, the appropriate functions are marked in `scipy.linalg`. I can add further details if need be. - Yes, this is the wrong place to post the question, but can't resist answering. Python doesn't have a built-in matrix inverse. Numpy does. Numpy's algorithm is written in a low-level language, and written by matrix-inversion experts, so it's about as fast as possible. Unless you are a matrix-inversion expert yourself, you cannot write one that is faster. - Yep, I meant I was using numpy's inverse function. Thanks for the help though! 
– tut_einstein Jul 13 '12 at 15:00 You should really be asking in the Maths or IT Stack Exchanges, however if you want a layman's view: • Python's matrix inversion is likely to be written in a high level language and compiled so it will be a lot faster than interpreted Python code • as far as I know there aren't any cunning short-cut algorithms specifically for anti-symmetric matrices So I would use the Python implementation rather than writing your own (in Python). -
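To make the suggestions in the answers concrete, here is a small NumPy sketch (my own, not from the thread). For many right-hand sides it is usually preferable to call `solve` (or factor once) rather than form the explicit inverse, and the inverse of a real antisymmetric matrix is itself antisymmetric, which the checks at the end confirm.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_antisymmetric(n):
    # A^T = -A; for even n such a matrix is generically invertible
    R = rng.standard_normal((n, n))
    return R - R.T

A = random_antisymmetric(10)

# Option 1: explicit inverse
A_inv = np.linalg.inv(A)

# Option 2 (usually preferred): solve A X = I, or A x = b for the b you actually need
X = np.linalg.solve(A, np.eye(10))

print(np.allclose(A_inv, X))             # same result
print(np.allclose(A_inv.T, -A_inv))      # the inverse is antisymmetric too
print(np.allclose(A @ A_inv, np.eye(10)))
```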
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 11, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9331310391426086, "perplexity_flag": "middle"}
http://quant.stackexchange.com/tags/put-call-parity/hot?filter=month
# Tag Info

## Hot answers tagged put-call-parity

3

### Call vs. Put Option

A simple intuitive answer why the OTM Call is more expensive than the OTM Put is because of the skewness of the log-normal distribution. Think about it, what is the probability that the stock price is above 110 at expiration and what is the probability it is below 90? This should answer your question. Written in probability terms: The median of the ...

3

### Call vs. Put Option

The put call parity is given as follows: $$c_t-p_t = S_t - \frac{X}{e^{r(T-t)}}$$ If you assume $r=0$, you get $$c_t-p_t = S_t - X$$ So, $c_t \neq p_t$. The rationale behind it is much more financial than mathematical. You have to look at the payoff on both sides of the equation, and you see that both portfolios will give the same payoff at time $T$ (the ...
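A quick numerical check of the parity relation quoted above (my own sketch; the Black–Scholes pricer and the sample parameters are illustrative assumptions, not taken from the answers):

```python
import math
from scipy.stats import norm

def black_scholes(S, X, r, sigma, tau):
    # European call and put prices under Black-Scholes, tau = T - t
    d1 = (math.log(S / X) + (r + 0.5 * sigma ** 2) * tau) / (sigma * math.sqrt(tau))
    d2 = d1 - sigma * math.sqrt(tau)
    call = S * norm.cdf(d1) - X * math.exp(-r * tau) * norm.cdf(d2)
    put = X * math.exp(-r * tau) * norm.cdf(-d2) - S * norm.cdf(-d1)
    return call, put

S, X, r, sigma, tau = 100.0, 110.0, 0.03, 0.2, 1.0
c, p = black_scholes(S, X, r, sigma, tau)

# Put-call parity: c - p = S - X * exp(-r * tau)
print(c - p, S - X * math.exp(-r * tau))   # the two numbers agree
```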
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 3, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9346665740013123, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/68301?sort=votes
## What makes a theorem ‘good’? [closed]

I have been pondering the issue of what makes a theorem noteworthy. There are many famous examples of 'outstanding' theorems, such as Roth's theorem in Diophantine approximation, Szemerédi's theorem, Vinogradov's theorem, etc. There are also examples of theorems that are only marginal improvements despite significant effort... such as Dyson's theorem (which only improved Siegel's theorem from $2\sqrt{d}$ to $\sqrt{2d}$, where $d$ is the degree of the algebraic number being approximated). In fact I was told that Dyson gave up mathematics because he worked so hard only to get what he thought was a very small improvement.

So what do you think is a 'good' theorem? Presumably when you set out to work out a theorem, you intend your theorem to be good. So how do you decide what is a good theorem to prove? How do you decide which results are important achievements that are noteworthy? Some of my criteria include:

1) The perceived strength of the hypothesis and the conclusion. For example, the following theorem is surely not good: Suppose every element of a set $A$ is a power of 2 larger than 1. Then every element of $A$ is even. In this case the hypothesis is extremely strong, and the conclusion extremely weak. However, here's an example of what I consider to be an outstanding theorem, based solely on the weakness of the hypothesis and the strength of the result: Suppose $\alpha$ is an algebraic number of degree $d$, and suppose that $\epsilon > 0$. Then there exist only finitely many integers $p,q$ with $\gcd(p,q) = 1$ such that $$\displaystyle \left | \alpha - \frac{p}{q} \right| < \frac{1}{q^{2 + \epsilon}}$$ Here all we are assuming is that $\alpha$ is algebraic, which is not a particularly strong assertion... but the conclusion is very strong. This is compared to Thue's theorem, Siegel's theorem, and Dyson's theorem where $2 + \epsilon$ is replaced with $d/2 + \epsilon$, $2\sqrt{d} + \epsilon$, and $\sqrt{2d} + \epsilon$ respectively.

2) How much it improved on the previous best result. For example, Szemerédi's theorem, which asserted that the assumption of a subset $A \subset \mathbb{N}$ having positive upper density is sufficient for $A$ to have arithmetic progressions of length $k$ for any $k \geq 3$, is a significant improvement over Roth's (other) theorem, which asserted the same for length three arithmetic progressions.

3) How much it spurred further research. Examples in this category could include the $h$-cobordism theorem, Faltings's theorem, and Roth's theorem.

So what qualities do you think make up a magnificent theorem and what are some examples? -
(arxiv.org/abs/math/0702396) and Thurston's On proof and progress in mathematics (arxiv.org/abs/math/9404236). – Qiaochu Yuan Jun 20 2011 at 17:22 I think this question is very interesting, when narrowed to the perspective of young-career mathematicians trying to decide what to work on. (Presumably Roth, Szemeredi, et al. did not have to worry about this!) – Frank Thorne Jun 20 2011 at 17:31 show 1 more comment ## 1 Answer Quadratic reciprocity and Riemann-Roch. Any theorem that still powers research a century later, in other words. But that is 'great', not just 'good'. -
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 20, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9548376202583313, "perplexity_flag": "middle"}
http://meta.stats.stackexchange.com/questions/1460/are-notation-questions-off-topic
# Are notation questions off-topic? I was wondering if the Stats.SE community would know of a more compact way of writing $$(X_1, X_2, \dots, X_n) \sim F$$ since I use it so often, but there's nothing inherently difficult about the question that would require statistical knowledge. For that reason, I'm unsure whether it is considered relevant here. Are questions regarding "good notation" fair game for Stats.SE? - Are you simply asking about $\LaTeX$ shortcuts? If so, this question may help: latex-macros-for-expectation-variance-and-covariance, also there is an SE site for $\TeX$. – gung Oct 28 '12 at 2:26 1 I meant in general, like writing. I want to know about a concise way of writing it. – Christopher Aden Oct 28 '12 at 2:47 – gung Oct 28 '12 at 2:53 Yes, I do mean writing fewer characters in the line, but still retaining the clarity. I'm hesitant to use $\mathbf{X_n} \sim^{iid} F$, as it might be interpreted as a multivariate distribution on the vector. – Christopher Aden Oct 28 '12 at 3:44 5 The expression in the question is already ambiguous: literally, it says the vector $(X_i)$ has the multivariate distribution $F$. If you mean that you have a set of iid variables, it would be more correct and less ambiguous to write something like $\left\{ X_i \right\}\stackrel{iid}{\sim}F$. But in general, it's clearer to state what you mean (in English) the first time: then more readers are likely to understand your words correctly. – whuber♦ Oct 28 '12 at 15:57 ## 2 Answers I think they should be on-topic here. Notation can greatly assist clarity (or, in some cases, retard it). One good bit of advice I got on reading about models is to follow the subscripts. But that, of course, necessitates that the subscripts are correctly written in the first place. As an aside, the professor who taught ANOVA to me (and many others) was not aided in his exposition by the fact that, on the board, his i's and j's (and sometimes his k's) all looked identical. - They seem to be on topic, as attested by 17 questions already bearing the notation tag. - 1 To clarify, you mean that they seem to be on-topic? – Russell S. Pierce Jan 20 at 15:59
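Along the lines suggested in the comments, a small LaTeX fragment that produces the compact iid notation; this is my own sketch, and the macro name is arbitrary.

```latex
\usepackage{amsmath} % for \overset and \text

% A reusable macro: "independent and identically distributed as"
\newcommand{\iidsim}{\mathrel{\overset{\text{iid}}{\sim}}}

% Usage in the body:
%   $X_1, \dots, X_n \iidsim F$
% which typesets "iid" over the tilde and avoids writing out the full vector.
```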
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 6, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9558614492416382, "perplexity_flag": "middle"}
http://www.nag.com/numeric/cl/nagdoc_cl23/html/G05/g05pjc.html
# NAG Library Function Document: nag_rand_varma (g05pjc)

## 1  Purpose

nag_rand_varma (g05pjc) generates a realization of a multivariate time series from a vector autoregressive moving average (VARMA) model. The realization may be continued or a new realization generated at subsequent calls to nag_rand_varma (g05pjc).

## 2  Specification

#include <nag.h>
#include <nagg05.h>

void nag_rand_varma (Nag_OrderType order, Nag_ModeRNG mode, Integer n, Integer k, const double xmean[], Integer p, const double phi[], Integer q, const double theta[], const double var[], Integer pdv, double r[], Integer lr, Integer state[], double x[], Integer pdx, NagError *fail)

## 3  Description

Let the vector $X_t = (x_{1t}, x_{2t}, \dots, x_{kt})^{\mathrm{T}}$ denote a $k$-dimensional time series which is assumed to follow a vector autoregressive moving average (VARMA) model of the form:

$$X_t - \mu = \phi_1 (X_{t-1} - \mu) + \phi_2 (X_{t-2} - \mu) + \cdots + \phi_p (X_{t-p} - \mu) + \epsilon_t - \theta_1 \epsilon_{t-1} - \theta_2 \epsilon_{t-2} - \cdots - \theta_q \epsilon_{t-q} \qquad (1)$$

where $\epsilon_t = (\epsilon_{1t}, \epsilon_{2t}, \dots, \epsilon_{kt})^{\mathrm{T}}$ is a vector of $k$ residual series assumed to be Normally distributed with zero mean and covariance matrix $\Sigma$. The components of $\epsilon_t$ are assumed to be uncorrelated at non-simultaneous lags. The $\phi_i$'s and $\theta_j$'s are $k$ by $k$ matrices of parameters. $\{\phi_i\}$, for $i = 1, 2, \dots, p$, are called the autoregressive (AR) parameter matrices, and $\{\theta_j\}$, for $j = 1, 2, \dots, q$, the moving average (MA) parameter matrices. The parameters in the model are thus the $p$ $k$ by $k$ $\phi$-matrices, the $q$ $k$ by $k$ $\theta$-matrices, the mean vector $\mu$ and the residual error covariance matrix $\Sigma$. Let

$$A(\phi) = \begin{pmatrix} \phi_1 & I & 0 & \cdots & \cdots & 0 \\ \phi_2 & 0 & I & 0 & \cdots & 0 \\ \vdots & & & \ddots & & \vdots \\ \phi_{p-1} & 0 & \cdots & \cdots & 0 & I \\ \phi_p & 0 & \cdots & \cdots & 0 & 0 \end{pmatrix}_{pk \times pk} \quad\text{and}\quad B(\theta) = \begin{pmatrix} \theta_1 & I & 0 & \cdots & \cdots & 0 \\ \theta_2 & 0 & I & 0 & \cdots & 0 \\ \vdots & & & \ddots & & \vdots \\ \theta_{q-1} & 0 & \cdots & \cdots & 0 & I \\ \theta_q & 0 & \cdots & \cdots & 0 & 0 \end{pmatrix}_{qk \times qk}$$

where $I$ denotes the $k$ by $k$ identity matrix. The model (1) must be both stationary and invertible. The model is said to be stationary if the eigenvalues of $A(\phi)$ lie inside the unit circle and invertible if the eigenvalues of $B(\theta)$ lie inside the unit circle.

For $k \ge 6$ the VARMA model (1) is recast into state space form and a realization of the state vector at time zero computed. For all other cases the function computes a realization of the pre-observed vectors $X_0, X_{-1}, \dots, X_{1-p}$, $\epsilon_0, \epsilon_{-1}, \dots, \epsilon_{1-q}$, from (1), see Shea (1988). This realization is then used to generate a sequence of successive time series observations. Note that special action is taken for pure MA models, that is for $p = 0$.

At your request a new realization of the time series may be generated more efficiently using the information in a reference vector created during a previous call to nag_rand_varma (g05pjc). See the description of the argument mode in Section 5 for details.

The function returns a realization of $X_1, X_2, \dots, X_n$. On a successful exit, the recent history is updated and saved in the array r so that nag_rand_varma (g05pjc) may be called again to generate a realization of $X_{n+1}, X_{n+2}, \dots$, etc. See the description of the argument mode in Section 5 for details.

Further computational details are given in Shea (1988).
Note, however, that nag_rand_varma (g05pjc) uses a spectral decomposition rather than a Cholesky factorization to generate the multivariate Normals. Although this method involves more multiplications than the Cholesky factorization method and is thus slightly slower it is more stable when faced with ill-conditioned covariance matrices. A method of assigning the AR and MA coefficient matrices so that the stationarity and invertibility conditions are satisfied is described in Barone (1987). One of the initialization functions nag_rand_init_repeatable (g05kfc) (for a repeatable sequence if computed sequentially) or nag_rand_init_nonrepeatable (g05kgc) (for a non-repeatable sequence) must be called prior to the first call to nag_rand_varma (g05pjc). ## 4  References Barone P (1987) A method for generating independent realisations of a multivariate normal stationary and invertible ARMA$\left(p,q\right)$ process J. Time Ser. Anal. 8 125–130 Shea B L (1988) A note on the generation of independent realisations of a vector autoregressive moving average process J. Time Ser. Anal. 9 403–410 ## 5  Arguments 1:     order – Nag_OrderTypeInput On entry: the order argument specifies the two-dimensional storage scheme being used, i.e., row-major ordering or column-major ordering. C language defined storage is specified by ${\mathbf{order}}=\mathrm{Nag_RowMajor}$. See Section 3.2.1.3 in the Essential Introduction for a more detailed explanation of the use of this argument. Constraint: ${\mathbf{order}}=\mathrm{Nag_RowMajor}$ or Nag_ColMajor. 2:     mode – Nag_ModeRNGInput On entry: a code for selecting the operation to be performed by the function. ${\mathbf{mode}}=\mathrm{Nag_InitializeReference}$ Set up reference vector and compute a realization of the recent history. ${\mathbf{mode}}=\mathrm{Nag_GenerateFromReference}$ Generate terms in the time series using reference vector set up in a prior call to nag_rand_varma (g05pjc). ${\mathbf{mode}}=\mathrm{Nag_InitializeAndGenerate}$ Combine the operations of ${\mathbf{mode}}=\mathrm{Nag_InitializeReference}$ and $\mathrm{Nag_GenerateFromReference}$. ${\mathbf{mode}}=\mathrm{Nag_ReGenerateFromReference}$ A new realization of the recent history is computed using information stored in the reference vector, and the following sequence of time series values are generated. If ${\mathbf{mode}}=\mathrm{Nag_GenerateFromReference}$ or $\mathrm{Nag_ReGenerateFromReference}$, then you must ensure that the reference vector r and the values of k, p, q, xmean, phi, theta, var and pdv have not been changed between calls to nag_rand_varma (g05pjc). Constraint: ${\mathbf{mode}}=\mathrm{Nag_InitializeReference}$, $\mathrm{Nag_GenerateFromReference}$, $\mathrm{Nag_InitializeAndGenerate}$ or $\mathrm{Nag_ReGenerateFromReference}$. 3:     n – IntegerInput On entry: $n$, the number of observations to be generated. Constraint: ${\mathbf{n}}\ge 0$. 4:     k – IntegerInput On entry: $k$, the dimension of the multivariate time series. Constraint: ${\mathbf{k}}\ge 1$. 5:     xmean[k] – const doubleInput On entry: $\mu $, the vector of means of the multivariate time series. 6:     p – IntegerInput On entry: $p$, the number of autoregressive parameter matrices. Constraint: ${\mathbf{p}}\ge 0$. 7:     phi[${\mathbf{k}}×{\mathbf{k}}×{\mathbf{p}}$] – const doubleInput On entry: must contain the elements of the ${\mathbf{p}}×{\mathbf{k}}×{\mathbf{k}}$ autoregressive parameter matrices of the model, ${\varphi }_{1},{\varphi }_{2},\dots ,{\varphi }_{p}$. 
The $\left(i,j\right)$th element of ${\varphi }_{\mathit{l}}$ is stored in ${\mathbf{phi}}\left[\left(\mathit{l}-1\right)×k×k+\left(j-1\right)×k+i-1\right]$, for $\mathit{l}=1,2,\dots ,p$, $i=1,2,\dots ,k$ and $j=1,2,\dots ,k$. Constraint: the elements of phi must satisfy the stationarity condition. 8:     q – IntegerInput On entry: $q$, the number of moving average parameter matrices. Constraint: ${\mathbf{q}}\ge 0$. 9:     theta[${\mathbf{k}}×{\mathbf{k}}×{\mathbf{q}}$] – const doubleInput On entry: must contain the elements of the ${\mathbf{q}}×{\mathbf{k}}×{\mathbf{k}}$ moving average parameter matrices of the model, ${\theta }_{1},{\theta }_{2},\dots ,{\theta }_{q}$. The $\left(i,j\right)$th element of ${\theta }_{\mathit{l}}$ is stored in ${\mathbf{theta}}\left[\left(\mathit{l}-1\right)×k×k+\left(\mathit{j}-1\right)×k+\mathit{i}-1\right]$, for $\mathit{l}=1,2,\dots ,q$, $\mathit{i}=1,2,\dots ,k$ and $\mathit{j}=1,2,\dots ,k$. Constraint: the elements of theta must be within the invertibility region. 10:   var[$\mathit{dim}$] – const doubleInput Note: the dimension, dim, of the array var must be at least ${\mathbf{pdv}}×{\mathbf{k}}$. Where ${\mathbf{VAR}}\left(i,j\right)$ appears in this document, it refers to the array element • ${\mathbf{var}}\left[\left(j-1\right)×{\mathbf{pdv}}+i-1\right]$ when ${\mathbf{order}}=\mathrm{Nag_ColMajor}$; • ${\mathbf{var}}\left[\left(i-1\right)×{\mathbf{pdv}}+j-1\right]$ when ${\mathbf{order}}=\mathrm{Nag_RowMajor}$. On entry: ${\mathbf{VAR}}\left(\mathit{i},\mathit{j}\right)$ must contain the ($\mathit{i},\mathit{j}$)th element of $\Sigma $, for $\mathit{i}=1,2,\dots ,{\mathbf{k}}$ and $\mathit{j}=1,2,\dots ,{\mathbf{k}}$. Only the lower triangle is required. Constraint: the elements of var must be such that $\Sigma $ is positive semidefinite. 11:   pdv – IntegerInput On entry: the stride separating row or column elements (depending on the value of order) in the array var. Constraint: ${\mathbf{pdv}}\ge {\mathbf{k}}$. 12:   r[lr] – doubleCommunication Array On entry: if ${\mathbf{mode}}=\mathrm{Nag_GenerateFromReference}$ or $\mathrm{Nag_ReGenerateFromReference}$, the array r as output from the previous call to nag_rand_varma (g05pjc) must be input without any change. If ${\mathbf{mode}}=\mathrm{Nag_InitializeReference}$ or $\mathrm{Nag_InitializeAndGenerate}$, the contents of r need not be set. On exit: information required for any subsequent calls to the function with ${\mathbf{mode}}=\mathrm{Nag_GenerateFromReference}$ or $\mathrm{Nag_ReGenerateFromReference}$. See Section 8. 13:   lr – IntegerInput On entry: the dimension of the array r. Constraints: • if ${\mathbf{k}}\ge 6$, ${\mathbf{lr}}\ge \left(5{\mathit{r}}^{2}+1\right)×{{\mathbf{k}}}^{2}+\left(4\mathit{r}+3\right)×{\mathbf{k}}+4$; • if ${\mathbf{k}}<6$, ${\mathbf{lr}}\ge \left({\left({\mathbf{p}}+{\mathbf{q}}\right)}^{2}+1\right)×{{\mathbf{k}}}^{2}+\phantom{\rule{0ex}{0ex}}\left(4×\left({\mathbf{p}}+{\mathbf{q}}\right)+3\right)×{\mathbf{k}}+\mathrm{max}\phantom{\rule{0.125em}{0ex}}\left\{{\mathbf{k}}\mathit{r}\left({\mathbf{k}}\mathit{r}+2\right),{{\mathbf{k}}}^{2}{\left({\mathbf{p}}+{\mathbf{q}}\right)}^{2}+\mathit{l}\left(\mathit{l}+3\right)+{{\mathbf{k}}}^{2}\left({\mathbf{q}}+1\right)\right\}+4$. 
Where $\mathit{r}=\mathrm{max}\phantom{\rule{0.125em}{0ex}}\left({\mathbf{p}},{\mathbf{q}}\right)$ and if ${\mathbf{p}}=0$, $\mathit{l}={\mathbf{k}}\left({\mathbf{k}}+1\right)/2$, or if ${\mathbf{p}}\ge 1$, $\mathit{l}={\mathbf{k}}\left({\mathbf{k}}+1\right)/2+\left({\mathbf{p}}-1\right){{\mathbf{k}}}^{2}$. See Section 8 for some examples of the required size of the array r. 14:   state[$\mathit{dim}$] – IntegerCommunication Array Note: the actual argument supplied must be the array state supplied to the initialization functions nag_rand_init_repeatable (g05kfc) or nag_rand_init_nonrepeatable (g05kgc). On entry: contains information on the selected base generator and its current state. On exit: contains updated information on the state of the generator. 15:   x[$\mathit{dim}$] – doubleOutput Note: the dimension, dim, of the array x must be at least • $\mathrm{max}\phantom{\rule{0.125em}{0ex}}\left(1,{\mathbf{pdx}}×{\mathbf{n}}\right)$ when ${\mathbf{order}}=\mathrm{Nag_ColMajor}$; • $\mathrm{max}\phantom{\rule{0.125em}{0ex}}\left(1,{\mathbf{k}}×{\mathbf{pdx}}\right)$ when ${\mathbf{order}}=\mathrm{Nag_RowMajor}$. Where ${\mathbf{X}}\left(i,t\right)$ appears in this document, it refers to the array element • ${\mathbf{x}}\left[\left(t-1\right)×{\mathbf{pdx}}+i-1\right]$ when ${\mathbf{order}}=\mathrm{Nag_ColMajor}$; • ${\mathbf{x}}\left[\left(i-1\right)×{\mathbf{pdx}}+t-1\right]$ when ${\mathbf{order}}=\mathrm{Nag_RowMajor}$. On exit: ${\mathbf{X}}\left(\mathit{i},\mathit{t}\right)$ will contain a realization of the $\mathit{i}$th component of ${X}_{\mathit{t}}$, for $\mathit{i}=1,2,\dots ,k$ and $\mathit{t}=1,2,\dots ,n$. 16:   pdx – IntegerInput On entry: the stride separating row or column elements (depending on the value of order) in the array x. Constraints: • if ${\mathbf{order}}=\mathrm{Nag_ColMajor}$, ${\mathbf{pdx}}\ge {\mathbf{k}}$; • if ${\mathbf{order}}=\mathrm{Nag_RowMajor}$, ${\mathbf{pdx}}\ge {\mathbf{n}}$. 17:   fail – NagError *Input/Output The NAG error argument (see Section 3.6 in the Essential Introduction). ## 6  Error Indicators and Warnings NE_ALLOC_FAIL Dynamic memory allocation failed. NE_BAD_PARAM On entry, argument $〈\mathit{\text{value}}〉$ had an illegal value. NE_CLOSE_TO_STATIONARITY The reference vector cannot be computed because the AR parameters are too close to the boundary of the stationarity region. NE_INT On entry, ${\mathbf{k}}=〈\mathit{\text{value}}〉$. Constraint: ${\mathbf{k}}\ge 1$. On entry, lr is not large enough, ${\mathbf{lr}}=〈\mathit{\text{value}}〉$: minimum length required $\text{}=〈\mathit{\text{value}}〉$. On entry, ${\mathbf{n}}=〈\mathit{\text{value}}〉$. Constraint: ${\mathbf{n}}\ge 0$. On entry, ${\mathbf{p}}=〈\mathit{\text{value}}〉$. Constraint: ${\mathbf{p}}\ge 0$. On entry, ${\mathbf{pdv}}=〈\mathit{\text{value}}〉$. Constraint: ${\mathbf{pdv}}>0$. On entry, ${\mathbf{pdx}}=〈\mathit{\text{value}}〉$. Constraint: ${\mathbf{pdx}}>0$. On entry, ${\mathbf{q}}=〈\mathit{\text{value}}〉$. Constraint: ${\mathbf{q}}\ge 0$. NE_INT_2 On entry, ${\mathbf{pdv}}=〈\mathit{\text{value}}〉$ and ${\mathbf{k}}=〈\mathit{\text{value}}〉$. Constraint: ${\mathbf{pdv}}\ge {\mathbf{k}}$. On entry, ${\mathbf{pdx}}=〈\mathit{\text{value}}〉$ and ${\mathbf{k}}=〈\mathit{\text{value}}〉$. Constraint: ${\mathbf{pdx}}\ge {\mathbf{k}}$. On entry, ${\mathbf{pdx}}=〈\mathit{\text{value}}〉$ and ${\mathbf{n}}=〈\mathit{\text{value}}〉$. Constraint: ${\mathbf{pdx}}\ge {\mathbf{n}}$. NE_INTERNAL_ERROR An internal error has occurred in this function. 
Check the function call and any array sizes. If the call is correct then please contact NAG for assistance.

NE_INVALID_STATE On entry, state vector has been corrupted or not initialized.

NE_INVERTIBILITY On entry, the moving average parameter matrices are such that the model is non-invertible.

NE_POS_DEF On entry, the covariance matrix var is not positive semidefinite to machine precision.

NE_PREV_CALL k is not the same as when r was set up in a previous call. Previous value of ${\mathbf{k}} = ⟨\mathit{value}⟩$ and ${\mathbf{k}} = ⟨\mathit{value}⟩$.

NE_STATIONARY_AR On entry, the AR parameters are outside the stationarity region.

NE_TOO_MANY_ITER An excessive number of iterations were required by the NAG function used to evaluate the eigenvalues of the covariance matrix. An excessive number of iterations were required by the NAG function used to evaluate the eigenvalues of the matrices used to test for stationarity or invertibility. An excessive number of iterations were required by the NAG function used to evaluate the eigenvalues stored in the reference vector.

## 7  Accuracy

The accuracy is limited by the matrix computations performed, and this is dependent on the condition of the argument and covariance matrices.

## 8  Further Comments

Note that, in reference to NE_INVERTIBILITY, nag_rand_varma (g05pjc) will permit moving average parameters on the boundary of the invertibility region.

The elements of r contain amongst other information details of the spectral decompositions which are used to generate future multivariate Normals. Note that these eigenvectors may not be unique on different machines. For example the eigenvectors corresponding to multiple eigenvalues may be permuted. Although an effort is made to ensure that the eigenvectors have the same sign on all machines, differences in the signs may theoretically still occur.

The following table gives some examples of the required size of the array r, specified by the argument lr, for various values of $p$ and $q$. Each cell lists the required size for $k = 1$, $k = 2$ and $k = 3$ respectively.

| $p$ | $q = 0$ | $q = 1$ | $q = 2$ | $q = 3$ |
|-----|---------|---------|---------|---------|
| 0 | 13 / 36 / 85 | 20 / 56 / 124 | 31 / 92 / 199 | 46 / 144 / 310 |
| 1 | 19 / 52 / 115 | 30 / 88 / 190 | 45 / 140 / 301 | 64 / 208 / 448 |
| 2 | 35 / 136 / 397 | 50 / 188 / 508 | 69 / 256 / 655 | 92 / 340 / 838 |
| 3 | 57 / 268 / 877 | 76 / 336 / 1024 | 99 / 420 / 1207 | 126 / 520 / 1426 |

Note that nag_tsa_arma_roots (g13dxc) may be used to check whether a VARMA model is stationary and invertible.

The time taken depends on the values of $p$, $q$ and especially $n$ and $k$.

## 9  Example

This program generates two realizations, each of length $48$, from the bivariate AR(1) model

$$X_t - \mu = \phi_1 (X_{t-1} - \mu) + \epsilon_t$$

with

$$\phi_1 = \begin{pmatrix} 0.80 & 0.07 \\ 0.00 & 0.58 \end{pmatrix}, \qquad \mu = \begin{pmatrix} 5.00 \\ 9.00 \end{pmatrix}, \qquad \Sigma = \begin{pmatrix} 2.97 & 0.64 \\ 0.64 & 5.38 \end{pmatrix}.$$

The pseudorandom number generator is initialized by a call to nag_rand_init_repeatable (g05kfc). Then, in the first call to nag_rand_varma (g05pjc), ${\mathbf{mode}} = \mathrm{Nag\_InitializeAndGenerate}$ in order to set up the reference vector before generating the first realization. In the subsequent call ${\mathbf{mode}} = \mathrm{Nag\_ReGenerateFromReference}$ and a new recent history is generated and used to generate the second realization.
### 9.1  Program Text

Program Text (g05pjce.c)

### 9.2  Program Data

Program Data (g05pjce.d)

### 9.3  Program Results

Program Results (g05pjce.r)
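The example program itself ships with the library as g05pjce.c above. As a rough, library-independent sketch of what that example computes, the following C program simulates the same bivariate AR(1) recursion (1) directly. It is only an illustration of the model, not of the g05pjc interface: it starts the recursion at the mean rather than drawing a proper realization of the recent history, it uses a Cholesky factor of $\Sigma$ where g05pjc uses a spectral decomposition (Section 3), and it relies on a crude rand()-based Gaussian generator.

```c
/* Minimal, NAG-independent sketch of the bivariate AR(1) model from Section 9:
 *   X_t - mu = phi1 (X_{t-1} - mu) + eps_t,   eps_t ~ N(0, Sigma).
 * Illustration only; see g05pjce.c for the real library usage. */
#include <stdio.h>
#include <stdlib.h>
#include <math.h>

/* One standard Normal deviate via Box-Muller (crude: uses rand()). */
static double std_normal(void)
{
    const double pi = 3.14159265358979323846;
    double u1 = (rand() + 1.0) / ((double) RAND_MAX + 2.0);
    double u2 = (rand() + 1.0) / ((double) RAND_MAX + 2.0);
    return sqrt(-2.0 * log(u1)) * cos(2.0 * pi * u2);
}

int main(void)
{
    const int n = 48;                                   /* series length */
    const double phi1[2][2] = { {0.80, 0.07},
                                {0.00, 0.58} };         /* AR(1) matrix  */
    const double mu[2] = { 5.00, 9.00 };                /* mean vector   */

    /* Sigma = [2.97 0.64; 0.64 5.38]; lower Cholesky factor L, Sigma = L L^T. */
    const double l11 = sqrt(2.97);
    const double l21 = 0.64 / l11;
    const double l22 = sqrt(5.38 - l21 * l21);

    double xprev[2] = { mu[0], mu[1] };  /* crude start at the mean; g05pjc
                                            instead draws a realization of
                                            the recent history               */
    srand(1);
    for (int t = 1; t <= n; ++t) {
        double z0 = std_normal(), z1 = std_normal();
        double eps[2] = { l11 * z0, l21 * z0 + l22 * z1 };  /* eps ~ N(0, Sigma) */
        double x[2];
        for (int i = 0; i < 2; ++i)
            x[i] = mu[i]
                 + phi1[i][0] * (xprev[0] - mu[0])
                 + phi1[i][1] * (xprev[1] - mu[1])
                 + eps[i];
        printf("%3d  %10.4f  %10.4f\n", t, x[0], x[1]);
        xprev[0] = x[0];
        xprev[1] = x[1];
    }
    return 0;
}
```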
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 170, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.6743291020393372, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/12120/from-geometrical-figures-to-function
# From geometrical figures to functions

There's one basic math thing that keeps bugging me: the fact that a really simple 2D geometrical figure (like a circle) might not be a function. I know what the definition of a function is. A circle is not a function (of one variable) because it would associate two values of the codomain to a single value of the domain. But this doesn't help my intuition. It sounds terribly weird that a given curve (say a sinusoidal) is a function only unless you rotate it by 45 or more degrees...

Is there any simple way (a concept similar to that of how most people imagine a function: a curve on a graph) to represent 2D geometrical figures like a circle (or a rotated sinusoidal, or whatever)? The only one I can think of is using a function of two or more variables, but this sounds pretty dirty to me: why should I use a function in three dimensions just to see its shadow in two dimensions? Besides, if I think of the function as a real object (in our real, 3D space), I cannot help thinking that it's not a 2D circle, it's a weird 3D object which can be seen as a circle when rotated in a particular way (just like the Penrose stairs look possible when rotated in a special way).

In polar coordinates, the equation for a circle is a function. When the circle is centered at the origin, the function is constant! – Blue Nov 27 '10 at 21:13

Yes, I know, but in polar coordinates even a simple sinusoidal or an exponential are not functions. I'm looking for a common way to represent all the 2D geometrical figures (or at least those that a non-mathematician's mind imagines as simple figures). – peoro Nov 27 '10 at 21:19

What about parametric equations? There is only one independent variable. – Raskolnikov Nov 27 '10 at 21:42

Star-convex figures (with respect to the origin) are functions in polar coordinates. – Dario Nov 27 '10 at 21:48

"It sounds terribly weird that a given curve (say a sinusoidal) is a function only unless you rotate it by 45 or more degrees..." You are confounding the concept of a function with the concept of the graph of a function. – yasmar Nov 27 '10 at 21:52

## 1 Answer

Some possibilities: First of all, you might want to switch to polar coordinates, in which points are defined through angle $\varphi$ and radius $r$ rather than $x$/$y$ coordinates. For example, in a polar coordinate system, a circle (let's take the unit circle) $K$ actually is a function (now of the form $r(\varphi)$ rather than $y(x)$). As the radius is constant, we end up with $$K: r(\varphi) = 1$$

But most figures still aren't functions in polar coordinates, so we might have to take a more general approach: a curve. A curve is a function that produces coordinates rather than a single value from some parameter, i.e., $\mathbb{R} \to \mathbb{R}^2$ in our case. Let $K$ be our unit circle again, now parameterized by some angle $\varphi$: $$K(\varphi) = (\cos \varphi, \sin \varphi)$$

At least, this should work for most figures. The most general form though is simply an equation that the coordinates have to satisfy: $$K: x^2+y^2 = 1$$
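As a minimal sketch of the parametric form, assuming nothing beyond standard C99: sampling $K(\varphi) = (\cos\varphi, \sin\varphi)$ produces points of the unit circle, one point per parameter value, even though $y$ is not a function of $x$ on the full circle.

```c
#include <stdio.h>
#include <math.h>

int main(void)
{
    const double pi = 3.14159265358979323846;
    const int nsamples = 8;

    /* Sample the curve K(phi) = (cos phi, sin phi), 0 <= phi < 2*pi.
       Each parameter value gives exactly one point (x, y). */
    for (int i = 0; i < nsamples; ++i) {
        double phi = 2.0 * pi * i / nsamples;
        double x = cos(phi), y = sin(phi);
        printf("phi = %6.4f   (x, y) = (%7.4f, %7.4f)   x^2 + y^2 = %.4f\n",
               phi, x, y, x * x + y * y);
    }
    return 0;
}
```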
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 10, "mathjax_display_tex": 3, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8962647914886475, "perplexity_flag": "middle"}
http://mathhelpforum.com/discrete-math/77537-praxis-help-please.html
# Thread: 1. ## Praxis help please If xRy = x-y = 7t Is the equation: reflexive symmetric transitive 2. Originally Posted by tigerpaw If xRy = x-y = 7t Is the equation: reflexive symmetric transitive You need to show some work. At least explain what you don't understand. BTW: R appears to be a relation. Is that correct? What is t? 3. thank you for your help. I read xRy as x is related to y which normally indicates an ordered pair. I am confused from that point, the only remaining question then reads then x – y = 7t and if this equation is reflexive and/or symmetric and/or transitive thanks 4. Well first I find it hard to think a Praxis question is so poorly written. This is not an equation but rather set of ordered pairs. Each pair $(x,y)$ is in the collection is there if and only if $x – y = 7t$ for some $t$. Reflexive: Is $7 \cdot 0 = x - x$ true for every $x$? Symmetric: If $x - y = 7t$ does it follow that $y - x = 7( - t)$? Transitive: $x - y = 7t\;\& \,y - z = 7s\, \Rightarrow \,x - z = 7(t + s)$. May I ask, what level of the Praxis are you studying for? Is it the subject matter area test? 5. Absolutely, it is the Praxis II High school math content knowledge. A great number of the questions are extremely vague with little information. I have a few more questions that I have seen if you are interested. thanks 6. Originally Posted by tigerpaw Absolutely, it is the Praxis II High school math content knowledge. A great number of the questions are extremely vague with little information. I have a few more questions that I have seen if you are interested. I have both written for and edited for these test. Do you realize that the area exam of the Praxis assumes an equivalence of an undergraduate major in the subject matter? Many U.S. states have adopted that standard for secondary certification. In mathematics that becomes a great problem! Here in the states it is relatively easy to move among several subject disciplines. BUT, that is not our experience with mathematics. Unfortunately we have found that a complete undergraduate course in mathematics was a prerequisite for passing the Praxis subject area test in mathematics. You need to know that finding. 7. thank you, that I have discovered and I have sought the additional courses. I teach in a very low income rural county school with an enrollment close to 450 students. I agree with your statement regarding the the undergraduate experience and accompanying technical knowledge needed to pass the Praxis. I teach basic algebra and geometry to students who fall thru the cracks and fight to keep them in school. As imagined, I have struggled with the advanced portions of the exam. Also, many of the views seen in the discrete portion of the subject matter have been difficult. regardless, thank you for your help.
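For concreteness, the three properties Plato outlines can also be confirmed by a brute-force check of the relation $x\,R\,y$ if and only if $x - y = 7t$ for some integer $t$ (that is, $x \equiv y \pmod 7$) over a small range of integers. This is only a sanity-check sketch, not part of the exam material.

```c
/* Brute-force check that "x R y iff x - y is a multiple of 7" is
 * reflexive, symmetric and transitive on a small test range. */
#include <stdio.h>

static int related(int x, int y)
{
    return (x - y) % 7 == 0;   /* x - y = 7t for some integer t */
}

int main(void)
{
    const int lo = -20, hi = 20;
    int reflexive = 1, symmetric = 1, transitive = 1;

    for (int x = lo; x <= hi; ++x) {
        if (!related(x, x)) reflexive = 0;
        for (int y = lo; y <= hi; ++y) {
            if (related(x, y) && !related(y, x)) symmetric = 0;
            for (int z = lo; z <= hi; ++z)
                if (related(x, y) && related(y, z) && !related(x, z))
                    transitive = 0;
        }
    }
    printf("reflexive:  %s\n", reflexive  ? "yes" : "no");
    printf("symmetric:  %s\n", symmetric  ? "yes" : "no");
    printf("transitive: %s\n", transitive ? "yes" : "no");
    return 0;
}
```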
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 8, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.951149582862854, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/277154/about-the-basis-of-a-module/277169
# About the basis of a module

Let $n\in \mathbb Z, n\ne 0$. Prove that the $\mathbb Z/n\mathbb Z$-module $\mathbb Z/n\mathbb Z$ has a basis, but that the $\mathbb Z$-module $\mathbb Z/n\mathbb Z$ has no basis. I hope someone can help me with this. Thanks.

## 1 Answer

If you view $M = \mathbb Z / n \mathbb Z$ as a module over $R = \mathbb Z / n \mathbb Z$, the element $1$ will form a basis: let $k \in \mathbb Z / n \mathbb Z$. Then $k = k \cdot 1$. On the other hand, if the ring acting on $M$ is $R = \mathbb Z$, then any subset of $\mathbb Z / n \mathbb Z$ will be linearly dependent; just multiply by $n$: for example, $1 \cdot n = 0$.

@user52523 From this answer you can see that basis elements for $\Bbb Z$ modules are in conflict with elements of finite order (torsion elements). – rschwieb Jan 14 at 14:35
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 16, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8541796803474426, "perplexity_flag": "head"}
http://mathoverflow.net/questions/50173?sort=newest
## How to prove Con(PA) in ZFC? [closed]

PA doesn't prove Con(PA), but ZFC does. That means the extra axiom of infinity is of paramount importance in the proof. I have not seen such a proof, only heard of it, and I think it would be interesting.

ZFC proves that the natural numbers (which exist by the axiom of infinity) are a model of PA, and therefore by soundness that PA is consistent? – Gabriel Ebner Dec 22 2010 at 16:52

Is this a research-level question? – Andrej Bauer Dec 22 2010 at 18:23

Vote to close since this is too elementary a question for MO. – Timothy Chow Dec 22 2010 at 18:36

Wikipedia's article en.wikipedia.org/wiki/Axiom_of_infinity has a good explanation of how ZF proves that there is a set $\omega$ and an operation $S$ obeying the Peano axioms. In other words, ZF proves that there is a model of PA. (continued...) – David Speyer Dec 23 2010 at 13:45

This no doubt reveals my ignorance of set theory, but it seems to me to be a little tricky to finish from here. I would like a theorem of ZF saying "For any theory T, if T has a model then Con(T)". It's not clear to me that this claim can be expressed in ZF! Every time I try, I wind up wanting a truth predicate planetmath.org/encyclopedia/… . (continued) – David Speyer Dec 23 2010 at 13:53

## 2 Answers

You can also prove the consistency of PA with second-order logic. The key thing is that you need a higher-order induction hypothesis. In first-order logic + PA, the induction hypotheses are limited to first-order expressions. The strength of a logic is often determined by what you allow in the induction hypothesis.

Within ZFC you can formalize Tarski's definition of truth, then prove that the axioms of PA are all true and that the rules of inference preserve truth. This gives a formal proof of Con(PA). This allows you to prove not just the consistency of PA, but the consistency of PA + Con(PA), or PA + Con(PA) + Con(PA+Con(PA)), etc. Nothing close to the full strength of ZFC is needed for any of this (though of course you need something beyond PA).

@Steven: Small typo in the first line: The second ZFC should be PA. – Andres Caicedo Dec 22 2010 at 17:04

Andres: Thanks. I went to edit this, but it looks like Terry Tao did it for me. – Steven Landsburg Dec 22 2010 at 20:01
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9316417574882507, "perplexity_flag": "middle"}
http://mathhelpforum.com/calculus/166386-sketching-graph-r3-print.html
# Sketching graph in R3

• December 16th 2010, 12:30 AM SyNtHeSiS

Let $f(x,y) = x^{1/3}y^{1/3}$, and let C be the curve of intersection of $z = f(x,y)$ with the plane $y = x$. Sketch the graph of the curve C.

I know how to graph y = x in R3, but I am not sure how to graph z = f(x,y). Would you use the same steps as you would when sketching a graph in R2? (e.g. first, second derivatives, asymptotes etc.)

• December 16th 2010, 01:08 AM FernandoRevilla

The intersection $S$ is: $S=\{(x,x,\sqrt[3]{x^2}):\;x\in \mathbb{R}\}\subset \mathbb{R}^3$. You can determine the position of $z=\sqrt[3]{x^2}$ by drawing the curve, as usual.

Fernando Revilla

• December 16th 2010, 04:08 AM SyNtHeSiS

Thanks. I tried checking the graph on Wolfram Alpha and I don't get why, when x < 0, the graph (real part) is negative. I mean, if you let x = -2 in z = x^{2/3} you get a positive value.
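The sign seen for x < 0 most likely comes down to which cube root is used. Wolfram Alpha and similar systems typically take the principal complex root, $x^{2/3} = \exp\left(\tfrac{2}{3}\operatorname{Log} x\right)$, whose real part is negative for negative $x$, whereas the real-root convention $\left(\sqrt[3]{x}\right)^2$ is always non-negative. A minimal C99 check of the two conventions (and of the naive pow call, which is a domain error for a negative base):

```c
#include <stdio.h>
#include <math.h>
#include <complex.h>

int main(void)
{
    double x = -2.0;

    /* Real-root convention: z = (cbrt(x))^2 >= 0. */
    double real_root = cbrt(x) * cbrt(x);

    /* Principal complex root: z = exp((2/3) * Log(x)). */
    double complex principal = cpow(x, 2.0 / 3.0);

    /* pow() with a negative base and non-integer exponent is a domain error. */
    double naive = pow(x, 2.0 / 3.0);

    printf("cbrt(x)^2       = %g\n", real_root);                /* ~  1.5874            */
    printf("principal root  = %g + %gi\n",
           creal(principal), cimag(principal));                 /* ~ -0.7937 + 1.3747i  */
    printf("pow(x, 2.0/3.0) = %g\n", naive);                    /*   nan                */
    return 0;
}
```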
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 9, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9052249193191528, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/104558?sort=votes
## What do we know about the semigroup $e^{it\sqrt{-\Delta}}$

I'm very interested in the properties of the semigroup $e^{it\sqrt{-\Delta}}$; it may have some fundamental differences (such as its kernel) from the well-known Schrödinger semigroup $e^{it\Delta}$. Any properties (or references or books) related to this semigroup are appreciated. Thanks!

Try putting "heat semigroup" in google and you'll get lots of relevant references. – André Henriques Aug 12 at 13:51

Are you going to accept the answer? – timur Oct 20 at 13:21

## 1 Answer

The wave operator decomposes as $$\partial_t^2-\Delta = (\partial_t-i\sqrt{-\Delta})(\partial_t+i\sqrt{-\Delta}),$$ so you can think of $e^{it\sqrt{-\Delta}}$ as solving a "half of" the wave equation. In particular, it has a finite propagation speed. This can also be seen from the dispersion relation $\omega = |\xi|$, where $\omega$ and $\xi$ are the Fourier variables for $t$ and $x$, respectively. On the other hand, the Schrödinger propagator $e^{it\Delta}$ has the dispersion relation $\omega=|\xi|^2$, which makes it genuinely dispersive, i.e., the propagation speed depends on the frequency. Note that $e^{it\Delta}$ is not the heat semigroup, which the other answers and comments seem to suggest.

The downvoter care to comment? – timur Aug 13 at 0:23

You're right, $e^{it\Delta}$ isn't the heat semigroup. Editing my answer. – Nik Weaver Aug 13 at 3:14
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 13, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9292948246002197, "perplexity_flag": "middle"}
http://www.physicsforums.com/showthread.php?p=4248479
Physics Forums Page 5 of 9 « First < 2 3 4 5 6 7 8 > Last » ## Why don't photons experience time? Quote by PeterDonis Exactly: that's the point. Putting things in laymen's terms distorts them. I think it boils down to the same thing. Massless particles do not sense the passage of time, or however else one may prefer to say that. Quote by PeterDonis The Lorentz transformation is *not* valid at v = c; the factor that goes to zero is in the denominator, and you can't divide by zero. Um, that's why I said analyzed in the limit as v=c. Perhaps better wording would have been as v goes to c. Blog Entries: 9 Recognitions: Gold Member Science Advisor Quote by dm4b I think it boils down to the same thing. Massless particles do not sense the passage of time, or however else one may prefer to say that. But how one prefers to say it has a huge effect on what inferences lay people draw from it. Say that massless particles are fundamentally different physically from massive ones, so the concept of "passage of time" doesn't even apply to massless particles, and you get questions about why that is, which leads to a fruitful discussion about the behavior of timelike vs. null vectors or worldlines and the way that Lorentz transformations separately take each of those subspaces of Minkowski spacetime into itself. But say that massless particles do not sense the passage of time, and you get interminable threads about how this means photons don't move in time at all, only in space, how a photon can see the entire Universe all at once, etc., etc., leading to all sorts of further inferences that are just false. Then you have to patiently go back and explain how, when you said massless particles do not sense the passage of time, you didn't really mean that, but something else. Quote by dm4b Um, that's why I said analyzed in the limit as v=c. Perhaps better wording would have been as v goes to c. But that doesn't cover the case v = c, only v < c but getting closer and closer. Also, the statement as you gave it is frame-dependent: an object can be moving at v = .9999999999999999c in one frame but be at rest in another, and its "deltaT" changes in concert with that. But an object that is moving at v = c in one frame is moving at v = c in every frame. The two kinds of objects (timelike vs. lightlike) are fundamentally different. Recognitions: Gold Member Post #57 by DaleSpam in this thread and post #59 by me might be helpful at this point. Quote by PeterDonis But how one prefers to say it has a huge effect on what inferences lay people draw from it. Say that massless particles are fundamentally different physically from massive ones, so the concept of "passage of time" doesn't even apply to massless particles, and you get questions about why that is, which leads to a fruitful discussion about the behavior of timelike vs. null vectors or worldlines and the way that Lorentz transformations separately take each of those subspaces of Minkowski spacetime into itself. But say that massless particles do not sense the passage of time, and you get interminable threads about how this means photons don't move in time at all, only in space, how a photon can see the entire Universe all at once, etc., etc., leading to all sorts of further inferences that are just false. Then you have to patiently go back and explain how, when you said massless particles do not sense the passage of time, you didn't really mean that, but something else. Just because something leads to confusion doesn't necessarily mean it is fundamentally incorrect. 
A more technical and exact discussion can alleviate the chances of that and be more fruitful, but that doesn't mean the same kind of confusion can't happen there too. Quote by PeterDonis But that doesn't cover the case v = c, only v < c but getting closer and closer. Exactly, that's the point of a limit. Plot that up and tell me the trend you see. Quote by PeterDonis Also, the statement as you gave it is frame-dependent: an object can be moving at v = .9999999999999999c in one frame but be at rest in another, and its "deltaT" changes in concert with that. But an object that is moving at v = c in one frame is moving at v = c in every frame. The two kinds of objects (timelike vs. lightlike) are fundamentally different. Exactly, combine that with the trend above and what does that suggest. Combine that with the fact that neutrinos would not able to undergo neutrino oscillations if they had zero mass and what does that suggest. It all suggests that "massless particles do not sense the passage of time" In short, I think that saying the phrase in quotes is dead wrong would be as misleading as saying it is technically exact. Recognitions: Gold Member Science Advisor It seems like the term "passage of time" is being thrown around so loosely I can't even ascertain how it is being defined in this context. If you want to ascribe a quantity / notion of time that is frame independent then you could talk about $\int_{\gamma } d\tau$ (where $\gamma$ is the time - like curve the massive particle is traveling on). What would "passage of time" even mean for light when you can't use proper time as an affine parameter along a null - like path? Are you wanting to use coordinate time? Coordinate time isn't frame independent so what kind of physical significance of "passage of time" can you even define for that? Blog Entries: 9 Recognitions: Gold Member Science Advisor Quote by dm4b Just because something leads to confusion doesn't necessarily mean it is fundamentally incorrect. Yes, and I wasn't necessarily saying that "photons don't experience the passage of time" is incorrect. Pedagogy always involves judgment calls, which different people can make in different ways; no argument there. Quote by dm4b A more technical and exact discussion can alleviate the chances of that and be more fruitful, but that doesn't mean the same kind of confusion can't happen there too. No argument here either. Quote by dm4b Exactly, that's the point of a limit. Plot that up and tell me the trend you see. Exactly, combine that with the trend above and what does that suggest. That a null interval is exactly zero. Which we already knew since you can plug a null interval directly into the Minkowski interval formula $ds^2 = dt^2 - dx^2 - dy^2 - dz^2$ and get zero. Quote by dm4b Combine that with the fact that neutrinos would not able to undergo neutrino oscillations if they had zero mass and what does that suggest. It all suggests that "massless particles do not sense the passage of time" The general fact that lightlike intervals are zero suggests to me that timelike and lightlike objects are fundamentally different. However, since you mention neutrino oscillations specifically, we can go into more detail for that specific case. Neutrinos come in three "flavors", electron, muon, and tau, corresponding to the three kinds of "electron-like" leptons. 
Neutrino oscillation means that a neutrino that starts out as one flavor can change to a different flavor--more precisely, the quantum mechanical mixture of flavors of neutrinos changes over time: the amplitudes for the different flavor eigenstates oscillate. Oscillating amplitudes in themselves don't require timelike objects: photon amplitudes can oscillate and photons are massless. The point is that the flavor eigenstates of neutrinos--the states in which only one flavor amplitude is nonzero--are *different* than the mass eigenstates--the states in which a neutrino has a definite invariant mass. But for this to lead to neutrino oscillations as defined above, there must be more than one mass eigenstate, so that the amplitudes for different mass eigenstates can oscillate with different frequencies, which in turn means that the amplitudes for each flavor eigenstate (which are just different linear combinations of the mass eigenstates) also oscillate. That means at least one neutrino mass eigenstate must have a nonzero mass. It does *not* require that *all* of the neutrino mass eigenstates have nonzero mass; there could still be one such state with zero mass. AFAIK the current belief is that all of the mass eigenstates have nonzero mass, but that's based on experimental data, not theoretical requirements. So I would say that the statement "neutrino oscillation requires neutrinos to have non-zero invariant mass" is, while technically correct, a little misleading since it invites the false implication that *any* kind of "oscillation" requires a non-zero invariant mass. Quote by PeterDonis So I would say that the statement "neutrino oscillation requires neutrinos to have non-zero invariant mass" is, while technically correct, a little misleading since it invites the false implication that *any* kind of "oscillation" requires a non-zero invariant mass. After reading your post, I think we're in agreement on pretty much everything and I enjoyed your summary of neutrino oscillations. I guess I'm just known to not be very picky about some of the laymen's descriptions - but, then again, I'm not in a position where I have to explain away the confusions they create. ;-) There is one exception - virtual particles. I really wish they invented a different way to talk about those guys! Even graduate level QFT physics texts could do a better job here. Blog Entries: 9 Recognitions: Gold Member Science Advisor Quote by dm4b virtual particles. I really wish they invented a different way to talk about those guys! Even graduate level QFT physics texts could do a better job here. I'm not familiar with enough QFT texts to comment on them, but I remember having to make a large mental adjustment when I found out about non-perturbative phenomena in QFT. A. Zee's book, Quantum Field Theory in a Nutshell, has a good treatment--at least it made the basics clear to me--and he comments at one point that it took a long time for many QFT theorists to admit that there was more to QFT than Feynman diagrams and perturbation theory, which is where the concept of virtual particles comes from. Quote by WannabeNewton I was talking about light as a wave traveling through the medium. If you want to talk about the individual photons then it is much more subtle than that. 
This is not related to the thread so for now take a look at: http://physics.stackexchange.com/que...-through-glass there is no subtlety of individual photon speed.It is always c.The refractive index concept applies to phase speed of light which has nothing to do with photon's speed.refractive index was used when one has no picture of electrons etc.Also in more modern treatment classical theory of refractive index does agree with quantum explanations.Also the retarding of light in a medium of refractive index n is overall written with a factor c/n.But still it is wrong to say that light in a medium light is retarded at speed c/n. Recognitions: Gold Member Science Advisor Staff Emeritus Quote by Naty1 We haven't any experimental evidence I can think of at either [the 'infinities', nor at v= c] yet, so a discussion seems moot,maybe that's your point, and that's ok by me.... I don't understand what you mean by saying that we don't have experimental evidence at v=c. SR says massless particles always move at c and massive ones never do. We observe that massless particles always move at c, and we never observe a massive one to. This is like saying that biology has no empirical evidence about whether humans can reproduce by fission. Biology says that bacteria can reproduce by fission and humans can't. We observe that bacteria reproduce by fission, and we never observe a human to do so. What experiment would satisfy you, even in principle, that massive particles *can't* move at c? If the only experiment you'll accept is one in which we accelerate a massive particle to c and see what happens, then there is no experiment, even in principle, that would convince you that motion at v=c doesn't exist. This would be like saying that you want to see a human to reproduce by fission so that you can test whether humans can reproduce by fission. Recognitions: Gold Member bcrowell I don't understand what you mean by saying that we don't have experimental evidence at v=c. ben..thanks for the interest. [Look, this could be worse, much worse: just imagine if I were a student of yours with all these crazy perspectives!! ] I seem to be making things worse rather than better....[That's what my wife always claims!!] yeah, we seem to have good evidence massive particles can't get to v =c..... I have never considered that an issue. This below seems to be one example which I had not seen before....I just stumbled across it....but it conveys the concept I am attempting to describe already: The description of event horizons given by general relativity is thought to be incomplete. When the conditions under which event horizons occur are modeled using a more comprehensive picture of the way the universe works, that includes both relativity and quantum mechanics, event horizons are expected to have properties that are different from those predicted using general relativity alone. I'll start a new thread...that may enable you guys to help me understand "what happens when a null like path [a photon] intersects a null like surface [an event horizon]. [just a first thought as a problem statement] let's do that separately after [if] I collect my feeble thoughts!! In a universe full of particles that can only move at lightspeed (i.e. gauge boson) there should be no possibility of interaction from the particles' point of view because time has stopped for them, according to SR Is it true according current physic knowledge? If two photon travel parallel in empty universe, what will happen? Gosh, don't give me warning because of this. 
Blog Entries: 9 Recognitions: Gold Member Science Advisor Quote by SysAdmin Is it true according current physic knowledge? No. Photons don't interact with each other, but that isn't because they're massless; see below for further comment on that. There are massless particles that do interact with each other: gluons, for example. Quote by SysAdmin If two photon travel parallel in empty universe, what will happen? Nothing. But that's not because they "don't experience time". It's because (a) photons don't interact with each other period; photons only interact with particles carrying electric charge, and photons don't carry any electric charge; and (b) the two photons are moving in the same direction at the same speed, so their worldlines will never intersect, so even if they could interact in principle, they wouldn't. Blog Entries: 9 Recognitions: Gold Member Science Advisor Quote by Naty1 This below seems to be one example which I had not seen before....I just stumbled across it....but it conveys the concept I am attempting to describe already: This doesn't have anything to do with photons specifically; it has to do with quantum gravity vs. classical gravity. If quantum effects change the properties of event horizons from what classical GR models them as, that affects *everything* that comes into that region of spacetime, not just photons. Quote by PeterDonis No. Photons don't interact with each other, but that isn't because they're massless; see below for further comment on that. There are massless particles that do interact with each other: gluons, for example. Nothing. But that's not because they "don't experience time". It's because (a) photons don't interact with each other period; photons only interact with particles carrying electric charge, and photons don't carry any electric charge; and (b) the two photons are moving in the same direction at the same speed, so their worldlines will never intersect, so even if they could interact in principle, they wouldn't. Photon live in a instant, it's emitted than re-absorb instantly (according to itself), it doesn't decay, not even at the Schwartzschild Horizon and not interact each other in gravitational force. Does all gauge boson behave like this? Does gluon emitted and re-absorb instantly? Now I understand time dilation is 0 for v=c under SR. Will it be also 0 under GR? Blog Entries: 9 Recognitions: Gold Member Science Advisor Quote by SysAdmin Photon live in a instant, it's emitted than re-absorb instantly (according to itself), No, this is not correct. You are saying that a photon's worldline contains only a single event; that's not correct, photon worldlines contain multiple events. You can't use proper time to label the events, but you can use other affine parameters; and the fact that you can't use proper time to label the events does *not* mean that "they all happen at the same time". Quote by SysAdmin it doesn't decay, not even at the Schwartzschild Horizon Photons don't "decay", exactly, but they can be absorbed, and this can happen anywhere, including at or inside a black hole's horizon. Quote by SysAdmin and not interact each other in gravitational force. Huh? Photons do interact with gravity, like anything that has energy. That means that beams of photons *can* interact with each other gravitationally. (When you do the math, it turns out that antiparallel beams attract each other, but parallel beams don't; that's due to the way the photons' spin affects the interaction.) Quote by SysAdmin Does all boson behave like this? No. 
None of them do, including photons.

Quote by SysAdmin: Now I understand time dilation is 0 for v=c under SR. Will it be also 0 under GR?

It's true that null worldlines have a zero spacetime "length" in GR just as they do in SR. But it's not IMO a good description to say that that means "time dilation is 0". The reason it's not a good description is that it leads to invalid inferences like the ones you made in the quotes above.

Recognitions: Gold Member

Quote by PeterDonis: You are saying that a photon's worldline contains only a single event; that's not correct, photon worldlines contain multiple events.

What are the multiple events on a photon's worldline?
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 3, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9551189541816711, "perplexity_flag": "middle"}
http://www.nag.com/numeric/cl/nagdoc_cl23/html/S/s20aqc.html
# NAG Library Function Document: nag_fresnel_s_vector (s20aqc)

## 1  Purpose

nag_fresnel_s_vector (s20aqc) returns an array of values for the Fresnel integral $S(x)$.

## 2  Specification

#include <nag.h>
#include <nags.h>

void nag_fresnel_s_vector (Integer n, const double x[], double f[], NagError *fail)

## 3  Description

nag_fresnel_s_vector (s20aqc) evaluates an approximation to the Fresnel integral

$$S(x_i) = \int_0^{x_i} \sin\!\left(\frac{\pi}{2} t^2\right) \mathrm{d}t$$

for an array of arguments $x_i$, for $i = 1, 2, \dots, n$.

Note: $S(x) = -S(-x)$, so the approximation need only consider $x \ge 0.0$.

The function is based on three Chebyshev expansions:

For $0 < x \le 3$,
$$S(x) = x^3 \, {\sum_{r=0}}' \, a_r T_r(t), \quad \text{with } t = 2\left(\frac{x}{3}\right)^4 - 1.$$

For $x > 3$,
$$S(x) = \frac{1}{2} - \frac{f(x)}{x} \cos\!\left(\frac{\pi}{2} x^2\right) - \frac{g(x)}{x^3} \sin\!\left(\frac{\pi}{2} x^2\right),$$
where $f(x) = {\sum_{r=0}}' \, b_r T_r(t)$ and $g(x) = {\sum_{r=0}}' \, c_r T_r(t)$, with $t = 2\left(\frac{3}{x}\right)^4 - 1$.

For small $x$, $S(x) \simeq \frac{\pi}{6} x^3$. This approximation is used when $x$ is sufficiently small for the result to be correct to machine precision. For very small $x$, this approximation would underflow; the result is then set exactly to zero.

For large $x$, $f(x) \simeq \frac{1}{\pi}$ and $g(x) \simeq \frac{1}{\pi^2}$. Therefore for moderately large $x$, when $\frac{1}{\pi^2 x^3}$ is negligible compared with $\frac{1}{2}$, the second term in the approximation for $x > 3$ may be dropped. For very large $x$, when $\frac{1}{\pi x}$ becomes negligible, $S(x) \simeq \frac{1}{2}$. However there will be considerable difficulties in calculating $\cos\left(\frac{\pi}{2} x^2\right)$ accurately before this final limiting value can be used. Since $\cos\left(\frac{\pi}{2} x^2\right)$ is periodic, its value is essentially determined by the fractional part of $x^2$. If $x^2 = N + \theta$ where $N$ is an integer and $0 \le \theta < 1$, then $\cos\left(\frac{\pi}{2} x^2\right)$ depends on $\theta$ and on $N$ modulo $4$. By exploiting this fact, it is possible to retain significance in the calculation of $\cos\left(\frac{\pi}{2} x^2\right)$ either all the way to the very large $x$ limit, or at least until the integer part of $\frac{x}{2}$ is equal to the maximum integer allowed on the machine.

## 4  References

Abramowitz M and Stegun I A (1972) Handbook of Mathematical Functions (3rd Edition) Dover Publications

## 5  Arguments

1:     n – Integer (Input)
On entry: $n$, the number of points.
Constraint: ${\mathbf{n}} \ge 0$.

2:     x[n] – const double (Input)
On entry: the argument $x_i$ of the function, for $i = 1, 2, \dots, n$.

3:     f[n] – double (Output)
On exit: $S(x_i)$, the function values.

4:     fail – NagError * (Input/Output)
The NAG error argument (see Section 3.6 in the Essential Introduction).

## 6  Error Indicators and Warnings

NE_BAD_PARAM On entry, argument ⟨value⟩ had an illegal value.

NE_INT On entry, ${\mathbf{n}} = ⟨\mathit{value}⟩$. Constraint: ${\mathbf{n}} \ge 0$.

NE_INTERNAL_ERROR An internal error has occurred in this function. Check the function call and any array sizes. If the call is correct then please contact NAG for assistance.

## 7  Accuracy

Let $\delta$ and $\epsilon$ be the relative errors in the argument and result respectively.
If $\delta$ is somewhat larger than the machine precision (i.e., if $\delta$ is due to data errors etc.), then $\epsilon$ and $\delta$ are approximately related by:

$\epsilon \simeq \frac{x \sin\left(\frac{\pi}{2} x^2\right)}{S(x)} \, \delta .$

Figure 1 shows the behaviour of the error amplification factor $\left|\frac{x \sin\left(\frac{\pi}{2} x^2\right)}{S(x)}\right|$. However if $\delta$ is of the same order as the machine precision, then rounding errors could make $\epsilon$ slightly larger than the above relation predicts.

For small $x$, $\epsilon \simeq 3\delta$ and hence there is only moderate amplification of relative error. Of course for very small $x$ where the correct result would underflow and exact zero is returned, relative error-control is lost.

For moderately large values of $x$,

$\epsilon \simeq 2 x \sin\left(\frac{\pi}{2} x^2\right) \delta$

and the result will be subject to increasingly large amplification of errors. However the above relation breaks down for large values of $x$ (i.e., when $\frac{1}{x^2}$ is of the order of the machine precision); in this region the relative error in the result is essentially bounded by $\frac{2}{\pi x}$. Hence the effects of error amplification are limited and at worst the relative error loss should not exceed half the possible number of significant figures.

[Figure 1: the error amplification factor $\left|\frac{x \sin\left(\frac{\pi}{2} x^2\right)}{S(x)}\right|$ as a function of $x$.]

## 8 Further Comments

None.

## 9 Example

This example reads values of x from a file, evaluates the function at each value of $x_i$ and prints the results.

### 9.1 Program Text
Program Text (s20aqce.c)

### 9.2 Program Data
Program Data (s20aqce.d)

### 9.3 Program Results
Program Results (s20aqce.r)
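For readers without the NAG Library at hand, the limiting behaviour described in Section 3 is easy to check against an independent implementation. The sketch below is not NAG code: it uses SciPy's `scipy.special.fresnel` (which uses the same normalisation, $S(x)=\int_0^x \sin(\tfrac{\pi}{2}t^2)\,dt$) as a stand-in for nag_fresnel_s_vector, and compares it with the small-$x$ and large-$x$ approximations quoted above.

```python
import numpy as np
from scipy.special import fresnel

x = np.array([0.001, 0.5, 1.0, 3.0, 10.0, 100.0])
S, C = fresnel(x)                      # SciPy returns (S(x), C(x)), same normalisation as s20aqc

# Small-x limit quoted in Section 3: S(x) ~ (pi/6) x^3
print("small x :", S[0], np.pi / 6.0 * x[0] ** 3)

# Large-x limit: with f(x) ~ 1/pi and g(x) negligible, S(x) ~ 1/2 - cos(pi x^2 / 2) / (pi x)
approx_large = 0.5 - np.cos(np.pi / 2.0 * x ** 2) / (np.pi * x)
print("large x :", S[-1], approx_large[-1])
```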
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 67, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.6755232214927673, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/13466/why-must-the-field-equations-be-differential
# Why must the field equations be differential?

In Landau–Lifshitz's Course of Theoretical Physics, Vol. 2 ('The Classical Theory of Fields'), Ch. IV, § 27, there is an explanation of why the field equations should be linear differential equations. It goes like this:

Every solution of the field equations gives a field that can exist in nature. According to the principle of superposition, the sum of any such fields must be a field that can exist in nature, that is, must satisfy the field equations. As is well known, linear differential equations have just this property, that the sum of any solutions is also a solution. Consequently, the field equations must be linear differential equations.

Actually, this reasoning is not logically valid. Not only do the authors forget to explain the word 'differential', they also do not actually prove that the field equations must be linear. (Just in case: this observation is not due to me.) But it seems that the last issue can be easily overcome. However, it is exactly the word 'differential', not 'linear', that is bothering me.

There is a nice theorem of Peetre stating that a linear operator $D$ that acts on (the ring of) functions and does not increase supports, that is, $\mathop{\mathrm{supp}} f \supset \mathop{\mathrm{supp}} Df$, must be a differential operator. The property of preserving supports can be considered as a certain locality property. Hence, the field equations must be differential because all interactions must propagate with a finite velocity.

But there is another notion of 'locality' of an operator: the operator $D$ is called local if the function $Df$ in the neighbourhood $V$ can be computed with $f$ determined only on $V$ as well, i.e., $(Df)|_V$ is completely defined by $f|_V$. Locality in this sense is not equivalent to locality in the sense of support preservation. (Unfortunately, I do not have an illustrative example at hand right now, so there is a possibility of mistake $M$ hiding here.)

The question is: what physical circumstances determine the (correct) notion of locality for a given physical problem? (Assuming there is no mistake $M$.) And does my reasoning really justify the word 'differential' in the context of field equations? If so, are there any references containing a more accurate argument than the one presented in Landau–Lifshitz's Course?

-

4 Hm, my view is that one shouldn't interpret any statement in a physics book as a rigorous statement that can be proved. There will always be some hidden assumptions and even if you could pin them down exactly, in the end they wouldn't matter anyway. The point here is to make the qualitative statement that being linear is connected to superposition and being local is connected to differentiability. That's all there is to it, IMHO. – Marek Aug 13 '11 at 7:21

– Willie Wong Aug 13 '11 at 16:28

Suggestion for a better title (v1): Why must field equations be differential equations? – Qmechanic♦ Feb 5 '12 at 1:24

## 4 Answers

All physical phenomena occur on a substrate, medium or 'space'. This space has properties which guarantee the stability of the system, such as the existence of a characteristic propagation speed 'c'. The response allowed at each point in space, and at each moment, is determined strictly by the local environment (its immediate neighborhood), so 'action at a distance' is not permitted, nor are infinities. This physical environment (1D) is illustrated in this image (by Hans de Vries) and found in his online book "Understanding Relativistic Quantum Field Theory".
This medium allows the propagation of the classical wave equation $$\frac{\partial^{2}\psi}{\partial t^{2}}=c^{2}\frac{\partial^{2}\psi}{\partial x^{2}}$$ where ψ is the vertical displacement in the mechanical model. ...the equation is satisfied by any arbitrary function which shifts along with a speed v (or −v). A function "stretched" by a factor v has its slopes decreased by a factor v, while its second-order derivatives are lower by a factor $v^2$.

-

This does not really answer your question of why the equation should be differential. But I think that the two notions of locality you mentioned are just equivalent, if I am not mistaken. Let us prove that the second definition implies the first one. One has to show that if a point $x$ does not belong to $supp(f)$ then $x$ does not belong to $supp(Df)$. Indeed, then there exists an open neighborhood $V$ of $x$ such that $f|_V=0$. Hence by the assumption $Df|_V=0$. Hence $x\not\in supp(Df)$ as requested. Let us now prove the converse statement, that the first definition implies the second one. Assume that $f|_V=g|_V$. Then $(f-g)|_V=0$. Hence $supp(f-g)\cap V=\emptyset$. Consequently $supp(D(f-g))\cap V=\emptyset$, i.e. $D(f-g)|_V=0$. That means that $Df|_V=Dg|_V$.

-

It is not clear what the physical meaning of local should be. There is a large literature on this, and not any overwhelming consensus. See my own question, where I accepted what seemed like the best answer although nothing really got completely settled. Locality in Quantum Mechanics The reference provided to me was extremely interesting, see arXiv:quant-ph/9809030 Furthermore, the position operators that have been proposed for Relativistic Particle Mechanics have very inconvenient non-locality properties. Furthermore furthermore, it is a myth that special relativity forbids propagation of information faster than the speed of light, as has been shown by Prof. Geroch of Chicago in «¿Faster than Light?» http://arxiv.org/abs/1005.1614 so the physical basis of what locality should be is rather complicated...and not yet clear to me.

-

Not all fields follow the superposition principle. A temperature field does not. A pressure field does, etc. Those words in Landau–Lifshitz's book are an attempt to "substantiate" equations obtained mostly empirically in the frame of a differential approach.

-
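To make the "finite propagation speed" point in the first answer concrete, here is a minimal finite-difference sketch (not from any of the answers above; grid spacing, pulse width and step count are arbitrary illustrative values). Because the leapfrog update at each grid point only uses its immediate neighbours, a pulse initially localised near x = 0 stays confined to roughly |x| ≤ c·t plus its initial width.

```python
import numpy as np

# Leapfrog discretisation of  d^2 psi/dt^2 = c^2 d^2 psi/dx^2  (illustrative values only).
c, dx = 1.0, 0.01
dt = dx / c                            # Courant number 1
x = np.arange(-2.0, 2.0, dx)
psi_prev = np.exp(-(x / 0.05) ** 2)    # pulse localised near x = 0
psi = psi_prev.copy()                  # (approximately) zero initial velocity

for _ in range(150):
    lap = np.roll(psi, 1) - 2.0 * psi + np.roll(psi, -1)    # nearest-neighbour coupling only
    psi, psi_prev = 2.0 * psi - psi_prev + (c * dt / dx) ** 2 * lap, psi

t = 150 * dt
front = np.abs(x[np.abs(psi) > 1e-8]).max()
print(f"after t = {t:.2f}: disturbance confined to |x| <~ {front:.2f}, light cone c*t = {c * t:.2f}")
```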
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 27, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9471648335456848, "perplexity_flag": "head"}
http://mathoverflow.net/questions/21207?sort=oldest
## Applications of Faber’s conjecture

Faber's perfect pairing conjecture states that the tautological ring $R^*$ of the moduli space $\mathcal{M}_g$ of curves of genus $g$ behaves as if it were the rational cohomology of a closed, oriented manifold of dimension $g-2$. Specifically, $R^{g-2}$ is rank one, and multiplication into this degree gives a perfect pairing between $R^k$ and $R^{g-2-k}$. My understanding is that it is known (through work of Looijenga, Faber, and Pandharipande) that $R^{g-2} = \mathbb{Q}$, but the perfect pairing part hasn't been proven (though it has been verified in low genus cases). I'd like to know:

1. Why might Faber have conjectured this to be the case? What is it about $R^*$ that suggests that it might satisfy Poincare duality?
2. If true, what sort of applications does this have (to our understanding of $\mathcal{M}_g$, for instance)?

-

1 I could be mis-remembering, but I thought that the conjecture says that the tautological ring looks like the cohomology ring of a projective variety, not just a closed oriented manifold. This would give quite a bit more structure. – Jeffrey Giansiracusa Apr 13 2010 at 16:11

## 1 Answer

1. Numerical evidence, from computing the cases $g=2,3,\dots$, eventually $g\le 15$, and seeing the symmetry in the numbers $\dim R_g^n$. I recall Carel saying he made the conjecture when $g$ was still pretty low, maybe 6. For any $g$, there is an algorithm computing $\dim R^n_g$ in finite time, that Faber came up with.
2. That's not so clear. But that's a very mysterious property. The search for the meaning is on.

-

Wow; that's surprising. Do you know: has anyone constructed submanifolds of $M_g$ whose cohomology realizes $R^*$ in these low genus cases? – Craig Westerland Apr 14 2010 at 0:44

To the best of my knowledge, no such submanifolds are known. – VA Apr 14 2010 at 1:14
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 18, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.955835223197937, "perplexity_flag": "head"}
http://physics.stackexchange.com/questions/41877/finding-max-radius-of-propeller-angular-velocity/41901
# Finding Max Radius of Propeller (angular velocity)

I am looking at a question from University Physics. [The problem statement and the book's given answer were shown as images.] What's the intuition behind using the diagram below to find $v_{tip}$? I was looking at $v_{tan}$, not knowing how to continue. Also, $v_{tip}$ will rotate and turn positive and negative.

-

## 1 Answer

Firstly, we are just talking about the magnitude of $v_{tip}$. This does not change as the velocity vector rotates. In the question we do not care about the direction of the velocity; it just says the tip may never move faster than $270\ \mathrm{m\,s^{-1}}$. That is why the solution only uses $v_{tip}^2$.

Now to find $v_{tip}^2$ they decide to use a configuration (position) of the propeller which is easy to handle. When the propeller is horizontal, the tangential (rotational) velocity of the tip $v_{tan}$ is pointing downward. Additionally, the plane is moving forward with $v_{plane}$, so the propeller is moving forward at the same speed, too. Since these velocities are perpendicular to each other, we can apply a simple superposition to add them up (as the "Side View" diagram shows). Then to get the total $v_{tip}$ we use Pythagoras:

$|v_{tip}| = \sqrt{v_{plane}^2+v_{tan}^2}$ or $v_{tip}^2 = v_{plane}^2+v_{tan}^2$

Now of course, the propeller will rotate, so the direction of $v_{tan}$ will change. But it will always be of the same magnitude, and always perpendicular to $v_{plane}$. Hence, the same equation will hold for all positions of the propeller.

-
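As a quick numerical illustration of this relation (with made-up numbers, since the original problem's forward speed and rotation rate are not reproduced above): writing $v_{tan} = \omega r$, the largest allowed radius is $r_{max} = \sqrt{v_{max}^2 - v_{plane}^2}/\omega$.

```python
import math

# Hypothetical values, not necessarily the textbook's.
v_plane = 75.0     # forward speed of the plane, m/s
rpm = 2400.0       # propeller revolutions per minute
v_max = 270.0      # stated cap on the tip speed, m/s

omega = rpm * 2.0 * math.pi / 60.0               # angular velocity, rad/s
v_tan_max = math.sqrt(v_max**2 - v_plane**2)     # allowed tangential speed of the tip
r_max = v_tan_max / omega

print(f"omega = {omega:.1f} rad/s, v_tan_max = {v_tan_max:.1f} m/s, r_max = {r_max:.3f} m")
```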
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 14, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8980008959770203, "perplexity_flag": "head"}
http://quant.stackexchange.com/questions/344/what-is-the-implied-volatility-skew/347
# What is the implied volatility skew?

I often hear people talking about the skew of the volatility surface, model, etc... but it appears to me that a clear standard definition is not unanimously in place among practitioners. So here is my question: Does anyone have a clear and unifying definition, stated in mathematical terms, of what skew is in the context of risk-neutral pricing? If not, do you have a set of definitions of the skew with respect to a specific context?

-

1 Given that I found an entire paper in the JOD devoted to this question, I think it is arguably not "soft," so I removed that tag. Skew is also a technical term, not "jargon", so I removed that tag, too. – Tal Fishman Aug 25 '11 at 16:54

## 5 Answers

Scott Mixon argues in What Does Implied Volatility Skew Measure that among all measures of implied volatility skew, the (25 delta put volatility - 25 delta call volatility)/50 delta volatility is the most descriptive and least redundant (volatility is Black-Scholes implied volatility). His paper, recently published in the Journal of Derivatives, gives a number of both theoretical and empirical arguments in favor of this measure. He distinguishes between "skew", which is a measure of the slope of the implied volatility curve for a given expiration date, and "skewness", which is the skewness of an option-implied, risk-neutral probability distribution. To calculate the latter, one needs to have a theoretical framework or model, whereas the former is easily observable from options prices.

-

Skew is indeed a widely used word and can represent one of the following:

• Skew(ness) - the 3rd standardized moment, which represents the asymmetry of the distribution (olaker mentioned it in his answer).

• (Volatility) skew - an observable property of the implied volatility surface that can be seen in the market after the 1987 crash. It shows that OTM puts (high demand) are usually have higher price (for the same expiry) than OTM calls (high supply to buy protective puts). To a first approximation you can think of the IV surface as term structure (IV changes with expiry) plus volatility skew (IV changes with strike). See also the Volatility Skew FAQ for a brief explanation.

• Skew can also denote a term in a volatility model that adds an adjustment to reproduce the volatility skew, and which is itself a subject of proper calibration. For example, see the If Skew Fits article, where the local volatility model has the following form: $${\sigma_t} = \sigma_{atm}+\sigma_{skew}+\sigma_{kurt}.$$

-

"It shows that OTM puts (high demand) are usually have higher price (for the same expiry) than OTM calls (high supply to buy protective puts)." -- this is incorrect both factually and grammatically. – quant_dev Feb 9 '11 at 16:07

First we must define what we mean by implied volatility. Let $c_{BS}(t,S(t),K,T;\sigma)$ denote the price of the call option with strike price $K$ and maturity $T$ in the Black-Scholes model with the volatility $\sigma$ (emphasized in the argument), and let $c_{MA}(t,S(t),K,T)$ denote the corresponding price on the market. The implied volatility $\sigma_{imp}$ is defined as the specific volatility for which $$c_{BS}(t,S(t),K,T;\sigma) = c_{MA}(t,S(t),K,T)$$ for some fixed $t$. This implied volatility is very much dependent on the strike price $K$ (which is quite intuitive). It is a well-known phenomenon on the market that the maps $K \longmapsto \sigma_{imp}(K)$ have a so-called smiley or skewed shape.
By smiley shape we mean that $\sigma_{imp}(K)$ has a convex shape with high values for both small and large values of $K$ and a minimum around the forward price $Se^{rt}$. This smiley phenomenon is very common in connection with options on currencies, but not so common for options on stocks. In connection with stocks we often observe a so-called skewed shape of the map $K \longmapsto \sigma_{imp}(K)$, meaning that the implied volatility is high for low strikes and not as high for high strikes. It doesn't even have to have a minimum around the forward price.

[Figure: an example of a so-called smiley shape.]

[Figure: an example of a so-called skewed shape.]

-

The skew of a distribution is a measure of its asymmetry. Let $X_n$ be a discrete process (say, of daily returns) with mean 0 and noncentered volatility $\sigma$. Then the noncentered skew is defined as $$\frac{1}{n}\sum_{k=1}^{n}\frac{X_k^3}{\sigma^3}.$$ It will be positive if $X_n$ and $X_n^2$ are positively correlated and negative if they are negatively correlated. Roughly speaking, the skew expresses the correlation between the move of a random process and its volatility. The volatility skew is the slope of the graph of implied volatility versus strike. A negative skew corresponds to a downward slope, which is observed in equity options.

-

I'm curious - is it possible to estimate the correlation between the equity and its IV, if all you know is the IV skew? – Gravitas Jun 6 '11 at 0:43

Implied volatility skew is simply the collection of implied volatilities on the same underlying instrument for a given expiration. The term "implied volatility skew" is only loosely connected to the statistical definition of skewness. The implied volatility surface is the collection of implied volatilities on the same underlying for several expirations. If the BS formula were true, you would expect implied volatility to be constant across strikes and expirations. This is not true, and the deviation (from a constant) is referred to as skew.

-
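As a small numerical illustration of the definition $c_{BS}(\cdot;\sigma_{imp}) = c_{MA}$ above (a sketch, not a market convention: the function names and the sample quotes are made up), implied volatility is just the root of a one-dimensional equation, and a crude strike-slope measure of skew can then be read off the resulting curve.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import norm

def bs_call(S, K, T, r, sigma):
    """Black-Scholes price of a European call."""
    d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

def implied_vol(price, S, K, T, r):
    """The sigma for which bs_call(..., sigma) matches the observed price."""
    return brentq(lambda s: bs_call(S, K, T, r, s) - price, 1e-6, 5.0)

# Made-up market quotes for calls on the same underlying and expiry.
S, T, r = 100.0, 0.5, 0.01
quotes = {80: 21.50, 90: 13.00, 100: 6.10, 110: 2.20, 120: 0.65}

vols = {K: implied_vol(p, S, K, T, r) for K, p in quotes.items()}
for K, v in vols.items():
    print(f"K = {K}: implied vol = {v:.1%}")

# A crude skew measure: slope of implied vol between the 90 and 110 strikes.
skew = (vols[110] - vols[90]) / (110 - 90)
print(f"skew (d sigma / dK) ~= {skew:.4f} per unit of strike")
```

With these made-up quotes the curve slopes downward (higher implied volatility at low strikes), so the slope comes out negative, consistent with the equity-skew description above.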
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 17, "mathjax_display_tex": 3, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9458791613578796, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/23534?sort=newest
## Is a subspace with a certain property dense in the dual of a vector space?

Suppose we have a normed vector space $V$ and its dual `$V^*$`, and suppose that `$X \subseteq V^*$` has the property that for every $v \in V$, there is some $\phi \in X$ with $\Vert \phi \Vert = 1$ such that $\phi(v) = \Vert v \Vert$. Is $X$ dense in `$V^*$` (in the operator norm)?

Note that this is a stronger property than $\Vert v \Vert = \sup_{\phi\in X} \frac{\phi(v)}{\Vert \phi \Vert}$, since we are assuming that the supremum is realized.

I think the answer is probably "no." A nice example (passed along to me, originally made up by Terry Tao) showing that the second condition (the supremum over $X$ gives the norm) does not imply denseness is the following: consider $l^1$ and `$(l^1)^* = l^\infty$`. Then the space of eventually zero sequences in $l^\infty$ is sufficient for the norm: given $f\in l^1$, let $\phi_n$ be a truncation of the sign function of $f$ to the first $n$ indices. Then $\lim_{n\to \infty} \phi_n(f) = \Vert f \Vert$. However (for $f$ with infinite support), there is no finite sequence $\phi$ with $\phi(f) = \Vert f \Vert$.

-

You must assume that $X$ is a subspace of $V$, else the result is trivially wrong (take $X$ to be the unit sphere of a reflexive space $V$). – Harald Hanche-Olsen May 5 2010 at 3:52

A subset $B$ of the unit sphere of the dual of a Banach space $V$ that has the property that every vector in $V$ achieves its norm at some functional in $B$ is called a boundary for $V$. Recently Hermann Pfitzner arXiv:0807.2810 solved Godefroy's boundary problem in the affirmative. The boundary problem was whether a subset of $V$ is weakly compact if it is compact in the topology of pointwise convergence on some boundary for $V$. This was a problem of some note, with earlier related results by, among others, Grothendieck and Bourgain-Talagrand. – Bill Johnson May 6 2010 at 7:36

## 2 Answers

The answer is negative, since the linear span of the Dirac masses is not a dense subspace of the dual of $C[0,1]$.

-

1 Excellent! One of those answers which, when you see it, makes you slap your forehead and say “why didn't I think of that?” – Harald Hanche-Olsen May 5 2010 at 14:11

Thanks! This is a great simple example. – Alden Walker May 5 2010 at 17:11

Far from a complete answer, but the answer is yes if $V=\ell^1$: for then $X$ must contain every sequence $x\in\ell^\infty$ with $|x_n|=1$ for all $n$ (consider $v\in\ell^1$ given by $v_n=\bar x_n/n^2$), and the space of linear combinations of such sequences is dense in $\ell^\infty$. To see the latter, merely note that any complex number $z$ with $|z|\le2$ can be written as $x+y$ with $|x|=|y|=1$. (Over the reals, you need to work a tiny bit harder.)

-
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 42, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9528281092643738, "perplexity_flag": "head"}
http://mathhelpforum.com/geometry/199455-find-coordinates-point-intersection-perpendicular-bisectors.html
# Thread:

1. ## Find the coordinates of the point of intersection of perpendicular bisectors

Hello, in a pre-calculus book I am given this problem:

Find the coordinates of the point of intersection of the perpendicular bisectors of the sides of a triangle whose vertices are located at (-a, 0), (b, c), and (a, 0).

The only apparent tools with which we are supposed to find the answer are the Pythagorean theorem, the distance formula, the midpoint formula, and the equation of a line. I have tried this a couple of times but end up drowned in variables. Could anyone give me a hand?

2. ## Re: Find the coordinates of the point of intersection of perpendicular bisectors

Hello, Ragnarok!

Did you make a sketch?

Find the coordinates of the point of intersection of the perpendicular bisectors of the sides of a triangle whose vertices are located at (-a, 0), (b, c), and (a, 0).

Code:
``` | | R | * (b,c) | * * *| * * | * P * | * Q - - * - - - - + - - - - * - - (-a,0) | (a,0) |```

The perpendicular bisector of side $PQ$ is the y-axis, $x = 0.$ .[1]

The slope of side $QR$ is $\tfrac{c}{b-a}$
. . The perpendicular slope is $\tfrac{a-b}{c}$
. . . . The midpoint of side $QR$ is $\left(\tfrac{a+b}{2},\:\tfrac{c}{2}\right)$

The equation of the perpendicular bisector of side $QR$ is:
. . $y - \tfrac{c}{2} \:=\:\tfrac{a-b}{c}\left(x - \tfrac{a+b}{2}\right) \quad\Rightarrow\quad y \:=\:\tfrac{a-b}{c}x + \tfrac{b^2+c^2-a^2}{2c}$ .[2]

The intersection of [1] and [2] is: . $\left(0,\:\frac{b^2+c^2-a^2}{2c}\right)$
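The result is easy to sanity-check numerically: the intersection of the perpendicular bisectors is the circumcenter, so it must be equidistant from all three vertices. A short sketch with arbitrary sample values of a, b, c (any values with c ≠ 0 would do):

```python
import math

# Arbitrary sample values, only for checking the symbolic answer.
a, b, c = 3.0, 1.0, 4.0

P, Q, R = (-a, 0.0), (a, 0.0), (b, c)
center = (0.0, (b**2 + c**2 - a**2) / (2.0 * c))      # the point found above

dists = [math.dist(center, V) for V in (P, Q, R)]
print(center, [round(d, 12) for d in dists])          # the three distances agree
```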
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 10, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8742053508758545, "perplexity_flag": "head"}
http://www.chemeurope.com/en/encyclopedia/Rigid_rotor.html
Rigid rotor

The rigid rotor is a mechanical model that is used to explain rotating systems. An arbitrary rigid rotor is a 3-dimensional rigid object, such as a top. To orient such an object in space three angles are required. A special rigid rotor is the linear rotor which is a 2-dimensional object, requiring two angles to describe its orientation. An example of a linear rotor is a diatomic molecule. More general molecules like water (asymmetric rotor), ammonia (symmetric rotor), or methane (spherical rotor) are 3-dimensional, see classification of molecules.

The linear rotor

The linear rigid rotor model consists of two point masses located at fixed distances from their center of mass. The fixed distance between the two masses and the values of the masses are the only characteristics of the rigid model. However, for many actual diatomics this model is too restrictive since distances are usually not completely fixed. Corrections on the rigid model can be made to compensate for small variations in the distance. Even in such a case the rigid rotor model is a useful point of departure (zeroth-order model).

The classical linear rigid rotor

The classical linear rotor consists of two point masses m1 and m2 (with reduced mass $\mu = \frac{m_1m_2}{m_1+m_2}$) at a distance R. The rotor is rigid if R is independent of time. The kinematics of a linear rigid rotor is usually described by means of spherical polar coordinates, which form a coordinate system of R3. In the physics convention the coordinates are the colatitude (zenith) angle $\theta \,$, the longitudinal (azimuth) angle $\varphi\,$ and the distance R. The angles specify the orientation of the rotor in space. The kinetic energy T of the linear rigid rotor is given by

$2T = \mu R^2\big [\dot{\theta}^2+(\dot\varphi\,\sin\theta)^2\big]= \mu R^2 \Big(\dot{\theta}\;\; \dot{\varphi} \Big) \begin{pmatrix} 1 & 0 \\ 0 & \sin^2 \theta \\ \end{pmatrix} \begin{pmatrix} \dot{\theta}\\ \dot{\varphi} \end{pmatrix} = \mu \Big(\dot{\theta}\;\; \dot{\varphi} \Big) \begin{pmatrix} h_\theta^2 & 0 \\ 0 & h_\varphi^2 \\ \end{pmatrix} \begin{pmatrix} \dot{\theta}\\ \dot{\varphi} \end{pmatrix},$

where $h_\theta = R\,$ and $h_\varphi= R\sin\theta\,$ are scale (or Lamé) factors. Scale factors are of importance for quantum mechanical applications since they enter the Laplacian expressed in curvilinear coordinates. In the case at hand (constant R)

$\nabla^2 = \frac{1}{h_\theta h_\varphi}\left[ \frac{\partial}{\partial \theta} \frac{h_\varphi}{h_\theta} \frac{\partial}{\partial \theta} +\frac{\partial}{\partial \varphi} \frac{h_\theta}{h_\varphi} \frac{\partial}{\partial \varphi} \right]= \frac{1}{R^2}\left[\frac{1}{\sin\theta} \frac{\partial}{\partial \theta} \sin\theta \frac{\partial}{\partial \theta} +\frac{1}{\sin^2\theta}\frac{\partial^2}{\partial \varphi^2} \right].$

The classical Hamiltonian function of the linear rigid rotor is

$H = \frac{1}{2\mu R^2}\left[p^2_{\theta} + \frac{p^2_{\varphi}}{\sin^2\theta}\right].$

The quantum mechanical linear rigid rotor

The linear rigid rotor model can be used in quantum mechanics to predict the rotational energy of a diatomic molecule. The rotational energy depends on the moment of inertia for the system, I.
In the center of mass reference frame, the moment of inertia is equal to: I = μR2 where μ is the reduced mass of the molecule and R is the distance between the two atoms. According to quantum mechanics, the energy levels of a system can be determined by solving the Schrödinger equation: $\hat H Y = E Y$ where Y is the wave function and $\hat H$ is the energy (Hamiltonian) operator. For the rigid rotor in a field-free space, the energy operator corresponds to the kinetic energy[1] of the system: $\hat H = - \frac{\hbar^2}{2\mu} \nabla^2$ where $\hbar$ is Planck's constant divided by 2π and $\nabla^2$ is the Laplacian. The Laplacian is given above in terms of spherical polar coordinates. The energy operator written in terms of these coordinates is: $\hat H =- \frac{\hbar^2}{2I} \left [ {1 \over \sin \theta} {\partial \over \partial \theta} \left ( \sin \theta {\partial \over \partial \theta} \right ) + {1 \over {\sin^2 \theta}} {\partial^2 \over \partial \varphi^2} \right]$ This operator appears also in the Schrödinger equation of the hydrogen atom after the radial part is separated off. The eigenvalue equation becomes $\hat H Y_\ell^m (\theta, \varphi ) = \frac{\hbar^2}{2I} \ell(\ell+1) Y_\ell^m (\theta, \varphi ).$ The symbol $Y_\ell^m (\theta, \varphi )$ represents a set of functions known as the spherical harmonics. Note that the energy does not depend on $m \,$. The energy $E_\ell = {\hbar^2 \over 2I} \ell \left (\ell+1\right )$ is $2\ell+1$-fold degenerate: the functions with fixed $\ell\,$ and $m=-\ell,-\ell+1,\dots,\ell$ have the same energy. Introducing the rotational constant B, we write, $E_\ell = B\; \ell \left (\ell+1\right )\quad \textrm{with}\quad B \equiv \frac{\hbar^2}{2I}.$ In the unit of reciprocal length the rotational constant is, $\bar B \equiv \frac{B}{hc} = \frac{h}{8\pi^2cI},$ with c the speed of light. If cgs units are used for h, c, and I, $\bar B$ is expressed in wave numbers, cm-1, a unit that is often used for rotational-vibrational spectroscopy. The rotational constant $\bar B(R)$ depends on the distance R. Often one writes $B_e = \bar B(R_e)$ where Re is the equilibrium value of R (the value for which the interaction energy of the atoms in the rotor has a minimum). A typical rotational spectrum consists of a series of peaks that correspond to transitions between levels with different values of the angular momentum quantum number ($\ell$). Consequently, rotational peaks appear at energies corresponding to an integer multiple of ${2\bar B}$. Selection rules Rotational transitions of a molecule occur when the molecule absorbs a photon [a particle of a quantized electromagnetic (em) field]. Depending on the energy of the photon (i.e., the wavelength of the em field) this transition may be seen as a sideband of a vibrational and/or electronic transition. Pure rotational transitions, in which the vibronic (= vibrational plus electronic) wave function does not change, occur in the microwave region of the em spectrum. Typically, rotational transitions can only be observed when the angular momentum quantum number changes by 1 ($\Delta l = \pm 1$). This selection rule arises from a first-order perturbation theory approximation of the time-dependent Schrödinger equation. According to this treatment, rotational transitions can only be observed when one or more components of the dipole operator have a non-vanishing transition moment. 
If z is the direction of the electric field component of the incoming em wave, the transition moment is, $\langle \psi_2 | \mu_z | \psi_1\rangle = \left ( \mu_z \right )_{21} = \int \psi_2^*\mu_z\psi_1\, \mathrm{d}\tau .$ A transition occurs if this integral is non-zero. By separating the rotational part of the molecular wavefunction from the vibronic part, one can show that this means that the molecule must have a permanent dipole moment. After integration over the vibronic coordinates the following rotational part of the transition moment remains, $\left ( \mu_z \right )_{l,m;l',m'} = \mu \int_0^{2\pi} \mathrm{d}\phi \int_0^\pi Y_{l'}^{m'} \left ( \theta , \phi \right )^* \cos \theta\,Y_l^m\, \left ( \theta , \phi \right )\; \mathrm{d}\cos\theta .$ Here $\mu \cos\theta \,$ is the z component of the permanent dipole moment. The moment μ is the vibronically averaged component of the dipole operator. Only the component of the permanent dipole along the axis of a heteronuclear molecule is non-vanishing. By the use of the orthogonality of the spherical harmonics $Y_l^m\, \left ( \theta , \phi \right )$ it is possible to determine which values of l, m, l', and m' will result in nonzero values for the dipole transition moment integral. This constraint results in the observed selection rules for the rigid rotor: $\Delta m = 0 \quad\hbox{and}\quad \Delta l = \pm 1$ Non-rigid linear rotor The rigid rotor is commonly used to describe the rotational energy of diatomic molecules but it is not a completely accurate description of such molecules. This is because molecular bonds (and therefore the interatomic distance R) are not completely fixed; the bond between the atoms stretches out as the molecule rotates faster (higher values of the rotational quantum number l). This effect can be accounted for by introducing a correction factor known as the centrifugal distortion constant $\bar{D}$ (bars on top of various quantities indicate that these quantities are expressed in cm-1): $\bar E_l = {E_l \over hc} = \bar {B}l \left (l+1\right ) - \bar {D}l^2 \left (l+1\right )^2$ where $\bar D = {4 \bar {B}^3 \over \bar {\boldsymbol\omega}^2}$ $\bar \boldsymbol\omega$ is the fundamental vibrational frequency of the bond (in cm-1). This frequency is related to the reduced mass and the force constant (bond strength) of the molecule according to $\bar \boldsymbol\omega = {1\over 2\pi c} \sqrt{k \over \mu }$ The non-rigid rotor is an acceptably accurate model for diatomic molecules but is still somewhat imperfect. This is because, although the model does account for bond stretching due to rotation, it ignores any bond stretching due to vibrational energy in the bond (anharmonicity in the potential). Arbitrarily shaped rigid rotor An arbitrarily shaped rigid rotor is a rigid body of arbitrary shape with its center of mass fixed (or in uniform rectilinear motion) in field-free space R3, so that its energy consists only of rotational kinetic energy (and possibly constant translational energy that can be ignored). A rigid body can be (partially) characterized by the three eigenvalues of its moment of inertia tensor, which are real nonnegative values known as principal moments of inertia. 
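Before moving on to general (3-dimensional) rotors, here is a small numerical sketch of the linear-rotor term values above: rigid levels $\bar{B}\,l(l+1)$ versus non-rigid levels $\bar{B}\,l(l+1) - \bar{D}\,l^2(l+1)^2$. The values of $\bar{B}$ and $\bar{\omega}$ are made up (merely of the order found for light diatomics) rather than data for any specific molecule.

```python
# Rotational term values (cm^-1) for a linear rotor, rigid vs. non-rigid.
B = 1.9          # cm^-1, rotational constant (illustrative value only)
omega = 2150.0   # cm^-1, fundamental vibrational wavenumber (illustrative value only)
D = 4.0 * B**3 / omega**2            # centrifugal distortion constant, as defined above

for l in range(6):
    E_rigid = B * l * (l + 1)
    E_nonrigid = B * l * (l + 1) - D * (l * (l + 1))**2
    print(f"l = {l}:  rigid = {E_rigid:8.4f} cm^-1   non-rigid = {E_nonrigid:8.4f} cm^-1")

# Successive absorption lines l -> l+1 sit near 2*B*(l+1), slightly compressed by D.
```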
In microwave spectroscopy—the spectroscopy based on rotational transitions—one usually classifies molecules (seen as rigid rotors) as follows: • spherical rotors • symmetric rotors • oblate symmetric rotors • prolate symmetric rotors • asymmetric rotors This classification depends on the relative magnitudes of the principal moments of inertia. Coordinates of the rigid rotor Different branches of physics and engineering use different coordinates for the description of the kinematics of a rigid rotor. In molecular physics Euler angles are used almost exclusively. In quantum mechanical applications it is advantageous to use Euler angles in a convention that is a simple extension of the physical convention of spherical polar coordinates. The first step is the attachment of a right-handed orthonormal frame (3-dimensional system of orthogonal axes) to the rotor (a body-fixed frame) . This frame can be attached arbitrarily to the body, but often one uses the principal axes frame—the normalized eigenvectors of the inertia tensor, which always can be chosen orthonormal, since the tensor is symmetric. When the rotor possesses a symmetry-axis, it usually coincides with one of the principal axes. It is convenient to choose as body-fixed z-axis the highest-order symmetry axis. One starts by aligning the body-fixed frame with a space-fixed frame (laboratory axes), so that the body-fixed x, y, and z axes coincide with the space-fixed X, Y, and Z axis. Secondly, the body and its frame are rotated actively over a positive angle $\alpha\,$ around the z-axis (by the right-hand rule), which moves the y- to the y'-axis. Thirdly, one rotates the body and its frame over a positive angle $\beta\,$ around the y'-axis. The z-axis of the body-fixed frame has after these two rotations the longitudinal angle $\alpha \,$ (commonly designated by $\varphi\,$) and the colatitude angle $\beta\,$ (commonly designated by $\theta\,$ ), both with respect to the space-fixed frame. If the rotor were cylindrical symmetric around its z-axis, like the linear rigid rotor, its orientation in space would be unambiguously specified at this point. If the body lacks cylinder (axial) symmetry, a last rotation around its z-axis (which has polar coordinates $\beta\,$ and $\alpha\,$ ) is necessary to specify its orientation completely. Traditionally the last rotation angle is called $\gamma\,$. The convention for Euler angles described here is known as the z'' − y' − z convention; it can be shown (in the same manner as in this article) that it is equivalent to the z − y − z convention in which the order of rotations is reversed. The total matrix of the three consecutive rotations is the product $\mathbf{R}(\alpha,\beta,\gamma)= \begin{pmatrix} \cos\alpha & -\sin\alpha & 0 \\ \sin\alpha & \cos\alpha & 0 \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} \cos\beta & 0 & \sin\beta \\ 0 & 1 & 0 \\ -\sin\beta & 0 & \cos\beta \\ \end{pmatrix} \begin{pmatrix} \cos\gamma & -\sin\gamma & 0 \\ \sin\gamma & \cos\gamma & 0 \\ 0 & 0 & 1 \end{pmatrix}$ Let $\mathbf{r}(0)$ be the coordinate vector of an arbitrary point $\mathcal{P}$ in the body with respect to the body-fixed frame. The elements of $\mathbf{r}(0)$ are the 'body-fixed coordinates' of $\mathcal{P}$. Initially $\mathbf{r}(0)$ is also the space-fixed coordinate vector of $\mathcal{P}$. 
Upon rotation of the body, the body-fixed coordinates of $\mathcal{P}$ do not change, but the space-fixed coordinate vector of $\mathcal{P}$ becomes, $\mathbf{r}(\alpha,\beta,\gamma)= \mathbf{R}(\alpha,\beta,\gamma)\mathbf{r}(0).$ In particular, if $\mathcal{P}$ is initially on the space-fixed Z-axis, it has the space-fixed coordinates $\mathbf{R}(\alpha,\beta,\gamma) \begin{pmatrix} 0 \\ 0 \\ r \\ \end{pmatrix}= \begin{pmatrix} r \cos\alpha\sin\beta \\ r \sin\alpha \sin\beta \\ r \cos\beta \\ \end{pmatrix},$ which shows the correspondence with the spherical polar coordinates (in the physical convention). Knowledge of the Euler angles as function of time t and the initial coordinates $\mathbf{r}(0)$ determine the kinematics of the rigid rotor. Classical kinetic energy The following text forms a generalization of the well-known special case of the rotational energy of an object that rotates around one axis. It will be assumed from here on that the body-fixed frame is a principal axes frame; it diagonalizes the instantaneous inertia tensor $\mathbf{I}(t)$ (expressed with respect to the space-fixed frame), i.e., $\mathbf{R}(\alpha,\beta,\gamma)^{-1}\; \mathbf{I}(t)\; \mathbf{R}(\alpha,\beta,\gamma) = \mathbf{I}(0)\quad\hbox{with}\quad \mathbf{I}(0) = \begin{pmatrix} I_1 & 0 & 0 \\ 0 & I_2 & 0 \\ 0 & 0 & I_3 \\ \end{pmatrix},$ where the Euler angles are time-dependent and in fact determine the time dependence of $\mathbf{I}(t)$ by the inverse of this equation. This notation implies that at t = 0 the Euler angles are zero, so that at t = 0 the body-fixed frame coincides with the space-fixed frame. The classical kinetic energy T of the rigid rotor can be expressed in different ways: • as a function of angular velocity • in Lagrangian form • as a function of angular momentum • in Hamiltonian form. Since each of these forms has its use and can be found in textbooks we will present all of them. Angular velocity form As a function of angular velocity T reads, $T = \frac{1}{2} \left[ I_1 \omega_x^2 + I_2 \omega_y^2+ I_3 \omega_z^2 \right]$ with $\begin{pmatrix} \omega_x \\ \omega_y \\ \omega_z \\ \end{pmatrix} = \begin{pmatrix} -\sin\beta\cos\gamma & \sin\gamma & 0 \\ \sin\beta\sin\gamma & \cos\gamma & 0 \\ \cos\beta & 0 & 1 \\ \end{pmatrix} \begin{pmatrix} \dot{\alpha} \\ \dot{\beta} \\ \dot{\gamma} \\ \end{pmatrix}.$ The vector $\boldsymbol{\omega} = (\omega_x, \omega_y, \omega_z)$ contains the components of the angular velocity of the rotor expressed with respect to the body-fixed frame. It can be shown that $\boldsymbol{\omega}$ is not the time derivative of any vector, in contrast to the usual definition of velocity. The dots over the time-dependent Euler angles indicate time derivatives. The angular velocity satisfies equations of motion known as Euler's equations (with zero applied torque, since by assumption the rotor is in field-free space). Lagrange form Backsubstitution of the expression of $\boldsymbol{\omega}$ into T gives the kinetic energy in Lagrange form (as a function of the time derivatives of the Euler angles). 
In matrix-vector notation, $2 T = \begin{pmatrix} \dot{\alpha} & \dot{\beta} & \dot{\gamma} \end{pmatrix} \; \mathbf{g} \; \begin{pmatrix} \dot{\alpha} \\ \dot{\beta} \\ \dot{\gamma}\\ \end{pmatrix},$ where $\mathbf{g}$ is the metric tensor expressed in Euler angles—a non-orthogonal system of curvilinear coordinates— $\mathbf{g}= \begin{pmatrix} I_1 \sin^2\beta \cos^2\gamma+I_2\sin^2\beta\sin^2\gamma+I_3\cos^2\beta & (I_2-I_1) \sin\beta\sin\gamma\cos\gamma & I_3\cos\beta \\ (I_2-I_1) \sin\beta\sin\gamma\cos\gamma & I_1\sin^2\gamma+I_2\cos^2\gamma & 0 \\ I_3\cos\beta & 0 & I_3 \\ \end{pmatrix}.$ Angular momentum form Often the kinetic energy is written as a function of the angular momentum $\vec{L}$ of the rigid rotor. This vector is a conserved (time-independent) quantity. With respect to the body-fixed frame it has the components $\mathbf{L}$, which can be shown to be related to the angular velocity, $\mathbf{L} = \mathbf{I}(0)\; \boldsymbol{\omega}\quad\hbox{or}\quad L_i = \frac{\partial T}{\partial\omega_i},\;\; i=x,\,y,\,z.$ Since the body-fixed frame moves (depends on time) these components are not time independent. If we were to represent $\vec{L}$ with respect to the stationary space-fixed frame, we would find time independent expressions for its components. The kinetic energy is given by $T = \frac{1}{2} \left[ \frac{L_x^2}{I_1} + \frac{L_y^2}{I_2}+ \frac{L_z^2}{I_3}\right].$ Hamilton form The Hamilton form of the kinetic energy is written in terms of generalized momenta $\begin{pmatrix} p_\alpha \\ p_\beta \\ p_\gamma \\ \end{pmatrix} \ \stackrel{\mathrm{def}}{=}\ \begin{pmatrix} \partial T/{\partial \dot{\alpha}}\\ \partial T/{\partial \dot{\beta}} \\ \partial T/{\partial \dot{\gamma}} \\ \end{pmatrix} = \mathbf{g} \begin{pmatrix} \; \, \dot{\alpha} \\ \dot{\beta} \\ \dot{\gamma}\\ \end{pmatrix},$ where it is used that the $\mathbf{g}$ is symmetric. In Hamilton form the kinetic energy is, $2 T = \begin{pmatrix} p_{\alpha} & p_{\beta} & p_{\gamma} \end{pmatrix} \; \mathbf{g}^{-1} \; \begin{pmatrix} p_{\alpha} \\ p_{\beta} \\ p_{\gamma}\\ \end{pmatrix},$ with the inverse metric tensor given by ${\scriptstyle \sin^2\beta}\;\;\mathbf{g}^{-1}=$ $\begin{pmatrix} \frac{\cos^2\gamma}{I_1}+\frac{\sin^2\gamma}{I_2} & \left(\frac{1}{I_2}-\frac{1}{I_1}\right){\scriptstyle \sin\beta\sin\gamma\cos\gamma}& -\frac{\cos\beta\cos^2\gamma}{I_1}-\frac{\cos\beta\sin^2\gamma}{I_2} \\ \left(\frac{1}{I_2}-\frac{1}{I_1}\right){\scriptstyle \sin\beta\sin\gamma\cos\gamma}& \frac{\sin^2\beta\sin^2\gamma}{I_1}+\frac{\sin^2\beta\cos^2\gamma}{I_2} & \left(\frac{1}{I_1}-\frac{1}{I_2}\right){\scriptstyle \sin\beta\cos\beta\sin\gamma\cos\gamma}\\ -\frac{\cos\beta\cos^2\gamma}{I_1}-\frac{\cos\beta\sin^2\gamma}{I_2} & \left(\frac{1}{I_1}-\frac{1}{I_2}\right){\scriptstyle \sin\beta\cos\beta\sin\gamma\cos\gamma} & \frac{\cos^2\beta\cos^2\gamma}{I_1}+ \frac{\cos^2\beta\sin^2\gamma}{I_2}+\frac{\sin^2\beta}{I_3} \\ \end{pmatrix}.$ This inverse tensor is needed to obtain the Laplace-Beltrami operator, which (multiplied by $-\hbar^2$) gives the quantum mechanical energy operator of the rigid rotor. 
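A quick numerical consistency check of the Lagrange form above (a sketch added here, not part of the original article): for arbitrary Euler angles, the metric tensor $\mathbf{g}$ written out explicitly must coincide with $M^T \mathbf{I}(0) M$, where $M$ is the matrix relating $(\omega_x, \omega_y, \omega_z)$ to the Euler-angle rates, so both expressions give the same kinetic energy.

```python
import numpy as np

rng = np.random.default_rng(0)
I1, I2, I3 = 1.0, 2.0, 3.0                        # illustrative principal moments of inertia
alpha, beta, gamma = rng.uniform(0.1, 3.0, 3)     # arbitrary Euler angles (g does not involve alpha)

# Matrix relating the body-fixed angular velocity to the Euler-angle rates (angular velocity form).
M = np.array([[-np.sin(beta) * np.cos(gamma), np.sin(gamma), 0.0],
              [ np.sin(beta) * np.sin(gamma), np.cos(gamma), 0.0],
              [ np.cos(beta),                 0.0,           1.0]])
I0 = np.diag([I1, I2, I3])

# Metric tensor written out explicitly, as in the Lagrange form above.
g = np.array([
    [I1*np.sin(beta)**2*np.cos(gamma)**2 + I2*np.sin(beta)**2*np.sin(gamma)**2 + I3*np.cos(beta)**2,
     (I2 - I1)*np.sin(beta)*np.sin(gamma)*np.cos(gamma),
     I3*np.cos(beta)],
    [(I2 - I1)*np.sin(beta)*np.sin(gamma)*np.cos(gamma),
     I1*np.sin(gamma)**2 + I2*np.cos(gamma)**2,
     0.0],
    [I3*np.cos(beta), 0.0, I3]])

print(np.allclose(g, M.T @ I0 @ M))   # True: q'^T g q' equals I1 wx^2 + I2 wy^2 + I3 wz^2
```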
The classical Hamiltonian given above can be rewritten to the following expression, which is needed in the phase integral arising in the classical statistical mechanics of rigid rotors, $\begin{array}{lcl} T &=& \frac{1}{2I_1 \sin^2\beta} \left( (p_\alpha- p_\gamma\cos\beta)\cos\gamma -p_\beta \sin\beta\sin\gamma \right)^2 \\ &&+ \frac{1}{2I_2 \sin^2\beta} \left( (p_\alpha- p_\gamma\cos\beta)\sin\gamma +p_\beta \sin\beta\cos\gamma \right)^2 + \frac{p_\gamma^2}{2I_3}. \\ \end{array}$ Quantum mechanical rigid rotor As usual quantization is performed by the replacement of the generalized momenta by operators that give first derivatives with respect to its canonically conjugate variables (positions). Thus, $p_\alpha \longrightarrow -i \hbar \frac{\partial}{\partial \alpha}$ and similarly for pβ and pγ. It is remarkable that this rule replaces the fairly complicated function pα of all three Euler angles, time derivatives of Euler angles, and inertia moments (characterizing the rigid rotor) by a simple differential operator that does not depend on time or inertia moments and differentiates to one Euler angle only. The quantization rule is sufficient to obtain the operators that correspond with the classical angular momenta. There are two kinds: space-fixed and body-fixed angular momentum operators. Both are vector operators, i.e., both have three components that transform as vector components among themselves upon rotation of the space-fixed and the body-fixed frame, respectively. The explicit form of the rigid rotor angular momentum operators is given here (but beware, they must be multiplied with $\hbar$). The body-fixed angular momentum operators are written as $\hat{\mathcal{P}}_i$. They satisfy anomalous commutation relations. The quantization rule is not sufficient to obtain the kinetic energy operator from the classical Hamiltonian. Since classically pβ commutes with cosβ and sinβ and the inverses of these functions, the position of these trigonometric functions in the classical Hamiltonian is arbitrary. After quantization the commutation does no longer hold and the order of operators and functions in the Hamiltonian (energy operator) becomes a point of concern. Podolsky[1] proposed in 1928 that the Laplace-Beltrami operator (times $-\tfrac{1}{2}\hbar^2$) has the appropriate form for the quantum mechanical kinetic energy operator. This operator has the general form (summation convention: sum over repeated indices—in this case over the three Euler angles $q^1,\,q^2,\,q^3 \equiv \alpha,\,\beta,\,\gamma$): $\hat{H} = - \tfrac{\hbar^2}{2}\;|g|^{-1/2} \frac{\partial}{\partial q^i} |g|^{1/2} g^{ij} \frac{\partial}{\partial q^j},$ where | g | is the determinant of the g-tensor: $|g| = I_1\, I_2\, I_3\, \sin^2 \beta \quad \hbox{and}\quad g^{ij} = (\mathbf{g}^{-1})_{ij}.$ Given the inverse of the metric tensor above, the explicit form of the kinetic energy operator in terms of Euler angles follows by simple substitution. The corresponding eigenvalue equation gives the Schrödinger equation for the rigid rotor in the form that it was solved for the first time by Kronig and Rabi[2] (for the special case of the symmetric rotor). This is one of the few cases where the Schrödinger equation can be solved analytically. All these cases were solved within a year of the formulation of the Schrödinger equation. Nowadays it is common to proceed as follows. 
It can be shown that $\hat{H}$ can be expressed in body-fixed angular momentum operators (in this proof one must carefully commute differential operators with trigonometric functions). The result has the same appearance as the classical formula expressed in body-fixed coordinates, $\hat{H} = \tfrac{1}{2}\left[ \frac{\mathcal{P}_x^2}{I_1}+ \frac{\mathcal{P}_y^2}{I_2}+ \frac{\mathcal{P}_z^2}{I_3} \right].$ The action of the $\hat{\mathcal{P}}_i$ on the Wigner D-matrix is simple. In particular $\mathcal{P}^2\, D^j_{m'm}(\alpha,\beta,\gamma)^* = \hbar^2 j(j+1) D^j_{m'm}(\alpha,\beta,\gamma)^* \quad\hbox{with}\quad \mathcal{P}^2= \mathcal{P}^2_x + \mathcal{P}_y^2+ \mathcal{P}_z^2,$ so that the Schrödinger equation for the spherical rotor (I = I1 = I2 = I3) is solved with the (2j + 1)2 degenerate energy equal to $\tfrac{\hbar^2 j(j+1)}{2I}$. The symmetric top (= symmetric rotor) is characterized by I1 = I2. It is a prolate (cigar shaped) top if I3 < I1 = I2. In the latter case we write the Hamiltonian as $\hat{H} = \tfrac{1}{2}\left[ \frac{\mathcal{P}^2}{I_1}+ \mathcal{P}_z^2\Big(\frac{1}{I_3} -\frac{1}{I_1} \Big) \right],$ and use that $\mathcal{P}_z^2\, D^j_{m k}(\alpha,\beta,\gamma)^* = k^2\, D^j_{m k}(\alpha,\beta,\gamma)^*.$ Hence $\hat{H}\,D^j_{m k}(\alpha,\beta,\gamma)^* = E_{jk} D^j_{m k}(\alpha,\beta,\gamma)^* \quad \hbox{with}\quad E_{jk} = \frac{j(j+1)}{2I_1} + k^2\left(\frac{1}{2I_3}-\frac{1}{2I_1}\right).$ The eigenvalue Ej0 is 2j + 1-fold degenerate, for all eigenfunctions with $m=-j,-j+1,\dots, j$ have the same eigenvalue. The energies with |k| > 0 are 2(2j + 1)-fold degenerate. This exact solution of the Schrödinger equation of the symmetric top was first found in 1927.[2] The asymmetric top problem ($I_1 \ne I_2 \ne I_3$) is not exactly soluble. References Cited references 1. ^ a b B. Podolsky, Phys. Rev., vol. 32, p. 812 (1928) 2. ^ a b R. de L. Kronig and I. I. Rabi, Phys. Rev., vol. 29, pp. 262-269 (1927). General references • D. M. Dennison, Rev. Mod. Physics, vol. 3, pp. 280-345, (1931) (Especially Section 2: The Rotation of Polyatomic Molecules). • J. H. Van Vleck, Rev. Mod. Physics, vol. 23, pp. 213-227 (1951). • McQuarrie, Donald A (1983). Quantum Chemistry. Mill Valley, Calif.: University Science Books. ISBN 0-935702-13-X. • H. Goldstein, C. P. Poole, J. L. Safko, Classical Mechanics, Third Ed., Addison Wesley Publishing Company, San Francisco (2001) ISBN 0-201-65702-3. (Chapters 4 and 5) • V. I. Arnold, Mathematical Methods of Classical Mechanics, Springer-Verlag (1989), ISBN 0-387-96890-3. (Chapter 6). • H. W. Kroto, Molecular Rotation Spectra, Dover Inc., New York, (1992). • W. Gordy and R. L. Cook, Microwave Molecular Spectra, Third Ed., Wiley, New York (1984). • D. Papoušek and M. T. Aliev, Molecular Vibrational-Rotational Spectra, Elsevier, Amsterdam (1982).
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 101, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8785730004310608, "perplexity_flag": "middle"}
http://openwetware.org/index.php?title=User:Timothee_Flutre/Notebook/Postdoc/2011/11/10&diff=679807&oldid=658171
User:Timothee Flutre/Notebook/Postdoc/2011/11/10 (from OpenWetWare)
Current revision

Bayesian model of univariate linear regression for QTL detection

This page aims at helping people like me, interested in quantitative genetics, to get a better understanding of some Bayesian models, most importantly the impact of the modeling assumptions as well as the underlying maths. It starts with a simple model, and gradually increases the scope to relax assumptions. See references to scientific articles at the end.

• Data: let's assume that we obtained data from N individuals. We note $y_1,\ldots,y_N$ the (quantitative) phenotypes (e.g. expression levels at a given gene), and $g_1,\ldots,g_N$ the genotypes at a given SNP (encoded as allele dose: 0, 1 or 2).

• Goal: we want to assess the evidence in the data for an effect of the genotype on the phenotype.

• Assumptions: the relationship between genotype and phenotype is linear; the individuals are not genetically related; there are no hidden confounding factors in the phenotypes.

• Likelihood: we start by writing the usual linear regression for one individual

$\forall i \in \{1,\ldots,N\}, \; y_i = \mu + \beta_1 g_i + \beta_2 \mathbf{1}_{g_i=1} + \epsilon_i \; \text{ with } \; \epsilon_i \; \overset{i.i.d}{\sim} \; \mathcal{N}(0,\tau^{-1})$

where $\beta_1$ is in fact the additive effect of the SNP, noted $a$ from now on, and $\beta_2$ is the dominance effect of the SNP, $d = a k$.

Let's now write the model in matrix notation:

$Y = X B + E \text{ where } B = [ \mu \; a \; d ]^T$

This gives the following multivariate Normal distribution for the phenotypes:

$Y | X, \tau, B \sim \mathcal{N}(XB, \tau^{-1} I_N)$

Even though we can write the likelihood as a multivariate Normal, I still keep the term "univariate" in the title because the regression has a single response, Y. It is usual to keep the term "multivariate" for the case where there is a matrix of responses (i.e. multiple phenotypes).
The likelihood of the parameters given the data is therefore:

$\mathcal{L}(\tau, B) = \mathsf{P}(Y | X, \tau, B)$

$\mathcal{L}(\tau, B) = \left(\frac{\tau}{2 \pi}\right)^{\frac{N}{2}} \exp \left( -\frac{\tau}{2} (Y - XB)^T (Y - XB) \right)$

• Priors: we use the usual conjugate prior

$\mathsf{P}(\tau, B) = \mathsf{P}(\tau) \mathsf{P}(B | \tau)$

A Gamma distribution for $\tau$:

$\tau \sim \Gamma(\kappa/2, \, \lambda/2)$

which means:

$\mathsf{P}(\tau) = \frac{\left(\frac{\lambda}{2}\right)^{\kappa/2}}{\Gamma(\frac{\kappa}{2})} \tau^{\frac{\kappa}{2}-1} e^{-\frac{\lambda}{2} \tau}$

And a multivariate Normal distribution for $B$:

$B | \tau \sim \mathcal{N}(\vec{0}, \, \tau^{-1} \Sigma_B) \text{ with } \Sigma_B = diag(\sigma_{\mu}^2, \sigma_a^2, \sigma_d^2)$

which means:

$\mathsf{P}(B | \tau) = \left(\frac{\tau}{2 \pi}\right)^{\frac{3}{2}} |\Sigma_B|^{-\frac{1}{2}} \exp \left(-\frac{\tau}{2} B^T \Sigma_B^{-1} B \right)$

• Joint posterior (1):

$\mathsf{P}(\tau, B | Y, X) = \mathsf{P}(\tau | Y, X) \mathsf{P}(B | Y, X, \tau)$

• Conditional posterior of B:

$\mathsf{P}(B | Y, X, \tau) = \frac{\mathsf{P}(B, Y | X, \tau)}{\mathsf{P}(Y | X, \tau)}$

Let's neglect the normalization constant for now:

$\mathsf{P}(B | Y, X, \tau) \propto \mathsf{P}(B | \tau) \mathsf{P}(Y | X, \tau, B)$

Similarly, let's keep only the terms in $B$ for the moment:

$\mathsf{P}(B | Y, X, \tau) \propto \exp(B^T \Sigma_B^{-1} B) \exp((Y-XB)^T(Y-XB))$

We expand:

$\mathsf{P}(B | Y, X, \tau) \propto \exp(B^T \Sigma_B^{-1} B - Y^TXB - B^TX^TY + B^TX^TXB)$

We factorize some terms:

$\mathsf{P}(B | Y, X, \tau) \propto \exp(B^T (\Sigma_B^{-1} + X^TX) B - Y^TXB - B^TX^TY)$

Importantly, let's define:

$\Omega = (\Sigma_B^{-1} + X^TX)^{-1}$

We can see that $\Omega^T = \Omega$, which means that $\Omega$ is a symmetric matrix. This is particularly useful here because we can use the following equality: $\Omega^{-1}\Omega^T = I$.

$\mathsf{P}(B | Y, X, \tau) \propto \exp(B^T \Omega^{-1} B - (X^TY)^T\Omega^{-1}\Omega^TB - B^T\Omega^{-1}\Omega^TX^TY)$

This now becomes easy to factorize totally:

$\mathsf{P}(B | Y, X, \tau) \propto \exp((B - \Omega X^TY)^T\Omega^{-1}(B - \Omega X^TY))$

We recognize the kernel of a Normal distribution, allowing us to write the conditional posterior as:

$B | Y, X, \tau \sim \mathcal{N}(\Omega X^TY, \tau^{-1} \Omega)$

• Posterior of τ: Similarly to the equations above:

$\mathsf{P}(\tau | Y, X) \propto \mathsf{P}(\tau) \mathsf{P}(Y | X, \tau)$

But now, to handle the second term, we need to integrate over $B$, thus effectively taking into account the uncertainty in $B$:

$\mathsf{P}(\tau | Y, X) \propto \mathsf{P}(\tau) \int \mathsf{P}(B | \tau) \mathsf{P}(Y | X, \tau, B) \mathsf{d}B$

Again, we use the priors and likelihoods specified above (but everything inside the integral is kept inside it, even if it doesn't depend on $B$!):

$\mathsf{P}(\tau | Y, X) \propto \tau^{\frac{\kappa}{2} - 1} e^{-\frac{\lambda}{2} \tau} \int \tau^{3/2} \tau^{N/2} \exp(-\frac{\tau}{2} B^T \Sigma_B^{-1} B) \exp(-\frac{\tau}{2} (Y - XB)^T (Y - XB)) \mathsf{d}B$

As we used a conjugate prior for $\tau$, we know that we expect a Gamma distribution for the posterior. Therefore, we can take $\tau^{N/2}$ out of the integral and start guessing what looks like a Gamma distribution. We also factorize inside the exponential:

$\mathsf{P}(\tau | Y, X) \propto \tau^{\frac{N+\kappa}{2} - 1} e^{-\frac{\lambda}{2} \tau} \int \tau^{3/2} \exp \left[-\frac{\tau}{2} \left( (B - \Omega X^T Y)^T \Omega^{-1} (B - \Omega X^T Y) - Y^T X \Omega X^T Y + Y^T Y \right) \right] \mathsf{d}B$

We recognize the conditional posterior of B.
This allows us to use the fact that the pdf of the Normal distribution integrates to one: $\mathsf{P}(\tau | Y, X) \propto \tau^{\frac{N+\kappa}{2} - 1} e^{-\frac{\lambda}{2} \tau} exp\left[-\frac{\tau}{2} (Y^T Y - Y^T X \Omega X^T Y) \right]$ We finally recognize a Gamma distribution, allowing us to write the posterior as: $\tau | Y, X \sim \Gamma \left( \frac{N+\kappa}{2}, \; \frac{1}{2} (Y^T Y - Y^T X \Omega X^T Y + \lambda) \right)$ • Joint posterior (2): sometimes it is said that the joint posterior follows a Normal Inverse Gamma distribution: $B, \tau | Y, X \sim \mathcal{N}IG(\Omega X^TY, \; \tau^{-1}\Omega, \; \frac{N+\kappa}{2}, \; \frac{\lambda^\ast}{2})$ where $\lambda^\ast = Y^T Y - Y^T X \Omega X^T Y + \lambda$ • Marginal posterior of B: we can now integrate out τ: $\mathsf{P}(B | Y, X) = \int \mathsf{P}(\tau) \mathsf{P}(B | Y, X, \tau) \mathsf{d}\tau$ $\mathsf{P}(B | Y, X) = \frac{\frac{\lambda^\ast}{2}^{\frac{N+\kappa}{2}}}{(2\pi)^\frac{3}{2} |\Omega|^{\frac{1}{2}} \Gamma(\frac{N+\kappa}{2})} \int \tau^{\frac{N+\kappa+3}{2}-1} exp \left[-\tau \left( \frac{\lambda^\ast}{2} + (B - \Omega X^TY)^T \Omega^{-1} (B - \Omega X^TY) \right) \right] \mathsf{d}\tau$ Here we recognize the formula to integrate the Gamma function: $\mathsf{P}(B | Y, X) = \frac{\frac{\lambda^\ast}{2}^{\frac{N+\kappa}{2}} \Gamma(\frac{N+\kappa+3}{2})}{(2\pi)^\frac{3}{2} |\Omega|^{\frac{1}{2}} \Gamma(\frac{N+\kappa}{2})} \left( \frac{\lambda^\ast}{2} + (B - \Omega X^TY)^T \Omega^{-1} (B - \Omega X^TY) \right)^{-\frac{N+\kappa+3}{2}}$ And we now recognize a multivariate Student's t-distribution: $\mathsf{P}(B | Y, X) = \frac{\Gamma(\frac{N+\kappa+3}{2})}{\Gamma(\frac{N+\kappa}{2}) \pi^\frac{3}{2} |\lambda^\ast \Omega|^{\frac{1}{2}} } \left( 1 + \frac{(B - \Omega X^TY)^T \Omega^{-1} (B - \Omega X^TY)}{\lambda^\ast} \right)^{-\frac{N+\kappa+3}{2}}$ We hence can write: $B | Y, X \sim \mathcal{S}_{N+\kappa}(\Omega X^TY, \; (Y^T Y - Y^T X \Omega X^T Y + \lambda) \Omega)$ • Bayes Factor: one way to answer our goal above ("is there an effect of the genotype on the phenotype?") is to do hypothesis testing. We want to test the following null hypothesis: $H_0: \; a = d = 0$ In Bayesian modeling, hypothesis testing is performed with a Bayes factor, which in our case can be written as: $\mathrm{BF} = \frac{\mathsf{P}(Y | X, a \neq 0, d \neq 0)}{\mathsf{P}(Y | X, a = 0, d = 0)}$ We can shorten this into: $\mathrm{BF} = \frac{\mathsf{P}(Y | X)}{\mathsf{P}_0(Y)}$ Note that, compare to frequentist hypothesis testing which focuses on the null, the Bayes factor requires to explicitly model the data under the alternative. This makes a big difference when interpreting the results (see below). 
$\mathsf{P}(Y | X) = \int \mathsf{P}(\tau) \mathsf{P}(Y | X, \tau) \mathsf{d}\tau$ First, let's calculate what is inside the integral: $\mathsf{P}(Y | X, \tau) = \frac{\mathsf{P}(B | \tau) \mathsf{P}(Y | X, \tau, B)}{\mathsf{P}(B | Y, X, \tau)}$ Using the formula obtained previously and doing some algebra gives: $\mathsf{P}(Y | X, \tau) = \left( \frac{\tau}{2 \pi} \right)^{\frac{N}{2}} \left( \frac{|\Omega|}{|\Sigma_B|} \right)^{\frac{1}{2}} exp\left( -\frac{\tau}{2} (Y^TY - Y^TX\Omega X^TY) \right)$ Now we can integrate out τ (note the small typo in equation 9 of supplementary text S1 of Servin & Stephens): $\mathsf{P}(Y | X) = (2\pi)^{-\frac{N}{2}} \left( \frac{|\Omega|}{|\Sigma_B|} \right)^{\frac{1}{2}} \frac{\frac{\lambda}{2}^{\frac{\kappa}{2}}}{\Gamma(\frac{\kappa}{2})} \int \tau^{\frac{N+\kappa}{2}-1} exp \left( -\frac{\tau}{2} (Y^TY - Y^TX\Omega X^TY + \lambda) \right)$ Inside the integral, we recognize the almost-complete pdf of a Gamma distribution. As it has to integrate to one, we get: $\mathsf{P}(Y | X) = (2\pi)^{-\frac{N}{2}} \left( \frac{|\Omega|}{|\Sigma_B|} \right)^{\frac{1}{2}} \left( \frac{\lambda}{2} \right)^{\frac{\kappa}{2}} \frac{\Gamma(\frac{N+\kappa}{2})}{\Gamma(\frac{\kappa}{2})} \left( \frac{Y^TY - Y^TX\Omega X^TY + \lambda}{2} \right)^{-\frac{N+\kappa}{2}}$ We can use this expression also under the null. In this case, as we need neither a nor d, B is simply μ, ΣB is $\sigma_{\mu}^2$ and X is a vector of 1's. We can also defines $\Omega_0 = ((\sigma_{\mu}^2)^{-1} + N)^{-1}$. In the end, this gives: $\mathsf{P}_0(Y) = (2\pi)^{-\frac{N}{2}} \frac{|\Omega_0|^{\frac{1}{2}}}{\sigma_{\mu}} \left( \frac{\lambda}{2} \right)^{\frac{\kappa}{2}} \frac{\Gamma(\frac{N+\kappa}{2})}{\Gamma(\frac{\kappa}{2})} \left( \frac{Y^TY - \Omega_0 N^2 \bar{Y}^2 + \lambda}{2} \right)^{-\frac{N+\kappa}{2}}$ We can therefore write the Bayes factor: $\mathrm{BF} = \left( \frac{|\Omega|}{\Omega_0} \right)^{\frac{1}{2}} \frac{1}{\sigma_a \sigma_d} \left( \frac{Y^TY - Y^TX\Omega X^TY + \lambda}{Y^TY - \Omega_0 N^2 \bar{Y}^2 + \lambda} \right)^{-\frac{N+\kappa}{2}}$ When the Bayes factor is large, we say that there is enough evidence in the data to support the alternative. Indeed, the Bayesian testing procedure corresponds to measuring support for the specific alternative hypothesis compared to the null hypothesis. Importantly, note that, for a frequentist testing procedure, we would say that there is enough evidence in the data to reject the null. However we wouldn't say anything about the alternative as we don't model it. The threshold to say that a Bayes factor is large depends on the field. It is possible to use the Bayes factor as a test statistic when doing permutation testing, and then control the false discovery rate. This can give an idea of a reasonable threshold. • Hyperparameters: the model has 5 hyperparameters, $\{\kappa, \, \lambda, \, \sigma_{\mu}, \, \sigma_a, \, \sigma_d\}$. How should we choose them? Such a question is never easy to answer. But note that all hyperparameters are not that important, especially in typical quantitative genetics applications. For instance, we are mostly interested in those that determine the magnitude of the effects, σa and σd, so let's deal with the others first. As explained in Servin & Stephens, the posteriors for τ and B change appropriately with shifts (y + c) and scaling ($y \times c$) in the phenotype when taking their limits. 
This also gives us a new Bayes factor, the one used in practice (see Guan & Stephens, 2008):

$\mathrm{lim}_{\sigma_{\mu} \rightarrow \infty \; ; \; \lambda \rightarrow 0 \; ; \; \kappa \rightarrow 0 } \; \mathrm{BF} = \left( \frac{N}{|\Sigma_B^{-1} + X^TX|} \right)^{\frac{1}{2}} \frac{1}{\sigma_a \sigma_d} \left( \frac{Y^TY - Y^TX (\Sigma_B^{-1} + X^TX)^{-1} X^TY}{Y^TY - N \bar{Y}^2} \right)^{-\frac{N}{2}}$

Now, for the important hyperparameters, $\sigma_a$ and $\sigma_d$, it is usual to specify a grid of values, i.e. M pairs $(\sigma_a, \sigma_d)$. For instance, Guan & Stephens used the following grid:

$M=4 \; ; \; \sigma_a \in \{0.05, 0.1, 0.2, 0.4\} \; ; \; \sigma_d = \frac{\sigma_a}{4}$

Then, we can average the Bayes factors obtained over the grid using, as a first approximation, equal weights:

$\mathrm{BF} = \sum_{m \, \in \, \text{grid}} \frac{1}{M} \, \mathrm{BF}(\sigma_a^{(m)}, \sigma_d^{(m)})$

In eQTL studies, the weights can be estimated from the data using a hierarchical model (see below), by pooling all genes together as in Veyrieras et al (PLoS Genetics, 2010).

• Implementation: the following R function is adapted from Servin & Stephens supplementary text 1.

```
BF <- function(G=NULL, Y=NULL, sigma.a=NULL, sigma.d=NULL, get.log10=TRUE){
  stopifnot(! is.null(G), ! is.null(Y), ! is.null(sigma.a), ! is.null(sigma.d))
  subset <- complete.cases(Y) & complete.cases(G)
  Y <- Y[subset]
  G <- G[subset]
  stopifnot(length(Y) == length(G))
  N <- length(G)
  X <- cbind(rep(1,N), G, G == 1)
  inv.Sigma.B <- diag(c(0, 1/sigma.a^2, 1/sigma.d^2))
  inv.Omega <- inv.Sigma.B + t(X) %*% X
  inv.Omega0 <- N
  tY.Y <- t(Y) %*% Y
  log10.BF <- as.numeric(0.5 * log10(inv.Omega0) -
                         0.5 * log10(det(inv.Omega)) -
                         log10(sigma.a) - log10(sigma.d) -
                         (N/2) * (log10(tY.Y - t(Y) %*% X %*% solve(inv.Omega) %*% t(X) %*% cbind(Y)) -
                                  log10(tY.Y - N*mean(Y)^2)))
  if(get.log10)
    return(log10.BF)
  else
    return(10^log10.BF)
}
```

In the same vein as what is explained here, we can simulate data under different scenarios and check the BFs:

```
N <- 300    # play with it
PVE <- 0.1  # play with it
grid <- c(0.05, 0.1, 0.2, 0.4, 0.8, 1.6, 3.2)
MAF <- 0.3
G <- rbinom(n=N, size=2, prob=MAF)
tau <- 1
a <- sqrt((2/5) * (PVE / (tau * MAF * (1-MAF) * (1-PVE))))
d <- a / 2
mu <- rnorm(n=1, mean=0, sd=10)
Y <- mu + a * G + d * (G == 1) + rnorm(n=N, mean=0, sd=tau)
for(m in 1:length(grid))
  print(BF(G, Y, grid[m], grid[m]/4))
```

• Binary phenotype: using a similar notation, we model case-control studies with a logistic regression where the probability to be a case is $\mathsf{P}(y_i = 1) = p_i$.

There are many equivalent ways to write the likelihood, the usual one being:

$y_i | p_i \; \overset{i.i.d}{\sim} \; \mathrm{Binomial}(1,p_i)$ with the log-odds (logit function) being $\mathrm{ln} \frac{p_i}{1 - p_i} = \mu + a \, g_i + d \, \mathbf{1}_{g_i=1}$

Let's use $X_i^T=(1 \; g_i \; \mathbf{1}_{g_i=1})$ to denote the $i$-th row of the design matrix $X$. We can also keep the same definition as above for $B=(\mu \; a \; d)^T$. Thus we have:

$p_i = \frac{e^{X_i^TB}}{1 + e^{X_i^TB}}$

As the $y_i$'s can only take $0$ and $1$ as values, the likelihood can be written as:

$\mathcal{L}(B) = \mathsf{P}(Y | X, B) = \prod_{i=1}^N p_i^{y_i} (1-p_i)^{1-y_i}$

We still use the same prior as above for $B$ (but there is no $\tau$ anymore), so that:

$B | \Sigma_B \sim \mathcal{N}_3(0, \Sigma_B)$

where $\Sigma_B$ is a $3 \times 3$ matrix with $(\sigma_\mu^2 \; \sigma_a^2 \; \sigma_d^2)$ on the diagonal and 0 elsewhere.
As above, the Bayes factor is used to compare the two models: $\mathrm{BF} = \frac{\mathsf{P}(Y | X, M1)}{\mathsf{P}(Y | X, M0)} = \frac{\mathsf{P}(Y | X, a \neq 0, d \neq 0)}{\mathsf{P}(Y | X, a=0, d=0)} = \frac{\int \mathsf{P}(B) \mathsf{P}(Y | X, B) \mathrm{d}B}{\int \mathsf{P}(\mu) \mathsf{P}(Y | X, \mu) \mathrm{d}\mu}$ The interesting point here is that there is no way to analytically calculate these integrals (marginal likelihoods). Therefore, we will use Laplace's method to approximate them, as in Guan & Stephens (2008). Starting with the numerator: $\mathsf{P}(Y|X,M1) = \int \exp \left[ N \left( \frac{1}{N} \mathrm{ln} \, \mathsf{P}(B) + \frac{1}{N} \mathrm{ln} \, \mathsf{P}(Y | X, B) \right) \right] \mathsf{d}B$ Let's use f to denote the function inside the exponential: $\mathsf{P}(Y|X,M1) = \int \exp \left( N \; f(B) \right) \mathsf{d}B$ The function f is defined by: $f: \mathbb{R}^3 \rightarrow \mathbb{R}$ $f(B) = \frac{1}{N} \left( -\frac{3}{2} \mathrm{ln}(2 \pi) - \frac{1}{2} \mathrm{ln}(|\Sigma_B|) - \frac{1}{2}(B^T \Sigma_B^{-1} B) \right) + \frac{1}{N} \sum_{i=1}^N \left( y_i \mathrm{ln}(p_i) + (1-y_i) \mathrm{ln}(1 - p_i) \right)$ This function will then be used to approximate the integral, like this: $\mathsf{P}(Y|X,M1) \approx N^{-3/2} (2 \pi)^{3/2} |H(B^\star)|^{-1/2} e^{N f(B^\star)}$ where H is the Hessian of f and $B^\star = (\mu^\star a^\star d^\star)^T$ is the point at which f is maximized. We therefore need two things: H and $B^\star$. Note that for both we need to calculate the first derivatives of f. As f is multi-dimensional (it takes values in $\mathbb{R}^3$), we need to calculate its gradient. In the following, some formulas from matrix calculus are sometimes required. In such cases, I will use the numerator layout. $\nabla f = \frac{\partial f}{\partial B} = \left( \frac{\partial f}{\partial \mu} \; \frac{\partial f}{\partial a} \; \frac{\partial f}{\partial d} \right)$ $\nabla f = - \frac{1}{2N} \frac{\partial B^T\Sigma_B^{-1}B}{\partial B} + \frac{1}{N} \sum_i \left( y_i \frac{\partial \mathrm{ln}(p_i)}{\partial B} + (1-y_i) \frac{\partial \mathrm{ln}(1-p_i)}{\partial B} \right)$ $\nabla f = - \frac{1}{N} B^T\Sigma_B^{-1} + \frac{1}{N} \sum_i \left( \frac{y_i}{p_i} - \frac{1-y_i}{1-p_i} \right) \frac{\partial p_i}{\partial B}$ A simple form for the first derivatives of pi also exists when writing $p_i = e^{X_i^tB} (1 + e^{X_i^tB})^{-1}$: $\frac{\partial p_i}{\partial B} = \frac{\partial e^{X_i^TB}}{\partial B} (1 + e^{X_i^TB})^{-1} + e^{X_i^TB} \frac{\partial (1 + e^{X_i^TB})^{-1}}{\partial B}$ $\frac{\partial p_i}{\partial B} = e^{X_i^TB} \frac{\partial X_i^TB}{\partial B} (1 + e^{X_i^TB})^{-1} - e^{X_i^TB} (1 + e^{X_i^TB})^{-2} \frac{\partial (1 + e^{X_i^TB})}{\partial B}$ $\frac{\partial p_i}{\partial B} = p_i X_i^T - p_i (1 + e^{X_i^TB})^{-1} e^{X_i^TB} \frac{\partial X_i^TB}{\partial B}$ $\frac{\partial p_i}{\partial B} = p_i (1 - p_i) X_i^T$ This simplifies the gradient of f into: $\nabla f = - \frac{1}{N} B^T\Sigma_B^{-1} + \frac{1}{N} \sum_i (y_i - p_i) X_i^T$ To find $B^\star$, we set $\nabla f(B^\star) = 0$. However, in this equation, $B^\star$ is present not only alone but also in the pi's. As pi is a non-linear function of B, the equation can't be solved directly but an iterative procedure is required, typically a conjugate gradient method (as in Guan & Stephens) or Newton's method. The former only requires f and $\nabla f$ while the latter also requires H. 
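As a rough sketch of that iterative step (added here, not part of the original notebook), the gradient derived above is enough for a quasi-Newton routine such as R's optim() to find $B^\star$. The toy data and the prior standard deviations in the snippet are illustrative choices only.

```
## Minimal sketch: find B* with a quasi-Newton routine, using the gradient above.
## Toy data and prior standard deviations are illustrative, not prescribed values.
set.seed(1)
N <- 300
G <- rbinom(N, size = 2, prob = 0.3)                            # genotypes
Y <- rbinom(N, size = 1, prob = 1 / (1 + exp(-(-0.5 + 0.4*G)))) # 0/1 phenotypes
sigma.mu <- 10; sigma.a <- 0.4; sigma.d <- 0.1
X <- cbind(1, G, G == 1)
inv.Sigma.B <- diag(1 / c(sigma.mu^2, sigma.a^2, sigma.d^2))
neg.Nf <- function(B) {          # -N*f(B), dropping the constant terms
  p <- as.vector(1 / (1 + exp(-X %*% B)))
  as.numeric(0.5 * t(B) %*% inv.Sigma.B %*% B - sum(Y*log(p) + (1-Y)*log(1-p)))
}
neg.grad <- function(B) {        # minus the gradient of N*f, as derived above
  p <- as.vector(1 / (1 + exp(-X %*% B)))
  as.vector(inv.Sigma.B %*% B - t(X) %*% (Y - p))
}
B.star <- optim(c(0, 0, 0), fn = neg.Nf, gr = neg.grad, method = "BFGS")$par
B.star                           # posterior mode (mu*, a*, d*)
```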
Remember that, in any case, we need $H$ for the Laplace approximation, so let's calculate it:

$H = - \frac{1}{N} \Sigma_B^{-1} - \frac{1}{N} \sum_i \frac{\partial p_i}{\partial B} X_i^T$

$H = - \frac{1}{N} \Sigma_B^{-1} - \frac{1}{N} \sum_i X_i^T p_i (1-p_i) X_i$

$H = - \frac{1}{N} (\Sigma_B^{-1} + X^T W X)$

where $W$ is the $N \times N$ matrix with $p_i(1-p_i)$ on the diagonal.

Note that the Hessian of $f$ is negative definite everywhere. Therefore, $f$ is globally concave, which means that it has a unique global maximum, at $B^\star$. As a consequence, we have the right to use Laplace's method to approximate the integral of $f$ around its maximum.

implementation in R -> to do (a rough sketch is given after the references below)

finding the effect sizes and their std error: to do

• Link between Bayes factor and P-value: see Wakefield (2008)

to do

• Hierarchical model: pooling genes, learn weights for grid and genomic annotations, see Veyrieras et al (PLoS Genetics, 2010)

to do

• Multiple SNPs with LD: joint analysis of multiple SNPs, handle correlation between them, see Guan & Stephens (Annals of Applied Statistics, 2011) for MCMC, see Carbonetto & Stephens (Bayesian Analysis, 2012) for Variational Bayes

to do

• Confounders in phenotype: it is well known in molecular biology that any experiment involving several assays (e.g. measuring gene expression levels with a DNA microarray) suffers from "unknown confounders", the most (in)famous being the so-called "batch effects".

For instance, samples from individuals 1 and 2 may be correlated with each other because they were processed on a different day than all the other samples. Such a correlation has nothing to do with the genotype at a given SNP (in most cases). However, the core model, $y_i = \mu + \beta g_i + \epsilon_i$, assumes that the errors are uncorrelated between individuals: $\epsilon_i \overset{\mathrm{i.i.d}}{\sim} \mathcal{N}(0,\tau^{-1})$. If this is not the case, i.e. if the $y_i$'s are correlated but this correlation has nothing to do with the $g_i$'s, then some of the variance in the errors won't be accounted for, and we'll lose power when trying to detect weak, yet non-zero $\beta$.

An intuitive way of removing these confounders is to realize that we can use all gene expression levels to try to identify them. Indeed, batch effects are very likely to affect all genes in a sample (though possibly at different magnitudes). As the effects of the confounders are, as a first approximation, typically much bigger than the effect of a SNP genotype, we can try to learn the confounders using all gene expression levels, and only them. So let's put all of them into a $G \times N$ matrix $Y_1$ with genes in rows and individuals in columns.

For the moment, the data are expressed in the standard basis, i.e. the basis of the observations. But some confounders are present in these data; they contribute noise and redundancy and hence dilute the signal. The idea is, first, to identify a new basis which will correspond to a "mix" of the original samples (e.g. one component of this "mix" may correspond to the day at which the samples were processed), and second, to remove these components from the data in order to only keep the signal.

to be continued

see also factor analysis, see Stegle et al (PLoS Computational Biology, 2010)

• Confounders in genotype: mainly population structure and genetic relatedness, linear mixed model (LMM), see Zhou & Stephens (Nature Genetics, 2012)

to do

• Discrete phenotype: count data (e.g.
from RNA-seq), Poisson-like likelihood, generalized linear model (GLM), see Sun (Biometrics, 2012) to do • Multiple phenotypes: matrix-variate distributions, tensors to do • Non-independent genes: enrichment in known pathways, learn "modules", distributions on networks to do • References: • Servin & Stephens (PLoS Genetics, 2007) • Guan & Stephens (PLoS Genetics, 2008) • Stephens & Balding (Nature Reviews Genetics, 2009)
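Coming back to the "implementation in R: to do" placeholder above, here is a rough, self-contained sketch of the Laplace-approximated log10 Bayes factor for the binary phenotype. It is not the notebook author's code: the prior standard deviations are illustrative, the posterior modes are found with optim(), and the Hessian is the analytic one derived above.

```
## Sketch: Laplace approximation of the marginal likelihood for a logistic model
## with prior B ~ N(0, Sigma.B), then the log10 Bayes factor M1 vs M0.
log.marglik.laplace <- function(X, Y, inv.Sigma.B) {
  d <- ncol(X)
  h <- function(B) {              # log prior + log likelihood (log of the integrand)
    p <- as.vector(1 / (1 + exp(-X %*% B)))
    as.numeric(-0.5 * d * log(2*pi) + 0.5 * determinant(inv.Sigma.B)$modulus -
               0.5 * t(B) %*% inv.Sigma.B %*% B +
               sum(Y * log(p) + (1 - Y) * log(1 - p)))
  }
  g <- function(B) {              # gradient of h, as derived in the text
    p <- as.vector(1 / (1 + exp(-X %*% B)))
    as.vector(-inv.Sigma.B %*% B + t(X) %*% (Y - p))
  }
  opt <- optim(rep(0, d), fn = function(B) -h(B), gr = function(B) -g(B),
               method = "BFGS")
  B.star <- opt$par
  p <- as.vector(1 / (1 + exp(-X %*% B.star)))
  hess <- -(inv.Sigma.B + t(X) %*% (p * (1 - p) * X))   # Hessian of h at B.star
  as.numeric(h(B.star) + 0.5 * d * log(2*pi) -
             0.5 * determinant(-hess)$modulus)          # Laplace approximation
}

log10.BF.logistic <- function(G, Y, sigma.mu = 10, sigma.a = 0.4, sigma.d = 0.1) {
  X1 <- cbind(1, G, G == 1)                    # design matrix under M1
  X0 <- matrix(1, nrow = length(Y), ncol = 1)  # under M0, only the intercept mu
  lm1 <- log.marglik.laplace(X1, Y, diag(1 / c(sigma.mu^2, sigma.a^2, sigma.d^2)))
  lm0 <- log.marglik.laplace(X0, Y, matrix(1 / sigma.mu^2, 1, 1))
  (lm1 - lm0) / log(10)
}
```

The returned value can then be averaged over a grid of (sigma.a, sigma.d) pairs exactly as in the quantitative case.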
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 89, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8363431096076965, "perplexity_flag": "middle"}
http://mathhelpforum.com/math-topics/50231-vector-multiplication-very-urgent-15-mins.html
# Thread:

1. ## Vector multiplication Very urgent: 15 mins

Hi. I really need help quickly. I'm doing a question where I have to find the tangent plane of a function. I have 2 vectors, r_u and r_v:

$r_u = i + 2uk \ \text{and} \ r_v = j + 2vk$

I just need to find the normal vector at the point (1,-1,2), which is given by $r_u \times r_v$, but it's been so long since I've done this that I forgot how to multiply these two. I have the matrix

$\begin{array}{ccc} i & j & k \\ 1 & 0 & 2u \\ 0 & 1 & 2v \end{array}$

2. Hello,

Have a look here: Cross product - Wikipedia, the free encyclopedia

The formula is given there:

$\begin{array}{ccc} i & j & k \\ a_1 & a_2 & a_3 \\ b_1 & b_2 & b_3 \end{array}$

The cross product is the determinant of this matrix. Use Sarrus's rule: Rule of Sarrus - Wikipedia, the free encyclopedia
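For completeness, expanding that determinant along the first row for the two vectors in the original post gives (a worked check added here, not part of the thread):

$r_u \times r_v = \begin{vmatrix} i & j & k \\ 1 & 0 & 2u \\ 0 & 1 & 2v \end{vmatrix} = (0 \cdot 2v - 2u \cdot 1)\,i - (1 \cdot 2v - 2u \cdot 0)\,j + (1 \cdot 1 - 0 \cdot 0)\,k = -2u\,i - 2v\,j + k$

If the surface is parametrized as $r(u,v) = (u, \, v, \, u^2 + v^2)$ (an assumption consistent with the given $r_u$ and $r_v$), the point (1,-1,2) corresponds to $u = 1$, $v = -1$, so the normal vector there is $(-2, 2, 1)$.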
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 3, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8705715537071228, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/74528/how-to-prove-uniqueness-of-matrix-polynomial-and-its-eigendecomposition
## how to prove uniqueness of matrix polynomial and its eigendecomposition

Hello, all!

Let $\underset{l \times l}{A(x)}$ be a square polynomial matrix over $GF(q)[x]$, where $q$ is a prime power. Let $x_i \in GF(q^m)$ ($x_i \not= 0$) be an eigenvalue of $A(x)$: $det(A(x_i)) = 0$. Let $\underset{l \times 1}{\mathbf{v}_{i, j}} \in GF(q^m)^l$ be a right eigenvector that corresponds to $x_i$. So we have $A(x_i) \cdot \mathbf{v}_{i,j} = \underset{l \times 1}{\mathbf{0}}$.

How can it be proved that there exists no polynomial vector $\mathbf{c}(x) = \left( c_0(x), c_1(x), \ldots, c_{l-1}(x) \right)$ such that $\mathbf{c}(x_i) \cdot \mathbf{v}_{i,j} = 0$ for all $x_i$ and $\mathbf{v}_{i,j}$, and that does not belong to the $GF(q)[x]$-linear space generated by $A(x)$?

I suppose the non-existence of such a polynomial vector should be a reasonable guess, by analogy with the eigendecomposition notion from classic linear algebra. So I suppose that the system of polynomial matrix eigenvalues and its right eigenvectors and the original polynomial matrix are in one-to-one correspondence. But the proof of this is not clear to me.

Thank you!

- 1 You are really going to need to give a ton of additional material. What is $q,$ what is the overall setting, what work have you done so far, why do you think "there exists no..." might be true? If you have no idea what is going on, it is simply unfair of you to ask strangers to put their effort into this. – Will Jagy Sep 4 2011 at 19:29

Thank you! I put some fixes into my question. – spk Sep 5 2011 at 10:30

Is it true? Take in the scalar case $A(x)=x^2$, $x_1=1$, $v_1=1$, $c(x)=x$. Then $c(x)v$ vanishes, but $c(x)$ is not spanned by $x^2$. – Federico Poloni Sep 7 2011 at 12:25

Do you mean $x_1 = 0$? I have to consider only non-zero elements that vanish the determinant of $A(x)$. Thank you, I put a fix into the main text. – spk Sep 7 2011 at 13:34

1 Incidentally, I hope you know (or realized) that eigenvectors of matrix polynomials are not linearly independent (a degree-$d$ one has $dl$ eigenpairs, and thus its eigenvectors are simply too many to be linearly independent). Maybe this answers your question. – Federico Poloni Sep 7 2011 at 20:03
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 28, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9307628273963928, "perplexity_flag": "middle"}
http://quant.stackexchange.com/questions/2692/why-a-self-financing-replicating-portfolio-should-always-exist?answertab=active
# Why a self-financing replicating portfolio should always exist? According to my understanding the derivation of the Black-Scholes PDE is based on the assumption that the price of the option should change in time in such a way that it should be possible to construct such a self-financing portfolio whose price replicates the price of the option (within a very small time interval). And my question is: Why do we assume that the price of the option has this property? I will explain myself in more details. First, we assume that the price of a call option $C$ depends on the price of the underlying stock $S$ and time $t$. Then we use the Ito's lemma to get the following expression: $d C = (\frac{\partial C}{\partial t} + S\mu\frac{\partial C}{\partial S} + \frac{1}{2}S^2 \sigma^2 \frac{\partial^2C}{\partial S^2}) dt + \sigma S \frac{\partial C}{\partial S} dW$ (1) , where $\mu$ and $\sigma$ are parameters which determine the time evolution of the stock price: $dS = S(\mu dt + \sigma dW)$ (2) Now we construct a self-financing portfolio which consist of $\omega_s$ shares of the underlying stock and $\omega_b$ shares of a bond. Since the portfolio is self financing, its price $P$ should change in this way: $dP(t) = \omega_s dS(t) + \omega_b dB(t)$. (3) Now we require that $P=C$ and $dP = dC$. It means that we want to find such $\omega_s$ and $\omega_b$ that the portfolio has the same price that the option and its change in price has the same value as the change in price of the option. OK. Why not? If we want to have such a portfolio, we can do it. The special requirements to its price and change of its price should fix its content (i.e. the requirement should fix the portion of the stock and bond in the portfolio ($\omega_s$ and $\omega_b$)). If we substitute (2) in (3), and make use of the fact that $dB = rBdt$ we will get: $\frac{\partial C}{\partial t} + S\mu\frac{\partial C}{\partial S} + \frac{1}{2}S^2 \sigma^2 \frac{\partial^2C}{\partial S^2} = \omega_s S \mu + \omega_b r B$ (4) and $\sigma S \frac{\partial C}{\partial S} = \omega_s S \sigma$ (5) From last equation we can determine $\omega_s$: $\omega_s = \frac{\partial C}{\partial S}$ (6) So, we know the portion of the stock in the portfolio. Since we also know the price of the portfolio (it is equal to the price of the option), we can also determine the portion of the bond in the portfolio ($\omega_b$). Now, if we substitute the found $\omega_s$ and $\omega_b$ into the (4) we will get an expression which binds $\frac{\partial C}{\partial t}$, $\frac{\partial C}{\partial S}$, and $\frac{\partial^2 C}{\partial S^2}$: $\frac{\partial C}{\partial t} + rS \frac{\partial C}{\partial S} + \frac{1}{2} \sigma^2 S^2 \frac{\partial^2 C}{\partial S^2} = rC$ This is nothing but the Black-Scholes PDE. What I do not understand is what requirement binds the derivatives of $C$ over $S$ and $t$. In other words, we apply certain requirements (restrictions) to our portfolio (it should follow the price of the option). As a consequence, we restrict the content of our portfolio (we fix $\omega_s$ and $\omega_b$). But we do not apply any requirements to the price of the option. Well, we say that it should be a function of the $S$ and $t$. As a consequence, we got the equation (1). But from that we will not get any relation between the derivatives of $C$. We, also constructed a replicating portfolio, but why its existence should restrict the evolution of the price of the option? 
It looks to me that the requirement that I am missing is the following: The price of the option should depend on $S$ and $t$ in such a way that it should be possible to create a self-financing portfolio which replicates the price of the option. Am I right? Do we have this requirement? And do we get the Black-Scholes PDE from this requirement? If it is the case, can anybody, please, explain where this requirement comes from. - in theory, yes, but then you need to readjust your position dynamically intraday and very often the cost and slippage of this replication will be so high that in practice it doesn't work. – RockScience Jan 3 '12 at 10:45 ## 2 Answers I feel that the best way to answer your question is to first quote your problematic idea and then carefully explain the subtle alternative. :) The derivation of the Black-Scholes PDE is based on the assumption that the price of the option should change in time in such a way that ... And my question is: Why do we assume that the price of the option has this property? The derivation of B-S PDE doesn't require the assumption above, though it will make derivation easier indeed. Besides, this is not an assumption. This is a natural consequence that we can derive. We don't need to assume option price's properties (i.e. should change in time in such a way ... ). Instead, we derive it, the way option price must follow. To construct such a self-financing portfolio whose price replicates the price of the option (within a very small time interval). You probably somewhat had this chicken-and-egg mystery in your mind: how can I replicate the price of the option before I ever know it? The truth is that (at least to me), for most of the time, my self-financing portfolio is trying to do/replicate something else irrelevant to the mysterious option price, and I found that the option price had better follow what I am doing, not the other way around. Now we require that P=C and dP=dC. Let's think a little deeper about the equation P=C: who forces it? why? how they do it? The answer is somewhat subtle: it depends on when you ask these questions. • At expiry (T): P$_T$ = C$_T$ is enforced by contract/exchange/law. If your counter-party default, buyer can sue the writer, or exchange will handle it and guarantee buyer's right. • Anytime between now and expiry: P$_t$ = C$_t$ is not forced by contract/exchange/law. If there are markets for options, stocks, and bonds, B-S arbitragers will jump in to "help" (instead of force it) by trading against any price deviation. Arbitrager will monetize the deviation by constantly rebalancing a self-financing portfolio P$_t$ that will eventually replicate the option's payoff at expiry. See the difference? P$_T$ = C$_T$ is a requirement, i.e. boundary condition, enforced by law in reality, but P$_t$ = C$_t$ is not. P$_t$ = C$_t$ is more like a consequence than a requirement. It means that we want to find such ωs and ωb that the portfolio has the same price that the option and its change in price has the same value as the change in price of the option. OK. Why not? No, again we find the portfolio for something else. It's the option price that had better attach to our specially designed portfolio, not the other way around. Alright, then what is the self-financing portfolio supposed to do? It's easier to answer the question step-by-step backward. Let's live in a discrete world for a moment without losing generality. • At $T-1$: The self-financing portfolio P$_{T-1}$ is designed to replicate the option's value at $T$, i.e. pay-off. 
• At $T-2$: The self-financing portfolio $P_{T-2}$ is designed to replicate the nearest-future self-financing portfolio's value (which has been determined in the previous step), $P_{T-1}$, which will replicate the option's pay-off in the further future $T$. It has nothing to do with $C_{T-2}$.

• At $T-3$: The self-financing portfolio $P_{T-3}$ is designed to replicate the nearest-future self-financing portfolio $P_{T-2}$ that will replicate $P_{T-1}$ that will replicate the option's pay-off at $T$. Again, $C_{T-3}$ is irrelevant.

• ...

• At $T-n$: The self-financing portfolio $P_{T-n}$ is designed to replicate the nearest-future self-financing portfolio $P_{T-n+1}$ that will replicate $P_{T-n+2}$ that will replicate ... (eventually the option's pay-off at $T$).

If I do every step correctly, I can hang on long enough to the expiry to let the contract/exchange/law enforce the arbitrage/convergence. Note that this is the only convergence (requirement) guaranteed in reality. $C_t = P_t$ is not guaranteed. It is helped by the market/arbitragers, who are willing to (but they don't have to, nobody has to) diligently trade and hedge in order to secure the profit from the price difference.

In other words, we apply certain requirements (restrictions) to our portfolio (it should follow the price of the option).

Now you should be able to answer yourself. In any case, your portfolio is not restricted by the option price. It's the other way around: the market/arbitragers help restrict the option price using the self-financing portfolio.

Well, we say that it should be a function of the S and t. As a consequence, we got the equation (1). But from that we will not get any relation between the derivatives of C. We, also constructed a replicating portfolio, but why its existence should restrict the evolution of the price of the option?

Let me give you another analogy. If I tell you that at $T$, apple$_T$ is guaranteed to equal orange$_T$, and now at $t$ an orange$_t$ = \$5, what do you think about apple$_t$'s value? Should the existence of the orange and its evolution restrict the apple's price? What will you do if their prices are different? To make my analogy more similar, let me also tell you this: "Hey, you've got to do something to your orange$_t$! Otherwise, it will become banana$_{t+1}$!" There is no guarantee between apple and banana at expiry $T$.

What I do not understand is what requirement binds the derivatives of C over S and t. It looks to me that the requirement that I am missing is the following: The price of the option should depend on S and t in such a way that it should be possible to create a self-financing portfolio which replicates the price of the option. Am I right? Do we have this requirement? And do we get the Black-Scholes PDE from this requirement? If it is the case, can anybody, please, explain where this requirement comes from.

Is there still a missing requirement to you in my apple and orange analogy? Do you need any? :) Again, the market/arbitragers help the option price depend on S and t. Their trading activities make things work like that. However, this is not a prerequisite to create the self-financing portfolio.

Now let me try to revise your argument: The price of the option had better depend on S and t in such a way that it follows the value of a self-financing portfolio specially designed by the volatility arbitrager.
The arbitrager's self-financing portfolio is designed to replicate the nearest-future self-financing portfolio that will eventually replicate the option's payoff at expiry.

In conclusion, the missing requirement you originally thought of is actually a natural consequence. :)

You are quite correct that there are further assumptions in the replicating argument. Once you are assuming your equation (2), that is, that $$dS = S(\mu dt + \sigma dW)$$ along with the determinism of interest rates, the rest of the replication argument necessarily follows because you have constructed a mathematical world with nothing else in it. Hence $S, t, r, \sigma$ and $q$ are sufficient to price the option.

Nothing in finance forces any of the assumptions to be true, and as a matter of fact they are false to certain degrees. Consider for example a world in which $B(t)$ is itself stochastic, which of course we necessarily have to treat for interest rate options. In that case, an American call option will have a pricing formula depending on $S$, $t$, and various interest rate variables. Alternatively, consider the case where $$dS = S(\mu dt + \sigma dW + dJ)$$ for some jump process $J$. Depending on how $J$ behaves it may be theoretically impossible to replicate the option. One can still get at valuations using diversification arguments and the like, but those valuations will depend on other parameters.

- thank you for the answer. My problem is not in the assumptions but in the logic which combines these assumptions to obtain the Black-Scholes PDE. It is OK for me to assume that the stock price is given by the diffusion process (equation 2) and that the bond is deterministic (i.e. not stochastic). I can also accept the fact that it is always possible to construct a self-financing replicating portfolio. What is not clear is why all that restricts the price of the option (as a function of S and t). – Roman Jan 5 '12 at 18:10

Well, what it means is that there is a "replication price" for options based solely on $S, t$ and the other variables, such that trading the option is unnecessary to achieve the same payoff as the option itself. Actual option prices can differ, but any difference enables arbitrage and would hence disappear quickly (at least theoretically). – Brian B Jan 5 '12 at 19:01
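The replication argument discussed in this thread can be made concrete with a small simulation (a sketch added here, not from the thread): a portfolio that starts at the Black-Scholes price, holds $\frac{\partial C}{\partial S}$ shares of the stock, keeps the rest in the bond and rebalances in a self-financing way ends up very close to the option's payoff, whatever the drift $\mu$. All parameter values and the step count are arbitrary illustration choices.

```
## Sketch: discrete-time delta-hedging of a European call under the question's
## assumptions (geometric Brownian motion, constant r and sigma).
bs.call <- function(S, K, r, sigma, tau) {   # Black-Scholes call price
  d1 <- (log(S/K) + (r + sigma^2/2) * tau) / (sigma * sqrt(tau))
  d2 <- d1 - sigma * sqrt(tau)
  S * pnorm(d1) - K * exp(-r * tau) * pnorm(d2)
}
bs.delta <- function(S, K, r, sigma, tau) {  # dC/dS = N(d1)
  pnorm((log(S/K) + (r + sigma^2/2) * tau) / (sigma * sqrt(tau)))
}

set.seed(1)
S0 <- 100; K <- 100; r <- 0.02; mu <- 0.08; sigma <- 0.2
Tmat <- 1; n <- 5000; dt <- Tmat / n
S <- S0
P <- bs.call(S0, K, r, sigma, Tmat)          # start the portfolio at the BS price
w.s <- bs.delta(S0, K, r, sigma, Tmat)       # shares of stock held
cash <- P - w.s * S                          # the rest sits in the bond
for (i in 1:n) {
  S <- S * exp((mu - sigma^2/2) * dt + sigma * sqrt(dt) * rnorm(1))  # stock move
  cash <- cash * exp(r * dt)                 # bond accrues interest
  P <- w.s * S + cash                        # portfolio value before rebalancing
  tau <- Tmat - i * dt
  if (tau > 0) {                             # self-financing rebalancing
    w.s <- bs.delta(S, K, r, sigma, tau)
    cash <- P - w.s * S
  }
}
c(portfolio = P, option.payoff = max(S - K, 0))  # close to each other for large n
```

Making the rebalancing finer (larger n) shrinks the remaining gap, which is exactly the sense in which the self-financing portfolio "replicates" the option.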
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 104, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9335042834281921, "perplexity_flag": "middle"}
http://www.physicsforums.com/showthread.php?p=4001391
Physics Forums

## Where does Hamilton's principle come from?

Does it have any deeper theoretical foundation or is it just true empirically?

Which Hamilton's principle? "The system develops in such a way as to extremize the action functional"

Well, if you are familiar with Feynman's path integral formulation of QM, the transition amplitude for a particular classical phase trajectory is proportional to:

[tex] \exp \left( \frac{i}{\hbar} \, S \right) [/tex]

where S is the classical action of the system. Now, in classical mechanics, there is no mention of Planck's constant. Formally, you go to classical mechanics by setting $\hbar \rightarrow 0$. In this limit, the integral with a huge complex exponential is evaluated by the stationary phase approximation, i.e. the dominant contribution to the integral comes from the phase trajectory that makes the action extremal:

[tex] \delta S =0 [/tex]

Intuitively, you can understand this by the rapidly oscillating phase factor. Any phase trajectory that is not extremal has a counterpart with the opposite phase, canceling their contribution. The only one left is the extremal phase trajectory. You may notice that the last condition is Hamilton's least (extremal) action principle.

Thanks for the reply. Though I can't quite see why it is that only the extremal trajectory is not cancelled out. It seems that there are infinitely many possible trajectories with their corresponding action integrals - why should the extremal one survive? Can you go into this, please?

A more generalized formulation is to talk about a path (among the variation range) for which the action integral is stationary. As we know, a quadratic function has an extremum, but a third power function doesn't necessarily have any extremum. The case of a stationary action integral is like a third power function with one point where the derivative is zero. For the sake of simplicity let's say the points of the graph of the third power function represent the variation range of possible paths.

For any point that is not the point where the derivative is zero the following property applies: if you evaluate two paths, infinitesimally close to each other, then the action integrals of those two paths come out differently, and the difference is proportional to the magnitude of the derivative at that point of the graph. But at the point on the graph where the derivative is zero the outcomes of the action integral are "bunched up" so to speak. At the 'stationary point' of the 'principle of stationary action' there is a unique situation: for two paths, infinitesimally close to each other, the difference in their action integrals goes to zero the closer you get to the 'stationary point'. Mathematically this is trivial of course. As I understand it Feynman emphasized this as expressing a crucial physics point.

Thanks a lot.
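The cancellation argument can be made concrete with a tiny numerical sketch (added here, not from the thread). It uses a one-parameter family of "paths" labelled by a, with a quadratic action S(a) = S0 + c a^2, so a = 0 plays the role of the classical path; the constants S0, c, hbar and the 0.1 cutoff are arbitrary choices.

```
## Toy stationary-phase demo: a one-parameter family of "paths" indexed by a,
## with action S(a) = S0 + cc*a^2, stationary at a = 0 (the "classical path").
S0 <- 2; cc <- 5                       # arbitrary constants
S <- function(a) S0 + cc * a^2
hbar <- 0.001                          # a "small" Planck constant
a <- seq(-1, 1, length.out = 200001)   # grid over the family of paths
da <- a[2] - a[1]
phase <- exp(1i * S(a) / hbar)         # contribution of each path
near <- abs(a) < 0.1                   # paths close to the stationary one
cat("|sum| near a = 0     :", abs(sum(phase[near]) * da), "\n")
cat("|sum| away from a = 0:", abs(sum(phase[!near]) * da), "\n")
## The contributions away from the stationary point largely cancel each other,
## so almost all of the integral comes from the neighbourhood of a = 0,
## i.e. from the path that makes the action stationary.
```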
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9054682850837708, "perplexity_flag": "middle"}
http://crypto.stackexchange.com/questions/2184/what-is-the-prg-period-of-stream-ciphers-such-as-rc4-or-salsa20?rq=1
# What is the PRG period of stream ciphers such as RC4 or Salsa20?

I am confused about how long a stream cipher can be used before you should change the key. To be concrete, let me use the stream cipher based on RC4 as an example. Let's say I want to encrypt a very long message. I pick a key with 128 bits and start encrypting using the RC4 stream cipher. How many rounds does RC4 have to run before its PRG starts over again? How about Salsa20: how long can that run with the same key before running the risk of leaking information? I realize that as a practical matter the period will probably be far longer than any real-world message, but I am still interested in knowing the bound.

## 3 Answers

The internal state of RC4 consists of a shuffled 256-element array and two pointers into that array. Thus, there are a total of $$256! \times 256^2 \approx 2^{1700.00}$$ possible states. Since the state update function of RC4 is reversible, it acts as a permutation on this set of possible states, so that every starting state will eventually recur after sufficiently many iterations. How many is sufficiently many? Well, if we assume that the update function behaves like a random permutation (which it, of course, does not, but it's a good first approximation), then the expected cycle length is approximately $2^{1700}/2 = 2^{1699}$ iterations(!). Indeed, the probability that the cycle length starting from a random state is at least $k$ iterations is approximately $1 - k/2^{1700}$; this means that hitting a cycle of less than, say, $2^{200}$ iterations should happen less than once in $2^{1700-200} = 2^{1500}$ initializations, i.e. basically never within the lifetime of the universe.

Of course, as noted, the RC4 state update function is not a random permutation. For example, there's a known class of $254!$ short cycles of $256^2-256 = 65280$ states each; fortunately, the standard RC4 key setup is guaranteed never to hit them. For more information on the actual cycle structure of RC4, see e.g. "Cryptanalysis of RC4-like Ciphers" by S. Mister and S. E. Tavares.

By definition, Salsa20 used as a stream cipher has a 64-bit block counter and 64-byte blocks, limiting its capacity to $2^{73}$ bits. After that, the counter would roll over, and so would the output. In a sense, this is the period.

RC4 has no such explicit limit on the size of its output. We do not know how to compute the period exactly; it very likely depends on the key. Since the RC4 state can take at most $2^{16}\cdot 256!$ values, and RC4 outputs $8$ bits at each step, its period must be less than $2^{1703}$ bits. I do not know if this upper bound can be improved, but I'd be surprised if the exponent could be lowered by more than 25. The expected period for an iterated random permutation of $n$ elements is $(n+1)/2$, therefore I'd be surprised if the average RC4 period were lower than $2^{1676}$ bits. However, there could be much shorter periods for some keys, and given the overly simplistic key scheduling of RC4 it might even be possible to exhibit such a key.

The RC4 period depends on the exact key that was used, but should in general be very long: it is expected to be at least $10^{100}$. See the first link, page 4, for a more detailed description of the relationship between the key and the period.
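To make the state-space counting in the answers concrete, here is a minimal sketch of RC4's output phase, the PRGA (my own illustration, not from the answers). It is not a usable cipher: the key schedule is omitted and the state starts from the identity permutation, which a real key setup would never produce. It does show that the full state is a permutation $S$ of $\{0,\dots,255\}$ plus two indices $i,j$, and that each step is built from invertible operations, which is what makes the update a permutation of the state space.

```python
def rc4_prga(S, i=0, j=0):
    """Yield keystream bytes; (S, i, j) is the complete internal state."""
    S = list(S)
    while True:
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]          # swap two entries: a reversible step
        yield S[(S[i] + S[j]) % 256]

ks = rc4_prga(range(256))                # toy initial state, *not* a keyed state
print([next(ks) for _ in range(8)])
```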
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 14, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9473658204078674, "perplexity_flag": "head"}
http://nrich.maths.org/411/clue
### So Big

One side of a triangle is divided into segments of length $a$ and $b$ by the inscribed circle, with radius $r$. Prove that the area is $\dfrac{abr(a+b)}{ab-r^2}$.

### Gold Again

Without using a calculator, computer or tables, find the exact values of $\cos36^\circ\cos72^\circ$ and also $\cos36^\circ - \cos72^\circ$.

### Bend

What is the longest stick that can be carried horizontally along a narrow corridor and around a right-angled bend?

# 30-60-90 Polypuzzle

##### Stage: 5 Challenge Level

Finding the lengths depends on using the ratio of the sides of a 30-60-90 triangle, as the name of the problem suggests. The diagrams below show how to take the pieces which make a square of unit area and fit them together to make an equilateral triangle of the same area with side $2t$; knowing this, you can calculate $t$. The way the pieces fit together gives you that $p=t$, and the rest is up to you! You can calculate the length $t$ from the area of the equilateral triangle. Pythagoras' theorem and the sine rule can be used to find the other lengths.
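One way the hinted calculation might go (a sketch of my own, not part of the NRICH hint): the equilateral triangle has the same area as the unit square, so
$$\sqrt{3}\,t^2 = \frac{\sqrt{3}}{4}(2t)^2 = 1 \quad\Longrightarrow\quad t = \frac{1}{3^{1/4}} \approx 0.760,$$
and the remaining lengths then follow from this value of $t$ together with Pythagoras' theorem and the sine rule.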
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 4, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9026405811309814, "perplexity_flag": "middle"}
http://unapologetic.wordpress.com/2007/03/09/dual-billiards/?like=1&source=post_flair&_wpnonce=bcc517a348
# The Unapologetic Mathematician ## Dual Billiards I’m not sure when I’ll get to post tomorrow, so I’m giving a little extra tonight: something I realized at about 3 in the morning last night. The last time I talked about billiards I was linking to Rich Schwartz’ paper on “outer billiards”. I noted that it seemed to me there should be some sort of “duality” between outer and inner billiards, turning problems in one into problems in the other. I think I’ve figured it out. I haven’t checked through all the details, but it looks good enough to satisfy my curiosity. If I were going to write a paper and use this fact, of course, I’d rake it over the coals. So here it is: inner and outer billiards are related by projective duality. Those of you who know what this is probably are already thinking either “ah, I see” or “of course it is. you didn’t see that before now?” For the rest of you, I’ll skim a bit about the projective plane and formal geometry. I’m sure I’ll eventually come back and write more about them, but for now I can give enough of the gist. First of all, projective geometry tweaks the familiar axioms from Euclid’s Elements. Euclid says that given a line $l$ and a point $p$ off the line there is exactly one line through $p$ parallel to $l$. In projective geometry, though, any two lines intersect, and moreover they intersect exactly once. That seems nutty at first, but we can make it work by adding a “line at infinity”, with one point for each direction parallel lines could run. If lines seem parallel, they’ll run into each other at that point. The other ingredient is a formal approach to geometry. Remember when I defined the natural numbers, I said that we don’t care what it is that satisfies these properties, just that anything satisfying these properties will do whatever we say the natural numbers will. Well the same goes for geometry. We have an intuitive idea of “point”, “line”, and “plane”, but that doesn’t really matter. David Hilbert famously said that all of Euclidean geometry should still be true if we replaced “point”, “line”, and “plane” with “table”, “chair”, “beer mug”, wherever they occur. Here: “Any two tables intersect in a unique chair”. So all the axioms of projective geometry do is set up a system of referents and relations like the Peano axioms do. Any things that fill those referential slots and relations between those things implementing the axioms will do. The points and lines of the regular Euclidean plane, plus those points and the line “at infinity”, satisfy the right axioms, and so everything projective geometry says will hold true for them. Here’s the trick: The lines and points of the projective plane also satisfy those axioms. Did you miss that? The axioms for “points” and “lines” of projective geometry are satisfied by the lines and points of the projective plane. We can switch lines and points and everything still works out! For example, we’ve talked about the axiom that any two “lines” share a unique “point”. There’s also an axiom that any two “points” share a unique “line” through them. Switching lines and points swaps these two axioms. Any result for projective geometry is really two results: one for the points and lines and one for the lines and points. Okay, here’s how this all ties back to billiards: don’t think of a ball moving along the table and bouncing off the edge. Think of the line the ball is traveling on and the line of the edge it’s moving towards. They share a unique point, where the lines intersect. 
Then there’s another line intersecting the edge line in the same point at the same angle, but “on the other side”. That’s the line the ball follows after the bounce, and so on. In outer billiards, we have a point and the edge point it’s heading towards. There’s a unique line between them, and another point on the same line the same distance away, but on the other side of the edge point. We interchange points and lines, lengths and angles, and transform inner billiards into outer billiards and vice versa.

Of course, the calculations strike me as being pretty horrendous in all but the simplest situations. I don’t know that it would be useful to use this duality in practice, but maybe it can come in handy. Actually, for all I know the experts are already well aware of it. Still, it’s nice to have figured it out.

Posted by John Armstrong | Billiards

## 2 Comments »

1. Hi John, I don’t buy this connection between inner and outer billiards that you mention. I agree that projective duality exchanges some of the corresponding concepts used in the definition of the two dynamical systems, but I don’t think that this fact raises the supposed correspondence to anything like the level of actual mathematics. Note that projective dualities do not respect the metric notions of angle and length. Inner billiards is defined using the concept of the angle of incidence, and outer billiards is defined using the concept of distance to the shape. These two notions are not swapped by projective dualities, except in such trivial cases as the circle. Your analogy seems to work well on a vague level – and perhaps could inspire a general transfer of theorems between the two subjects – but when examined closely it breaks down. (The experts …)

Here is one way to see that the connection you mention is not really useful: it doesn’t produce anything like an equivalence between any two examples of the dynamical systems. I mean “equivalence” in the precise, mathematical sense. There are different ways that two different dynamical systems could be equivalent. The strongest notion is one of conjugacy. One has maps $f_1: X_1 \to X_1$ and $f_2: X_2 \to X_2$, and then a conjugacy would be a map $g: X_1 \to X_2$ that makes the obvious square commute. A weaker notion would be orbit equivalence, in which the map $g$ carries $f_1$-orbits to $f_2$-orbits. A weaker notion still would be measurable orbit equivalence, where everything in sight is measure preserving, and $g$ carries almost all orbits to orbits. And so on. In order for your “correspondence” to really be of any value, you would have to show how it induced some kind of useful equivalence between dynamical systems of the one kind and the other. I can imagine that this might work out just fine for circles, but I don’t see how it could work in any other case. It would take just one nontrivial case to convince me.

I think it is misleading to imply that it is only “horrendous calculations” that prevent your correspondence from being useful in practice. This implies that the correspondence actually works, and that it is only a matter of having the fortitude to implement it.

best, Rich

Comment by | September 9, 2007 | Reply

2. Thanks for your thoughts on the matter. It was mostly a guess on my part, and I didn’t really go much further with it beyond tossing it by Jayadev. You’re far more the expert here than I am, so I’ll bow to your knowledge. Still, I’m glad in a way to hear that this doesn’t work, and I’m not inadvertently sitting on top of a gold mine.
Comment by | September 11, 2007 | Reply
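For readers who want to experiment with the outer billiards map described in the post, here is a small sketch (my own illustration; the function name, tolerance, and example polygon are made up, and the choice of "right" tangent vertex is just one of the two standard conventions). Given a point outside a convex polygon, one step finds the supporting vertex and reflects the point through it.

```python
import numpy as np

def outer_billiards_step(P, verts):
    """One step of the outer billiards map about a convex polygon.

    verts: vertices in counterclockwise order; P: a point outside the polygon.
    We pick the vertex v such that the whole polygon lies to the left of the
    ray P -> v, then reflect P through v.
    """
    for v in verts:
        d = v - P
        rel = verts - P
        cross = d[0] * rel[:, 1] - d[1] * rel[:, 0]   # 2D cross products with every vertex
        if np.all(cross >= -1e-12):
            return 2 * v - P
    raise ValueError("P appears to be inside the polygon")

square = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
P = np.array([2.5, 0.3])
for _ in range(5):
    P = outer_billiards_step(P, square)
    print(P)
```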
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 4, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9394665956497192, "perplexity_flag": "middle"}
http://mathhelpforum.com/calculus/31617-need-help-vectors-question.html
# Thread:

1. ## Need help with vectors question

Sorry, I'm pretty much in a rush to finish this soon, as I have been stuck on understanding this vectors chapter for a really long time. I hope someone will be able to help with it... I've tried, but I really cannot figure it out alone.

The line L has cartesian equations $x - 2 = \frac{y - 5}{2} = -z$. Find the perpendicular distance from the origin to L. Find also the acute angle between L and the z-axis.

I don't know what concepts I am missing, but I am stuck. Help will be much appreciated. I'm trying to learn and not just get answers. Rushing on revision as I don't have much time... planning to finish my math revision in 2 months.

2. Workings:

3. Originally Posted by Puzzled: "The line L has cartesian equations $x - 2 = \frac{y - 5}{2} = -z$. Find the perpendicular distance from the origin to L. Find also the acute angle between L and the z-axis."

The parametric form of the line is:

$L(\lambda ) = \begin{bmatrix} \lambda + 2 \\ 2\lambda + 5 \\ -\lambda \end{bmatrix} = \begin{bmatrix} 2 \\ 5 \\ 0 \end{bmatrix} + \lambda \begin{bmatrix} 1 \\ 2 \\ -1 \end{bmatrix}$

RonL

4. The classic formula for the distance of a point $P$ to a line is:

$l(t) = Q + tD, \quad P \notin l(t), \qquad d\left( l,P \right) = \frac{\left\| \overrightarrow{QP} \times D \right\|}{\left\| D \right\|}.$

5. Originally Posted by Puzzled: (the question above)

The perpendicular distance from the origin to the line can be found in lots of ways. I will resolve $x = [2,5,0]^t$ (a point on the line) into a component parallel to the line and subtract this from $x$ to find the component $n$ normal to the line; the answer will be the norm of this component.

$l = \begin{bmatrix} 1 \\ 2 \\ -1 \end{bmatrix}, \qquad n = x - \frac{x \cdot l}{\left\| l \right\|^2}\, l$

RonL
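A quick numerical check of the setup above (my own verification, not part of the thread): from $x-2=\frac{y-5}{2}=-z$ the line passes through $(2,5,0)$ with direction $(1,2,-1)$, which gives distance $\sqrt{5}$ from the origin and an acute angle of about $65.9^\circ$ with the z-axis.

```python
import numpy as np

P0 = np.array([2.0, 5.0, 0.0])     # a point on the line
d = np.array([1.0, 2.0, -1.0])     # direction vector of the line

# perpendicular distance from the origin: ||P0 x d|| / ||d||
dist = np.linalg.norm(np.cross(P0, d)) / np.linalg.norm(d)

# acute angle between the line and the z-axis
cos_theta = abs(d[2]) / np.linalg.norm(d)
angle = np.degrees(np.arccos(cos_theta))

print(dist, angle)                 # ~2.2361 (= sqrt 5), ~65.91 degrees
```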
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 9, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9695510864257812, "perplexity_flag": "head"}
http://mathhelpforum.com/advanced-algebra/166624-fixed-point-syntax-print.html
# Fixed point syntax!

• December 20th 2010, 02:56 AM
cantona

Fixed point syntax!

Hi everyone! I am a little bit confused about how to write this. My map is $F(f)(x)=\frac{1}{2}f(x)-\frac{3}{5}$. I must calculate a fixed point. I know how to do that and I know what it is, but the problem is the syntax. Is this the right way: $F(f(x_0))=f(x_0)$, with the result in the form $f(x_0)=-\frac{6}{5}$? Or should it be $f_0(x)$ and not $f(x_0)$?

Thanks in advance

• December 21st 2010, 03:58 AM
HallsofIvy

It is $f_0(x)$. Your map, $F$, is acting on functions, not numbers. A "fixed point" of $F$ is a function $f_0(x)$ such that $F(f_0)= f_0$. Since $F(f)(x)$ is defined as $\frac{1}{2}f(x)- \frac{3}{5}$, a fixed point is a function $f_0(x)$ such that $\frac{1}{2}f(x)- \frac{3}{5}= f(x)$ for all $x$. Of course, from that, $\frac{1}{2}f(x)= -\frac{3}{5}$, so the function $f_0$ is defined by $f_0(x)= -\frac{6}{5}$ for all $x$.
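As a quick check of the answer above (my own addition, not part of the original thread): plugging the constant function $f_0(x) = -\frac{6}{5}$ back into the map gives
$$F(f_0)(x) = \frac{1}{2}\left(-\frac{6}{5}\right) - \frac{3}{5} = -\frac{3}{5} - \frac{3}{5} = -\frac{6}{5} = f_0(x),$$
so $f_0$ is indeed fixed by $F$.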
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 15, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.946038544178009, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/147401/does-the-contraction-from-the-localised-ring-preserve-colon-ideals-and-ideal-sum
# Does the contraction from the localised ring preserve colon ideals and ideal sums/products?

Let $A$ be a commutative ring and $B = S^{-1}A$ its localisation with respect to a multiplicative subset $S$ of $A$. Consider the contraction (in $A$) of colon ideals, ideal sums, and ideal products (in $B$), as long as these make sense. Do the contracted ideals still possess the original characteristics? That is, will the contraction of colon ideals (resp. of sums, resp. of products) in $B$ be colon ideals (resp. sums, resp. products) of the corresponding contracted ideals in $A$?

I suspect there are counterexamples if $A$ is not noetherian, but I have no idea how to tackle this.

(Thanks for pointing out the obscurity. I hope this time it is more legible.)
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 8, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.916664719581604, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/6576?sort=newest
## Finding the new zeros of a “perturbed” polynomial

Given a univariate polynomial $p(x)$ with real coefficients and degree $n$, suppose we know all the zeros $x_j$, and they are all real. Now suppose I perturb each of the coefficients $p_j$ (for $j \le n$) by a small real perturbation $\varepsilon_j$. What are the conditions on the perturbations (edit: for example, how large can they be, by some measure) so that the solutions remain real?

Some thoughts: surely people have thought of this problem in terms of a differential equation valid for small $\varepsilon$ that lets you take the known solutions to the new solutions. Before I try to rederive that, does it have a name? This seems like a pretty general solution technique, but perhaps it is so general as to be intractable practically, which might explain why I don't know about it.

If you ignore the smallness of the perturbation, then there is a general question here which seems like it might be related to Horn's problem: given two real polynomials $p(x)$ and $q(x)$ of degree $n$ and their strictly real roots, what can you infer about the roots of $p(x)+q(x)$? This question is very interesting and I would love to hear what people know about it. But I'm also happy with the perturbed subproblem above, assuming it is indeed simpler.

You might be interested in en.wikipedia.org/wiki/Wilkinson%27s_polynomial . – Qiaochu Yuan Nov 23 2009 at 17:09

All real and distinct? If there are even multiplicities, then in general no perturbation is allowed, I think. – Theo Johnson-Freyd Nov 23 2009 at 22:23

Well, it depends on what "direction" the perturbation is in. $p(x)=x^2$ can be perturbed by adding any negative constant, but no positive constant. – jc Nov 23 2009 at 23:51

## 5 Answers

Assume that $p(x)$ is monic and that its roots are real and distinct. Then we may let $\mu > 0$ denote the minimum distance between any two roots. Let $q(x)$ be any polynomial of degree $< n$ such that $|q(\alpha)| < (\mu/2)^n$ for any root $\alpha$ of $p(x)$. Then I claim that $g(x) = p(x) + q(x)$ has real roots. Note that for a root $\alpha$ of $p(x)$, we have $|g(\alpha)| = |q(\alpha)| < (\mu/2)^n$. Yet $$|g(\alpha)| = \prod |\alpha - \beta_i|$$ where the $\beta_i$ are the roots of $g$ (which is also monic). It follows that $g(x)$ has a root $\beta$ such that $|\alpha - \beta| < \mu/2$. By the triangle inequality, $\alpha$ is uniquely determined by $\beta$ and this inequality. In particular, $g(x)$ has exactly one root within $\mu/2$ of each root of $p(x)$. Since $g(x)$ and $p(x)$ have the same degree, this exhausts all the roots of $g(x)$. Yet if $\beta$ were complex, then $|\alpha - \beta| = |\alpha - \overline{\beta}| < \mu/2$, a contradiction. If the roots of $p(x)$ are not distinct, then one is in trouble, as the example $p(x) = x^2$ shows.

This can be thought of as an application of the $\mathbf{R}$-version of Krasner's Lemma, and is, in particular, an (exact) analog of the argument that the splitting field of a separable polynomial $f(x)$ over the $p$-adics is locally constant.

EDIT: I am confused about your second question. Let $f(x)$ be any polynomial of degree $n$ with real coefficients. Let $A$ and $B$ denote the maximum and minimum values of $f(x)$ on the interval $[0,n]$. Choose any $M > \max(|A|,|B|)$. A polynomial of degree $n$ may be defined by specifying $n+1$ of its values. Let $p(x)$ be the polynomial of degree $n$ such that the values $p(k)$ for $k = 0,\ldots,n$ are alternately $-M$ and $+M$.
By the intermediate value theorem, $p(x)$ has $n$ real roots. Let $q(x) = f(x) - p(x)$. The signs of $q(k)$ also alternate for $k = 0, \ldots,n$ by construction. It follows that $q(x)$ also has $n$ real roots. Yet $p(x) + q(x) = f(x)$, and thus the sum of two polynomials with real roots does not satisfy any restrictions.

Those are both very nice answers. In the case of the sum of two arbitrary polynomials, is there an obvious way to make the question less trivial so that it has an interesting answer? (Just out of curiosity.) – Steve Flammia Nov 24 2009 at 4:11

The following criterion might help (see for example these lecture notes): all roots of a polynomial are real if and only if the $n \times n$ Hankel matrix defined by $H_{ij}=s_k$ for $i+j=k+2$ is positive definite, where $s_k$ is the sum of the $k$-th powers of the $n$ roots, i.e. $s_k = \sum_{i=1}^n x_i^k$ with your notation. These sums can be computed directly from the polynomial coefficients using Newton's identities.

Nice, I think the inequalities derived in those notes are the ones I was grasping for in my answer. – jc Nov 23 2009 at 20:26

That's a pretty neat result! Before I wade through the lecture notes, is there a simple intuition (perhaps having to do with moments?) as to why this might be true? – Steve Flammia Nov 23 2009 at 20:28

Well, one direction is easy. Suppose the roots are real. Let $M$ be the Vandermonde matrix $M_{ij} = (x_i)^j$. Then the Hankel matrix in question is $M^T M$, so it is positive definite. – David Speyer Nov 24 2009 at 0:18

One way to go about this kind of question is to realize that the set of all real polynomials of degree $n$, interpreted as the space $\mathbb{R}^{n+1}$ of its coefficients, is partitioned into sets in which the "behavior of the roots" is the same. E.g. for real quadratics $a_2 x^2 + a_1 x + a_0$, one has either two real roots or two complex roots depending on whether the discriminant $a_1^2-4a_0 a_2$ is $>0$ or $<0$ (when it's equal to zero one has a double root). The idea is that the behavior of the roots is the same in each chamber, where the chambers are separated by the set of polynomials that have multiple roots, i.e. where the discriminant of the polynomial is zero. This set is called the discriminant variety. (After taking logarithms, one can see the amoeba. In pictures like those, one should view the different components of the complement of the amoeba as the sets in the partition I'm talking about.) Hence one imagines your starting polynomial $p(x)$ as a point in the space of polynomials which is in the chamber where all roots are real, and one can figure out which perturbations keep it in this chamber by calculating the shape of the discriminant variety. This is where my knowledge gets shaky. I believe it is known how to calculate the shape of the discriminant variety and the chambers, in terms of inequalities on the coefficients (see e.g. the answer to this question), but I was never able to figure out all the details myself. See e.g. Passare and Tsikh's article "Algebraic equations and hypergeometric series", which unfortunately is not online anywhere. Does anybody know how to do these calculations? (See the notes in F G's answer; I think they provide a way of making what I've said more explicit.
In particular, Lemma 9.)

There is a famous example of this in numerical analysis called Wilkinson's polynomial. A tiny change to one coefficient causes the location of some roots to change dramatically. Also, the polynomial goes from all real roots to having roots with large imaginary components. The example shows that finding roots can be an ill-conditioned problem.

I don't know exactly what sort of answer you are expecting. An obvious observation (which you have presumably made yourself) is that if all the roots are simple then a sufficiently small perturbation will do. Conversely, if there is a repeated root then I'm pretty sure there are arbitrarily small perturbations that produce imaginary roots. This is obvious for roots with even multiplicity. Actually, it's also obvious for roots with odd multiplicity, because one can perturb them to have multiplicity 1 (by adding a very small multiple of $(x-c)$, where $c$ is the root) and thereby fail to have a full set of real roots.

But you seem to be asking for a more detailed answer, such as how to tell from the polynomial how small the perturbation needs to be. One thing that seems worth doing is working out the highest common factor of the polynomial and its derivative. If there are no repeated roots, then this will be 1, and if there are then it won't. But probably if the polynomial comes very close to having a repeated root (it might, say, be the polynomial $x^2-0.0000000001$) then something will show up in the calculation of the hcf. If I do Euclid's algorithm on $x^2-t$ and $2x$, then I get $x^2-t=(x/2)\cdot 2x-t$ and then $2x=(-2x/t)\cdot(-t)$. The very large coefficient $(-2x/t)$ seems to be telling me that I only just managed to get a constant. Or perhaps it's easier just to say that $x^2-t$ is almost a multiple of $2x$. Anyhow, it seems likely that if Euclid's algorithm works robustly, then you can afford bigger perturbations.
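Picking up the Hankel-matrix criterion quoted in one of the answers above, here is a small numerical sketch (my own code, not from the thread). It uses the convention $H_{ij} = s_{i+j}$ with $0 \le i,j \le n-1$; the matrix is positive semidefinite exactly when all roots are real, and positive definite when they are in addition distinct. The power sums are obtained from the coefficients with Newton's identities, as the answer suggests.

```python
import numpy as np

def power_sums(coeffs, m):
    """s_0, ..., s_m for the monic polynomial x^n + a_1 x^(n-1) + ... + a_n,
    with coeffs = [1, a_1, ..., a_n], computed via Newton's identities."""
    a = np.asarray(coeffs, dtype=float)
    a = a[1:] / a[0]
    n = len(a)
    s = [float(n)]                                   # s_0 = number of roots
    for k in range(1, m + 1):
        if k <= n:
            sk = -k * a[k - 1] - sum(a[i - 1] * s[k - i] for i in range(1, k))
        else:
            sk = -sum(a[i - 1] * s[k - i] for i in range(1, n + 1))
        s.append(sk)
    return np.array(s)

def all_roots_real(coeffs, tol=1e-9):
    """True iff the polynomial with these real coefficients has only real roots."""
    n = len(coeffs) - 1
    s = power_sums(coeffs, 2 * n - 2)
    H = np.array([[s[i + j] for j in range(n)] for i in range(n)])
    return bool(np.all(np.linalg.eigvalsh(H) >= -tol))

print(all_roots_real([1, 0, -2]))        # x^2 - 2           -> True
print(all_roots_real([1, 0, 2]))         # x^2 + 2           -> False
print(all_roots_real([1, -6, 11, -6]))   # (x-1)(x-2)(x-3)   -> True
```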
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 63, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9530603289604187, "perplexity_flag": "head"}
http://physics.stackexchange.com/questions/47572/is-there-a-nice-way-to-write-navier-stokes-equations-in-exterior-calculus?answertab=active
# Is there a nice way to write Navier-Stokes equations in exterior calculus?

I'm considering studying some high-dimensional Navier-Stokes equations. One problem is to write the viscous equations for vorticity, helicity, and other conserved quantities. I think it might be better if it is possible to work with differential forms and exterior calculus. Is there any reference for this?

You mean in a form other than $\rho (\partial_t + v \cdot \nabla) v= f+ \dot{\overline \sigma}(\dot \nabla)$? Or formulas for the quantities you mention? – Muphrid Dec 25 '12 at 22:09
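For reference, and purely as my own addition (this is the classical three-dimensional vector-calculus form, not the exterior-calculus rewriting the question asks for): for incompressible flow with constant density and kinematic viscosity $\nu$, taking the curl of the momentum equation gives the viscous vorticity equation
$$\partial_t \omega + (u \cdot \nabla)\,\omega = (\omega \cdot \nabla)\,u + \nu\, \nabla^2 \omega, \qquad \omega = \nabla \times u,$$
which is the equation one would then try to recast in terms of the vorticity $2$-form and the Lie derivative along the velocity field.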
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9152883887290955, "perplexity_flag": "head"}
http://unapologetic.wordpress.com/2012/09/11/all-derivations-of-semisimple-lie-algebras-are-inner/?like=1&source=post_flair&_wpnonce=c589d5bcbb
# The Unapologetic Mathematician

## All Derivations of Semisimple Lie Algebras are Inner

It turns out that all the derivations on a semisimple Lie algebra $L$ are inner derivations. That is, they’re all of the form $\mathrm{ad}(x)$ for some $x\in L$. We know that the homomorphism $\mathrm{ad}:L\to\mathrm{Der}(L)$ is injective when $L$ is semisimple. Indeed, its kernel is exactly the center $Z(L)$, which we know is trivial. We are asserting that it is also surjective, and thus an isomorphism of Lie algebras.

If we set $D=\mathrm{Der}(L)$ and $I=\mathrm{Im}(\mathrm{ad})$, we can see that $[D,I]\subseteq I$. Indeed, if $\delta$ is any derivation and $x\in L$, then we can check that

$\displaystyle\begin{aligned}\left[\delta,\mathrm{ad}(x)\right](y)&=\delta([\mathrm{ad}(x)](y))-[\mathrm{ad}(x)](\delta(y))\\&=\delta([x,y])-[x,\delta(y)]\\&=[\delta(x),y]+[x,\delta(y)]-[x,\delta(y)]\\&=[\mathrm{ad}(\delta(x))](y)\end{aligned}$

This makes $I\subseteq D$ an ideal, so the Killing form $\kappa$ of $I$ is the restriction to $I\times I$ of the Killing form of $D$. Then we can define $I^\perp\subseteq D$ to be the subspace orthogonal (with respect to $\kappa$) to $I$, and the fact that the Killing form is nondegenerate tells us that $I\cap I^\perp=0$, and thus $[I,I^\perp]=0$.

Now, if $\delta$ is an outer derivation — one not in $I$ — we can assume that it is orthogonal to $I$, since otherwise we just have to use $\kappa$ to project $\delta$ onto $I$ and subtract off that much to get another outer derivation that is orthogonal. But then we find that

$\displaystyle\mathrm{ad}(\delta(x))=[\delta,\mathrm{ad}(x)]=0$

since this bracket is contained in $[I^\perp,I]=0$. But the fact that $\mathrm{ad}$ is injective means that $\delta(x)=0$ for all $x\in L$, and thus $\delta=0$. We conclude that $I^\perp=0$ and that $I=D$, and thus that $\mathrm{ad}$ is onto, as asserted.

Posted by John Armstrong | Algebra, Lie Algebras

## 6 Comments »

1. I have a basic understanding of the nature of (finite) groups. I have played around with S3 for a few hours and days, and have respect for the depth of its properties. I experience both fear and awe when trying to think about larger and larger S groups. Algebra seems like an infinite maze. The fact there are only countably many possible algebraic expressions is some comfort, but not that much, because my brain feels decidedly finite. I am trying to get a grip on implications and applications. Does this theorem lead one (eventually) to a better understanding of polynomial equations? Is there something geometrical one can infer? Can I use it to write interesting computer programs? Comment by Ralph Dratman | September 11, 2012 | Reply

2. It helps simplify the project of classifying Lie algebras and their representations, which turns out to be of use in quite a lot of theoretical physics, for one thing. Comment by | September 11, 2012 | Reply

3. To me this seems like breaking large rocks for small change, but I guess you have to enjoy it. Still — to contradict myself — I actually do find this tempting. I wish someone would give me a few hints about those uses in physics. QCD or something like that? Comment by Ralph Dratman | September 12, 2012 | Reply

4. I’ve mentioned before — though quite a while ago, now — that Lie algebras arise as the “infinitesimal” versions of Lie groups. That is, if you look at a continuously-varying collection of symmetries, if you want to do calculus on it you’re going to end up using Lie algebras.
Since quite a lot of modern physics is about symmetries, this comes up a lot. As for large rocks and small change, I understand the frustration given how hard I’ve twisted your arm to force you to read this stuff. Comment by | September 12, 2012 | Reply

5. As for me, I find the algebraic approach much easier to wrap my brain around than the other presentations of Lie theory I’ve struggled with. (Still not _easy_, just considerably easier.) I’ve seen all the topics that have been covered so far in this series many times before, but until now have never had any clue what the heck they meant. I *really* appreciate the way John has presented this material; it’s finally starting to make a bit of sense to me. Comment by | September 14, 2012 | Reply

6. I apologize. I certainly did not mean to denigrate your work. On the contrary, I admire it. Otherwise, of course, I would not be reading and asking questions. Comment by Ralph Dratman | September 14, 2012 | Reply
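As a concrete sanity check of the theorem in the post (my own sketch, not part of the original discussion), one can verify numerically that for $\mathfrak{sl}_2$ the space of derivations is exactly three-dimensional, matching the image of $\mathrm{ad}$, by solving the linear conditions $\delta([x,y]) = [\delta(x),y] + [x,\delta(y)]$ on a basis.

```python
import numpy as np
from itertools import combinations

# Structure constants of sl2 in the ordered basis (e, f, h):
#   [e,f] = h,  [h,e] = 2e,  [h,f] = -2f, with c[i,j,k] the coefficient of
#   the k-th basis vector in [x_i, x_j].
c = np.zeros((3, 3, 3))
c[0, 1, 2], c[1, 0, 2] = 1.0, -1.0
c[2, 0, 0], c[0, 2, 0] = 2.0, -2.0
c[2, 1, 1], c[1, 2, 1] = -2.0, 2.0

def bracket(x, y):
    return np.einsum('i,j,ijk->k', x, y, c)

def defect(D):
    """Stack of D[x_i,x_j] - [D x_i, x_j] - [x_i, D x_j] over basis pairs;
    D is a derivation exactly when this vanishes."""
    rows = []
    for i, j in combinations(range(3), 2):
        xi, xj = np.eye(3)[i], np.eye(3)[j]
        rows.append(D @ bracket(xi, xj) - bracket(D @ xi, xj) - bracket(xi, D @ xj))
    return np.concatenate(rows)

# The defect is linear in D, so build its matrix column by column and
# read off the dimension of its null space (= dim Der).
A = np.column_stack([defect(E.reshape(3, 3)) for E in np.eye(9)])
print("dim Der(sl2) =", 9 - np.linalg.matrix_rank(A))        # expect 3

# The inner derivations ad(x_k) all satisfy the derivation identity.
ad = [c[k].T for k in range(3)]
print("every ad(x_k) is a derivation:",
      all(np.allclose(defect(a), 0) for a in ad))
```

Since the derivation space and the span of the $\mathrm{ad}(x_k)$ both have dimension 3, every derivation of $\mathfrak{sl}_2$ is inner, as the post proves in general.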
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 37, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9532477855682373, "perplexity_flag": "head"}
http://mathoverflow.net/questions/56281/domination-in-nice-lattices
Domination in Nice Lattices

Let an integer vector be nice when it has only two nonzero components, which sum to zero. So (0, 0, 3, 0, -3) and (-1, 0, 1, 0, 0) are examples of nice vectors in $n=5$ dimensions. Call a lattice nice if it is of the form $\mathbb{Z}$-span({$v_1, v_2, \dotsc, v_m$}), where all $v_i$ are nice. (Note: the $v_i$ are not necessarily linearly independent, so $m$ could be larger than the dimension; although WLOG $m \le \tbinom{n}{2}$.)

Is the following decision problem in P?

• INPUT: a nice lattice and a vector $x \in \mathbb{Z}^n$
• QUESTION: does the lattice contain a $y$ such that $y_i \ge x_i$ for all $i=1, \dotsc, n$?

Motivation and background:

• in general lattices, the problem is NP-complete (via the unbounded knapsack problem)
• if this problem lies in P, one can solve an interesting, more general problem

A possibly interesting partial result would be to demonstrate any useful structure for nice lattices! (I posted a flow formulation of the problem on cstheory.)

Trivial comment. If we take $v_1=(1,-1,0, \dots, 0), v_2=(0,1,-1,0, \dots, 0), \dots, v_{n-1}=(0, \dots, 1,-1)$, then the $\mathbb{Z}$-span of all the $v_i$'s is in fact all integer vectors whose coordinates sum to zero. – Tony Huynh Feb 22 2011 at 14:53

Not-as-trivial comment: There are at most finitely many vectors y which are both nice and greater than x (the sum of whose components must be at most 0). This can be transformed into finding integer solutions to a system of linear inequalities; if the transform went the other way, I think it would show the problem to be NP-complete. Gerhard "Ask Me About System Design" Paseman, 2011.02.22 – Gerhard Paseman Feb 22 2011 at 17:16

Of course, there is the brute force approach: if the sum of the coefficients of x is -r, try the O(r^(n-1)) nice vectors above x and see which are in the desired lattice. Gerhard "Is There A Better Way" Paseman, 2011.02.22 – Gerhard Paseman Feb 22 2011 at 17:23

Yes, your comments are valid. Tony, more generally than what you state, the problem becomes poly-time solvable if all of the nonzero entries form a chain under division (e.g., all 1s, 2s, 6s, 30s...) – Dave Pritchard Feb 24 2011 at 11:47
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 14, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.903630256652832, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/269012/the-order-of-the-group?answertab=active
# The order of the group

The order of the smallest possible non-trivial group containing elements $x$ and $y$ such that $x^7 = y^2 = e$ and $yx = x^4 y$ is

1. 1
2. 2
3. 7
4. 14

I am stuck on this problem. Can anyone help me please?

Since you are new, I want to give you some advice about the site: To get the best possible answers, you should explain what your thoughts on the problem are so far. That way, people won't tell you things you already know, and they can write answers at an appropriate level; also, people are much more willing to help you if you show that you've tried the problem yourself. If this is homework, please add the [homework] tag; people will still help, so don't worry. – Zev Chonoles♦ Jan 2 at 6:32

Are these questions from Subject GRE? – Ram Jan 2 at 6:33

Should we assume both $x$ and $y$ are nontrivial? – anon Jan 2 at 6:36

GRE means ????? – Prasanta Jan 2 at 6:36

I'm with @anon on this. Any group contains such elements $x$ and $y$, because we can always take $x=y=1$. Of course, in some groups there can be other suitable pairs $(x,y)$, but this is irrelevant if the question is formulated as it is. So some kind of an additional non-triviality assumption is required to make the question at least a bit interesting. – Dan Shved Jan 2 at 7:01

## 3 Answers

Hint:

1. Lagrange's theorem: For any finite group $G$, the order of every subgroup $H$ of $G$ divides the order of $G$.
2. $yx = x^4y$ and $y(yx) = x$; do the manipulations using $yx = x^4y$ again and again and you will get $x = x^m$ for some $m \le 7$. Use that fact.

Assume $y \neq e$. From $yx = x^4y$, left-multiply both sides by $y$: $x = y x^4y$ $\Rightarrow$ $x = x^4yx^3y$ $\Rightarrow$ $x = x^4x^4yx^2y$; finally you will get some relation between powers of $x$, and you can use that. – Ram Jan 2 at 6:51

@Ram, I get $x=x^{16}$, so $x=x^2$, so $x=e$... which is wrong?? – Prasanta Jan 2 at 6:59

If $x = e$, plug it back into the relations you are given. Can you find out anything new about $y$? – Billy Jan 2 at 7:07

It would be helpful for me to improve my answers if the reason for the down vote is provided. I gave this particular method as an answer since these types of questions appear very frequently in Indian exams and I felt it would be nice to give a general procedure. – Ram Jan 2 at 7:15

@Ram I downvoted because I feel that this is not the best approach to the question. There's no need to prove that $x=e$ when we can simply set it to equal $e$. Although I do agree that proving it can also improve one's understanding of the problem. – Dan Shved Jan 2 at 7:21

First solution: $x=y=e$ satisfies the relations. The smallest non-trivial group has order 2, and the relations can be satisfied within that group.

Now suppose we want $x$ and $y$ distinct (not stated in the question). If $x$ and $y$ are both non-trivial (i.e. $\neq e$) then the first relation shows that the group must contain non-trivial elements of orders 2 and 7, and then Lagrange means that the order of such a group must be divisible by 14. [Note we have not used the second relation or shown it is compatible with this conclusion.] So to get a non-trivial group of order less than 14, one of $x$ or $y$ must be the identity. If we set $y=e$ we see that $x^3=e$ (as well as $x^7=e$) so that $x=e$, which is not what we want. If we set $x=e$ then $y^2=e$, and this can be done in a group of order 2.
Up to isomorphism, there is only one group of order 1, one group of order 2, one group of order 7, and two groups of order 14. Figure out (or look up) what those groups are. Which ones are non-trivial? Look for suitable elements $x$ and $y$ in those groups. You may want to start from the smallest group and work your way up.

Nice answer, but I fear that might be a little advanced for him right now. – Ram Jan 2 at 6:56
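Spelling out the manipulation hinted at in the comments (my own write-up, using only the relations given in the question): since $y^2 = e$, conjugating twice by $y$ returns $x$, while $yxy^{-1} = x^4$ gives
$$x = y^2 x y^{-2} = y\,(y x y^{-1})\,y^{-1} = y\,x^4\,y^{-1} = \left(y x y^{-1}\right)^4 = x^{16},$$
so $x^{15} = e$. Combined with $x^7 = e$ and $\gcd(15,7) = 1$, this forces $x = e$, leaving only the condition $y^2 = e$; the group of order $2$ already realizes that, which is why the answer is $2$.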
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 48, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9372690916061401, "perplexity_flag": "head"}