http://mathoverflow.net/questions/2877?sort=oldest
## When does the sheaf direct image functor f_* have a right adjoint?
Say f: X → Y is a morphism of schemes. The sheaf direct image functor f_* always has a left adjoint, namely the sheaf inverse image functor f^* (with tensoring).
Under what (sufficient) conditions do we know that f_* has a right adjoint? What is it?
Answer to a related question (edit): If f_* preserves quasicoherence, then its restriction to quasicoherents f_*: QCoh(X) → QCoh(Y) has a right adjoint when f is affine (in particular, any closed immersion or finite morphism will do). The basic idea is to globalize the affine case (see Eric Wofsey's answer below); thanks to Pablo Solis for pointing me to page 6 of Ravi Vakil's notes explaining this.
In this question, I'm not restricting to the quasi-coherent categories. One reason for working with non-quasicoherents is that j_!, the "extension by zero" left adjoint to j^* for an open immersion j, doesn't take qcoh to qcoh.
-
## 4 Answers
Provided that X is quasi-compact and separated and f is separated, what is true is that Rf_* : D(X) -> D(Y) has a right adjoint f^!, where these are the unbounded derived categories of sheaves of modules with quasicoherent cohomology. This is the Grothendieck duality functor. Its existence can be viewed as a consequence of the fact that Rf_* in such a situation preserves coproducts. It is worth mentioning, I guess, that sometimes one does not need such big derived categories to produce an adjoint (for instance if X and Y are smooth and projective over some field).
One gets a right adjoint on the level of abelian categories of all sheaves of modules corresponding to the inclusion of a closed subscheme as well namely the inverse image of the subsheaf with supports.
There is the obvious cheat that if f: X -> Y is an isomorphism then the adjoint pair you know gives an equivalence, so that f^* is also right adjoint to f_*.
-
Are you saying "f is a closed immersion" is a sufficient condition? – Andrew Critch Oct 28 2009 at 2:11
Yes, I believe that is all one needs and then one can take the open complement and get a six functor diagram (I hope there is not some hypothesis I am forgetting). This is discussed in Artin's book on Grothendieck Topologies I believe. – Greg Stevenson Oct 28 2009 at 2:25
Really? I believe it if you restrict attention to the qcoh categories (in fact all you need then is for f to be affine), but for the larger categories I'm unconvinced... I also couldn't find Artin's book :( – Andrew Critch Oct 29 2009 at 5:12
Really... it works in the generality of sheaves of modules on ringed spaces and sheaves of abelian groups on topological spaces. See for instance Dan Murfet's notes therisingsea.org/notes/RingedSpaceModules.pdf the relevant bit is Proposition 97 on page 38 for the sheaves of modules. – Greg Stevenson Oct 29 2009 at 6:17
We sometimes (when !!??) have a second adjoint pair (f_!, f^!) between the sheaf categories, where f_! is direct image with proper support and f^! is its right adjoint. Now when f is proper one has f_! = f_*, so f^! is right adjoint to f_*.
You can find out what it does by adjoint yoga with the sheaf-Homs: Hom(f_*F,G)=Hom(F,f^!G).
Set F=O_X. Then (f^!G)(U)=Hom(O_X(U), f^!G(U))=Hom((f_* O_X)(U), G(U)). If you can determine the latter you know more. This is a very general answer, but it can help in concrete situations, boiling down the question to the knowledge of f_*O_X...
If you don't know whether the right adjoint exists, you can also try to define one via this equation.
-
Enclosing them in a pair of backticks works – Greg Stevenson Oct 27 2009 at 20:59
The underscores only make italics in the preview; in the actual post it works fine. – Eric Wofsey Oct 27 2009 at 21:00
You were right, thanks – Peter Arndt Oct 27 2009 at 21:06
Thanks for the answer! Where can I read about this? – Andrew Critch Oct 27 2009 at 21:19
The general categorial picture is analyzed in here: tac.mta.ca/tac/volumes/11/4/11-04abs.html For the scheme part I am not sure, I have been skimming through EGA, Hartshorne's Residues and Duality and Konrad's Grothendieck Duality and Base Change but without success... - now I should go back to work! – Peter Arndt Oct 27 2009 at 22:17
If f_* has a right adjoint, it must preserve colimits and hence be right-exact. Thus a necessary condition is that the higher derived functors vanish. In particular, when everything is affine and we have X=Spec B, Y=Spec A, then I believe the adjoint exists and is given by M \mapsto Hom_A(B,M).
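At the level of modules, this right adjoint is coinduction along the ring map A → B; a sketch of the adjunction isomorphism (assuming, as in the answer, that we identify quasicoherent sheaves with modules):

```latex
% For a ring map A -> B, a B-module N restricts to an A-module N|_A
% (this is the module-level f_*). Its right adjoint is coinduction:
%   M |--> Hom_A(B, M).
\operatorname{Hom}_A(N|_A,\, M)
  \;\cong\;
\operatorname{Hom}_B\bigl(N,\ \operatorname{Hom}_A(B, M)\bigr),
\qquad
\varphi \;\longmapsto\; \bigl(n \mapsto (b \mapsto \varphi(b\,n))\bigr).
```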
-
Are you sure about the affine case even when the modules aren't quasi-coherent? I know the corresponding result for A-modules is true, but a non-quasicoherent O_A-Module (sheaf on Spec A) won't be the "tilde" of any A-module... – Andrew Critch Oct 28 2009 at 2:07
Oh, I was assuming we were in the category of quasicoherent sheaves. – Eric Wofsey Oct 28 2009 at 2:56
I agree with the answer from Greg, but in a different formalism: he used derived categories and I do not. Let f: X ---> Y be a morphism of schemes. If f is an affine morphism, then the direct image functor f_* : C_X ---> C_Y has a right adjoint functor f^!, where C_X and C_Y are the categories of sheaves or, in particular, the categories of quasi-coherent sheaves. So, in general, all we need is that the scheme is quasi-compact and quasi-separated. (I believe the quasi-compactness can be dropped, but I need some time to check the globalization; I believe the flag variety of an affine Kac-Moody algebra, which is not quasi-compact, falls into this case.) The reference is M. Kontsevich and A. Rosenberg, Noncommutative spaces and flat descent, MPIM preprint.
There is another related question, in the category of quasi-coherent sheaves. Given a scheme morphism f: X ---> Y, we can always form the inverse image functor f^* : QCoh(Y) ---> QCoh(X), but the direct image functor does not always exist. It does exist if the scheme is quasi-compact and quasi-separated; there are of course weaker conditions. For this case, one can see the following papers: 1. D. Orlov, Quasi-coherent sheaves in commutative and noncommutative geometry; 2. M. Kontsevich, A. Rosenberg, Noncommutative stacks, MPIM preprint; 3. SGA 6.
-
I find your answer difficult to understand... twice you say "the scheme is quasi compact and quasi separated", but which scheme are you talking about? X? Y? Or are you talking about the morphism X --> Y? – Andrew Critch Nov 17 2009 at 3:29
Oh, I am sorry for not pointing that out. I mean that quasi-compactness and quasi-separatedness of the scheme Y imply the existence of the direct image functor which is right adjoint to f^*. For your original question, I mean the scheme X should be quasi-compact and quasi-separated. – Shizhuo Zhang Nov 17 2009 at 6:41
Shizhuo, the question in the title is about the sheaf direct image, not about the direct image for quasicoherent modules (the latter direct image may even not exist for a general morphism of schemes). – Zoran Škoda May 15 2011 at 19:49
http://wiki.panotools.org/index.php?title=Lens_Correction_in_PanoTools&diff=13055&oldid=13054
# Lens Correction in PanoTools
## Revision as of 04:34, 24 January 2011
This article is a mathematical analysis of how the panotools library computes lens correction parameters, why those parameters are not portable, and how they could be made portable. For a more general and use-oriented description of the current way panotools deals with lens distortion see Lens correction model
The PanoTools library implements an effective, but rather idiosyncratic method for correcting lens projections, that causes a good deal of puzzlement. Lens parameters optimized for one image format generally do not work for a different format; even rotating a set of images 90 degrees before aligning them produces different and incompatible lens parameters. One would expect that there must be a way to convert either of those parameter sets to a common form, that would apply equally well to both formats, or indeed to any image taken with the same lens. To see how that might be done, I have made a detailed analysis of PanoTools lens correction computations, based on the code in historic as well as current versions of libpano and helpful discussions with Helmut Dersch.
## Why Lens Correction?
To make a panoramic image from photographs, it is essential to be able to calculate the direction in space corresponding to any given position in a given photo. Specifically, we need to know the angles between the view directions of the photos (the alignment of the images), and a radial projection function that relates the distance of a point from image center to the true angle of view, measured from the optical axis of the lens. Given a set of control points linking the images, PanoTools estimates both the alignment and the lens projection by a nonlinear least squares fitting procedure -- optimization. Using the fitted lens parameters, the stitcher can correct each image to match the ideal geometry of the scene, according to whatever projection is chosen for the panorama. Done right, that makes all the images fit together perfectly; moreover, it yields a panoramic image that seems to have been made with a perfect lens.
## Ideal Lens Models
The radial projection curve of a real lens may approximate some known mathematical function, but in practice it must be determined experimentally, a process known as calibrating the lens. A calibration is a parametrized mathematical model, fitted to experimental data. The typical model consists of an ideal angle-to-radius function, and a polynomial that converts the ideal radius to the actual radius measured on the image.
Like many lens calibration programs, libpano uses just two ideal functions to model lenses: rectilinear, for 'normal' lenses, and 'fisheye', for all others. The rectilinear projection has radius proportional to the tangent of the view angle. PT's 'fisheye', better known as the equal-angle spherical projection, has radius proportional to the angle itself. The constant of proportionality is the lens focal length, F. With angle A in radians, and R the ideal radius, the formulas are
Rectilinear: $\textstyle \frac R F = \tan(A)$
Equal-angle: $\textstyle \frac R F = A$
Of course R and F have to be measured in the same units. If we have F in mm, then R is in mm also. If we want to measure R in pixels, then we need F in pixels. In any case, F is the constant of proportionality between the actual radius and the value of a trigonometric function that defines the basic shape of the projection.
In physical optics, focal length is defined as the first derivative of R by A, at A = 0. That is easy to see if we write $\textstyle R = F A$ or $\textstyle R = F \tan(A)$, because the slopes of A and tan(A) are both 1 at A = 0. This is also true of other trigonometric functions commonly used as ideal lens projections:
Equal-Area: $\textstyle \frac R F = 2\sin\left(\frac A 2 \right)$
Stereographic: $\textstyle \frac R F = 2\tan\left(\frac A 2 \right)$.
The dimensionless quantity $\textstyle N = \frac R F$ is the normalized ideal radius. Multiplying N by the focal length, in any units, gives the ideal image radius in the same units.
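The four ideal projections and the normalization $N = \frac R F$ are easy to check numerically. A small sketch (function names are mine, values illustrative):

```python
import math

# Normalized ideal radius N = R / F for the four ideal projections above;
# angle a in radians. (Function names are mine, not libpano's.)
def rectilinear(a):   return math.tan(a)
def equal_angle(a):   return a
def equal_area(a):    return 2.0 * math.sin(a / 2.0)
def stereographic(a): return 2.0 * math.tan(a / 2.0)

# All four have slope 1 at a = 0, consistent with F = dR/dA on the axis.
eps = 1e-6
for proj in (rectilinear, equal_angle, equal_area, stereographic):
    assert abs(proj(eps) / eps - 1.0) < 1e-6

# Multiplying N by F (in any units) gives the ideal radius in those units:
F_mm = 8.0                        # hypothetical fisheye focal length
a = math.radians(60)
R_mm = F_mm * equal_angle(a)      # ideal image radius in mm
```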
## Generic Correction Scheme
The difference between the real lens projection and the ideal one is modeled by an adjustable correction function that gives the observed radius as a function of the ideal radius. The adjustable part is almost always a polynomial, because it is easy to fit polynomials to experimental data. The argument to the polynomial should be the normalized ideal radius,
$\textstyle N = \frac R F$,
because that makes the polynomial coefficients independent of how image size is measured. The constant term is 0 because both radii are zero at the same point. If the coefficient of the linear term is 1, so that the first derivative at 0 is 1, then the value of the polynomial will be the normalized observed radius, n = r / F. Multiplying n by the focal length, in any units, gives the observed image radius in the same units:
$\textstyle r = F n$.
Many calibration packages use a polynomial with only even order terms beyond the first:
$\textstyle n = N + a N^2 + b N^4 + c N^6$.
Equivalently
$\textstyle n = N ( 1 + a N + b N^3 + c N^5 )$
The expression in parentheses is the ratio of observed to ideal radius, which is expected to be close to 1 everywhere if the ideal model function is well chosen.
## PanoTools Correction Scheme
Lens correction in PanoTools is unusual in several respects. First, it ignores the physical parameters of the lens (focal length) and camera (pixel size). Instead, it computes angle-to-radius scale factors from image dimensions and fields of view, as described below. All correction computations are in terms of image radii, measured in pixels, rather than the normalized radii described above. However, normalized radii are evaluated implicitly.
Second, the correction is computed in equal-angle spherical coordinates, rather than camera coordinates. Observed image points are found by remapping those coordinates to the ideal lens projection, and rescaling them according to the ratio of pixel sizes in the source and ideal images.
Third, the correction polynomial is normalized to hold a certain radius, $\textstyle r_0$, constant. It is a cubic polynomial that computes the ratio of observed to ideal radius. Its argument is $\textstyle \frac R {r_0}$, and its constant term is set so that the result is exactly 1 when the argument is 1, that is, when $\textstyle R = r_0$. With
$\textstyle X = \frac R {r_0}$
The correction factor is
$\textstyle x = (1 - a - b - c) + a X + b X^2 + c X^3$,
and the observed radius is given by
$\textstyle r = R x$.
The observed radius is thus formally a 4th order polynomial in R:
$\textstyle r = s R + t R^2 + u R^3 + v R^4$,
where $\textstyle s = (1-a-b-c),\ t = \frac a {r_0},\ u = \frac b {{r_0}^2},\ v = \frac c {{r_0}^3}$.
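The equivalence between the normalized cubic factor and the expanded quartic can be confirmed numerically; a sketch with illustrative parameter values (a, b, c, r_0 are made up):

```python
# Check that the PT correction-factor form and the expanded quartic agree.
a, b, c, r0 = 0.01, -0.02, 0.005, 1500.0

def r_factor_form(R):
    # r = R * x, with x the normalized cubic correction factor
    X = R / r0
    x = (1 - a - b - c) + a*X + b*X**2 + c*X**3
    return R * x

# Coefficients of the formally 4th-order polynomial in R, as in the text.
s, t, u, v = (1 - a - b - c), a/r0, b/r0**2, c/r0**3
def r_quartic_form(R):
    return s*R + t*R**2 + u*R**3 + v*R**4

for R in (0.0, 100.0, 750.0, 1500.0, 2000.0):
    assert abs(r_factor_form(R) - r_quartic_form(R)) < 1e-9 * max(R, 1.0)
# Normalization check: the correction factor is exactly 1 at R = r0.
assert abs(r_factor_form(r0) - r0) < 1e-9
```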
The normalization makes the PanoTools polynomial equivalent to the generic one, but with different coefficients. This can be seen as follows. The ideal radius is
$\textstyle R = F N$
where F is the ideal focal length in pixels, so we can write the adjusted radius as
$\textstyle r = F N\ \operatorname{poly}\left(\frac {F N} {r_0} \right)$,
If $r_0$ is proportional to F, then the quotient is proportional to N, and the polynomial is equivalent to one whose argument is N. That is the case when $r_0$ is proportional to source image size, which is proportional to F by definition. But the proportionality factor varies with source image format, so the PanoTools coefficients also depend on source format.
The overall computation proceeds as follows. PanoTools computes the ideal radius R by mapping a point in the panorama (which plays the role of the ideal image) to equal angle spherical projection. Then $\textstyle R = \sqrt{ h^2 + v^2 }$, where h and v are the pixel coordinates relative to the center of the equal-angle projection. Then PT's radius() function computes x as described, and returns scaled coordinates ( h x, v x ). If the lens is rectilinear, PT next remaps those coordinates to rectilinear; if it is a fisheye, no remapping is needed. In either case the coordinates are finally rescaled to account for any difference in resolution between the panorama and the source image. The scale factor is computed from the dimensions and angular fields of view of the panorama and the source image, as follows.
$\textstyle d = \frac {half\ width\ of\ pano} {\operatorname{A2Npano}\ {(half\ hfov\ of\ pano)}}$,
$\textstyle e = \frac {half\ width\ of\ source} {\operatorname{A2Nsource}\ {(half\ hfov\ of\ source)}}$,
where A2Npano() and A2Nsource() are the ideal projection functions for panorama and lens. There are many panorama projections but only two lens projections:
$\textstyle {\operatorname{A2Nsource}( a )} = \tan( a )$ for rectilinear lenses
$\textstyle {\operatorname{A2Nsource}( a )} = a$ for fisheye lenses.
The scale factor from panorama to source coordinates is
$\textstyle s = \frac e d$.
Factors $d$ and $e$ are focal lengths in pixels, because A2N() yields the normalized radius, equal to $\textstyle \frac R F$. For the panorama, which follows an ideal projection, $d$ is identical to $F_{pano}$. In fact, under the name “distance factor”, $d$ is used by many of libpano's coordinate transformation functions to convert radius in pixels to the ideal normalized radius in trigonometric units.
The true source projection is unknown, so $e$ is an estimate of $F_{source}$ according to the fitted correction parameters. Since hfov is one of those parameters, $e$ will be proportional to the true $F_{source}$; the constant of proportionality will approach 1 as the fitted polynomial coefficients approach 0.
In other words, $e$ is a biased estimate of $F_{source}$. However, the overall correction is equivalent to the generic one because the bias in the correction polynomial cancels the bias in the focal length. The only real defect in the PanoTools scheme is that its parameters work for just one image format.
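The scale-factor computation above can be sketched as follows, assuming an equirectangular panorama (whose horizontal mapping is equal-angle) and a rectilinear source lens; all numbers are illustrative and the function names are mine:

```python
import math

# Ideal projection functions for panorama and lens, as described above.
def A2N_pano(a):   return a            # equal-angle (equirectangular pano)
def A2N_source(a): return math.tan(a)  # rectilinear lens

pano_width, pano_hfov     = 8000, math.radians(360)   # illustrative
source_width, source_hfov = 4000, math.radians(65)    # illustrative

d = (pano_width / 2) / A2N_pano(pano_hfov / 2)        # F_pano, in pano pixels
e = (source_width / 2) / A2N_source(source_hfov / 2)  # estimate of F_source
s = e / d                                             # pano -> source scale
```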
## Portable Correction Coefficients
In the generic calibration scheme, dividing image coordinates by F makes it possible for the fitted correction parameters (apart from F) to be independent of both image format and physical pixel size, so that they apply to any image made with the given lens. As explained above, dividing image coordinates by any factor proportional to F is logically sufficient; however values other than F itself lead to non-portable parameter values.
In the PanoTools scheme, the "distance factor" d, which is the focal length in panorama pixels, would be the appropriate divisor. That would make the argument of the radius scaling polynomial the ideal normalized radius,
$\textstyle N = \frac {R_{pano}} {F_{pano}}$
and the fitted coefficient values would be portable.
Alternatively the current non-portable coefficients can be converted using data available inside libpano. With
$\textstyle k = \frac d {r_0}$,
$\textstyle w' = w$,
$\textstyle a' = a k$,
$\textstyle b' = b k^2$,
$\textstyle c' = c k^3$
are the coefficients of a polynomial in $\textstyle N = \frac R d$ that computes the same radius correction factor as the PT polynomial. The constant term w' is no longer a simple function of the other three, however it can be reduced to 1 by dividing all coefficients by w'. The reduced coefficients are
$\textstyle W = 1$
$\textstyle A = a \frac k w$
$\textstyle B = b \frac {k^2} w$
$\textstyle C = c \frac {k^3} w$
So the portable radius mapping is
$\textstyle r = R ( 1 + A N + B N^2 + C N^3 )$
Along with the ideal function A2Nsource(), which gives N as a function of angle, this constitutes a portable lens correction function.
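A numeric check that the reduced coefficients reproduce the PT correction factor up to the constant scale $w$ (which is absorbed into the focal-length estimate $e$); all parameter values are illustrative:

```python
# k = d / r0, and the reduced (portable) coefficients A, B, C as in the text.
a, b, c = 0.02, -0.01, 0.004      # illustrative PT coefficients
r0, d = 1500.0, 2400.0            # illustrative normalization radius and d
w = 1 - a - b - c                 # PT constant term
k = d / r0
A, B, C = a*k/w, b*k**2/w, c*k**3/w

def pt_factor(R):                  # original PT correction factor
    X = R / r0
    return w + a*X + b*X**2 + c*X**3

def portable_factor(R):            # reduced polynomial in N = R / d
    N = R / d
    return 1 + A*N + B*N**2 + C*N**3

# The two agree up to the constant scale w:
for R in (0.0, 500.0, 1500.0, 3000.0):
    assert abs(pt_factor(R) - w * portable_factor(R)) < 1e-12
```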
## Fully Portable Corrections
To make a lens correction fully portable also requires expressing the fitted focal length in physical units rather than in pixels.
The focal length in pixels must be known in order to compute, or to apply, any lens calibration, portable or not. Physically, this quantity depends on lens focal length and camera properties. In most cases today, equipment manufacturers' specifications provide the needed data:
$\textstyle F_{pixels} = F_{mm} \frac {sensor\ width\ in\ pixels} {sensor\ width\ in\ mm}$.
In any practical calibration scheme F is actually an adjustable parameter. However the fitted value is expected to be quite close to the one implied by these physical specifications, the main uncertainty being how accurately the nominal lens focal length reflects the true one.
With $h$ the width of a pixel in mm, the portable form of the fitted lens focal length is
$\textstyle F_{mm} = h F_{pixels} = h e$, scale factor e defined above.
To adapt a portable correction to a given image format it is only necessary to calculate $\textstyle F_{pixels}$ from the calibrated $\textstyle F_{mm}$ and the given sensor dimensions and pixel counts.
Unfortunately PanoTools does not use any physical parameters. So as it stands now, portable lens calibrations would have to be calculated, saved and restored by front-end software that has access to the camera's sensor size. But if physical pixel size were added to the PanoTools parameter set, libpano could handle portable lens calibrations autonomously.
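The conversion between physical and pixel units is a one-liner; a sketch with illustrative numbers (a hypothetical 8 mm lens on a sensor 5616 pixels / 36 mm wide):

```python
# Focal length in pixels from manufacturer specs, per the formula above.
F_mm = 8.0                              # nominal lens focal length, mm
sensor_width_px, sensor_width_mm = 5616, 36.0
F_pixels = F_mm * sensor_width_px / sensor_width_mm

# Going back to physical units with h = pixel pitch in mm:
h = sensor_width_mm / sensor_width_px
assert abs(h * F_pixels - F_mm) < 1e-12
```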
-- 23 Jan 2010 T K Sharpless
http://math.stackexchange.com/questions/150727/locus-of-z-as-cartesian-equation
# Locus of Z as cartesian equation
Could you please help with this locus problem? I think I am aiming for a cartesian equation in terms of $x$ and $y$ that may look like a circle equation e.g. $(x+a)^2 + (y+b)^2$ but I'm not sure.
Given there is a locus of $z$ such that $$\frac{|z-12j|}{|z+36|}=3,$$ then $|z-12j| = 3|z+36|$.
Now I want to write the locus of $z$ as a cartesian equation in terms of $x$ and $y$. Let $z=x+yj$. $$\begin{align*} |x+yj - 12j| &= 3|x+yj+36|\\ |x+(y-12)j| &= 3|(x+36)+yj| \end{align*}$$ Where should I go from here?
-
Square both sides and use the fact that $|a+bj|^2 = a^2 + b^2$, then simplify. – Eric Stucky May 28 '12 at 11:12
You can type equations directly, instead of creating images and then linking to them. In fact, it's better not to rely on images. – Arturo Magidin May 28 '12 at 11:46
Thanks @ArturoMagidin I'd like to learn how to type equations, is there some instructions for that somewhere on here? – NSDigital May 28 '12 at 11:50
– Arturo Magidin May 28 '12 at 11:53
Note: $(x+a)^2 + (y+b)^2$ is not an equation. A dead giveaway that it is not an equation is the fact that it does not have an equal sign in it. – Arturo Magidin May 29 '12 at 17:21
## 1 Answer
Just as with real numbers, where squaring removes absolute value bars, you can do something similar with complex numbers. Remember that if $a$ and $b$ are real numbers (as your $x$ and $y$ appear to be, though you never actually said so), we have $$|a+bj| = \sqrt{a^2+b^2},$$ so that $|a+bj|^2 = a^2+b^2$. So from $$|x + (y-12)j| = 3|(x+36)+yj|$$ we get $$|x+(y-12)j|^2 = 9|(x+36)+yj|^2$$ and thence $$\begin{align*} x^2 + (y-12)^2 &= 9\Bigl((x+36)^2 + y^2\Bigr)\\ x^2 + y^2 - 24y + 144 &= 9\Bigl( x^2 + 72x + 1296 + y^2\Bigr)\\ 0 &= 8x^2 + 648x + 8y^2 + 24y + 11520. \end{align*}$$ Since both sides of your original equation necessarily have the same sign, there is no problem with the possible introduction of spurious solutions, so you'll get a specific conic out of this equation.
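A quick numeric sanity check: completing the square in the final equation gives a circle, and points on it satisfy the original modulus condition (a sketch; variable names are mine):

```python
import math

# The conic 8x^2 + 648x + 8y^2 + 24y + 11520 = 0 is, after dividing by 8
# and completing squares, a circle with center (-81/2, -3/2).
cx, cy = -81/2, -3/2
r = math.sqrt(cx**2 + cy**2 - 1440)          # radius from completed squares

# Sample points on the circle and confirm |z - 12j| = 3 |z + 36|.
for t in (0.0, 1.0, 2.5):
    x, y = cx + r*math.cos(t), cy + r*math.sin(t)
    lhs = math.hypot(x, y - 12)              # |z - 12j|
    rhs = 3 * math.hypot(x + 36, y)          # 3 |z + 36|
    assert abs(lhs - rhs) < 1e-9
```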
-
http://math.stackexchange.com/questions/102287/counting-distinct-sets-of-pairwise-distances-for-some-number-of-passengers-loade
# Counting distinct sets of pairwise distances for some number of passengers loaded onto different cars of a ferris wheel
We load three different passengers - A, B, and C - onto a ferris wheel with $M$ cars symmetrically distributed around the wheel. Let d(AB), d(AC), and d(BC) represent the number of cars between all possible sets of two passengers going clockwise around the ferris wheel. If no two passengers are loaded in the same car, how many distinct sets, $S$, of these three pairwise distances (d(AB), d(AC), d(BC)) are possible? How does this generalize as the number of passengers loaded in distinct cars increases to $M$?
To clarify, each passenger is distinct/labeled, and the set of all possible pairwise distances should form an ordered list. For example, (d(AB), d(AC), d(BC)) = (2,3,4) would be counted as distinct from (d(AB), d(AC), d(BC)) = (3,2,4). If one filled the ferris wheel with $(M - 1)$ passengers, there would therefore be $S = M$ possible sets of these pairwise distances.
Correct me if I'm wrong, but I believe this is equivalent to asking for the number, $S$, of possible ordered lists of $N$ nonnegative integers for some $N < M$ such that their sum is equal to $(M - N)$.
Results of brute force calculations:
For $N$ = 3 and $M$ = 3 to 10, a brute force calculation shows that the number of such distinct ordered lists, $S$, increases as {1,3,6,10,15,21,28,36}, which fits the relation: $S = \frac{1}{2}(M-2)(M-1)$, or $S = \frac{1}{2}\frac{(M-1)!}{(M-N)!}$. This relation holds when $M = 100$ and $S = 4851$.
For $N$ = 4 and $M$ = 4 to 10, a brute force calculation shows that the number of such distinct ordered lists, $S$, increases as {1,4,10,20,35,56,84}, which fits the relation: $S = \frac{1}{6}(M-3)(M-2)(M-1)$, or $S = \frac{1}{6}\frac{(M-1)!}{(M-N)!}$. This relation holds when $M = 20$ and $S = 969$.
For $N$ = 5 and $M$ = 5 to 10, a brute force calculation shows that the number of such distinct ordered lists, $S$, increases as {1,5,15,35,70,126}, which fits the relation: $S = \frac{1}{24}(M-4)(M-3)(M-2)(M-1)$, or $S = \frac{1}{24}\frac{(M-1)!}{(M-N)!}$.
For $N$ = 6 and $M$ = 6 to 10, we have $S$ increasing as {1,6,21,56,126}, which fits the relation $S = \frac{1}{120}\frac{(M-1)!}{(M-N)!}$.
From these results, one might guess that $S = C\cdot\frac{(M-1)!}{(M-N)!}$, where $C$ is some fractional coefficient. Using the predicted coefficients for $N = {3,4,5,6}$, which are {$\frac{1}{2}$,$\frac{1}{6}$,$\frac{1}{24}$,$\frac{1}{120}$}, I'd say that $C = \frac{1}{(N-1)!}$, which gives us a solution of:
$S = \frac{1}{(N-1)!}\frac{(M-1)!}{(M-N)!}$
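If it helps, here is a small Python brute force (assuming the reformulation above: ordered lists of $N$ nonnegative integers summing to $M-N$) that checks this conjecture for all the cases listed:

```python
from itertools import product
from math import comb

def brute_force(M, N):
    # count ordered lists of N nonnegative integers summing to M - N
    return sum(1 for t in product(range(M - N + 1), repeat=N)
               if sum(t) == M - N)

# conjecture: S = (M-1)! / ((N-1)! (M-N)!) = C(M-1, N-1)
for N in (3, 4, 5, 6):
    for M in range(N, 11):
        assert brute_force(M, N) == comb(M - 1, N - 1)
```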
Is this true?
You're asking for the number of partitions of $M$ into 3 parts. You can use that phrase for a websearch. I'm pretty sure the question has been asked and answered on this website. – Gerry Myerson Jan 25 '12 at 12:30
@Gerry Myerson, the answer should be greater than the number of integer partitions of the cars without passengers, no? I'm asking for the count of all sets of possible pairwise distances between distinct/labeled passengers. – Tess Jan 25 '12 at 12:37
I missed that $(2,3,4)$ and $(3,2,4)$ count as different. So then it's the number of solutions in positive integers of $x+y+z=M$, which has also come up on this website more than once. – Gerry Myerson Jan 25 '12 at 23:01
## 1 Answer
Your numerical data don’t agree with your statement of the problem, though they’re related. I’ll solve the problem as you stated it and show what related problem your numerical data solve.
Let the riders be $A_0,A_1,\dots,A_{N-1}$. Number the cars clockwise from $0$ through $M-1$, with $A_0$ occupying Car $0$. If $A_k$ occupies car $c_k$, the distance from $A_0$ to $A_k$ is $c_k$. Thus, the relative placements of all of the riders are completely determined by the list $\langle c_1,\dots,c_{N-1}\rangle$ of distances from $A_0$, and the other distances between riders can be inferred from these, since $M$ is known. The number of distinct ordered lists of distances is therefore equal to the number of lists of the form $\langle c_1,\dots,c_{N-1}\rangle$, where $c_k$ is the distance from $A_0$ to $A_k$. This is simply the number of $(N-1)$-tuples of distinct integers from the set $\{1,2,\dots,M-1\}$, which is $$(M-1)(M-2)\cdots(M-N+1)=\frac{(M-1)!}{(M-N)!}\;:$$ there are $M-1$ possible locations for $A_1$, after which $A_2$ can occupy any of the $M-2$ remaining cars, and so on.
For $M=N=3$, for instance, there are $(3-1)(3-2)=2$ possible lists, $\langle 1,2\rangle$ and $\langle 2,1\rangle$, depending on whether the clockwise seating order is $A_0,A_1,A_2$ or $A_0,A_2,A_1$. Similarly, for $N=3$ and $M=4$ there are $6$ possible lists, not $3$: $\langle 1,2\rangle,\langle 2,1\rangle,\langle 1,3\rangle,\langle 3,1\rangle,\langle 2,3\rangle$, and $\langle 3,2\rangle$.
Suppose, now, that you don’t care about the identities of the riders other than $A_0$. That is, you’re interested just in the set of inter-rider distances, not in which distance goes with which pair of riders. Then all you need to know is which set of $N-1$ cars is occupied by riders $A_1$ through $A_{N-1}$: you need to know the set $\{c_1,\dots,c_{N-1}\}$ of numbers, but not the order in which its members are associated with riders $A_1$ through $A_{N-1}$. You can still infer from this set what all of the other inter-rider distances are; you just don’t know which distance goes with which pair of riders. For this problem you’re counting the $(N-1)$-element subsets of the $(M-1)$-element set $\{1,\dots,M-1\}$, a number which is given by the binomial coefficient
$$\binom{M-1}{N-1}=\frac{(M-1)!}{(N-1)!(M-N)!}\;.$$
This expression gives the numbers from your brute force calculations.
The relationship between the two counts is straightforward: each set of $N-1$ car numbers can be arranged in an ordered list in $(N-1)!$ ways, so the number of lists must be $(N-1)!$ times the number of unordered sets, and indeed
$$\frac{(M-1)!}{(M-N)!}=(N-1)!\binom{M-1}{N-1}\;.$$
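For what it's worth, both counts (and the relation between them) are easy to confirm by brute force in Python; the `cars` range below is just the set $\{1,\dots,M-1\}$ of possible car numbers for the riders other than $A_0$:

```python
from itertools import combinations, permutations
from math import comb, factorial, perm

M, N = 10, 4
cars = range(1, M)   # candidate cars for riders A_1 .. A_{N-1}

ordered = sum(1 for _ in permutations(cars, N - 1))    # ordered distance lists
unordered = sum(1 for _ in combinations(cars, N - 1))  # sets of occupied cars

assert ordered == perm(M - 1, N - 1)            # (M-1)!/(M-N)!
assert unordered == comb(M - 1, N - 1)          # binomial coefficient
assert ordered == factorial(N - 1) * unordered  # the relation above
```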
http://physics.stackexchange.com/questions/1683/mechanics-around-a-rail-tank-wagon/1800
# Mechanics around a rail tank wagon
Some time ago I came across a problem which might be of interest to Physics.SE, I think. It sounds like a homework problem, but I think it is not trivial (I am still thinking about it):
Consider a rail tank wagon filled with liquid, say water.
Suppose that at some moment $t=0$, a nozzle is opened at left side of the tank at the bottom. The water jet from the nozzle is directed vertically down. Question:
What is the final velocity of the rail tank wagon after emptying?
Simplifications and assumptions:
Rail tracks lie horizontally, there is no rolling friction or air resistance, the speed of the water jet from the nozzle is subject to Torricelli's law, the horizontal cross-section of the tank is constant, and the water surface inside the tank remains horizontal.
Data given:
$M$ (mass of the wagon without water)
$m$ (initial mass of the water)
$S$ (horizontal cross-section of the tank)
$S\gg s$ (cross sectional area of the nozzle)
$\rho$ (density of the water)
$l$ (horizontal distance from the nozzle to the centre of the mass of the wagon with water)
$g$ (gravitational acceleration)
At the moment I am wondering whether dimensional methods can shed light on a way to the solution. One thing is obvious: if $l=0$ then the wagon will not move at all.
@Pavel: what kind of argument is that? There is also no reason for it to not start moving. Except if you provide such reason and you didn't provide any reason (in particular, you haven't used any of the assumptions of the problem in your answer). This problem is certainly very non-trivial and it reminds me of Feynman's problem of the sprinkler (to which he provided at least two opposite answers at different times and ended up doing an experiment to make sure). – Marek Dec 6 '10 at 21:03
@Martin: thanks for this surprisingly difficult problem ! – Frédéric Grosshans Dec 7 '10 at 9:17
This has to be one of the best questions on this site :-) – Sklivvz♦ Dec 8 '10 at 21:35
I found that after about 10 or 20 comments, discussing this problem was frustrating. My mindset was, "I spent three hours of hard focus working on this problem. If everyone else would stop yapping for a while and do the same, they would see that I am right." Looking back, I realize this attitude was pretty arrogant, and served only to upset me and probably piss off some of my correspondents. So I would like to apologize in general for any curt or rude comments I made here and retire from further conversation. Thank you, Martin, for the interesting problem. – Mark Eichenlaub Dec 10 '10 at 22:42
## 10 Answers
Interesting problem. I think my approach and answer are very close to other posted solutions. I also added a possible scenario. The basic summary: it is the change in the average momentum of the water in the wagon that causes the wagon to move. Requiring the water to distribute itself evenly in the wagon gives this relation:
• average momentum of water in the wagon = $l\times$ mass flow out of wagon
In cases where the wagon has been and forever shall expel water at a constant rate, the wagon stands still. Imagine it being refilled from above its center of mass. You can actually do this same problem with an empty cart being filled from above instead of emptying below, with $l$ then being the horizontal distance from the wagon's center of mass to the point at which the water falls in.
The wagon does move if there is some fluctuation in the mass flow out of the wagon either by abrupt starts/stops or by running out of water.
Variables
• $t_{c}\to$ time when wagon runs dry
• $l\to$ distance from center of mass of wagon to nozzle, positive $l$ implies nozzle is on the right side of the wagon
• $x(t)\to$ center of mass of wagon
• $x_{cm}(t)\to$ center of mass of everything
• $h(t)\to$ height of water in the container
• $m(t)\to$ total mass of the wagon including any water it holds
• $m_{w}\to$ mass of initial water
• $m_{c}\to$ mass of the wagon; the c is for the critical point of $m(t)$ when all the water is gone.
Originally c was for container but it makes sense $m(t_{c})=m_c$
Frame of Reference
• $x(0)=0$
• $\dot{x}(0)=0$
Drainage
I'm going to sidestep the issue of initial conditions for now and treat the system as if the nozzle had always been open and water had always been running, concerned only with how a container with a constant cross-section, $S$, would drain.
The three relations we need — Torricelli's law, the mass flow $-\dot{m}(t)$, and the mass of the system — are:
$$v(t)=\sqrt{2 g h(t)}$$ $$-\dot{m}(t)=\rho s v(t)$$ $$m(t)=\rho S h(t) + m_{c}$$ Combine to eliminate $m(t)$ and $v(t)$ $$\frac{\dot{h}}{\sqrt{h(t)}}=-\frac{s}{S}\sqrt{2 g}$$
The answer to the differential equation: $$h(t)=h(0){\left(1-t\sqrt{\frac{g {s}^{2}}{2 {S}^{2} h(0)}}\right)}^{2}$$ $$h(t)=h(0){\left(1-\frac{t}{t_{c}}\right)}^{2}$$ where $t_{c}=\sqrt{\frac{2 {S}^{2} h(0)}{g {s}^{2}}}$ and $h(t>t_c)=0$
from there we get $m(t)$: $$m(t)=\rho S h(0) {(1-\frac{t}{t_{c}})}^{2} + m_{c}$$ $$m(t)=m_{w} {(1-\frac{t}{t_{c}})}^{2} + m_{c}$$ and for $m(t>t_{c})$ is simply $m_{c}$, the mass of the wagon
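As a quick sanity check of this closed form (a sketch with assumed illustrative numbers: roughly a 3 m deep tank of cross-section 30 m² draining through a 10 cm nozzle), a forward-Euler integration of the draining equation reproduces $h(t)=h(0)(1-t/t_c)^2$:

```python
import math

# assumed illustrative numbers: 3 m deep tank, S = 30 m^2, 10 cm nozzle
g, S, s, h0 = 10.0, 30.0, 1.0e-2, 3.0

t_c = (S / s) * math.sqrt(2 * h0 / g)  # emptying time from the closed form

# forward-Euler integration of dh/dt = -(s/S) sqrt(2 g h) up to t = t_c / 2
n = 100_000
dt = 0.5 * t_c / n
h = h0
for _ in range(n):
    h -= dt * (s / S) * math.sqrt(2 * g * h)

h_closed = h0 * (1 - 0.5) ** 2  # h(t_c/2) from h(t) = h(0) (1 - t/t_c)^2
```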
Center of Mass
In order to find the center of mass we will account for all of it. At $t=0$, $x_{cm}(0)=x(0)$=0 since all the mass is in the wagon and we assumed equally distributed.
• The Wagon and its contents $$m(t)x(t)$$
• Water that has left the wagon
If water leaves the wagon at $t=\tau$, then it will have speed $\dot{x}(\tau)$. Therefore its location is $f(t,\tau)$: $$f(t,\tau) = l+x(\tau)+\dot{x}(\tau)(t-\tau)$$ Then we just integrate to get their contributions. We get their infinitesimal masses from our mass flow: $$\int_0^t f(t,\tau) [-\dot{m}(\tau)]d\tau$$
• Combine $$m(0)x_{cm}(t)=m(t)x(t)-\int_0^t f(t,\tau)\dot{m}(\tau)d\tau$$
Differentiating gives us: $$m(0)\dot{x_{cm}}(t)=\dot{m}(t)x(t)+m(t)\dot{x}(t)-f(t,t)\dot{m}(t)-\int_0^t \frac{df(t,\tau)}{dt}\dot{m}(\tau)d\tau$$
Simplifying: $$f(t,t)=x(t)+ l$$ $$\frac{df(t,\tau)}{dt}=\dot{x}(\tau)$$
Integration by parts: $$\int_0^t\dot{m}(\tau)\dot{x}(\tau)d\tau=m(t)\dot{x}(t)-\int_0^tm(\tau)\ddot{x}(\tau)d\tau$$
Replace: $$m(0)\dot{x_{cm}}(t)=\dot{m}(t)x(t)+m(t)\dot{x}(t)-\dot{m}(t)(x(t)+ l)-m(t)\dot{x}(t)+\int_0^tm(\tau)\ddot{x}(\tau)d\tau$$
Explanation - In order these terms stand for:
• mass disappearing from the wagon at the center of mass
• momentum of wagon and its contents
• mass appearing outside of wagon at the nozzle
• last two terms account for momentum of water outside of the wagon
Combining the first and third terms gives us the average momentum the water in the wagon must have to maintain its even horizontal distribution in the container. They are not evidence for instantaneous disappearance from the center and reappearance at the nozzle.
Result: $$m(0)\dot{x_{cm}}(t)=-\dot{m}(t) l+\int_0^tm(\tau)\ddot{x}(\tau)d\tau$$
where: $$m(t)=m_{w} {(1-\frac{t}{t_{c}})}^{2} + m_{c}$$
Wagon w/ Brakes
In this scenario, the wagon has been losing water before $t=0$. However the force of the brakes keeps $\dot{x}(t)=0$. At $t=0$ the brakes are released and it is allowed to move. This avoids any instantaneous jump in velocity by the wagon. It also allows $x_{cm}$ to be a non-zero constant after $t=0$.
Setting $t=0$: $$m(0)\dot{x_{cm}}(0)=-\dot{m}(0) l+\int_0^0m(\tau)\ddot{x}(\tau)d\tau$$ $$m(0)\dot{x_{cm}}(0)=-\dot{m}(0) l$$ $$\dot{x_{cm}}(0)=-\frac{\dot{m}(0)}{m(0)} l$$ $$\dot{x_{cm}}(0)=\frac{2 l m_w}{t_c m(0)}$$
For $t>0$ there is no force from the brakes: $$\ddot{x_{cm}}(t\ge0)=0$$ $$\dot{x_{cm}}(t\ge0)=\frac{2 l m_w}{t_c m(0)}$$
In other words, in this situation at $t=0$ the momentum of the whole system matches that of the water inside the wagon. The only question now is how, as time evolves, that momentum is transferred to the wagon and to the water leaving the moving wagon.
Differentiate the system's momentum: $$m(0)\ddot{x_{cm}}(t)=-\ddot{m}(t) l+\frac{d}{d t}\int_0^tm(\tau)\ddot{x}(\tau)d\tau$$ $$0=-\ddot{m}(t)l+m(t)\ddot{x}(t)$$ $$\ddot{x}(t)=\frac{\ddot{m}(t)l}{m(t)}$$
Physical Considerations
Therefore we have a simple system as long as $\ddot{m}(t)$ is continuous. The physical explanation is that if we abruptly closed the nozzle the water in the wagon does not come to an immediate stop relative to the wagon. It sloshes around and after a certain relaxation time redistributes its momentum to the system as a whole. Similarly with the quick turn on, the water in the container can't just gain an average momentum to match $-\dot{m}(t)l$. Again there must be some relaxation time for the water to hit that equilibrium where it can evenly distribute itself in the wagon. It is not that these situations are impossible but that my equations would not take into account these relaxation times.
My situation just avoids that. The water in the wagon has already hit some equilibrium before $t=0$. Also having the water move under its own weight provides a slow turn off.
Velocity of Wagon
Combining the results from previous sections: $$\ddot{x}(t)=\frac{2\frac{m_w}{{t_c}^2}l}{m_{w} {(1-\frac{t}{t_{c}})}^{2} + m_{c}}$$ $$\ddot{x}(t)=\frac{2 l m_w}{{t_c}^2 m_c}{\left[\frac{m_w}{m_c}{(1-\frac{t}{t_c})}^{2}+1\right]}^{-1}$$
$$\int\frac{du}{1+u^2}=\arctan(u)$$ $$u=\sqrt{\frac{m_w}{m_c}}(1-\frac{t}{t_c})$$ $$\dot{x}(t)=-\frac{2 l}{t_c}\sqrt{\frac{m_w}{m_c}}\int\frac{du}{1+u^2}$$
$$\dot{x}(t)=\frac{2 l}{t_c}\sqrt{\frac{m_w}{m_c}}\left[\arctan\sqrt{\frac{m_w}{m_c}}-\arctan\left(\sqrt{\frac{m_w}{m_c}}\left(1-\frac{t}{t_c}\right)\right)\right]$$
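This closed form can be checked against a direct numerical integration of $\ddot{x}(t)$ above (a sketch; the figures for $l$, $t_c$, $m_w$, $m_c$ are assumed for illustration):

```python
import math

# assumed illustrative figures: 10 t wagon, 90 t of water, l = 5 m, t_c = 2000 s
l, t_c, m_w, m_c = 5.0, 2000.0, 9.0e4, 1.0e4

def xddot(t):
    # acceleration of the wagon from the equation of motion above
    return 2 * l * m_w / t_c**2 / (m_w * (1 - t / t_c) ** 2 + m_c)

# trapezoidal integration of the acceleration from 0 to t_c
n = 100_000
dt = t_c / n
v_num = 0.5 * dt * (xddot(0) + xddot(t_c)) + dt * sum(xddot(i * dt) for i in range(1, n))

r = math.sqrt(m_w / m_c)
v_closed = (2 * l / t_c) * r * math.atan(r)  # the arctan formula at t = t_c
```

The two values agree to high precision; with these figures the wagon ends up moving at about 2 cm·s⁻¹.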
Extremely Heavy Wagon: $\sqrt{\frac{m_w}{m_c}}\ll1$ $$\arctan(x)\to x-\frac{1}{3}x^3$$ $$\dot{x}(t_c)=\frac{2 l m_w}{t_c m_c}$$ $$\dot{x_{cm}}(t\ge0)=\frac{2 l m_w}{t_c m(0)}$$
This makes physical sense. The wagon's final momentum is just about equal to our initial momentum. The higher order terms would account for the momentum that the dispensed water has.
Regular Wagon: $\sqrt{\frac{m_w}{m_c}}\gg1$ $$\arctan(x)\to \frac{\pi}{2}$$ $$\dot{x}(t_c)=\frac{\pi l}{t_c}\sqrt{\frac{m_w}{m_c}}$$ $$\dot{x_{cm}}(t\ge0)=\frac{2 l m_w}{t_c m(0)}$$ $$p_{cm}(t\ge0)=\frac{2}{\pi}\sqrt{\frac{m_w}{m_c}}p(t)$$
This case has the wagon with a significantly smaller portion of the systems momentum.
+1 this is pretty much the same thing I did (at least mathematically) and I think it's the clearest solution yet posted. – David Zaslavsky♦ Dec 13 '10 at 9:56
Really nice work! Now if you will add the case of m'(0)=0 (mass flow at t=0 is zero) then +1 is guaranteed:) – Martin Gales Dec 13 '10 at 11:33
@kalle43 My specific case avoids that issue. The energy was there starting before $t=0$ the water in the wagon has an average horizontal momentum. I assumed that the water level in the tank remains horizontal at all times as the problem stated. Giving the water in the wagon some momentum was the only way to satisfy this requirement – David Dec 13 '10 at 12:37
@kalle: I still think that energy is not conserved. This is not a closed(i repeate:closed) system and gravity is adding energy until the tank is empty. – Martin Gales Dec 13 '10 at 12:44
@kalle43 & @Martin If I get time I'll try to work out m'(0)=0 case. As kalle43's suggestion points out the crux might be lowering the initial mass flow as the source of energy for the average momentum of the water in the wagon. Very interesting, thanks Kalle43. – David Dec 13 '10 at 12:52
OK, this is my second attempt at solving this problem. I think I have a solution this time, thanks to the discussion of others in this thread. The solution is $v_{\text{final}}\simeq\sqrt{2gh(0)}\frac{ls}{h(0)S}\left(1-\frac{\pi}{2}\sqrt{\frac mM}\right)$ if $m\gg M$. This corresponds to roughly a centimetre per second towards the left for a wagon full of water.
Here is how I've derived it :
Notation
In order not to neglect non-negligible contributions, I will pose the problem for a cart of fairly arbitrary shape before restricting it to our cart.
We have :
• $S(z)$ : section of the cart at altitude $z$
• $h(t)$ : height of water at time $t$
• $l(z)$ : abscissa of the centre of mass (CoM) of the slice of water at altitude $z$
• $M$ : mass of the empty cart
• $m = \int_0^{h(0)} \rho S(z)\, dz$ : initial mass of water
• $\mu(t)$ : remaining mass of water at time $t$
• $f(t)=-d\mu/dt > 0$ is the mass flow of water
• $v_v(z,t) < 0$ : vertical speed of the water slice at altitude $z$
• $v_h(z,t)$ : horizontal speed of its CoM.
In the case of the cart, we will have :
• $S(z)$ is constant above the nozzle. Let $\delta+\epsilon$ be the nozzle height. We then have $S(z)=S$ for $z>\delta+\epsilon$. For numerical applications, we'll suppose a $3\times3\times10$ m³ cart, with $S=30$ m².
• The last part of the nozzle is a pipe of height $\delta\ll h(0)$. In this pipe $S(z<\delta)= s\ll S$. If the output has a 10 cm side, $s=10^{-2}$ m².
• $h(0) = 3$ m
• Above the nozzle, the CoM of the water is fixed at $l(z>\delta+\epsilon)=0$, while in the lower part, $l(z<\delta)=-l$, where $l=5$ m.
• I'll assume $M=10^4$ kg, but I've no idea whether it's realistic.
• $\rho = 10^3$ kg·m⁻³
• $m=\rho S h(0) =$ 9·10⁴ kg
• $g=10$ m·s⁻²
Vertical movement of water
In the following, we will assume that the horizontal acceleration $a$ of the cart stays $a\ll g$ during the movement. A nonzero acceleration would induce correction terms proportional to $\frac{a^2}{g^2}$, and we will check that this hypothesis is consistent later. This assumption allows us to neglect any motion of the cart when looking at the movement of water in the cart frame, and then compute $f(t)$, $h(t)$ and $\mu(t)$. We will then use the results of this computation to find the horizontal movement of the cart.
The incompressibility of water allows us to write
$$f(t)=-\rho S(z) v_v(z,t) = -\rho S(h(t)) \frac{dh}{dt} = -\rho s\, v_v(0,t) \quad(*)$$
Bernoulli, at altitude $h$ and $0$ gives us
\begin{gather} \left(\frac{dh}{dt}\right)^2 + 2gh = v_v(0,t)^2 \\ 2gh=\left(\frac{dh}{dt}\right)^2\left(\frac{S(h)^2}{S(0)^2} -1\right) \end{gather}
In our case, except in the nozzle, $\frac{S(h)^2}{S(0)^2}=\frac{S^2}{s^2}\simeq 10^7$. We will therefore neglect the $-1$ in the following.
This equation has the following solution : $$h(t)=h(0)(1-t/t_m)^2 \text{ for } t\in[0, t_m]$$
and $h(t>t_m)=0$, with $t_m=\frac Ss \sqrt{2h(0)/g}$. Here $t_m=3\cdot 10^3 \sqrt{6/10} \sim 2000$ s.
We have then $\mu(t)=m (1-t/t_m)^2$ and $f(t)=f(0)(1-t/t_m)$ with $f(0)=\rho s \sqrt{2gh(0)}=10^{-2+3}\sqrt{60}\sim80$ kg·s⁻¹.
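For reference, these numbers are easy to reproduce (Python, with the values assumed above):

```python
import math

# the numbers assumed in the answer above
rho, S, s, h0, g = 1.0e3, 30.0, 1.0e-2, 3.0, 10.0

t_m = (S / s) * math.sqrt(2 * h0 / g)  # emptying time, ~2000 s
f0 = rho * s * math.sqrt(2 * g * h0)   # initial mass flow f(0), ~80 kg/s
m = rho * S * h0                       # initial water mass, 9e4 kg
```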
Conservation of the horizontal momentum
Now comes the interesting part of the problem, the horizontal movement.
Momenta will be computed in the cart reference frame ($P^{CR}$) and in the rail reference frame ($P^{RR}$). If you look at the water inside the cart, its momentum will be
$$P^{CR}_{\text{water}}=\rho\int_{0}^{h(t)}dz S(z) v_h(z,t)$$
with $v_h(z,t)= dl/dz v_v(z,t)$. From that and the expression $(*)$, we have
$$P^{CR}_{\text{water}}=- f(t) \int_{0}^{h(t)} \frac{dl}{dz}\,dz = f(t)\,(l(0)-l(h(t))).$$
Going back to the more physical rail frame, we have then
$$P^{RR}_{\text{water}}=\mu(t)v(t) + f(t) (l(0)-l(h(t)))$$
We also have, for the cart,
$$P^{RR}_{\text{cart}}=M v(t)$$
As stated in other answers (but not my previous one :-(), one should not forget the momentum of the water which has left the cart in previous time :
$$P^{RR}_{\text{leaked water}}=\int_0^t d\tau f(\tau) v(\tau)$$
Summing these term, together with the momentum conservation, we have :
$$0=P^{RR}_{\text{total}}=(M+\mu(t))v(t) + f(t) (l(0)-l(h(t))) + \int_0^t d\tau f(\tau) v(\tau)$$
For example when the cart is empty, $f(t)=0$, $\mu(t)=0$ and the above equations becomes : $$0=P^{RR}_{\text{total}}=Mv_{\text{final}} + \int_0^t d\tau f(\tau) v(\tau)$$ The cart can have a final nonzero speed, if its momentum is compensated by the net momentum of the water having left the cart.
Differentiating the momentum conservation relatively to $t$, we obtain,
$$0=(M+\mu(t))\frac{dv}{dt} - f(t) v(t) + \frac{df}{dt}(l(0)-l(h(t))) - f(t) \frac{dh}{dt} \frac{dl}{dz} + f(t) v(t)$$
This equation can be simplified into
$$\frac{dv}{dt}=\frac{1}{M+\mu(t)}\left[\frac{df}{dt}[l(h(t))-l(0)] - \frac{dl}{dz}\frac{f(t)^2}{\rho S(h(t))}\right]$$
Knowing $f(t)$ as per the previous section allows us to integrate this equation, at least numerically, for any cart. In the following, we solve the equation for our cart geometry, distinguishing three steps.
Step 1: opening the nozzle
When the nozzle is quickly opened at $t=0$, the cart is full and $\mu=m$ is constant. The equation we have to solve is then $$\frac{dv}{dt}=\frac{1}{M+m}\frac{df}{dt}l-0$$ from which we easily deduce $$\Delta v = \frac{l\Delta f}{M+m}=\frac{lf(0)}{M+m}.$$ With the numerical values above, this corresponds to a speed of 4 mm·s⁻¹. This movement of the cart compensates the internal acceleration of the water inside the cart towards the nozzle.
As we will see later, this abrupt speed change is the biggest acceleration experienced by the cart. If the nozzle is opened over one second, which is still quick enough to keep the $\mu=m$ approximation valid, the horizontal acceleration $a$ is still small: $\frac{a}{g}=4\cdot10^{-4}$.
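With the figures above, the step-1 kick works out as follows (a quick check; the 1 s opening time is the assumption made in the text):

```python
import math

# step-1 kick: dv = l f(0) / (M + m), with the figures assumed above
rho, S, s, h0, g = 1.0e3, 30.0, 1.0e-2, 3.0, 10.0
l, M = 5.0, 1.0e4
m = rho * S * h0
f0 = rho * s * math.sqrt(2 * g * h0)

dv = l * f0 / (M + m)      # jump in cart speed when the nozzle opens, ~4 mm/s
a_over_g = (dv / 1.0) / g  # acceleration over g, if the nozzle opens over ~1 s
```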
Step 2: Emptying the cart above the nozzle
Above the nozzle, we have a constant $l(h)=0$ and the differential equation is $$\frac{dv}{dt}=\frac{l}{M+\mu(t)}\frac{df}{dt}$$.
If the cart is emptied with a constant $f(t)$, it neither accelerates nor slows down until $f(t)$ is cut. At that moment the back-action is the same but in the reverse direction, and with a lower mass ($M$ instead of $M+m$). We therefore end with a net speed towards the left of value $lf\left(\frac{1}{M+m}-\frac{1}{M}\right)$.
In the more general case where $f$ slowly decreases to 0, $df/dt <0$, implying a slow-down, and indeed a reversal of the speed, since the total mass $M+\mu(t)$ decreases.
If we plug into the above equation the values we have for $f(t)$ and $\mu(t)$, we have
$$\frac{dv}{dt}=-\frac{lf(0)}{t_m\left(M+m(1-t/t_m)^2\right)}=-g\frac{ls^2m}{h(0)S^2M}\frac1{1+\frac mM(1-t/t_m)^2}$$ which can be analytically integrated using $\int dt/(1+t^2)= \arctan t$. We have then $$v(t)-v(0)=-\frac{ls}{h(0)S}\sqrt{2gh(0)}\sqrt{\frac mM}\left[\arctan\sqrt{\frac mM} - \arctan\left(\frac{t_m-t}{t_m}\sqrt{\frac mM}\right)\right]$$.
We have then $$v(t_m)=v(0)-\frac{ls}{h(0)S}\sqrt{2gh(0)}\sqrt{\frac mM}\arctan\sqrt{\frac mM}$$ In the limit $m\gg M$, where the mass of water is larger than the cart mass, $\arctan\sqrt{m/M}\simeq\pi/2$ and $v(0)=\frac{lf(0)}{M+m}\simeq\sqrt{2gh(0)}\frac{ls}{h(0)S}$, so that : $$v(t_m)\simeq\sqrt{2gh(0)}\frac{ls}{h(0)S}\left(1-\frac{\pi}{2}\sqrt{\frac mM}\right)$$
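Independently of any closed form, one can integrate the step-2 equation $\frac{dv}{dt}=\frac{l}{M+\mu(t)}\frac{df}{dt}$ numerically with the figures above (midpoint rule, with the step-1 kick as initial condition):

```python
import math

# figures assumed in the answer above
rho, S, s, h0, g, l, M = 1.0e3, 30.0, 1.0e-2, 3.0, 10.0, 5.0, 1.0e4
m = rho * S * h0
t_m = (S / s) * math.sqrt(2 * h0 / g)
f0 = rho * s * math.sqrt(2 * g * h0)

v = l * f0 / (M + m)        # step-1 kick towards the right
n = 200_000
dt = t_m / n
for i in range(n):
    t = (i + 0.5) * dt      # midpoint rule
    mu = m * (1 - t / t_m) ** 2
    dfdt = -f0 / t_m        # f(t) = f(0)(1 - t/t_m)
    v += dt * l * dfdt / (M + mu)
# v is now the final speed of the cart (negative: towards the left)
```

With these numbers the integration gives a final speed of about $-1.2\times10^{-2}$ m·s⁻¹, i.e. roughly a centimetre per second towards the left.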
Step 3: Showing that the nozzle has no influence, so long as it is small
The problem with the nozzle is the zone where $\frac{dl}{dz}$ is not small. Let's say that this zone is of height $\epsilon$, above a vertical pipe of height $\delta$, with $\epsilon\ll\delta\ll h(0)$. I have the intuition that the problem is not so dangerous, since the $\propto l/\epsilon$ derivative will only be relevant for a time proportional to $\epsilon$, and the small amount of water involved should keep the corrective term small. But I have nothing more rigorous yet :-(
v(0) should be 0 right ? And v(t)=const ? These 2 violates momentum conservation. What is t_m ? – Holowitz Dec 10 '10 at 16:02
@kalle43: what problem are your referring to ? There are several of them. If you are referring to your last objection, about moving only in one direction, it is not the case. If f(t)=f is constant, all the interesting stuff happens when f is "switched" on and off. When it's switched on, the cart get a kick towards the right and move at the speed $+lf/(M+m)$. The speed stays constant afterwards until $f$ is switched off, and the cart get a kick towards the left $\Delta v=-lf/M$, bigger than the first one since it is lighter, so its final speed is $-lf\frac{m}{M(M+m)}$, towards the left. – Frédéric Grosshans Dec 10 '10 at 18:22
@kalle43 : yes, but $\mu(t)$ can be linear only when there is still some water. When $\mu(t)=0$, $f(t)$ changes abruptly to 0, and (in your notation) $m''\neq0$ for a short time. This $m''$ peak then changes the speed. This occurs at time $t=m/f$ I have then three speeds : $v(t<0)=0$, $v(t\in[0,m/f])=+lf/(M+m)>0$ and $v(t>m/f)=-lfm/M(M+m)<0$. – Frédéric Grosshans Dec 10 '10 at 18:49
Ok, so at 0<t<m/f, its is moving in one direction with constant speed v. It's momentum is constant p. So it must have dropped of water with momentum -p, but the nozzle points straight down, so how can the water dropped off have gained momentum to the left, if the cart has so far only moved towards the right? Or do you mean that the water inside the cart has momentum -p ? – Holowitz Dec 10 '10 at 19:01
@kalle43 : exactly. There is a constant flow of water inside the cart which has a momentum $-p$. That is the key point. The main tank is centred, but you can imagine the nozzle as a pipe going from the bottom-centre of the tank to the point $-l$, ant then turning down. In the horizontal section of the pipe, you have a mass $sl\rho$ of water, with speed $-f/s\rho$. The total amount of momentum is then $-fl$. – Frédéric Grosshans Dec 10 '10 at 19:19
Here is my attempt. I took a somewhat different path than kalle43, and this one is a little easier, I think.
Let $x(t)$ be the coordinate of the nozzle at time $t$. Consider an infinitesimal mass of water $dm$ departing the nozzle at time $\tau$ : $$dm=-m'(\tau)d\tau$$ Here $m'(t)$ denotes the time derivative of the mass of water inside the tank.
Let $x(\tau)$ be the horizontal coordinate of $dm$ at time $\tau$. Then at time $t>\tau$ the horizontal coordinate of $dm$ will be: $$x(\tau)+(t-\tau)x'(\tau)$$ Here $x'(t)$ denotes the time derivative of the coordinate of the nozzle at time $t$ or simply velocity of the wagon.
Now sum $x_idm_i$ (static moment of mass) over all infinitesimal particles emitted from the nozzle within the time period $(0...t)$ will be expressed by the integral:
$$-\int_0^t [x(\tau)+(t-\tau)x'(\tau)]m'(\tau)d\tau$$ The next step is to get the static moment of mass of the wagon with the water inside it. This is simply: $$[l+x(t)][M+m(t)]$$ Now the static moment of mass of the whole system (the wagon with water + emitted water) is expressed as the sum of the last two expressions:
$$-\int_0^t [x(\tau)+(t-\tau)x'(\tau)]m'(\tau)d\tau+[l+x(t)][M+m(t)]=pt+c$$ $p=const$ and $c=const$
Now you ask what means $pt+c$.This becomes clear when we differentiate the last equation with respect to $t$: $$-\int_0^t x'(\tau)m'(\tau)d\tau- x(t)m'(t)+x'(t)[M+m(t)]+m'(t)[l+x(t)]=p$$ $p=const$
This result represents the horizontal momentum of the whole system (the wagon with water + emitted water). This must be conserved. So $c$ is simply an integration constant.
Now the most important part follows:
Consider the initial moment $t=0$. At this moment let the coordinate of the nozzle be zero:$(x(0)=0)$ as well as the initial velocity of the wagon:$(x'(0)=0)$. Then the momentum equation gives:
$$lm'(0)=p=const$$
What can we conclude from this result? First, before the opening of the nozzle the momentum of the whole system (wagon + water inside it) is definitely zero. But after opening, at $t=0$, the momentum remains zero only if $m'(0)=0$. Otherwise it suddenly becomes different from zero, and this is what happens in the given problem. The momentum of the whole system (the wagon with water + emitted water) becomes different from zero and the wagon starts to move in one direction.
But if $m'(0)=0$ then Mark Eichenlaub's scenario will start, i think.
Now let's differentiate the momentum equation with respect to $t$ to get the equation of motion of the wagon: $$[M+m(t)]x''(t)=-lm''(t)$$ Actually, I was surprised that the equation turned out to be so simple.
Edit
I drifted from Torricelli's law and added an example which quantitatively confirms Mark Eichenlaub's qualitative answer. This also shows that the law of conservation of energy is irrelevant in this problem; only the mass change of the wagon matters.
I picked a function $m(t)$ such that $m'(0)=0$. So there is no need to worry about any instantaneous jump at $t=0$ and the horizontal momentum remains zero. $$m(t)=\frac{m}{2}\left(1+\cos\frac{\pi{t}}{T}\right);\quad 0\leq t\leq T$$ and the equation of motion:
$$[M+m(t)]x''(t)=-lm''(t)$$ The solution of the equation:
$$\dot{x}(t)=\frac{l\pi^2}{T}\left(\frac{t}{T}-\frac{2 }{\pi}\frac{\eta+1}{\sqrt{{2\eta+1}}}\arctan\frac{\tan\frac{\pi{t}}{2T}}{\sqrt{2\eta+1}}\right)$$ where $\eta=\frac{m}{2M}$
This solution closely follows the behavior Mark described. The final velocity is directed to the left $(v_f<0)$ and is given by the expression: $$v_f=\frac{l\pi^2}{T}\left(1-\frac{\eta+1}{\sqrt{{2\eta+1}}}\right)$$
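The closed form can be cross-checked by integrating the equation of motion $[M+m(t)]\,\ddot x=-l\,\ddot m(t)$ numerically (a sketch; the values of $l$, $T$, $M$, $m$ are assumed purely for illustration):

```python
import math

# assumed illustrative values: l = 5 m, T = 100 s, M = 10 t, m = 90 t of water
l, T, M, m = 5.0, 100.0, 1.0e4, 9.0e4
eta = m / (2 * M)

def mass(t):       # m(t) = (m/2)(1 + cos(pi t / T))
    return 0.5 * m * (1 + math.cos(math.pi * t / T))

def mass_ddot(t):  # m''(t)
    return -0.5 * m * (math.pi / T) ** 2 * math.cos(math.pi * t / T)

# midpoint-rule integration of x'' = -l m''(t) / (M + m(t)), with x'(0) = 0
n = 200_000
dt = T / n
v = 0.0
for i in range(n):
    t = (i + 0.5) * dt
    v += dt * (-l) * mass_ddot(t) / (M + mass(t))

# closed-form final velocity from the expression above
v_f = (l * math.pi**2 / T) * (1 - (eta + 1) / math.sqrt(2 * eta + 1))
```

Both agree to high precision, and with these numbers $v_f\approx-0.36$ m·s⁻¹, i.e. the wagon ends up coasting to the left.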
Cool. Your differential equation is actually the same as mine for $\dot{v}$. The stuff about $m'(0)$ being different from zero is a good point. Basically, if $m'(0) \neq 0$, the cart experiences infinite acceleration until it reaches the recoil speed $-lm'(0)/(M+m)$. This is actually the same issue your raised in a comment to my post - if the cart does accelerate very quickly as the flow turns on, we might expect the assumption that the surface of the water stays flat to fail. Anyway, it's nice to see someone take a different tack and confirm each other's work. – Mark Eichenlaub Dec 9 '10 at 11:54
@kalle43 : the momentum is conserved, because some water has been left with some momentum to the left. – Frédéric Grosshans Dec 10 '10 at 16:19
That is impossible unless the wagon has been moving to the left at some earlier point, but he predicts it to start at 0 and coast to the right forever. – Holowitz Dec 10 '10 at 16:22
From my own work I got the same EOM as you did for the tank, so I think you're right in that respect. But when you say near the end that the total momentum suddenly becomes different from zero, that'd be a blatant violation of the law of conservation of momentum. The total momentum should remain zero. (When I get a chance I'll try to verify that directly using the solution to the differential equation) – David Zaslavsky♦ Dec 12 '10 at 8:42
@David: Maybe I did not express myself correctly. My first language is not English. I repeate from my answer: It follows from the horizontal momentum equation of the system that at t=0 : l*m'(0)=p=const. So the momentum of the system depends strongly of the initial condition m'(0). If m'(0) is not zero then it implies discontinuity appearing. At this point you can not argue the validity of the law of conservation of momentum, i think. – Martin Gales Dec 13 '10 at 10:30
Qualitative Answer
I think the cart exhibits an extremely surprising behavior. The cart begins by sitting still on the track, with the hole to the left of the center. When the nozzle is opened, water in the cart begins a net flow to the left. The cart, conserving momentum, picks up a velocity to the right.

In a steady state, the flow of water would be constant and the cart would move at constant velocity. However, as the flow rate begins to decrease, the velocity of the cart decreases. Eventually, the cart comes to a standstill, then actually reverses direction, moving to the left before the last water falls out. When the last of the water is gone, the cart is coasting to the left.

The center of mass of the system never moves, because as the center of mass of the cart moves, the center of mass of the water moves oppositely. Momentum is also conserved, because as the cart picks up momentum, the water picks up opposite momentum. If the water also slides after hitting the track, by the end of the process the water will have a net motion somewhat to the right to compensate for the motion of the cart to the left.
Quantitative Answer
Let the cart move at a speed $v$ to the right, and the water move at an average speed $w$ to the right. In general, $v \neq w$ because the water's center of mass is moving relative to the cart. The hole is at $l$. If the hole is on the left then $l$ is negative.
The velocity of the water relative to the cart is $w-v$. This velocity comes from the fact that the water, if it were to continue as it is now, would all move from the center of the cart to the hole, a distance $l$, in a time $m/f$, with $f$ the mass flow rate. Thus the kinematic relation
$$w-v = \frac{lf}{m}$$
Next, we want to conserve momentum. This gives
$$\frac{d}{dt}(Mv + mw) = 0$$
Taking this derivative, we have to keep in mind that $M$ and $m$ are changing because water is flowing out of the cart. $m$ is decreasing at the rate $f$, and $M$ is increasing at the rate $f$ when we think of $M$ as the total mass moving at speed $v$ rather than the mass of the cart.
$$M\dot{v} + m\dot{w} + f(v-w) = 0$$
Physically, the first two terms represent the force on the cart and the force on the water in the cart. The last term represents the force on the water entering the nozzle. Water entering the nozzle goes from $w$ to $v$, thus experiencing acceleration. We have an earlier expression for $v-w$, so plug it in.
$$M\dot{v}+m\dot{w} = \frac{lf^2}{m}$$
I would like to solve for $\dot{v}$. To do this, take the time derivative of the kinematic equation for $w-v$
$$\dot{w} - \dot{v} = \frac{l\dot{f}}{m} + \frac{lf^2}{m^2}$$
These last two equations simplify to
$$\dot{v} = \frac{-l\dot{f}}{M+m}$$
When the flow rate is constant, there is no acceleration. This is plausible because we can imagine watching in a center-of-mass frame where the cart moves to the right and the water moves to the left. The water entering the nozzle feels an acceleration, but the water in the cart is also accelerating, and in the opposite direction. (The water in the cart is accelerating because there is less and less of it, so on average it must move faster to deliver the correct flow rate from the center of the cart to the nozzle.)
Right when we release the nozzle, the flow rate very quickly jumps up, and so the cart quickly picks up speed, too. $m$ is essentially constant over the course of this acceleration, so the cart jumps up to a speed
$$v = -\frac{lf}{M+m}$$
If $m$ were to remain constant, we would find that this relation continues to hold, so that when the water stops flowing, the cart also stops. However, $m$ is not constant; it decreases. When the flow slows to a stop, the acceleration of the cart is now larger because $m$ is smaller. Hence, by the time all the water has left the cart, it is actually moving to the left. This is surprising but necessary - the water is mostly moving to the right because the cart initially moved to the right. The cart must wind up moving left when all is said and done to compensate.
If we suppose the flow rate is constant the entire time, except abruptly beginning and ending (an assumption not in the original problem, which is qualitatively similar but more work to calculate), the final velocity of the cart is
$$v_f = \frac{lfm}{M(M+m)}$$
The water is all flowing at the speed the cart originally jumped to,
$$w_f = -\frac{lf}{M+m}$$
so we see that momentum is conserved.
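For what it's worth, this model is easy to check numerically. The sketch below uses invented parameter values and a hypothetical trapezoidal `flow` profile (fast ramp up, long plateau, fast ramp down) standing in for the abrupt start and stop; it integrates $\dot{v} = -l\dot{f}/(M+m)$, accumulates the momentum carried off by the water (which exits at the cart's speed $v$), and checks that the total stays at zero while the cart ends up drifting left for $l<0$:

```python
# All parameter values are made up for illustration.
M, m0 = 10.0, 100.0        # cart mass, initial water mass (kg)
l = -0.5                   # hole 0.5 m to the LEFT of the cart's centre
f0 = 2.0                   # plateau flow rate (kg/s)
t_ramp, t_off = 0.1, 45.0  # ramp duration, time at which the flow shuts off

def flow(t):
    """Hypothetical trapezoidal flow profile standing in for an abrupt start/stop."""
    if t < 0 or t >= t_off + t_ramp:
        return 0.0
    if t < t_ramp:
        return f0 * t / t_ramp
    if t < t_off:
        return f0
    return f0 * (1.0 - (t - t_off) / t_ramp)

dt = 1e-3
v, m, p_out, t = 0.0, m0, 0.0, 0.0  # cart speed, water aboard, momentum of spilt water
while t < t_off + t_ramp + 1.0:
    f = flow(t)
    p_out += f * v * dt                      # parcels exit at the cart's speed v
    v += -l * (flow(t + dt) - f) / (M + m)   # dv = -l df / (M + m)
    m -= f * dt
    t += dt

# cart (M v) + water aboard (m v + l f) + water already spilt (p_out)
P_total = M * v + m * v + l * flow(t) + p_out
```

In this run the cart jumps right when the flow starts, coasts during the plateau, and ends with a small leftward velocity, with the total momentum numerically zero throughout.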
@Martin That's a good point, but I don't think it invalidates the analysis. It depends on what "quickly" means - quickly compared to what. In the sense I used quickly, what's important is that the mass of water draining during the acceleration is small compared to the total mass of water. The cart could accelerate for a time that is short compared to the drainage time $m/f$. For effects like wave motion, what's important is that the cart's acceleration time is slow compared to, say, the period of the sloshing mode. If the drainage time is very long compared to the sloshing period, – Mark Eichenlaub Dec 7 '10 at 13:43
... then the acceleration could be "quick" in the original sense I meant it, but "slow" in terms of wave effects. – Mark Eichenlaub Dec 7 '10 at 13:43
@Martin On second thought, the time of acceleration doesn't seem like the most important factor - perhaps the magnitude of the acceleration compared to $g$ is more important. I think the same basic idea that it is in principle possible to avoid waves should hold, though, if we can control the speed at which we ramp up the flow. If there are wave effects cropping up, though, it would seem that is an issue with the original statement that the water is flat, which simply proves to be unphysical. – Mark Eichenlaub Dec 7 '10 at 14:12
@kalle Thanks for posting, and I understand that it's subtle, but the momentum conservation equation I wrote does consider water leaving the system. At any given instant, the momentum of the cart and water still in the cart is $Mv+mw$. (I know that this ignores water that has left the cart. Bear with me a moment, please.) Suppose an amount of time $dt$ passes. Then the momentum changes because $v$ changes, $w$ changes, and because water leaves the cart. The momentum change due to a change in $v$ is $M\textrm{d}v$. The momentum change due to a change in $w$ is $m\textrm{d}w$. (cont.) – Mark Eichenlaub Dec 8 '10 at 22:26
@Mark : In order to understand your answer, I have developed a more complete model, which quantitatively finds your answer for a constant $f$ :-) – Frédéric Grosshans Dec 10 '10 at 16:07
This answer presents an analogy that I hope will clarify how it is possible that 1) the wagon moves 2) the wagon winds up with a net velocity at the end of the problem. This isn't a direct answer - it's intended as supporting conceptual material (so I've marked it community wiki).
Setup
Throughout this answer, all velocities and all momenta are calculated solely in the reference frame of the rail.
Imagine that the tank does not have water in it. Instead it has a gun that shoots clay lumps. The gun is mounted at the middle. It can shoot any size clay lump at any speed.
There is a hole in the wagon floor. For convenience, the hole is all the way at the left side of the wagon. If the gun shoots a lump of clay to the left, the gun, which is rigidly attached to the rest of the wagon, will recoil some. The lump will fly towards the left side of the wagon and collide with the left wall completely inelastically. Then it will fall down through the hole in the floor and exit the wagon with exactly the same horizontal speed (if any) as the wagon.
First experiment
The tank starts out stationary with a lump of mass $m$ in the gun. It shoots the lump at speed $v$. The lump is moving to the left; $v$ is negative. The momentum of the lump is $mv$. Let the recoil speed of the wagon be $w_0$. By conservation of momentum, $mv + Mw_0 = 0$. Therefore, the cart recoils, moving at speed
$$w_0 = -\frac{vm}{M},$$
which is to the right.
Next, the lump collides with the left wall. At this point the lump and wagon must move at some new, mutual speed after the collision. Call that $w_f$. Conservation of momentum implies $w_f = 0$ and the wagon has come to a dead stop. The lump falls through the hole straight down and the wagon sits still for the remainder of eternity. It is displaced from its original position.
Second Experiment
The tank starts out with two lumps of clay in the gun, each of mass $m/2$. The gun shoots one lump at speed $v$ as before. Conservation of momentum gives $\frac{m}{2}v + \left(M+\frac{m}{2}\right)w_0 = 0$, or
$$w_0 = -v\frac{m}{2(M+m/2)}$$
Next, we wait until the moment when that lump hits the left wall. At precisely that moment, we fire the next lump, also at speed $v$. We make the acceleration profiles of the two lumps exactly equal in magnitude and opposite in sign. This way, the forces on the two lumps must be equal and opposite. Those forces come from the rigid body of the gun and wagon combined. Hence, the gun/wagon feels no net force and no acceleration during this process.
The first lump is now comoving with the wagon at speed $w_0$. It falls through the hole moving at that speed.
Next, the second lump collides with the wagon. The second lump and the wagon come to some mutual velocity $w_f$. Conservation of momentum gives $\frac{m w_0}{2} + \left(M + \frac{m}{2}\right)w_f = 0$, or
$$w_f = -w_0 \frac{m}{2(M+m/2)}$$
or substituting in for $w_0$
$$w_f = v \left(\frac{m}{2(M+m/2)}\right)^2$$
The second lump falls out of the wagon and moves at speed $w_f$, and the wagon coasts at speed $w_f$ from then on. $w_f$ is proportional to $v$ and has the same sign. The wagon is moving to the left at the end of the process.
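The bookkeeping of the second experiment can be sketched in a few lines (illustrative numbers only); it reproduces the $w_f$ formula above and confirms that lump 1 plus the wagon/lump-2 pair carry zero total momentum:

```python
# Illustrative numbers only; v < 0 means the lumps are shot to the left.
M, m, v = 10.0, 4.0, -3.0   # wagon mass, total clay mass, muzzle speed

half = m / 2.0
w0 = -v * half / (M + half)      # recoil after the first shot
w_f = -w0 * half / (M + half)    # wagon + second lump after the second collision

# momentum audit: lump 1 leaves at w0, wagon and lump 2 end at w_f
total = half * w0 + (M + half) * w_f
```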
Your pure qualitative answer is much more clear than this one (at least for me). What is x? – Martin Gales Dec 15 '10 at 9:03
@Martin Oops - $x$ is just a typo. I'll fix it, thanks. That's okay this answer isn't what you're looking for. I just wanted to present an explicit, easy-to-understand example of why the cart can move and even have net motion at the end of the process. – Mark Eichenlaub Dec 15 '10 at 9:17
You do not need even to shoot the second lump, just release it while the first one is in flight. Still, it is misleading; note that neither of the solutions proposed has the speed changing sign. – arivero Jan 19 '11 at 19:38
+1 for the nice analogy – Frédéric Grosshans Jan 20 '11 at 16:42
My answer below is wrong: it doesn't take into account the momentum of water leaving the cart once it has started moving.
Basically, by conservation of the horizontal momentum in the absence of any horizontal force, the speed of the wagon at the end will be 0. However, the position of the centre of mass of the (Wagon+Water) system should also be conserved, so the wagon will move slowly to the right during the process, which can probably be linked to a pressure difference inside the tank. But it will stop by the time the Wagon is empty.
The real question is therefore not the final speed, but the final displacement. Let x be the current position of the Wagon's centre of mass. When a mass -dµ of water goes through the nozzle, its centre of mass is displaced by l to the left, and the centre of mass of the wagon is displaced by -l·dµ/(µ+M) to the right, where µ is the remaining mass of water inside the wagon.
Integrating this gives $$\Delta x=-l\int_m^0\frac{\mathrm d\mu}{\mu+M}=l\ln\frac{m+M}M$$.
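Setting aside the physics (the answer is retracted above), the integral itself can be spot-checked numerically with a midpoint rule and made-up masses:

```python
import math

# Made-up values: cart mass, water mass, nozzle offset from the centre
M, m, l = 10.0, 100.0, 0.5

# midpoint-rule quadrature of  integral_0^m dmu / (mu + M)
N = 100_000
dmu = m / N
integral = sum(dmu / ((i + 0.5) * dmu + M) for i in range(N))
dx = l * integral   # should equal l * ln((m + M) / M)
```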
Of course, if the wagon initially moves at some (non-relativistic!) speed, the previous analysis stays true in the moving reference frame. The speed will not change, but the wagon will have a Δx advance compared to a wagon with the same initial speed but a closed nozzle.
Edited to correct a sign error.
@Skilvvz: The movement of the water inside the tank induces a pressure gradient inside the tank, which is translated into a movement of the wagon. It is the movement of the water inside the tank which moves the wagon, not the water leaving the wagon. By the way, if you moved all the water inside the tank with a ballast system, you could move the wagon without letting the water escape it. The flown-out water is just a red herring: it is not what moves the wagon. It is only used to move the water inside the wagon. – Frédéric Grosshans Dec 6 '10 at 15:24
@Skilvvz: 2 possibilities (I think 1 is the good one): because 1. The speed of the water is not 0 and plays a role in the pressure 2. The horizontality condition might be only approximately true. – Frédéric Grosshans Dec 6 '10 at 16:38
@Frédéric: oh, I just realized that you can't use CoM analysis this easily. The water that comes out the wagon when it is moving has non-zero velocity (with respect to frame where the wagon was stationary initially) and will always have it non-zero. Unless you want to bring 2nd law into game (which actually makes the water stop), but then CoM principle shouldn't be valid anymore. – Marek Dec 6 '10 at 17:45
@dmckee: right, I am sorry about that, but this problem is genuinely hard. If one were to consider the full description then they obviously have to take continuum mechanics into account. It's totally unclear to me how to reduce that infinite number of degrees of freedoms + thermodynamics into some simple system. I am certain though that it can be done, so if someone with better intuition comes along that will be great. – Marek Dec 6 '10 at 21:21
@mbq: The water level falls uniformly throughout the tank, right? So each volume element $dV$ that leaves through the nozzle at one end must ultimately have come from a layer spread across the upper surface of the liquid, so there is a bulk flow. That's the easy part. The hard part is how you reconcile that with frictionlessness and the "straight down" requirement for exhausting the liquid. My suspicion is that there are second-order effects we're neglecting. If we allow a tiny bit of friction we can slow the whole business down until the car is always static. – dmckee♦ Dec 7 '10 at 0:10
With a vertical jet, Torricelli's law still holds because the displacement of the wagon is orthogonal to the acting forces, gravity plus (arguably, but orthogonal in any case) reaction force, so no work is used by the wagon, $\Delta W = {\bf F} \cdot {\Delta \bf x}=0$ and all the energy still goes to the water jet.
Thus we can calculate $m(t)$ as usual. Forget the drawing and use a square tank. The one in the drawing was calculated by Kepler, and it complicates the problem. Let the height of the water be simply $h(t)={m(t)\over \rho S}$, ok? And $2 g h(t)$ is the square of the speed of the jet, the variation of mass follows $m'(t)= - \rho s \sqrt { 2 g h(t)}$, and at the end we have $$m'(t) = - \sqrt {2 g \rho s^2 \over S} \sqrt{ m(t)}$$ which solves to $m(t)= m (1 - t \sqrt {g \rho s^2 \over 2 m S})^2$ and tells us that the tank becomes empty at $t_f=\sqrt {2 m S \over g \rho s^2 }=\sqrt {2 h S^2 \over g s^2 }$.
We can plug this into Frédéric's "wrong" solution $x(t)= l \ln {m+M \over m(t)+M}$ to get the displacement $$x(t) = l \ln {m+M \over (1 - t/t_F)^2 m +M}$$ and the velocity $$\dot x(t)= { 2 m l \over t_F} { (1-t/t_F) \over (1 - t/t_F)^2 m +M } = { 2 l (t_F-t) \over (t_F - t)^2 + {Mt_F^2 \over m} }$$
Note that in the limit of $M \ll m$, we get $\dot x(t)= { 2 l \over (t_F - t) }$ and thus $\ddot x = { 2 l \over (t_F - t)^2 }$, similar to other answers. Note that in this limit the speed at $t_F$ is infinite, but it is massless, so we can stop it anyway.
Another curious issue is that $\dot x(0) = { 2 l \over t_F (1 + M/m)}$ is not zero. It sounds strange, but consider that the initial speed of the jet is not zero either.
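The closed form for $m(t)$ can be cross-checked against a direct numerical integration of the drain ODE (invented parameter values; forward Euler is used only as a sanity check, stopping a little before $t_F$ where $\sqrt{m}$ loses its Lipschitz bound):

```python
import math

# Invented parameters: water density, tank and nozzle cross-sections, initial mass
g, rho = 9.81, 1000.0
S, s = 1.0, 1e-3
m0 = 500.0

k = math.sqrt(2 * g * rho * s**2 / S)            # ODE: m'(t) = -k sqrt(m)
t_F = math.sqrt(2 * m0 * S / (g * rho * s**2))   # emptying time

def m_exact(t):
    """Closed-form solution m(t) = m0 (1 - t/t_F)^2, valid for t <= t_F."""
    return m0 * (1 - t / t_F)**2 if t < t_F else 0.0

# forward-Euler integration, stopping before t_F
dt = t_F / 100_000
m, t = m0, 0.0
while t < 0.9 * t_F:
    m += -k * math.sqrt(m) * dt
    t += dt
```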
Before considering variants of Frédéric's solution, it is important to note that we have four blobs of mass playing some role.
• the leaked water, $m-m(t)$
• the leaking water, $\Delta m= - m'(t) \Delta t$
• the cart mass, M
• the wagon water, m(t)
In the leaking process, the leaked water is already inertial, with a horizontal momentum (in the railway direction) equal and opposite to the momentum of the other three masses, or, for small $\Delta t$, equal to $- (M + m(t)) V_{c+w}$. The questions to be fixed are: 1) what is the actual direction of the force exerted by the water, and what is the horizontal velocity of the leaking water: the velocity of the cart, that of the CM of the water, or some other one? and 2) does the acceleration of the cart change the direction of "gravity" inside the cart (remember your last bus trip) enough to be considered a major perturbation of the problem?
Point 2 is most probably a red herring, at least in the approximation where $M \ll m(t)$, because in that case we have no reason to expect the accelerations of the cart and the [CM of the] water inside to be different. Remember that the "horizontal gravity" inside the wagon will be the difference of these accelerations.
There are more problems: this is defined for any x'(-t), t>0; you have to set x'(T)=0 at some point T for a realistic solution, thus Torricelli does not hold, only for steady flow. m(t) is not an analytic function, since it is constant for all t<t_0; that is why it's impossible to get an exact solution. But there exist very realistic m(t), under certain simplifications. – Holowitz Jan 19 '11 at 14:20
It seems that we need to go with steady flow. The main worry, really, is the validity or not of Frédéric's solution. Is its assumed reaction force orthogonal to the tank displacement, or not? – arivero Jan 19 '11 at 14:44
@arivero: There is some small force acting on the left side of the wagon because Torricelli's law is not exactly respected. Then some non-zero work can be achieved. You make the same error as I did in my "wrong" solution. – Frédéric Grosshans Jan 20 '11 at 17:03
@Frédéric, but then, how is it that in the "light wagon" limit the equation has the same shape as the official answer? Is it wrong too? I will redo the proof of your wrong answer tomorrow. Note that the small force you mention, if it exists, is not an external force; the only external force is gravity. – arivero Jan 20 '11 at 23:04
@arivero: about the force: it is indeed not external to the "Cart+Water" system. Therefore the center of gravity of the whole system (water+cart) does not move, but you should not forget the water which has left the system with a horizontal speed (in the fixed reference frame), which is essentially what the "official solution" does, and what my wrong solution forgets. You can also consider only the wagon's movement, to which the water is external. In this case, you have to take into account the horizontal force of the water on the wagon. – Frédéric Grosshans Jan 21 '11 at 13:32
Short version: movement inside the closed system cannot accelerate it. Zero horizontal speed at exit means zero speed at t->infinity.
More detailed version:
Let me transfer the problem to a simpler one:
We have an open wagon with me standing on one side of it holding a heavy box. Now I will start running towards the other side of the wagon. This will cause the wagon to move in the opposite direction.
At a certain point I will have to decelerate so that I stop at the other side of the wagon. This will create a force equal to that of accelerating, thus compensating any speed that developed during the acceleration.
The position of the wagon will be changed so that the center of mass will not have moved. The speed will be equal to starting speed.
Now I drop the box straight down. (I will use a bit of force to simulate the water pressure, but that is not important.) Speed is zero, the wagon has moved, the box is down.
Now, let's say I have multiplied, have negligible weight and the box is a molecule of water. The final speed will be certainly zero again. The question is, what the displacement of the wagon will be. I have two answers and cannot choose either:
1. The centre of mass has to be kept the same (horizontally, gravitation can move it vertically down). This determines the final position of the wagon.
2. The final displacement is speed integrated over time. Now for each molecule that will start moving left, there will be one stopping at the nozzle. This would compensate the forces in real time keeping the speed at zero and so the displacement.
Please correct me if my analogy is wrong at some point and try to answer the question about final displacement.
Edit - more explanations
Assuming the wagon moves during the process, it's true that the water will have momentum relative to the rail, and it will travel at the same speed as the wagon. That means there will be no net force from this water coming down.
Imagine a very long tube open on both sides filled with water. If you put this tube vertically in a homogeneous gravitation field the water will flow (fall) out of it. If it moves at a constant speed the water would behave the same relative to the tube. The outside observer would see a tube moving to the side and a column of water moving down and to the side (at the same speed, so it would stay under the tube all the time). The same goes for the water from the nozzle: it will always have the same horizontal speed as the wagon at the point of leaving thus having no effect whatsoever on its movement. This is true disregarding the speed of the wagon.
Having said this, the only forces affecting the whole water-wagon system are those caused by the internal movement of water. On this frictionless rail you can change the wagon's position from inside only at the cost of regrouping the stuff inside (changing the mass distribution through the system). Someone (let's say a lobster) walking on a wagon (of zero weight for simplification) on a frictionless rail cannot move relative to the rail. It is the same as if this lobster was trying to walk on frictionless ice: there would be no reactive force to move him. Looking at the lobster on the zero-weight wagon we would see a lobster walking, though not moving, and a wagon moving under him. As the only mass in this system is the lobster, the centre of mass would not move.
Returning to the water - after opening the nozzle the water starts moving to the left, and because there was no speed at t=0 there had to be some acceleration. Then the water is gradually moved towards the left end of the wagon, where it loses its horizontal speed and leaves the wagon at zero horizontal speed. While it stops, the deceleration will compensate any forces (and speed) created during the acceleration. Whether this is going on at zero or non-zero speed relative to the rail has no influence.
As we have no external force in the horizontal direction, the centre of mass has to stay unmoved (which requires the wagon to move). At the same time the zero momentum of the water-train system has to be preserved, so unless the water leaves the train with non-zero horizontal speed relative to the wagon, the wagon cannot end up with non-zero horizontal speed relative to the water expelled.
If you are running with the box, and drop the box while running and the box falls through a hole in the floor, then the box has some net momentum. The cart will then have net momentum in the opposite direction, even after you stop running. (Your analysis is similar to most people's first thoughts, so I suggest reading through the other solutions.) – Mark Eichenlaub Dec 12 '10 at 23:29
I did read the other solutions. The problem in saying that I drop the box while running is that the water stops (horizontally) before leaving the wagon. – Lukas Dec 12 '10 at 23:48
I insist that moving all the mass within a wagon to one side cannot affect its final speed on a frictionless track (it can affect position though) and dropping it straight down can affect neither its speed nor its position. – Lukas Dec 13 '10 at 0:06
@Mark Eichenlaub: I carefully read through your answer and I cannot agree with your qualitative analysis. With the nozzle pointing downwards (and this is very important) the only force besides gravitation affecting the wagon is the reaction of the water running down (lifts the left side of the wagon) but this is compensated (save for a very strong flow caused by a pump) by gravity. With no external forces affecting the wagon we only have the displacement of water within the wagon and that cannot accelerate it, let alone in the other direction. – Lukas Dec 13 '10 at 0:17
@Mark: You are right that I did no computation so far. That is because mindless computing without having a good qualitative analysis is useless. But yes, I should probably approach this problem with more scientific methods. I have to admit that if the water is flowing down (wagon relative) from a wagon with non-zero speed (rail relative) the wagon has to go in the opposite direction so that the momentum is preserved. That is really surprising. – Lukas Dec 15 '10 at 11:57
Clearly the water going out of the nozzle does not contribute any horizontal momentum change. Initially the wagon is still and the water flows downward.
The only reason why the wagon could move is that there is a force acting on the right side of the nozzle as the water hits it and its direction is turned towards the floor, thus exerting a force.
But let's think about this. How can we calculate this force? The force is equal to the pressure of the water times the vertical cross section area of the nozzle.
However, the water pressure is the same on the side with the nozzle and the side without. The force on the left side of the nozzle is compensated by an equal force on the right side of the wagon.
The forces on the left are exactly canceled by the ones on the right. F3 is canceled out by -F3 acting on the left side of the tap.
If there were no left side of the tap we would have a net horizontal force (the wagon would be propelled by recoil), but having a left side keeps the wagon still.
It's clear to me that there will be, in real life, second order effects like unbalances in the density of the water which could make the wagon oscillate or move. But the question clearly states that the water remains horizontal (therefore undisturbed) and that Torricelli's law applies. This only happens when the outward flow is so slow that any inhomogeneities in density are second order effects and the water can be treated as to always have a laminar flow.
In any case the system is analogous as standing on a frictionless surface. Short of throwing something outwards, one wouldn't be able to propel. Throwing something downwards wouldn't help.
Edit
To address Mark's and Marek's concern about the conservation of momentum, I can say this:
• the water, internally, initially falls down and gains vertical momentum
• at some point it will necessarily turn left. The momentum will not change in magnitude, but in direction: from down to left. This creates a reaction force on the bottom and on the right side.
• at the final point (the nozzle): the water will turn down again, from left to down. This creates a reaction force on the top of the nozzle and on the left of the nozzle
• since the water flows vertically w.r.t. wagon, it has zero horizontal momentum at the exit point
• this constraint implies that the left hand force and the right hand force compensate.
• instead, there will be a torque. I have not calculated this, but depending on the length of the tube, this torque could eventually make the wagon tilt (if the weight of the tube remains negligible). Normally, though, the torque will not have a movement effect; it would merely move the center of mass towards the right.
To understand this a bit better:
• Imagine the same problem without the nozzle
• Water flows freely left, horizontally.
• The water flows out with a speed of $v(t)=\sqrt{2gh(t)}$ and a horizontal momentum that can be calculated via the parameter $s$ and $v(t)$, and a vertical momentum of zero
• The wagon's horizontal momentum changes by the same amount, opposite sign
• The wagon recoils right
Now
• Re-imagine the original problem, with the nozzle
• Water flows freely downwards, vertically
• The water flows out with a speed of $v(t)=\sqrt{2gh(t)}$ and a vertical momentum that can be calculated via the parameter $s$ and $v(t)$ and a horizontal momentum of zero
• The wagon's horizontal momentum changes by the same amount, which is zero
• The wagon stays still
I think the cart will move (cf. my answer), even if the final velocity will be 0. – Frédéric Grosshans Dec 6 '10 at 17:16
@Sklivvz Your answer says that the cart does not move, but the water moves from the center of the cart (on average) to someplace to the side of that. Hence, if the cart doesn't move, the center of mass of the system does move. Since it starts out stationary and there is no external force in the horizontal direction, this is impossible. – Mark Eichenlaub Dec 7 '10 at 8:43
@Sklivvz: you argument is definitely incorrect. You say water doesn't have horizontal momentum. Well, that's true for the water that has already left the wagon. But it isn't true for water that is still flowing inside the wagon. You completely ignored this in your analysis. You can't just arbitrarily reduce this infinite DoF system to one or two DoF and expect that it will be correct. – Marek Dec 7 '10 at 11:00
@Sklivvz I said "burned", but I meant "expelled". So if the fuel is expelled uniformly, then I agree there won't be a force, just like there wouldn't be one in this problem if the water leaked out uniformly from the floor. I was imagining the fuel is expelled from near the bottom - sorry that wasn't clear. As for an upward force, it is just saying that if there were a ball inside the rocket instead of fuel, and the ball accelerated down, the rocket would get lighter (experience an upward force) during the acceleration. Same for anything with mass accelerating down, including fuel. – Mark Eichenlaub Dec 7 '10 at 11:10
@Sklivvz: but the flow of the water (and associated momentum) is definitely not second-order. It's arguably the most important effect in the whole problem. – Marek Dec 7 '10 at 20:55
A quantitative answer
The three main conservation laws of fluid mechanics are
1. Conservation of mass
2. Conservation of momentum
3. Conservation of energy
Reference
Between the time $t$ and $t+\mathrm{d}t$ a mass of water $\mathrm{d}m(t)$ escapes through the nozzle. The mass escapes at a speed governed by Torricelli's law - obtained through 1. and 3.:
$$v(t) = \sqrt{2gh(t)}$$
The direction of the water is determined by the inclination of the nozzle $\theta$ which we may generalize to vary from $0$ radians (horizontal, pointing left) to $\frac{\pi}{2}$ (vertical, pointing down).
$$\mathbf{v}(t) = -v(t) \pmatrix{ \sin \theta \\ \cos \theta }$$
The momentum of the water flowing out is determined by
$$\mathbf{p}(t) = m(t) \mathbf{v}(t) = -m(t)v(t) \pmatrix{ \sin \theta \\ \cos \theta }$$
$$\mathbf{p}(t) = p(t) \pmatrix{ \sin \theta \\ \cos \theta }$$
Since the fluid is incompressible and mass is conserved, the mass flowing out corresponds to an equivalent decrease in the amount of water from the top.
$$\mathrm{d}m(t) = \rho S \mathrm{d}h(t)$$
But also, the water will flow at a speed $v(t)$ at the nozzle, so the water that escapes is
$$\mathrm{d}m(t) = \rho s v(t)\mathrm{d}t$$
Therefore
$$S \mathrm{d}h(t) = s v(t)\mathrm{d}t$$
or
$$\mathrm{d}h(t) = \frac{s}{S}v(t)\mathrm{d}t$$
Plugging in the equation for $v(t)$ and introducing $\sigma=\frac{s}{S}$
$$\mathrm{d}h(t) = -\sigma \sqrt{2g h(t)} \mathrm{d}t$$
Solving this first-order nonlinear ordinary differential equation and using $h_0 = h(t=0)$ and $v_0 = v(t=0) = \sqrt{2gh_0}$,
$$h(t) = \frac{1}{2}g\sigma^2t^2 - v_0\sigma t+h_0 \approx h_0 - v_0\sigma t$$
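Incidentally, the quadratic above is the exact solution of $\mathrm{d}h(t) = -\sigma \sqrt{2g h(t)}\, \mathrm{d}t$ up to the emptying time, since it factors as $(\sqrt{h_0} - \sigma\sqrt{g/2}\, t)^2$; only the final step drops the $t^2$ term. A quick check with invented numbers:

```python
import math

# Invented values: gravity, initial height, area ratio sigma = s/S
g, h0, sigma = 9.81, 2.0, 1e-3
v0 = math.sqrt(2 * g * h0)

def h(t):
    """The quadratic solution for the water height."""
    return 0.5 * g * sigma**2 * t**2 - v0 * sigma * t + h0

t_empty = v0 / (g * sigma)   # root of h(t)

# compare h'(t) (central difference, exact for a quadratic) with -sigma*sqrt(2 g h)
errs = []
for t in (0.0, 0.3 * t_empty, 0.7 * t_empty):
    lhs = (h(t + 1e-6) - h(t - 1e-6)) / 2e-6
    rhs = -sigma * math.sqrt(2 * g * h(t))
    errs.append(abs(lhs - rhs))
```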
This lets us find $v(t)$, $m(t)$ and $\mathbf{p}(t)$:
$$v(t) \approx -\sqrt{2gh_0 - 2gv_0\sigma t}$$
$$\frac{\mathrm{d}m}{\mathrm{d}t} = \rho s v(t) \approx - \rho s \sqrt{2gh_0 - 2gv_0\sigma t}$$
Which is solved by the (approximate) solution:
$$m(t) \approx C +\frac{2 \rho s \sqrt{2g} (h_0-\sigma v_0 t)^{\frac{3}{2}}}{3 \sigma v_0}$$
Note: an analytical solution exists, but it's really ugly
To calculate $C$ we must use the condition that when all the water is gone, $m(t) = 0$. To do so we can solve:
$$0=h(t)\approx h_0 - v_0\sigma t \implies t_f \approx \frac{h_0}{v_0 \sigma}$$
then,
$$0 = m(t = t_f) = C +\frac{2 \rho s \sqrt{2g} (h_0-\sigma v_0 \frac{h_0}{v_0 \sigma})^{\frac{3}{2}}}{3 \sigma v_0}\implies C=0$$
therefore
$$m(t) \approx \frac{2 \rho s \sqrt{2g} (h_0-\sigma v_0 t)^{\frac{3}{2}}}{3 \sigma v_0}$$
finally, the magnitude of the linear momentum is given by:
$$p(t) = m(t)v(t) \approx -\frac{2 \rho s \sqrt{2g} (h_0-\sigma v_0 t)^{\frac{3}{2}}}{3 \sigma v_0} \sqrt{2gh_0 - 2gv_0\sigma t}$$
Let's see the effect of the two components of $\mathbf{p}$. The horizontal component propels the wagon by reaction; the vertical component creates a torque that pushes the center of mass to the right - note that the flow pushes the center of mass to the left.
If $\theta = 0$, all the linear momentum is horizontal. There is no torque, the wagon will move by reaction and the center of mass doesn't move because the water flowing out and the wagon move in opposite directions:
$p_{wagon} = p(t)$
If $\theta = \frac{\pi}{2}$ all the linear momentum is vertical. There will be a torque but no horizontal movement, as there is no horizontal momentum. This implies that the contributions to the center of mass by the water flowing out and the torque must cancel out.
Finally, if $\theta$ has a middle value, a composition of the two behaviours will occur.
As regards the problem, $\theta = \frac{\pi}{2}$, and therefore the wagon will not move.
@Martin, what kind of comment is that? :-( – Sklivvz♦ Dec 11 '10 at 11:47
That's not an explanation; either you can point out an error in my line of thought or you can't. I am using the assumptions you provided, like Torricelli's law. – Sklivvz♦ Dec 13 '10 at 10:04
There is a fundamental error in your analysis. Your starting point is not the law of conservation of horizontal momentum of the system (the wagon with water + leaked water). I do not have anything more to add. – Martin Gales Dec 13 '10 at 12:18
I use and verify conservation of momentum at the end of the answer, starting with "Let's see the effect of the two components of p.". I've postponed it because my other answer is all about conservation of momentum (and it gets us to the exact same conclusion). – Sklivvz♦ Dec 13 '10 at 15:37
@Sklivvz Neither of your answers conserve momentum. We talked about this extensively in chat, and there you said you thought the cart moved. What has changed? (The problem with this particular answer is that it begs the question.) – Mark Eichenlaub Dec 14 '10 at 13:52
http://mathoverflow.net/questions/79659/how-to-isolate-fx-in-fxafxa-times-gx/79714
## How to isolate $f(x)$ in $f(x+a)=f(x)+a\times g(x)$?
$a \in \mathbb{R}$
$f:\mathbb{R} \rightarrow \mathbb{R}$
$g:\mathbb{R} \rightarrow \mathbb{R}$
For generic functions $f$ and $g$, how can one isolate $f(x)$ in the equation below?
$f(x+a)=f(x)+a\times g(x)$
I tried to use the Fourier transform and the inverse Fourier transform, but it looks like this doesn't work very well.
$\mathcal{F}[f(x - a)](\xi) = e^{-2\pi i a \xi} \hat{f}(\xi)$
$\hat{f}(\xi) = \int_{-\infty}^{\infty}f(x) e^{-2\pi i x\xi}\, dx \quad$ (Fourier transform)
I tried the Z-transform too, but again, it didn't work very well.
Does your equation have to hold for every $x\in\mathbb{R}$? Most importantly, what do you mean by "isolate"? – Qfwfq Oct 31 2011 at 22:45
You are actually trying to calculate the indefinite sum of terms defined by $f$ (just look at the case $a=1$), so you really should narrow down the class of functions to have summation algorithms. – thei Oct 31 2011 at 23:01
@Qfwfq, by "isolate" I mean find $f$, i.e. $f(x)=\cdots$, where $\cdots$ is something using $g$ and $a$. – GarouDan Oct 31 2011 at 23:02
This should not have the special-functions tag – Yemon Choi Nov 1 2011 at 19:18
I think using DiracDelta counts as special functions, no? And when using Fourier transforms, for example, DiracDelta appears frequently. – GarouDan Nov 1 2011 at 21:10
## 3 Answers
The answer is not unique: you can add any function of period $a$ to $f$. This is what the singularities are trying to tell you. In a distributional sense, the Fourier transform of a function of period $a$ is supported exactly on the zeros of $e^{2 \pi i a \xi}-1$.
@shrdlu Yes, I agree, but if $f$ is unknown, how can one find at least one solution for $f$? – GarouDan Nov 1 2011 at 14:09
If you're not concerned with continuity, integrability, or any of the niceties of real analysis (and there's nothing in the problem that says you are), then, given $a\ne 0$ and $g$, you can let $f$ be any function on the interval $[0,|a|)$ and simply extend it to all real numbers by repeated applications of the given functional equation (written in the form $f(x) = f(x+a) - ag(x)$ to extend it in the direction opposite to the sign of $a$). If $a=0$, it's clear $f$ can be anything.
Addition: Let me be somewhat more explicit. If $a=1$, we can let $f$ be identically zero on $[0,1)$ and equal to $\sum_{k=1}^{[x]} g(x-k)$ for $x\ge1$, with a similar formula for $x<0$.
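Here is a minimal sketch of this construction in code, for $a=1$, with the arbitrary choice $f\equiv 0$ on $[0,1)$ and a made-up example $g$ (only the extension to $x \ge 0$ is shown; negative $x$ would use $f(x) = f(x+1) - g(x)$):

```python
def extend(g, f0=lambda x: 0.0):
    """Extend an arbitrary f0 on [0,1) to [0, inf) via f(x) = f(x-1) + g(x-1)."""
    def f(x):
        if 0 <= x < 1:
            return f0(x)
        return f(x - 1) + g(x - 1)
    return f

g = lambda x: -1.0 / (x * (x + 1))  # example g; stay off its poles at 0 and -1
f = extend(g)

x = 2.5
print(f(x + 1) - (f(x) + g(x)))  # 0.0: the functional equation holds by construction
```

Any choice of `f0` works here, which is exactly the non-uniqueness the other answers point out.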
Didn't understand very well. Can you give an example? For example, using $a=1$ and $g(x)=\frac{-1}{x(x-1)}$, which Mathematica doesn't solve. – GarouDan Nov 2 2011 at 0:04
@GarouDan: Barry Cipra's point is that not every function is of the nice form that Mathematica would give you - not every function is given by a formula. If you are only interested in functions that are "physically realistic" in some sense then you should probably make this clear in the question – Yemon Choi Nov 2 2011 at 2:38
Great answer! :) – BR Nov 2 2011 at 14:15
Just as you said: $e^{2\pi i a \xi} \hat{f}(\xi)= \hat{f}(\xi) +a \hat{g}(\xi)$. In this way you get $\hat{f}(\xi)$ and you can use it to find $f(x)$.
In Mathematica, if we try $a=1$ and $g(x)=\frac{-1}{x(x+1)}$: $f(x)=a (\mathcal{F}_{\xi }^{-1}[\frac{\mathcal{F}_x[G(x)](\xi )}{-1+e^{2 i \pi a \xi }}])(x)$. We have no answer, but it is easy to see the answer: $f(x)=\frac{1}{x}$, because $f(x+1)=f(x)+1\times g(x) \iff \frac{1}{x+1}-\frac{1}{x}=g(x) \iff \frac{-1}{x(x+1)}=g(x)$ – GarouDan Oct 31 2011 at 23:17
@GarouDan: Shall I write more details? – Math-player Oct 31 2011 at 23:18
I don't know what's wrong with Mathematica, my answer is clear. – Math-player Oct 31 2011 at 23:21
I was going to say "unfortunately I don't know a good source for this, as I learnt it from Folland's excellent Lectures on Partial Differential Equations, which is very hard to find nowadays", but I see that the Tata Institute has made it available online (and re-typeset)! math.tifr.res.in/~publ/ln/tifr70.pdf – BR Nov 1 2011 at 2:49
I mean, what the poles are expressing is that you only know $f$ up to a periodic function with period $a$. It's up to you to coax a reasonable closed form out of it, and Fourier shouldn't work in general. In fact, nothing should. – Will Sawin Nov 1 2011 at 5:00
http://physics.stackexchange.com/questions/52337/question-about-classical-transport-theory/52380
# Question about Classical Transport Theory
With a distribution function of the form $f=f_{0} + \vec{v} \cdot \vec{g}$, one can obtain the current density. My question is about $\vec{g}$; we assume a general solution to $\vec{g}$ of the form $\vec{g}= \alpha \vec{E}+\beta \vec{B} + \gamma(\vec{B} \times \vec{E})$. By substitution into $$\frac{\mu}{c}(\vec{v} \times \vec{B})\cdot \vec{g} + \vec{v} \cdot \vec{g} = e \tau \frac{\partial f_{0}}{\partial \epsilon} \vec{E} \cdot \vec{v}$$ and by equating both sides and equating the coefficients of $\vec{v} \cdot \vec{E}$, $\vec{v} \cdot \vec{B}$, and $\vec{v} \cdot (\vec{B} \times \vec{E})$ we are supposed to get $$\vec{g} = \frac{\partial f_{0}}{\partial \epsilon} \frac{e \tau}{1+ (\omega_{c}\tau)^{2}}[\vec{E}+(\omega_{c}\tau)^{2}(\hat{z}\cdot\vec{E})\hat{z}+\omega_{c}\tau(\vec{E} \times \hat{z})]$$
My question is how? I'm afraid this isn't apparent to me. I don't know where to start to try and prove this!
What is $\omega_c$? I would guess the cyclotron frequency. Also it looks like the magnetic field is assumed to be directed along $\hat{z}$. Is this true? – Vijay Murthy Jan 27 at 23:13
You are correct in both regards. The B field is indeed assumed directed along $\hat{z}$ and $\omega_{c}$ is the cyclotron frequency. – Dylan Sabulsky Jan 27 at 23:16
## 1 Answer
When we substitute the expression for $\vec{g}$ in the equation you wrote, we can do a number of simplifications, using identities that involve the cross product. The first term contains a dot product of $\vec{v} \times \vec{B}$ with $\vec{g}$, in which we can use:
$$(\vec{v} \times \vec{B}) \cdot \vec{B} = 0 \\ (\vec{v} \times \vec{B}) \cdot \vec{E} = (\vec{B} \times \vec{E}) \cdot \vec{v} \\ (\vec{v} \times \vec{B}) \cdot (\vec{B} \times \vec{E}) = (\vec{v} \cdot \vec{B})(\vec{B} \cdot \vec{E}) - B^2 ~\vec{v} \cdot \vec{E}$$
After the substitution we get:
$$\frac{\mu}{c} \left[ \alpha \vec{v} \cdot (\vec{B} \times \vec{E}) + \gamma (\vec{B} \cdot \vec{E}) (\vec{v} \cdot \vec{B}) -\gamma B^2 ~ \vec{v} \cdot \vec{E} \right] +\alpha ~ \vec{v} \cdot \vec{E} + \beta ~ \vec{v} \cdot \vec{B} + \gamma ~ \vec{v} \cdot(\vec{B} \times \vec{E}) = e \tau \frac{\partial f_0}{\partial \epsilon} \vec{v} \cdot \vec{E}$$
We can now bring all the terms to the left side, and take out the common factor $\vec{v}$:
$$\vec{v} \cdot \left\{ \frac{\mu}{c} \left[ \alpha (\vec{B} \times \vec{E}) + \gamma (\vec{B} \cdot \vec{E}) \vec{B} -\gamma B^2 ~ \vec{E} \right] +\alpha ~ \vec{E} + \beta ~ \vec{B} + \gamma ~ \vec{B} \times \vec{E} - e \tau \frac{\partial f_0}{\partial \epsilon} \vec{E} \right\} =0$$
Since this equation holds for all $\vec{v}$, we know that the expression in the curly brackets needs to vanish. Assuming $\vec{E}$ and $\vec{B}$ are not parallel, the three vectors $\vec{E}$, $\vec{B}$ and $\vec{B} \times \vec{E}$ are linearly independent, so if a combination of them vanishes then each coefficient in the combination needs to vanish. In this case:
$$-\frac{\mu}{c} \gamma B^2 +\alpha -e \tau \frac{\partial f_0}{\partial \epsilon}=0 ~~~~~ \text{(the coefficient of $\vec{E}$)} \\ \frac{\mu}{c} \gamma ~ \vec{B} \cdot \vec{E} + \beta =0 ~~~~~ \text{(the coefficient of $\vec{B}$)} \\ \frac{\mu}{c} \alpha + \gamma =0 ~~~~~ \text{(the coefficient of $\vec{B} \times \vec{E}$)}$$
We have a system of three equations we can solve for $\alpha$, $\beta$ and $\gamma$. After solving we substitute the solution in the expression for $\vec{g}$, and using the definition of $\omega_c$ and the fact that $\vec{B} = B \hat{z}$ we get the desired result (I leave these last steps to you).
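The remaining algebra can be checked symbolically. A sketch in SymPy, writing `mu_c` for $\mu/c$ and `F` for $e\tau\,\partial f_0/\partial\epsilon$, and assuming (as the final formula suggests) that $\omega_c\tau = \mu B/c$:

```python
import sympy as sp

alpha, beta, gamma = sp.symbols('alpha beta gamma')
mu_c, B, BdotE, F = sp.symbols('mu_c B BdotE F')  # mu_c = mu/c, F = e*tau*df0/deps

# The three coefficient equations derived above:
eqs = [
    sp.Eq(-mu_c * gamma * B**2 + alpha, F),  # coefficient of E
    sp.Eq(mu_c * gamma * BdotE + beta, 0),   # coefficient of B
    sp.Eq(mu_c * alpha + gamma, 0),          # coefficient of B x E
]
sol = sp.solve(eqs, [alpha, beta, gamma])

# alpha = F / (1 + (mu_c B)^2), gamma = -mu_c * alpha, beta = mu_c^2 (B.E) alpha,
# which reproduces the quoted g after substituting B = B z_hat.
assert sp.simplify(sol[alpha] - F / (1 + mu_c**2 * B**2)) == 0
assert sp.simplify(sol[gamma] + mu_c * sol[alpha]) == 0
```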
awesome, thanks Joe! – Dylan Sabulsky Jan 28 at 14:19
http://wikis.controltheorypro.com/index.php?title=Category:MIMO
Category:MIMO
### From ControlTheoryPro.com
## 1 Introduction to Multiple Input-Multiple Output
Linear control theory, as taught to undergraduate students, is primarily concerned with Single Input-Single Output (SISO) systems. Many real-world systems are linear and, while technically Multiple Input-Multiple Output (MIMO), their axes are so weakly coupled that the coupling can be neglected. As a result the system can be approximated as SISO. An example of this is the hovering helicopter: at hover, the pitch attitude and horizontal speed can be decoupled because they should both be nearly zero, minimizing any coupling between them.
MIMO systems are typically more complex than SISO systems. The interactions of the dynamics make simple linear predictions of system performance difficult. Singular value decomposition (SVD) is the MIMO equivalent of a Bode diagram for a SISO system. From an SVD, frequency-domain techniques can be applied. For MIMO systems, the SISO frequency-domain techniques require some adjustment.
SISO techniques exist for transfer functions and state equations. The same is true for MIMO. However, most of the effort and focus is on state equations for MIMO and transfer functions for SISO. Everyone has their own favorite way of doing things, however, and there are as many different ways to do things as there are people to do them.
However, the focus in MIMO control system theory appears to be on Optimal and Robust control, which involve designing a controller that minimizes a cost function. An example cost function would be the energy required for control (I believe this is the cost function minimized in $H_\infty$ control).
Often these Optimal and Robust techniques require full Observability of all states and so a state estimator is required. The most common estimator is the Kalman Filter. Estimators are a large topic area and deserve their own subcategory.
What follows are some design goals to keep in mind.
## 2 Big Picture
Shorthand
$P$ is shorthand for $P(s)$.
Goal
Design the pre-filter ($W$) and controller ($K$) on the basis of a nominal model $P_0$ for the plant $P$ such that the feedback system exhibits the following properties:
1. Stability: if the system is perturbed then the system will return to equilibrium
2. Small tracking error
• Good low frequency command following
• Good low frequency disturbance attenuation
• Good high frequency noise attenuation
Note
The stated goals must be achieved in the presence of the following sources of uncertainty
• $P_0 \ne P$
• $H$ is not known exactly
• $d_i$ and $d_o$ are not known exactly
• $n$ is not known exactly
http://www.maths.manchester.ac.uk/~jm/wiki/Representations/Burnside
## Burnside ring
For a finite group G, the Burnside ring Ω(G) of G is defined to be the ring generated by formal differences of isomorphism classes of G-sets. The ring structure is given by disjoint union and Cartesian product of G-sets.
Let $$\mathcal{O}$$ be a single orbit of a finite group G, and let $$x\in \mathcal{O}$$. Then $$\mathcal{O}$$ is isomorphic as a G-set to G/H, where $$H=G_x$$ (the isotropy subgroup of x). If one chose a different 'starting point' $$y\in\mathcal{O}$$, then H would be replaced by a conjugate subgroup. The set of isomorphism classes of G-orbits is in this way in 1-1 correspondence with the set of conjugacy classes of subgroups of G.
The Burnside ring Ω(G) is therefore the Z-module generated by the conjugacy classes of subgroups of G. See the wikipedia page.
Let S be a finite G-set (a set upon which G acts). Then S can be decomposed into a finite disjoint union of G-orbits, and so corresponds to a sum $$S = \sum_i a_i [G/H_i]$$ where the Hi represent the conjugacy classes of subgroups of G, and the ai are (positive) integers.
### Permutation Representation
If G acts on a finite set S, then there is associated a permutation representation (over any ring or field, but let us use Q) as follows. Let V be the vector space whose basis consists of the elements of S. Thus a general element of V is of the form $$\mathbf{v} = \sum_{s\in S}a_s s$$, where the as are elements of the field/ring. Then $$g\in G$$ acts by $$g(\mathbf{v}) = \sum_{s\in S} a_s g(s)$$, which is a permutation of the coefficients as. The matrix representing g then has a 1 at the intersection of the column corresponding to s and the row corresponding to g(s), and 0 at other entries of that row and column.
If we fix a base field, this defines a homomorphism from the Burnside ring Ω(G) to the ring of representations over that field which we denote $$\beta:\Omega(G)\longrightarrow R(G).$$
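As a small illustration (a sketch over the integers rather than an abstract base field, with a hand-picked 3-cycle), the permutation-matrix construction and a check that it respects composition:

```python
import sympy as sp

def perm_matrix(g, n):
    # 1 at the intersection of column s and row g(s), 0 elsewhere.
    M = sp.zeros(n, n)
    for s in range(n):
        M[g[s], s] = 1
    return M

g = {0: 1, 1: 2, 2: 0}               # the 3-cycle (0 1 2)
h = {0: 2, 1: 0, 2: 1}               # its inverse
gh = {s: g[h[s]] for s in range(3)}  # composition g o h (the identity here)

# The construction respects composition: M_g * M_h = M_{g o h} ...
print(perm_matrix(g, 3) * perm_matrix(h, 3) == perm_matrix(gh, 3))  # True
# ... and the trace counts the fixed points (0 for a 3-cycle):
print(perm_matrix(g, 3).trace())  # 0
```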
#### Properties of β
• β is only surjective for the trivial group and the group of order 2;
• β is seldom injective
#### Table of marks
The element in row G/K and column H represents the number of points in the orbit (type) G/K fixed by H. By the definition of the action of G on G/K, a coset gK is fixed by H whenever HgK = gK. But this is equivalent to $g^{-1}Hg < K$, so that $$m(G/K, H) = \# \left\{ gK \in G/K \mid g^{-1}Hg < K \right\}.$$
• In particular, the diagonal elements are $m(G/K,\,K) = |N_G(K)/K|$.
• If H is not conjugate to a subgroup of K then m(G/K, H) = 0.
• If K is a normal subgroup of G and H < K then $$g^{-1}Hg \subset K$$ for all $$g\in G$$, so that $$m(G/K,\,H)=|G/K|$$.
• If G is Abelian then every subgroup is normal and so, again if H < K then $$m(G/K,\,H)=|G/K|$$, otherwise $$m(G/K,\,H)=0$$.
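To make these properties concrete, here is a brute-force computation of the table of marks of $S_3$ (a sketch only; the exhaustive subgroup search is feasible only for tiny groups). Rows and columns run over the four conjugacy classes of subgroups, ordered by size: trivial, order 2, order 3, $S_3$.

```python
from itertools import combinations, permutations

# S3 as permutations of {0,1,2}; compose(p, q) applies q first, then p.
G = [tuple(p) for p in permutations(range(3))]
e = (0, 1, 2)

def compose(p, q):
    return tuple(p[q[i]] for i in range(3))

def inverse(p):
    inv = [0, 0, 0]
    for i in range(3):
        inv[p[i]] = i
    return tuple(inv)

# All subgroups: subsets containing the identity and closed under composition.
subgroups = [frozenset(S)
             for r in (1, 2, 3, 6)
             for S in combinations(G, r)
             if e in S and all(compose(a, b) in S for a in S for b in S)]

def conj(g, H):
    gi = inverse(g)
    return frozenset(compose(compose(g, h), gi) for h in H)

# One representative per conjugacy class of subgroups, ordered by size.
reps = []
for H in sorted(subgroups, key=len):
    if not any(conj(g, H) in reps for g in G):
        reps.append(H)

def mark(K, H):
    # m(G/K, H) = #{ gK : g^{-1} H g contained in K }; the count does not
    # depend on which coset representative g is used.
    cosets = {frozenset(compose(g, k) for k in K) for g in G}
    return sum(1 for C in cosets
               if conj(inverse(next(iter(C))), H) <= K)

table = [[mark(K, H) for H in reps] for K in reps]
print(table)  # [[6, 0, 0, 0], [3, 1, 0, 0], [2, 0, 2, 0], [1, 1, 1, 1]]
```

The output is lower triangular, as the second bullet above predicts, and the diagonal entries are $|N_G(K)/K|$.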
http://aimath.org/textbooks/beezer/CRSsection.html
Column and Row Spaces
Theorem SLSLC showed us that there is a natural correspondence between solutions to linear systems and linear combinations of the columns of the coefficient matrix. This idea motivates the following important definition.
Definition CSM (Column Space of a Matrix) Suppose that $A$ is an $m\times n$ matrix with columns $\set{\vectorlist{A}{n}}$. Then the column space of $A$, written $\csp{A}$, is the subset of $\complex{m}$ containing all linear combinations of the columns of $A$, \begin{equation*} \csp{A}=\spn{\set{\vectorlist{A}{n}}}. \end{equation*}
Some authors refer to the column space of a matrix as the range, but we will reserve this term for use with linear transformations (Definition RLT).
## Column Spaces and Systems of Equations
Upon encountering any new set, the first question we ask is what objects are in the set, and which objects are not? Here's an example of one way to answer this question, and it will motivate a theorem that will then answer the question precisely.
Example CSMCS: Column space of a matrix and consistent systems.
So if we fix the coefficient matrix, and vary the vector of constants, we can sometimes find consistent systems, and sometimes inconsistent systems. The vectors of constants that lead to consistent systems are exactly the elements of the column space. This is the content of the next theorem, and since it is an equivalence, it provides an alternate view of the column space.
Theorem CSCS (Column Spaces and Consistent Systems) Suppose $A$ is an $m\times n$ matrix and $\vect{b}$ is a vector of size $m$. Then $\vect{b}\in\csp{A}$ if and only if $\linearsystem{A}{\vect{b}}$ is consistent.
Proof.
This theorem tells us that asking if the system $\linearsystem{A}{\vect{b}}$ is consistent is exactly the same question as asking if $\vect{b}$ is in the column space of $A$. Or equivalently, it tells us that the column space of the matrix $A$ is precisely those vectors of constants, $\vect{b}$, that can be paired with $A$ to create a system of linear equations $\linearsystem{A}{\vect{b}}$ that is consistent.
Employing Theorem SLEMM we can form the chain of equivalences
\begin{align*} \vect{b}\in\csp{A} \iff \linearsystem{A}{\vect{b}}\text{ is consistent} \iff A\vect{x}=\vect{b}\text{ for some }\vect{x} \end{align*}
Thus, an alternative (and popular) definition of the column space of an $m\times n$ matrix $A$ is
\begin{align*} \csp{A}&= \setparts{ \vect{y}\in\complex{m} }{ \vect{y}=A\vect{x}\text{ for some }\vect{x}\in\complex{n} } = \setparts{A\vect{x}}{\vect{x}\in\complex{n}} \subseteq\complex{m} \end{align*}
We recognize this as saying create all the matrix vector products possible with the matrix $A$ by letting $\vect{x}$ range over all of the possibilities. By Definition MVP we see that this means take all possible linear combinations of the columns of $A$ --- precisely the definition of the column space (Definition CSM) we have chosen.
Notice how this formulation of the column space looks very much like the definition of the null space of a matrix (Definition NSM), but for a rectangular matrix the column vectors of $\csp{A}$ and $\nsp{A}$ have different sizes, so the sets are very different.
Given a vector $\vect{b}$ and a matrix $A$ it is now very mechanical to test if $\vect{b}\in\csp{A}$. Form the linear system $\linearsystem{A}{\vect{b}}$, row-reduce the augmented matrix, $\augmented{A}{\vect{b}}$, and test for consistency with Theorem RCLS. Here's an example of this procedure.
Example MCSM: Membership in the column space of a matrix.
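As an illustration of the mechanical test (with a made-up singular matrix, not the matrix of the example above), consistency of $\linearsystem{A}{\vect{b}}$ can be phrased as "appending $\vect{b}$ does not raise the rank":

```python
import sympy as sp

A = sp.Matrix([[1, 2, 3],
               [0, 1, 1],
               [1, 3, 4]])   # third column = first + second, so C(A) is only a plane

def in_column_space(A, b):
    # b is in C(A) iff the system with augmented matrix [A | b] is consistent
    # (Theorem CSCS), i.e. iff appending b to A does not increase the rank.
    return A.row_join(b).rank() == A.rank()

print(in_column_space(A, sp.Matrix([5, 2, 7])))  # True: this b is col1 + 2*col2
print(in_column_space(A, sp.Matrix([1, 0, 0])))  # False
```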
Theorem CSCS completes a collection of three theorems, and one definition, that deserve comment. Many questions about spans, linear independence, null space, column spaces and similar objects can be converted to questions about systems of equations (homogeneous or not), which we understand well from our previous results, especially those in Chapter SLE:Systems of Linear Equations. These previous results include theorems like Theorem RCLS which allows us to quickly decide consistency of a system, and Theorem BNS which allows us to describe solution sets for homogeneous systems compactly as the span of a linearly independent set of column vectors.
The table below lists this definition and the three theorems, along with a brief reminder of each statement and an example of how it is used.
| | Synopsis | Example |
|---|---|---|
| Definition NSM | Null space is solution set of homogeneous system | General solution sets described by Theorem PSPHS |
| Theorem SLSLC | Solutions for linear combinations with unknown scalars | Deciding membership in spans |
| Theorem SLEMM | System of equations represented by matrix-vector product | Solution to $\linearsystem{A}{\vect{b}}$ is $\inverse{A}\vect{b}$ when $A$ is nonsingular |
| Theorem CSCS | Column space vectors create consistent systems | Deciding membership in column spaces |
## Column Space Spanned by Original Columns
So we have a foolproof, automated procedure for determining membership in $\csp{A}$. While this works just fine a vector at a time, we would like to have a more useful description of the set $\csp{A}$ as a whole. The next example will preview the first of two fundamental results about the column space of a matrix.
Example CSTW: Column space, two ways.
We will now formalize the previous example, which will make it trivial to determine a linearly independent set of vectors that will span the column space of a matrix, and is constituted of just columns of $A$.
Theorem BCS (Basis of the Column Space) Suppose that $A$ is an $m\times n$ matrix with columns $\vectorlist{A}{n}$, and $B$ is a row-equivalent matrix in reduced row-echelon form with $r$ nonzero rows. Let $D=\{d_1,\,d_2,\,d_3,\,\ldots,\,d_r\}$ be the set of column indices where $B$ has leading 1's. Let $T=\set{\vect{A}_{d_1},\,\vect{A}_{d_2},\,\vect{A}_{d_3},\,\ldots,\,\vect{A}_{d_r}}$. Then
1. $T$ is a linearly independent set.
2. $\csp{A}=\spn{T}$.
Proof.
This is a nice result since it gives us a handful of vectors that describe the entire column space (through the span), and we believe this set is as small as possible because we cannot create any more relations of linear dependence to trim it down further. Furthermore, we defined the column space (Definition CSM) as all linear combinations of the columns of the matrix, and the elements of the set $T$ are still columns of the matrix (we won't be so lucky in the next two constructions of the column space).
Procedurally this theorem is extremely easy to apply. Row-reduce the original matrix, identify $r$ columns with leading 1's in this reduced matrix, and grab the corresponding columns of the original matrix. But it is still important to study the proof of Theorem BS and its motivation in Example COV which lie at the root of this theorem. We'll trot through an example all the same.
Example CSOCD: Column space, original columns, Archetype D.
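A sketch of this procedure with a made-up matrix; SymPy's `rref` conveniently returns the pivot column indices:

```python
import sympy as sp

A = sp.Matrix([[1, 2, 3],
               [0, 1, 1],
               [1, 3, 4]])   # third column = first + second, so the rank is 2

R, pivots = A.rref()                  # reduced row-echelon form, pivot column indices
basis = [A.col(j) for j in pivots]    # columns of the ORIGINAL matrix (Theorem BCS)

print(pivots)   # (0, 1): the leading 1's fall in the first two columns
```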
## Column Space of a Nonsingular Matrix
Let's specialize to square matrices and contrast the column spaces of the coefficient matrices in Archetype A and Archetype B.
Example CSAA: Column space of Archetype A.
Example CSAB: Column space of Archetype B.
Example CSAA and Example CSAB together motivate the following equivalence, which says that nonsingular matrices have column spaces that are as big as possible.
Theorem CSNM (Column Space of a Nonsingular Matrix) Suppose $A$ is a square matrix of size $n$. Then $A$ is nonsingular if and only if $\csp{A}=\complex{n}$.
Proof.
With this equivalence for nonsingular matrices we can update our list, Theorem NME3.
Theorem NME4 (Nonsingular Matrix Equivalences, Round 4) Suppose that $A$ is a square matrix of size $n$. The following are equivalent.
1. $A$ is nonsingular.
2. $A$ row-reduces to the identity matrix.
3. The null space of $A$ contains only the zero vector, $\nsp{A}=\set{\zerovector}$.
4. The linear system $\linearsystem{A}{\vect{b}}$ has a unique solution for every possible choice of $\vect{b}$.
5. The columns of $A$ are a linearly independent set.
6. $A$ is invertible.
7. The column space of $A$ is $\complex{n}$, $\csp{A}=\complex{n}$.
Proof.
## Row Space of a Matrix
The rows of a matrix can be viewed as vectors, since they are just lists of numbers, arranged horizontally. So we will transpose a matrix, turning rows into columns, so we can then manipulate rows as column vectors. As a result we will be able to make some new connections between row operations and solutions to systems of equations. OK, here is the second primary definition of this section.
Definition RSM (Row Space of a Matrix) Suppose $A$ is an $m\times n$ matrix. Then the row space of $A$, $\rsp{A}$, is the column space of $\transpose{A}$, i.e. $\rsp{A}=\csp{\transpose{A}}$.
Informally, the row space is the set of all linear combinations of the rows of $A$. However, we write the rows as column vectors, thus the necessity of using the transpose to make the rows into columns. Additionally, with the row space defined in terms of the column space, all of the previous results of this section can be applied to row spaces.
Notice that if $A$ is a rectangular $m\times n$ matrix, then $\csp{A}\subseteq\complex{m}$, while $\rsp{A}\subseteq\complex{n}$ and the two sets are not comparable since they do not even hold objects of the same type. However, when $A$ is square of size $n$, both $\csp{A}$ and $\rsp{A}$ are subsets of $\complex{n}$, though usually the sets will not be equal (but see exercise CRS.M20).
Example RSAI: Row space of Archetype I.
The row space would not be too interesting if it was simply the column space of the transpose. However, when we do row operations on a matrix we have no effect on the many linear combinations that can be formed with the rows of the matrix. This is stated more carefully in the following theorem.
Theorem REMRS (Row-Equivalent Matrices have equal Row Spaces) Suppose $A$ and $B$ are row-equivalent matrices. Then $\rsp{A}=\rsp{B}$.
Proof.
Example RSREM: Row spaces of two row-equivalent matrices.
Theorem REMRS is at its best when one of the row-equivalent matrices is in reduced row-echelon form. The vectors that correspond to the zero rows can be ignored. (Who needs the zero vector when building a span? See exercise LI.T10.) The echelon pattern insures that the nonzero rows yield vectors that are linearly independent. Here's the theorem.
Theorem BRS (Basis for the Row Space) Suppose that $A$ is a matrix and $B$ is a row-equivalent matrix in reduced row-echelon form. Let $S$ be the set of nonzero columns of $\transpose{B}$. Then
1. $\rsp{A}=\spn{S}$.
2. $S$ is a linearly independent set.
Proof.
Example IAS: Improving a span.
Theorem BRS and the techniques of Example IAS will provide yet another description of the column space of a matrix. First we state a triviality as a theorem, so we can reference it later.
Theorem CSRST (Column Space, Row Space, Transpose) Suppose $A$ is a matrix. Then $\csp{A}=\rsp{\transpose{A}}$.
Proof.
So to find another expression for the column space of a matrix, build its transpose, row-reduce it, toss out the zero rows, and convert the nonzero rows to column vectors to yield an improved set for the span construction. We'll do Archetype I, then you do Archetype J.
Example CSROI: Column space from row operations, Archetype I.
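A sketch of this transpose recipe with a small made-up matrix:

```python
import sympy as sp

A = sp.Matrix([[1, 2, 3],
               [0, 1, 1],
               [1, 3, 4]])   # third column = first + second

# Column space via the row space of the transpose (Theorems CSRST and BRS):
R, _ = A.T.rref()
basis = [R.row(i).T for i in range(A.rank())]  # nonzero rows, back to column vectors

print(basis[0].T, basis[1].T)  # the vectors (1, 0, 1) and (0, 1, 1)
```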
http://mathhelpforum.com/number-theory/204096-i-need-help-finishing-gcd-proof.html
# Thread:
1. ## I need help finishing a gcd proof
the question was to prove:
if m and n are positive integers and gcd(m, n) = d, then gcd(2^m - 1, 2^n - 1) = 2^d - 1.
my approach was the following:
given gcd(m,n) = d, then d|m and d|n; also m = dp and n = dq for some integers p and q.
so, gcd(2^n - 1, 2^m - 1) = gcd(2^(dq) - 1, 2^(dp) - 1)
2^(dq) - 1 = (2^d - 1)(2^(d(q-1)) + 2^(d(q-2)) + ... + 2^d + 1) = (2^d - 1)(s) for some integer s.
2^(dp) - 1 = (2^d - 1)(2^(d(p-1)) + 2^(d(p-2)) + ... + 2^d + 1) = (2^d - 1)(t) for some integer t.
so (2^d - 1) is a common divisor of 2^n - 1 and 2^m - 1; however, I do not know how to show that it is gcd(2^n - 1, 2^m - 1), or how to show that
gcd(s,t) = 1.
my instructor suggested proof by induction: show the result is true for m + n < s, then true for m + n = s. I cannot see how to start the proof
using his suggestion. Any help would be appreciated.
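An editorial aside (not part of the original thread): before hunting for a proof, the identity can be sanity-checked numerically, which at least rules out a misstatement:

```python
from math import gcd

# Sanity check of the claim gcd(2^m - 1, 2^n - 1) = 2^gcd(m,n) - 1
# over small exponents (a check, not a proof).
for m in range(1, 30):
    for n in range(1, 30):
        assert gcd(2**m - 1, 2**n - 1) == 2**gcd(m, n) - 1

print("identity holds for all 1 <= m, n < 30")
```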
2. ## Re: I need help finishing a gcd proof
Hey bskcase98.
If something is a greatest divisor (let alone a greatest common divisor), then it means that all divisors must be less than that divisor. In other words if you have n and the greatest divisor of n is a then if n = ab then b <= a. From this can you show that this holds?
3. ## Re: I need help finishing a gcd proof
You've proven that, if $d = gcd(a,b)$, then $(2^d - 1)$ divides the $gcd((2^a - 1), (2^b-1))$.
If you could prove that $gcd((2^a - 1), (2^b-1))$ divides $(2^d - 1)$, then you'd be done, since they'd then be equal.
Recall in a previous thread of yours called "need help with an Eucledian algorthim and gcd involving factorials and large exponets", I gave a long derivation? In there I wrote:
"Proposition: If d divides $(b^x - 1)$ and d divides $(b^y - 1)$, then d divides $(b^{gcd(x, y)} - 1)$. Rather than work out a proof, I'll run through it step by step for this problem."
Well, now that proposition is exactly what you need. Switching the labelling back to the current problem, you'd be done if you could prove:
Lemma: If $c$ divides $(2^a - 1)$ and $c$ divides $(2^b - 1)$, then $c$ divides $(2^{gcd(a, b)} - 1)$.
If you can prove that Lemma, then, since $c$ = $gcd((2^a - 1), (2^b-1))$ divides both $(2^a - 1)$ and $(2^b - 1)$, it would follow that
$c = gcd((2^a - 1), (2^b-1))$ divides $(2^{gcd(a, b)} - 1)$.
And since you already have shown that $(2^{gcd(a,b)} - 1)$ divides the $gcd((2^a - 1), (2^b-1))$, you'd then be done.
That insane calculation I did in that other problem lays out - perhaps - the way for you to prove the above Lemma.
By the way, that insane calculation is, I believe, called Euclid's Algorithm for finding the gcd.
Just a thought. Good luck.
http://mathoverflow.net/questions/81054/julia-sets-using-other-fields
## Julia sets using other fields
I hope I am forgiven for my noob question. But, does it make sense to think of Julia sets using other fields? More precisely I would like to think of fields in which closed and bounded isn't necessarily a compact set. I am not sure what this will give us, but some results that we know in complex numbers wouldn't hold (e.g. will the Julia sets and the filled Julia sets still remain compact and nonempty?).
-
Silverman's book "The Arithmetic of Dynamical Systems" deals with fields of characteristic p. In that situation there are maps with empty Julia sets and various other properties that differ from the situation over the complex numbers. – Fabian Dreher Nov 16 2011 at 10:19
@Fabian I don't think Silverman's book does any local field of characteristic p. He certainly does the p-adics, but these have characteristic zero. They are also locally compact, so closed and bounded is compact there too. But, yes, sometimes Julia sets are empty and other things change. – Felipe Voloch Nov 16 2011 at 10:22
Indeed. I guess I got carried away by the reductions mod p. – Fabian Dreher Nov 16 2011 at 10:35
I mean, I think, its easy to restrict the julia set to the real algebraic numbers and arrive to some non-compact set. But why would it be interesting to look at Julia sets this way? – Jose Capco Nov 16 2011 at 11:57
More promising, perhaps, would be fields $\mathbb C_p$. Complete metric, algebraically closed, but NOT locally compact. Why not start by studying the maps $z^2+c$ which were so interesting in $\mathbb C$... – Gerald Edgar Nov 16 2011 at 13:41
## 3 Answers
As noted, $\mathbb{Q}_p$ and its finite extensions are locally compact, but the Julia set is often empty. Indeed, in this case the Fatou set is always non-empty, another difference from the complex case. However, since $\mathbb{Q}_p$ isn't algebraically closed, one might best compare dynamics over $\mathbb{Q}_p$ as being analogous in some ways to dynamics over $\mathbb{R}$. So the analogue of $\mathbb{C}$ is $\mathbb{C}_p$, the completion of the algebraic closure of $\mathbb{Q}_p$. Unfortunately, $\mathbb{C}_p$ is not locally compact (and of course, totally disconnected), so one can't use measure-theoretic arguments. For example, it's not easy to make sense of equidistribution. The modern solution is to instead look at Berkovich space. This is a locally compact and connected space that includes $\mathbb{P}^1(\mathbb{C}_p)$ as a sort of boundary. A good introduction to Berkovich spaces is listed below. And in Berkovich space over $\mathbb{C}_p$, we're back to the situation where the Julia set is always non-empty.
I'll mention one other interesting difference between the complex and $p$-adic cases. A famous theorem of Sullivan says that a rational map has no wandering domains in $\mathbb{P}^1(\mathbb{C})$. In opposition to this, Benedetto has constructed rational maps that do have wandering domains in $\mathbb{P}^1(\mathbb{C}_p)$. However, it is not known if wandering domains can exist in $\mathbb{P}^1(\mathbb{Q}_p)$.
• Baker, Matthew; Rumely, Robert, Potential theory and dynamics on the Berkovich projective line. Mathematical Surveys and Monographs, 159. American Mathematical Society, Providence, RI, 2010
-
When you say "other fields", I assume that you're thinking that $\mathbb{C}$ is where "ordinary" Julia sets live. If you're interested only in Julia sets of polynomials (over $\mathbb{C}$) then that's not a bad point of view. On the other hand, if you're interested in Julia sets of rational functions (over $\mathbb{C}$) then the ambient space that you're working with is really `$\mathbb{C} \cup \{\infty\}$`, the Riemann sphere. Of course, this isn't a field at all.
If you have a look at Milnor's book Dynamics in One Complex Variable, you'll see Julia sets developed not for arbitrary fields, but for arbitrary Riemann surfaces. So, the general situation is that you have a Riemann surface $X$ and a holomorphic map $f: X \to X$; any such $f$ has an associated Julia set $J(f) \subseteq X$. Taking $X$ to be the Riemann sphere, this means that $f$ is a rational function and $J(f)$ is the "ordinary" Julia set.
-
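An editorial aside (not part of the original thread): for readers who want the complex baseline concrete, here is a minimal escape-time membership test for the filled Julia set of $z^2+c$ over $\mathbb{C}$; the parameter $c$, the escape radius, and the iteration cap are arbitrary choices:

```python
# Escape-time sketch for the filled Julia set of f(z) = z^2 + c over the
# complex numbers -- the baseline case the question generalizes from.

def in_filled_julia(z, c, max_iter=100, radius=2.0):
    """Heuristic: True if the orbit of z under z -> z^2 + c stays within
    |z| <= radius for max_iter steps."""
    for _ in range(max_iter):
        if abs(z) > radius:
            return False
        z = z * z + c
    return True

c = -1.0  # the "basilica" parameter
print(in_filled_julia(0.0, c))  # 0 -> -1 -> 0 -> ... stays bounded
print(in_filled_julia(2.0, c))  # the orbit of 2 escapes to infinity
```

Over a field like $\mathbb{C}_p$ the same recipe makes formal sense (replace $|\cdot|$ by the $p$-adic absolute value), but, as the answers note, the resulting sets lose compactness and connectedness properties.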
You may also construct Julia sets in the quaternions $\mathbb{H}$. I suspect that not all results about Julia sets in $\mathbb{C}$ carry over, since this algebra is not commutative. I am sorry not to know the details, but I think there are papers on this.
-
http://mathoverflow.net/questions/29006/counterexamples-in-algebra/68123
## Counterexamples in Algebra?
This is certainly related to "What are your favorite instructional counterexamples?", but I thought I would ask a more focused question. We've all seen Counterexamples in Analysis and Counterexamples in Topology, so I think it's time for: Counterexamples in Algebra.
Now, Algebra is quite broad, and I'm new at this, so if I need to narrow this then I will- just let me know. At the moment I'm looking for counterexamples in all areas of algebra: finite groups, representation theory, homological algebra, Galois theory, Lie groups and Lie algebras, etc. This might be too much, so a moderator can change that.
These counterexamples can illuminate a definition (e.g. a projective module that is not free), illustrate the importance of a condition in a theorem (e.g. non-locally compact group that does not admit a Haar measure), or provide a useful counterexample for a variety of possible conjectures (I don't have an algebraic example, but something analogous to the Cantor set in analysis). I look forward to your responses!
You can also add your counter-examples to this nLab page: http://ncatlab.org/nlab/show/counterexamples+in+algebra
(the link to that page is currently "below the fold" in the comment list so I (Andrew Stacey) have added it to the main question)
-
My feeling is that this question is far too broad. – Andy Putman Jun 21 2010 at 23:11
I like that the question is general. I think if it's narrowed too much we won't get as many interesting responses. All of the big list type questions that have been successful have been fairly general, so I don't think it hurts as long as we aren't swarmed with questions like this. – jeremy Jun 22 2010 at 0:23
Meta discussion: meta.mathoverflow.net/discussion/459/… – Andrew Stacey Jun 22 2010 at 7:54
Whilst I like lists of counterexamples, I don't think that MO is an appropriate place for one. I've explained why in the meta discussion (NB: please vote for the comment linking to the meta discussion so that it appears "above the fold"). I think that this would work so much better as a wiki page. So I've started one on the nLab: ncatlab.org/nlab/show/counterexamples+in+algebra Obviously, as I'm not an algebraist I didn't understand everything and have probably left out a lot of information in copying it over. I recommend closing this question and redirecting to that nLab page. – Andrew Stacey Jun 22 2010 at 8:33
Andrew, why not keep the question open, in order to generate the examples here that can then be more sensibly organized on your page? It seems likely to me that you will get a lot of good examples with this question that might otherwise be missed. – Joel David Hamkins Jun 22 2010 at 13:16
## 39 Answers
In the category of rings, epimorphisms do not have to be surjective: $\mathbb{Z}\hookrightarrow \mathbb{Q}$.
-
For more on epimorphisms that are not surjective you can take a look at the article of G.A.Reid "Epimorphisms and surjectivity" (Invent.9, 295-307, (1970)). He discussed this problem in the context of groups, $C^*$ algebras, von-Neumann algebras, (finite dim.) Lie algebras, semisimple Lie algebras, (locally) compact groups. For Hopf algebras you can take a look at the more recent paper arXiv:0907.2881. – Dragos Fratila Jun 25 2010 at 8:05
omg what's an epimorphism then? – unknown (google) Oct 9 2010 at 22:26
$f: A \to B$ is an epimorphism in any category if for every $g,h: B \to C$ $gf=hf$ implies that g=h. – Sean Tilson Oct 10 2010 at 17:34
mathoverflow.net/questions/109/… – Martin Brandenburg Jun 18 2011 at 9:01
I like Lance Small's example of a right but not left Noetherian ring: matrices of the form `$\begin{pmatrix}a & b\\ 0 & c\end{pmatrix}$` where $a\in\mathbb{Z}$ and $b,c\in\mathbb{Q}$.
-
Indeed, this is also an example that shows right global dimension is not equal to left global dimension. The former is 1 while the latter is at least 2 (and exactly 2 IIRC) – David White Jun 18 2011 at 18:30
The ring $A = \prod_{n=1}^{\infty} \mathbb{F}_2$ has some interesting/disturbing properties.
For example, the affine scheme $X := {\rm{Spec}}(A)$ has non-open connected components (since it has infinitely many open points), all local rings on $X$ are noetherian (in fact they're all $\mathbb{F}_2$ since $a^2 = a$ for all elements $a$) even though $A$ is not noetherian, and if $I$ is an ideal that isn't finitely generated then ${\rm{Spec}}(A/I) \hookrightarrow X$ is formally unramified (since closed immersion), finite type, and flat but not étale (since not finitely presented) and not open, in contrast with the noetherian case.
-
+1. I just want to add: $X$ is the Stone-Cech-compactification of the natural numbers. The structure sheaf is the constant sheaf $\mathbb{F}_2$. – Martin Brandenburg Jun 22 2010 at 10:52
$A$ has a proper ideal $I=\oplus_{n=1}^\infty \mathbb F_2$. Since the quotient ring $A/I$ has unit element, Zorn says it has a maximal ideal, but one cannot explicitly produce a maximal ideal. – Abhishek Parab Jun 28 2010 at 8:08
@AAP: Yes, such maximal ideals correspond to free ultrafilters on the natural numbers. – Martin Brandenburg Jun 28 2010 at 10:37
@unknown: unramified = formally unramified + locally of finite presentation. – Martin Brandenburg Jun 18 2011 at 9:01
1) (Nagata) There are noetherian domains of infinte Krull dimension: Localize $k[x_1,x_2,...]$ at the prime ideals $(x_1),(x_2,x_3),(x_4,x_5,x_6),...$.
2) (Malcev) Every commutative cancellative monoid embeds into a group. This is false in the non-commutative case. A very instructive counterexample is given by $\langle a,b,c,d,x,y,u,v : ax=by, cx=dy, au=bv \rangle$.
3) The Theorem of Cantor-Bernstein for sets does not carry over to algebraic structures. For example, the fields $K=\overline{\mathbb{Q}(x_1,x_2,...)}$ (or $K=\mathbb{C}$) and $K(t)$ embed into each other, but they are not isomorphic.
-
A non-abelian group, all of whose subgroups are normal: the quaternion group, $$Q=\langle\thinspace a,b\thinspace|\thinspace a^4=1,a^2=b^2,ab=ba^3\thinspace\rangle$$
-
An exact sequence that does not split: $0 \to \mathbb{Z} \to \mathbb{Z} \to \mathbb{Z}/2\mathbb{Z} \to 0$, where the first map is multiplication by 2.
-
Does anyone think all exact sequences split? – Steven Gubkin Jun 22 2010 at 1:22
No, but I'd say this is a perfectly valid example of "illuminating a definition". The point isn't dispelling false beliefs, but clarifying concepts for people who are just learning the topic. – Klaus Draeger Jun 22 2010 at 9:47
@Steven: since I taught you about exact sequences, I'm certainly glad that you don't! More seriously, if someone met exact sequences of vector spaces first, they might have that misconception. – Mark Meckes Jun 23 2010 at 14:02
The group $\mathbb{Z}^4$ is not the fundamental group of any 3-manifold, proved by Stallings in this 1962 paper. It follows that there is no algorithm for recognizing 3-manifold groups.
-
Leave it to John to not only find the counterexample no one else thinks of,but to have the exact reference handy.We are very priviledged to have you as a contributor,Professor Stillwell. Keep up the awesome work. – Andrew L Jun 22 2010 at 2:41
A number ring which is a principal ideal domain (and, hence, a unique factorization domain) but is not Euclidean: the ring of integers of ${\bf Q}(\sqrt{-19})$. See Th Motzkin, The Euclidean algorithm, Bull Amer Math Soc 55 (1949) 1142-1146, available at http://projecteuclid.org/DPubS/Repository/1.0/Disseminate?view=body&id=pdf_1&handle=euclid.bams/1183514381
-
In group theory, Lagrange's Theorem states that the order of a subgroup divides the order of the group; however, the converse is false. The usual counterexample given is the alternating group $A_4$ of order 12, which has no subgroup of order 6.
-
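An editorial aside (not part of the original answer): the claim that $A_4$ has no subgroup of order 6 is small enough to verify by brute force:

```python
from itertools import combinations, permutations

# Brute-force check that A4 (the even permutations of {0,1,2,3})
# has no subgroup of order 6.

def compose(p, q):
    # (p o q)(i) = p[q[i]], with permutations stored as tuples
    return tuple(p[q[i]] for i in range(len(p)))

def sign(p):
    # sign via inversion count
    s = 1
    for i in range(len(p)):
        for j in range(i + 1, len(p)):
            if p[i] > p[j]:
                s = -s
    return s

A4 = [p for p in permutations(range(4)) if sign(p) == 1]
e = tuple(range(4))

# A finite subset containing the identity and closed under composition
# is automatically a subgroup, so closure is all we need to test.
found = False
for rest in combinations([p for p in A4 if p != e], 5):
    S = set(rest) | {e}
    if all(compose(a, b) in S for a in S for b in S):
        found = True
        break

print("A4 has a subgroup of order 6:", found)
```

Only 462 candidate six-element subsets contain the identity, so the search finishes instantly.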
From Milnor's book "Algebraic K-Theory":
A (nonzero!) associative ring for which a free module of rank 2 is isomorphic to a free module of rank 1: The ring of endomorphisms of an infinite-dimensional vector space.
-
A number field where the ring of integers is Euclidean but not norm-Euclidean: ${\bf Q}(\sqrt{69})$. See David A Clark, A quadratic field which is Euclidean but not norm-Euclidean, Manuscripta Mathematica 83 (1994) 327-330.
-
The ring $R = k[x,y]/(x^2, xy)$ is a simple example of a local commutative noetherian ring that is not Cohen-Macaulay. It is sometimes referred to as the "Emmy Ring."
This ring is very useful for showing how unintuitive non-CM rings can be. For instance, letting $I = (x)$, then $\operatorname{depth} R/I = 1 > 0 = \operatorname{depth} R$; in particular the (innocuous looking) inequality
$\operatorname{depth} R/I + \operatorname{grade} I \leq \operatorname{depth} R$
need not hold. Here $\operatorname{grade} I$ is the length of the longest regular sequence in $I$.
-
Two famous cases that come to mind are:
1. Nagata's counterexample to Hilbert's fourteenth problem.
2. Counterexamples by various people to (the original version of) the Burnside problem.
-
This is not my area at all, but this looks like a readable reference: math.bas.bg/serdica/2001/2001-171-192.pdf – Timothy Chow Jun 22 2010 at 2:18
And Nagata's example of a Noetherian ring with infinite Krull dimension, as well as a noetherian domain whose normalization is not a module-finite extension. – Boyarsky Jun 22 2010 at 2:27
Two finite non-isomorphic groups with the same order profile: let $C_n$ be the cyclic group of $n$ elements, let $Q=\langle\thinspace a,b\thinspace|\thinspace a^4=1,a^2=b^2,ab=ba^3\thinspace\rangle$ be the quaternion group, then $C_4\times C_4$ and $C_2\times Q$ are not isomorphic (the first is abelian, the second is not) but both have 1 element of order 1, 3 elements of order 2, and 12 elements of order 4.
By contrast, if two finite abelian groups have the same order profile, then they are isomorphic.
-
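An editorial aside (not part of the original answer): the order profiles can be checked directly; the encodings of the groups below (mod-4 pairs for $C_4\times C_4$; unit quaternions as 4-tuples for $Q$) are my own:

```python
from collections import Counter
from itertools import product

# Check that C4 x C4 and C2 x Q8 have the same order profile.

def qmul(x, y):
    # Hamilton's quaternion product on 4-tuples (a, b, c, d) = a+bi+cj+dk
    a1, b1, c1, d1 = x
    a2, b2, c2, d2 = y
    return (a1*a2 - b1*b2 - c1*c2 - d1*d2,
            a1*b2 + b1*a2 + c1*d2 - d1*c2,
            a1*c2 - b1*d2 + c1*a2 + d1*b2,
            a1*d2 + b1*c2 - c1*b2 + d1*a2)

# The eight unit quaternions +-1, +-i, +-j, +-k.
Q8 = [q for q in product((-1, 0, 1), repeat=4) if sum(map(abs, q)) == 1]

def order(x, op, e):
    n, y = 1, x
    while y != e:
        y, n = op(y, x), n + 1
    return n

add44 = lambda x, y: ((x[0] + y[0]) % 4, (x[1] + y[1]) % 4)
prof1 = Counter(order(g, add44, (0, 0)) for g in product(range(4), repeat=2))

op2 = lambda x, y: ((x[0] + y[0]) % 2, qmul(x[1], y[1]))
e2 = (0, (1, 0, 0, 0))
prof2 = Counter(order(g, op2, e2) for g in product(range(2), Q8))

print(prof1, prof2, prof1 == prof2)
```

Both profiles come out as one element of order 1, three of order 2, and twelve of order 4, exactly as the answer states.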
Grigorchuk's 1984 example of a finitely generated group of intermediate growth (there is no such linear group).
-
An infinite group with exactly two conjugacy classes. See G. Higman, B. H. Neumann, and H. Neumann, Embedding theorems for groups, J. London Math. Soc. 24 (1949), 247-254.
-
That is not a big deal. A much bigger deal is an infinite finitely generated group with 2 conjugacy classes constructed by Osin (Annals of Math, this year). – Mark Sapir Oct 9 2010 at 22:15
A finite group in which a product of two commutators need not be a commutator: This is Exercise 3.27 in Rotman, The Theory of Groups, a construction attributed to Carmichael. Let $G$ be the subgroup of $S_{16}$ generated by the eight permutations $(ac)(bd)$, $(eg)(fh)$, $(ik)(jl)$, $(mo)(np)$, $(ac)(eg)(ik)$, $(ab)(cd)(mo)$, $(ef)(gh)(mn)(op)$, and $(ij)(kl)$. Then the commutator subgroup of $G$ is generated by the first four of these elements, and has order 16. It contains $\alpha=(ik)(jl)(mo)(np)$, but $\alpha$ is not a commutator.
Rotman remarks elsewhere that the smallest group in which there is a product of commutators which is not a commutator is a group of order 96.
-
There are finitely presented groups whose word problem is undecidable in computability theory.
-
Higman's group $G=\left< a_1,\ldots, a_4 | \forall i\in\mathbb{Z}/4\mathbb{Z}: a_i=[a_{i+1},a_i] \right>$, which has no subgroups of finite index. See: G. Higman, A finitely generated infinite simple group, J. London Math. Soc. 26 (1951), 61-64.
-
Tarski's monsters: infinite groups in which every proper non-trivial subgroup is of prime order $p$. They are two generated simple groups.
They were constructed by Olshanskii, and as far as I remember they were also constructed independently by Rips, maybe even before Olshanskii, but he did not bother publishing it. Can anyone confirm this?
-
Rips had some preliminary text published and also gave a series of talks. It is hard to tell if these ideas would lead to the construction of an actual example. Some key components of Olshanskii's construction are missing in Rips' constructions. – Mark Sapir Oct 9 2010 at 22:07
• Does $R[x] \cong S[x]$ imply $R \cong S$? ( Taken from this link. )
• Here is a counterexample. Let $$R=\displaystyle\frac{\mathbb{C}[x,y,z]}{(xy(1-z^2))}, \quad \ S= \displaystyle\frac{\mathbb{C}[x,y,z]}{(x^2y(1-z^2))}$$ Then $R$ is not isomorphic to $S$, but $R[T]\cong S[T]$. In many variables, this is called the Zariski problem or cancellation of indeterminates and is largely open. Here is a discussion by Hochster (problem 3)
• http://www.math.lsa.umich.edu/~hochster/Lip.text.pdf
Excellent Counterexamples.
Let $G$ be a group and let $\mathscr{S}(G)$ denote its group of inner automorphisms.
The one isomorphism theorem I know connecting a group to its inner automorphisms is: $$G/Z(G) \cong \mathscr{S}(G)$$ where $Z(G)$ is the center of the group. Now, if $Z(G) = \{e\}$ then one can see that $G \cong \mathscr{S}(G)$. What about the converse? That is, if $G \cong \mathscr{S}(G)$, does it imply that $Z(G)=\{e\}$? In other words, are there groups with non-trivial center which are isomorphic to their group of inner automorphisms?
The answer is yes: there are groups with non-trivial center which are isomorphic to $\mathscr{S}(G)$. The answer is given at this link
Next one:
• Does there exist a finite group $G$ and a normal subgroup $H$ of $G$ such that $|Aut(H)|>|Aut(G)|$?
Arturo Magidin posed this question some time ago at MATH.SE
• Question. Can we have a finite group $G$, normal subgroups $H$ and $K$ that are isomorphic as groups, $G/H$ isomorphic to $G/K$, but no $\varphi\in\mathrm{Aut}(G)$ such that $\varphi(H) = K$?
• Answer was provided by Vipul Naik. Link is given here.
Question was posed by Zev Chonoles at $\textbf{MATH.SE}$
• I know it is possible for a group $G$ to have normal subgroups $H, K$, such that $H\cong K$ but $G/H\not\cong G/K$, but I couldn't think of any examples with $G$ finite. What is an illustrative example?
• Answer from this link: Take $G = \mathbb{Z}_4 \times \mathbb{Z}_2$, $H$ generated by $(0,1)$, $K$ generated by $(2,0)$. Then $H \cong K \cong \mathbb{Z}_2$ but $G/H \cong \mathbb{Z}_4$ while $G/K \cong \mathbb{Z}_2 \times \mathbb{Z}_2$.
-
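An editorial aside (not part of the original answer): the final example is small enough to verify by computing element orders in both quotients; the encoding below is my own:

```python
from itertools import product

# Check: G = Z4 x Z2, H = <(0,1)>, K = <(2,0)>; H and K are both Z2,
# yet G/H is cyclic of order 4 while G/K is the Klein four-group.
G = list(product(range(4), range(2)))
add = lambda x, y: ((x[0] + y[0]) % 4, (x[1] + y[1]) % 2)

H = {(0, 0), (0, 1)}
K = {(0, 0), (2, 0)}

def quotient_orders(N):
    """Sorted element orders of G/N (G is abelian, so N is normal)."""
    zero = frozenset(N)
    cosets = {frozenset(add(g, n) for n in N) for g in G}
    orders = []
    for c in cosets:
        rep = min(c)
        x, n = rep, 1
        while frozenset(add(x, h) for h in N) != zero:
            x, n = add(x, rep), n + 1
        orders.append(n)
    return sorted(orders)

print(quotient_orders(H))  # [1, 2, 4, 4]: G/H is cyclic of order 4
print(quotient_orders(K))  # [1, 2, 2, 2]: G/K is the Klein four-group
```

Since a group of order 4 is determined by whether it has an element of order 4, the order lists distinguish the two quotients.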
Thompson's group T is a finitely presented infinite simple group.
-
Sweedler's Hopf algebra. It is the Hopf algebra generated by two elements $x, g$ with relations $g^2 = 1$, $x^2 = 0$, and $gxg = - x$. The coproduct is given by $$\Delta(g) = g \otimes g, \quad \Delta(x) = x \otimes 1 + g \otimes x,$$ the counit by $$\varepsilon(g) = 1, \quad \varepsilon(x) = 0,$$ and the antipode by $$S(g) = g, \quad S(x) = - gx.$$ It is noncommutative and noncocommutative, is quasitriangular and coquasitriangular, but is not a quantum double.
-
This quasigroup is not isomorphic to any loop (i.e. quasigroup with identity):
````* | a b c
-------------
a | a c b
b | c b a
c | b a c
````
See e.g. Latin squares: Equivalents and equivalence.
-
An infinitely generated and non-Noetherian subring of a polynomial ring:
$$R=K[x,xy,xy^2,\ldots, xy^n,\ldots] \subset S=K[x,y].$$
Explanation: The ring $R$ is graded and monomial: it is spanned by the monomials $x^ay^b$ that are contained in it, whose exponents are the lattice points in the cone $C=\{(a,b) : (a,b)=(0,0) \text{ or } a>0,\ b\geq 0\}$. The minimal generators of the homogeneous ideal $R_{+}$ of positive degree elements correspond to the minimal generators $(1,n),\ n\geq 0$, of the lattice cone $C\cap\mathbb{Z}^2.$ Thus $R_{+}$ (respectively, $R$) is infinitely generated, with ideal (respectively, $K$-algebra) minimal generators $x,xy,xy^2,\ldots, xy^n,\ldots.$
-
A polynomial, solvable in radicals, whose splitting field is not a radical extension (of $\bf Q$). Let $f(x)$ be any cyclic cubic, that is, any cubic with rational coefficients, irreducible over the rationals, with Galois group cyclic of order 3. Then $f(x)=0$ is solvable in radicals (every cubic is), so the splitting field $K$ of $f$ over $\bf Q$ is contained in a radical extension of $\bf Q$, but $K$ is not itself a radical extension of $\bf Q$. The degree of $K$ over $\bf Q$ is 3, so for $K$ to be radical over $\bf Q$ it would have to be an extension of $\bf Q$ by the cube root of some element of $\bf Q$, but such extensions are not normal.
-
2
So this just reminds us of the correct definition of a radical extension ... – Martin Brandenburg Jun 22 2010 at 10:55
Please forgive me if someone has already posted this...
Let X > Y > Z be a tower of groups with Y and Z being normal subgroups of X and Y, respectively. Z need not be a normal subgroup of X.
An example: D_4 > Klein's 4-group > Z/2Z.
-
I just did this exercise in Dummit and Foote- nice! P.S. Big fan of Pinball Wizard and Tiny Dancer. – Dylan Wilson Jun 28 2010 at 7:30
Harry Hutchins "Examples of commutative rings" may be of interest. It is based on his 1978 Chicago Ph.D. thesis under Kaplansky, and not surprisingly it serves as a useful complement to Kaplansky's excellent textbook Commutative Rings (most references to proofs refer to Kaplansky). There is also a 3 page list of errata, updates,... dated July 1983, which is distributed with the book.
Hutchins, Harry C. 83a:13001 13-02
Examples of commutative rings. (English)
Polygonal Publ. House, Washington, N. J., 1981. vii+167 pp. \$13.75. ISBN 0-936428-05-8
The book is divided into two parts: a brief sketch of commutative ring theory which includes pertinent definitions along with main results without proof (but with ample references), and Part II, the 180 examples. The examples do cover a very large range of topics. Although most of them appear elsewhere, they are enhanced by a fairly complete listing of their properties. Example 67, for instance, is M. Hochster's counterexample to the polynomial cancellation problem, and it lists a number of properties of the two rings that were not given in the original paper Proc. Amer. Math. Soc. 34 (1972), no. 1, 81 - 82; MR 45 #3394. Some of the examples appear more than once, since many rings exhibit more than one interesting property. (R = K[x, y, z] is used in Examples 6 and 22.) The examples are grouped into areas, but a drawback is that these have not been labeled and separated off. In addition, the Index is for Part I and definitions only, and this means that searching for a specific example with certain properties can be time consuming. The book can be used as a supplement to one of the standard texts in commutative ring theory, and it does appear to complement the monograph by I. Kaplansky Commutative rings, Allyn and Bacon, Boston, Mass., 1970; MR 40 #7234; second edition, Univ. Chicago Press, Chicago, Ill., 1974; MR 49 #10674. --Reviewed by Jon L. Johnson
-
This is probably more of an example than a counterexample. Consider the following binary operation table defined on a three element set with zero:
``` 0 1 2
0 0 0 0
1 0 0 1
2 0 2 2
```
V. Murskii showed that the equational theory of this algebra has no logically equivalent (in equational logic) finite theory. Lyndon earlier showed that every two element algebra with one binary operation did have a finite basis, and Perkins found a six element semigroup with no finite basis. I don't know the status of algebras with a single ternary operation.
An infinitely-generated Noetherian ring: $\mathbb{Q},$ the field of rational numbers.
http://mathhelpforum.com/calculus/16407-basic-calculus-help-please.html
# Thread:
1. ## Basic Calculus Help Please
Jhevon did explain this to me, i think, but i forgot it again.
Let's say we want to find the derivative of $\sin^3 (5x^2 - 4x)$
now that becomes $3(\sin(5x^2 - 4x))^2 \cdot \cos(5x^2 - 4x) \cdot (10x - 4)$
But if it was just $\sin(5x^2 - 4x)$ would it have become:
$\cos(5x^2 - 4x) \cdot (10x - 4)$ ?
2. Originally Posted by janvdl
Jhevon did explain this to me, i think, but i forgot it again.
Let's say we want to find the derivative of $\sin^3 (5x^2 - 4x)$
now that becomes $3(\sin(5x^2 - 4x))^2 \cdot \cos(5x^2 - 4x) \cdot (10x - 4)$
But if it was just $\sin(5x^2 - 4x)$ would it have become:
$\cos(5x^2 - 4x) \cdot (10x - 4)$ ?
Jhevon must have done a good job at explaining it. (No great surprise!) You did it perfectly.
-Dan
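An editorial aside (not part of the original thread): both chain-rule computations can be confirmed symbolically with SymPy:

```python
from sympy import symbols, sin, cos, diff, simplify

# Verify the two derivatives discussed in this thread.
x = symbols('x')
inner = 5*x**2 - 4*x

# d/dx sin^3(5x^2 - 4x) = 3 sin^2(...) * cos(...) * (10x - 4)
claimed = 3*sin(inner)**2 * cos(inner) * (10*x - 4)
assert simplify(diff(sin(inner)**3, x) - claimed) == 0

# d/dx sin(5x^2 - 4x) = cos(...) * (10x - 4)
claimed2 = cos(inner) * (10*x - 4)
assert simplify(diff(sin(inner), x) - claimed2) == 0

print("both derivatives check out")
```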
http://medlibrary.org/medwiki/Adjacency_matrix
# Adjacency matrix
In mathematics and computer science, an adjacency matrix is a means of representing which vertices (or nodes) of a graph are adjacent to which other vertices. Another matrix representation for a graph is the incidence matrix.
Specifically, the adjacency matrix of a finite graph G on n vertices is the n × n matrix where the non-diagonal entry aij is the number of edges from vertex i to vertex j, and the diagonal entry aii, depending on the convention, is either once or twice the number of edges (loops) from vertex i to itself. Undirected graphs often use the latter convention of counting loops twice, whereas directed graphs typically use the former convention. There exists a unique adjacency matrix for each isomorphism class of graphs (up to permuting rows and columns), and it is not the adjacency matrix of any other isomorphism class of graphs. In the special case of a finite simple graph, the adjacency matrix is a (0,1)-matrix with zeros on its diagonal. If the graph is undirected, the adjacency matrix is symmetric.
The relationship between a graph and the eigenvalues and eigenvectors of its adjacency matrix is studied in spectral graph theory.
## Examples
The convention followed here is that an adjacent edge counts 1 in the matrix for an undirected graph.
A labeled graph on vertices 1–6 and its adjacency matrix:

$\begin{pmatrix} 1 & 1 & 0 & 0 & 1 & 0\\ 1 & 0 & 1 & 0 & 1 & 0\\ 0 & 1 & 0 & 1 & 0 & 0\\ 0 & 0 & 1 & 0 & 1 & 1\\ 1 & 1 & 0 & 1 & 0 & 0\\ 0 & 0 & 0 & 1 & 0 & 0\\ \end{pmatrix}$

Two further examples are pictured in the original article: the Nauru graph (vertices numbered 0–23; in the pictured matrix, white fields are zeros and colored fields are ones) and a directed Cayley graph of $S_4$, whose adjacency matrix is not symmetric because the graph is directed.
• The adjacency matrix of a complete graph contains all ones except along the diagonal where there are only zeros.
• The adjacency matrix of an empty graph is a zero matrix.
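The two bullet points above are easy to check mechanically. Here is a small illustrative sketch (my addition, not part of the original article) that builds an adjacency matrix from an edge list, counting loops once as in the convention stated above:

```python
def adjacency_matrix(n, edges, directed=False):
    """Adjacency matrix of a graph on vertices 0..n-1 from an edge list.

    Loops (i, i) are counted once, matching the convention used above.
    """
    A = [[0] * n for _ in range(n)]
    for i, j in edges:
        A[i][j] += 1
        if not directed and i != j:
            A[j][i] += 1  # undirected edges appear symmetrically
    return A

# Complete graph K3: all ones except along the zero diagonal
print(adjacency_matrix(3, [(0, 1), (0, 2), (1, 2)]))
# → [[0, 1, 1], [1, 0, 1], [1, 1, 0]]
```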
## Adjacency matrix of a bipartite graph
The adjacency matrix A of a bipartite graph whose parts have r and s vertices has the form
$A = \begin{pmatrix} O & B \\ B^T & O \end{pmatrix},$
where B is an r × s matrix and O is an all-zero matrix. Clearly, the matrix B uniquely represents the bipartite graph; it is sometimes called the biadjacency matrix. Formally, let G = (U, V, E) be a bipartite graph with parts $U=\{u_1,\dots,u_r\}$ and $V=\{v_1,\dots,v_s\}$. The biadjacency matrix is the $r \times s$ 0–1 matrix B in which $b_{i,j} = 1$ if and only if $(u_i, v_j) \in E$.
If G is a bipartite multigraph or weighted graph then the elements $b_{i,j}$ are taken to be the number of edges between the vertices or the weight of the edge $(u_i, v_j),$ respectively.
## Properties
The adjacency matrix of an undirected simple graph is symmetric, and therefore has a complete set of real eigenvalues and an orthogonal eigenvector basis. The set of eigenvalues of a graph is the spectrum of the graph.
Suppose two directed or undirected graphs $G_1$ and $G_2$ with adjacency matrices $A_1$ and $A_2$ are given. $G_1$ and $G_2$ are isomorphic if and only if there exists a permutation matrix $P$ such that
$P A_1 P^{-1} = A_2.$
In particular, $A_1$ and $A_2$ are similar and therefore have the same minimal polynomial, characteristic polynomial, eigenvalues, determinant and trace. These can therefore serve as isomorphism invariants of graphs. However, two graphs may possess the same set of eigenvalues but not be isomorphic. [1]
If A is the adjacency matrix of the directed or undirected graph G, then the matrix $A^n$ (i.e., the matrix product of n copies of A) has an interesting interpretation: the entry in row i and column j gives the number of (directed or undirected) walks of length n from vertex i to vertex j. This implies, for example, that the number of triangles in an undirected graph G is exactly the trace of $A^3$ divided by 6.
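As an illustration of the walk-counting interpretation (my sketch, not from the article), the triangle count via $\operatorname{trace}(A^3)/6$ can be computed with a plain dense matrix multiply:

```python
def mat_mult(X, Y):
    """Dense matrix product of two square matrices given as lists of rows."""
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def count_triangles(A):
    """Triangles in an undirected simple graph: trace(A^3) / 6."""
    A3 = mat_mult(mat_mult(A, A), A)
    return sum(A3[i][i] for i in range(len(A))) // 6

# K3 has exactly one triangle
K3 = [[0, 1, 1],
      [1, 0, 1],
      [1, 1, 0]]
print(count_triangles(K3))  # → 1
```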
The main diagonal of every adjacency matrix corresponding to a graph without loops has all zero entries. Note that here 'loops' means, for example A→A, not 'cycles' such as A→B→A.
For $d$-regular graphs, $d$ is an eigenvalue of A for the all-ones vector $v=\left( 1,\dots,1 \right)$, and $G$ is connected if and only if the multiplicity of the eigenvalue $d$ is 1. It can be shown that $-d$ is also an eigenvalue of A if G is a connected bipartite graph. Both results follow from the Perron–Frobenius theorem.
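Both facts can be checked quickly on the 4-cycle $C_4$, which is 2-regular, connected, and bipartite (illustrative code, my addition — not part of the article):

```python
def matvec(A, x):
    """Matrix-vector product for a matrix given as a list of rows."""
    return [sum(a * b for a, b in zip(row, x)) for row in A]

# C4, the 4-cycle: 2-regular, connected, and bipartite
C4 = [[0, 1, 0, 1],
      [1, 0, 1, 0],
      [0, 1, 0, 1],
      [1, 0, 1, 0]]

ones = [1, 1, 1, 1]
print(matvec(C4, ones) == [2 * v for v in ones])  # d = 2 is an eigenvalue → True

alt = [1, -1, 1, -1]  # signs alternate along the bipartition
print(matvec(C4, alt) == [-2 * v for v in alt])   # -d = -2 is an eigenvalue → True
```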
## Variations
An (a, b, c)-adjacency matrix A of a simple graph has $A_{ij} = a$ if $ij$ is an edge, $b$ if it is not, and $c$ on the diagonal. The Seidel adjacency matrix is a (−1,1,0)-adjacency matrix. This matrix is used in studying strongly regular graphs and two-graphs.[2]
The distance matrix has in position (i,j) the distance between vertices vi and vj . The distance is the length of a shortest path connecting the vertices. Unless lengths of edges are explicitly provided, the length of a path is the number of edges in it. The distance matrix resembles a high power of the adjacency matrix, but instead of telling only whether or not two vertices are connected (i.e., the connection matrix, which contains boolean values), it gives the exact distance between them.
## Data structures
For use as a data structure, the main alternative to the adjacency matrix is the adjacency list. Because each entry in the adjacency matrix requires only one bit, it can be represented in a very compact way, occupying only ${n^2} / 8$ bytes of contiguous space, where $n$ is the number of vertices. Besides avoiding wasted space, this compactness encourages locality of reference.
However, if the graph is sparse, adjacency lists require less storage space, because they do not waste any space to represent edges that are not present. Using a naïve array implementation on a 32-bit computer, an adjacency list for an undirected graph requires about $8 e$ bytes of storage, where $e$ is the number of edges.
Noting that a simple graph can have at most $n^2$ edges, allowing loops, we can let $d = e / n^2$ denote the density of the graph. Then, $8 e > n^2 / 8$, or the adjacency list representation occupies more space precisely when $d > 1/64$. Thus a graph must be sparse indeed to justify an adjacency list representation.
Besides the space tradeoff, the different data structures also facilitate different operations. Finding all vertices adjacent to a given vertex in an adjacency list is as simple as reading the list. With an adjacency matrix, an entire row must instead be scanned, which takes O(n) time. Whether there is an edge between two given vertices can be determined at once with an adjacency matrix, while requiring time proportional to the minimum degree of the two vertices with the adjacency list.
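The trade-off described above can be sketched concretely (illustrative code, my addition — not part of the original article): the matrix answers edge queries in constant time, while the list enumerates a vertex's neighbors without scanning a full row.

```python
n = 5
edges = [(0, 1), (0, 4), (1, 2), (1, 4), (2, 3), (3, 4)]  # an undirected graph

# Adjacency matrix: O(1) edge queries, but O(n) to enumerate one vertex's neighbors
matrix = [[0] * n for _ in range(n)]
for u, v in edges:
    matrix[u][v] = matrix[v][u] = 1

# Adjacency list: neighbors in O(deg), but an edge query scans a neighbor list
adj = {v: [] for v in range(n)}
for u, v in edges:
    adj[u].append(v)
    adj[v].append(u)

print(matrix[1][4])    # → 1 (edge query in constant time)
print(sorted(adj[1]))  # → [0, 2, 4] (neighbors read directly from the list)
print(4 in adj[1])     # → True (edge query: time proportional to deg(1))
```

For the storage comparison in the text: the matrix always stores $n^2$ bits regardless of how many edges exist, while the list stores a few words per edge.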
## References
1. Godsil, Chris; Royle, Gordon (2001). Algebraic Graph Theory. Springer. ISBN 0-387-95241-1, p. 164
2. Seidel, J. J. (1968). "Strongly Regular Graphs with (−1,1,0) Adjacency Matrix Having Eigenvalue 3". Lin. Alg. Appl. 1 (2): 281–298. doi:10.1016/0024-3795(68)90008-6.
## Further reading
• Cormen, Thomas H.; Leiserson, Charles E.; Rivest, Ronald L.; Stein, Clifford (2001). "Section 22.1: Representations of graphs". Introduction to Algorithms (Second ed.). MIT Press and McGraw-Hill. pp. 527–531. ISBN 0-262-03293-7.
• Godsil, Chris; Royle, Gordon (2001). Algebraic Graph Theory. New York: Springer. ISBN 0-387-95241-1.
Content in this section is authored by an open community of volunteers and is not produced by, reviewed by, or in any way affiliated with MedLibrary.org. Licensed under the Creative Commons Attribution-ShareAlike 3.0 Unported License, using material from the Wikipedia article on "Adjacency matrix", available in its original form here:
http://en.wikipedia.org/w/index.php?title=Adjacency_matrix
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 31, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.893211305141449, "perplexity_flag": "head"}
|
http://math.stackexchange.com/questions/110394/substitution-x-sinh-theta-and-y-cosh-theta-to-1x2y-2xy-1x2
|
# Substitution $x=\sinh(\theta)$ and $y=\cosh(\theta)$ to $(1+x^{2})y'-2xy=(1+x^{2})^{2}$?
After this substitution I got to the point
$$\cosh^6 (\theta)y'-\sinh(2\theta)-\cosh^4 (\theta)=0$$
and now let
$$z=\cosh^2 (\theta)$$
so $$z^3 y'-z^2-\sinh(2\theta)=0$$
but then I started to question my substitution; look at the graph here. Is the substitution even possible? Can we actually even get $\textrm{arcsinh}(x)=\textrm{arccosh}(y)$? From the graph in WA, it seems like the hyperbolic sine and cosine do not even cross, so is the substitution in the title wrong?
Again from the book, now on page 644.
-
Which book are you talking about? – Peter Tamaroff Feb 17 '12 at 17:07
– hhh Feb 17 '12 at 17:11
I gave you a solution but I don't know how your solution would work. I'll think about it and edit ASAP. – Peter Tamaroff Feb 17 '12 at 17:21
Solution from my uni is here, p. 636-637 (course book); it mentions something called "variation of constants" -- I have never understood the term; apparently it is just a first-order linear DE? For future random walkers, I suggest reading the answers and the Wikipedia article -- I think the book is incomprehensible at that point. – hhh Feb 19 '12 at 10:15
## 2 Answers
I'll try to explain the idea of the "varioidaan vakiota" (Finnish for "vary the constant") that is stated in your .pdf. The idea traces back to Legendre, who used it to solve linear ODEs.
We have the equation
$$y'-\frac{2xy}{1+x^2}=1+x^2$$
We first solve the homogeneous equation
$$y'-\frac{2xy}{1+x^2}=0$$
$$y'=\frac{2xy}{1+x^2}$$
$$\frac{y'}{y}=\frac{2x}{1+x^2}$$
$$\log y = \log(1+x^2)+C_1$$
$$y = C(1+x^2)$$
Legendre's next step, which Spiegel says "at first sight seems ridiculous", is this:
Let's assume $C$ is not constant, but rather variable. What will this new function $C(x)$ be so that
$$y = C(x)(1+x^2)$$
is a solution to our original equation?
In your case, differentiating produces
$$y' = C'(x)(1+x^2)+C(x) 2x$$
But from our equation we have that $y'=\dfrac{2xy}{1+x^2}+(1+x^2)$ and that $\dfrac{y}{1+x^2} = C(x)$
So plugging this in we have
$$y' = C'(x)(1+x^2)+\frac{2xy}{1+x^2}$$
So that, comparing to our original equation we have $C'(x) = 1$ or $C(x) = x+C_1$, so that our solution is
$$y = (x+C_1)(1+x^2)$$
Hope this helped clear up the "varioidaan vakiota" issue. For more information, refer to Spiegel's book on differential equations, page 202.
You have
$$(1+x^2)y' -2xy = (1+x^2)^2$$
or
$$(1+x^2)\dfrac{dy} {dx} -2xy = (1+x^2)^2$$
$$\dfrac{dy} {dx} -\dfrac{2x}{1+x^2}y = 1+x^2$$
You can solve this by the integrating factor $$\exp\left(-\log\left(1+x^2\right)\right)=\dfrac{1}{1+x^2}$$ which will give
$$\dfrac{1}{1+x^2}\dfrac{dy} {dx} -\dfrac{1}{1+x^2}\dfrac{2x}{1+x^2}y = 1$$
$$\left(\dfrac{y}{1+x^2}\right)' = 1$$
$$\dfrac{y}{1+x^2} = x+C$$ $$y = (x+C)(1+x^2)$$
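As a sanity check on the result (my addition, not part of the answer), the family $y=(x+C)(1+x^2)$ can be verified against the original equation; with integer inputs the residual is exactly zero:

```python
def y(x, C):
    return (x + C) * (1 + x**2)

def dy(x, C):
    # product rule: d/dx [(x + C)(1 + x^2)] = (1 + x^2) + (x + C) * 2x
    return (1 + x**2) + (x + C) * 2 * x

def residual(x, C):
    """Left side minus right side of (1 + x^2) y' - 2 x y = (1 + x^2)^2."""
    return (1 + x**2) * dy(x, C) - 2 * x * y(x, C) - (1 + x**2) ** 2

# Zero for every x and every choice of the constant C
print(all(residual(x, C) == 0 for x in range(-5, 6) for C in (-2, 0, 7)))  # → True
```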
-
...sorry what does the last sentence actually mean? Can you solve it that way? – hhh Feb 19 '12 at 0:17
@hhh I've been looking at the problem and the substitution is IMO a waste of time, since all it helps with is seeing which integrating factor to use, namely $$\frac{1}{\cosh^2 \theta} = \frac{1}{1 + x^2},$$ so the problem is solved in the same manner either way. If you want I can order my thoughts on the substitution and post it. – Peter Tamaroff Feb 19 '12 at 0:33
The "integrating factor" is my weakest link for sure, not sure how with different differentials. I think it would be good idea to somehow clarify the final sentence, now it is a bit separate from the other part of the answer. – hhh Feb 19 '12 at 0:46
@hhh Added info on your doubt. Any problem, let me know. – Peter Tamaroff Feb 20 '12 at 0:10
Hint: this is a linear differential equation. In your case, it may be wise to skip the beginning of the Wikipedia article and go directly to "first order equation".
-
...any idea why my school solution mentions the method "variation of parameters" needed to solve the problem? I think they referring to this here by the term "variation of constant" (free translation). I can get the same solution without at least explicitly using some variation of parameters -method. – hhh Feb 19 '12 at 10:22
– hhh Feb 19 '12 at 10:29
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 10, "mathjax_display_tex": 22, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.939532995223999, "perplexity_flag": "middle"}
|
http://mathhelpforum.com/differential-equations/59866-ordinary-de-undetermined-coefficients.html
|
# Thread:
1. ## Ordinary DE with Undetermined Coefficients
y"-2y+2y=e2t (cos(t)-3sin(t))
I don't even know what the form of the particular solution should be. Could someone at least help me approach this problem?
thanks!!
2. I'm assuming you made a small mistake typing the equation and actually meant this:
$y''-2y'+2y = e^{2t}(\cos(t) - 3\sin(t))$

You are trying to solve a non-homogeneous second-order ordinary differential equation. You can use the annihilator approach to find the simplest differential operator that will "destroy" the right-hand side of your equation and turn it into 0. From your equation, $\alpha = 2$ and $\beta = 1$.

So the annihilator is $D^2 - 2\alpha D + (\alpha^2 + \beta^2)$, which simplifies to $D^2 - 4D + 5$. Solving for the roots gives $2 \pm i$.

The characteristic polynomial of the homogeneous equation is $r^2 - 2r + 2$, whose roots are $1 \pm i$. Since $2 \pm i$ is not among them, the trial particular solution is already linearly independent of the complementary solution, and no extra factor of $t$ is needed.

So the particular solution should be $Y_p = Ae^{2t}\cos(t) + Be^{2t}\sin(t)$

Now solving for the coefficients A and B should be easy. Just go back to your original equation: differentiate the $Y_p$ I gave you twice, subtract 2 times the first derivative of $Y_p$, add 2 times $Y_p$ itself, and compare coefficients with the right-hand side.
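Carrying that calculation out (my sketch, not from the thread): since $2\pm i$ is not a root of the characteristic polynomial $r^2-2r+2$ (whose roots are $1\pm i$), the trial solution $Y_p = e^{2t}(A\cos t + B\sin t)$ needs no extra factor of $t$; matching coefficients gives a small linear system, and the answer can be checked numerically by finite differences:

```python
import math

# Substituting Y_p = e^{2t}(A cos t + B sin t) into y'' - 2y' + 2y gives
# e^{2t}((A + 2B) cos t + (-2A + B) sin t); matching e^{2t}(cos t - 3 sin t):
#   A + 2B = 1
#  -2A + B = -3
# Eliminating: B = 2A - 3, so A + 2(2A - 3) = 1, giving A = 7/5 and B = -1/5.
A, B = 7 / 5, -1 / 5

def Yp(t):
    return math.exp(2 * t) * (A * math.cos(t) + B * math.sin(t))

def residual(t, h=1e-4):
    """Finite-difference check of Yp'' - 2 Yp' + 2 Yp - e^{2t}(cos t - 3 sin t)."""
    d1 = (Yp(t + h) - Yp(t - h)) / (2 * h)
    d2 = (Yp(t + h) - 2 * Yp(t) + Yp(t - h)) / h**2
    return d2 - 2 * d1 + 2 * Yp(t) - math.exp(2 * t) * (math.cos(t) - 3 * math.sin(t))

print(max(abs(residual(t / 4)) for t in range(-8, 9)) < 1e-3)  # → True
```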
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 3, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.903657078742981, "perplexity_flag": "middle"}
|
http://mathoverflow.net/questions/89863?sort=oldest
|
## Intersection graphs for “conflicting” directed paths in trees
Given an undirected tree and a set of directed paths in this tree (or equivalently, ordered $s$–$t$ pairs), we construct a graph with the paths as vertices and an edge between two paths if they traverse an edge of the tree in opposite directions. Is anything known about the resulting graphs? I found some similar settings, but none that exactly matches. Is this class equivalent to a known one, or does it have interesting characterizations or properties? The only thing I could find is that these graphs cannot contain $K_4$ as a subgraph.
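To make the construction concrete, here is a small sketch (mine, not from the question or answer) that builds this conflict graph for directed paths in a tree given as an adjacency dict; the vertex labels and function names are illustrative:

```python
from itertools import combinations

def tree_path(adj, s, t):
    """The unique s-t path in a tree, as a vertex list (iterative DFS)."""
    parent = {s: None}
    stack = [s]
    while stack:
        v = stack.pop()
        if v == t:
            break
        for w in adj[v]:
            if w not in parent:
                parent[w] = v
                stack.append(w)
    path = [t]
    while path[-1] != s:
        path.append(parent[path[-1]])
    return path[::-1]

def conflict_graph(adj, pairs):
    """Edge {i, j} iff directed paths i and j use some tree edge oppositely."""
    used = []  # for each path, the set of directed tree edges it traverses
    for s, t in pairs:
        p = tree_path(adj, s, t)
        used.append({(p[k], p[k + 1]) for k in range(len(p) - 1)})
    return {(i, j) for i, j in combinations(range(len(pairs)), 2)
            if any((v, u) in used[j] for (u, v) in used[i])}

# Path tree 1 - 2 - 3 and three directed paths: (1,3) and (3,1) conflict
# everywhere, while (1,2) conflicts only with (3,1).
adj = {1: [2], 2: [1, 3], 3: [2]}
pairs = [(1, 3), (3, 1), (1, 2)]
print(sorted(conflict_graph(adj, pairs)))  # → [(0, 1), (1, 2)]
```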
-
## 1 Answer
This may count only as a "similar setting," but Martin Golumbic has studied what he calls the edge intersection graph of a tree $T$, which has a node for each path in $T$, and an arc between two nodes if their paths share at least one edge. So his paths are not directed. He and coauthors have established a number of structural, coloring, and complexity properties of these "EPT" graphs, under various assumptions (e.g., on vertex degrees). Here are three references:
• M.C. Golumbic, R.E. Jamison, Edge and vertex intersection of paths in a tree, Discrete Mathematics 55 (1985), 151-159. (Elsevier link)
• Golumbic, Lipshteyn, and Stern, "Representing Edge Intersection Graphs of Paths on Degree 4 Trees," Discrete Mathematics, 2008. (DeepDyve link).
• Golumbic, Lipshteyn, and Stern, "The k-edge intersection graphs of paths in a tree," Discrete Applied Mathematics, Volume 156, Issue 4, 2008. (Elsevier link)
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 5, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9478937387466431, "perplexity_flag": "head"}
|
http://stats.stackexchange.com/questions/tagged/logarithm
|
# Tagged Questions
The logarithm tag has no wiki summary.
1answer
34 views
### First difference or log first difference?
I am evaluating the effect of covariances between series on returns. That is I run the following regression: $$r_t = \beta_0 + \beta_1\text{Cov}(Y_t,r_t) + ...$$ I have conducted my analysis with ...
0answers
33 views
### OLS standard error log log regression
I am estimating the following Power Law relationship: $$\ln(\text{Rank}) = \text{constant} + \alpha \ln(\text{Size})$$ where $\text{Rank}$ is $1,~2,~3,~...,~n$, and $\text{Size}$ is the raw value. ...
0answers
24 views
### Fixed effects model with dummy
I am not sure if I have interpreted the following one-way fixed effects model correctly. \text{Open} = \beta_0+ \beta_1\text{Pol}_r + \beta_2\text{Infr}+ \delta_3\text{FTA}+\gamma_t+\epsilon_{it} ...
1answer
47 views
### Plotting 0 in a log scaled axis
I have a very large and sparse dataset of spam twitter accounts and it requires me to scale the x axis in order to be able to visualise the distribution (histogram, kde etc) and cdf of the various ...
1answer
63 views
### Expected value and variance of log(a)
I have a random variable $X(a) = \log(a)$ where a is normal distributed $\mathcal N(\mu,\sigma^2)$. What can I say about $E(X)$ and $Var(X)$? An approximation would be helpful too.
1answer
89 views
### Can we use as predictor a variable that was used in the calculation of the dependent (a ratio)?
I wonder if someone could give me some advice on the problem of using ratios as a dependent variable in a Generalized Linear Model. I'm having the following problem: I have a variable referring to ...
0answers
43 views
### What does taking the logarithm of a variable mean? [duplicate]
Possible Duplicate: When are Log scales appropriate? Say you have a bar graph displaying data, for example, "Cost of Computer Orders by Population", and you are trying to analyze the data ...
1answer
67 views
### Solving logarithms using variables [closed]
$\log Q = a + b\log P + c\log I + d\log P_m$, with $b = -2.174$, $c = 0.461$, $d = 1.909$. Determine the price elasticity of demand, the income elasticity, and the cross-price elasticity.
1answer
55 views
### multiplicative treatment effects with standard errors
I simplified this a fair bit after finding a draft version of the Imbens and Rubin chapter. I am interested in estimating a constant multiplicative treatment effect from a randomized experiment. I ...
1answer
108 views
### Fitting ratios in multiple regression formula
I would like to ask a (probably very simple) question with regards to multiple linear regression. I have an experimental formula in the form: Y \sim \frac{a_0 \cdot X_0}{(a_1 \cdot X1) * (a_2 ...
1answer
167 views
### Logarithmic scale on a plot with negative values
I would like to plot two time-series on a same graph. One series takes much larger values than the other, so I thought a semilog scale might be appropriate (i.e. linear X (dates) and log Y). However, ...
0answers
21 views
### Interpret natural log transformed dependent variable [duplicate]
Possible Duplicate: Interpretation of log transformed predictor I have a regression equation (below). In the raw form, my Y dependent variable is in days (length of stay in days). I ...
1answer
79 views
### Elasticity with log + 1
I am running time series regressions to estimate the percentage change in quantity to a percentage change in price, with the most basic form being $\ln Q = \beta_0 + \beta_1\ln P + \varepsilon$, where ...
1answer
1k views
### Why use logged variables?
Probably, this is a very basic question but I don't seem to be able to find a solid answer for it. I hope here, I can. I'm currently reading papers as a preparation for my own master's thesis. ...
2answers
2k views
### Logistic regression with an log transformed variable, how to determine economic significance
I am using a logistic regression model with continuous independent variables and two log transformed size variables (total assets and total deposits). My question is how to interpret the results and ...
0answers
502 views
### Is it mathematically justifiable to log transform variables before running an ANOVA?
I have a model with variables (financial ratios) and some of them are in percentages, some in days and some just ratios (negative and positive). I ran an ANOVA and the results were not so good. When I ...
1answer
83 views
### The use of logarithmic form to facilitate comparison
On a post from the internet I've found the following expression, which I'll like to understand and to apply it on my datas. "Because of the size difference between the largest and the smallest ...
0answers
94 views
### Is it possible (or even useful) to transform log-transformed data into Z-scores?
We have created a questionnaire. In this questionnaire there are different dimensions with different answering scales. Because of our rightly skewed data we log transformed our data. But here is the ...
0answers
148 views
### Binomial distribution confidence interval for log plot
In simulating iterative decoding of low-density parity-check codes there may be (for a certain signal-to-noise-ratio of a noisy channel) for example 10 decoding failures out of $10^6$ trials. The log ...
3answers
1k views
### Taking correlation before or after log-transformation of variables
Is there a general principle on whether one should compute pearson correlation for two random variables X and Y before taking their log transform or after? Is there a procedure to test which is more ...
1answer
409 views
### Base-10 lognormal PDF integrated over log10(x)
From what I understand, the lognormal probability density function in base-10 is mathematically defined thus: p(x; \mu, \sigma) = \frac{log_{10}(e)}{x \sigma \sqrt{2 \pi}} e^{-\frac{(log_{10}(x) ...
1answer
839 views
### Interpreting a quadratic logarithmic term
Given the regression model $y=\beta_0 + 300*ln(x_1) - 15 * (ln(x_1))^2 + \beta_3*x_2 + ... + u$ $x$ ranges from 1 000 to 30 000 (6.9 to 10.3 on a logarithmic scale). How to interpret the ...
2answers
774 views
### Transforming the dummy values to be able to take logs
I have a panel data model with double-log functional form. I have 4 variables, one of which is a dummy. What is the best way to transform the values of 0 for my dummy to be able to take natural logs ...
4answers
2k views
### How to interpret logarithmically transformed coefficients in linear regression?
I have 1 continuous dependent and 1 continuous predictor variable that I've logarithmically transformed to normalise their residuals for simple linear regression. Can someone give me some pointers on ...
2answers
8k views
### When (and why) to take the log of a distribution (of numbers)?
Say I have some historical data e.g., past stock prices, airline ticket price fluctuations, past financial data of the company... Now someone (or some formula) comes along and says "let's take/use ...
0answers
758 views
### Back-transformation and interpretation of $\log(X+1)$ estimates in multiple linear regression
I have performed multiple linear regression analyses with different combinations of transformed and untransformed variables--both explanatory (independent) and response (dependent) variables. All ...
2answers
2k views
### Interpretation of log transformed predictor
I'm wondering if it makes a difference in interpretation whether only the dependent, both the dependent and independent, or only the independent variables are log transformed. In the case of ...
1answer
479 views
### How can I sample from a log transformed distribution using uniform distribution?
I am transforming an unscaled density function to log scale to avoid underflow issues. I was performing integration on this function on a grid of values before I used the log transformation, to ...
0answers
46 views
### Math Statistics Bell Curve Computing % correct given 2 numbers (C#) [duplicate]
Possible Duplicate: Computing per cent correctness from a bell curve I've been trying to remember my High School teachings and are falling short. I'm working on a project where I need to ...
2answers
1k views
### How to log transform Z-scores?
The data for my variable is in the form of Z-scores only. I'd like to log transform the scores, but I don't know the mean or standard deviation in order to covert to raw scores. Can I assign an ...
4answers
353 views
### Software to plot a log graph
I have to plot a graph with a log distribution on the y axis. The values are: 10^-3..10^3. What software do you suggest I use? My OS is Ubuntu, so I prefer software for Linux. Thanks.
0answers
231 views
### Advice on identifying curve shape using quantreg
I'm using the quantreg package to make a regression model using the 99th percentile of my values in a data set. Based on advice from a previous stackoverflow question I asked, I used the following ...
2answers
156 views
### Log-scale with concentrated data using integers
I have a data set with a range of 0 to 65,000. The vast majority of data points (it is a huge sample) are concentrated between 0 and 1000. There is only one point that has 65,000. I want to plot ...
2answers
256 views
### Log graph question
So let's say you have a distribution where X is the 16% quantile. Then you take the log of all the values of the distribution. Would log(X) still be the 16% quantile in the log distribution?
1answer
494 views
### How do you calculate the standard deviation on a multiplicative scale for a distribution that has been transformed logarithmically?
I know the value for the 16% quartile, so I know the additive deviation for the given distribution. How do I find the deviation of the log of the given distribution on a multiplicative scale?
6answers
727 views
### What are alternatives to broken axes?
Users are often tempted to break axis values to present data of different orders of magnitude on the same graph (see here). While this may be convenient it's not always the preferred way of displaying ...
2answers
1k views
### Why (or when) to use the log-mean?
I am looking at a scientific paper in which a single measurement is calculated using a logarithmic mean 'triplicate spots were combined to produce one signal by taking the logarithmic mean of ...
6answers
31k views
### In linear regression, when is it appropriate to use the log of an independent variable instead of the actual values?
Am I looking for a better behaved distribution for the independent variable in question, or to reduce the effect of outliers, or something else?
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 15, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.888336718082428, "perplexity_flag": "middle"}
|
http://crypto.stackexchange.com/questions/tagged/time-sensitive
|
# Tagged Questions
The time-sensitive tag has no wiki summary.
2answers
226 views
### What is the progress on the MIT LCS35 Time Capsule Crypto-Puzzle?
Ron Rivest posed a puzzle in 1999. MIT LCS35 Time Capsule Crypto-Puzzle. The problem is to compute $2^{2^t} \pmod n$ for specified values of $t$ and $n$. Here $n$ is the product of two large ...
2answers
296 views
### How does one calculate the cryptoperiod?
NIST Special Publication 800-57 defines a cryptoperiod as the time span during which a specific key is authorized for use by legitimate entities, or the keys for a given system will remain in ...
1answer
251 views
### Self-expiring symmetric keys, or: cryptography in absence of secure deletion
I can encrypt some data D using a random symmetric key K, obtaining a ciphertext C, and then encrypt K with my public key Pub and obtain H. So far so good: I can only decrypt C if I have H and my ...
4answers
310 views
### Is it possible to make a time-locked encryption algorithm?
I'm not sure if what I'm asking is even a valid question but here goes. Would it be possible to add a mechanism to an encryption algorithm that would mean it had to be a certain time of the day or a ...
1answer
142 views
### Is there an algorithm or hardware that can sign/verify natural time?
PGP/GPG can used to sign text, others use public key to verify them. So one could say, that these cryptographic algorithms deal with space. Are there any algorithms that can deal with time? E.g. I ...
14answers
3k views
### Time Capsule cryptography?
Does there exist any cryptographic algorithm which encrypts data in such a way that it can only be decrypted after a certain period of time? The only idea that I can think of, is something like this: ...
http://physics.stackexchange.com/questions/44796/confused-about-unit-of-kilowatt-hours/44813
# Confused about unit of kilowatt hours
So I am a little confused on how to deal with the Kilowatt hours unit of power, I have only ever used Kilowatts and I have to design a residential fuel cell used as a backup generator for one day.
The average power consumption of a US household is 8,900 kW-hr per year and 25 kW-hr per day and approximate 1 kW-hr per hour. Does this mean that the power output of my fuel cell is 1 kW and if I wanted to use it for the entire day would it have to be designed to be 25 kW?
-
A backup generator has to handle your peak load, not your average load. Unless you have something like a very large accumulator. 1KW is very roughly 8A at 120 V (though you'd ideally take into account power factors etc). Just sum the wattage of the items you want to power simultaneously and allow a margin for error. Some devices occasionally consume much more power than their average consumption (e.g. laser printers). Think about kettles etc. – RedGrittyBrick Nov 21 '12 at 22:27
My question was not aiming towards information about a generator. It was more towards a better understanding of a kilowatt hour. – Greg Harrington Nov 21 '12 at 22:36
this is not a physics question – yca Nov 22 '12 at 0:19
This is an important question, exactly because it is so basic, yet so many (especially the press) get it wrong. – Bobbi Bennett Nov 22 '12 at 6:03
It is also (maybe deliberately) underspecified. A house that uses 24 Kw-hr in 24 hrs probably does not have the lights on all the time. You have to guess! If they have the power on only half the time, that would be 2Kw peak you need. If they run an electric dryer for 1 hour, and then have the lights off the rest of the time, that would be a 24Kw peak. @yca, think of it as an experimental physics question. – Bobbi Bennett Nov 22 '12 at 6:42
## 6 Answers
What RedGrittyBrick said in his comment is correct, but as you need to understand $kWh$ I'll try to explain a bit more.
There are two main things you need to take into account: the total energy in your fuel cell (in $kWh$) and the maximum power it can deliver (in $kW$). Power is the rate at which energy is used. In electricity, $1W$ is equal to $1V \times 1A$, so $1kW$ is equivalent to $220V$ at about $4.5A$, and $1kWh$ is that amount of power used for 1 hour.
A $1kWh$ battery can deliver $1kW$ for 1 hour, or $0.5kW$ for 2 hours, etc. It does get problematic though, if we try to deliver more power over a shorter time. Batteries can only deliver so much current. For example, a $1kWh$ battery is unlikely to be able to supply $60kW$ for 1 minute. It may not be able to supply the full current required, or at those high currents it may not be able to supply it for the expected time.
If your fuel cell has to last a day, then it obviously needs to store enough energy to last that time. So, if your average use is $1kW$, then you need a $24kWh$ fuel cell to go the distance. But your usage will not be constant during the day. At some time most appliances may be switched off, but at other times you may turn on heaters or airconditioners, etc, and use lots of power. The power used by appliances can also vary a lot. A laser printer can use $1kW$ while printing, but only $10W$ in standby. Your fuel cell has to be able to deliver the maximum current required under the worst circumstances.
On top of all that, the above is only valid if you are using DC appliances - and I have not noticed any of those recently... If your appliances run AC you also need to allow for things like the "power factor".
So you need to know a lot more than just the power or energy usage.
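The capacity-versus-load distinction above comes down to one division. A small sketch (the load figures are illustrative numbers, not real appliance ratings):

```python
# Energy (kWh) vs power (kW): how long a store of energy lasts
# depends on the rate at which it is drawn down.

def runtime_hours(capacity_kwh, load_kw):
    """Hours a fully charged store lasts at a constant load."""
    return capacity_kwh / load_kw

capacity = 24.0                        # kWh for one day at a 1 kW average
print(runtime_hours(capacity, 1.0))    # 24.0 h at the average load
print(runtime_hours(capacity, 2.0))    # 12.0 h if the load doubles
print(runtime_hours(capacity, 12.0))   # 2.0 h at a 12 kW peak
```

The same 24 kWh store lasts a full day only if the load never rises above the average, which is exactly the caveat in the answer above.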
-
Thank you. Since the course I am taking deals with fuel cells and their use and not on electronics then I was told to just simplify the problem which is why I don't include other factors. I was just reading something and it said "A 3 kW fuel cell ... ". So I guess that means it supplies 72 kW-hr of energy. So if I am supplying 24 kW-hr, I was wondering if I could just call it a 1 kW fuel cell – Greg Harrington Nov 21 '12 at 23:30
A 3KW fuel cell may not be able to supply 72 kWh, as the total amount of energy stored in it may not be large enough. You need to remember the difference between power and energy. A fuel cell has limits on both of them, and the relationship between them is not necessarily "24 hours" – hdhondt Nov 21 '12 at 23:36
A kilowatt is a unit of power, which has the dimensions of energy over time.
A kilowatt-hour, then, has dimensions of energy.
As a simple example, if you wanted to charge up a battery so as to operate a 1,000-watt (DC) heater for one hour, you'd need one kilowatt-hour of energy (assuming the mythical world of perfectly efficient batteries, lossless wires, etc.)
In terms of SI units, this is
1000 J/s $\times$ 3600 s = 3.6 MJ.
-
So is my above solution correct? If the average consumtion of a house per day is 24 kW-hr then should my fuel cell be designed as a 1 kW fuel cell or a 24 kW fuel cell? – Greg Harrington Nov 21 '12 at 22:43
24 kW (in simple terms) means that you can handle a 24 kW load. Like a water heater, electric stove, five microwaves, and 600 feet of incandescent Christmas lights. If you wanted to power all of that stuff for a full day, you'd need about 576 kWh of energy to do that. Now, it's a bit more complicated than that, because we're not considering the efficiency, or how many phases your fuel cell has, whether you're converting to AC or not, etc. – John at CashCommons Nov 21 '12 at 22:52
I guess i understand what you're saying, I am just having a hard time trying to apply it to the design of a fuel cell. I am trying to size a fuel cell so that it can run consistently for a day, essentially proving 24 kW-hr so I thought i could just design a 1 kW fuel cell and provide it with enough fuel and air so that it could run for 24 hours – Greg Harrington Nov 21 '12 at 23:01
It's a matter of capacity. A battery (or fuel cell) only has so much capacity for pumping electrons into something to power it. If you expect to power something for longer, all other things being equal, you'll need a bigger fuel cell. – John at CashCommons Nov 21 '12 at 23:12
Thanks anyway, I will try to deal with this in another way to find a solution – Greg Harrington Nov 21 '12 at 23:14
The average power output is about 1 kW (which is the same as 1 kWh per hour). The energy capacity required for 1 day's operation at this power level is 25 kWh.
-
If I need to supply 25 kW-hr per day then I am just going to simplify it and design a 1 kW fuel cell – Greg Harrington Nov 21 '12 at 23:51
I don't think it's that simple. You need enough fuel to last through a day (24 times as much as for 1 hour). Other answers have also made this point: the power rating tells you how fast you can extract energy without over-stressing the cell (think rated car horsepower, for example), while the energy capacity tells you the total amount of work the cell can perform (the size of the gas tank in the car analogy). Of course, if you have hoses supplying fuel from external reservoirs, your capacity is unlimited. – Art Brown Nov 21 '12 at 23:56
I am not getting anywhere on here. I am trying to make a parallel between my homework problems so that I can do the rest of the project. All of my homework problems just say "A ___ kW fuel cell ....". If I could determine how much power my fuel cell provides then I could do the rest of the homework but this is not helping me – Greg Harrington Nov 22 '12 at 0:00
Sorry... Looking at wikipedia, fuel cell technologies are indeed rated by power (e.g. kW), as your problems state. I guess you just strap on an appropriately sized tank to get the total energy you need. Good luck. – Art Brown Nov 22 '12 at 0:14
Thanks for the help. I did not mean to be rude before. Just getting stressed about starting a project last minute. Thanks again! – Greg Harrington Nov 22 '12 at 0:50
OK, I think you have it right: in the context and scope of the question, the fuel cell should be able to deliver 1 kW, and do it all day.
From a more practical standpoint, the problem is under-specified. You need to know the peak load. What is the highest power the house needs? It might sometimes draw 5 kW, maybe 20 kW. Your fuel cell should be able to handle that.
But the homework did not mention that. So set the fuel cell to deliver 1 kW all the time. It will produce 24 kWh in a day.
-
A kilowatt-hour is a unit of energy.
It's the energy of one kilowatt for one hour. Which is equivalent to the energy of 2 kilowatts for half an hour, 4 kilowatts for quarter of an hour, or 2 kilowatts for quarter of an hour plus 1 kilowatt for half an hour.
A house might use 24kWh in a day. But that doesn't mean 1kW continuous output would meet its needs, hour by hour. It just means that 1kW continuous output would produce energy equivalent to the household consumption. It would only meet the house's needs if the household demand were perfectly constant at 1kW power.
To meet power needs and energy needs, you need three things:
1. enough fuel to provide 24kWh of energy
2. the fuel cell has to have enough peak power to meet the household's peak demand. This is likely to be of the order of 3-5kW, unless it has an electric shower, in which case peak demand could be 10-13kW.
3. the fuel cell has to be able to match the slew rate - the rate of change of power - that demand has. For an electric shower, that might mean going from 0 to 10kW in a few seconds, and back down again in the same time.
Your question doesn't have enough information on points 2 and 3 to specify the fuel cell.
A 1kW fuel cell, if it could run continuously for 24 hours, would produce equivalent energy to a household that needs 24 kWh in a day - because the household's mean power requirement is 1kW, the same output as the fuel cell. So the house would not be energy-independent; but it would be some sort of self-sufficient, as long as it could export its surplus at times of surplus, and import extra power at times of deficit. In such a case, it would on average have zero net imports (but it would have N kWh of absolute imports each day, and N kWh of absolute exports each day, where N is some number between 0 and 24).
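The three requirements above can be sketched as a sizing check over an hourly load profile. The profile below is a made-up assumption for illustration, not measured household data:

```python
# Sizing check for the three requirements: total energy, peak power,
# and slew rate, against an assumed hourly load profile (kW per hour).

profile_kw = [0.3] * 7 + [2.5, 1.0, 0.8, 0.8, 1.5,   # morning ramp-up
              1.0, 0.8, 0.8, 0.8, 1.2, 3.0, 2.0,     # evening peak
              1.5, 1.0, 0.8, 0.5, 0.4]               # winding down
assert len(profile_kw) == 24

energy_kwh = sum(profile_kw)        # requirement 1: fuel for the day
peak_kw = max(profile_kw)           # requirement 2: peak output
slew_kw_per_h = max(abs(b - a)      # requirement 3: rate of change
                    for a, b in zip(profile_kw, profile_kw[1:]))

print(energy_kwh, peak_kw, slew_kw_per_h)
```

Even though this profile averages under 1 kW, the cell would still need to deliver a 3 kW peak and follow a jump of over 2 kW between adjacent hours, which is the point of requirements 2 and 3.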
-
So I am a little confused on how to deal with the Kilowatt hours unit of power
Kilowatt-hours is not a unit of power, it is a unit of energy.
I think the easiest way to clarify this may be to think in terms of energy and power and try to avoid terms like kilowatt-hours per hour.
The average power consumption of a US household is 8,900 kW-hr per year and 25 kW-hr per day and approximate 1 kW-hr per hour. Does this mean that the power output of my fuel cell is 1 kW and if I wanted to use it for the entire day would it have to be designed to be 25 kW?
## Answer 1
A fuel-cell with a power output of 1042 Watt operated for a day will deliver the 25 kWh of energy you mention. (Assuming you have a hydrogen-cylinder delivery man who visits regularly with fresh supplies)
It will not allow you to use a kettle.
It may not allow you to turn all the lights on and watch TV
## Answer 2
It may help if you translate everything into SI units
1 kW is 1,000 Watts of power
1 kWh is 1,000 Watts for 3,600 seconds = 3,600,000 Joules of energy
So your target house uses 90,000,000 Joules in a day. If the rate of energy usage is constant and never varies you can supply that amount of energy at a constant rate of 1,042 Joules per second (otherwise known as 1,042 Watts or just over 1 kW)
A typical kettle for boiling drinking water needs over 2 kW of power.
## Answer 3
I eat 2,500 kCalories in 24 hours. (10,460,000 Joules)
That is an average of 121 Joules every second.
However I don't eat a tiny speck of food every second 24 hours a day. I spend some of that time sleeping and on activities where I do not necessarily want to concurrently eat blueberry pie at that rate.
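The arithmetic in Answers 2 and 3 is easy to check mechanically:

```python
# Checking the unit conversions above. 1 kWh = 3.6 MJ; 1 kcal = 4184 J.

KWH = 3_600_000              # joules per kilowatt-hour
DAY = 24 * 3600              # seconds per day

house_joules = 25 * KWH      # 25 kWh per day
print(house_joules)          # 90,000,000 J, as stated in Answer 2
print(round(house_joules / DAY))   # ~1042 W constant supply rate

food_joules = 2500 * 4184    # 2,500 kcal per day
print(food_joules)           # 10,460,000 J, as stated in Answer 3
print(round(food_joules / DAY))    # ~121 J/s average metabolic rate
```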
-
http://mathoverflow.net/questions/81967?sort=oldest
## Conformal transformations and harmonic analysis on the sphere
Consider the $n$-dimensional sphere $S^n$. I'm especially interested in the $n=4$ case. The Hilbert space $L^2(S^n)$ can be decomposed into a direct sum of eigenspaces of the Laplacian, which are finite dimensional. I'm looking for non-isometric conformal transformations
$$f: S^n \to S^n$$
s.t. for some $\lambda, \mu > 0$ if $\psi$ is an eigenvector of the Laplacian with eigenvalue $\alpha < \lambda$ then $f(\psi)$ is a sum of eigenvectors with eigenvalues $< \mu$.
Do such $f$ exist? If so, is it possibly to classify them?
-
## 2 Answers
The only possibility is the trivial one when $\lambda$ is so small that the only eigenfunctions with eigenvalue less than $\lambda$ are constants (eigenvalue zero). Otherwise the eigenfunctions with eigenvalue less than $\lambda$ span the space of polynomials of degree at most $d$ for some positive integer $d$, and then composition with a non-isometric conformal transformation takes it outside the space of polynomials, and thus outside the span of eigenfunctions of eigenvalue less than $\mu$ for any finite $\mu$.
P.S. In case you've not seen this yet: the group of conformal transformations is described by this theorem of Liouville, identifying it with the group of Möbius transformations.
-
Let me add some remarks... The group of conformal transformations of $S^n$ is generated by isometries, scalings ($x \to \lambda x$ conjugated by stereographic projection and its inverse) and spherical inversions $x\to \frac{x}{\|x\|^2}$. As Noam Elkies indicated in his answer, the eigenfunctions of the Laplacian (the so-called spherical harmonics) are restrictions of harmonic polynomials on $\mathbb{R}^{n+1}$.
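For reference, here are the standard spherical-harmonic facts implicit in both answers (textbook material, not new claims):

```latex
% A spherical harmonic of degree $\ell$ on $S^n$ is the restriction of a
% degree-$\ell$ harmonic polynomial on $\mathbb{R}^{n+1}$; with the sign
% convention that $-\Delta$ has nonnegative spectrum, the eigenvalues are
\[
  -\Delta_{S^n}\, Y_\ell \;=\; \ell(\ell + n - 1)\, Y_\ell ,
  \qquad \ell = 0, 1, 2, \dots
\]
% In particular, for the $n = 4$ case of interest the eigenvalues are
% $\ell(\ell+3)$: that is, $0, 4, 10, 18, \dots$, so "eigenvalue $< \lambda$"
% cuts out exactly the polynomials of degree at most $d$ for some $d$.
```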
Also, there is a modification of the Laplace-Beltrami operator by a scalar term, called the Yamabe operator. The Yamabe operator is conformally invariant when acting on densities of a certain conformal weight. Maybe this operator will be more useful for the theorems you want to prove...
-
http://math.stackexchange.com/questions/54431/why-do-people-use-it-is-easy-to-prove/54684
# Why do people use “it is easy to prove”?
Math is not generally what I am doing, but I have to read some literature and articles in dynamic systems and complexity theory. What I noticed is that authors tend to use (quite frequently) the phrase "it is easy to see/prove/verify/..." in the manuscripts. But to me, it is usually not easy at all; maybe because I haven't spent much time in the field, maybe not.
My question is: why do people use the phrase "it is easy" in their proofs?
P.S. I hope this question is not too subjective, and has some value for the community.
-
I guess the most widespread reason is that people tend to be lazy. – t.b. Jul 29 '11 at 10:44
I believe the true meaning is actually "[I think but haven't really checked that] it is easy [for me] to see/prove/verify/..." with at least one pair of brackets removed. – Marek Jul 29 '11 at 10:59
Another reason, I guess, is that when the proposition which is "easy" is not part of your main result, and you are trying to keep the paper short; or that it is a nice corollary which you found which has no real importance. Sometimes however these things are indeed very trivial for the intended reader, for example - proving that embedding is a total order with respect to linear orders. It is not a "trivial" theorem for the beginning mathematician but it is for someone reading a paper titled "Forcing over long well orders" (just making up a title) which assumes some knowledge in set theory. – Asaf Karagila Jul 29 '11 at 10:59
It is true that we overuse "it is easy to show," "obviously," "clearly," and their ilk. However, such terms are hard to avoid when one wants to describe the full logic of an argument without verifying every detail. The main issue is inaccurate use of the term. It should mean roughly "it is (or should be) easy for you" and not "it is easy for me". – André Nicolas Jul 29 '11 at 11:28
There is also the alternative "whose proof is left to the reader (as an exercise)." :D – J. M. Jul 30 '11 at 5:32
## 7 Answers
Use of "it is easy to see that" is common and traditional in mathematical writing, but it is not exactly a proud tradition: really good mathematical exposition uses this and similar locutions very sparingly.
To be more specific, I think it is bad writing to say "It is easy to see that X is true" and say no more about how to prove X. If this occurs in formal mathematical writing and all else is as it should be, then no information is being conveyed. In other words, what other reason is there to assert that X is true and say no more about the proof except that the author expects the reader to be able to supply the details unassisted? If you are skipping the proof for any other reason, you had better say something!
[And of course some of the harm is psychological. If you're carefully reading a text or paper that asserts X is true and says nothing else about it, you know you need to stop and think about how to prove X. If "it is easy to see X" then after every minute or so part of your brain will quit thinking about how to prove X and think, "They said it was easy to see this, and I can't see it at all. What am I, stupid?" I definitely remember thinking this way when I started out reading "serious" math books.]
When I referee papers I often suggest that authors suppress their "it is easy to see that"'s. As others have said in the comments, as a careful, skeptical reader, you also need to stop and be sure that indeed you can supply the proof yourself, and it is notorious among mathematicians that such phrases are likely places to find gaps in mathematical arguments. But it is just as easy -- in fact, easier -- to have a gap in the argument where you don't have any text at all, so writing "it is easy to see that" is not really the guilty party but rather a possible piece of incriminating evidence.
So if it's not so good to write this, why do people write this way? And they certainly do: I happened to be editing my commutative algebra notes when I read this question, so out of curiosity I searched for "easy" and found about ten instances of "it is easy to see that" in 265 pages of notes. About half of them I simply took out. The other half I thought were okay because I didn't just say "it is easy to see that": I went on to explain why it was easy! So having caught myself doing what I said not to, I can reflect on some causes:
1) Tradition/habit.
I have read "it is easy to see that" thousands of times, so it is in my vocabulary whether I like it or not. Most mathematicians know that they have funny phrases which appear ubiquitously in mathematical writing but not in the rest of their lives: one of the very first questions I answered on this site was about the meaning and use of "in the sequel". In the year or so since then I have observed it in my own writing: it just fits in there. You have to really actively dislike some of these standard locutions in order to avoid writing them yourself. For instance, I have more than a thousand pages of mathematical writings available online and I challenge anyone to find "by inspection" anywhere in these. "By inspection" is the deformed cousin of "it is easy to see that": whereas at least it is easy to see what "it is easy to see that" means, even the meaning of "by inspection" is obscure.
2) A conflation of formal writing and informal writing / speaking / teaching.
The way you speak mathematics to someone else is very different from the way you write it: it is much more temporal. If you are teaching someone new mathematics then most often they cannot verify / process / understand every single mathematical statement you make, in real time, so they have to make choices about exactly what to think about as you're talking to them. In spoken conversation it's extremely useful to say "this is easy": by saying it, you're cueing the listener that it's safe to direct her attention elsewhere. Also, because when you talk -- or write informally -- you don't give anywhere near as complete information as you do in formal mathematical writing, commentary on what you're skipping becomes more important. For instance, in an intermediate level graduate course I may prove approximately 2/3 of the theorems I state in class. If I'm skipping something, it's probably because it's too easy or too hard. I had better say which it is!
3) Immaturity/Laziness.
Certainly when you're reading your own writing and you find "it is easy to see that", you need to stop short and make sure you know exactly what you omitted. If it's not easy for you to see what you wrote it was easy to see, you may have a serious problem: indeed, you may be papering over a gap in your argument. To do this intentionally is a sign of great mathematical immaturity -- someone who hides (in plain sight!) what they don't know in this way is not going to make it very far in this profession -- but even doing it unintentionally is something that most mathematicians largely grow out of with experience.
-
+1 "What am I, stupid?" - this is exactly how I feel – oleksii Jul 29 '11 at 12:23
What a great answer! I think there's one major take-way that can't be overemphasized: let your texts mature a little bit (a few months if you can afford it). Go back and try to eliminate all phrases conveying no information. Your goal should be to make it as easy as possible for the reader, not to make it as easy as possible for yourself. If you can't provide a reference or boil down the argument to a few sentences indicating the main ideas then this probably means that you should elaborate and think through it once again. – t.b. Jul 29 '11 at 12:34
Very interesting answer! As a side note to point 3), I recently read an article in which a proof started with "It is easy to see that the theorem is true if $a=0$, so suppose $a >0$." But actually the theorem was false in the case $a=0$..! (I indeed felt quite stupid when failing to see why the $a=0$ case was true, until I found a counterexample) – Malik Younsi Jul 29 '11 at 12:56
Writing a paper is usually a long and tedious process. Some arguments seem (to the author) to be straightforward, but potentially painful to write out (e.g. due to having to introduce additional notation or concepts that are not relevant to the larger thread of the discussion at hand). At this point the author may simply state the required result, saying that it is "easy to prove", or something similar.
Ideally, such a result will indeed be easy to prove; e.g. at the level of difficulty of an exercise in a graduate text-book. Note that if your mastery of the subject is such that you are challenged by exercises in graduate-level text-books, then you may well find "easy to prove" statements hard to prove, not easy! The intended audience for such a statement is typically another expert in the field, not a beginner.
On the other hand, one reason that an argument can be hard, or at least tedious, to write down is that the author may not have at hand good tools for formalizing their intuition about the argument. In this case, rather than going to the trouble of developing these tools to formalize their intuition, they may just state the result, writing "it's easy to prove" or something similar. In my experience, this is usually why mistakes can creep in at these points --- because the difficulty in formalizing the intuition may be caused by an actual failing in the intuition!
The lesson I take from this in my own writing is that, when one thinks that a certain proof will be easy but tedious, one should examine the situation carefully, to make sure that the difficulty in writing out the complete argument is not being caused by some hidden flaw in one's intuition about the situation.
As a reader of mathematics, especially if you are not an expert and are reading well-known papers that have been certified as correct by experts, it's probably best to presume that everything is in fact correct. However, one should expect that reading a paper and filling in all the details will be at least as demanding as reading a chapter in a graduate text-book on the same topic and doing all the exercises.
-
I like Pete's answer a lot, but I feel I should add something that I don't think he mentions, which is an unapologetic word of sympathy for the phrase. Here's the thing: I don't think "easy" means "easy to construct", it means "easy to follow". Coming up with the proof might not be trivial, but the process of coming up with a proof and the proof itself are (as anyone in mathematics knows) nothing like the same thing. Which of these is important to you will of course depend hugely on your level of mathematics and your reason for reading the text.
Having gone through undergraduate mathematics relatively recently, I've found myself picking up phrases like "and trivially we see that..." from my lecturers. It's a nasty habit, but on the other hand, I know what I mean when I say it, and by extension I know what they mean when they say it. I certainly don't mean to imply that my undergraduate career has been one long triviality, or that anyone who can't follow my train of thought is an idiot. I mean that, if I wrote the proof down in front of you, and maybe you did a bit of scratching about, you would find it followed trivially from previous work or a definition. I don't need to assume you are comfortable with this previous work or definition yet, and perhaps I even expect you not to be, unless I'm trying to teach in "real time". Some things take a while to sink in, and maths is not a spectator sport.
It's actually pedagogically very reasonable, in some cases. In fact, I would argue that not omitting some proofs is pedagogically very bad practice. Here are a few reasons, all of which apply hugely to undergraduate texts, but do apply elsewhere too.
1. Sometimes adding huge and reasonably elementary proofs distracts students from the main goal of the discussion.
2. Sometimes the proofs aren't very enlightening. Few people read a proof thinking "I must check this to see whether it's right". We read a proof to see why it's right, in order to learn from it, for our own selfish gain. If it's right because of something we already know, it is of little to no value to someone trying to learn from a text.
3. Anything I read in a textbook, especially a well known textbook, I (by gut instinct) assume to be important in my understanding of the content, otherwise it should have been consigned to an appendix (or the bin) for me not to read at my leisure. Eminent authors are authority figures in this way.
The example that sticks in my mind is a proof that the Ackermann function is not primitive recursive. The proof is a very simple concept and some very simple arithmetic manipulation, and your average 15-year-old could follow it without any trouble. But is it useful? Not in the slightest. It is one cute idea (namely "show that everything primitive recursive grows more slowly"), followed by several pages of very elementary mathematics which is so full of 'tricks' that it is very difficult to come up with, it is very difficult to memorise, and it takes half an hour to write down. And the tricks are so specific and elementary (things like "now replace n by the inequality n < 2n + 3") that they can't be used elsewhere. What's the value in that to someone learning recursion theory?
-
The phrase "it is easy to prove" is part of the mathematics protocol. It takes some initiation to get used to it, like reading a mathematics book does. A novel, a poem, a holiday brochure, a chess magazine, an economics book, and a mathematics book all need to be read, but each of them has a different reading protocol. 'Simply', 'it is easy to see that', 'this trivial proof is left to the reader', and so on. - By the way, the key to reading a math book is a pencil and paper. ( Use it. )
-
Look at this cartoon in abstrusegoose.com http://goo.gl/K0bp2
-
This is a nice assortment of answers that we have going. Personally, I have tried to avoid using "it is easy to see" unless the intended reader (or, listener) should truly find it easy. I have used the phrase "a routine calculation shows that" in print, because the calculation truly was, probably more so for the prospective referee than for me. I have no qualms about using such phrases as appropriate for the audience being addressed. I agree with Pete that we should probably provide a little nudge in the correct direction of our argument whenever possible.
There is a possibly apocryphal story about Hardy (I think) who came to a certain point in a lecture and said "it is clear that," after which he paused, went over to a corner of the board and doodled for some time before resuming his lecture with the proclamation "Yes, it is clear that ... ." Hardy, I suppose, could get away with that. The rest of us might want to let the audience in on our little secret. I guess as with anything else, it's all a matter of balance: don't let the details obscure the flow of ideas.
-
@Chris: I didn't see your answer until just now, but I like it and agree with it. About your story: I heard it about Wiener, I think, not Hardy. From what I know about the psychology of these two great mathematicians (both of whom died before I was born), judged purely as a piece of apocrypha, it fits Wiener much better than Hardy: Hardy, to all accounts, was really excellent at explaining why he knew what he knew. – Pete L. Clark Mar 10 '12 at 21:00
It often means trivial in the algorithmic sense:
This part of the proof is tedious, but not interesting and does not necessitate insightful new ideas.
Almost always it would be better to be explicit:
E.g.: It is trivial to check that the two sides are equal because the quotients of quotients are rational functions.
Obviously, this does not imply that the author would be able to correctly calculate the quotients of quotients on the first try. It is trivial in the sense that it is trivial to check 524 truly simple cases or calculate the gcd of two numbers.
Or be honest: The case of $B_n$ works exactly as the case of $A_n$, but the formulas are more complicated, so the details are not given.
This does not mean it is easy; it means that if you know the case $A_n$ and you are determined to solve the case $B_n$, the method will work.
-
http://math.stackexchange.com/questions/27731/isoperimetric-inequalities-of-a-group
# Isoperimetric inequalities of a group
How do you transfer isoperimetric inequalities of a group to the setting of Riemann integrals of functions of the form $f\colon \mathbb{R}\rightarrow G$, where $G$ is a metric group, so that being $\delta$-hyperbolic in the sense of Gromov is expressible via Riemann integration?
In other words, how do you define "being a $\delta$-hyperbolic group" by using integrals in metric groups?
(Note: I am not interested in the "Riemann" part, so you are free to take commutative groups with Lebesgue integration, etc.)
-
## 1 Answer
You can do this using metric currents in the sense of Ambrosio-Kirchheim. This is a rather new development of geometric measure theory, triggered by Gromov and really worked out only in the last decade. I should warn you that this is rather technical stuff and nothing for the faint-hearted.
Urs Lang has a set of nice lecture notes, where you can find most of the relevant references, see here.
My friend Stefan Wenger has done quite a bit of work on Gromov hyperbolic spaces and isoperimetric inequalities; his Inventiones paper Gromov hyperbolic spaces and the sharp isoperimetric constant seems most relevant. You can find a link to the published paper and his other work on his home page; the arXiv preprint is here.
I should add that I actually prefer to prove that a linear (or subquadratic) isoperimetric inequality implies $\delta$-hyperbolicity using a coarse notion of area (see e.g. Bridson-Haefliger's book) or using Dehn functions, the latter can be found in Bridson's beautiful paper The geometry of the word problem.
-
Thank you Theo, I need some time to comprehend your message. – niyazi Mar 18 '11 at 5:19
@niyazi: It is more a list of references addressing the second version of your question (and some variants) than an answer, but I think that this is the closest you can get. – t.b. Mar 18 '11 at 5:23
The last paragraph of t.b.'s answer can be interpreted as a form of integration of functions $f:R\to X$ where $X$ is the Cayley 2-complex of a finitely presented group $G$. This is just a fancy way of saying that you compute the area of a closed curve in the Cayley graph by adding up the number of 2-cells of a disc map into the Cayley 2-complex spanned by that curve, and minimizing over all such disc maps. After all, integration is just summation. – Lee Mosher May 22 '12 at 12:56
http://math.stackexchange.com/questions/214292/factorization-of-x7-1-into-irreducible-factors-over-gf4
Factorization of $(x^7-1)$ into irreducible factors over $GF(4)$
I need to find cyclotomic cosets depending on $n=7$ and $q=4$ and find the factorization of $(x^7-1)$ into irreducible factors over $GF(4)$.
Thanks for any advice.
-
Well, 1 is a root, so that's a start. – Chris Eagle Oct 15 '12 at 15:39
So I have $(x-1)(x^6+x^5+x^4+x^3+x^2+x+1)$. – James Oct 15 '12 at 15:46
Moreover, $x^7-1$ splits completely over $\mathbb{F}_8$, since its factorization in $\mathbb{F}_2$ is $(1+x)(1+x+x^3)(1+x^2+x^3)$. Note that there is no irreducible polynomial over $\mathbb{F}_2$ with degree $2$ that divides $x^7-1$. – Jack D'Aurizio Oct 15 '12 at 15:49
1 Answer
As has been noted by Jack D'Aurizio in his comment, the polynomial $x^{7}-1$ splits into a product of $x-1$ and two different irreducible factors of degree $3$ over $F_{2}.$ This certainly gives the same factorization (but not a priori into irreducible factors) over $F_{4}.$ However $F_{4}$ and $F_{16}$ contain no element of multiplicative order $7,$ so contain no root of $x^{7}-1$ other than $1,$ so the two factors of degree $3$ remain irreducible in $F_{4}[x].$
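A quick way to sanity-check this factorization is to multiply the claimed $F_2$-factors back together. The sketch below represents polynomials over GF(2) as coefficient lists (index = degree, lowest degree first):

```python
def mul_gf2(a, b):
    """Multiply two polynomials over GF(2), given as coefficient lists."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] ^= ai & bj  # addition mod 2 is XOR
    return out

# The claimed irreducible factors of x^7 - 1 over F_2:
f0 = [1, 1]           # 1 + x
f1 = [1, 1, 0, 1]     # 1 + x + x^3
f2 = [1, 0, 1, 1]     # 1 + x^2 + x^3
prod = mul_gf2(mul_gf2(f0, f1), f2)
print(prod)  # [1, 0, 0, 0, 0, 0, 0, 1], i.e. 1 + x^7, which is x^7 - 1 over GF(2)
```

Since $F_4$ and $F_{16}$ contain no element of multiplicative order $7$, the two cubic factors remain irreducible over $F_4$, exactly as the answer explains.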
-
http://math.stackexchange.com/questions/198365/matrix-of-linear-transformation-w-r-t-basis-terminology?answertab=votes
# Matrix of linear transformation w.r.t basis terminology
What does it mean to find the matrix of a linear transformation with respect to a basis in the domain, and another in the codomain? In our notes we usually use the standard basis in the domain and a non-standard basis in the codomain. I assume the solution differs if we use a non-standard basis in the domain, but I'm not really sure how.
I'm probably not being very clear so I'll include an example.
Consider the basis $\beta = \{1,t,t^2\}$ for $P_2(\mathbb{R})$. Let $D: P_2(\mathbb{R}) \rightarrow P_2(\mathbb{R})$ be defined by differentiation, $D(p)(t) = p'(t)$. Find the matrix $B_D$ of $D$ with respect to $\beta$ in the domain and $C = \{1+t, t-t^2, t^2\}$ in the codomain.
I know how to answer this question, but not really what it means. How would it differ if $\beta$ wasn't the standard basis? How do you do it with respect to two different bases?
Some clarification would be much appreciated, thanks!
-
## 1 Answer
An important point of abstract linear algebra is that "bases are arbitrary"; a "standard basis" is just for convenience and has no mathematical significance. The way you answer this question doesn't change at all no matter what bases (standard or not) that you choose for the domain or codomain:
1. Find the image, under $D$, of each element of the given basis for the domain.
2. Write these images in terms of the given basis for the codomain. This gives you a set of coordinates (real numbers, in your example) for each element in the image.
3. Collect these coordinates into a matrix.
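To make the recipe concrete, here is a small script that carries out the three steps for the example in the question. The Gaussian-elimination helper is only for illustration (exact rational arithmetic via the standard library; the final entries happen to be integers in this example):

```python
from fractions import Fraction

def solve(A, b):
    """Solve A x = b by Gaussian elimination with exact rationals."""
    n = len(A)
    M = [[Fraction(v) for v in row] + [Fraction(bi)] for row, bi in zip(A, b)]
    for col in range(n):
        piv = next(r for r in range(col, n) if M[r][col] != 0)
        M[col], M[piv] = M[piv], M[col]
        M[col] = [v / M[col][col] for v in M[col]]
        for r in range(n):
            if r != col and M[r][col] != 0:
                M[r] = [rv - M[r][col] * cv for rv, cv in zip(M[r], M[col])]
    return [int(M[i][n]) for i in range(n)]

# Work in standard coordinates (1, t, t^2).
# Codomain basis C = {1+t, t-t^2, t^2} as columns of the change-of-basis matrix:
Mat = [[1,  0, 0],   # coefficient of 1
       [1,  1, 0],   # coefficient of t
       [0, -1, 1]]   # coefficient of t^2

def D(p):
    """Differentiate a0 + a1*t + a2*t^2, given as [a0, a1, a2]."""
    return [p[1], 2 * p[2], 0]

# Steps 1-2: image of each vector of beta = {1, t, t^2}, in C-coordinates;
# step 3: collect the coordinate vectors as the columns of B_D.
cols = [solve(Mat, D(e)) for e in ([1, 0, 0], [0, 1, 0], [0, 0, 1])]
B_D = [[cols[j][i] for j in range(3)] for i in range(3)]
print(B_D)  # [[0, 1, 0], [0, -1, 2], [0, -1, 2]]
```

For instance $D(t) = 1 = (1+t) - (t-t^2) - t^2$, which is the second column $(1,-1,-1)$ of the matrix above.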
-
Thanks, I was confusing myself with what we were actually doing in the first step. All clear now. – user1520427 Sep 18 '12 at 5:08
http://physics.stackexchange.com/questions/46218/partition-function-of-bosons-vs-fermions/46243
Partition function of bosons vs fermions
I have two atoms, both of which are either bosons or fermions, with three allowed energy levels $E_1 = 0$, $E_2 = E$, $E_3 = 2E$, with degeneracies 1, 1, 2 respectively (four single-particle states in total).
What's the difference between the partition functions of a pair of two bosons and that of a pair of two fermions?
-
What have you tried so far? What does the partition function look like if you don't assume any particular statistics? How would you expect it to change when you add the statistics to the problem? – Colin McFaul Dec 7 '12 at 15:59
1 Answer
For the partition sum, you have to sum $e^{-E}$ (taking $T=1$) over all possible eigenstates of the system, where $E$ is the energy of the corresponding state.
Two bosons can be in 10 states $|kl\rangle$ with $1\leq k \leq l \leq 4$, where we accounted for the degeneracy by introducing an additional state with $E_4 =2E$. The corresponding partition sum reads (we assume the particles to be noninteracting) $$Z_B = \sum_{k\leq l} e^{-E_k- E_l} = 1+ e^{-E} + 3 e^{-2E} +2 e^{-3 E} +3 e^{-4E}.$$
Similarly, for fermions we have 6 states $|kl\rangle$, with $1\leq k < l \leq 4$ with the partition sum $$Z_F = e^{-E} + 2 e^{-2E} +2 e^{-3 E} + e^{-4E}.$$
So the difference of the partition functions of a pair of two bosons and that of a pair of two fermions is ;-) $$Z_B - Z_F = 1 + e^{-2E} +2 e^{-4E}.$$
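The bookkeeping above is easy to verify by brute-force enumeration. In the sketch below, each dictionary maps a total energy $m$ (in units of $E$, i.e. the key of the coefficient of $e^{-mE}$) to its multiplicity:

```python
from itertools import combinations, combinations_with_replacement
from collections import Counter

# Single-particle spectrum in units of E; the doubly degenerate level 2E
# appears as two distinct states:
levels = [0, 1, 2, 2]

# Bosons: unordered pairs with repetition allowed; fermions: without repetition.
Z_B = Counter(a + b for a, b in combinations_with_replacement(levels, 2))
Z_F = Counter(a + b for a, b in combinations(levels, 2))

print(sorted(Z_B.items()))  # [(0, 1), (1, 1), (2, 3), (3, 2), (4, 3)]
print(sorted(Z_F.items()))  # [(1, 1), (2, 2), (3, 2), (4, 1)]
```

The multiplicities reproduce exactly the coefficients of $Z_B$ and $Z_F$ above.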
-
+1 for giving an answering with a technically correct interpretation of ‘difference’. – Claudius Dec 7 '12 at 21:42
I would just add a clarification that you consider states like $|12\rangle$ and $|21\rangle$ as indistinguishable (i.e., a proper boson/fermion state would be $|12\rangle \pm |21\rangle$) – Benji Remez Dec 8 '12 at 0:40
@BenjiRemez: thank you for the clarification! – Fabian Dec 8 '12 at 7:10
http://physics.stackexchange.com/questions/tagged/black-holes?page=1&sort=active&pagesize=50
# Tagged Questions
A black hole is a volume from which photons, or any matter, can not escape. More formally, the coordinate speed of light at the event horizon - the boundary of a black hole - is zero, as measured by a sufficiently separated observer.
0answers
51 views
### Entanglement and Black holes
If you have two entangled quantum states, One state falls into a black hole and you measure the other state, What can you say about the state that has fallen into the black hole? If you have billions ...
1answer
35 views
### What's the criteria for black hole thermodynamically stability? (And dynamical?)
It looks like the usual criterion (positivity of the Hessian, which geometrically means concavity of entropy) is not useful, because entropy is not additive and not extensive for a black hole. Then what is the ...
0answers
51 views
### Gravitational redshift of Hawking radiation
How can Hawking radiation with a finite (greather than zero) temperature come from the event horizon of a black hole? A redshifted thermal radiation still has Planck spectrum but with the lower ...
1answer
80 views
### Is it possible (theoretically) to divide Black Hole into two parts? [duplicate]
I have read that it's not possible.
2answers
392 views
### What is the mass density distribution of an electron?
I am wondering if the mass density profile $\rho(\vec{r})$ has been characterized for atomic particles such as quarks and electrons. I am currently taking an intro class in quantum mechanics, and I ...
0answers
39 views
### What is the physical meaning of fact, that Reissner-Nordstrom black hole is thermodynamically unstable?
It is known that the Reissner-Nordstrom black hole is thermodynamically unstable [1]. Does it mean that there is no Reissner-Nordstrom black hole in the physical world? Does it mean that there may be ...
0answers
27 views
### Dark Energy, Space Time and Black Holes [closed]
Since space and time are both one and the same would that mean that as time passes and accumulates after the big bang that space is forced to grow? This would explain why all the galaxies are speeding ...
3answers
776 views
### From where (in space-time) does Hawking radiation originate?
According to my understanding of black hole thermodynamics, if I observe a black hole from a safe distance I should observe black body radiation emanating from it, with a temperature determined by its ...
13answers
5k views
### How does gravity escape a black hole?
My understanding is that light can not escape from within a black hole (within the event horizon). I've also heard that information cannot propagate faster than the speed of light. It would seem to ...
2answers
40 views
### Do black holes have charges?
Do black holes have charges? If so, how would they be measured? Also, does electricity behave the same way? Black holes affect photons, which are carriers of EM radiation, so do black holes have any ...
5answers
676 views
### Anti-Matter Black Holes
Assuming for a second that there were a pocket of anti matter somewhere sufficiently large to form all the type of object we can see forming from normal matter - then one of these objects would be a ...
5answers
276 views
### Theoretical physics and education: Does it really matter a great deal about what happens inside a black hole, or about Hawking radiation? [closed]
I stumbled across this article http://blogs.scientificamerican.com/cross-check/2010/12/21/science-faction-is-theoretical-physics-becoming-softer-than-anthropology/ It got me thinking. Why do we ...
3answers
627 views
### Black holes and positive/negative-energy particles
I was reading Brian Greene's "Hidden Reality" and came to the part about Hawking Radiation. Quantum jitters that occur near the event horizon of a black hole, which create both positive-energy ...
1answer
75 views
### Will the black hole evaporate in finite time from external observer's perspective?
There is the problem that is bothering me with the black hole evaporation because of Hawking radiation. According to Hawking theory the black hole will evaporate in finite time because of quantum ...
0answers
30 views
### How connected thermodynamical stability and dynamical stability for black holes?
The criterion for thermodynamical stability is the concavity of entropy. But for a black hole, entropy is non-additive.
2answers
949 views
### How would a black hole power plant work?
A black hole power plant (BHPP) is something I'll define here as a machine that uses a black hole to convert mass into energy for useful work. As such, it constitutes the 3rd kind of matter-energy ...
0answers
15 views
### Pair production intefering with gamma-ray laser black hole fabrication
A common "proposal" to make a micro black hole is to use on the order of 10^12 kg of gamma-ray lasing medium and focus all the light at a small point. However, intense light will interact with itself ...
2answers
87 views
### Why don't black holes within a galaxy pull in the stars of the galaxy
visit http://www.nasa.gov/audience/forstudents/k-4/stories/what-is-a-black-hole-k4.html If black holes can pull in even light, why can't they pull in the stars in the galaxy?
1answer
48 views
### Why does the Kruskal diagram extend to all 4 quadrants?
Why is it that the Kruskal diagram is always seen extended to all 4 quadrants when the definitions of the $U,V$ coordinates don't seem to suggest that the coordinates are not defined in, say, the 3rd ...
5answers
441 views
### Why can't you escape a black hole?
I understand that the event horizon of a black hole forms at the radius from the singularity where the escape velocity is c. But it's also true that you don't have to go escape velocity to escape an ...
0answers
69 views
### Is there any proof that the speed of gravity is limited? [duplicate]
I must warn that though I'm arguing with black holes, I'm not asking how gravity escapes the black hole! I want to know if the finite speed of gravity waves was proven by an experiment. We ...
3answers
105 views
### Is it possible to have a singularity with zero mass?
A singularity, by the definition I know, is a point in space with infinite of a property such as density. Density is Mass/Volume. Since the volume of a singularity is 0, then the density will thus ...
1answer
199 views
### Special relativity paradox and gravitation/acceleration equivalence
One of the features of the black hole complementarity is the following : According to an external observer, the infinite time dilation at the horizon itself makes it appear as if it takes an ...
2answers
217 views
### Hawking radiation and black hole entropy
Is black hole entropy, computed by means of quantum field theory on curved spacetime, the entropy of matter degrees of freedom i.e. non-gravitational dofs? What is one actually counting?
2answers
94 views
### Is time going backwards beyond the event horizon of a black hole?
For an outside observer the time seems to stop at the event horizon. My intuition suggests, that if it stops there, then it must go backwards inside. Is this the case? This question is a followup ...
2answers
104 views
### What is a sudden singularity?
I've seen references to some sort of black hole (or something) referred to as a sudden singularity, but I haven't seen a short clear definition of what this is for the layman.
2answers
88 views
### How can we detect a black hole? [duplicate]
If black holes are phenomena of very high density (gravitational singularities) which don't emit radiation how can we detect them so far away from us where so much other radiation can hide the black ...
3answers
54 views
### Spaceship split near event horizon
Lets say there's two astronauts, Alice and Bob, going on a space trip to a super-massive black hole. So large that they wouldn't notice any significant spaghettification forces at the event horizon. ...
1answer
358 views
### Firewall's grandfather paradox
See What are cosmological "firewalls"?. Alice is in freefall in her spacecraft just above the horizon of a gigantic black hole. She measures whether or not the near modes of the horizon ...
1answer
107 views
### Why doesn't the firewall argument also apply to far away ingoing modes?
Gidom Mera's answer at http://physics.stackexchange.com/a/45511 is illuminating, but on closer analysis, it brings up further puzzles. Backscattering works in both directions. Let's see what we get ...
8answers
2k views
### What are cosmological “firewalls”?
Reading the funny title of this talk, Black Holes and Firewalls, just made me LOL because I have no idea what it is about but a lively imagination :-P (Sorry Raphael Bousso but the title is just too ...
0answers
54 views
### What is the physical mechanism for the subjective rapid vanishing of the firewall on such a short notice?
Suppose there is an astronomical sized black hole. There is an observer Alice. She jumps into the black hole after it has emitted 2/3 — or 3/4, the exact number doesn't matter — of all the ...
1answer
251 views
### Gravitational Redshift around a Schwarzschild Black Hole
Let's say that I'm hovering in a rocket at constant spatial coordinates outside a Schwarzschild black hole. I drop a bulb into the black hole, and it emits some light at a distance of $r_e$ from the ...
1answer
111 views
### How would you detect Hawking radiation?
Hawking theorized that a black hole must radiate and therefore lose mass (Hawking radiation). According to classical relativity though, nothing can escape a black hole, the hawking radiation would ...
1answer
102 views
### Do all black holes spin in the same direction?
My question is as stated above, do all black holes spin the same direction? To my knowledge, the spin in the direction of the spin of the matter that created them. Another similar question was asked ...
1answer
42 views
### “WLOG” re Schwarzschild geodesics
Why, when studying geodesics in the Schwarzschild metric, one can WLOG set $$\theta=\frac{\pi}{2}$$ to be equatorial? I assume it is so because when digging around the internet, most references seem ...
1answer
47 views
### Would the universe get consumed by blackholes because of entropy?
Since the total entropy of the universe is increasing because of spontaneous processes, black holes form because of entropy (correct me if I'm wrong), and the universe is always expanding, would the ...
0answers
157 views
### Spacetime around a Black Hole
If we consider the sun, then space-time is curve around it. My question is that what is the kind of curvature of space and time around the black hole. Is that space and time more curved around the ...
1answer
55 views
### Relativistic Computation?
Is it possible to employ relativity to develop computational technology? Here is a really basic example: Build a Computer and Feed it the Problem (say the problem is projected to take 10 years to ...
0answers
48 views
### Singularities in Schwarzchild space-time
Can anyone explain when a co-ordinate and geometric singularity arise in Schwarzschild space-time with the element ...
2answers
575 views
### Analog Hawking radiation
I am confused by most discussions of analog Hawking radiation in fluids (see, for example, the recent experimental result of Weinfurtner et al. Phys. Rev. Lett. 106, 021302 (2011), ...
5answers
325 views
### What happens to light and mass in the center of a black hole?
I know that black holes are "black" because nothing can escape it due to the massive gravity, but I am wondering if there are any theories as to what happens to the light or mass that enters a black ...
2answers
90 views
### What happens to things when things get crushed in a blackhole [duplicate]
When a black hole destroys things until they are smaller than molecules, where does it go and what happens when it clogs up?
1answer
141 views
### Does the curvature of space-time cause objects to look smaller than they really are?
What's the difference between looking at a star from a black hole and looking at it from empty space? My guess is that the curvature of space-time distorts the wavelength of light thus changing the ...
1answer
79 views
### Why does the Schwarzschild radius become excessively large after a certain point?
Here's something that I've found difficult to wrap my head around. The relationship between the Schwarzschild radius and mass is linear. It's generally known that if you take an object in the universe ...
3answers
166 views
### Where 2 comes from in formula for Schwarzschild radius?
In general theory of relativity I've seen several times this factor: $$(1-\frac{2GM}{rc^2}),$$ e.g. in the Schwarzschild metric for a black hole, but I still don't know in this factor where 2 comes ...
1answer
214 views
### How much of a star falls into a black hole?
http://blogs.discovermagazine.com/badastronomy/2011/04/05/astronomers-may-have-witnessed-a-star-torn-apart-by-a-black-hole/ A lot of the star in the disc, a lot of the star in the jets, precisely how ...
1answer
50 views
### Does non-mass-energy generate a gravitational field?
At a very basic level I know that gravity isn't generated by mass but rather the stress-energy tensor and when I wave my hands a lot it seems like that implies that energy in $E^2 = (pc)^2 + (mc^2)^2$ ...
2answers
303 views
### Black Hole Photon Sphere
The photon sphere is a spherical region in space where photons are forced to travel in an orbit at $r = \frac{3GM}{c^{2}}$. Is it possible to detect these spheres? What happens if I fall through ...
1answer
105 views
### General definition of an event horizon?
Horizons are in general observer-dependent. For example, in Minkowski space, an observer who experiences constant proper acceleration has a horizon. Black hole horizons are usually defined as ...
http://math.stackexchange.com/questions/95954/alternate-proof-that-a-sequence-is-cauchy/95956
# Alternate proof that a sequence is Cauchy
Is it sufficient to show that for any $\epsilon > 0$ there exists $N$ such that $d(s_n, s_{n+1}) < \epsilon$ whenever $n \geq N$, in order to prove that a sequence $(s_n)$ is Cauchy?
-
FYI, this is actually true in the p-adic world. ;) – Mark Schwarzmann Jan 3 '12 at 1:42
## 4 Answers
In general, no, common counterexamples being $s_n = \sqrt{n}$ or $s_n = \log n$ or $s_n = \sum_1^n \frac{1}{i}$ as mentioned by Chris.
But it's not completely unfeasible that there exist metric spaces $(X,d)$ in which this is true. For example if $d$ is the discrete metric on $X$ then it is certainly true; I wonder if there are others.
-
It's true, for example, in any ultrametric space. This includes the $p$-adics, as Mark notes in the comments. – Chris Eagle Jan 3 '12 at 1:55
No. For example, take $s_n=\sum_{i=1}^n \frac{1}{i}$.
-
Intuition and/or process when looking for such a sequence: firstly, it's going to be easier to look for an unbounded example; it requires more ingenuity to find a bounded one (though simple sequences can be found). Secondly, we don't want it to grow very quickly, and we can tell how quickly something grows by taking its derivative. A canonical slowly growing function is $\log(x)$.
-
No, this is false in general. A standard counter-example is the harmonic series: $H_n = \sum \limits_{i=1}^n \frac{1}{i}$. It is well-known that $H_n$ diverges while $H_{n} - H_{n-1} = \frac{1}{n} \to 0$.
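A quick numerical illustration of this counterexample (the checkpoint values of $n$ are an arbitrary choice):

```python
# Partial sums H_n of the harmonic series: the consecutive differences 1/n
# shrink to 0, yet H_n grows without bound (roughly ln n + 0.577).
H, snapshots = 0.0, {}
for n in range(1, 10**6 + 1):
    H += 1.0 / n
    if n in (10, 10**3, 10**6):
        snapshots[n] = H
print(snapshots)    # H_10 ~ 2.93, H_1000 ~ 7.49, H_1000000 ~ 14.39
print(1.0 / 10**6)  # while the final step H_n - H_{n-1} is only 1e-6
```

So the step sizes become arbitrarily small while the sequence keeps growing, ruling out the Cauchy property.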
This situation is the sequence-analogue of the “$n$-th term test” for series:
If $\sum a_n$ converges, then $a_n \to 0$. Equivalently, if $a_n$ does not converge to $0$, then $\sum a_n$ diverges.
You might know that the $n$-th term test is a divergence test and cannot be used to assert convergence of a series. Similarly, we cannot conclude that a sequence $s_n$ converges given only that $s_{n} - s_{n-1} \to 0$.
In fact, this connection is not a coincidence. Let $\sum a_n$ be a series, and define $s_n$ be its $n$-th partial sum. Then $s_{n} - s_{n-1} = a_n$, so asking if $s_n - s_{n-1} \to 0$ is equivalent to asking whether the $n$-th term of the series goes to $0$.
Finally, let me also touch upon Mark Schwarzmann's comment. In any ultrametric space (with an addition operation satisfying the usual properties), a sequence $s_n$ converges if and only if $s_{n+1} - s_n \to 0$. By the preceding discussion, it follows that any series $\sum a_n$ in such a space converges if and only if it passes the $n$-th term test!
Examples of such spaces include
(i) any discrete metric,
(ii) the $p$-adic numbers, and
(iii) the ring of formal power series $\mathbf C[[X]]$ (with the metric $d(a, b) = 2^{-i}$ where $i$ is the smallest index where $a$ and $b$ differ).
-
http://mathhelpforum.com/geometry/159828-angles-proving.html
# Thread:
1. ## Angles proving
Point M is the midpoint of side BC of an acute triangle ABC. Point K lies on side BC and satisfies the condition $\angle BAM=\angle KAC$. On segment AK a point E was chosen such that $\angle BEK=\angle BAC$. Prove that $\angle KEC = \angle BAC$.
From what I see in my provisional sketch, the point M is the same point as K, since KAC divides the angle BAC in half, and thus for BAM to be the other half, M must be K. Then we choose an E on AK, and since we know that AK (or AM, it's the same) divides BAC in half, automatically BEK = KEC; and since we've selected E such that BEK = BAC, also KEC = BAC.
Is my thinking right? Cause frankly - I don't think so..
2. I don't think it's wrong... Seems good to me.
3. Hmm.. After second thought I see a problem in the first sentence of my solution. For M to be the same as K, AK would have to be the bisector of BAC and the median of BC at the same time which it rather isn't, is it?
4. Hm.. yes, it's only the case if triangle ABC is isosceles with sides AB and AC equal. I had drawn an equilateral triangle as a sketch and hence it automatically worked...
I tried another drawing and I can't find a simple way to prove this.
5. Hey, Opalg solved it here. It's the exact same problem.
I don't think I would have been able to do this. Really interesting problem
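Not a proof, but for anyone who wants to sanity-check the claim numerically before hunting for one, here is a short script. The particular triangle is an arbitrary acute one, and K is placed using the standard fact that the condition ∠BAM = ∠KAC makes AK the symmedian from A, so BK/KC = AB²/AC²:

```python
import math

def ang(P, Q, R):
    """Angle at vertex Q in the path P-Q-R."""
    ux, uy = P[0] - Q[0], P[1] - Q[1]
    vx, vy = R[0] - Q[0], R[1] - Q[1]
    c = (ux * vx + uy * vy) / (math.hypot(ux, uy) * math.hypot(vx, vy))
    return math.acos(max(-1.0, min(1.0, c)))

A, B, C = (0.3, 2.1), (-1.0, 0.0), (2.0, 0.0)  # an arbitrary acute triangle

# K on BC with angle BAM = angle KAC (symmedian): BK/KC = AB^2/AC^2
AB2 = (A[0] - B[0])**2 + (A[1] - B[1])**2
AC2 = (A[0] - C[0])**2 + (A[1] - C[1])**2
s = AB2 / (AB2 + AC2)
K = (B[0] + s * (C[0] - B[0]), B[1] + s * (C[1] - B[1]))

target = ang(B, A, C)
# Find E = A + t(K - A) with angle BEK = angle BAC; the angle increases
# monotonically as E slides from A toward K, so bisection in t works.
lo, hi = 1e-9, 1.0 - 1e-9
for _ in range(200):
    t = (lo + hi) / 2
    E = (A[0] + t * (K[0] - A[0]), A[1] + t * (K[1] - A[1]))
    if ang(B, E, K) < target:
        lo = t
    else:
        hi = t

print(abs(ang(B, E, K) - target))  # ~0 by construction
print(abs(ang(K, E, C) - target))  # ~0 as well: the claimed equality
```

Rerunning with other acute triangles (just change A, B, C) gives the same near-zero residuals, consistent with the statement being a theorem rather than a coincidence.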
http://physics.stackexchange.com/questions/29016/cyclists-electrical-tingling-under-power-lines/29064
# Cyclist's electrical tingling under power lines
It's been happening to me for years. I finally decided to ask users who are better with "practical physics" after I was told that my experience – which I am going to describe momentarily – proves that I am a diviner, a psychic, a "sensibil" as we call it. The right explanation clearly needs some electrodynamics, although it's "everyday electrodynamics", and theoretical physicists are not trained to quickly answer such questions, even though each of us has probably solved many exercises that depend on the same principles.
When I am biking under the power lines – which probably have a high voltage in them – I feel clear tingling shocks near my buttocks and related parts of the body for a second or so when I am under a critical point of the power lines. It is a strong feeling, not a marginal one: it feels like a dozen ants stinging me at the same moment. It seems almost clear that some currents are running through my skin at 50 Hz. I would like to know an estimate (and the calculation or justification) of the voltage, currents etc. that are going through my skin, and some comparison with the shock one gets from touching a power outlet.
Now,
• my bike that makes this effect particularly strong is a mountain bike, Merida;
• the speed is about 20 km/h and the velocity is perpendicular to the direction of the current in the power line;
• the seat has a hole in it and there is some metal – probably a conducting one – just a few centimeters away from the center of my buttocks. It's plausible that I am in touch with the metal – or near touch;
• my skin is kind of sweating during these events and the liquid isn't pure water so it's probably much more conductive than pure water;
• the temperature was 22 °C today, the humidity around 35%, clear skies, 10 km/h wind;
• the power lines may be between 22 kV and 1 MV and at 50 Hz, the altitude is tens of meters but I don't really know exactly.
What kind of approximation for the electromagnetic waves are relevant? What is the strength? How high currents one needs?
Does one need some amplification from interference etc. (special places) to make the effect detectable? (I only remember experiencing this effect at two places around Pilsen; the most frequent place where I feel it is near Druztová, Greater Pilsen, Czechia.)
Is the motion of the wheels or even its frequency important? Is there some resonance?
Does the hole in the seat and the metal play any role? In case you think that I am crazy: other people experience the effect (although with different body parts), see e.g. here and here. This PDF file seems to suggest that metals and electromagnetic induction are essential for the effect, but the presentation looks neither particularly comprehensive nor impartial enough.
An extra blog discussion on this topic is here:
http://motls.blogspot.com/2012/05/electric-shocks-under-high-voltage.html
-
What do you mean " the velocity is perpendicular to the current"? That you are crossing the line under the high tension line? Up to that point I thought you were biking in parallel. – anna v May 27 '12 at 4:40
Had this happened to me, I would have a) taken a small lamp, of the kind used in torches, and made a circuit from a metal part to my insulated hand and watched whether there was light when crossing. If yes, I would take my voltmeter on AC index and watch again between the metal part and my body to measure the current. If no light or current was seen I would presume that the effect was on the physiology of my body ( water mainly) and read up on that. Too many unknown parameters in the problem and it has to be sliced down. – anna v May 27 '12 at 4:46
p.s. does the effect stop if you stop at that point? – anna v May 27 '12 at 5:01
Have a look at this stopgeek.com/richard-boxs-light-field.html . also youtube.com/watch?v=cXhZvyGtMrk – anna v May 27 '12 at 7:37
Yes, Anna, it appears when I am crossing but I suspect that if I were riding in parallel on the right place, it could be the same effect. And maybe not. Maybe there's some current running around the bike and the polarizations matter. ... I should make an experiment, like stopping at that point. But it has happened to me about 5 times in my life - although it's pretty safely guaranteed and regular with that bike - and it's unpleasant enough a feeling that I just don't want to repeat it again! But maybe i will do the sacrifice at some point haha. – Luboš Motl May 27 '12 at 18:08
## 7 Answers
First, Field strength.
This calculation is strictly an electric potential calculation; radiation and induction are safely ignored at 50Hz.
For a 200kV transmission line 20m above ground, the max electric field at ground level is about 1.2 kV/m. This number is reduced from the naive 200kV/20m=10 kV/m calculation by two effects:
1) The ~1/r variation in the electric field (reduction to 3 kV/m). I used the method of images to calculate this field, with a 10 cm conductor diameter to keep the peak field below the 1MV/m breakdown field.
2) Cancellation from the other two power lines in this 3-phase system, which are at +/-120 degree electrical phases with respect to the first, and are physically offset in a horizontal line per the photo. I estimated 7m spacings between adjacent lines. The maximum E-field actually occurs roughly twice as far out as the outermost line; the field under the center conductor is lower.
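The field-strength estimate above can be reproduced with a simplified image-charge sketch. The figures are the ones assumed in this answer (200 kV on each conductor, 20 m height, 7 m spacing, 10 cm conductor diameter); inter-phase capacitive coupling is ignored, so the numbers land within roughly 10% of those quoted:

```python
import cmath, math

V = 200e3                                   # assumed conductor voltage, V
h = 20.0                                    # line height, m
a = 0.05                                    # conductor radius, m (10 cm diameter)
xs = [-7.0, 0.0, 7.0]                       # horizontal positions of the phases
phases = [0.0, -2 * math.pi / 3, 2 * math.pi / 3]

# Line charge per conductor from V = lam * ln(2h/a), with lam = lambda/(2*pi*eps0)
lam = V / math.log(2 * h / a)

def E_ground(x):
    """|vertical field phasor| at ground level: conductor plus its image charge."""
    tot = 0j
    for x0, ph in zip(xs, phases):
        tot += cmath.exp(1j * ph) * lam * 2 * h / ((x - x0) ** 2 + h ** 2)
    return abs(tot)

E_center = E_ground(0.0)                           # under the middle conductor
E_max = max(E_ground(0.1 * i) for i in range(-500, 501))   # scan +/- 50 m
print(f"center: {E_center:.0f} V/m, max: {E_max:.0f} V/m")
```

A single 200 kV conductor alone gives about 3 kV/m at ground level; with all three phases the maximum drops to roughly 1.1 kV/m and occurs well off to the side of the outermost conductor, matching the description above.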
Next, Can you feel it?
1) The human body circuit model for electrostatic discharge is 100pF+1.5kohm; that's a gross simplification but better than nothing. If one imagined a 2m high network, the applied voltage results in a 50Hz current of about 70uA ($C \omega V$). Very small.
2) There will be an AC voltage difference between the (insulated) human and (insulated) bicycle. A 1m vertical separation between their centers of gravity would yield roughly 1200V. This voltage is rather small compared to some car-door-type static discharges, but it would still be sufficient to break down a short air gap (but not a couple cm), and would repeat at 100Hz. I imagine it would be noticeable in a sensitive part of the anatomy.
If the transmission voltage is actually 400 kV, all the field strengths and voltages would of course double.
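A short sketch of the two estimates above, using the figures quoted in this answer (they are assumptions, not measurements):

```python
import math

E = 1.2e3        # estimated ground-level field under the line, V/m
h = 2.0          # height spanned by the rider, m
C = 100e-12      # ESD human-body-model capacitance, F
f = 50.0         # mains frequency, Hz

V = E * h                        # voltage applied across the 2 m "network", ~2.4 kV
I = C * 2 * math.pi * f * V      # displacement current, I = C * omega * V
print(f"{I * 1e6:.0f} uA")       # on the order of the quoted ~70 uA

V_gap = E * 1.0                  # ~1.2 kV for a 1 m rider/bike separation
```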
-
Excellent, you got a +1 already for your first sentence, a very useful first step... Can a millimeter air gap be replaced by a centimeter of slightly wet shorts? Or is the air gap needed literally, i.e. in between body hair? I am still not getting why 700 V is safe. Why is it unsafe to touch 230 V or 120 V power outlets then? – Luboš Motl May 28 '12 at 8:15
@Luboš Motl, 1) With a closed circuit (bridged by an ionic conductive liquid), you get continuous current, which I don't think would be as noticeable as a sudden spark across an air gap. 2) The difference from a power outlet is the energy (or current, or power) available. Again a gross simplification, but the 100pF human capacitor will only supply 10mW under these conditions, while, once current from a wall outlet has started to flow, it will supply $>10A$. I think about 10mA continuously along the right path through the body can be fatal. – Art Brown May 28 '12 at 17:06
Oh, I see, so it's really about a finite charge in a "capacitor" that limits how much I can get out of it... Thanks. I will actually vote your answer as the "real answer to my question" although there have obviously been many other, sometimes even relevant ideas here... – Luboš Motl May 28 '12 at 17:31
@Luboš Motl, Thanks, but I'm frankly puzzled about the actual sensation. I believe you feel it, but am not sure my answer is sufficient to explain it. Maybe someone else will have an idea... I found an error in my superposition of the other phases that raises the field strength to 800 V/m, and will update the answer accordingly. – Art Brown May 28 '12 at 17:37
... and it turns out the peak field actually occurs farther away from the centerline than the outside conductor. One last update... – Art Brown May 29 '12 at 1:36
If the power line is 20 m high and has a voltage of 1 MV, then the electric field near the ground, very roughly, is on the order of 1000 kV / 30 m ≈ 30,000 V/m (the numbers are very approximate and the field is complicated because it is a wire-near-a-plate scenario, and the wire diameter is unknown but not too small, else the air would break down, i.e. spark over, near the wire).
You get charged to several tens of kilovolts relative to the bike, then you discharge through clothing, again and again: if the line is AC, because the voltage is alternating; if the line is DC, because as you're moving the field changes magnitude.
The fluorescent lights light up under power lines; the field is this strong.
http://www.doobybrain.com/2008/02/03/electromagnetic-fields-cause-fluorescent-bulbs-to-glow/
As regards the current: since it is pulsed (you get charged, then rapidly discharge through the air gap), it can be strong enough to be felt even if the average current is extremely small. The pulse current is the same as when you get zapped taking off clothing, or the like.
-
Right, thanks, +1, exactly, those 30 kV per meter which is huge even if one only gets a small remnant of it is something I am thinking about. Surprising that not too many people get killed in various situations under the power lines... – Luboš Motl May 27 '12 at 19:31
@LubošMotl Hi Lubos. I think that just kilovolts are not enough to kill you, there has to be enough current. I suspect that the 1/r^2 drop of radiated power is enough to avoid deadly currents, it must be the reason the lines are so high. – anna v May 27 '12 at 20:34
– John McVirgo May 27 '12 at 20:41
Luboš Motl: There would not be enough zap. The physiological zap is a complicated function of pulse duration, current, and voltage. In this scenario the total charge that goes through the body on each zap is no bigger than if you get zapped taking off a sweater or stroking a cat (which can also generate several kilovolts), and the pulse duration is so short and energy so low that neither the current nor the voltage are relevant, but the total charge (integral of current by time). – Dmytry May 27 '12 at 21:41
When calculating the volts per meter of the static field, it's important to assume that the bicycle is conductive (presumably an aluminum frame).
Without the bicyclist, one would use image charges to calculate the electric field at the bicycle. The three phases should partially cancel, and Art Brown's calculation seems reasonable, around 1200 volts per meter.
By the way, there's an additional DC field; the atmosphere (on a fair-weather day) carries a voltage gradient of about 60 to 100 volts per meter in summer and 300 to 500 volts per meter in winter. On days when this effect is large it may be possible to see more of an effect.
When you insert a vertical conductor into the electric field of 1200 volts per meter, the electric field near the ends of the conductor are much larger. To estimate the effect you need to guess the radius of the top end of the conductor. This depends on the seat construction; if the seat itself is metal then its radius is on the order of 0.1 meter.
To first order, a vertical pole placed in an electric field will end up with charges at its two ends. For a bicycle frame of height 1 m, the charges will be separated by about 1 m. Of course the charge required to cancel the background potential depends on the radii of the ends of the pole. (An infinitely sharp pole would create an infinite electric field, before taking into account dielectric breakdown of the air.)
To compute the electric field due to the bicycle frame, let's first say that the frame is 1m in height. Thus the two ends of the frame will have to carry voltages of +-600 volts with respect to the field produced by the overhead wires.
The actual electric field depends on how sharp the conductor is. Very sharp conductors have very large electric fields. Let's suppose that the bicycle seat has an effective radius of around 0.1 meters. What is the electric field at the seat?
Suppose that you have a point charge and that it produces a voltage of 600 volts at a radius of 0.1 meters, with 0 volts at infinity. What is the electric field at 0.1 meters? This is a question about the relationship between charge, potential and field. Some equations: $$V = \frac{1}{4\pi\epsilon_0}\frac{q}{r}$$ $$E = \frac{1}{4\pi\epsilon_0}\frac{q}{r^2}$$ From these, we see that the electric field is the potential multiplied by a factor of 1/r = 10/meter. Thus the field in the immediate vicinity of the bicycle seat is around $$600\;\; \textrm{volts} \times 10/\textrm{meter} = 6000 \;\;\textrm{volts / meter}.$$
It wouldn't surprise me if a sensitive part of the human anatomy could detect this electric field; it amounts to 60 volts per cm.
Most people have verified that if you touch your tongue to a 9 volt battery you can feel the shock. Now imagine a 50 volt battery jammed into your perspiring nether regions. This might very well feel like a lot of ants in your pants.
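The chain of estimates above, as a small sketch (all input values are the guesses made in this answer, not measurements):

```python
E_bg = 1200.0    # background field from the line, V/m
h = 1.0          # height of the conducting frame, m
r = 0.1          # effective radius of the seat end, m

V_end = E_bg * h / 2      # each end of the frame sits ~ +/-600 V off the local field
E_seat = V_end / r        # point-charge scaling E = V/r near a blob of radius r
print(E_seat)             # about 6000 V/m, i.e. 60 V/cm
```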
-
My approach would be to treat yourself like the plate of a parallel-plate capacitor. Make the following assumptions:
eps = 9e-12
A = surface area of you + bike ~ 1 square meter
d = distance to power line ~20 meters
V = 1000 kV
Then the current is I = C*dV/dt = (eps*A/d)*(2*pi*50)*V = 140 microamps.
Now is it really possible to feel 140uA? According to the OSHA website, 1mA is the minimum current you can feel from your hand to your foot (http://www.osha.gov/SLTC/etools/construction/electrical_incidents/eleccurrent.html). So 140uA isn't that far off, and maybe you can make some argument about the current density being higher where it's funneled through the seat. More likely, your nerves are more sensitive in some areas of the body than others.
I highly doubt that at biking velocities there is any significant current from motion through the magnetic fields of the lines.
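The estimate above in runnable form (the area, distance and voltage are the same guesses as in the text):

```python
import math

eps = 9e-12      # permittivity of free space, F/m (rounded, as in the answer)
A = 1.0          # coupling area of rider + bike, m^2
d = 20.0         # distance to the power line, m
V = 1.0e6        # assumed 1000 kV line voltage
f = 50.0         # mains frequency, Hz

C = eps * A / d                   # parallel-plate capacitance, ~0.45 pF
I = C * 2 * math.pi * f * V       # amplitude of I = C * dV/dt
print(f"{I * 1e6:.0f} uA")        # about 140 uA
```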
-
what about treating the metal parts of the bike as an antenna and the body shorting it? – anna v May 27 '12 at 9:23
Dear anna, something like what you say has to be right. Can one estimate it? What is the voltage that may be in the antenna? What is the electric field in the electromagnetic wave? When one multiplies it by one meter, one has to get the voltage that may be attached to the body. I am pretty sure that Brian's estimate is smaller by many, many orders of magnitude. – Luboš Motl May 27 '12 at 17:09
– BrianC May 28 '12 at 2:49
I am not sure that the following is relevant, but maybe what you feel is caused by the action of electric field on the hair on your skin. I wrote elsewhere on this web-site about this effect: "the electric field polarizes, rather than charges, hair, and then acts on the resulting electric dipoles, judging by the formulas in: "Proceedings of the 2005 IEEE Engineering in Medicine and Biology 27th Annual Conference, Shanghai, China, September 1-4, 2005", p. 4266. "Analysis of Body Hair Movement in ELF Electric Field Exposure", H. O. Shimizu, K. Shimizu. According to the formulas, it is essential that the electric field is not uniform. The authors claim good agreement with experimental results." It is also possible that, as others wrote here, metal parts of the bike modify the electric field, enhancing the effect.
-
It's very interesting but from my basic school years, I became convinced that the effect of static electricity on hair is only relevant if the hair is dry etc. This sensation on the bike only occurs near the buttocks and groin area, I don't have so much hair to rely upon, and they're wet because I kind of sweat, anyway. So I don't believe the static electricity is really too relevant here. – Luboš Motl May 27 '12 at 18:48
I don't know. Maybe we are talking about different phenomena: you are talking about effects related to charging of hair, whereas there is no charging in the mechanism described in the quoted article. – akhmeteli May 27 '12 at 18:56
Oh, I see, so this could also depend on my having body hair? ;-) – Luboš Motl May 27 '12 at 19:20
Well, body hair is ubiquitous anyway :-) – akhmeteli May 27 '12 at 19:43
As Dmytry & BrianC said, you are spanning about 2 m of a field gradient of about 5e4 V/m.
What's more, most of you and the bike are practically shorting out that 10% of the drop, since you are either metal or brine. So what voltage there is drops across fairly thin insulators - tires & clothing.
The current might be in the range of 1e-6 amps, and if that were going through the salt water of your body, you might not feel it. But if it hits your skin as a spark, you probably will feel it.
-
Without calculating anything I can say that you are actually conducting electricity at 60 Hz; the amperage is too small to harm you because the resistance of your body, combined with that of the tires and the air, limits the current. The salt in your perspiration does increase conductivity, and the metal bike in an alternating field does have a voltage induced in it, much like a transformer. I have felt the same effects when working near 345 kV power lines and handling any metal object. If you held a metal pole in the air high enough on a wet day near a power line, it would kill you.
-
http://congres-math.univ-mlv.fr/malliavin_calculus_for_jump_processes_2010/abstracts/regularity_of_the_law
Regularity of the law
Regularities and logarithmic derivatives of densities for SDEs with jumps.
Atsushi Takeuchi (Osaka City University)
Consider stochastic differential equations with jumps, which include drift, diffusion, and jump terms. In this talk, we shall study the existence of a smooth density and its logarithmic derivatives with respect to the initial point of the equation, under some nice conditions on the coefficients and the Lévy measure. Our approach is based upon the Malliavin calculus on the Wiener-Poisson space introduced by Picard and Ishikawa-Kunita. The key tools are the fundamental inequalities for semimartingales, and the martingale representation theorem via the Kolmogorov backward equation for the integro-differential operator associated with the equation. The results obtained here reflect the effect not only of the diffusion term, but also of the jump term.
Dirichlet forms applied to Poisson measures : simplified construction and the double Fock space.
Nicolas Bouleau (Université Paris-Est - École des Ponts ParisTech)
We develop the Malliavin calculus for Poisson measures by acting on the jump sizes with general local Dirichlet forms. The construction that we present replaces the Friedrichs method by a Monte Carlo argument. The choice of the gradient as a marked point process yields a very simple structure in terms of chaos and Fock spaces. We give a collection of remarkable formulas, including the lent particle formulas for the gradient, the generator and the "divergence". We begin to study the celebrated functional inequalities in this framework.
Some applications of the Lent Particle Method.
Laurent Denis (Université d'Évry)
The lent particle method, introduced in the previous talk by N. Bouleau, gives rise to a new explicit calculus and permits the development of a Malliavin calculus on the Poisson space in a simple way. In this talk, we shall first construct Sobolev spaces, study functional inequalities (Khintchine and Meyer inequalities) and establish a criterion which ensures the existence of densities and the regularity of laws of Poisson functionals. Then we'll apply it to solutions of SDEs driven by a Poisson measure. We shall also consider non-classical examples, such as a new kind of (non-linear) subordination for which the space of marks (bottom space) is infinite-dimensional.
Integration by parts formula for SDE’s with jumps.
Vlad Bally (Université Paris-Est - Marne-la-Vallée)
We establish an integration by parts formula in an abstract framework in order to study the regularity of the law for processes solution of stochastic differential equations with jumps, including equations with discontinuous coefficients for which the Malliavin calculus developed by Bismut and Bichteler, Gravereaux and Jacod fails.
Regularization for the 2D Boltzmann equation.
Nicolas Fournier (Université Paris-Est Créteil)
We present a probabilistic interpretation of the Boltzmann equation in terms of a jumping S.D.E. Based on this, and on the Malliavin calculus for jump processes introduced by Bally-Clément, we prove that the solution enjoys some regularization properties.
Local Malliavin calculus for Lévy processes and applications.
Josep L. Solé (Universitat Autonoma de Barcelona)
We will develop a Malliavin calculus for Lévy processes based on a family of true derivative operators. The starting point is an extension to Lévy processes of the pioneering paper by Carlen and Pardoux for the standard Poisson process. Our extension includes the classical Malliavin derivative for Gaussian processes. We obtain a sufficient condition for the absolute continuity of functionals of the Lévy process. As an application, we will analyze the absolute continuity of the laws of the solutions of some stochastic differential equations driven by Lévy processes. The talk is based on joint work with Jorge Leon (Cinvestav), Josep Vives (UB) and Frederic Utzet (UAB).
On an absolute continuity criterion for Ornstein-Uhlenbeck processes
Thomas Simon (Université Lille 1)
We consider multidimensional Ornstein-Uhlenbeck processes with Lévy noise. Under a non-singularity assumption on the drift term, we give a necessary and sufficient condition for the absolute continuity of the laws at a fixed time. This condition is not time-dependent, contrary to the situation for infinitely divisible laws in general. It is expressed as a geometric condition relating the drift, the Lévy measure, and the Brownian component. The proof relies on a suitable stratification method and basic control theory.
Convergence in variation for the laws of Poisson functionals under weak regularity assumptions.
Alexey Kulik (University of Kiev)
The talk is devoted to the criteria, proved in [1], for a sequence of laws of $\Re^m$-valued random vectors $\xi_n, n\geq 1$, restricted to a non-trivial part of the probability space, to converge in variation. These criteria are formulated in the terms of a group ${\mathcal G}$ of admissible transformations of the basic probability space, and demand in particular that every component of $\xi_n$ is either $L_p$-differentiable ($p>m$) or a.s. differentiable w.r.t. ${\mathcal G}$. The difference between $L_p$- and a.s.- based criteria is essential; the latter one is more technically complicated and contains a specific assumption on the family $\{\xi_n\}$ to have uniformly dominated increments w.r.t. ${\mathcal G}$. On the other hand, the a.s.-criterion, unlike the $L_p$- one, typically leads to weak regularity assumptions on a model.
To demonstrate that difference and the intrinsic further applications, we consider a Markov process $X$ solution to an Itô-Lévy type SDE driven by a Poisson point measure, and show that the convergence-in-variation criteria, combined with natural Lyapunov-type assumptions, provide an efficient tool for proving the following $\phi$-ergodic rates for the transition probability $P_t$ of the process $X$: $$\int_{\Re^m}\phi(y)\,|P_t(x,\cdot)-\pi|(dy)\leq r(t)\psi(x), \quad x\in \Re^m, \quad t\geq 0$$
([2]; here $\pi$ is the invariant distribution). In that context, the $L_p$-criterion is applicable when the coefficients of the initial SDE are smooth enough and either the Lévy measure $\nu$ is absolutely continuous or the jump perturbations are additive. Otherwise, the $L_p$-criterion fails and one should apply the a.s.-criterion. Another field of applications is provided by the approach developed in [3], where a spectral gap property for the $L_2$-generator of the process $X$ was established in terms of explicit $\phi$-ergodic rates both for the process $X$ and its dual. In that context, the a.s.-criterion is of essential importance because, even in the simplest case where $X$ is a Lévy-driven Ornstein-Uhlenbeck process, its dual process follows an Itô-Lévy type SDE with discontinuous coefficients and hence cannot be treated in terms of the $L_p$-criterion.
References:
[1] A.M. Kulik, Absolute continuity and convergence in variation for distributions of a functionals of Poisson point measure, arXiv:0803.2389 (2008).
[2] A.M. Kulik, Exponential ergodicity of the solutions to SDEs with a jump noise. Stoch. Proc. Appl. 119, 602–632 (2009).
[3] A.M. Kulik, Asymptotic and spectral properties of exponentially $\phi$-ergodic Markov processes, arXiv:0911.5473 (2009).
http://nrich.maths.org/152/note
### Writing Digits
Lee was writing all the counting numbers from 1 to 20. She stopped for a rest after writing seventeen digits. What was the last number she wrote?
### What Number
I am less than 25. My ones digit is twice my tens digit. My digits add up to an even number.
### One of Thirty-six
Can you find the chosen number from the grid using the clues?
# 6 Beads
## 6 Beads
If you put three beads onto a tens/units abacus you could make the numbers $3$, $30$, $12$ or $21$.
Explore the numbers you can make using six beads.
### Why do this problem?
This problem is a good, yet simple, activity that can get pupils thinking hard about numerals, numbers and place value. It also provides a context in which to discuss different ways of recording.
### Possible approach
You could start the children off by showing them one of the examples for three beads and then asking for other ways the beads could be arranged, reading the numbers together. You may want to use a basic drawing of the abacus on an interactive whiteboard and have 'beads' to drag into place. At this stage, you could encourage learners to try and explain how we know we have all the different ways.
After this the children could work in pairs on the six beads problem so that they are able to talk through their ideas with a partner. Have available a range of equipment which they could use, but allow them to make their own choice. You may have, for example, a real abacus, counters, paper, beads, coloured pencils/pens etc. They could use digit cards to make the number which is represented on the abacus.
In the plenary, children could compare the ways in which they have recorded their findings and you could discuss the advantages of each. You could then talk about which recording methods would be best if we wanted to be sure that we had all the ways of using six beads. At this point, you could share any that have used such a system, or you could demonstrate your own way.
### Key questions
What can you tell me about the numbers you've found?
Are there any other ways you can arrange those beads?
How can you tell if you have them all?
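For the teacher: a quick enumeration (a sketch, not part of the pupil task) confirming the complete set of six-bead numbers:

```python
# Put `tens` beads on the tens spike and the remaining 6 - tens on the units
numbers = [10 * tens + (6 - tens) for tens in range(7)]
print(numbers)   # [6, 15, 24, 33, 42, 51, 60]
```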
### Possible extension
Learners could increase the number of beads or they could be asked to investigate what would happen if there were three columns: units, tens and hundreds.
### Possible support
Some children may find it easier to use four beads, rather than going straight on to six. Using practical apparatus, such as counters, is essential for those having difficulties in understanding the problem.
http://mathhelpforum.com/advanced-algebra/210871-proving-two-subspaces-direct-sum.html
1. ## proving that two subspaces are direct sum
Let $W,U\subset V$ be subspaces.
i need to prove that if $dim(V)=dim(U)+dim(W)$ and $W\cap U=\left \{ \vec0 \right \}$ then $W\oplus U=V$.
here's the thing.
i proved it, but without considering $W\cap U=\left \{ \vec0 \right \}$.
(i proved that if $dim(V)=dim(U)+dim(W)$ then $W\oplus U=V$)
and that obviously can't be; I mean, there must be something I'm doing wrong, I just can't figure out what.
here's what i did:
to prove that V is the direct sum of U and W, I just need to show that every $v\in V$ can be written in exactly one way as a sum of some $u\in U$ and some $w\in W$.
so:
Let $\left \{ u_1, u_2... u_n \right \}$ be the basis for U.
Let $\left \{ w_1, w_2... w_m \right \}$ be the basis for W.
now, since $dim(V)=dim(U)+dim(W)$, i can conclude that the number of vectors in V's basis must be $n+m$.
so, the basis for V can now be:
$\left \{ u_1, u_2... u_n,w_1, w_2... w_m \right \}$
now, let's presume v can be presented in two different ways, and show that it's actually the same presentation, so:
let's presume there are scalars $a_1,a_2... a_{n+m}\in \mathbb{R}$, not all 0, and $b_1,b_2... b_{n+m}\in \mathbb{R}$, not all 0, such that:
$a_1u_1+a_2u_2+...+a_nu_n+a_{n+1}w_1+a_{n+2}w_2+...+a_{n+m}w_m=v$
$b_1u_1+b_2u_2+...+b_nu_n+b_{n+1}w_1+b_{n+2}w_2+...+b_{n+m}w_m=v$
if we subtract one from the other we'll get:
$(a_1-b_1)u_1+(a_2-b_2)u_2+...+(a_{n}-b_{n})u_n+(a_{n+1}-b_{n+1})w_1+(a_{n+2}-b_{n+2})w_2+...+(a_{n+m}-b_{n+m})w_m=0$
and since $\left \{ u_1, u_2... u_n,w_1, w_2... w_m \right \}$ are linearly independent (a basis for V), then $a_1=b_1$, $a_2=b_2$ and so on...
so that's all.
my question now is: where exactly does $W\cap U=\left \{ \vec0 \right \}$ fit in? why do i even need it here?
it's quite a task to use mathematical terminology when it's not your native language, so I hope I used it right and everything is clear enough...
thanks in advance!
2. ## Re: proving that two subspaces are direct sum
Hi Stormey,
The argument uses $U\cap W=\{0\}$ when you conclude that $\{u_{1},\ldots, u_{n},w_{1},\ldots,w_{m}\}$ is a basis for $V.$ Just because $\{u_{1},\ldots, u_{n}\}$ and $\{w_{1},\ldots, w_{m}\}$ are linearly independent sets of vectors on their own does not always mean $\{u_{1},\ldots, u_{n},w_{1},\ldots, w_{m}\}$ must be a linearly independent collection too. For example, the sets $\{[1,0,0], [0,1,0]\}$ and $\{[0,1,0], [0,0,1]\}$ are each linearly independent sets of vectors on their own, but $\{[1,0,0], [0,1,0], [0,1,0], [0,0,1]\}$ is not a linearly independent set of vectors.
Does this answer your question? Let me know if anything is unclear. Good luck!
3. ## Re: proving that two subspaces are direct sum
Hi GJA, and thanks for your help.
I'm aware that if two subspaces' bases are linearly independent, that doesn't necessarily mean their union is also linearly independent,
but I can draw this conclusion (that these two together are linearly independent and form a basis for V) from $dim(V)=dim(U)+dim(W)$; I don't need $W\cap U=\left \{ \vec0 \right \}$ for that.
So actually, my question is:
if $dim(V)=dim(U)+dim(W)$, why doesn't it mean that U's and W's bases are *disjoint sets?
*of course, disjoint except for their common $\vec0$, but that goes without saying, since U and W are subspaces.
4. ## Re: proving that two subspaces are direct sum
Hi Stormey,
In the previous post you said
I'm aware that if two subspaces' bases are linearly independent, that doesn't necessarily mean their union is also linearly independent,
but I can draw this conclusion (that these two are linearly independent and form a basis for V) from $dim(V)= dim(U)+dim(W)$
However, knowing $dim(V) = dim(U) + dim(W)$ does not imply the bases of $U$ and $W$ are linearly independent. For example, take $U=W=span([1,0])$ and $V=\mathbb{R}^{2}.$ Then $dim(V) = dim(U)+dim(W)$ holds, but the bases for $U$ and $W$ are not linearly independent.
Does this clear things up? Good luck!
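GJA's counterexample is easy to check mechanically; here is a quick sketch in Python using the 2×2 determinant as the independence test (my own illustration, not from the thread):

```python
def det2(u, w):
    """u, w in R^2 are linearly independent iff this determinant is nonzero."""
    return u[0] * w[1] - u[1] * w[0]

u = [1, 0]  # basis vector of U
w = [1, 0]  # basis vector of W = U (GJA's counterexample)

# dim(U) + dim(W) = 1 + 1 = 2 = dim(R^2), yet {u, w} is dependent:
assert det2(u, w) == 0

# with U ∩ W = {0} instead, e.g. W = span([0, 1]), independence holds:
assert det2([1, 0], [0, 1]) != 0
```

So the dimension count alone cannot rule out the degenerate case; the intersection hypothesis is what does.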
5. ## Re: proving that two subspaces are direct sum
Brilliant, thanks man!
I forgot that U can be equal to W.
It all makes sense now.
http://crypto.stackexchange.com/questions/2290/predicting-prng-given-some-of-its-previous-output/2297
# Predicting PRNG given some of its previous output
I have a question about PRNGs, and this is my very first experience with them. I have the following generator that takes a 56-bit seed 'p' during initialization and then chooses both X and Y randomly from the interval [0, p].
Every time it is called, it returns the output of the next function:
```
def next(self):
    self.x = (2*self.x + 5) % self.p
    self.y = (3*self.y + 7) % self.p
    return (self.x ^ self.y)
```
I have the first 9 outputs of the generator and I need to predict the next output.
```
prng_output = [210205973, 22795300, 58776750, 121262470,
               264731963, 140842553, 242590528, 195244728, 86752752]
```
I have thought of a solution, which turned out to be correct but VERY slow, so apparently there must be another way to solve it.
My solution was to constrain the range of values for p according to the given output. Also, generate all values for X, and for every value of X, calculate the first 9 values according to the previous equation and XOR them with the output to get Y. Finally, check if the sequence of Ys is valid or not (also according to the previous equation).
After some reading I learned that a reduced state (a reduced number of bits) of the generator can be determined.
So my questions are: how do I crack the given generator? And what is the 'reduced state' of the generator, and how do I use it?
If someone can help me with some details or a 'useful' link to read to understand more, I would be thankful.
Thanks,
Terminology nitpick: your $p$ is not a "seed", it's a parameter of the generator. Usually one would call it a "modulus", since that's how it's used in your algorithm. – Ilmari Karonen Apr 6 '12 at 14:51
Thanks, for the correction. – Samer Meggaly Apr 6 '12 at 22:10
The question is less than accurate: it turns out that p is 57-bit, not 56-bit (although that's not apparent from the known output, which happens to remain within 56 bits until the 28th output). Also, if the first `self.x` and `self.y` are chosen randomly in [0..p] rather than [0..p-1], that's an irregularity. – fgrieu Apr 7 '12 at 6:32
No, this was a mistake of mine: x and y are chosen randomly in [0..p-1], and the problem stated that p is 56 bits. – Samer Meggaly Apr 7 '12 at 6:43
## 2 Answers
I'm not familiar with the terminology "reduced state", so I can't address that half of the question.
However, this particular PRNG makes it easy to reconstruct the internal state; we note that the state function can be rewritten as:
```
x := 2*x + 5 - kx * p
y := 3*y + 7 - ky * p
```
for some small integers $k_x$, $k_y$ (in fact, $0 \le k_x \le 2$ and $0 \le k_y \le 3$). Once we do that rewrite, we notice that the lower bits no longer depend on the higher bits.
So, what we can do is:
• Iterate over the possible $k_x$, $k_y$ values used to generate the second, third and fourth outputs; a total of 1728 possibilities.
• For each set, we go through the possibilities for bit 0 of p and the initial x (note: because the initial y can be immediately deduced from the x and the first output, we don't have to iterate through that)
• Check to see if there's a combination that gives the bit 0's on the second, third and fourth outputs that we have; if there is, then start looking through the various possibilities for bit 1 of p and initial x.
When we manage to get through all 29 bits, and get all the bits of the output we observed, then we have the answer (and in fact, continuing to generate outputs will, in this case, continue to match the listed outputs).
Going through the above procedure gives us the answers p = 295075153, x0 = 89059908, y0 = 164204369, next output = 231886864
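The recovered values are easy to verify by re-running the generator from the question, treating the recovered x0, y0 as the state at the time of the first output:

```python
class Prng:
    """The generator from the question."""
    def __init__(self, p, x, y):
        self.p, self.x, self.y = p, x, y

    def next(self):
        self.x = (2 * self.x + 5) % self.p
        self.y = (3 * self.y + 7) % self.p
        return self.x ^ self.y

# poncho's recovered parameters
g = Prng(295075153, 89059908, 164204369)

# first output is the initial state's XOR, then step 9 more times
out = [g.x ^ g.y] + [g.next() for _ in range(9)]

known = [210205973, 22795300, 58776750, 121262470,
         264731963, 140842553, 242590528, 195244728, 86752752]
assert out[:9] == known          # reproduces all nine observed outputs
assert out[9] == 231886864       # the predicted 10th output
```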
Nice; Poncho's solution beats my explicit one, and convinces me that a SAT solver would crunch the problem. – fgrieu Apr 6 '12 at 20:14
First, thanks a lot for the help. Second, I still have some points that I can't understand... how did you determine the ranges for Kx and Ky? Also, I'm not sure whether I got the second step right: do you mean that for all values of X in [0, 2^56] and for all values of bit-0 of p in {0, 1}, we find 3 pairs of (Kx, Ky) that will generate the 2nd, 3rd and 4th outputs? – Samer Meggaly Apr 6 '12 at 22:01
@SamerMeggaly: well, because $0 \le x_0 < p$, then we know that $0 \le 2*x_0 + 5 < 3p$, and so doing the $\bmod$ reduction will cut out between 0 and 2 multiples of p (and similarly on the y side). As for step two, we start by iterating through the various possibilities of bit 0 of p and bit 0 of $x_0$, and once we've determined settings that works, we start on bit 1 of both, and work our ways up until we've recovered the entire value (or decided that the particular $k_x, k_y$ values weren't the right ones) – poncho Apr 6 '12 at 22:17
The generator can be re-stated as having a key $(p,x_0,y_0)$, the recurrences $x_{j+1}=(2\cdot x_j+5)\bmod p$ and $y_{j+1}=(3\cdot y_j+7)\bmod p$, and the output $r_j=x_j\oplus y_j$ known for $j\in\{1\dots 9\}$.
We define $(u_j,v_j)$ such that $x_{j+1}=2\cdot x_j+5-u_j\cdot p$ and $y_{j+1}=3\cdot y_j+7-v_j\cdot p$. Notice that $(u_j,v_j)$ can take only 12 values. At the beginning or the sequence, or if $p$ was chosen such that the Linear Congruential Generators $x$ and $y$ are maximal-length, $(u_j,v_j)=(0,0)$ has odds only slightly lower than $1/6$. It is reasonable to hope that $(u_j,v_j)=(0,0)$ occurs for some $j\in\{1\dots 8\}$. If so, we know both $x_j\oplus y_j$ and $(2⋅x_j+5)\oplus (3\cdot y_j+7)$, as these are $r_j$ and $r_{j+1}$. There can be at most one solution $(x_j,y_j)$ for that, and it can be found simply by determining bits of $y_j$ from right to left. Some of these solutions can be eliminated, for they lead to $x_j$ or $y_j$ too big for $(u_j,v_j)=(0,0)$ to hold.
Then, for many values of $(u_{j+1},v_{j+1})$, there will be few possible values of $p$ compatible with $(x_j, y_j, r_{j+2})$, which are known unless $j=8$. and these candidates for $p$ can be found reasonably efficiently, again by finding bits from right to left. The rest is pesky details. Among these, the given that $p$ is 56-bit turns out to be wrong.
Another option is to encode the problem in the formalism of boolean satisfiability, and throw that to some SAT solver, like theses ones. This will work fine even if we have only 4 consecutive output values, none of which with $(u_j,v_j)=(0,0)$.
Thanks @fgrieu, I have two points that I tried to figure out, but couldn't. First, how did you limit the range of p to [2^55, 2^56]? Second, the same question for Xj and Yj. – Samer Meggaly Apr 6 '12 at 22:09
@SamerMeggaly: By definition of the remainder, what is the range for $(2\cdot x_j+5)\bmod p$? Assuming $p$ known, what's the range for $x_j$ when $j>0$?. Same for $y_j$. What's the highest known $r_j$? Consider the high bit of that: how was it formed? What does that tell us on the minimum value of $x_j$ or $y_j$? What does that tell us on the minimum value of $p$? Does the problem statement give us a maximum value for $p$? If it did not, could we make a plausible conjecture, and at what odds? – fgrieu Apr 7 '12 at 5:53
I know that the range for (2x + 5) mod p is [0..p-1], but wouldn't that make x < (p-5)/2, that is, (2^56-5)/2? And the maximum value for p would be (2^56-1). Please correct me if I'm getting anything wrong. – Samer Meggaly Apr 7 '12 at 6:46
We know $x_j<p$, including for $j=0$, given your recent comment. But contrary to the question's statement, $p$ is NOT 56-bit, it actually happens to be slightly above $2^{56}$ (that does not show until the 28th output). Ignoring the "56-bit" fragment of the statement, given that the first 9 outputs are within 56-bits, and assuming $(x_j,y_j)$ are random in $[0\dots p-1]$, odds are low that $p>2^{57}$, and we can confidently assume $p\le 2^{57}$. Thus $u_j=0\Rightarrow 2⋅x_j+5<2^{57}\Rightarrow x_j\in[0\dots 2^{56}-3]$; similarly $v_j=0\Rightarrow y_j\in[0\dots(2^{57}-8)/3]$. – fgrieu Apr 7 '12 at 8:24
http://physics.stackexchange.com/questions/51213/how-would-it-be-to-look-at-the-sky-if-the-earth-were-near-the-edge-of-the-univer?answertab=votes
# How would it be to look at the sky if the earth were near the edge of the universe?
By looking at this picture:
http://earthspacecircle.blogspot.com/2013/01/earths-location-in-universe.html
In that picture, the Earth is near the center of the universe. I've read that the universe looks the same no matter where the observer is located: it extends the same distance in every direction.
So I understand that for general relativity the universe needs to be homogeneous and isotropic, so it will look the same no matter where I am.
But what if I'm on one of the planets near the right or left of the image? If I drew the same picture of the universe from my perspective, would I also be located at the center? If that's not the case (if I'm actually near an edge), then part of my sky would be completely dark, and the sky in that direction wouldn't be isotropic.
Earth is by definition the center of the observable universe. The observable universe is the region of space such that light from the beginning of the universe can reach the observer, i.e. Earth. – Alex Becker Jan 14 at 19:30
## 1 Answer
The universe has no edge so to speak. It is, however, finite in age, so light can only have traveled a given distance to get to us. Call this distance $R$, the "radius" of the universe. Any observer, anywhere, will see out to a distance $R$ in all directions from their location.
Now two different observers will have different origins for their respective observable universes, and so will see slightly (or vastly) different patches of the "full universe." (Be careful when talking about things outside your observable patch by the way - it is very easy to end up talking about impossible scenarios that produce nonsensical results.) This can happen even if the universe is closed (read: finite), so long as its size is bigger than something like $R$.
So no, no one is at the "edge" of the universe.
By the way, general relativity in no way requires homogeneity and isotropy. These are simply assumptions cosmologists make in order to take an utterly intractable problem (evolving the whole universe) and make it absurdly simple (see the FRW metric, which, although it may look complicated at first, is pretty much the most trivial thing you can do with general relativity). The homogeneous/isotropic assumptions, by the way, turn out to be justified on cosmological scales, though this was discovered only after the early days of GR-based cosmology, once we had very deep galaxy surveys.
http://math.stackexchange.com/questions/86227/trig-reciprocal-function-nomenclature/86230
# Trig reciprocal function nomenclature?
The fact that the reciprocal of $\sin\theta$ is $\csc\theta$, and the reciprocal of $\cos\theta$ is $\sec\theta$ messed with my head for the longest time when I was taking trig. Why are the functions named this way, when an alliterative scheme would seemingly be more sensible?
While I'm at it, what's the reason for choosing the names sine, cosine, and tangent? The words sinusoidal and tangential come to mind, but perhaps these words could have come from the function names, not the other way around.
The properties of secant align better with the properties of sine than the properties of the cosine. For example, if you look at the formulas for the derivatives, the derivatives of sine, tangent, and secant have no minus sign, while the derivatives of cosine, cotangent, and cosecant all have a minus sign in them. – Arturo Magidin Nov 28 '11 at 3:27
– Arturo Magidin Nov 28 '11 at 3:30
– Srivatsan Nov 28 '11 at 3:55
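The derivative pattern Arturo's comment refers to, written out (standard calculus facts, not from the thread itself):

$$\frac{d}{d\theta}\sin\theta = \cos\theta \qquad \frac{d}{d\theta}\tan\theta = \sec^2\theta \qquad \frac{d}{d\theta}\sec\theta = \sec\theta\,\tan\theta$$

$$\frac{d}{d\theta}\cos\theta = -\sin\theta \qquad \frac{d}{d\theta}\cot\theta = -\csc^2\theta \qquad \frac{d}{d\theta}\csc\theta = -\csc\theta\,\cot\theta$$

Each "co-" function's derivative carries the minus sign, which is one reason secant groups naturally with sine and tangent rather than with cosine.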
## 2 Answers
Here is a nice explanation of the origin of all these names. In summary:
Most of the words come from Latin descriptions of the geometry involved. Sine comes from the Latin word 'sinus', tangent from the Latin 'tangens', and secant from the Latin 'secans'. The origin of the co-functions actually makes quite a lot of sense; cosine was originally co-sine, referring to the sine of the complementary angle. Similarly, cotangent and cosecant are the tangent and secant of the complementary angle.
I recommend reading the webpage if you want to know more detail, it's quite interesting.
http://infostructuralist.wordpress.com/category/feedback/
|
# The Information Structuralist
## Stochastic kernels vs. conditional probability distributions
Posted in Control, Feedback, Information Theory, Probability by mraginsky on March 17, 2013
Larry Wasserman‘s recent post about misinterpretation of p-values is a good reminder about a fundamental distinction anyone working in information theory, control or machine learning should be aware of — namely, the distinction between stochastic kernels and conditional probability distributions.
(more…)
## Lower bounds for passive and active learning
Posted in Feedback, Information Theory, Narcissism, Papers and Preprints, Statistical Learning and Inference by mraginsky on November 10, 2011
Sasha Rakhlin and I will be presenting our paper “Lower bounds for passive and active learning” at this year’s NIPS, which will be taking place in Granada, Spain from December 12 to December 15. The proofs of our main results rely heavily on information-theoretic techniques, specifically the data processing inequality for ${f}$-divergences and a certain type of constant-weight binary codes.
(more…)
## Updates! Get your updates here!
Posted in Conference Blogging, Control, Feedback, Information Theory, Models of Complex Stochastic Systems, Narcissism, Papers and Preprints by mraginsky on October 5, 2011
Just a couple of short items, while I catch my breath.
1. First of all, starting January 1, 2012 I will find myself amidst the lovely cornfields of Central Illinois, where I will be an assistant professor in the Department of Electrical and Computer Engineering at UIUC. This will be a homecoming of sorts, since I have spent three years there as a Beckman Fellow. My new home will be in the Coordinated Science Laboratory, where I will continue doing (and blogging about) the same things I do (and blog about).
2. Speaking of Central Illinois, last week I was at the Allerton Conference, where I had tried my best to preach Uncle Judea's gospel to anyone willing to listen: information theorists and their fellow travelers. The paper, entitled "Directed information and Pearl's causal calculus," is now up on arxiv, and here is the abstract:
Probabilistic graphical models are a fundamental tool in statistics, machine learning, signal processing, and control. When such a model is defined on a directed acyclic graph (DAG), one can assign a partial ordering to the events occurring in the corresponding stochastic system. Based on the work of Judea Pearl and others, these DAG-based “causal factorizations” of joint probability measures have been used for characterization and inference of functional dependencies (causal links). This mostly expository paper focuses on several connections between Pearl’s formalism (and in particular his notion of “intervention”) and information-theoretic notions of causality and feedback (such as causal conditioning, directed stochastic kernels, and directed information). As an application, we show how conditional directed information can be used to develop an information-theoretic version of Pearl’s “back-door” criterion for identifiability of causal effects from passive observations. This suggests that the back-door criterion can be thought of as a causal analog of statistical sufficiency.
If you had seen my posts on stochastic kernels, directed information, and causal interventions, you will, more or less, know what to expect.
Incidentally, due to my forthcoming move to UIUC, this will be my last Allerton paper!
## ISIT 2011: favorite talks
Posted in Conference Blogging, Feedback, Games and Decisions, Information Theory, Models of Complex Stochastic Systems, Papers and Preprints, Statistical Learning and Inference by mraginsky on September 1, 2011
Obligatory disclaimer: YMMV, “favorite” does not mean “best,” etc. etc.
• Emmanuel Abbe and Andrew Barron, “Polar coding schemes for the AWGN channel” (pdf)
• The problem of constructing polar codes for channels with continuous input and output alphabets can be reduced, in a certain sense, to the problem of constructing finitely supported approximations to capacity-achieving distributions. This work analyzes several such approximations for the AWGN channel. In particular, one approximation uses quantiles and approaches capacity at a rate that decays exponentially with support size. The proof of this fact uses a neat trick of upper-bounding the Kullback-Leibler divergence by the chi-square distance and then exploiting the law of large numbers.
• Tom Cover, “On the St. Petersburg paradox”
• A fitting topic, since this year’s ISIT took place in St. Petersburg! Tom has presented a reformulation of the problem underlying this (in)famous paradox in terms of finding the best allocation of initial capital so as to optimize various notions of relative wealth. This reformulation obviates the need for various extra assumptions, such as diminishing marginal returns (i.e., concave utilities), and thus provides a means of resolving the paradox from first principles.
• Paul Cuff, Tom Cover, Gowtham Kumar, Lei Zhao, “A lattice of gambles”
• There is a well-known correspondence between martingales and “fair” gambling systems. Paul and co-authors explore another correspondence, between fair gambles and Lorenz curves used in econometric modeling, to study certain stochastic orderings and transformations of martingales. There are nice links to the theory of majorization and, through that, to Blackwell’s framework for comparing statistical experiments in terms of their expected risks.
• Ioanna Ioannou, Charalambos Charalambous, Sergey Loyka, “Outage probability under channel distribution uncertainty” (pdf; longer version: arxiv:1102.1103)
• The outage probability of a general channel with stochastic fading is the probability that the conditional input-output mutual information given the fading state falls below the given rate. In this paper, it is assumed that the state distribution is not known exactly, but there is an upper bound on its divergence from some fixed “nominal” distribution (this model of statistical uncertainty has been used previously in the context of robust control). The variational representation of the divergence (as a Legendre-Fenchel transform of the moment-generating function) then allows for a clean asymptotic analysis of the outage probability.
• Mohammad Naghshvar, Tara Javidi, “Performance bounds for active sequential hypothesis testing”
• Mohammad and Tara show how dynamic programming techniques can be used to develop tight converse bounds for sequential hypothesis testing problems with feedback, in which it is possible to adaptively control the quality of the observation channel. This viewpoint is a lot cleaner and more conceptually straightforward than “classical” proofs based on martingales (à la Burnashev). This new technique is used to analyze asymptotically optimal strategies for sequential $M$-ary hypothesis testing, variable-length coding with feedback, and noisy dynamic search.
• Chris Quinn, Negar Kiyavash, Todd Coleman, “Equivalence between minimal generative model graphs and directed information graphs” (pdf)
• For networks of interacting discrete-time stochastic processes possessing a certain conditional independence structure (motivating example: discrete-time approximations of smooth dynamical systems), Chris, Negar and Todd show the equivalence between two types of graphical models for these networks: (1) generative models that are minimal in a certain “combinatorial” sense and (2) information-theoretic graphs, in which the edges are drawn based on directed information.
• Ofer Shayevitz, “On Rényi measures and hypothesis testing” (long version: arxiv:1012.4401)
• Ofer obtained a new variational characterization of Rényi entropy and divergence that considerably simplifies their analysis, in many cases completely replacing delicate arguments based on Taylor expansions with purely information-theoretic proofs. He also develops a new operational characterization of these information measures in terms of distributed composite hypothesis testing.
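As an aside, the KL-vs-chi-square trick mentioned in the first summary above is easy to sanity-check numerically: for distributions on a finite set, $D(P\|Q) \le \log(1+\chi^2(P\|Q)) \le \chi^2(P\|Q)$, by Jensen's inequality and $\log(1+x)\le x$. A quick sketch (random distributions of my own construction, not anything from the talk):

```python
import math
import random

random.seed(0)

def rand_dist(k):
    # random distribution on k points, bounded away from zero
    v = [random.random() + 0.1 for _ in range(k)]
    s = sum(v)
    return [x / s for x in v]

for _ in range(1000):
    Pd, Qd = rand_dist(6), rand_dist(6)
    kl   = sum(p * math.log(p / q) for p, q in zip(Pd, Qd))
    chi2 = sum((p - q) ** 2 / q for p, q in zip(Pd, Qd))
    # D(P||Q) <= log(1 + chi^2) <= chi^2, up to floating-point slack
    assert kl <= math.log(1.0 + chi2) + 1e-12 <= chi2 + 1e-12
```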
## Missing all the action
Posted in Control, Feedback, Games and Decisions, Information Theory, Optimization by mraginsky on July 25, 2011
Update: I fixed a couple of broken links.
I want to write down some thoughts inspired by Chernoff’s memo on backward induction that may be relevant to feedback information theory and networked control. Some of these points were brought up in discussions with Serdar Yüksel two years ago.
(more…)
## Deadly ninja weapons: Blackwell’s principle of irrelevant information
Posted in Control, Feedback, Games and Decisions, Models of Complex Stochastic Systems, Optimization by mraginsky on November 8, 2010
Having more information when making decisions should always help, it seems. However, there are situations in which this is not the case. Suppose that you observe two pieces of information, ${x}$ and ${y}$, which you can use to choose an action ${u}$. Suppose also that, upon choosing ${u}$, you incur a cost ${c(x,u)}$. For simplicity let us assume that ${x}$, ${y}$, and ${u}$ take values in finite sets ${{\mathsf X}}$, ${{\mathsf Y}}$, and ${{\mathsf U}}$, respectively. Then it is obvious that, no matter which “strategy” for choosing ${u}$ you follow, you cannot do better than ${u^*(x) = \displaystyle{\rm arg\,min}_{u \in {\mathsf U}} c(x,u)}$. More formally, for any strategy ${\gamma : {\mathsf X} \times {\mathsf Y} \rightarrow {\mathsf U}}$ we have
$\displaystyle c(x,u^*(x)) = \min_{u \in {\mathsf U}} c(x,u) \le c(x,\gamma(x,y)).$
Thus, the extra information ${y}$ is irrelevant. Why? Because the cost you incur does not depend on ${y}$ directly, though it may do so through ${u}$.
Interestingly, as David Blackwell has shown in 1964 in a three-page paper, this seemingly innocuous argument does not go through when ${{\mathsf X}}$, ${{\mathsf Y}}$, and ${{\mathsf U}}$ are Borel subsets of Euclidean spaces, the cost function ${c}$ is bounded and Borel-measurable, and the strategies ${\gamma}$ are required to be measurable as well. However, if ${x}$ and ${y}$ are random variables with a known joint distribution ${P}$, then ${y}$ is indeed irrelevant for the purpose of minimizing expected cost.
Warning: lots of measure-theoretic noodling below the fold; if that is not your cup of tea, you can just assume that all sets are finite and go with the poor man’s version stated in the first paragraph. Then all the results below will hold.
(more…)
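The finite-set version from the first paragraph can be checked mechanically; a toy sketch (alphabet sizes and the random cost table are arbitrary choices of mine):

```python
import random

random.seed(42)

X, Y, U = range(3), range(4), range(5)
c = {(x, u): random.random() for x in X for u in U}   # cost ignores y

# the y-blind strategy u*(x) = argmin_u c(x, u)
u_star = {x: min(U, key=lambda u: c[(x, u)]) for x in X}

# no strategy gamma(x, y) -- whatever value it takes -- can beat it pointwise
for x in X:
    for u in U:          # u stands in for gamma(x, y), for any y
        assert c[(x, u_star[x])] <= c[(x, u)]
```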
## Bell Systems Technical Journal: now online
Posted in Control, Echoes of Cybernetics, Feedback, Games and Decisions, Information Theory, Open Access by mraginsky on November 1, 2010
The Bell Systems Technical Journal is now online. Mmmm, seminal articles… Shannon, Wyner, Slepian, Witsenhausen — they're all here!
(h/t Anand Sarwate)
## Sincerely, your biggest Fano
Posted in Control, Feedback, Information Theory, Narcissism, Optimization, Papers and Preprints by mraginsky on October 13, 2010
It’s time to fire up the Shameless Self-Promotion Engine again, for I am about to announce a preprint and a paper to be published. Both deal with more or less the same problem — i.e., fundamental limits of certain sequential procedures — and both rely on the same set of techniques: metric entropy, Fano’s inequality, and bounds on the mutual information through divergence with auxiliary probability measures.
So, without further ado, I give you: (more…)
## Autumn travels
Posted in Conference Blogging, Control, Feedback, Information Theory, Models of Complex Stochastic Systems, Narcissism by mraginsky on September 27, 2010
I have been on the road for the past few days. First I went to Washington DC to visit University of Maryland at College Park and to present my work on empirical processes and typical sequences at their Information and Coding Theory Seminar. A scientist’s dream — two hours in front of a blackboard, no slides!
And now I find myself amid the luscious cornfields of Central Illinois. That’s right, until Friday I’m in Urbana-Champaign for the annual Allerton conference. This year, Todd Coleman (UIUC), Giacomo Como (MIT), and I have co-organized a session on Information Divergence and Stochastic Dynamical Systems, which promises to be quite interesting — it will feature invited talks on Bayesian inference and evolutionary dynamics, reinforcement learning, optimal experimentation, opinion dynamics in social networks, signaling in decentralized control, and optimization of observation channels in control problems. If you happen to be attending Allerton this year, come on by!
## Directed stochastic kernels and causal interventions
Posted in Control, Echoes of Cybernetics, Feedback, Information Theory, Models of Complex Stochastic Systems by mraginsky on September 23, 2010
As I was thinking more about Massey’s paper on directed information and about the work of Touchette and Lloyd on the information-theoretic study of control systems (which we had started looking at during the last meeting of our reading group), I realized that directed stochastic kernels that feature so prominently in the general definition of directed information are known in the machine learning and AI communities under another name, due to Judea Pearl — interventional distributions.
(more…)
Tagged with: IT reading group
http://crypto.stackexchange.com/questions/2484/elliptic-curves-for-ecdsa?answertab=active
# Elliptic curves for ECDSA
I'm trying to implement parameter generation for ECDSA according to SEC1 v2.0:
`Input: The approximate security level in bits t in {80, 112, 128, 192, 256}`
`Output: Elliptic curve domain parameters over Fp: T = (p, a, b, G, n, h)`
Here's the 2nd step of the algorithm:
2. Select elements (a, b) in Fp to determine the elliptic curve E(Fp) defined by the equation:
`E : y^2 = x^3 + ax + b (mod p),`
a base point `G = (Gx, Gy)` on E(Fp), a prime `n` which is the order of `G`, and an integer `h` which is the cofactor `h = #E(Fp)/n`, subject to the following constraints:
• 4a^3 + 27b^2 != 0 (mod p).
• #E(Fp) != p.
• p^B != 1 (mod n) for all 1 <= B < 100.
• h <= 2^(t/8).
• n−1 and n+1 should each have a large prime factor r, which is large in the sense that log_n(r) > 19/20.
I haven't understood a lot of things in 2nd step.
1. How to select `a` and `b` for E(Fp)? Should it be done randomly, just to satisfy `4a^3 + 27b^2 != 0 (mod p)`? Yes it should, as far as I've understood.
2. How to find #E(Fp), the cardinality of E(Fp)? -- Use Schoof's algorithm or the SEA algorithm.
3. How to choose the generator G = (Gx, Gy) and find its order `n`? -- A random point should be chosen on the curve. Again, I'm not quite sure about this.
EDIT: The point has to have prime order. How can a point of prime order be chosen?
P.S.
Thank you, and sorry for my English.
-
## 1 Answer
It is easier to generate a point with order $n$ than to find out the order of a random point:
• Generate a random point $G'$ (generate random $x$ and solve for $y$)
• Compute $G = hG'$ (multiply by cofactor)
This is guaranteed to generate a point $G$ with order either $n$ or $1$ (the point at infinity). The chance of generating the point at infinity is negligible, but you can check for it and regenerate $G$ if you want.
This procedure is described in section 3.1.3.2 "Point Selection" in SEC1v2, where it computes a generator from a seed $S$. It is more convoluted since it generates points verifiably at random.
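A minimal Python sketch of this cofactor trick, using a tiny textbook curve (all parameters here are illustrative, nowhere near cryptographic size):

```python
# Toy sketch of the point-selection trick above.  The curve
# y^2 = x^3 + x + 1 over F_23 is a standard textbook example with
# #E(F_23) = 28 = h * n, where n = 7 is prime and the cofactor is h = 4.
import random

p, a, b = 23, 1, 1
n, h = 7, 4
INF = None  # the point at infinity

def add(P, Q):
    """Elliptic-curve point addition over F_p."""
    if P is INF: return Q
    if Q is INF: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return INF
    if P == Q:
        lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, p) % p
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

def mul(k, P):
    """Double-and-add scalar multiplication k*P."""
    R = INF
    while k:
        if k & 1:
            R = add(R, P)
        P = add(P, P)
        k >>= 1
    return R

def random_point():
    # Pick random x until x^3 + ax + b is a square mod p; since p % 4 == 3,
    # a square root (when it exists) is rhs^((p+1)/4) mod p.
    while True:
        x = random.randrange(p)
        rhs = (x * x * x + a * x + b) % p
        y = pow(rhs, (p + 1) // 4, p)
        if y * y % p == rhs:
            return (x, y)

G = INF
while G is INF:  # regenerate in the (rare) case h*P is the point at infinity
    G = mul(h, random_point())

assert mul(n, G) is INF  # order of G divides the prime n, hence equals n
print("base point of order", n, ":", G)
```

(Needs Python 3.8+ for `pow(x, -1, p)` modular inverses.)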
-
where n is ?.. Though the real problem is to find a point with a prime order. – ted May 4 '12 at 5:34
You can find $n$ by factoring the order of the curve, which you have found with e.g. Schoof's algorithm. Then $n$ will be the largest factor, and $h$ is the order divided by $n$. – Conrado PLG May 4 '12 at 12:46
probably you should add last comment to your answer. Thank you. – ted May 5 '12 at 12:14
http://unapologetic.wordpress.com/2009/04/16/the-cauchy-schwarz-inequality/?like=1&source=post_flair&_wpnonce=4ca8fc8c32
# The Unapologetic Mathematician
## The Cauchy-Schwarz Inequality
Today I want to present a deceptively simple fact about spaces equipped with inner products. The Cauchy-Schwarz inequality states that
$\displaystyle\langle v,w\rangle^2\leq\langle v,v\rangle\langle w,w\rangle$
for any vectors $v,w\in V$. The proof uses a neat little trick. We take a scalar $t$ and construct the vector $v+tw$. Now the positive-definiteness, bilinearity, and symmetry of the inner product tells us that
$\displaystyle0\leq\langle v+tw,v+tw\rangle=\langle v,v\rangle+2\langle v,w\rangle t+t^2\langle w,w\rangle$
This is a quadratic function of the real variable $t$. It can have at most one zero, if there is some value $t_0$ such that $v+t_0w$ is the zero vector, but it definitely can’t have two zeroes. That is, it’s either a perfect square or an irreducible quadratic. Thus we consider the discriminant and conclude
$\displaystyle\left(2\langle v,w\rangle\right)^2-4\langle w,w\rangle\langle v,v\rangle\leq0$
which is easily seen to be equivalent to the Cauchy-Schwarz inequality above. As a side effect, we see that we only get an equality (rather than an inequality) when $v$ and $w$ are linearly dependent.
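As a quick numerical sanity check of the inequality and the equality case (an illustrative script, not part of the proof):

```python
# Check <v,w>^2 <= <v,v><w,w> for random vectors in R^5, and that equality
# holds when v and w are linearly dependent.
import random

def inner(v, w):
    return sum(a * b for a, b in zip(v, w))

random.seed(1)
for _ in range(1000):
    v = [random.uniform(-10, 10) for _ in range(5)]
    w = [random.uniform(-10, 10) for _ in range(5)]
    assert inner(v, w) ** 2 <= inner(v, v) * inner(w, w) * (1 + 1e-12)

# equality exactly when v and w are linearly dependent (here w = -3v)
v = [1.0, -2.0, 3.0, 0.5, 4.0]
w = [-3 * a for a in v]
assert inner(v, w) ** 2 == inner(v, v) * inner(w, w)
```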
Posted by John Armstrong | Algebra, Linear Algebra
## 5 Comments »
1. [...] We again consider a real vector space with an inner product. We’re going to use the Cauchy-Schwarz inequality to give geometric meaning to this [...]
Pingback by | April 17, 2009 | Reply
2. [...] We’re still looking at a real vector space with an inner product. We used the Cauchy-Schwarz inequality to define a notion of angle between two [...]
Pingback by | April 21, 2009 | Reply
3. [...] notion of length, defined by setting as before. What about angle? That will depend directly on the Cauchy-Schwarz inequality, assuming it holds. We’ll check that [...]
Pingback by | April 22, 2009 | Reply
4. [...] we can use the Cauchy-Schwarz inequality to conclude [...]
Pingback by | February 25, 2010 | Reply
5. [...] The condition relating and is very common in this discussion, so we will say that such a pair of real numbers are “Hölder conjugates” of each other. Given , the Hölder conjugate is uniquely defined by , which is a strictly decreasing function sending to itself (with order reversed, of course). The fact that this function has a (unique) fixed point at will be important. In particular, we will see that this norm is associated with an inner product on , and that Hölder’s inequality actually implies the Cauchy-Schwarz inequality! [...]
Pingback by | August 26, 2010 | Reply
http://mathoverflow.net/questions/48552/homotopy-between-solutions-of-maurer-cartan-equation/48578
## homotopy between solutions of Maurer-Cartan equation
If $S_0, S_1$ are two solutions of the Maurer-Cartan equation $dS+\frac{1}{2}[S,S]=0$ for a dg-Lie algebra $g$, do we have a suitable concept of homotopy between $S_0$ and $S_1$?
-
There is a standard notion of "gauge equivalence" of Maurer-Cartan solutions, at least when $g^0$ (the degree zero part of $g$) is nilpotent. It comes from the "gauge action", which is just the exponentiated $\operatorname{ad}$ action of $g^0$. See for example these notes of Manetti: arxiv.org/abs/math/0507286 – Kevin Lin Dec 7 2010 at 10:15
If you take the infinitesimal neighbourhood of a Maurer Cartan solution (ie the twisted dg-Lie algebra) then the differential certainly gives a notion of homotopy. I look forward to an expert's take on this nice question. – James Griffin Dec 7 2010 at 11:03
## 2 Answers
To continue what Kevin Lin mentioned, a discussion on the relation between homotopy and gauge equivalence for solutions of the Maurer-Cartan equation is in the last section of Manetti's "Deformation theory via differential graded Lie algebras". To sum up, you can define two solutions $S_0,S_1\in MC_{\mathfrak{g}}(A)$ to be homotopic iff there exists an element $S\in MC_{\mathfrak{g}[t,dt]}(A)$ with $S(0)=S_0$ and $S(1)=S_1$. Now it turns out that in a certain sense this homotopy equivalence and the gauge equivalence are the same thing, however the homotopy definition is preferable since it extends to $L_\infty$ algebras.
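To unpack this a little (a standard computation; signs depend on conventions, cf. Manetti's notes): write $S(t,dt)=S(t)+s(t)\,dt$ with $S(t)\in\mathfrak{g}^1[t]$ and $s(t)\in\mathfrak{g}^0[t]$. The Maurer-Cartan equation in $\mathfrak{g}[t,dt]$ then splits into
$$dS(t)+\tfrac{1}{2}[S(t),S(t)]=0\quad\text{for every }t,\qquad \frac{d}{dt}S(t)=-\,d\,s(t)-[S(t),s(t)],$$
so a homotopy is a path of Maurer-Cartan elements whose velocity is at each time an infinitesimal gauge transformation; integrating this flow is what relates the homotopy relation to gauge equivalence.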
-
$S_0$ and $S_1$ are said to be homotopy equivalent if there is a Maurer-Cartan element $S(t,dt)$ in the dgla $g[t,dt]$ such that $S(0)=S_0$ and $S(1)=S_1$. It is not completely clear at first that this is an equivalence relation, but actually it is. Indeed much more is true and the homotopy equivalence just described is just the tip of the iceberg. To see this, rewrite $g[t,dt]$ as $g\otimes\Omega^1$, where $\Omega^1$ is the differential graded commutative algebra of polynomial differential forms on the (algebraic) 1-simplex. Then one sees this is the beginning of a simplicial dgla $g\otimes\Omega^\bullet$, and taking Maurer-Cartan elements produces a simplicial set $MC(g\otimes \Omega^\bullet)$. This simplicial set turns out to be a Kan complex, and the fact that the homotopy relation between solutions of the Maurer-Cartan equation on $g$ is an equivalence relation is precisely the 'horn filling' property of this Kan complex.
A good reference is Getzler's "Lie theory for nilpotent $L_\infty$-algebras" (arXiv:math/0404003).
(in formal deformation theory one produces a nilpotent dgla out of an arbitrary one by tensoring it with the maximal ideal $m_A$ of a local Artin algebra $A$)
-
You mean maximal ideal of a local Artin algebra? – Kevin Lin Dec 7 2010 at 17:11
yes, thanks. I've now edited that. – domenico fiorenza Dec 7 2010 at 17:16
http://mathhelpforum.com/advanced-statistics/60154-probability-next-entry-series-discrete-trials.html
# Thread:
1. ## Probability of the next entry in a series of discrete trials.
Hi, all.
Suppose you have a series of events, all the same, which can result in X or ~X. The probability of each is unknown. As you watch the series go forward, every entry (to start) is X. So:
$S = \{X, X, X, X, X... \}$
At any point, however, it is possible that the pattern will cease, and that we will get a ~X. But let's say it hasn't happened yet.
Let's also say that the number of X's we have observed so far is $n$, such that the probability of this occurring is:
$P(S_n)=[P(X)]^n$
So, how do we determine P(X) ?
We could determine a confidence level, I think...
$[P(X)]^n>.05$
$P(X)>.05^{\frac{1}{n}}$
So, there's a 95% chance that $P(X)\in(.05^\frac{1}{n},1]$, yes? But how do we actually find P(X)?
Thanks!
2. We could use the confidence interval to find a minimum probability. Let's say that the error is given as $E$. So:
$E^{\frac{1}{n}}$ is the lower bound of $P(X)$ in the $1-E$ confidence level. Let's call Q(X) the probable expectation that the next entry in the sequence is X. So, $Q(X)\geq(1-E)E^{\frac{1}{n}}$. We can take the derivative with respect to $E$ to find the value of $E$ which maximizes $Q(X)$:
$\frac{d}{dE}(1-E)E^{\frac{1}{n}}=0$
$\frac{d}{dE}(E^{\frac{1}{n}}-E^{\frac{1}{n}+1})=0$
$\frac{1}{n}E^{\frac{1}{n}-1}-[\frac{1}{n}+1]E^{\frac{1}{n}}=0$
$\frac{1}{En}E^{\frac{1}{n}}-[\frac{1}{n}+1]E^{\frac{1}{n}}=0$
$[\frac{1}{En}-\frac{1}{n}-1]E^{\frac{1}{n}}=0$
$\frac{1}{En}-\frac{1}{n}-1=0$
$\frac{1}{En}=\frac{n+1}{n}$
$En=\frac{n}{n+1}$
$E=\frac{1}{n+1}$
Now, recall our $Q(X)$ relationship:
$Q(X)\geq(1-E)E^{\frac{1}{n}}$
And substitute for E:
$Q(X)\geq(1-\frac{1}{n+1})[\frac{1}{n+1}]^{\frac{1}{n}}$
$Q(X)\geq\frac{n}{(n+1)^{\frac{1}{n}}(n+1)}$
$Q(X)\geq\frac{n}{(n+1)^{\frac{n+1}{n}}}$
It seems like we should be able to do better than this, though.
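A quick numerical check (illustrative; $n=10$ is an arbitrary choice) that $E=\frac{1}{n+1}$ really is the maximizer, and that the maximum matches the closed form derived above:

```python
# Verify numerically that E = 1/(n+1) maximizes (1-E) * E^(1/n), and that
# the maximum equals n / (n+1)^((n+1)/n), as derived above.
n = 10
f = lambda E: (1 - E) * E ** (1 / n)
E_star = 1 / (n + 1)

# the maximizer beats every point of a fine grid on (0, 1)
grid_max = max(f(k / 100000) for k in range(1, 100000))
assert f(E_star) >= grid_max - 1e-12

# and its value agrees with the closed-form lower bound for Q(X)
assert abs(f(E_star) - n / (n + 1) ** ((n + 1) / n)) < 1e-12
print(f"E* = {E_star:.4f}, Q(X) lower bound = {f(E_star):.4f}")
```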
http://mathoverflow.net/questions/115378/example-of-a-concave-function-with-lim-x-to-0-fracgx-x-ln-x-infty-w
## example of a concave function with $\lim_{x\to 0^+}\frac{g(x)}{-x\ln x}=\infty$ which fulfills some additional condition
I'm looking for an example of a concave function $g\colon [0,1]\to \mathbb{R}$, with $g(0)=0$, for which
1. $\lim\limits_{x\to 0^+}\frac{g(x)}{-x\ln x}=\infty$ and
2. $\lim\limits_{x\to 0^+}\frac{\lambda g(x)}{g(\lambda x)}=1$ for every $\lambda>1$
-
## 1 Answer
Take $g_1(x)=x\log^2x$. Properties 1,2 are evidently satisfied, and computation of the second derivative shows that it is negative for $0< x<1/e$. Now rescale: $g(x)=g_1(x/e)$.
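A numerical sanity check of this answer (illustrative; note the convergence in property 2 is only logarithmic, hence the very small $x$ and loose tolerance):

```python
# Check the three claimed features of g(x) = g1(x/e), g1(t) = t * log(t)**2.
import math

def g(x):
    t = x / math.e
    return t * math.log(t) ** 2

# Property 1: g(x) / (-x ln x) grows without bound as x -> 0+
ratios = [g(10.0 ** -k) / (-(10.0 ** -k) * math.log(10.0 ** -k))
          for k in (2, 4, 8, 16)]
assert ratios == sorted(ratios)  # increasing along this sequence

# Property 2: lambda * g(x) / g(lambda * x) -> 1 for every lambda > 1
lam, x = 3.0, 1e-100
assert abs(lam * g(x) / g(lam * x) - 1) < 0.05

# Concavity on (0, 1): second differences are negative
hstep = 1e-4
for x in (0.1, 0.3, 0.5, 0.7, 0.9):
    assert g(x + hstep) + g(x - hstep) - 2 * g(x) < 0
```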
-
http://mathoverflow.net/questions/48849/transpositions-of-order-three
## Transpositions of order three
Allow me to take advantage of your collective scholarliness...
The symmetric group $\mathbb S_n$ can be presented, as we all know, as the group generated by letters $\sigma_1,\dots,\sigma_{n-1}$ subject to the relations $$\begin{aligned} &\sigma_i\sigma_j=\sigma_j\sigma_i, && 1\leq i,j<n, |i-j|>1;\\ &\sigma_i\sigma_j\sigma_i=\sigma_j\sigma_i\sigma_j, &&1\leq i,j<n, |i-j|=1; \\ &\sigma_i^2=1, && 1\leq i<n \end{aligned}$$ If we drop the last group of relations (the ones declaring that the $\sigma_i$'s are involutions), we get the braid group $\mathbb B_n$. Now suppose I add to $\mathbb B_n$ the relations $$\begin{aligned} &\sigma_i^3=1, && 1\leq i<n \end{aligned}$$ and call the resulting group $\mathbb T_n$.
• This very natural group has probably shown up in the literature. Can you provide references to such appearances?
• In particular, is $\mathbb T_n$ finite?
-
1
(One can ask exactly the same last question for the quotients of all Artin braid groups corresponding to finite type Coxeter matrices by the cubes of the simple reflections... I am not greedy) – Mariano Suárez-Alvarez Dec 9 2010 at 23:36
for n=3, this is the binary tetrahedral group of order 24, a split extension of the quaternion group of order 8 by a cyclic group of order 3 – mt Dec 9 2010 at 23:40
For $n=4$, the group is an extension of the simple group $O(5,3)$ of order $25920$ by a cyclic group of order $6$. – Mariano Suárez-Alvarez Dec 10 2010 at 0:02
...and for $n=5$ GAP's coset enumeration seems not to stop. – Mariano Suárez-Alvarez Dec 10 2010 at 0:12
(My two last comments use off-by-one indexing...) For $n=6$, GAP gives up after using 3GiB to build the coset table. – Mariano Suárez-Alvarez Dec 10 2010 at 0:51
## 3 Answers
Following up what was mentioned in the comments for $n$ up to $5$. In "Factor groups of the braid group" Coxeter showed that the quotient of the Braid group by the normal closure of the subgroup generated by $\{\sigma_i^k \ | \ 1\le i\le n-1\}$ is finite if and only if $$\frac{1}{n}+\frac{1}{k}>\frac{1}{2}$$ In your case ($k=3$) this translates to this group being infinite for $n\geq 6$.
P.S. For the same question on Artin braid groups one can use the classification of finite complex reflection groups. See for example the first reference there, "On complex reflection groups and their associated braid groups" by Broué, Malle and Rouquier.
-
Which seems to answer the second question. – Theo Johnson-Freyd Dec 10 2010 at 1:33
Aha! That is precisely what I wanted to hear! :D – Mariano Suárez-Alvarez Dec 10 2010 at 1:45
There is a way to understand these groups geometrically, as complex reflection groups, which are groups generated by rotations around complex hyperplanes. I believe (but without actually looking up the paper cited by Gjergji Zaimi) that this is what Coxeter was doing.
Let's start with the tetrahedron. Let $A$ be rotation by 120 degrees about one vertex, and $B$ be rotation by 120 degrees about another. $A$ and $B$ both act as even permutations of the vertices, so $AB$ also acts as an even permutation of the vertices; it is a 3-cycle, so they satisfy the relation $(AB)^3 = 1$. As noted by mt in comments, this is not $\mathbb T_3$, but just the quotient by its center. The group $SO(3)$ of orientation preserving isometries of $S^2$ is the quotient of the group of unit quaternions ($S^3$) by its center, and the preimage of the tetrahedral group is the binary tetrahedral group, where the relation is also satisfied. You can think of this as keeping track of how many times you've rotated the tetrahedron, mod 2. The group $S^3$ is the same as $SU(2)$. The 2 lifts of a 120 degree rotation to $S^3$ look like $60$ and $240$ degree rotations about great circles. These great circles are the intersections of complex lines with the unit sphere in $\mathbb C^2$, and the operations are what are called complex reflections. 120 degree rotations about hyperplanes in any dimension making the same angle as these satisfy the braid relation.
In general, to realize $\mathbb T_n$ geometrically, you need $n-1$ complex hyperplanes $P_i$ corresponding to your transformations $T_i$, so that when $|i-j| > 1$ they are orthogonal, and otherwise they make the same angle as above. To make this work, construct a Hermitian form ($\leftrightarrow$ metric compatible with the complex structure) so normal vectors to these planes have the specified angles. This is a standard process. This won't be a positive definite form in general --- it'll only be positive definite when the group is finite. But, it still has a geometric interpretation. In particular, when there is only one negative direction, it can be interpreted as a complex reflection group acting on complex hyperbolic space.
Quite a lot is known about complex reflection groups, but I'm not very familiar with the literature so I won't try to summarize. I'll just mention a paper of mine, "Shapes of polyhedra and triangulations of the sphere", in which you can find an interpretation of $\mathbb T_n$ for $n \le 12$ as the modular groups for spaces of convex polyhedra in $\mathbb{R}^3$ which have angle defects at their vertices that are multiples of $\pi/3$. When there are no more than 6 points, the polyhedron is non-compact. When $n < 6$, the moduli space is the quotient of complex projective space $\mathbb{CP}^{n-2}$ by your group modulo its center. (The full group is a subgroup of $SU(n-1)$).
In the borderline case $n = 6$, the polyhedron looks like an infinite cylinder at one end, and $\mathbb T_6$ acts as a crystallographic group on $\mathbb{C}^4$ (with its order 2 center acting trivially).
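A computational sanity check (illustrative) of the $n=3$ case from the comments. The specialization used here is my own choice, not from the thread: taking matrices of Burau type at $t=-e^{2\pi i/3}$ makes each generator satisfy $\sigma_i^3=1$ while keeping the braid relation, and closing the pair under multiplication yields a group of order 24, matching the binary tetrahedral group:

```python
# Close two explicit 2x2 complex matrices satisfying the relations of T_3
# under multiplication and count the elements; the answer should be 24.
import cmath

w = cmath.exp(2j * cmath.pi / 3)   # primitive cube root of unity
s1 = ((w, 1), (0, 1))
s2 = ((1, 0), (-w, w))

def matmul(A, B):
    return tuple(
        tuple(sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2))
        for i in range(2)
    )

def key(A):  # round entries so floating-point noise doesn't split elements
    return tuple(complex(round(z.real, 6), round(z.imag, 6))
                 for row in A for z in row)

# sanity: the defining relations of T_3 hold for these matrices
I = ((1, 0), (0, 1))
assert key(matmul(matmul(s1, s1), s1)) == key(I)
assert key(matmul(matmul(s2, s2), s2)) == key(I)
assert key(matmul(matmul(s1, s2), s1)) == key(matmul(matmul(s2, s1), s2))

group = {key(s1): s1, key(s2): s2}
frontier = list(group.values())
while frontier:
    new = []
    for A in frontier:
        for gen in (s1, s2):
            P = matmul(A, gen)
            if key(P) not in group:
                group[key(P)] = P
                new.append(P)
    frontier = new

print(len(group))  # 24
```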
-
Beautiful answer. – Petya Dec 10 2010 at 4:32
Here is an idea. Fix a commutative ring $R$ and elements $q, z \in R$. Recall that the (Iwahori-)Hecke algebra $H_n(q, z)$ is the $R$-algebra on generators $T_1, ... T_{n-1}$ with relations
$$T_i T_j = T_j T_i, |i - j| \ge 2$$ $$T_i T_{i+1} T_i = T_{i+1} T_i T_{i+1}$$ $$T_i^2 = z T_i + q.$$
It is known that $H_n(q, z)$ is a free $R$-module on elements $T_w, w \in S_n$ which are identified with products of the $T_i$ corresponding to minimal representations of $w$ as a product of transpositions. When $q = 1, z = 0$, we get the group algebra of $S_n$.
When $q = -1, z = -1$, we have $T_i^2 = -T_i - 1$, hence $T_i^3 = -T_i^2 - T_i = (T_i + 1) - T_i = 1$, so $\mathbb{T}_n$ acts on $H_n(-1, -1)$ (via the map sending $\sigma_i$ to multiplication by $T_i$). If we could show that this action is faithful, then it would follow at least that $\mathbb{T}_n$ has a faithful linear representation, and one might be able to push this to show that $\mathbb{T}_n$ is finite (for example by showing that the action is faithful when $R$ is a finite field).
-
Isn't $T_i^3=-1$ with those parameters? – Mariano Suárez-Alvarez Dec 10 2010 at 0:44
You want $q=z=-1$. – Mariano Suárez-Alvarez Dec 10 2010 at 0:46
Whoops; thanks. – Qiaochu Yuan Dec 10 2010 at 0:50
This is a great approach. Sadly, according to Gjergji Zaimi's answer, the "push this to show that it's finite" is bound to fail. – Theo Johnson-Freyd Dec 10 2010 at 1:35
http://pretaapprendre.wordpress.com/
# Prêt à Apprendre
## Particle Physics is Dead
July 8, 2012 by johnbulava
So I’ve decided to take a break from the standard progression of pedagogical posts about theoretical particle physics and comment on the recent Higgs Boson discovery. First, you can view the actual talk where the results were presented here. This video consists of two talks, one by Joe Incandela presenting the results of the CMS collaboration and the other by Fabiola Gianotti presenting the results of the ATLAS collaboration, of which my wife Elisa is a member! Some of my favorite parts are:
• 27:00-29:00: The audible gasp in the room is for the bump on the plot. This is a plot of the number of times a particular reaction occurs, as a function of energy of the products of the reaction. An enhancement in such a plot corresponds to the presence of an intermediate unstable particle produced in the reaction.
• 36:00-38:00: The crowd goes wild when the significance of the signal is reported to be 5.0σ; this means it is not just ‘evidence’ but ‘discovery’. These words are given precise statistical meaning in the particle physics community. The attainment of ‘discovery’ was (at least to me) a huge surprise.
• 50:00-52:00: The end of the first talk. Some nice closing remarks.
• 1:18:00-1:20:00: A bump is seen by ATLAS which is AT THE SAME ENERGY as CMS. This is big news.
• 1:33:00-1:37:00: The statistical significance of the ATLAS bump is quantified. Double whammy! This is 5.0σ as well! Discovery by two independent experiments. Both the crowd and the speaker can barely contain themselves.
• 1:40:00-1:46:00: The end of the second talk, with some nice remarks, I think. Also, teary-eyed Peter Higgs (the old guy with the bushy eyebrows) steals the show. Finally some remarks by Rolf-Dieter Heuer, the head of CERN.
• 1:52:00-1:54:00: Remarks by former head of CERN Sir Christopher L. Smith. It took a while to find this damn particle!
• 1:55:00 – 1:59:00: Remarks by the theorists who theorized the Higgs. My favorite parts: Peter Higgs glad he lived to see the day and François Englert missing Robert Brout, who did not live to see the day.
Anyway, while the video is as long as two Downton Abbey episodes, I can’t promise it’s any more exciting.
WARNING: If particle physics were a newspaper, the remainder of this post would belong in the ‘Opinion’ section. I’ve spent the last week or so thinking a bit about the implications of this discovery with some of my colleagues, and I’m afraid I don’t share the unbridled optimism of the press releases and articles. Undoubtedly, this discovery was historic as it (in some sense) completes the Standard Model and represents many years of dedicated blah blah blah. However, so far all indications suggest that this Higgs boson is ‘standard’ and is precisely the one that has been described in particle physics textbooks for the last ~20 years!
Of course, it is of the utmost importance that the properties of this new particle are measured precisely, as they certainly will be in the upcoming years at the LHC. However, if this particle IS the garden variety Higgs, there are few suggestions that anything else is waiting to be discovered. For example, the Higgs mass is linked to the energy at which the Standard Model ceases to make sense as a self-consistent theory. This relation has been updated recently by (among others) some of my CERN colleagues, Degrassi, et al., in light of the recent discovery.
Fig. 1: The energy at which the Standard Model breaks down plotted against the mass of the Higgs boson. For the ~125GeV Higgs that has been discovered, the breakdown energy is waaay beyond the reach of the LHC and possibly any future earth-based particle accelerator. Figure taken from here.
The main idea from this paper which I want to emphasize is illustrated in Fig. 1. As has been known for a while, but I think not really mentioned so much, a ~125GeV Higgs boson implies that the Standard Model as we know it could be a perfectly complete theory up to VERY large energies, like ~1,000,000,000,000 GeV or so. For comparison, the maximum planned energy of the LHC is 14,000 GeV! Of course this doesn’t mean that there aren’t any new phenomena within the reach of the LHC, just that there doesn’t have to be any.
One of the few reasons for optimism is the phenomenon of ‘Dark Matter’ in astrophysics. It seems that a reasonable explanation of the astrophysical observations is the existence of a new particle that only interacts weakly with the Standard Model particles. Such a lazy particle is affectionately termed a Weakly-Interacting Massive Particle, or WIMP for short. If one makes some simplifying assumptions about the expansion history of the universe and the Dark Matter production mechanism, a rough estimate for its energy scale is somewhat close to the energies being probed at the LHC. This is the so-called ‘WIMP miracle’, and may mean that Dark Matter can be produced at CERN.
In conclusion, the Higgs boson discovery was certainly a monumental occasion for particle physics. However, there is a very real possibility that it could be the ONLY major discovery at CERN. If that’s the case, the graphic at the top of the post, which is taken from Hip-Hop is Dead (a Nas album), may also describe accelerator-based particle physics as well as the academic careers of many students and postdocs currently working in the field. Of course, in hip-hop the emergence of several new genres and communities has (in my mind) proven Nas wrong; I just hope the same thing happens in particle physics.
Posted in Uncategorized | 2 Comments »
## Respect the Beard
April 28, 2012 by johnbulava
Rather than continue the classical line of High Energy Physics pedagogy, i.e. Special Relativity, Quantum Field Theory, etc., I’d like to pause for a bit and talk about one of the neat little curiosities about quantum mechanics. To begin with, I’ll recall the main equation from our previous post on Quantum Mechanics
$\langle x_f, t_f | x_i, t_i \rangle = \int\mathrm{D}x(t) \; \mathrm{e}^{i\frac{S[x]}{\hbar}}$
This equation states that if a system is in state $x_i$ at time $t_i$, to calculate the probability that it is in state $x_f$ at some later time $t_f$ I have to sum up all possible paths the system could take between the two states, and assign to each path an “arrow” given by the complex phase on the right side of the equation. This sum results in a complex number, and if I take the magnitude (distance from zero) of this complex number and square it, I get the probability.
The neat little nugget is that if I make a simple change to this equation, namely by treating the time variable on the right as an imaginary number (what does that mean, really??), it becomes the following
$\langle x_f, t_f | x_i, t_i \rangle = \int\mathrm{D}x(t) \; \mathrm{e}^{-\frac{S[x]}{\hbar}}$
where the minus sign comes out basically because i times itself is minus 1. I’ve glossed over the details a bit, but this technique is known as Wick rotation. Anyway, this equation now looks like one that was derived nearly 150 years earlier, namely that of the Partition Function in statistical mechanics.
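To see concretely where the minus sign comes from, take a free particle with $S[x]=\int \mathrm{d}t\,\frac{m}{2}\left(\frac{dx}{dt}\right)^2$ (a sketch in the simplest case; potential terms work out similarly). Substituting $t=-i\tau$ gives $\mathrm{d}t=-i\,\mathrm{d}\tau$ and $\frac{dx}{dt}=i\frac{dx}{d\tau}$, so

$\displaystyle \frac{i}{\hbar}S=\frac{i}{\hbar}\int \mathrm{d}t\,\frac{m}{2}\left(\frac{dx}{dt}\right)^2=-\frac{1}{\hbar}\int \mathrm{d}\tau\,\frac{m}{2}\left(\frac{dx}{d\tau}\right)^2=-\frac{S_E}{\hbar}$

where $S_E$ is the so-called Euclidean action, which is exactly the kind of exponent that appears in a Boltzmann weight.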
Statistical mechanics is one of my favorite branches of physics. It uses statistical techniques to describe systems that consist of many, many constituent parts. For example, in an introductory physics class, you usually talk about balls being thrown in the air, and the speed of trains and so forth, but these concepts are useless to describe a pot of water boiling. Am I to calculate the individual positions and velocities of all the individual water molecules in the pot? NO! That would be stupid. Instead what I want to know are “thermodynamic” properties of the water, like temperature and (if you ever had a grandmother make you soggy beans in one of those pots where you clamp the lid on) pressure. Statistical mechanics is able to bridge the gap between the individual properties of the water molecules and the thermodynamic properties of the pot as a whole.
This connection is made through the partition function, which (in some sense) sums up all the ways that energy can be partitioned among all the particles in the system. This function alone is enough to obtain basically all the thermodynamic information about the system. Aren’t you excited??? This was a revolution! Using the physics concepts for the microscopic single particles, like velocity, force, energy, and so forth, I’m able to write down the partition function, which then allows me to calculate macroscopic properties like temperature and pressure.
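If you want to poke at the partition function yourself, here is a tiny Python sketch (my own toy example, with made-up energies 0 and eps for a two-level system, in units where Boltzmann's constant is 1): Z sums the Boltzmann weights, and a macroscopic quantity like the average energy follows from Z alone.

```python
import math

# Toy two-level system with assumed energies 0 and eps, in units where
# Boltzmann's constant k_B = 1 (so temperature T is measured in energy units).
def partition_function(eps, T):
    # Z sums the Boltzmann weight exp(-E/T) over the two states
    return 1.0 + math.exp(-eps / T)

def average_energy(eps, T):
    # a thermodynamic property, obtained from the partition function alone
    Z = partition_function(eps, T)
    return eps * math.exp(-eps / T) / Z

eps = 1.0
print(average_energy(eps, 0.1))    # cold: nearly everything sits in the ground state
print(average_energy(eps, 100.0))  # hot: both states almost equally likely, ~eps/2
```

Real systems have vastly more states, but the logic is identical: write down Z from the microscopic energies, then take derivatives or ratios of Z to get macroscopic quantities.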
Statistical Mechanics was founded by some pretty cool guys, among them Ludwig Boltzmann, James Clerk Maxwell, and perhaps most importantly, J. Willard Gibbs, who was one of the greatest American physicists. All these guys rocked some serious beards, with Boltzmann having the best one. As someone who also has a beard, I feel a special kinship to these guys. I’ll also mention my favorite bearded basketball player, James Harden, whose picture appears at the top of the post. His Oklahoma City Thunder begin the NBA playoffs this week. GO THUNDER!!!
Anyway, when most people talk about ‘modern’ physics they usually mean quantum mechanics, special relativity, and the like, but for me statistical mechanics was really the advent of the modern era in physics. This is because statistical mechanics made direct reference to properties of the individual particles in a system, which as we discussed before, can’t be measured in practice. The idea of using these ‘invisible’ properties to formulate a theory was a radical one, and at least for Boltzmann, the scientific establishment was not initially on beard, umm, I mean on board. The scientific philosophy of the era was something like the positivism of Ernst Mach (also amply bearded), which basically said that science should only be concerned with directly observable phenomena (NOTE: I am not a philosophy of science expert). Of course, atomic theory, nuclear physics, etc. all disagree with this way of thinking, but Mach was very opposed to Boltzmann’s ideas. A more modern view was espoused by Karl Popper (not bearded), who instead demands that a valid scientific theory be ‘falsifiable’, i.e. able to be proven wrong.
Ok, to wrap things up, there’s a very deep connection between quantum mechanics and statistical mechanics. This connection is made by examining the defining equation of quantum mechanics and treating time as a purely imaginary number. When you do this, the ‘sum over possible paths’ in quantum mechanics becomes equivalent to a ‘sum over possible states’ in statistical mechanics. This is the beginning of many connections between the strange ‘quantum’ world and the less strange concepts of statistical systems. As for the ‘meaning’ of ‘imaginary time’, I’m not sure that there is one. As far as I’m concerned, it’s just a mathematical device used to establish this equivalence. If you disagree please comment!!!
## Least Action Hero
January 16, 2012 by johnbulava
At long last, the previously mentioned Quantum Mechanics post!!! Quantum Mechanics is essential to our understanding of the world, and was born around the turn of the (19th-20th) century. It’s rather counter-intuitive from the point of view of our everyday experience so things like thought experiments must be used to gain intuition about what’s going on. WARNING: although quantum mechanics is a very successful theory experimentally, there are many interpretations of what is actually happening. Indeed, there is an entire field of physics (which I am NOT an expert in) devoted to understanding quantum mechanics.
However, I think it is fair to say (please comment if you disagree) that for all currently observable phenomena, different interpretations of quantum mechanics are indistinguishable. Therefore, I’ll present the point of view I think is easiest to understand. We’ve got to start with `classical’ physics, that is all of physics before quantum mechanics. This is the physics of dropping balls and crashing cars and predicting when trains will arrive. Our intuition about how this works should be pretty good, as it describes the macroscopic world in which we live.
The most succinct way I know to formulate classical physics is in terms of the ‘Least Action Principle’: Given a starting configuration of a system and an ending configuration, I want to know how the system proceeds from the starting configuration to the ending configuration. For example, if a baseball (from a really boring American sport) is in my hand now and through my office window 2 seconds later, what path did it take to get there? For each possible path, like going on a straight line between my hand and the window or taking a detour around the moon before hitting the window, I can assign a number, called the ‘action’. The path going around the moon has waaaaaaay more action than the straight line path. In fact, the path that the baseball takes is the one with the least action. End of story. For a given starting and ending point, it always takes this least action path.
Of course, I can also turn this around and specify the initial position of the baseball (my hand) and the direction and magnitude of its initial velocity (toward the window and super fast). Once I do that, the path that the baseball takes is fixed. Always. If I throw a million baseballs through a million office windows (yeah, it’s that kind of day at work), all starting from my hand and ending at the window 2 seconds later, each one will follow the same least action path.
This calls for an equation! There’s a general principle in physics that things behave smoothly, except when they don’t. When I’m assigning an action to each path, two paths which are almost identical will have almost identical actions. As I move toward the least action path, the action should decrease in a more-or-less smooth fashion, with the minimum at the least action path (duh). If I move past this least action (or ‘classical’) path, the action will increase again (also duh). So a cartoon plot of what’s going on might look like this:
Fig.1: As the path is smoothly varied toward the classical path, the action decreases. The tangent line at the classical path has zero slope
This plot is just a fancy way to say that the least action path is the one which has the minimum action. Because of that, the tangent to the curve at the least action path is flat, i.e. it has zero slope. We can use this fact to encapsulate all of classical physics in a single tiny equation:
$\delta S[x_{cl}(t) ] = 0$ (1)
This equation says that the path the system takes is the one for which the action has a flat tangent, i.e. the slope of the tangent of the action at the classical path is zero. This also means that as I change a tiny bit from the classical path the action does not change that much. The bottom of the well in the above plot is the only place where this is true as any other point on the curve does not have a flat tangent.
Finally! We can stop talking about boring old classical physics! The main point of the preceding discussion was that classical physics is deterministic, i.e. if I specify the start and end points, or the start point and the initial velocity, I can predict with certainty what the system will do at every moment of the future. When we are doing a classical physics homework problem (booo!) the `answer’ is x(t), the state of the system at all times, after specifying the start and end points, or the start point and initial velocity.
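To see the least action principle in, well, action, here is a small numerical sketch in Python (my own construction; the free-particle action, kinetic energy only, is an assumption chosen to keep things simple): chop the path into time steps, pin the endpoints, and push the interior points downhill in the action. The minimizer comes out as the straight, constant-velocity path.

```python
import numpy as np

# Discretized free-particle action: S = sum over steps of (m/2) * v^2 * dt.
# Endpoints are fixed; we minimize over the interior points by gradient descent.
m, T, N = 1.0, 1.0, 10          # mass, total time, number of time steps
dt = T / N
x0, xf = 0.0, 10.0              # start and end positions

x = np.zeros(N - 1)             # interior points, deliberately far from the answer
for _ in range(5000):
    full = np.concatenate(([x0], x, [xf]))
    # dS/dx_j = (m/dt) * (2*x_j - x_{j-1} - x_{j+1})
    grad = m * (2 * full[1:-1] - full[:-2] - full[2:]) / dt
    x -= 0.02 * grad

straight = np.linspace(x0, xf, N + 1)[1:-1]
print(np.allclose(x, straight, atol=1e-4))  # True: least action = straight line
```

Adding a potential energy term to the discretized action would bend the minimizer into the familiar curved trajectories (a thrown baseball's parabola, for instance).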
Quantum mechanics is way different. If I were doing a quantum mechanics homework problem (<cough> nerd!) the answer would instead be a set of numbers for each time, usually written
$\rho(x,t) = |\langle x | \psi(t) \rangle|^2$ (2)
Each number is the probability of finding the system in state x at time t. In our example, given the initial and final positions of the baseball (my hand and the shattered window), at each time I have no idea where the baseball is, I only have a probability for each point in space. In fact, at any given time, there is a non-zero (but very small) probability that the baseball may be on the moon! Of course I do know that the baseball has to be somewhere at each time so I can write
$\int \mathrm{d}x \; \rho(x,t) = 1$ (3)
which just says that the sum of all probabilities at a certain time has to be one, i.e. that I will always find the baseball somewhere in the universe. The thing on the right side of Eq. (2) is a bit strange. The angle brackets inside the absolute value sign are called a probability amplitude and represent a complex number. What the hell is a complex number, and why am I using it to describe something in the real world? A complex number is just a number with two parts, a real part and an imaginary part. A complex number can be specified two ways, either by giving the real and imaginary parts, or by giving the distance from zero and the angle with the real axis. See the figure. Do it.
Fig. 2: A complex number has real and imaginary parts. It can also be specified by its length (distance from 0) and the angle it makes with the Real axis.
Formally, I can write the two representations of the same complex number as
$x + \mathrm{i}y = r\mathrm{e}^{i\varphi}$ (4)
where x, y, r, and φ are given in the figure and i is the square root of -1 (crazy).
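Eq. (4) is easy to play with on a computer. Here is a quick Python sketch using the standard `cmath` module to hop between the two representations of the same complex number:

```python
import cmath

z = 1 + 1j                     # rectangular form: x + iy with x = y = 1
r, phi = cmath.polar(z)        # polar form: length r and angle phi with the Real axis
print(r)                       # 1.4142135623730951, i.e. sqrt(2)
print(phi)                     # 0.7853981633974483, i.e. pi/4 (45 degrees)

z_back = cmath.rect(r, phi)    # r * e^(i*phi) rebuilds the same number
print(abs(z - z_back) < 1e-12) # True: the two forms describe one number
```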
So, the problem of calculating the probability density ρ(x,t) has been reduced to calculating the complex probability amplitude. Then we just take the length of that complex number and square it to get the probability density. There are many ways to calculate this amplitude but I think the easiest one to understand was given by Richard Feynman (in his Ph.D. thesis!) which expresses the amplitude as a sum over all paths (or histories) that the system could possibly take:
$\langle x_f, t_f | x_i, t_i \rangle = \int\mathrm{D}x(t) \; \mathrm{e}^{i\frac{S[x]}{\hbar}}$ (5)
This is by far the most complicated equation we’ve encountered so far. The left side is the probability amplitude for finding the system in the specified initial and final states at the initial and final times, respectively. Remember, if we wanted the actual probability, we would have to take the length of this complex number and square it. The right side of Eq. 5 is a sum over all possible paths that start in the initial state and end in the final state. Each path gets weighted by a ‘phase factor’ in the sum. This is the exponential factor, which is just a complex number (look at Eq. 4) with length one. This equation also contains a new fundamental constant of Nature: Planck’s constant h (strictly speaking, what appears in Eq. 5 is the reduced constant ħ = h/2π).
It is the small numerical value of this constant (h = 6.62606957(29)×10^-34 J·s) that makes quantum mechanical effects so far removed from our everyday world. For comparison, the action (which has units of [energy]·[time]) of a 0.14 kg baseball traveling at 75 mph (roughly 33.5 m/s, way slower than during my teenage baseball years) for 0.1 seconds is about 7.9 J·s, some 34 orders of magnitude (factors of 10) bigger than h!
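Here is that back-of-the-envelope estimate spelled out in a few lines of Python (my own arithmetic, with the rough assumption that the action is about the kinetic energy times the flight time):

```python
import math

m = 0.14                # baseball mass in kg
v = 33.5                # speed in m/s (about 75 mph)
t = 0.1                 # flight time in s
h = 6.62606957e-34      # Planck's constant in J*s

kinetic_energy = 0.5 * m * v**2       # about 78.6 J
action = kinetic_energy * t           # rough action estimate, about 7.9 J*s
print(round(math.log10(action / h)))  # 34: orders of magnitude above h
```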
The smallness of this constant may also be used to predict Eq. 1, the defining equation of classical physics. When summing up all the phase factors corresponding to all the possible paths in Eq. (5), we have to add the arrows head-to-tail. If we have an arrow pointing along the Real axis (angle = 0 degrees) and we add to it an arrow pointing the opposite way (angle = 180 degrees), we end up with nothing. In our case, the angle corresponds to the ratio S/h, the action of a particular path divided by Planck’s constant. If we examine a random path in the sum, say the one that goes around the moon before hitting my office window, the action will change quite a lot (compared to h) if we vary the path a tiny bit. This is because this path does not have a flat tangent in Fig. 1. In fact, for every path that we choose in this way, there is a nearby path whose arrow points in exactly the opposite direction, and the effects of the two paths cancel out as we sum them.
The only path which does not have a nearby path to cancel it is the path for which the action does not change very much when we move away from it. This is precisely the condition that we gave earlier for the classical path!!! So for a system (like the baseball and window) where the action is much larger than Planck’s constant, the only path that significantly contributes to the sum in Eq. (5) is the classical path, and we can recover Eq. (1) which determines the condition for the classical path. Furthermore, for this situation the probability that the system does not follow the classical path is certainly not zero, but it is so small that we will never see it happen. Maybe not even once if I throw baseballs at my office window all day every day for the entire age of the universe. Perhaps I should write a research grant application for such an experiment!
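This cancellation is easy to watch numerically. Below is a toy Python sketch (my own construction, not a real path integral): each 'path' is labeled by a single number a with an assumed action S(a) = a², and we add the arrows exp(iS(a)/ħ) head-to-tail. As ħ shrinks, arrows away from the stationary point a = 0 cancel, and the magnitude of the total shrinks like √ħ, the standard stationary-phase (Fresnel integral) result.

```python
import numpy as np

# Toy 'sum over paths': one deformation parameter a, assumed action S(a) = a^2.
a = np.linspace(-10.0, 10.0, 200001)
da = a[1] - a[0]
for hbar in [1.0, 0.1, 0.01]:
    arrows = np.exp(1j * a**2 / hbar)   # one unit-length arrow per 'path'
    total = np.sum(arrows) * da         # add the arrows head-to-tail
    # stationary-phase prediction for the magnitude: sqrt(pi * hbar)
    print(hbar, abs(total), np.sqrt(np.pi * hbar))
```

Only the neighborhood of a = 0, where the action has a flat tangent, survives the summation; everything far away averages to almost nothing.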
Anyway, now that we’ve seen that classical mechanics is recovered as a limiting case of the Feynman path integral, we can ponder some of the implications. Of course the probabilities calculated from the path integral agree extremely well with observations, so for systems where the action is not so much larger than h we can observe effects of this sum over paths. Some famous manifestations of this are the double-slit and Stern-Gerlach experiments, but there are a ton of others. I suppose the Keanu-Reeves-Bill-and-Ted’s-Excellent-Adventure quasi-philosophical interpretation of what’s happening would be that when deciding the probability for a certain initial and final state, Nature somehow travels all possible paths (assigning each one an arrow) before summing them up to get the probability, causing the `wave particle duality’ that is so peculiar in quantum mechanics. However, this is simply one interpretation.
Of course, there are a lot of things I glossed over in this post, like the problem of measurement in quantum mechanics, some of the paradoxes, and different interpretations (here is an overview). Anyway, all I’ve done is provide a rule for calculating probabilities, which are all that can be measured. Back to throwing baseballs at office windows!!!
Posted in Uncategorized | 2 Comments »
## Hello World!
January 9, 2012 by johnbulava
Apologies for the title! I’m new to blogging but feel the need to share some of my thoughts on particle physics and other subjects on which I am (even) less qualified.
I’ve just started working at CERN as a Theory Fellow and have mixed feelings about the `message’ that makes it from experts to the general public. Particle physics is a wonderfully rich subject that does not lend itself to headlines and twitter feeds (watch out, old man alert!!). It is my intent to make this blog not about the bleeding edge state of research in the field (for this, simply google `God Particle’ and try not to vomit) but to write more pedagogical posts about topics which I think are central to understand the main ideas about a theory (The Standard Model, yes, with capital letters) that accomplishes no small feat: it can explain all interactions between elementary particles in our world.
I remember a discussion I had over a Tartiflette (cheesy potato awesomeness) with a senior CERN theorist last year about how too much of what people hear about particle physics is related to what we don’t know. Of course, this is what is most interesting to experts in the field, but in order to understand the implications of things like `The Higgs Boson’ and `Supersymmetry’, one first needs to understand the foundations. Also, I believe that some of what we do know about elementary particle physics is pretty damn elegant and deserves to be shared.
Another all-too-prevalent pet peeve of mine is `experts’ taking license to pontificate about topics outside their realm of expertise (more old man ranting). I’ll try to avoid that in this blog, or to be explicit about when I’m trying to clarify my own understanding on a subject, hopefully to incite discussion. So in short, if you want a very non-sensational, somewhat sardonic, pedagogically oriented look at particle physics (which is in what could be a very exciting time), then this blog is for you.
Most likely the next post will be somewhat of an introduction to quantum mechanics. After that who knows! Oh, and the title is a bit of a joke; despite living in France (CERN is on the border between France and Switzerland), I speak maybe 10 words of French. Basically my wife (also a particle physicist at CERN) does all the talking when we go out. I’m studying, so hopefully this will change!
Posted in Uncategorized | 3 Comments »
|
http://mathhelpforum.com/algebra/188262-finding-absolute-value-imaginary-number.html
|
# Thread:
1. ## Finding absolute value of imaginary number
How do you find the absolute value of 1 plus or minus i? The answer is 1.41.
2. ## Re: Finding absolute value of imaginary number
Originally Posted by benny92000
How do you find the absolute value of 1 plus or minus i. The answer is 1.41
$|a+bi|=\sqrt{a^2+b^2}$
so $|\pm a\pm bi|=\sqrt{[\pm a]^2+[\pm b]^2}=\sqrt{a^2+b^2}$
3. ## Re: Finding absolute value of imaginary number
Originally Posted by benny92000
How do you find the absolute value of 1 plus or minus i. The answer is 1.41
If you were to plot the point $\displaystyle 1 + i$ and draw the length from the origin to that point on an Argand diagram, you'll see that from the origin, you have travelled one unit right and one unit up. So really, a right-angle triangle has been created, with the two shorter sides = 1 unit in length. How would you find the length of the hypotenuse (in other words, $\displaystyle |1 + i|$)?
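As a quick numerical check, Python's built-in `abs` on a complex number computes exactly this hypotenuse:

```python
# |a + bi| = sqrt(a^2 + b^2); for 1 plus or minus i this is sqrt(2), about 1.41
print(abs(1 + 1j))   # 1.4142135623730951
print(abs(1 - 1j))   # same value: the sign of the imaginary part doesn't matter
```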
|
http://mathoverflow.net/questions/94001/semisimplicity-of-automorphic-galois-representations/94176
|
semisimplicity of automorphic Galois representations
Is it known that the Galois representation constructed by Harris and Taylor in their book is semisimple? I can't see this proven in the book, but on the other hand, everywhere else the representation is taken to be semisimple... Are they considering its semisimplification?
Sorry for the simple question.
Thanks
-
1 Answer
Do you mean the global Galois representations or the local Galois representations?
The global Galois representations they are constructing correspond to cuspidal automorphic representations of GL(n). They are expected to be always irreducible, though I'm not sure exactly when this is known. But it is known in the case that Harris and Taylor consider (when the automorphic representation is square integrable at a finite place), cf. Corollary 1.3 of the article "Compatibility of local and global Langlands correspondences" by Taylor and Yoshida.
The local Galois representations are not expected to be semi-simple in general. They are expected to be Frobenius semi-simple (i.e., the Frobenius elements are supposed to act semi-simply), but this is not known for $n\geq 3$. So, if you mean the local representations, then yes, very often people are just taking the Frobenius semi-simplifications of the representations that appear in the cohomology of Shimura varieties.
-
Thanks for your answer. I meant the global Galois representation. – unknown Apr 18 2012 at 21:21
|
http://math.stackexchange.com/questions/tagged/arithmetic?page=3&sort=votes&pagesize=15
|
# Tagged Questions
Questions on basic arithmetic, e.g. addition, subtraction, multiplication, division, powers, roots, etc.
2answers
98 views
### Stuck on simple proving
Prove that $\frac{1}{2}-\frac{1}{3}+\frac{1}{4}-\frac{1}{5}+\frac{1}{6}-\cdots-\frac{1}{2009}+\frac{1}{2010}<\frac{3}{8}$ Oh my, I feel embarrassed for not knowing how to solve such an elementary ...
4answers
185 views
### Divide inside a Radical
It has been so long since I have done division inside of radicals that I totally forget the "special rule" for doing it. -_- For example, say I wanted to divide the ...
2answers
206 views
### Primes in nonstandard models of PA
What is known about prime numbers in nonstandard models of PA? Restricted to true natural numbers the sets are identical, but does there always exist nonstandard primes? Can we explicitly define one ...
3answers
287 views
### How to compute these GCD's?
Please suggest me how to compute the GCD of theese really big numbers : GCD of $2^{120547564397}-1$ and $2^{356946681940}-1$ GCD of $2^n-1$ and $n!$ where $n=3^{19}$ Thanks to Bill Dubuque's ...
5answers
285 views
### Division with 4 digit number in denominator
I've got a question in my task sheet. The question is as follows. $$\frac{43\cdot93\cdot47\cdot97}{3007}=X$$ Find the exact value of $X$. I've tried a lot, but couldn't find easier way to do it ...
2answers
238 views
### What algorithm can I use to add negadecimal numbers?
I am trying to figure out how to add negadecimal numbers by hand. I can add normal decimal numbers using an algorithm I learned in kindergarten: start with the least significant digits, add them, ...
4answers
121 views
### How to factor 5671?
The other day I wanted to factor 5671 in my head. (It turns out to be $53\cdot107$, but I did not know this at the time.) I quickly ruled out the easy divisors, 2, 3, 5, 7, 11, and 13. At this point ...
5answers
170 views
### What is the square root of $i^4$?
What is the $\sqrt{i^4}$? $i^4$ = $(i^2)^2$ So is $\sqrt{i^4}$ = $\sqrt{(i^2)^2}$ = $i^2$ = $-1$? Or is $\sqrt{i^4}$ = $\sqrt{1}$ = $1$? When I plug it into my TI-89 Titanium, I get $1$. Edit: I ...
1answer
60 views
### Confused about exponents and imaginary/real answers
I am confused about some exponent behavior. $$(-2)^{7.6} = (-2)^{\frac{76}{10}} = ((-2)^{76})^{\frac{1}{10}} = ((-2)^{\frac{1}{10}})^{76}$$ Is there something wrong in this logic? When I plug the ...
1answer
2k views
### rules for rounding (positive and negative numbers)
I'm looking for clear mathematical rules on rounding a number to n decimal places. Everything seems perfectly clear for positive numbers. Here is for example what I found on math.about.com : Rule ...
2answers
47 views
### How do I append an integer to the left of another integer?
For example: . is my append operator f(x,y) = |x| . |y| f(1,45) = 145 f(233,10) = 23310 f(8,2) = 82 f(0,1) = 1 This is a trivially easy problem to ...
2answers
224 views
### Is there a log-space algorithm for divisibility?
Is there an algorithm to test divisibility in space $O(\log n)$, or even in space $O(\log(n)^k)$ for some $k$? Given a pair of integers $(a, b)$, the algorithm should return TRUE if $b$ is divisible ...
3answers
290 views
### A “fast” way to find the sum of the sequence $5,5.5,5.55,5.555,5.5555,\ldots$ (20 terms)
My initial approach is dividing the whole sum by $9$ and taking the common $5$ out which gives $$\frac{5}{9}[(10-1)+(10-0.1)+(10-0.01)+\cdots + (10-10^{-19})]$$ after some algebra this could be reduced ...
1answer
757 views
### Chinese Remainder theorem with non-pairwise coprime moduli
Let $n_1,...,n_k \in \mathbb{N}$ and let $a_1,...,a_k \in \mathbb{Z}$. How to prove the following version of the Chinese remainder theorem (see here): There exists a $x \in \mathbb{Z}$ satisfying ...
2answers
856 views
### How can I calculate non-integer exponents?
I can calculate the result of $x^y$ provided that $y \in\mathbb{N}, x \neq 0$ using a simple recursive function: f(x,y) = \begin {cases} 1 & y = 0 \\ (x)f(x, y-1) & y > 0 \end ...
2answers
3k views
### How to calculate the number of decimal digits for a binary number?
I was going to ask this on Stack Overflow, but finally decided this was more math than programming. I may still turn out to be wrong about that, but... Given a number represented in binary, it's ...
1answer
135 views
### Is every φ above the second level of the arithmetical hierarchy independent of PA?
If I am not wrong, every $\Sigma_n$ (or $\Pi_n$ ) statement $\phi$ is equivalent to a statement that says that a given Turing machine halts (or doesn't halt) on input $C$ using a ...
3answers
124 views
### What is the most mathematically sound way to define the “damage per second” for a weapon?
Consider a weapon firing shots every $f^{-1}$ seconds (i.e. $f$ is the weapon's fire rate). Each shot deals $n$ damage to its target. Consider another weapon firing every $3f^{-1}$ second, but dealing ...
2answers
310 views
### How many decimal places are needed for incremental average calculation?
If using the following formula to incrementally calculate average value: $$m_n = m_{n-1} + \frac{a_{n}-m_{n-1}}{n}$$ And the number of decimal places to where you can save $m_n$ is limited to $d$ ...
1answer
59 views
### An identity involving the powers of a nilpotent element in a unital commutative ring
Suppose $R$ is a commutative unital ring with identity $1$ such that the equation $nx = 1$ has a unique solution for each integer $n \ge 1$, and let $\xi$ be a nilpotent element of $R$ with nilpotency ...
1answer
77 views
### Direct proof of the non-zeroness of an Eisenstein series
Question: Can you show directly from its formula that $G_4(i)\neq0$? Recall that the holomorphic Eisenstein series of weight $2k$ is defined by: G_{2k}(\tau)= \sum_{(m,n)\in\mathbb{Z}^2\setminus ...
2answers
180 views
### How can I write an algorithm to perform the following calculation exactly? (references accepted)
Given natural numbers $N, K, m, C$, with $3^{m/3}K>C$, I want to be able to write an algorithm to exactly compute the number $$\left\lceil \log_3 \left(\frac{N}{3^{m/3}K-C}\right) \right\rceil$$ ...
4answers
486 views
### Long division notation (census of nations)
The Wikipedia article on long division explains the different notations. I still use the European notation I learned in elementary school in Colombia. I had difficulty adapting to the US/UK notation ...
3answers
375 views
### Can two sets have same AM, GM, HM?
Can two sets of numbers (same size) have the same arithmetic, geometric, and harmonic mean? When I say different set they must differ by at-least $1$ element and also what if set is not be of distinct ...
5answers
9k views
### How to convert a hexadecimal number to an octal number?
How can I convert a hexadecimal number, for example 0x1A03 to its octal value? I know that one way is to convert it to decimal and then convert it to octal ...
8answers
2k views
### Need faster division technique for 4 digit numbers
I have to divide 2860 by 3186. The question gives only 2 minutes and that division is only half part of question. Now I can't possibly make that division in or less than 2 minutes by applying ...
4answers
333 views
### A prize of $27,000 is to be divided among three people in the ratio 3:5:7. What is the largest share?
This is not homework; I was just reviewing some old math flash cards and I came across this one I couldn't solve. I'm not interested in the solution so much as the reasoning. Thanks
3answers
162 views
### sum of ten squares
You are given an unlimited supply of $1\times 1,2\times 2,3\times 3,4\times 4,5\times 5,6\times 6$ squares.Find a set of ten squares whose areas add up to $48$.If not the whole solution,even a little ...
5answers
236 views
### Proving that $30 \mid ab(a^2+b^2)(a^2-b^2)$
How can I prove that $30 \mid ab(a^2+b^2)(a^2-b^2)$ without using $a,b$ congruent modulo $5$ and then $a,b$ congruent modulo $6$ (for example) to show respectively that $5 \mid ab(a^2+b^2)(a^2-b^2)$ ...
7answers
150 views
### Rational numbers $\mathbb Q$
$$\Bbb{Q} = \left\{\frac ab \mid \text{$a$ and $b$ are integers and $b \ne 0$} \right\}$$ In other words, a rational number is a number that can be written as one integer over another. ...
5answers
250 views
### What is the remainder of $(14^{2010}+1) \div 6$?
What is the remainder of $(14^{2010}+1) \div 6$? Someone showed me a way to do this by finding a pattern, i.e.: $14^1\div6$ has remainder 2 $14^2\div6$ has remainder 4 $14^3\div6$ has remainder 2 ...
3answers
247 views
### Fractions with radicals in the denominator
I'm working my way through the videos on the Khan Academy, and have a hit a road block. I can't understand why the following is true: $$\frac{6}{\quad\frac{6\sqrt{85}}{85}\quad} = \sqrt{85}$$
5answers
298 views
### Do addition and multiplication have arity?
Many books classify the standard four arithmetical functions of addition, subtraction, multiplication, and division as binary (in terms of arity). But, "sigma" and "product" notation often writes ...
5answers
1k views
### cubic root of negative numbers
excuse my lack of knowledge and expertise in math;) but to me it would come naturally that the cubic root of -8 would be -2 since -2 ^ 3 = -8. but when I check wolfram alpha for cbrt(-8), real it ...
3answers
219 views
### $1 +1$ is $0$ ? [duplicate]
Possible Duplicate: -1 is not 1, so where is the mistake? $i^2$ why is it $-1$ when you can show it is $1$? So: \begin{align} 1+1 &= 1 + \sqrt{1} \\ &= 1 + \sqrt{1 ...
4answers
163 views
### How to think about multiplication with a number $0 <x < 1$?
To clarify my confusion: $88 \cdot 0.732$ is the same as $0.732 \cdot 88$ The latter being: 0.732 eighty-eight times: $0.732 + 0.732 + 0.732 + \cdots + 0.732$ But how to think about \$88 \cdot ...
2answers
189 views
### How to generate two numbers such that the smaller divides the larger
I am creating a children's math game and need an algorithm (that I can write in JavaScript) to generate two numbers such that the smaller always divides the larger. How can I do that?
3answers
74 views
### Repeated nested roots
Quite some years ago, I remember being asked the following question: Suppose $\alpha = \sqrt{2+\sqrt{2+\sqrt{2+\ldots}}}$, what is $\alpha$. The solution was given by squaring $\alpha$ and solving ...
2answers
136 views
### Why $x^{(1/2)2} \neq x^{2(1/2)}$?
I know, probably is a newb. question, but i can't get this $x^{(1/2)2} \neq x^{2(1/2)}$ $x\in\mathbb R^+$. I know $x^{(1/2)2}=(\pm \sqrt{x})^2=+x$ and $x^{2(1/2)}=\pm x$ because ...
5answers
146 views
### Find the possible value from the following.
Find the possible value from the following. I'm not able to end up on a concrete note, as I'm unable to get the essence of question, still not clear to me. $x$, $y$, $z$ are distinct reals such ...
3answers
178 views
### Cancel before multiplying!!
$$\binom{12}6 = \frac{12\cdot11\cdot10\cdot9\cdot8\cdot7}{6\cdot5\cdot4\cdot3\cdot2\cdot1} = 924.$$ Sometimes it's hard to talk students out of computing both the numerator and the denominator in ...
4answers
89 views
### Is it true that $\sum_{k=1}^n(p_k\prod_{i=1}^k(1-p_i)) \stackrel{\mbox{?}}{=} 1 - \prod_{i=1}^n(1-p_i)$
Prove that $$p_1 + \sum_{k=2}^n \left(p_k\prod_{i=1}^{k-1}(1-p_i)\right) = 1 - \prod_{k=1}^n(1-p_k)\ .$$ I'm working in a code where I have to do those computations. I want to see if this ...
2answers
110 views
### If a sum is one, the sum of all products are also one.
Let $p_1,\ldots,p_s$ be $s$ number in the unit interval such that $$p_1+\ldots+p_s=1.$$ Is it then true, that for every $n\geq 1$ we have \sum_{(k_1,\ldots,k_n)\in \{1,\ldots,s \}^n} p_{k_1}\cdot ...
2answers
131 views
### Solving simple congruences by hand
When I am faced with a simple linear congruence such as $$9x \equiv 7 \pmod{13}$$ and I am working without any calculating aid handy, I tend to do something like the following: "Notice" that adding ...
1answer
105 views
### Least value for addition
We know that $$0\leq a \leq b \leq c\leq d\leq e\,\,\text{ and}\,\, a + b + c + d + e = 100$$. What would be the least possible value of $\,\,a + c + e\,\,$ ? I apologize for poor syntax.
2answers
627 views
### Converting decimal(base 10) numbers to binary by repeatedly dividing by 2
A friend of mine had a homework assignment where he needed to convert decimal(base 10) numbers to binary. I helped him out and explained one of the ways I was taught to do this. The way I showed him ...
2answers
83 views
### integer to float
How can represent any number from 0 to 127 as something between 0 and 1 ? for example 64's equivalent would be 0.5
2answers
113 views
### How to compute the following formulas?
$\sqrt{2+\sqrt{2+\sqrt{2+\dots}}}$ $\dots\sqrt{2+\sqrt{2+\sqrt{2}}}$ Why they are different?
1answer
270 views
### Non Higher-Order Formulation of Gödels Incompletness Theorem
I was just having a look at Gödels incompletness theorem as found in: http://www.research.ibm.com/people/h/hirzel/papers/canon00-goedel.pdf I noticed that Gödel used a higher order logic. At least ...
2answers
357 views
### Finding four numbers
Find four weights such that, given those four weights and a weighing pan (balance scale), you can measure all weights between $1$ and $80$. I found this one here. Any idea how to solve it?
http://math.stackexchange.com/questions/44099/quotient-of-free-module/44108
# Quotient of free module
Let $R$ be a commutative ring with $1$ and let $J$ be a proper ideal of $R$ such that $R/J \cong R^n$ as $R$-modules where $n$ is some natural number. Does this imply that $J$ is the trivial ideal?
Basically I am trying to prove/disprove that if $J$ is a proper ideal of $R$ and $R/J$ is free then $J=0$ and above is my work.
-
Just a comment: it was sort of a shame that Zev deleted his answer: it was not correct, but it was not correct in a very instructive way (and coming off of teaching a commutative algebra class, let me say that plenty of graduate students want to answer the way he did). Let me encourage him to parlay his answer into a cautionary tale, if he so chooses... – Pete L. Clark Jun 8 '11 at 14:56
@Pete: You're right, it should be an educational example. I've made it CW to avoid getting any points though. I also clearly need to get some sleep... – Zev Chonoles♦ Jun 8 '11 at 15:18
The essential problem is one of notation: we allow ourselves to talk about isomorphisms without specifying the category (usually this would be a waste of time but sometimes, as here, it is essential). – Qiaochu Yuan Jun 8 '11 at 18:37
## 4 Answers
Correct me if I'm wrong, but isn't it obvious that if $J \neq 0$, then $R/J$ can't be a free $R$-module, because anything in $J$ acts by $0$ on $R/J$? Therefore an equation like $j\cdot\bar{r} = 0$ holds with $j \neq 0$ and $\bar{r} = 1 + J \neq 0$ (nonzero since $J$ is proper), which can't happen if $R/J$ were free.
-
That is precisely what Pete wrote, no? – Mariano Suárez-Alvarez♦ Jun 8 '11 at 15:01
Well, the words are different. :) Also what I wrote was (intentionally) slightly oblique, and this explains things more plainly. – Pete L. Clark Jun 8 '11 at 15:05
Sorry, I think I was writing up my answer before yours was posted... – qwert Jun 8 '11 at 15:07
You have my sympathy, qwert! It has happened to me quite a few times, because I am rather slow and do a lot of checking before posting. I know too well the feeling of slight embarrassment of discovering one second after having posted that the key idea is already there in another answer! – Georges Elencwajg Jun 8 '11 at 21:00
Yes. A nice way to see this is via the annihilator $\operatorname{ann}(M)$ of a module $M$: it is the set of all $x \in R$ such that $xm = 0$ for all $m \in M$. One shows immediately that $\operatorname{ann}(M)$ is an ideal of $R$ and that isomorphic modules have equal annihilators.
If you take annihilators of both sides of your isomorphism $R/J \cong R^n$, you'll get the desired conclusion. I could say more, but I'll leave it up to you for now because this is a very important and enlightening exercise.
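Here is a toy numeric illustration of the annihilator argument (my own sketch, not part of the answer), taking $R=\mathbb{Z}$ and $J=6\mathbb{Z}$:

```python
# Toy case R = Z, J = 6Z (my example): every j in J kills every class of
# R/J = Z/6Z, so ann(Z/6Z) contains 6Z, while ann(Z^n) = 0 for n >= 1.
# Hence Z/6Z cannot be isomorphic to any Z^n as a Z-module.
def annihilates(j, modulus):
    """Does the scalar j send every class of Z/modulus to zero?"""
    return all((j * m) % modulus == 0 for m in range(modulus))

assert all(annihilates(j, 6) for j in (6, 12, -18))  # elements of J = 6Z
assert not annihilates(1, 6)                         # 1 is not in ann(Z/6Z)
```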
-
You have an epi $f:R\to R^n$. Tensoring it with $k=R/\mathfrak m$ for some maximal ideal $\mathfrak m\subset R$, we get an epi $k\to k^n$, so $n$ must be equal to $1$. Now the short exact sequence $$0\to J\to R\xrightarrow{\;f\;} R\to 0$$ must split, because the rightmost $R$ is projective.
Can you see how to finish this?
-
Here is my original answer:
No; let $R=k[x_1,x_2,\ldots]$ for any field $k$ (a polynomial ring in infinitely many variables). Do you see a non-trivial ideal $J\subset R$ such that $R/J\cong R$? (There are a lot).
What I was aiming at was that, for any infinite set $S\subseteq\mathbb{N}$, choosing $J=(\{x_i\mid i\notin S\})$ gives a ring isomorphism $R/J\cong R$. The problem with my answer was that this is not the same as an isomorphism of $R$-modules. The definition of $R$-module is an abelian group that is acted on by $R$ (by scalar multiplication); the fact that $R/J\cong R$ as rings includes the fact that they are isomorphic as abelian groups (under addition), which is part of what is necessary for an isomorphism of $R$-modules, but as all the other (correct) answers point out, the essential problem lies in the scalar multiplication aspect: $J$ annihilates $R/J$ (i.e., scaling $R/J$ by any element of $J$ gives the zero map), while $R^n$ has trivial annihilator, so $J$ must be trivial.
-
http://quant.stackexchange.com/questions/1802/what-tradeoff-is-there-to-using-an-accurate-estimate-with-a-large-confidence-int/1803
# What tradeoff is there to using an accurate estimate with a large confidence interval?
I am working on calibrating a Heston model from simulated historical stock data.
After obtaining an accurate estimate of the model parameters I found very large 95% confidence intervals for these estimations if the sample size is about 10-15 years.
In view of the graph below, how would you choose the ideal sample size?
A 5-10 year period seems too short, since a large confidence interval means large uncertainty about the estimate. On the other hand, it seems pointless to insist on a sample size for which the confidence interval is small (50 years), since shorter periods already provide good enough estimates.
I am a little confused as to how to interpret these results.
-
## 1 Answer
Unless it is due to random chance, there seems to be a bias in your estimation method for $\kappa$, and this bias appears to depend on the size of the sample. This may be revealing a deeper underlying problem with your technique that will ultimately make it clearer what the tradeoff is between accuracy and sample size. I do not believe it should be the case that a shorter sample-size yields a more accurate estimate in a simulation where you can be certain the data generating process has not changed.
In practice, though, you will want to use as long a sample as possible over which you can be reasonably sure the underlying DGP has not changed. I would also suggest trying to obtain higher frequency data. Unless your time scale here is arbitrary, it will be difficult to justify an estimate based on 20+ actual years of stock market data.
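As a side note (my own sketch, using plain i.i.d. sampling rather than a Heston calibration): the usual $1/\sqrt{n}$ shrinkage of a 95% confidence interval already shows the diminishing returns visible in such graphs.

```python
import math
import random
import statistics

def ci_half_width(n, n_trials=2000, sigma=2.0, seed=0):
    """Average 95% CI half-width for the mean of n i.i.d. N(1, sigma^2) draws."""
    rng = random.Random(seed)
    total_sd = 0.0
    for _ in range(n_trials):
        xs = [rng.gauss(1.0, sigma) for _ in range(n)]
        total_sd += statistics.stdev(xs)
    return 1.96 * (total_sd / n_trials) / math.sqrt(n)

# Quintupling the sample (10 -> 50 "years") narrows the interval only by
# about sqrt(5) ~ 2.2x, so past some point longer samples buy little.
print(ci_half_width(10), ci_half_width(50))
```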
-
Thank you very much sheegaon for your helpful answer. Actually, there is no error on the above graph, the confidence bounds are built with the aid of a CLT for weakly dependent processess so the convergence is very very slow – Beer4All Sep 8 '11 at 7:03
The suspected error is in the calibrated parameter (red line), not the CI. – Tal Fishman Sep 8 '11 at 12:03
http://cs.stackexchange.com/questions/9808/prove-that-0n-1n-cdot-m-n-m-in-mathbbn-is-not-context-free
Prove that $\{0^n 1^{n\cdot m} : n,m \in \mathbb{N}\}$ is not context-free
This is a homework problem I have spent several hours on. A "hint" is given that we may use this fact: If $n,j,k \in \mathbb{N}$ satisfy $n \geq 2$ and $1 \leq j+k \leq n$, then $n^2+j$ does not evenly divide $n^3+k$.
I cannot find any way to apply this fact. It leads me to believe I should use the string $0^{p^2}1^{p^3}$ or something like that, but I am really just not sure. The pumping lemma has given me trouble ever since the non-regular-language version.
Even small hints greatly appreciated at this point.
-
## 1 Answer
If you use the pumping lemma on the word $w=0^{p^2} 1^{p^3}$, consider the partition $w=xyzuv$, where $|yzu|\le p$ and $|yu|>0$ ($p$ being the pumping-lemma constant). It is easy to check that, among all the cases (that is, all the possibilities for $y$ and $u$), the only non-trivial one is $y=0^i$ and $u=1^j$, in which case the hint you mentioned finishes the job.
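The hint itself can be sanity-checked by brute force (a sketch I'm adding; here $n$ plays the role of the pumping constant):

```python
# Brute-force check: for n >= 2 and j, k >= 0 with 1 <= j + k <= n,
# n^2 + j never evenly divides n^3 + k (exhaustive over small n only).
def hint_holds(max_n):
    for n in range(2, max_n + 1):
        for j in range(0, n + 1):
            for k in range(0, n - j + 1):
                if j + k >= 1 and (n**3 + k) % (n**2 + j) == 0:
                    return False
    return True

assert hint_holds(60)
```

Note that the constraint $1 \le j+k \le n$ is essential: without it, $n^2$ does divide $n^3$.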
-
http://math.stackexchange.com/questions/172461/any-x-in-on-orthogonal-matrices-with-positive-determinant-is-the-product
# Any $X\in O^+(n)$ (orthogonal matrices with positive determinant) is the product of an even number of reflections?
Any $X\in O^+(n)$ (orthogonal matrices with positive determinant) is the product of an even number of reflections?
-
What do you know about $O(n)$ ? – Belgi Jul 18 '12 at 16:06
$O(n)$ is compact, determinants are $\{+1,-1\}$, column vectors are orthonormal. – Taxi Driver Jul 18 '12 at 16:08
– Simon Markett Jul 18 '12 at 16:11
Hint: what is the determinant of (a matrix representing) a reflection? – Geoff Robinson Jul 18 '12 at 16:39
## 2 Answers
I'm imagining you may be interested in the case $K = \mathbb{R}$, $q = x_1^2 + \ldots + x_n^2$, but the result holds more generally:
Theorem (Cartan-Dieudonne): Let $q = q(x_1,\ldots,x_n)$ be a nondegenerate quadratic form over a field $K$ of characteristic different from $2$. Then every element of the orthogonal group $O(q)$ of $q$ is a product of at most $n$ reflections.
For a proof see e.g. $\S 8.4$ in these notes.
Since the determinant of a reflection is $-1$, an element of $O(q)$ has determinant $+1/-1$ according to whether it can be written as a product of an even/odd number of reflections.
-
Since the OP asked this as a follow up to one of my answers, I guess I should answer this one.
You need to know some basic facts about orthogonal matrices:
• that $X\in O(n)$ if and only if the columns of $X$ form an orthonormal system which is again equivalent to the fact that the rows of $X$ are an orthonormal base.
• that for any two unit vectors $v, w$ in $\mathbb{R}^n$ there exists a reflection $R$ such that $Rv=w$.
• the product of orthogonal matrices is orthogonal.
• the inverse of an orthogonal matrix is its transpose.
If you know that, then you will easily see that if $X =(r_1, \ldots, r_n)$ with an orthonormal base $\{r_i\}$, there exists a reflection $R_1$ such that $R_1 X=(e_1, r_2', \cdots , r_n')$, where $e_1=(1,0,\ldots,0)^T$. $R_1 X$ is again orthogonal (third statement above). So the first row of this matrix is $(1,0,\ldots,0)$, that is, $R_1X$ is an orthogonal matrix with a $1$ in the upper left corner and zeroes in the other entries of the first row and column. Now a simple induction shows that there exist (at most) $n$ reflections which transform $X$ into the identity matrix, i.e. $$R_n\cdots R_1 X = Id$$
Since the inverse of an orthogonal matrix is simply the transpose you get $$X = (R_n\cdots R_1)^T$$
The four basic properties I mentioned should be easy to find in any basic textbook.
(The fact that the number of reflections is even follows by taking determinants).
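To make the procedure concrete, here is a small plain-Python sketch (mine, not the answerer's) of the $2\times2$ rotation case, building the two reflections column by column:

```python
import math

def reflect(v, n_vec):
    # Householder reflection of v through the hyperplane with unit normal n_vec
    d = sum(a * b for a, b in zip(v, n_vec))
    return [a - 2 * d * b for a, b in zip(v, n_vec)]

def bisector_normal(v, w):
    # Unit normal N/|N| with N = v - w; reflecting v through it gives w (v != w)
    n_vec = [a - b for a, b in zip(v, w)]
    norm = math.sqrt(sum(a * a for a in n_vec))
    return [a / norm for a in n_vec]

# X = rotation by theta, an element of O^+(2), given by its columns c1, c2
theta = 0.7
c1 = [math.cos(theta), math.sin(theta)]
c2 = [-math.sin(theta), math.cos(theta)]

n1 = bisector_normal(c1, [1.0, 0.0])   # R1 sends column 1 to e1
c2r = reflect(c2, n1)                  # orthogonality and det force R1*c2 = -e2
n2 = bisector_normal(c2r, [0.0, 1.0])  # R2 sends -e2 to e2 and fixes e1

# Since reflections are involutions, X = R1 R2: check column by column.
x1 = reflect(reflect([1.0, 0.0], n2), n1)
x2 = reflect(reflect([0.0, 1.0], n2), n1)
assert all(abs(a - b) < 1e-12 for a, b in zip(x1, c1))
assert all(abs(a - b) < 1e-12 for a, b in zip(x2, c2))
```

Two reflections, an even number, just as the determinant argument predicts.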
-
Could you tell me how to prove basic fact number two? And which fact did you use to write $R_1X=(e_1,r_2',\dots,r_n')$? – Taxi Driver Jul 18 '12 at 18:15
@Patience the idea is to use the reflection through the hyperplane which bisects the angle between $v$ and $w$. That is, the reflection through the plane normal to $N=v-w$, which is given explicitly by $$r\mapsto r-2\langle r, \frac{N}{|N|}\rangle\frac{ N}{|N|}$$ (If $w=v$, take any hyperplane $E$ containing them and reflect through $E$). – user20266 Jul 18 '12 at 18:50
what is $r$? ?? – Taxi Driver Jul 18 '12 at 19:04
and I hope you are going to answer my two doubt :( – Taxi Driver Jul 18 '12 at 19:06
@Patience $r$ is the free variable, a vector in Euclidean space. The reflection is given by $$Rr = r-2\langle r, n\rangle n$$ with $n=N/|N|$. – user20266 Jul 19 '12 at 5:37
http://nrich.maths.org/7223/note
### Whole Number Dynamics I
The first of five articles concentrating on whole number dynamics, ideas of general dynamical systems are introduced and seen in concrete cases.
### Whole Number Dynamics II
This article extends the discussions in "Whole number dynamics I". Continuing the proof that, for all starting points, the Happy Number sequence goes into a loop or homes in on a fixed point.
### Whole Number Dynamics III
In this third of five articles we prove that whatever whole number we start with for the Happy Number sequence we will always end up with some set of numbers being repeated over and over again.
# Difference Dynamics
##### Stage: 4 and 5 Challenge Level:
This is an engaging investigation that quickly yields results and leads to conjectures and a lot of mathematical thinking and discussion. Young learners, even at the primary stage, can understand and carry out the iterative process and see the cyclical patterns that emerge. It is not difficult to make and test conjectures. Also it is easy to understand that, after the first sequence, all the sequences have only positive terms and so, by taking differences, the terms in the sequences can't increase. This means that, from any chosen starting sequence, there are a strictly limited number of possible sequences that can follow, and sooner or later a sequence must repeat itself starting a cycle. Thus it is easy to prove that all the sequences of sequences end in cycles or a sequence of zeros. Not only can very young learners create the sequences of sequences, notice the cycles and make conjectures, but also this proof is very accessible and there are more questions to explore making this a low threshold high ceiling investigation.
This is a simple example of a dynamical system and it can lead to discussion of how dynamical systems are used to model population dynamics and other natural phenomena.
### Possible approach
If different groups in the class choose their own sequences from which to start, it won't be long before they notice that they are all getting the same sort of patterns. It is easier to see how the process works by starting with sequences of length three rather than length two. When everybody finds that before very long they have produced a cycle, and nobody can find a sequence that goes on indefinitely, then perhaps suggest that they try sequences of length four and see if the same thing happens. They will soon find that with length four the iteration always seems to stop with the zero sequence.
From that point the teacher can either encourage the learners to discuss why the iteration always seems to stop with a zero sequence or cycle and, depending on the class and time available, reach a well argued proof.
Alternatively the class can try starting with sequences of different lengths 2, 3, 4, 5, and 6 say, and try to discover if the lengths of the sequences determine whether the sequences go to zero or end in a cycle.
### Key questions
Look at your chain of sequences, have you seen that sequence in the chain before? What will happen next?
Is it worth continuing this chain or do you already know how the chain continues?
Can you describe what is happening to your chain of sequences? (Encourage language like "it loops back on itself", don't introduce the term 'cycle' too early rather, if possible, let the term emerge in discussion).
Can the terms that occur in the sequences get bigger?
You said the terms can't get bigger so what is the biggest value any term can take in your chain? Then how many different values can the terms take? So how many different sequences is it possible to have in your chain? Can the chain go on for ever without any sequence being repeated?
Would the same thing be true for any chain?
Does the same thing happen when you start with sequences of different length?
If you get to the zero sequence what can you say about the sequence in the chain just before it?
### Possible extension
Prove that when sequence $\mathbf{a}$ maps to the next sequence in the chain $\mathbf{b}$ then $\mathbf{a}$ is a constant sequence if and only if $\mathbf{b}$ is the zero sequence.
Read the article Difference Dynamics Discussion.
### Possible support
Suggest the learners start with sequences of small terms which will converge very quickly.
For example: $(1, 4, 3), (3, 1, 2), (2, 1, 1), (1,0,1), (1,1,0), (0,1,1), (1,0,1)...$
and $(1,5,3,7), (4,2,4,6), (2,2,2,2), (0,0,0,0)....$
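The iteration (absolute differences of cyclically adjacent terms) and the cycle detection are easy to code; here is a sketch (my addition) against which learners' conjectures can be tested:

```python
def step(seq):
    """One move: absolute differences of cyclically adjacent terms."""
    return tuple(abs(seq[i] - seq[(i + 1) % len(seq)]) for i in range(len(seq)))

def chain(seq, max_steps=10_000):
    """Iterate until some sequence repeats; return the chain and the repeat."""
    seen = []
    while seq not in seen and len(seen) < max_steps:
        seen.append(seq)
        seq = step(seq)
    return seen, seq  # seq is the first repeated sequence (start of the cycle)

print(chain((1, 4, 3)))     # length 3: falls into the cycle (1,0,1),(1,1,0),(0,1,1)
print(chain((1, 5, 3, 7)))  # length 4: reaches the zero sequence
```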
The NRICH Project aims to enrich the mathematical experiences of all learners. To support this aim, members of the NRICH team work in a wide range of capacities, including providing professional development for teachers wishing to embed rich mathematical tasks into everyday classroom practice. More information on many of our other activities can be found here.
http://math.stackexchange.com/questions/12469/determine-absolute-conditions-of-mappings-provide-examples-of-ill-well-conditio
# Determine absolute conditions of mappings, provide examples of ill/well conditioned functions
Question: (a) Determine each of the absolute conditions of the linear map $f: \mathbb{R} \rightarrow \mathbb{R}:$
1. $f(x) = |x|$
2. $f(x) = \frac{\pi}{2}$
(b) Provide both an example of a well conditioned and an ill conditioned function evaluation (both the function $f$ and the argument $x$). Do not use any of the examples from (a).
My attempt: although this seems really simple, I'm unfortunately having trouble with exactly what to do. The given definition of the absolute condition $\kappa_{abs}$ is the smallest number for which: $|f(x_0) - f(x)| \leq \kappa_{abs}|x_0 - x| + o(|x_0 - x|)$. However, in the one example I have it seems that the last term with $o$ is ignored (I assume it is too small...). I was thinking that perhaps for (a) 2. I could write something like: $|f(x_0) - f(x)| = |\frac{\tilde{\pi}}{2} - \frac{\pi}{2}|= \frac{1}{2}|\tilde{\pi}-\pi| \Rightarrow \kappa_{abs} = \frac{1}{2}$ ? With (a) 1. I don't know what sort of answer would be expected...
For (b) I am guessing it must be measured in terms of the relative condition, right? Since it appears that the rule is that a condition (not specified which) greater than 1 is ill conditioned. I know that subtraction of nearly equal numbers does not do well. I thought a way to guarantee that the function always subtracts two such numbers would be to let $f(x) = x - \frac{99x}{100}$; using the formula for the relative condition of subtraction ($\kappa = \frac{|x|+|y|}{|x-y|}$) I would have $$\frac{|x| + |\frac{99x}{100}|}{|x - \frac{99x}{100}|} = 199 > 1$$ $\Rightarrow$ this function is ill conditioned? For the well conditioned function I have the theorem: for $x,y > 0$ $$\frac{|(x+y) - (\tilde{x} + \tilde{y})|}{|x+y|} \leq 1\cdot\epsilon \Rightarrow \kappa = 1$$ but I don't understand how to use that here, since it seems self-explanatory that a positive number divided by a larger positive number is less than 1. Can I simply substitute any positive number for $y$ and let my function be $f(x) = x^{2} +10$ ?
As if it weren't obvious I am very lost with this and any help, tips, advice, etc is as usual greatly appreciated!
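For what it's worth, the subtraction formula $\kappa = \frac{|x|+|y|}{|x-y|}$ quoted above is easy to check numerically (my own sketch):

```python
# Sketch of the relative condition number of computing x - y.
def kappa_subtraction(x, y):
    return (abs(x) + abs(y)) / abs(x - y)

# f(x) = x - 99x/100, i.e. y = 0.99*x: kappa = 199 for every x != 0,
# so this evaluation is ill conditioned (catastrophic cancellation).
print(kappa_subtraction(1.0, 0.99))

# Subtracting numbers of opposite sign (adding magnitudes) is benign:
print(kappa_subtraction(1.0, -1.0))  # kappa = 1
```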
-
$\tilde{\pi}$?!? It's the same $\pi$ in both $f(x_0)$ and $f(x)$, so $f(x_0)-f(x)=0$, thus $\kappa_{\mathrm{abs}}=0$. For $f(x)=|x|$, think geometrically: what's the maximum slope of the graph? – Hans Lundmark Nov 30 '10 at 11:40
@Hans: haha sorry about that with $\pi$. I just thought that maybe it had something to do with machines not being able to store/represent $\pi$ to the same amount of places and the $\tilde{\pi}$ would have been a slightly different approximation. With $|x|$ wouldn't the slope be $=1$ ? so $\kappa_{\mathrm{abs}} = 1$ ? – ghshtalt Nov 30 '10 at 11:53
Are you in mathematics (where the numbers are exact) or in computer science (where you have to worry about the errors in floating point representation)? If in math, your last comment that the condition is 1 is correct. And what would be |f(x)-f(x0)| if f is a constant? – Ross Millikan Nov 30 '10 at 14:06
@Ross: I am in 'computer-oriented mathematics', and it seems that a lot of the focus so far is on error (I couldn't tell you whether or not that is exclusively 'errors in floating point representation' since I don't know exactly what that means). But I thought that the whole point of condition was to measure the effect of a mapping on inaccuracy(error), right? Does this still occur in math? If $f$ were constant $|f(x)-f(x_o)| = 0$ as Hans pointed out, correct? – ghshtalt Nov 30 '10 at 14:44
@user3711: Yes, Hans is correct. But in a computer you cannot represent real exactly, only with some error. For your second function, you would expect pi/2 to always be represented the same way (unless you calculate it in different ways) so f(x)-f(x0) will be truly zero. But in the first, if you try to define |represented f(x)- represented f(x0)|/|real x - real x0| you could get a surprise as x gets close to x0. – Ross Millikan Nov 30 '10 at 15:11
http://math.stackexchange.com/questions/270179/basic-stochastic-integral
# Basic stochastic integral
I am new to this stuff. Can someone explain how I could compute the stochastic integral of the form $\int_0^t W_s\,ds$, where $W_t$ is a Brownian motion?
Thanks!
-
I wouldn't expect such integral admits a simpler form. – Ilya Jan 4 at 8:49
That is not a stochastic integral. It is a standard Lebesgue integral. – Learner Jan 4 at 8:49
@Learner: this can even be considered a Riemann integral, as $W_s(\omega)$ is continuous on $[0,t]$ for any $\omega$. However, the term "stochastic integral" here may refer to methods that can be applied to find its value. E.g. one has $$\int\limits_0^t W_s\mathrm ds = tW_t - \int\limits_0^t s\mathrm dW_s$$ which we can't have for any Lebesgue/Riemann integral. Neither is this latter form simpler, though. – Ilya Jan 4 at 9:03
@Ilya I do not disagree with any of your points. Still, the title is misleading. – Learner Jan 4 at 9:08
## 2 Answers
What it means to compute the integral is unclear in this context, but one can say this: for every $t\geqslant0$, the random variable $$X_t=\int_0^tW_s\mathrm ds$$ is centered normal with variance $\sigma_t^2$ where $$\sigma_t^2=\mathbb E(X_t^2)=2\int_0^t\int_0^s\mathbb E(W_sW_u)\mathrm du\mathrm ds=\int_0^t\int_0^s2u\mathrm du\mathrm ds=\frac{t^3}3.$$ The process $(X_t)_{t\geqslant0}$ is called integrated Brownian motion and is the subject of some active research, for a sample see this paper and the list of references therein.
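A quick Monte Carlo sanity check of the $t^3/3$ variance (my sketch, a left-endpoint Riemann sum over simulated Brownian paths):

```python
import math
import random

def integrated_bm(t, n_steps, rng):
    """Left-endpoint Riemann-sum approximation of the integral of W_s over [0, t]."""
    dt = t / n_steps
    w = 0.0
    integral = 0.0
    for _ in range(n_steps):
        integral += w * dt
        w += rng.gauss(0.0, math.sqrt(dt))
    return integral

rng = random.Random(1)
t, n_paths = 2.0, 4000
samples = [integrated_bm(t, 200, rng) for _ in range(n_paths)]
mean = sum(samples) / n_paths
var = sum((x - mean) ** 2 for x in samples) / n_paths
print(mean, var, t**3 / 3)  # sample mean near 0, sample variance near 8/3
```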
-
Thanks. How do you prove that $X_t$ has normal distribution? – John Peter Jan 8 at 10:30
Linear combinations of gaussian families are normal and (Riemann-Stieltjes) integrals are pointwise (hence in distribution) limits of linear combinations. – Did Jan 8 at 10:36
As Learner pointed out, the integral $\omega \mapsto \int_{0}^t W_s(\omega) \, ds$ is not a stochastic integral, it's a pathwise Lebesgue integration.
But anyway: If we would like to obtain another expression for this integral, we can apply Itô's formula:
$$f(W_t)-f(W_0) = \int_0^t f'(W_s) \, dW_s + \frac{1}{2} \int_0^t f''(W_s) \, ds \tag{1}$$
Since we are looking for the "$ds$-part", it would be nice to have
$$f''(W_s) = 2 W_s$$
i.e. $f''(x)=2x$. We obtain this by choosing $f(x) := \frac{x^3}{3}$. By applying $(1)$:
$$\frac{W_t^3}{3} - 0 = \int_0^t W_s^2 \, dW_s + \frac{1}{2} \int_0^t 2 W_s \, ds \\ \Rightarrow \int_0^t W_s \, ds = \frac{W_t^3}{3} - \int_0^t W_s^2 \, dW_s$$
-
certainly, another expression - but how does it help OP? – Ilya Jan 4 at 9:05
@Ilya Since the OP mentioned that he's new to this stuff, it's probably a transformation like this he is looking for. If not so, I'll delete my answer. – saz Jan 4 at 9:14
http://stats.stackexchange.com/questions/tagged/randomness
# Tagged Questions
The randomness tag has no wiki summary.
### Sanity check to determine if 'random' data on disk drive may have been tampered with (0 answers, 34 views)
I've got some code that does a Dept of Defense secure erase of disks, and would like to know about a good sanity check to run after the fact. Bottom line, the last write phase of a secure erase of a ...
### Is my One Tail Hypothesis correct? (0 answers, 31 views)
Testing the sale of marijuana in Colorado and Washington before and after legalization. The numbers are made up and random. ...
### Determining sample size in non-normally distributed data (0 answers, 42 views)
I want to take some samples from a database of nearly ~150,000 records of non-normally distributed values. From this link I obtained some common procedures to find a sample, influenced by The level of ...
### Best process for randomly drawing an odd and an even number (2 answers, 60 views)
I want to draw a random number from 1 to 100. If this number is even, I want to then draw a random number from the odd numbers between 1 and 100. If the first number is odd, I want to draw a random ...
### Detecting/analyzing non-random permutations of a random sequence (1 answer, 43 views)
I am looking for theoretical results regarding how easy it is to detect alteration of a random sequence. The permutation I am most interested in is where a subsequence of the random numbers is ...
### Randomness and computers (1 answer, 50 views)
How do computers achieve randomness, if they really do? How to decide whether something is random or not? Is there a measure of randomness? ...
### Differential Entropy (1 answer, 75 views)
Differential entropy of a Gaussian r.v. is $\log_2(\sigma \sqrt{2\pi e})$. This is dependent on $\sigma$, which is the standard deviation. If we normalize the random variable so that it has unit ...
### Relation between statistical randomness, uniform distribution and independence (0 answers, 38 views)
In Monte Carlo simulation, we often consider how well a sequence of generated points are. If I am correct, one aspect is statistical randomness: A numeric sequence is said to be statistically ...
### Ascertaining the randomness of a list of numbers [closed] (0 answers, 48 views)
Assume that I have a series of DNA mutations occurring at specific positions x = [2, 3, 10, 14, 20] drawn from a sequence of length 50 (has positions 1 to 50). How ...
### Randomly picking from $n$ choices roughly $n$ times. What's the resulting frequency distribution called? (2 answers, 109 views)
Not sure of the best way of phrasing this question, but I'll give it a go. If I were to randomly choose whole numbers between 1 and $n$ a significant number of times relative to $n$ (say, $m$, where ...
### Why is the ROC curve of a random classifier the line $x=y$? (1 answer, 96 views)
The title is my whole question.
### Are the digits of $\pi$ statistically random? (2 answers, 447 views)
Suppose you observe the sequence: 7, 9, 0, 5, 5, 5, 4, 8, 0, 6, 9, 5, 3, 8, 7, 8, 5, 4, 0, 0, 6, 6, 4, 5 , 3, 3, 7, 5, 9, 8, 1, 8, 6, 2, 8, 4, 6, 4, 1, 9, 9, 0, 5, 2, 2, 0, 4, 5, 2, 8 ... What ...
### How can you select one of two individuals at random using a biased coin (0 answers, 58 views)
Suppose you are given a biased coin for which the probability of getting a head is $p$ $(0<p<1)$. Discuss how you will select one of two individuals at random using the biased coin.
### Co-occurrence problem (2 answers, 52 views)
I have a series of discrete, purportedly random whole numbers like this: v1 v2 v3 v4 v5 v6 42 23 10 07 01 35 05 02 26 25 49 18 35 18 43 29 26 28 36 59 26 15 34 35 I ...
### Stock price max / min strangeness (0 answers, 56 views)
Stock markets are often described as following a random walk or something similar. I have a process that automatically picks a trade entry time and a trade exit time for a given stock. After 1000 ...
### Estimating the randomness of an SRS (0 answers, 16 views)
I have noticed that normal population parameters can be estimated accurately with a reasonably small sample. It occurs to me that this is only possible because of the assumption that the random ...
### Test data for randomness with repetitions (1 answer, 54 views)
I have an experiment where I am trying to determine whether the answering behavior of participants can be explained as random answering. I.e. every participant will have to answer multiple questions ...
### Runs Test and Chi Square Distribution (1 answer, 116 views)
I want to identify random data by applying some tests to the observed byte stream. I used the chi square test already on a frequency analysis, which works fine. To reduce the false-positive rate I ...
### Interpreting chi square value for validating random numbers (0 answers, 174 views)
I am trying to use the chi-square test for identifying random data. I have 256 categories, i.e. 255 degrees of freedom, and count the occurrence of byte-values (0-255). As suggested by Knuth, the ...
### Measuring randomness using runs test [closed] (0 answers, 93 views)
I try to measure the randomness of data using the runs test (beside the chi square test). I get the following values for a file produced by /dev/urandom: ...
### When does a random test fail? (2 answers, 149 views)
I must implement some Chi Square Test to test the randomness of "my" implementation, but I can't understand what these tests really say. The tests are different but what I always do is: divide in ...
### Test randomness of a generated password? (3 answers, 157 views)
As you know, there are many password generators out there to increase computer security. Suppose I am given such a password (say, a string of letters, numbers, symbols, etc.); is there a way for me ...
### Probability of a randomly generated string being already present in a data set [duplicate] (2 answers, 174 views)
Possible Duplicate: Simple combination/probability question based on string-length and possible-characters. Suppose I have a collection of 1,000,000 unique strings where each string is ...
### What is the probability of n people from a list of m people being in a random selection of x people from a list of y people? (3 answers, 388 views)
If I am selecting 232 people from a pool of 363 people without replacement, what is the probability of 2 of a list of 12 specific people being in that selection? This is a random draw for an ultra ...
### Ljung-Box test: Basic Questions (1 answer, 688 views)
I'm interested in the portmanteau tests for fitting ARIMA models and ended up at the Ljung-Box test and its implementations for R-statistics. I've read already some related questions on the subject ...
### Can one force randomness in a sample? (2 answers, 124 views)
I recently spoke to a large qualitative analysis company who were working on in-depth analysis of the potential customers of a train company. I asked them how they chose the people to include in their ...
### Is there a Bayesian equivalent to a Wald-Wolfowitz runs test? (1 answer, 120 views)
I have a sequence of observations and I would like to determine if the observations in the sequence are mutually independent. Wald-Wolfowitz is a non-parametric test that can be used to check for ...
### How to sample randomly from a population? (1 answer, 98 views)
What do you think about the following method to perform a random sampling: Generate as many floats (between 0 and 1) as individuals using a quantum random generator (each value appears only one ...
### Random walk with momentum (2 answers, 260 views)
Consider an integer random walk starting at 0 with the following conditions: The first step is plus or minus 1, with equal probability. Every future step is: 60% likely to be in the same direction ...
### Why is it bad to teach students that p-values are the probability that findings are due to chance? (6 answers, 746 views)
Can someone please offer a nice succinct explanation of why it is not a good idea to teach students that a p-value is the prob(their findings are due to [random] chance)? My understanding is that a ...
### Best way to find non-randomness regions in these or similar count data? (0 answers, 61 views)
Let's say I have data in a shape: [0,0,0,0,1,0,0,0,1,1,0,0,0,0,0,0,0,0,1,0,0,2,0,0,0,....] - so mainly zeros.... However, I know how long my 'signal' is and how many counts there are. Is it possible ...
### Is there self-similarity in true randomness? [closed] (1 answer, 210 views)
Would a truly random function exhibit self-similarity? By truly random I basically mean that if you have some function that produces a binary value, each new value will have exactly a 50% chance of ...
### Can non-random samples be analyzed using standard statistical tests? (3 answers, 2k views)
Many clinical studies are based on non-random samples. However, most standard tests (e.g. t-tests, ANOVA, linear regression, logistic regression) are based on the assumption that samples contain ...
### How to find out if a set of daily measurements is random or not? (2 answers, 323 views)
There is a set of daily measurements. Time and measured values are both discrete. I want to find out whether measured values depend on the day the measurement was taken, or whether measurements are ...
### Long-tailed distributions for generating random numbers with parameters to control tail heaviness (3 answers, 303 views)
I have to generate random numbers for my algorithm based on probability distributions. I want a distribution which has heavy tails and is unskewed, which can produce numbers far away from the location ...
### How can I even out a random distribution while minimising how far each data point is moved? (1 answer, 58 views)
I have a software application which uses a queue and multiple processors to process those jobs. Jobs get re-run on a daily basis for customers, but we also have new customers signing up regularly. ...
### Scrambling and correlation in low discrepancy sequences (Halton/Sobol) (1 answer, 509 views)
I am currently working on a project where I generate random values using low discrepancy / quasi-random point sets, such as Halton and Sobol point sets. These are essentially $d$-dimensional vectors ...
### Is it possible to predict the likelihood of an order of random events? (2 answers, 145 views)
Sorry if this is a n00b question, I'm just trying to wrap my head around the problem. I am trying to refute conventional wisdom here, so any help is greatly appreciated. And now for the question: ...
### What is wrong with this "naive" shuffling algorithm? (4 answers, 617 views)
This is a follow-up to a Stack Overflow question about shuffling an array randomly. There are established algorithms (such as the Knuth-Fisher-Yates shuffle) that one should use to shuffle an array, ...
### Testing (and proving) the randomness of numbers [duplicate] (0 answers, 419 views)
Possible Duplicate: Testing random variate generation algorithms. What's a good way to test a series of numbers to see if they're random (or at least pseudo-random)? Is there a good ...
http://mathoverflow.net/questions/103835/does-higher-order-arithmetic-interpret-the-axiom-of-choice
## Does higher order arithmetic interpret the axiom of choice?
By second order arithmetic I mean the axiomatic theory $Z_2$, that is Peano arithmetic extended by second order variables with the full comprehension axiom, and not defined semantically using power set in ZF. By third order arithmetic I mean that extended by third order variables and the comprehension axiom. And so on. Does each of these have an inner model which also satisfies the axiom of choice in each order, using constructibility? If not, do such inner models exist if we also extend induction to a higher order axiom? Is there a good reference on it?
If it's helpful to know, second order arithmetic is a first order theory. The set variables are not actually second order variables. Usually the language would include a unary predicate that indicates something is a "set". – William Aug 3 at 3:15
I'm not sure I understand the question. On the one hand, it seems to me semantically impossible to have, say, a model of seventh-order arithmetic in which the seventh-order variables were well-ordered: what would this well-ordering consist of? On the other hand, given $n$, I can take a well-founded model $M$ of $V=L$ and look at the first $n$ many powersets of $\omega$. It seems to me that this gives a model of $n$-th order arithmetic in which the first $n-1$ many sorts are well-ordered. Am I understanding the question correctly? – Noah S Aug 3 at 3:25
William, yes people often say things like "axiomatic second order arithmetic $Z_2$ is a first order theory." Yet $Z_2$ is still widely called second order arithmetic. So I tried to be clear that I am asking about an axiomatic theory with stated axioms and not about what people call the full second order semantics. Noah, yes what you say is right. So my question is what does it take to get an inner model of $V=L$ in $Z_n$ without recourse to ZF. – Colin McLarty Aug 3 at 12:51
## 1 Answer
There is quite a bit of this in Simpson's book Subsystems of Second Order Arithmetic in the specific context of second-order arithmetic. Here are three relevant results:
Corollary VII.5.11 (conservation theorems). Let $T_0$ be any one of the $L_2$-theories $\Pi^1_\infty\text{-CA}_0$, $\Pi^1_{k+1}\text{-CA}_0$, $\Delta^1_{k+2}\text{-CA}_0$, $0 \le k < \infty$. Let $\phi$ be any $\Pi^1_4$ sentence. Suppose that $\phi$ is provable from $T_0$ plus $\exists X \, \forall Y \, (Y \in L(X))$. Then $\phi$ is provable from $T_0$ alone.
Here $\Pi^1_\infty\text{-CA}_0$ has the full comprehension scheme for second order arithmetic, and hence also the full induction scheme.
Theorem VII.6.16 ($\Sigma^1_{k+3}$ choice schemes). The following is provable in $\text{ATR}_0$. Assume $\exists X \, \forall Y \, (Y \in L(X))$. Then:
1. $\Sigma^1_{k+3}\text{-AC}_0$ is equivalent to $\Delta^1_{k+3}\text{-CA}_0$.
2. $\Sigma^1_{k+3}\text{-DC}_0$ is equivalent to $\Delta^1_{k+3}\text{-CA}_0$ plus $\Sigma^1_{k+3}\text{-IND}$.
3. Strong $\Sigma^1_{k+3}\text{-DC}_0$ is equivalent to $\Pi^1_{k+3}\text{-CA}_0$.
4. $\Sigma^1_\infty \text{-DC}_0$ ($=\bigcup_{k < \omega} \Sigma^1_k\text{-DC}_0$ ) is equivalent to $\Pi^1_\infty\text{-CA}_0$.
and
Corollary IX.4.12 (conservation theorem). For all $k <\omega$, $\Sigma^1_{k+3}\text{-AC}_0$ (hence also $\Delta^1_{k+3}\text{-AC}_0$ ) is conservative over $\Pi^1_{k+2}\text{-CA}_0$ for $\Pi^1_4$ sentences.
It looks ok to me. What looks wrong to you? – David Roberts Aug 3 at 5:12
@David Roberts: the math is rendering for me now as well. Yesterday, it was not rendering, and only displayed as raw TeX. It may have been a network issue with MathJaX, since I thought my connection was slow last night in other ways. – Carl Mummert Aug 3 at 10:38
http://en.wikipedia.org/wiki/Potential_energy
# Potential energy
In the case of a bow and arrow, the energy is converted from the potential energy in the archer's arm to the potential energy in the bent limbs of the bow when the string is drawn back. When the string is released, the potential energy in the bow limbs is transferred back through the string to become kinetic energy in the arrow as it takes flight.
Common symbol(s): PE, U, or V
SI unit: joule (J)
Derivations from other quantities: U = m · g · h (gravitational); U = ½ · k · x² (elastic); U = C · V² / 2 (electric); U = −m · B (magnetic)
In physics, potential energy is the energy of an object or a system due to the position of the body or the arrangement of the particles of the system.[1] The SI unit for measuring work and energy is the joule (symbol J).
The term potential energy was coined by the 19th century Scottish engineer and physicist William Rankine,[2][3] although it has links to Greek philosopher Aristotle's concept of potentiality. Potential energy is associated with a set of forces that act on a body in a way that depends only on the body's position in space. This allows the set of forces to be considered as having a specified vector at every point in space forming what is known as a vector field of forces, or a force field. If the work of forces of this type acting on a body that moves from a start to an end position is defined only by these two positions and does not depend on the trajectory of the body between the two, then there is a function known as a potential that can be evaluated at the two positions to determine this work. Furthermore, the force field is defined by this potential function, also called potential energy.
## Overview
Potential energy is often associated with restoring forces such as a spring or the force of gravity. The action of stretching the spring or lifting the mass is performed by an external force that works against the force field of the potential. This work is stored in the force field, which is said to be stored as potential energy. If the external force is removed the force field acts on the body to perform the work as it moves the body back to the initial position, reducing the stretch of the spring or causing a body to fall.
The more formal definition is that potential energy is the energy difference between the energy of an object in a given position and its energy at a reference position.
There are various types of potential energy, each associated with a particular type of force. More specifically, every conservative force gives rise to potential energy. For example, the work of an elastic force is called elastic potential energy; work of the gravitational force is called gravitational potential energy; work of the Coulomb force is called electric potential energy; work of the strong nuclear force or weak nuclear force acting on the baryon charge is called nuclear potential energy; work of intermolecular forces is called intermolecular potential energy. Chemical potential energy, such as the energy stored in fossil fuels, is the work of the Coulomb force during rearrangement of mutual positions of electrons and nuclei in atoms and molecules. Thermal energy usually has two components: the kinetic energy of random motions of particles and the potential energy of their mutual positions.
As a general rule, the work done by a conservative force F will be
$\,W = -\Delta U$
where $\Delta U$ is the change in the potential energy associated with that particular force. Common notations for potential energy are U, V, and Ep.
## Work and potential energy
The work of a force acting on a moving body yields a difference in potential energy when the integration of the work is path independent. The scalar product of a force F and the velocity v of its point of application defines the power input to a system at an instant of time. Integration of this power over the trajectory of the point of application, C=x(t), defines the work input to the system by the force.
If the work of an applied force is independent of the path, then the work done by the force depends only on the start and end points of the trajectory of the point of application. This means that there is a function U(x), called a "potential," that can be evaluated at the two points x(t1) and x(t2) to obtain the work over any trajectory between these two points. It is traditional to define this function with a negative sign so that positive work corresponds to a reduction in the potential, that is
$W = \int_C \bold{F} \cdot \mathrm{d}\bold{x} = \int_{\mathbf{x}(t_1)}^{\mathbf{x}(t_2)} \bold{F} \cdot \mathrm{d}\bold{x} = U(\mathbf{x}(t_1))-U(\mathbf{x}(t_2)).$
The function U(x) is called the potential energy associated with the applied force. Examples of forces that have potential energies are gravity and spring forces.
In this case, the application of the del operator to the work function yields
${\nabla W} = -{\nabla U} = -\left ( \frac{\partial U}{\partial x}, \frac{\partial U}{\partial y}, \frac{\partial U}{\partial z} \right ) = \mathbf{F},$
and the force F is said to be "derivable from a potential."[4]
Because the potential U defines a force F at every point x in space, the set of forces is called a force field. The power applied to a body by a force field is obtained from the gradient of the work, or potential, in the direction of the velocity V of the body, that is
$P(t) = -{\nabla U} \cdot \mathbf{v} = \mathbf{F}\cdot\mathbf{v}.$
Examples of work that can be computed from potential functions are gravity and spring forces.[5]
### Potential function for near earth gravity
Gravity exerts a constant downward force F = (0, 0, −mg) on the center of mass of a body moving near the surface of the earth. The work of gravity on a body moving along a trajectory s(t) = (x(t), y(t), z(t)), such as the track of a roller coaster, is calculated using its velocity, v = (vx, vy, vz), to obtain
$W=\int_{t_1}^{t_2}\boldsymbol{F}\cdot\boldsymbol{v}\,dt = -mg\int_{t_1}^{t_2} v_z \,dt = -mg\,\Delta z,$
where the integral of the vertical component of the velocity is the vertical distance Δz. Notice that the work of gravity depends only on the vertical movement along the curve s(t).
Since W = −ΔU, the function U(z) = mgz, commonly written U = mgh, is called the potential energy of the near-earth gravity field.
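This path independence can be checked numerically. The following standalone sketch (the helix path, mass, and height are made-up example values, not from the article) integrates F · v along an arbitrary winding curve and compares the result with −mg Δz:

```python
import numpy as np

m, g = 2.0, 9.8                      # kg and m/s^2, assumed example values
F = np.array([0.0, 0.0, -m * g])     # constant downward gravitational force

# An arbitrary winding path s(t): a helix that climbs from z = 0 to z = 5 m
t = np.linspace(0.0, 1.0, 2001)
path = np.column_stack([np.cos(10 * t), np.sin(10 * t), 5.0 * t ** 2])

# W = integral of F . v dt, via finite differences and the trapezoid rule
v = np.gradient(path, t, axis=0)     # velocity along the path
integrand = v @ F                    # F . v at each time step
W = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(t))

dz = path[-1, 2] - path[0, 2]
print(W, -m * g * dz)                # both ~ -98 J: only the vertical movement matters
```

Any other path between the same endpoints yields the same work, which is exactly the property that makes a potential function possible.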
### Potential function for a linear spring
A horizontal spring exerts a force F = (−kx, 0, 0) that is proportional to its deflection in the x direction. The work of this spring on a body moving along the space curve s(t) = (x(t), y(t), z(t)) is calculated using its velocity, v = (vx, vy, vz), to obtain
$W=\int_0^t\boldsymbol{F}\cdot\boldsymbol{v}\,dt =-\int_0^t kx v_x \,dt = -\frac{1}{2}kx^2.$
For convenience, suppose contact with the spring occurs at t = 0; then the integral of the product of the deflection x and the x-velocity, xvx, is x²/2.
Since W = −ΔU, the function U(x) = ½kx² is called the potential energy of a linear spring.
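A quick numerical cross-check (a standalone sketch; the spring constant and deflection are arbitrary example values) shows that the work done against the spring, ∫ kx dx, matches the closed-form ½kx² stored as potential energy:

```python
import numpy as np

k, x_end = 50.0, 0.2                 # N/m and m, assumed example values
x = np.linspace(0.0, x_end, 10_001)

# Work done against the spring while stretching it from 0 to x_end:
# W_ext = integral of k*x dx, evaluated with the trapezoid rule
f = k * x
W_ext = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(x))

U = 0.5 * k * x_end ** 2             # closed-form stored potential energy
print(W_ext, U)                      # both ~ 1.0 J
```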
## Reference level
The potential energy is a function of the state a system is in, and is defined relative to that for a particular state. This reference state is not always a real state, it may also be a limit, such as with the distances between all bodies tending to infinity, provided that the energy involved in tending to that limit is finite, such as in the case of inverse-square law forces. Any arbitrary reference state could be used, therefore it can be chosen based on convenience.
Typically the potential energy of a system depends on the relative positions of its components only, so the reference state can also be expressed in terms of relative positions.
## Gravitational potential energy
Main articles: Gravitational potential and Gravitational energy
Gravitational energy is the potential energy associated with gravitational force, as work is required to elevate objects against Earth's gravity. The potential energy due to elevated positions is called gravitational potential energy, and is evidenced by water in an elevated reservoir or kept behind a dam. If an object falls from one point to another point inside a gravitational field, the force of gravity will do positive work on the object, and the gravitational potential energy will decrease by the same amount.
Gravitational force keeps the planets in orbit around the Sun.
A trebuchet uses the gravitational potential energy of the counterweight to throw projectiles over long distances.
Consider a book placed on top of a table. As the book is raised from the floor, to the table, some external force works against the gravitational force. If the book falls back to the floor, the "falling" energy the book receives is provided by the gravitational force. Thus, if the book falls off the table, this potential energy goes to accelerate the mass of the book and is converted into kinetic energy. When the book hits the floor this kinetic energy is converted into heat and sound by the impact.
The factors that affect an object's gravitational potential energy are its height relative to some reference point, its mass, and the strength of the gravitational field it is in. Thus, a book lying on a table has less gravitational potential energy than the same book on top of a taller cupboard, and less gravitational potential energy than a heavier book lying on the same table. An object at a certain height above the Moon's surface has less gravitational potential energy than at the same height above the Earth's surface because the Moon's gravity is weaker. Note that "height" in the common sense of the term cannot be used for gravitational potential energy calculations when gravity is not assumed to be a constant. The following sections provide more detail.
### Local approximation
The strength of a gravitational field varies with location. However, when the change of distance is small in relation to the distances from the center of the source of the gravitational field, this variation in field strength is negligible and we can assume that the force of gravity on a particular object is constant. Near the surface of the Earth, for example, we assume that the acceleration due to gravity is a constant g = 9.8 m/s2 ("standard gravity"). In this case, a simple expression for gravitational potential energy can be derived using the W = Fd equation for work, and the equation
$W_F = -\Delta U_F.\!$
The amount of gravitational potential energy possessed by an elevated object is equal to the work done against gravity in lifting it. The work done equals the force required to move it upward multiplied by the vertical distance it is moved (remember W = Fd). The upward force required while moving at a constant velocity is equal to the weight, mg, of an object, so the work done in lifting it through a height h is the product mgh. Thus, when accounting only for mass, gravity, and altitude, the equation is:[6]
$U = mgh\!$
where U is the potential energy of the object relative to its being on the Earth's surface, m is the mass of the object, g is the acceleration due to gravity, and h is the altitude of the object.[7] If m is expressed in kilograms, g in m/s2 and h in metres then U will be calculated in joules.
Hence, the potential difference is
$\,\Delta U = mg \Delta h.\$
### General formula
However, over large variations in distance, the approximation that g is constant is no longer valid, and we have to use calculus and the general mathematical definition of work to determine gravitational potential energy. For the computation of the potential energy we can integrate the gravitational force, whose magnitude is given by Newton's law of gravitation, with respect to the distance r between the two bodies. Using that definition, the gravitational potential energy of a system of masses m1 and M2 at a distance r using gravitational constant G is
$U = -G \frac{m_1 M_2}{r}\ + K$,
where K is the constant of integration. Choosing the convention that K=0 makes calculations simpler, albeit at the cost of making U negative; for why this is physically reasonable, see below.
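To connect this with the earlier local approximation, the sketch below (illustrative values; G, M, and R are standard Earth constants, while the test mass and climb height are made up) compares the exact potential difference for a small climb with mgΔh, where g = GM/R²:

```python
G = 6.674e-11     # m^3 kg^-1 s^-2, gravitational constant
M = 5.972e24      # kg, mass of the Earth
R = 6.371e6       # m, mean radius of the Earth
m = 1.0           # kg, test mass
h = 100.0         # m, height of the climb

def U(r):
    """General gravitational potential energy with the K = 0 convention."""
    return -G * M * m / r

exact = U(R + h) - U(R)              # exact potential difference
g = G * M / R ** 2                   # local gravitational acceleration
local = m * g * h                    # the U = mgh approximation

print(exact, local)                  # both ~ 982 J, differing by about h/R
```

The relative error of the mgh approximation is roughly h/R, which is why treating g as constant is harmless for everyday heights.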
Given this formula for U, the total potential energy of a system of n bodies is found by summing, for all $\frac{n ( n - 1 )}{2}$ pairs of two bodies, the potential energy of the system of those two bodies.
Considering the system of bodies as the combined set of small particles the bodies consist of, and applying the previous on the particle level we get the negative gravitational binding energy. This potential energy is more strongly negative than the total potential energy of the system of bodies as such since it also includes the negative gravitational binding energy of each body. The potential energy of the system of bodies as such is the negative of the energy needed to separate the bodies from each other to infinity, while the gravitational binding energy is the energy needed to separate all particles from each other to infinity.
$U = - m \left(G \frac{ M_1}{r_1}+ G \frac{ M_2}{r_2}\right)$
therefore,
$U = - m \sum G \frac{M}{r}.$
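The pairwise summation rule described above translates directly into code. Here is a minimal sketch (the three-body configuration is hypothetical, chosen only for illustration) that sums −G mᵢ mⱼ / rᵢⱼ over all n(n−1)/2 unordered pairs:

```python
from itertools import combinations
import math

G = 6.674e-11  # m^3 kg^-1 s^-2, gravitational constant

def total_potential_energy(bodies):
    """Sum -G*m_i*m_j/r_ij over all n(n-1)/2 unordered pairs of bodies."""
    total = 0.0
    for (m_i, p_i), (m_j, p_j) in combinations(bodies, 2):
        total += -G * m_i * m_j / math.dist(p_i, p_j)
    return total

# Hypothetical three-body system: (mass in kg, position in metres)
bodies = [(5.0, (0.0, 0.0)), (3.0, (4.0, 0.0)), (2.0, (0.0, 3.0))]
print(total_potential_energy(bodies))  # negative, as expected with K = 0
```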
### Why choose a convention where gravitational energy is negative?
As with all potential energies, only differences in gravitational potential energy matter for most physical purposes, and the choice of zero point is arbitrary. Given that there is no reasonable criterion for preferring one particular finite r over another, there seem to be only two reasonable choices for the distance at which U becomes zero: $r=0$ and $r=\infty$. The choice of $U=0$ at infinity may seem peculiar, and the consequence that gravitational energy is always negative may seem counterintuitive, but this choice allows gravitational potential energy values to be finite, albeit negative.
The singularity at $r=0$ in the formula for gravitational potential energy means that the only other apparently reasonable alternative choice of convention, with $U=0$ for $r=0$, would result in potential energy being positive, but infinitely large for all nonzero values of r, and would put calculations involving sums or differences of potential energies beyond what is possible with the real number system. Since physicists abhor infinities in their calculations, and r is always non-zero in practice, the choice of $U=0$ at infinity is by far the more preferable choice, even if the idea of negative energy in a gravity well appears peculiar at first.
The negative value for gravitational energy also has deeper implications that make it seem more reasonable in cosmological calculations where the total energy of the universe can meaningfully be considered; see inflation theory for more on this.
### Uses
Gravitational potential energy has a number of practical uses, notably the generation of hydroelectricity. For example, in Dinorwig, Wales, there are two lakes, one at a higher elevation than the other. At times when surplus electricity is not required (and so is comparatively cheap), water is pumped up to the higher lake, thus converting the electrical energy (running the pump) to gravitational potential energy. At times of peak demand for electricity, the water flows back down through electrical generator turbines, converting the potential energy into kinetic energy and then back into electricity. (The process is not completely efficient and some of the original energy from the surplus electricity is in fact lost to friction.) See also pumped storage.
Gravitational potential energy is also used to power clocks in which falling weights operate the mechanism.
It is also used by the counterweights that help lift an elevator, crane load, or sash window.
Another practical use is utilizing gravitational potential energy to descend (perhaps coast) downhill in transportation, such as the descent of an automobile, truck, railroad train, bicycle, airplane, or fluid in a pipeline. In some cases the kinetic energy obtained from the potential energy of descent may be used to start ascending the next grade, as happens when a road is undulating and has frequent dips.
## Elastic potential energy
Springs are used for storing elastic potential energy
Archery is one of humankind's oldest applications of elastic potential energy.
Main article: Elastic potential energy
Elastic potential energy is the potential energy of an elastic object (for example a bow or a catapult) that is deformed under tension or compression (or stressed in formal terminology). It arises as a consequence of a force that tries to restore the object to its original shape, which is most often the electromagnetic force between the atoms and molecules that constitute the object. If the stretch is released, the energy is transformed into kinetic energy.
### Calculation of elastic potential energy
The elastic potential energy stored in a stretched spring can be calculated by finding the work necessary to stretch the spring a distance x from its un-stretched length:
$U = -\int\vec{F}\cdot d\vec{x}$
An ideal spring will follow Hooke's law:
$F=-k \Delta x\,$
The work done (and therefore the stored potential energy) will then be:
$U = -\int\vec{F}\cdot d\vec{x}=-\int {-k x}\, dx = \frac {1} {2} k x^2.$
The units are in joules (J).
The equation is often used in calculations of positions of mechanical equilibrium. This equation can also be stated as:
$U = \frac{1}{2}k (\Delta x)^2\,$
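The formula can be checked directly against the work integral with a short sketch (the values of $k$ and $x$ are illustrative; the midpoint rule is exact here up to rounding because the integrand $kx$ is linear):

```python
def spring_energy(k, x):
    """Elastic potential energy of an ideal (Hooke's-law) spring, U = (1/2) k x^2."""
    return 0.5 * k * x**2

def spring_energy_numeric(k, x, n=10_000):
    """Evaluate U = int_0^x k*u du with the midpoint rule."""
    du = x / n
    return sum(k * (i + 0.5) * du * du for i in range(n))

k, x = 200.0, 0.05                    # spring constant in N/m, stretch in m
u_exact = spring_energy(k, x)         # 0.25 J
u_numeric = spring_energy_numeric(k, x)
```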
## Chemical potential energy
Main article: Chemical energy
Chemical potential energy is a form of potential energy related to the structural arrangement of atoms or molecules. This arrangement may be the result of chemical bonds within a molecule or otherwise. Chemical energy of a chemical substance can be transformed to other forms of energy by a chemical reaction. As an example, when a fuel is burned the chemical energy is converted to heat; the same is true of the digestion of food metabolized in a biological organism. Green plants transform solar energy to chemical energy through the process known as photosynthesis, and electrical energy can be converted to chemical energy through electrochemical reactions.
The similar term chemical potential is used to indicate the potential of a substance to undergo a change of configuration, be it in the form of a chemical reaction, spatial transport, particle exchange with a reservoir, etc.
## Electric potential energy
Main article: Electric potential energy
An object can have potential energy by virtue of its electric charge and of the forces associated with its presence. There are two main types of this kind of potential energy: electrostatic potential energy and electrodynamic potential energy (also sometimes called magnetic potential energy).
Plasma formed inside a gas filled sphere
### Electrostatic potential energy
If the electric charge of an object can be assumed to be at rest, the object has potential energy due to its position relative to other charged objects.
The electrostatic potential energy is the energy of an electrically charged particle (at rest) in an electric field. It is defined as the work that must be done to move it from an infinite distance away to its present location, in the absence of any non-electrical forces on the object. This energy is non-zero if there is another electrically charged object nearby.
The simplest example is the case of two point-like objects A1 and A2 with electrical charges q1 and q2. The work W required to move A1 from an infinite distance to a distance r away from A2 is given by:
$W=\frac{1}{4\pi\varepsilon_0}\frac{q_1q_2}{r},$
where ε0 is the vacuum permittivity. This may also be written in a simpler form that better reflects the natural parallel with Newton's gravitation equation, using the electrostatic constant (Coulomb's constant), defined as ke = 1 ⁄ 4πε0.
This equation is obtained by integrating the Coulomb force between the limits of infinity and r.
A related quantity called electric potential (commonly denoted with a V for voltage) is equal to the electric potential energy per unit charge.
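A minimal sketch of the formula above (the charges and separation are illustrative; the constants are standard CODATA values):

```python
import math

EPS0 = 8.8541878128e-12               # vacuum permittivity, F/m
K_E = 1.0 / (4.0 * math.pi * EPS0)    # Coulomb's constant, ~8.988e9 N m^2 C^-2

def electrostatic_energy(q1, q2, r):
    """Work to bring charge q1 from infinity to distance r from q2."""
    return K_E * q1 * q2 / r

# Two elementary charges held 1 nm apart:
e = 1.602176634e-19                   # elementary charge, C
u = electrostatic_energy(e, e, 1e-9)  # positive: like charges repel
u_in_eV = u / e                       # ~1.44 eV
```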
### Magnetic potential energy
The energy of a magnetic moment m in an externally produced magnetic B-field B has potential energy[8]
$U=-\mathbf{m}\cdot\mathbf{B}.$
The energy of a magnetization $\mathbf{M}$ in a field is

$U = -\frac{1}{2}\int \mathbf{M}\cdot\mathbf{B}\,dV,$
where the integral can be over all space or, equivalently, where M is nonzero.[9] Magnetic potential energy is the form of energy related not only to the distance between magnetic materials, but also to the orientation, or alignment, of those materials within the field. For example, the needle of a compass has the lowest magnetic potential energy when it is aligned with the north and south poles of the Earth's magnetic field. If the needle is moved by an outside force, torque is exerted on the magnetic dipole of the needle by the Earth's magnetic field, causing it to move back into alignment. The magnetic potential energy of the needle is highest when it is perpendicular to the Earth's magnetic field. Two magnets will have potential energy in relation to each other and the distance between them, but this also depends on their orientation. If the opposite poles are held apart, the potential energy will be the highest when they are near the edge of their attraction, and the lowest when they pull together. Conversely, like poles will have the highest potential energy when forced together, and the lowest when they spring apart.[10][11]
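The compass-needle example can be spelled out with $U=-\mathbf{m}\cdot\mathbf{B}$ (a sketch; the field strength and moment below are illustrative values, Earth's field being on the order of tens of microtesla):

```python
def magnetic_energy(m, B):
    """U = -m . B for a dipole moment m (A m^2) in a field B (tesla)."""
    return -sum(mi * bi for mi, bi in zip(m, B))

B = (0.0, 2.0e-5, 0.0)   # horizontal field of 20 microtesla, pointing "north"
moment = 0.1             # magnitude of the needle's dipole moment, A m^2

aligned       = magnetic_energy((0.0, moment, 0.0), B)   # lowest energy
perpendicular = magnetic_energy((moment, 0.0, 0.0), B)   # in between
anti_aligned  = magnetic_energy((0.0, -moment, 0.0), B)  # highest energy
```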
## Nuclear potential energy
Nuclear potential energy is the potential energy of the particles inside an atomic nucleus. The nuclear particles are bound together by the strong nuclear force. Weak nuclear forces provide the potential energy for certain kinds of radioactive decay, such as beta decay.
Nuclear particles like protons and neutrons are not destroyed in fission and fusion processes, but collections of them have less mass than if they were individually free, and this mass difference is liberated as heat and radiation in nuclear reactions (the heat and radiation have the missing mass, but it often escapes from the system, where it is not measured). The energy from the Sun is an example of this form of energy conversion. In the Sun, the process of hydrogen fusion converts about 4 million tonnes of solar matter per second into electromagnetic energy, which is radiated into space.
## Relation between potential energy, potential and force
Potential energy is closely linked with forces. If the work done moving along a path which starts and ends in the same location is zero, then the force is said to be conservative and it is possible to define a numerical value of potential associated with every point in space. A force field can be re-obtained by taking the negative of the vector gradient of the potential field.
For example, gravity is a conservative force. The associated potential is the gravitational potential, often denoted by $\phi$ or $V$, corresponding to the energy per unit mass as a function of position. The gravitational potential energy of two particles of mass M and m separated by a distance r is
$U = -\frac{G M m}{r}.$
The gravitational potential (specific energy) of the two bodies is
$\phi = -\left( \frac{GM}{r} + \frac{Gm}{r} \right)= -\frac{G(M+m)}{r} = -\frac{GMm}{\mu r} = \frac{U}{\mu},$

where $\mu$ is the reduced mass.
The work done against gravity by moving an infinitesimal mass from point A with $U = a$ to point B with $U = b$ is $(b - a)$ and the work done going back the other way is $(a - b)$ so that the total work done in moving from A to B and returning to A is
$U_{A \to B \to A} = (b - a) + (a - b) = 0. \,$
If the potential is redefined at A to be $a + c$ and the potential at B to be $b + c$, where $c$ is a constant (i.e. $c$ can be any number, positive or negative, but it must be the same at A as it is at B) then the work done going from A to B is
$U_{A \to B} = (b + c) - (a + c) = b - a \,$
as before.
In practical terms, this means that one can set the zero of $U$ and $\phi$ anywhere one likes. One may set it to be zero at the surface of the Earth, or may find it more convenient to set zero at infinity (as in the expressions given earlier in this section).
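The freedom to choose the zero can be spelled out in a few lines (an illustrative sketch; the numbers are arbitrary):

```python
# Shifting the potential everywhere by a constant c leaves the work
# done in going from A to B unchanged.
a, b = -60.0, -35.0   # example potential energies at A and B, in joules

def work_a_to_b(u_a, u_b):
    return u_b - u_a

results = [work_a_to_b(a + c, b + c) for c in (0.0, 100.0, -1e6)]
# every entry equals b - a = 25.0 J, regardless of c
```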
A thing to note about conservative forces is that the work done going from A to B does not depend on the route taken. If it did then it would be pointless to define a potential at each point in space. An example of a non-conservative force is friction. With friction, the route taken does affect the amount of work done, and it makes little sense to define a potential associated with friction.
All the examples above are actually force field stored energy (sometimes in disguise). For example in elastic potential energy, stretching an elastic material forces the atoms very slightly further apart. The equilibrium between electromagnetic forces and Pauli repulsion of electrons (they are fermions obeying Fermi statistics) is slightly violated, resulting in a small restoring force. Scientists rarely discuss forces on an atomic scale. Often interactions are described in terms of energy rather than force. One may think of potential energy as being derived from force or think of force as being derived from potential energy (though the latter approach requires a definition of energy that is independent of force, which does not currently exist).
A conservative force can be expressed in the language of differential geometry as a closed form. As Euclidean space is contractible, its de Rham cohomology vanishes, so every closed form is also an exact form, and can be expressed as the gradient of a scalar field. This gives a mathematical justification of the fact that all conservative forces are gradients of a potential field.
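Numerically, "force is the negative gradient of the potential" can be checked by finite differences, for example for the gravitational potential $U=-GMm/r$ (a sketch; $GM$ is taken as Earth's standard gravitational parameter and $m=1$ kg):

```python
import math

GM = 3.986004418e14   # Earth's gravitational parameter G*M, m^3 s^-2

def U(x, y, z):
    """Gravitational potential energy of a 1 kg test mass."""
    return -GM / math.sqrt(x * x + y * y + z * z)

def force_from_potential(x, y, z, h=1e-3):
    """F = -grad U, approximated by central finite differences."""
    return (
        -(U(x + h, y, z) - U(x - h, y, z)) / (2 * h),
        -(U(x, y + h, z) - U(x, y - h, z)) / (2 * h),
        -(U(x, y, z + h) - U(x, y, z - h)) / (2 * h),
    )

# At r = 7000 km along the x-axis the force points back toward the origin
# with magnitude GM/r^2, about 8.13 N per kilogram.
fx, fy, fz = force_from_potential(7.0e6, 0.0, 0.0)
```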
## Notes
1. McCall, Robert P. (2010). "Energy, Work and Metabolism". Physics of the Human Body. JHU Press. p. 74. ISBN 978-0-8018-9455-8.
2. William John Macquorn Rankine (1853) "On the general law of the transformation of energy," Proceedings of the Philosophical Society of Glasgow, vol. 3, no. 5, pages 276-280; reprinted in: (1) Philosophical Magazine, series 4, vol. 5, no. 30, pages 106-117 (February 1853); and (2) W. J. Millar, ed., Miscellaneous Scientific Papers: by W. J. Macquorn Rankine, ... (London, England: Charles Griffin and Co., 1881), part II, pages 203-208.
3. Smith, Crosbie (1998). The Science of Energy - a Cultural History of Energy Physics in Victorian Britain. The University of Chicago Press. ISBN 0-226-76420-6.
4. Feynman, Richard P. (2011). "Work and potential energy". The Feynman Lectures on Physics, Vol. I. Basic Books. p. 13. ISBN 978-0-465-02493-3.
5. Aharoni, Amikam (1996). Introduction to the theory of ferromagnetism (Repr. ed.). Oxford: Clarendon Pr. ISBN 0-19-851791-2.
6. Jackson, John David (1975). Classical electrodynamics (2d ed.). New York: Wiley. ISBN 0-471-43132-X.
7. James D. Livingston, Rising Force: The Magic of Magnetic Levitation - President and Fellows of Harvard College 2011, p. 152
8. Narinder Kumar, Comprehensive Physics XII, Laxmi Publications 2004, p. 713
## References
• Serway, Raymond A.; Jewett, John W. (2010). Physics for Scientists and Engineers (8th ed.). Brooks/Cole cengage. ISBN 1-4390-4844-4.
• Tipler, Paul (2004). Physics for Scientists and Engineers: Mechanics, Oscillations and Waves, Thermodynamics (5th ed.). W. H. Freeman. ISBN 0-7167-0809-4.
http://unapologetic.wordpress.com/2011/05/09/uniqueness-of-solutions-to-differential-equations/?like=1&_wpnonce=d19428bcb3
# The Unapologetic Mathematician
## Uniqueness of Solutions to Differential Equations
The convergence of the Picard iteration shows the existence part of our existence and uniqueness theorem. Now we prove the uniqueness part.
Let’s say that $u(t)$ and $v(t)$ are both solutions of the differential equation — $u'(t)=F(u(t))$ and $v'(t)=F(v(t))$ — and that they both satisfy the initial condition — $u(0)=v(0)=a$ — on the same interval $J=[-c,c]$ from the existence proof above. We will show that $u(t)=v(t)$ for all $t\in J$ by measuring the $L^\infty$ norm of their difference:
$\displaystyle Q=\lVert u-v\rVert_\infty=\max\limits_{t\in J}\lvert u(t)-v(t)\rvert$
Since $J$ is a closed interval, this maximum must be attained at a point $t_1\in J$. We can calculate
$\displaystyle\begin{aligned}Q&=\lvert u(t_1)-v(t_1)\rvert\\&=\left\lvert\int\limits_0^{t_1}u'(s)-v'(s)\,ds\right\rvert\\&\leq\int\limits_0^{t_1}\lvert F(u(s))-F(v(s))\rvert\,ds\\&\leq\int\limits_0^{t_1}K\lvert u(s)-v(s)\rvert\,ds\\&\leq cKQ\end{aligned}$
but by assumption we know that $cK<1$, which makes this inequality impossible unless $Q=0$. Thus the distance between $u$ and $v$ is $0$, and the two functions must be equal on this interval, proving uniqueness.
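The contraction at work here can also be watched numerically. The following sketch (not part of the original post; $F(u)=u$ with $a=1$, whose solution is $e^t$, and the grid sizes are chosen purely for illustration) runs the Picard iteration $u_{n+1}(t)=a+\int_0^t F(u_n(s))\,ds$ with a trapezoid rule:

```python
import math

def picard_iterates(a, t_max, n_steps=1000, n_iter=25):
    """Picard iteration for u' = F(u), u(0) = a, with F(u) = u."""
    ts = [i * t_max / n_steps for i in range(n_steps + 1)]
    u = [float(a)] * len(ts)             # u_0(t) = a, the constant function
    for _ in range(n_iter):
        new_u, acc = [float(a)], 0.0
        for i in range(1, len(ts)):
            # trapezoid rule for int_0^t F(u_n(s)) ds, with F(u) = u
            acc += 0.5 * (u[i - 1] + u[i]) * (ts[i] - ts[i - 1])
            new_u.append(a + acc)
        u = new_u
    return ts, u

ts, u = picard_iterates(1.0, 0.5)
# after 25 iterations the iterate is indistinguishable from e^t on [0, 1/2]
max_err = max(abs(ui - math.exp(t)) for t, ui in zip(ts, u))
```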
Posted by John Armstrong | Analysis, Differential Equations
http://mathoverflow.net/revisions/80500/list
This should also be a comment. Most combinatorialists (for example, Enumerative Combinatorics, v. 1, by R. P. Stanley, page 18) define the Stirling numbers of the first kind to be $s(k,m) := (-1)^{(k-m)}c(k,m)$. With that definition, you have the identity $\sum_{k \geq 0} S(n,k)s(k,m) = \delta_{n,m}$ (ibid., p. 35). The sum you give does not always yield 0. When $n=2$ and $m=1$, for example, it equals 2.
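Both claims in the answer above are easy to verify by computer (a sketch using the standard recurrences for the Stirling numbers of the second and unsigned first kind):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def S(n, k):
    """Stirling numbers of the second kind."""
    if n == k:
        return 1
    if k == 0 or k > n:
        return 0
    return k * S(n - 1, k) + S(n - 1, k - 1)

@lru_cache(maxsize=None)
def c(n, k):
    """Unsigned Stirling numbers of the first kind."""
    if n == k:
        return 1
    if k == 0 or k > n:
        return 0
    return (n - 1) * c(n - 1, k) + c(n - 1, k - 1)

def s(n, k):
    """Signed Stirling numbers of the first kind, s(n,k) = (-1)^(n-k) c(n,k)."""
    return (-1) ** (n - k) * c(n, k)

# With the signed definition the two kinds are inverse to each other...
signed_sum = sum(S(2, k) * s(k, 1) for k in range(3))      # delta_{2,1} = 0
# ...but with the unsigned c(k,m) the sum for n=2, m=1 equals 2, as claimed.
unsigned_sum = sum(S(2, k) * c(k, 1) for k in range(3))
```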
http://math.stackexchange.com/questions/39993/how-to-prove-int-02r-2h-sqrtr2-x-r2dx-pi-r2-h-without-geometry
How to prove $\int_0^{2r} 2h \sqrt{r^2 -(x-r)^2}dx = \pi r^2 h$ without geometry?
I wanted to find the volume of a cylinder, radius r, height h by slicing it in to rectangles:
I placed the cylinder on the x-axis, one corner of the base diameter at (0,0) the opposite at (2r, 0). I have found that an area of a cross-section perpendicular to the x-axis is $A(x) = 2h \sqrt{r^2 -(x-r)^2}$ so:
$V=\int_0^{2r} 2h \sqrt{r^2 -(x-r)^2}dx$
I have tested this and it gives the volume correctly for various r and h. But how to show:
$\int_0^{2r} 2h \sqrt{r^2 -(x-r)^2}dx = \pi r^2 h$ without just saying "we know $V= \pi r^2 h$"?
This problem was my own device, perhaps it is not possible to do this.
-
As a general heuristic, it would have been nicer to put one "corner" of the base diameter at $(-r,0)$ and the opposite one at $(r,0)$. Symmetry is almost always helpful! – André Nicolas May 19 '11 at 4:57
2 Answers
Put $(x-r) = r \sin\theta$ then you have $x = r + r \sin\theta$. Which says $\mathrm{dx}=r\cos\theta \rm{d}\theta$. When $x= 0$ we have $-r = r\sin\theta$ which says that $\theta = -\frac{\pi}{2}$. And when $x =2r$ we have $r=r\sin\theta$ which says that $\theta = \frac{\pi}{2}$. So your integral is now,
\begin{align*} \int\limits_{0}^{2r}\sqrt{r^{2}-(x-r)^{2}} \ dx &=\int\limits_{-\frac{\pi}{2}}^{\frac{\pi}{2}} \sqrt{r^{2}-r^{2}\sin^{2}\theta} \cdot r\cos\theta \ \text{d}\theta \\ &= r^{2} \int\limits_{-\frac{\pi}{2}}^{\frac{\pi}{2}} \cos^{2}\theta \ \text{d}\theta = \frac{\pi r^{2}}{2} \end{align*}

Multiplying by the factor $2h$ from the original integrand then gives $V = \pi r^2 h$.
-
I believe the difficulty comes down to finding $$\int\sqrt{a^2-x^2}\,dx$$ The standard technique is to substitute $x=a\sin\theta$, $dx=a\cos\theta\,d\theta$. Can you take it from there?
-
But it is more like $\int \sqrt{a^2 +bx - x^2} dx$. – tutorscomputer May 19 '11 at 4:28
2
@tutorscomputer, can you complete the square in $a^2+bx-x^2$? or, in the original, first substitute $u=x-r$, $du=dx$? – Gerry Myerson May 19 '11 at 4:32
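As a sanity check, the integral in the question can also be evaluated numerically with a simple midpoint rule (a sketch; the values $r=1.5$ and $h=2$ are arbitrary):

```python
import math

def cylinder_volume_integral(r, h, n=200_000):
    """Midpoint-rule evaluation of V = int_0^{2r} 2h*sqrt(r^2 - (x-r)^2) dx."""
    dx = 2 * r / n
    total = 0.0
    for i in range(n):
        x = (i + 0.5) * dx
        total += 2 * h * math.sqrt(max(r * r - (x - r) ** 2, 0.0)) * dx
    return total

r, h = 1.5, 2.0
v = cylinder_volume_integral(r, h)
expected = math.pi * r**2 * h   # pi r^2 h
```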
http://mathhelpforum.com/calculus/128205-multi-variable-calculus-problems.html
# Thread:
1. ## Multi-variable Calculus Problems
Hello!
I have a few questions that I am having some trouble figuring out,
1) If $w = x^2+y^2+xyz$ where $x=\sin(st), y=\cos(s+t), z=e^{st}$, use an 'appropriate' version of the chain rule to find $\partial w/\partial s$

2) Finding the directional derivative of $f(x,y) = \frac{x+y}{e^x-y}$ at (0,2) in the direction of the unit vector u (1,1,1) which makes an angle of $\theta=3\pi/4$ with the positive x-axis

3) If an ant is climbing a hill whose shape is given by $z= f(x,y) = 7 - \frac{(x^2+y^2)}{4}$, and the ant is at $(x,y) = (1,\sqrt{3})$

a) in which direction does the ant proceed to take the steepest route to the top?

b) if it climbs in that direction, at what angle above the horizontal will it be climbing, initially?

---I think this has to do with the gradient of f?---

and 4) Find equations for the tangent plane and normal line to $e^z = \sqrt{x^2+y^2+z^2}$ at (1,2,0).
I have tried a few different approaches to 1) and to 3) but am not too sure if they are going in the right direction.
If anyone could suggest anything it would be great!
Thanks so much!
2. Originally Posted by matt.qmar
Hello!
I have a few questions that I am having some trouble figuring out,
1) If $w = x^2+y^2+xyz$ where $x=\sin(st), y=\cos(s+t), z=e^{st}$, use an 'appropriate' version of the chain rule to find $\partial w/\partial s$

2) Finding the directional derivative of $f(x,y) = \frac{x+y}{e^x-y}$ at (0,2) in the direction of the unit vector u (1,1,1) which makes an angle of $\theta=3\pi/4$ with the positive x-axis

3) If an ant is climbing a hill whose shape is given by $z= f(x,y) = 7 - \frac{(x^2+y^2)}{4}$, and the ant is at $(x,y) = (1,\sqrt{3})$

a) in which direction does the ant proceed to take the steepest route to the top?

b) if it climbs in that direction, at what angle above the horizontal will it be climbing, initially?

---I think this has to do with the gradient of f?---

and 4) Find equations for the tangent plane and normal line to $e^z = \sqrt{x^2+y^2+z^2}$ at (1,2,0).
I have tried a few different approaches to 1) and to 3) but am not too sure if they are going in the right direction.
If anyone could suggest anything it would be great!
Thanks so much!
for (4)
Let $F(x,y,z) = e^z - \sqrt{x^2+y^2+z^2}$
Now, find $F_x$, $F_y$ and $F_z$, etc.

It's a typical problem.
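For (1), the chain rule gives $\frac{\partial w}{\partial s} = (2x+yz)\frac{\partial x}{\partial s} + (2y+xz)\frac{\partial y}{\partial s} + xy\frac{\partial z}{\partial s}$, which can be checked against a numerical derivative (a sketch, not part of the thread; the point $(s,t)=(0.7,0.3)$ is arbitrary):

```python
import math

def dw_ds_chain(s, t):
    """Chain rule for w = x^2 + y^2 + x*y*z with x=sin(st), y=cos(s+t), z=e^(st)."""
    x, y, z = math.sin(s * t), math.cos(s + t), math.exp(s * t)
    dx = t * math.cos(s * t)      # dx/ds
    dy = -math.sin(s + t)         # dy/ds
    dz = t * math.exp(s * t)      # dz/ds
    return (2 * x + y * z) * dx + (2 * y + x * z) * dy + x * y * dz

def dw_ds_numeric(s, t, h=1e-6):
    """Central finite difference of w with respect to s."""
    def w(s_):
        x, y, z = math.sin(s_ * t), math.cos(s_ + t), math.exp(s_ * t)
        return x * x + y * y + x * y * z
    return (w(s + h) - w(s - h)) / (2 * h)

s0, t0 = 0.7, 0.3
chain_val = dw_ds_chain(s0, t0)
numeric_val = dw_ds_numeric(s0, t0)
```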
http://mathhelpforum.com/trigonometry/159762-what-meant.html
# Thread:
1. ## What is meant by this?
I am having trouble understanding this problem.
It has to deal with either cosine or sine law.
The angle of depression of a fire noticed west of a fire tower is 6.2 degrees. The angle of depression of a pond, also west of the tower, is 13.5 degrees.
If the fire and pond are the same altitude, and the tower is 2.25 km from the pond on a direct line, how far is the fire from the pond?
So do I take the fire and pond are level together on the ground, on a horizontal plane from the angles noted above??
Not sure how to solve this, and do I have enough info??
2. Originally Posted by bradycat
I am having trouble understanding this problem.
It has to deal with either cosine or sine law.
The angle of depression of a fire noticed west of a fire tower is 6.2 degrees. The angle of depression of a pond, also west of the tower, is 13.5 degrees.
If the fire and pond are the same altitude, and the tower is 2.25 km from the pond on a direct line, how far is the fire from the pond?
So do I take the fire and pond are level together on the ground, on a horizontal plane from the angles noted above??
Not sure how to solve this, and do I have enough info??
1. Draw a rough sketch.
2. You are dealing with 2 right triangles with the height of the tower as the common leg.
3. Calculate the height of the tower (use the tan-function)
4. Calculate the distance fire-tower and then the distance I labeled "x".
5. For confirmation only: I've got $x \approx 2.722\ km$
Attached Thumbnails
3. Thanks for the help. I can figure it out from how you drew it. I was totally wrong on my sketch!! ))))
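The construction in the answer can be reproduced numerically (a sketch; it assumes, as the answer's use of the tangent function suggests, that the 2.25 km tower-to-pond distance is measured horizontally, which reproduces the quoted $x \approx 2.722$ km):

```python
import math

d_pond = 2.25                                   # km, tower to pond (horizontal)
height = d_pond * math.tan(math.radians(13.5))  # tower height, ~0.540 km
d_fire = height / math.tan(math.radians(6.2))   # tower to fire, ~4.973 km
x = d_fire - d_pond                             # pond to fire, ~2.722 km
```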
http://math.stackexchange.com/questions/170375/question-about-fourier-series
# Question about Fourier series
The Fourier series of a function $f: G \to \mathbb C$ where $G$ is a group is the representation of $f$ in terms of characters $\chi_g \in \mathrm{Hom}(G, S^1)$ of $G$.
I understand the case where $G$ is finite and discrete. Now I'm trying to generalise to periodic functions on $\mathbb R$ but I'm not so sure what I'm thinking is correct.
If $f: \mathbb R \to \mathbb C$ is a $2R$-periodic function, we set $G = \mathbb R / 2R\mathbb Z \cong [0, 2R)$. Then since $f$ is $2R$-periodic we can shift this by $R$ and consider $G = [-R, R)$; it makes no difference.
In the discrete case $|G|=n$ the characters are $x \mapsto e^{\frac{2 \pi i k x}{n}}$ for $0 \leq k < n$.
Now for some reason, the characters are $e^{i kx}$ for all $k \in \mathbb Z$. How do I see that these are the characters of $[-R,R)$?
-
I'm not sure what sort of equivalence you mean by $\mathbb R/2R\mathbb Z\cong [0,2R)$. The first is a topological group (which is what allows us to find characters) while the second is not, and the two are not even homeomorphic or homotopy equivalent as topological spaces. – Alex Becker Jul 13 '12 at 15:31
@AlexBecker I meant "group isomorphism". – Matt N. Jul 13 '12 at 15:32
1
But it's unclear to me what the group operation would look like on that. And since the standard topologies on the two are non-homeomorphic, it's not an isomorphism of topological groups, so for the purposes of Fourier analysis the two are different. – Alex Becker Jul 13 '12 at 15:36
@AlexBecker The group operation on $[0,2R)$ is addition mod $2R$. Ok, then my question remains : ) – Matt N. Jul 13 '12 at 15:42
1
Not if we give $[0,1)$ the subspace topology from $\mathbb R$, as it is then noncompact and simply connected while the circle is compact and not simply connected. The difference is that in $[0,1)$ points in small neighborhoods of $1$ are not in small neighborhoods of $0$. – Alex Becker Jul 13 '12 at 18:06
show 1 more comment
## 1 Answer
What you have is equivalent to the circle group $\mathbb R/\mathbb Z$, and its characters are indeed $e^{ikx}$ for $k\in \mathbb Z$. It is easy to verify that these are continuous group homomorphisms from $\mathbb R/\mathbb Z$ to itself (which is the definition of a character). To see that these are all characters of $\mathbb R/\mathbb Z$, suppose that $f:\mathbb R/\mathbb Z\to \mathbb R/\mathbb Z$ is a continuous homomorphism. Assume first $f$ is increasing. Let $k$ be the degree of $f$ and note that we can lift $f$ to a map $g:\mathbb R\to\mathbb R$ with $g(x)=f(x)+k\lfloor x\rfloor$. The map $g$ is in fact a continuous homomorphism, as if $\{x\}+\{y\}<1$ ($\{\cdot\}$ denotes fractional part) then $$g(x+y)=f(x+y)+k\lfloor x+y\rfloor= f(x)+f(y) + k\lfloor x\rfloor + k\lfloor y\rfloor=g(x)+g(y)$$ and otherwise $$g(x+y)=f(x+y)+k\lfloor x+y\rfloor=f(x)+f(y)-\lim\limits_{x\to 1^-}g(x) + k\lfloor x\rfloor + k\lfloor y\rfloor+k=g(x)+g(y)$$ as $\lim\limits_{x\to 1^-} g(x)=k$, which also proves continuity. Restricting our attention to $\mathbb Q$ we see that $n\cdot g(m/n)=g(m)=km$ so $g(m/n)=km/n$, thus $g|_{\mathbb Q}(x)=kx$. By continuity we have that $g(x)=kx$ on $\mathbb R$, and it follows that $f(x)=kx$. Viewing the circle as the group of unit complex numbers, this is $e^{ikx}$. Similarly, if $f$ is decreasing we get $e^{-ikx}$. Thus the characters of $\mathbb R/\mathbb Z$ are $e^{ikx}$ for $k\in\mathbb Z$.
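The defining homomorphism property can also be spot-checked numerically for the maps $x\mapsto e^{2\pi ikx}$ on $\mathbb R/\mathbb Z$ (a small sketch; the values of $k$, $x$, $y$ are arbitrary):

```python
import cmath
import math

def chi(k, x):
    """The character x -> e^{2 pi i k x} on R/Z."""
    return cmath.exp(2j * math.pi * k * x)

# chi_k((x + y) mod 1) should equal chi_k(x) * chi_k(y) for integer k.
deviations = []
for k in (-3, 1, 5):
    for x, y in ((0.2, 0.35), (0.7, 0.8)):   # 0.7 + 0.8 wraps past 1
        lhs = chi(k, (x + y) % 1.0)
        rhs = chi(k, x) * chi(k, y)
        deviations.append(abs(lhs - rhs))
max_dev = max(deviations)   # tiny; pure floating-point noise
```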
-
http://crypto.stackexchange.com/questions/129/why-did-nist-remove-the-lempel-ziv-compression-test-from-the-statistical-test-su?answertab=votes
# Why did NIST remove The Lempel-Ziv Compression test from the Statistical Test Suite?
NIST removed "The Lempel-Ziv Compression" test from the Statistical Test Suite in revision 2008 and above and has not incorporated it since – see revision 2010.
Why was it removed? Does it no longer provide sufficient testing of a PRNG or was it simply superseded by better tests?
-
## 1 Answer
According to the paper On Lempel-Ziv Complexity of Sequences by Doganaksoy and Gologlu,
A test based on Lempel-Ziv complexity was used in the NIST test suite, to test the randomness of sequences. However the test had some weaknesses. First of all, the test could only be applied to data of a specified length: $10^6$ bits. Moreover, the test used empirical data generated by SHA-1 (under randomness assumptions) for estimating the expected value of Lempel-Ziv complexity of sequences of length $10^6$ bits. Apparently, the data generated by SHA-1 led to not-so-good an estimate, hence, for instance, first $10^6$ bits of the binary expansion of $e$ failed the randomness test. Using asymptotic formulae for an estimate will not work either, since the sequences, as we will see in the forthcoming sections, are distributed tightly around the mean. Recently, apparently because of the spelt out reasons, Lempel-Ziv test had been excluded from the NIST test suite.
The Crypt-XS package, part of the NIST suite, includes the simply named sequence complexity test, which is based on Lempel-Ziv compression.
-
http://mathoverflow.net/revisions/90726/list
## Return to Answer
1 [made Community Wiki]
Bourgain has a nice paper, Pointwise ergodic theorems for arithmetic sets (subsequently extended in various directions by other authors, including my co-author, Máté Wierdl), on proving a version of the Birkhoff ergodic theorem where one averages along the sequence of square numbers, rather than the sequence of integers. That is: for a measure-preserving system $T\colon X\to X$ and a (square-integrable) function $f$, one considers convergence of the averages $$\frac{1}{N}\sum_{j=1}^N f(T^{j^2}x).$$
Bourgain proves using analytic number theory techniques involving exponential sums that there is convergence almost everywhere, just as in the regular Birkhoff ergodic theorem (although not to the integral as in the regular case and the convergence fails for a typical $L^1$ function, unlike the regular case).
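These averages are easy to experiment with numerically. A minimal sketch for the circle rotation $T(x)=x+\alpha \bmod 1$ with observable $f(x)=\cos(2\pi x)$ (my choice of toy system and observable, not one from Bourgain's paper); for irrational $\alpha$ the averages along the squares tend to $\int f = 0$:

```python
import math

def average_along_squares(x, alpha, f, N):
    """(1/N) * sum_{j=1}^{N} f(T^{j^2} x) for the rotation
    T(x) = x + alpha (mod 1)."""
    return sum(f((x + j * j * alpha) % 1.0) for j in range(1, N + 1)) / N

f = lambda x: math.cos(2 * math.pi * x)
alpha = math.sqrt(2)  # irrational rotation number
for N in (100, 1000, 10000):
    print(N, average_along_squares(0.3, alpha, f, N))
```

The decay of these averages reflects the equidistribution of $j^2\alpha \bmod 1$ (a Weyl sum estimate), which is the elementary shadow of the exponential-sum techniques in Bourgain's proof.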
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 3, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9195836186408997, "perplexity_flag": "head"}
|
http://mathhelpforum.com/new-users/202396-introduction.html
|
# Thread:
1. ## Introduction
Hi,
I came to MHF through google, looking for a *direct* proof that for all integers x it holds that if $x^2$ is even, then $x$ is even. There's a post on this here: proof that if a square is even then the root is too., but it doesn't give a direct proof, unfortunately. Anyway, MHF seemed like a nice and helpful site, so I registered, and here I am.
Tomas
2. ## Re: Introduction
The contrapositive proof is more direct than the direct proof in this case...
3. ## Re: Introduction
Yes, it's quite neat, that's precisely the point: I'm looking at this as an example of a case where proving the contrapositive is a lot easier than giving a direct proof. Of course, I need the direct proof to justify this claim. (I'm talking about deriving that there exists an $a$ s.t. $x=2a$ from the given that $x^2=2b$ for some integer $b$.)
4. ## Re: Introduction
See what happens when x is odd and you square it.
See what happens when x is even and you square it.
5. ## Re: Introduction
Well, of course if $x$ is odd, then $x^2$ is odd, and if $x$ is even, then $x^2$ is even. But without wanting to sound pedantic, I'd say that's a contrapositive proof in disguise. I'd like to start reasoning about $x^2$ and end at $x$.
I just discussed it with a colleague, and we came up with this.
By the fundamental theorem of arithmetic, $x^2$ has a unique prime factorization, so $x^2=p_1^{c_1}\cdot p_2^{c_2}\cdot\ldots\cdot p_n^{c_n}$, for certain primes $p_i$, and positive whole numbers $c_i$. (Assume these primes are listed in ascending order.) Because $x^2$ is even, $p_1=2$ and because it is a square, we have that for all $i$, $2\mid c_i$. (This shows why, if $x^2$ is even, $4\mid x^2$.) So $x=2^{c_1/2}\cdot p_2^{c_2/2}\cdot\ldots\cdot p_n^{c_n/2}$, which means $x$ is even. QED.
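As a finite sanity check of the statement (a brute-force verification over a range, of course not a substitute for the proof):

```python
def even_square_implies_even(limit):
    """Check that for every integer x with |x| <= limit,
    x**2 even implies x even."""
    for x in range(-limit, limit + 1):
        if (x * x) % 2 == 0 and x % 2 != 0:
            return False
    return True

print(even_square_implies_even(1000))  # True
```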
6. ## Re: Introduction
It's not a contrapositive proof. You are seeing what the possible ways are to get even-valued x^2. It's a proof by exhaustion, which is direct.
7. ## Re: Introduction
OK, thank you. What I meant was "direct" in the sense that I wanted to reason from $x^2$ being even to $x$ being even. What I meant by "contrapositive in disguise" is that just one of those cases (the one of x being odd) already provides the contrapositive proof.
I'm sorry, I'm new here; still learning the proper way of expressing myself on the forum. Thanks.
8. ## Re: Introduction
Logically speaking, a direct proof is an argument that can show that statement 1 implies statement 2. You can start wherever you like, as long as you get to an argument that shows this implication. Exhaustive proofs like what I've shown you are direct.
9. ## Re: Introduction
@thomasklos that's a hard 1. i dont even know the answer. help please!
10. ## Re: Introduction
Originally Posted by EhtaChan
@thomasklos that's a hard 1. i dont even know the answer. help please!
Hi, there are several (ideas for) proofs on this page. Can you try to explain what you don't understand?
11. ## Re: Introduction
Hi Friends!
I am the new member of the forum ...........
So first of all I would like say ".............HELLO........"
to all of u Dear..................
Thanks...........
12. ## Re: Introduction
Can u suggest me how to solve pie chart questions very quickly??????????????
thanks
13. ## Re: Introduction
Hi, pssingh1001,
Two things. First, start a new thread for a new issue (introduction, new question, etc.). Second, you are more likely to receive help if you post a concrete question. A forum is not a replacement for textbooks.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 46, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9394517540931702, "perplexity_flag": "middle"}
|
http://mathoverflow.net/revisions/99926/list
|
## Return to Answer
5 deleted 16 characters in body
This approach to deformations is taken, for instance, in all of the original papers of Kodaira-Spencer and Nirenberg. You can have a look at On the existence of deformations of complex analytic structures, Annals, Vol.68, No.2, 1958
http://www.jstor.org/discover/10.2307/1970256?uid=3737608&uid=2129&uid=2&uid=70&uid=4&sid=47699092130607
but there are many other papers by the same authors.
For a nice and compact exposition, you can look at these class notes of Christian Schnell: http://homepages.math.uic.edu/~cschnell/pdf/notes/kodaira.pdf
Of course, the Maurer-Cartan equation and deformations (of various structures) via dgla's have been used by many other people since the late 1950-ies: Goldman & Millson, Gerstenhaber, Stasheff, Deligne, Quillen, Kontsevich.
Regarding the formula: that's a typo, indeed. You have two eigen-bundle decompositions, for $I$ and $I_t$:
$$T_{M, \mathbb{C}} = T^{1,0}\oplus T^{0,1}\simeq T^{1,0}_t\oplus T^{0,1}_t$$
and you write $T^{0,1}_{t}=\textrm{graph }\phi$, where $\phi: T^{0,1}_M\to T^{1,0}_M$. So actually
$$\phi = \textrm{pr}^{1,0}\circ \left.\left(\textrm{pr}^{0,1}\right)\right|_{T^{1,0}_t}^{-1}.$$
In local coordinates, $$\phi = \sum_{j,k=1}^{\dim_{\mathbb{C}} M}h_{jk}(t,z)d\overline{z}_j\otimes \frac{\partial}{\partial z_k},$$ and $T^{0,1}_t$ is generated (over the smooth functions) by
$$\frac{\partial}{\partial \overline{z_j}} + \sum_{k=1}^{\dim_{\mathbb{C}}M}h_{jk}\frac{\partial}{\partial z_k}.$$
Regarding the question "where does $t$ come from?", the answer is "From Ehresmann's Theorem": given a proper holomorphic submersion $\pi:\mathcal{X}\to \Delta$, you can choose a holomorphically transverse trivialisation $\mathcal{X}\simeq X\times \Delta$, $X=\pi^{-1}(0)= (M,I)$. In this way you get yourself two (almost) complex structures on $X\times \Delta$, which you can compare.
ADDENDUM I also second YangMills' suggestion to have a look at Chapter 2 of Gross-Huybrechts-Joyce. You can also try Chapter 1 of K. Fukaya's book "Deformation Theory, Homological algebra, and Mirror Symmetry", as well as the Appendix to Homotopy invariance of the Kuranishi Space by Goldman and Millson (Illinois J. of Math, vol.34, No.2, 1990). In particular, you'll see how one uses formal Kuranishi theory to avoid dealing with the convergence of the power series for $\phi(t)$. For deformations of compact complex manifolds, the convergence was proved by Kodaira-Nirenberg-Spencer. Fukaya says a little bit about the convergence of this series in general, i.e., for other deformation problems.
4 added 717 characters in body
This approach to deformations is taken, for instance, in all of the original papers of Kodaira-Spencer and Nirenberg. For instance, you can have a look at On the existence of deformations of complex analytic structures, Annals, Vol.68, No.2, 1958
http://www.jstor.org/discover/10.2307/1970256?uid=3737608&uid=2129&uid=2&uid=70&uid=4&sid=47699092130607
but there are several other papers of the same authors.
For a nice and compact exposition, you can look at these class notes of Christian Schnell: http://homepages.math.uic.edu/~cschnell/pdf/notes/kodaira.pdf
Of course, the Maurer-Cartan equation and deformations (of various structures) via dgla's have been used by many other people since the late 1950-ies: Goldman & Millson, Gerstenhaber, Stasheff, Deligne, Quillen, Kontsevich.
Regarding the formula: that's a typo, indeed. You have two eigen-bundle decompositions, for $I$ and $I_t$:
$$T_{M, \mathbb{C}} = T^{1,0}\oplus T^{0,1}\simeq T^{1,0}_t\oplus T^{0,1}_t$$
and you write $T^{0,1}_{t}=\textrm{graph }\phi$, where $\phi: T^{0,1}_M\to T^{1,0}_M$. So actually
$$\phi = \textrm{pr}^{1,0}\circ \left.\left(\textrm{pr}^{0,1}\right)\right|_{T^{1,0}_t}^{-1}.$$
In local coordinates, $$\phi = \sum_{j,k=1}^{\dim_{\mathbb{C}} M}h_{jk}(t,z)d\overline{z}_j\otimes \frac{\partial}{\partial z_k},$$ and $T^{0,1}_t$ is generated (over the smooth functions) by
$$\frac{\partial}{\partial \overline{z_j}} + \sum_{k=1}^{\dim_{\mathbb{C}}M}h_{jk}\frac{\partial}{\partial z_k}$$
Regarding the question "where does $t$ come from?", the answer is "From Ehresmann's Theorem": given a proper holomorphic submersion $\pi:\mathcal{X}\to \Delta$, you can choose a holomorphically transverse trivialisation $\mathcal{X}\simeq X\times \Delta$, $X=\pi^{-1}(0)= (M,I)$. In this way you get yourself two (almost) complex structures on $X\times \Delta$, which you can compare.
ADDENDUM I also second YangMills' suggestion to have a look at Chapter 2 of Gross-Huybrechts-Joyce. Also, you can look at Chapter 1 of K. Fukaya's book "Deformation Theory, Homological algebra, and Mirror Symmetry", or at the Appendix of Goldman-Millson paper Homotopy invariance of the Kuranishi Space (Illinois J. of Math, vol.34, No.2, 1990). In particular, you'll see how one uses formal Kuranishi theory to avoid dealing with the convergence of the power series for $\phi(t)$. For deformations of compact complex manifolds, the convergence was proved by Kodaira-Nirenberg-Spencer. Fukaya says a little bit about the convergence of this series in general, i.e., for other deformation problems.
3 added 20 characters in body
This approach to deformations is taken, for instance, in all of the original papers of Kodaira-Spencer and Nirenberg. For instance, you can have a look at On the existence of deformations of complex analytic structures, Annals, Vol.68, No.2, 1958
http://www.jstor.org/discover/10.2307/1970256?uid=3737608&uid=2129&uid=2&uid=70&uid=4&sid=47699092130607
but there are several other papers of the same authors.
For a nice and compact exposition, you can look at these class notes of Christian Schnell: http://homepages.math.uic.edu/~cschnell/pdf/notes/kodaira.pdf
Of course, the Maurer-Cartan equation and deformations (of various structures) via dgla's have been used by many other people since the late 1950-ies: Goldman & Millson, Gerstenhaber, Stasheff, Deligne, Quillen, Kontsevich.
Regarding the formula: that's a typo, indeed. You have two eigen-bundle decompositions, for $I$ and $I_t$:
$$T_{M, \mathbb{C}} = T^{1,0}\oplus T^{0,1}\simeq T^{1,0}_t\oplus T^{0,1}_t$$
and you write $T^{0,1}_{t}=\textrm{graph }\phi$, where $\phi: T^{0,1}_M\to T^{1,0}_M$. So actually
$$\phi = \textrm{pr}^{1,0}\circ \left.\left(\textrm{pr}^{0,1}\right)\right|_{T^{1,0}_t}^{-1}.$$
In local coordinates, $$\phi = \sum_{j,k=1}^{\dim_{\mathbb{C}} M}h_{jk}(t,z)d\overline{z}_j\otimes \frac{\partial}{\partial z_k},$$ and $T^{0,1}_t$ is generated (over the smooth functions) by
$$\frac{\partial}{\partial \overline{z_j}} + \sum_{k=1}^{\dim_{\mathbb{C}}M}h_{jk}\frac{\partial}{\partial z_k}$$
Regarding the question "where does $t$ come from?", the answer is "From Ehresmann's Theorem": given a proper holomorphic submersion $\pi:\mathcal{X}\to \Delta$, you can choose a holomorphically transverse trivialisation $\mathcal{X}\simeq X\times \Delta$, $X=\pi^{-1}(0)= (M,I)$. In this way you get yourself two (almost) complex structures on $X\times \Delta$, which you can compare.
2 edited body
This approach to deformations is taken, for instance, in all of the original papers of Kodaira-Spencer and Nirenberg. For instance, you can have a look at On the existence of deformations of complex analytic structures, Annals, Vol.68, No.2, 1958
http://www.jstor.org/discover/10.2307/1970256?uid=3737608&uid=2129&uid=2&uid=70&uid=4&sid=47699092130607
but there are several other papers of the same authors.
For a nice and compact exposition, you can look at these class notes of Christian Schnell: http://homepages.math.uic.edu/~cschnell/pdf/notes/kodaira.pdf
Of course, the Maurer-Cartan equation and deformations (of various structures) via dgla's have been used by many other people since the late 1950-ies: Gerstenhaber,Stasheff, Deligne, Quillen, Kontsevich.
Regarding the formula: that's a typo, indeed. You have two eigen-bundle decompositions, for $I$ and $I_t$:
$$T_{M, \mathbb{C}} = T^{1,0}\oplus T^{0,1}\simeq T^{1,0}_t\oplus T^{0,1}_t$$
and you write $T^{0,1}_{t}=\textrm{graph }\phi$, where $\phi: T^{0,1}_M\to T^{1,0}_M$. So actually
$$\phi = \textrm{pr}^{1,0}\circ \left.\left(\textrm{pr}^{0,1}\right)\right|_{T^{1,0}_t}^{-1}.$$
In local coordinates, $$\phi = \sum_{j,k=1}^{\dim_{\mathbb{C}} M}h_{jk}(t,z)d\overline{z}_j\otimes \frac{\partial}{\partial z_k},$$ and $T^{0,1}_t$ is generated (over the smooth functions) by
$$\frac{\partial}{\partial \overline{z_j}} + \sum_{k=1}^{\dim_{\mathbb{C}}M}h_{jk}\frac{\partial}{\partial z_k}$$
Regarding the question "where does $t$ come from?", the answer is "From Ehresmann's Theorem": given a proper holomorphic submersion $\pi:\mathcal{X}\to \Delta$, you can choose a holomorphically transverse trivialisation $\mathcal{X}\simeq X\times \Delta$, $X=\pi^{-1}(0)= (M,I)$. In this way you get yourself two (almost) complex structures on $X\times \Delta$, which you can compare.
1
This approach to deformations is taken, for instance, in all of the original papers of Kodaira-Spencer and Nirenberg. For instance, you can have a look at On the existence of deformations of complex analytic structures, Annals, Vol.68, No.2, 1958
http://www.jstor.org/discover/10.2307/1970256?uid=3737608&uid=2129&uid=2&uid=70&uid=4&sid=47699092130607
but there are several other papers of the same authors.
For a nice and compact exposition, you can look at these class notes of Christian Schnell: http://homepages.math.uic.edu/~cschnell/pdf/notes/kodaira.pdf
Of course, the Maurer-Cartan equation and deformations (of various structures) via dgla's have been used by many other people since the late 1950-ies: Gerstenhaber,Stasheff, Deligne, Quillen, Kontsevich.
Regarding the formula: that's a typo, indeed. You have two eigen-bundle decompositions, for $I$ and $I_t$:
$$T_{M, \mathbb{C}} = T^{1,0}\oplus T^{0,1}\simeq T^{1,0}_t\oplus T^{0,1}_t$$
and you write $T^{0,1}_{t}=\textrm{graph }\phi$, where $\phi: T^{0,1}_M\to T^{1,0}_M$. So actually
$$\phi = \textrm{pr}^{1,0}\circ \left.\left(\textrm{pr}^{0,1}\right)^{-1}\right|_{T^{1,0}_t}.$$
In local coordinates, $$\phi = \sum_{j,k=1}^{\dim_{\mathbb{C}} M}h_{jk}(t,z)d\overline{z}_j\otimes \frac{\partial}{\partial z_k},$$ and $T^{0,1}_t$ is generated (over the smooth functions) by
$$\frac{\partial}{\partial \overline{z_j}} + \sum_{k=1}^{\dim_{\mathbb{C}}M}h_{jk}\frac{\partial}{\partial z_k}$$
Regarding the question "where does $t$ come from?", the answer is "From Ehresmann's Theorem": given a proper holomorphic submersion $\pi:\mathcal{X}\to \Delta$, you can choose a holomorphically transverse trivialisation $\mathcal{X}\simeq X\times \Delta$, $X=\pi^{-1}(0)= (M,I)$. In this way you get yourself two (almost) complex structures on $X\times \Delta$, which you can compare.
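For completeness, the condition singling out which $\phi$ define genuine (integrable) complex structures in this picture is the Maurer-Cartan equation; this is a standard fact of the dgla approach, stated here for reference:

```latex
% \phi \in A^{0,1}(T^{1,0}_M) defines an integrable complex structure
% precisely when it satisfies the Maurer-Cartan equation
\[
  \overline{\partial}\phi + \tfrac{1}{2}\,[\phi,\phi] = 0,
\]
% where [\cdot,\cdot] couples the Lie bracket of vector fields
% with the wedge product of (0,1)-forms.
```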
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 53, "mathjax_display_tex": 20, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8407824635505676, "perplexity_flag": "middle"}
|
http://physics.stackexchange.com/questions/tagged/speed-of-light?sort=active&pagesize=30
|
# Tagged Questions
The speed of light is a fundamental universal constant that marks the maximum speed at which information can propagate. Its value is $299792458\frac{\mathrm{m}}{\mathrm{s}}$.
2answers
103 views
### Extended Rigid Bodies in Special Relativity
I was reading Landau & Lifhsitz's Classical Field Theory and I noticed that they mention that an extended rigid body isn't "relativistically correct". For example, if you consider a rigid rod ...
8answers
2k views
### Would time freeze if you can travel at the speed of light?
I read with interest about Einstein's Theory of Relativity and his proposition about the speed of light being the speed limit for anything with mass. So, if I were ...
1answer
44 views
### Does the magnitude of a mass affect the velocity?
Imagine that I shrink my entire mass to fit within the volume of a light particle. If I was then 'hit' by another light particle would my greater mass affect my gain in velocity from this collision ...
3answers
76 views
### What causes the permittivity and permeability of vacuum?
When light travels through a material, it gets "slowed down" (at least its net speed decreases). The atoms in the material "disturb" the light in some way which causes it to make stops on its path. ...
2answers
96 views
### Superluminal particles with causality
What kind of CLASSICAL theories would allow to true (non-apparent) superluminal particles (beyond speed of light, BSOL) agreeing with causality to exist? I mean, are causal superluminal classical ...
1answer
28 views
### Relation of color and frequency for the visible spectrum
In this question the OP is looking for a way to see light that is outside of the visible spectrum without using electronic sensors. This got me wondering about the visible spectrum itself. Typically ...
1answer
80 views
### Describing physical constants in alternate wording; c = there can only be 671million miles of space for every second of time [closed]
This spawns from part of an answer to a question I asked. All sorts of things go to 0 and/or ∞ if you start boosting at c, and so you cannot boost into and out of a photon's frame. It ...
1answer
95 views
### Neutrinos and Speed of light
Einstein's Special Theory of relativity postulates that the speed of light is same for all frames. Suppose a neutrino is there moving at the speed of light. Then will that neutrino also be flowing ...
1answer
49 views
### Zero uncertainty constant and a unit change
So, we know the speed of light with zero uncertainty. We also know that values of $\epsilon_0$ (electric constant) and $\mu_0$ (magnetic constant) are known with zero uncertainty. My questions are ...
13answers
5k views
### How does gravity escape a black hole?
My understanding is that light can not escape from within a black hole (within the event horizon). I've also heard that information cannot propagate faster than the speed of light. It would seem to ...
1answer
111 views
### Why does Lorentz factor not hold for relativistic mass when we apply it to photons? [duplicate]
We know that the photon itself is massless particle $m_0=0$. But we also know, that the mass of the objects does increase with their energy. And we know that under certain circumstances (gravity, ...
0answers
69 views
### Is there any proof that the speed of gravity is limited? [duplicate]
I must warn that though I'm arguing with black holes, I'm not asking how gravity escapes the black hole! I want to know if the absolute speed of gravity waves was proven by an experiment. We ...
9answers
6k views
### Why does the mass of an object increase when its speed approaches that of light?
I'm reading Nano: The Essentials by T. Pradeep and I came upon this statement in the section explaining the basics of scanning electron microscopy. However, the equation breaks down when the ...
1answer
82 views
### Could the shadow move with faster-than-light speed? [duplicate]
If I make a huge laser with a figure for shadow in front of the laser, and I shine it on to the moon, will I see the light from the laser AND the shadow moving the same speed? (I read somewhere the ...
3answers
185 views
### Could some Red and Blue shifts be the result of light passing through “dark matter”?
As i see it, light behaves in certain ways, as the Double Slit experiement shows, So when light comes into contact with dark matter, it becomes both a wave and a particle, the wave is bent around the ...
1answer
95 views
### Could entropy explain dark energy?
This was 3rd beer idea, so please bear with me. What if the universe was not actually expanding but the speed of light was slowing? Wouldn't that be indistinguishable to our observations? Either way ...
3answers
98 views
### Reaching the speed of light via quantum mechanical uncertainty?
Suppose you accelerate a body to very near the speed of light $c$ where $v = c - \epsilon$. Although this would take an enormous energy, is it possible the last arbitrarily small velocity needed -- ...
1answer
81 views
### Is the speed of light related to the mass of the universe?
If the mass of the universe were cut in half, would it affect the speed of light? Would it be twice as fast? Would it stay the same? Do we have instruments that are sensitive enough to measure the ...
2answers
61 views
### Special Theory of relativity on electromagnetic waves
Since time slows down and length contracts, when we travel almost at speed of light, if the speed of light (or EM waves) remains same and the wavelength of light remains same, do we measure the ...
1answer
46 views
### Live feed from a Rocket traveling near the speed of light?
Okay, odd question popped up in my physics class today. If a rocket ship is traveling at .99c for 1 year, and is streaming a video at 30 frames/sec to earth, how would the earth feed be affected? ...
1answer
187 views
### drift velocity of electrons in a superconductor
is there a formula for the effective speed of electron currents inside superconductors? The formula for normal conductors is: $$V = \frac{I}{nAq}$$ I wonder if there are any changes to this ...
1answer
62 views
### Magnets and speed of light
I am in no way a physicist but I do have a fascination with physics. My question is if magnets are being explored / studied as a potential source to achieve the speed of light and if that is even ...
0answers
59 views
### How does this paper relate to standard QED?
This paper proposes a microscopic mechanism for generating the values of $c, \epsilon_0, \mu_0$. They state that their vacuum is assumed to contain ephemeral (meaning existing within the limits of ...
3answers
102 views
### Stuff can't go at the speed of light - in relation to what? [duplicate]
We all know that stuff can't go faster than the speed of light - it's length becomes negative and all kinds of weird stuff happens. However, this is in relation to what? If two objects, each moving ...
1answer
104 views
### Michelson–Morley @ Home
The Michelson-Morley experiment seems to have taken many years, resources and a nervous breakdown to complete. Is it possible to recreate a variation of this experiment at home for say, under \$1000, ...
2answers
102 views
### Is Earth's orbit around the sun affected by the ~8 minutes light delay?
Gravitational change occurs at the speed of light. As a consequence, we experience on Earth the gravitational attraction of the sun based on its position relative to us ~8 minutes ago. How does this ...
3answers
69 views
### What is the cause the light is affected by gravity? [duplicate]
I know that photons have no mass and that a photons exist only moving at the speed of light. So what is the cause that a massive astronomical object can bend a ray of light? I have two thoughts, but I ...
3answers
312 views
### Can something travel faster than light if it has always been travelling faster than light?
I know there are zillions of questions about faster than light travel, but please hear me out. According to special relativity, it is impossible to accelerate something to the speed of light. However, ...
1answer
28 views
### If there's a light ray and it's turned to a new location by a certain angle
Imagine that there's a light ray, with source at point A, and it's directed towards point B (which is very far from point A) and it continues for a huge distance. How will an observer at point B ...
1answer
55 views
### Rømer's determination of the speed of light
I am trying to understand Rømer's determination of the speed of light ($c$). The geometry of the situation is shown in the image below. The determination involves measuring apparent fluctuations in ...
2answers
92 views
### What is the mass of a photon moving at the speed of light? [duplicate]
What is the mass of a photon moving at the speed of light? And if it does not have mass, how is it affected by gravity? Also why does Einstein's general relativity support that a gravitational wave ...
3answers
124 views
### Why are black holes special?
A black hole is where it's mass is great enough that light can't escape at a radius above the surface of the mass? I've been told that strange things happen inside the event horizon such as ...
1answer
40 views
### Wavefront emitted by bodies at traveling near the velocity of light
I studied that no body can travel with the velocity of light. But, assuming that when a body moves nearly velocity of light, will it obey length contraction law of Einstein or will it emit the same ...
10answers
6k views
### Does the Pauli exclusion principle instantaneously affect distant electrons?
According to Brian Cox in his A night with the Stars lecture$^1$, the Pauli exclusion principle means that no electron in the universe can have the same energy state as any other electron in the ...
2answers
60 views
### Speed and transparency of light
I have been puzzled with a fact that as an object moves faster, it ceases its property of opacity. I mean to say that as an object moves faster we can see right through it (more clearly than in a ...
3answers
546 views
### Why is a black hole black?
In general relativity (ignoring Hawking radiation), why is a black hole black? Why nothing, not even light, can escape from inside a black hole? To make the question simpler, say, why is a ...
3answers
154 views
### Is there absolute proof that an object cannot exceed the speed of light?
Have any known experiments ruled out travelling faster than the speed of light? Or is this just a widely accepted theory?
1answer
60 views
### How do we know that light is massless? [duplicate]
Almost everybody knows that light is massless. But where this come from and how it can be proven (experimentally or theoretically)? I actually found this article which explains and calculates the mass ...
2answers
12k views
### Why does wavelength change as light enters a different medium?
When light waves enter a medium of higher refractive index than the previous, why is it that: Its wavelength decreases? The frequency of it has to stay the same?
0answers
151 views
### How is it possible the speed of light is not constant?
I was reading this article recently, which summarizes a couple of new studies into the speed of light. In one paper, Marcel Urban from the University of Paris-Sud, located in Orsay, France and his ...
2answers
39 views
### About hubble observatory and distant galaxies [duplicate]
According to Hubble observatory, the age of universe is 14 billion years. But, the distant galaxies are about 40 billion light years. How could that simply be possible? That means the information that ...
2answers
295 views
### Why cosmic background radiation is not ether?
Why is cosmic background radiation not ether? I mean, it's everywhere and it's a radiation, so we can measure the Doppler effect by moving with a velocity.
2answers
158 views
### Has anyone ever measured the one way speed of light perpendicular to the Earth at the Earth's surface?
1 - Has anyone ever measured the one way speed of photons traveling perpendicular to the Earth at the Earth's surface? 2 - Given our current understanding of Physics is there any way both the upward ...
2answers
152 views
### Can a dot of light travel faster than the speed of light? [duplicate]
Say I have a laser. If I spin the laser so that the beam sweeps in an arc along a very distant object, could that dot travel faster than the speed of light? In Diagram form:
5answers
420 views
### Special Relativity Second Postulate
That the speed of light is constant for all inertial frames is the second postulate of special relativity but this does not means that nothing can travel faster than light. so is it possible the ...
3answers
505 views
### Special Relativity and $E = mc^2$
I read somewhere that $E=mc^2$ shows that if something was to travel faster than the speed of light then they would have infinite mass and would have used infinite energy. How does the equation show ...
### How does the wavelength change in relativistic limit?
In the text, it reads that the momentum of a particle will change if it is moving at speed close to light speed. In the general case, the wavelength is given as $$\lambda = \frac{h}{p}$$ and p ...
### How does $E=mc^2$ put an upper limit to velocity of a body?
How does $E=mc^2$ put a upper limit to velocity of a body? I have read some articles on speed of light and they just tell me that it is the maximum velocity that can be acquired by any particle. How ...
### Special Relativity - speed of light question
Just a basic question: I know that if you are traveling at $x$ speed the time will pass for you slower than to an observer that is relatively stopped. That's all just because a photon released at the ...
### Do all the 4 forces of nature act at the same speed? [duplicate]
It is believed that gravity, the weakest of the four forces propagates at the speed of light, cf. e.g. this Phys.SE post. One would expect (perhaps erroneously) that the other, stronger, forces acted ...
http://mathhelpforum.com/discrete-math/201574-permutations-maximal-order-odd-even.html
# Thread:
1. ## Permutations: maximal order and odd/even
Problem number one goes like this: determine, in cycle notation, a σ ∈ S10 with maximal order, in other words so that the order of σ is at least the order of any other permutation in S10. State this order, too.
Is there any systematic method of doing this or is trying different combinations the only way?
The second problem: Let π denote that permutation on the set {1, 2, 3, 4, 5, 6, 7} which can be described with the product
π = (1 2 3 4 5) (1 7 6 5) (1 3 5 7)
a) Write π as a product of disjoint cycles.
b) Is π an odd or even permutation?
I would go about solving a) like this: write the cycles of π in one-line notation from right to left. Then, write [1 2 3 4 5 6 7] at the top and the final result at the bottom
(which is [4 3 2 5 6 1 7]) and figure out the transpositions. But the solution is completely different: (1 4 5 6) (2 3) (7). How did they get that answer? Also, I figure that I haven't understood what a disjoint cycle is. Is it when you write the cycle form like (a b c) (d e)? But isn't that always the way to do it?
For b) the solution is what I tried to do in a) and count the number of transpositions. The answer they state is (1 6) (1 5) (1 4) (2 3). Is there a simple way to get this answer or do I have to write a long list with a bubble sort from top to bottom?
Grateful for answers!
2. ## Re: Permutations: maximal order and odd/even
If a permutation is expressed as a product of disjoint cycles then the order of the permutation is the lcm of the lengths of the cycles.
So the order of (1 2 3)(7 8) is 6.
For $S_{10}$
10 = 5 + 3 + 2.
5 x 3 x 2 = 30
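The 10 = 5 + 3 + 2 choice can be checked by brute force: the maximal order in S_n is the largest lcm over all partitions of n. A Python sketch (helper names are my own, not from the thread):

```python
from math import gcd
from functools import reduce

def partitions(n, max_part=None):
    """Yield all partitions of n as tuples of parts, largest part first."""
    if max_part is None:
        max_part = n
    if n == 0:
        yield ()
        return
    for k in range(min(n, max_part), 0, -1):
        for rest in partitions(n - k, k):
            yield (k,) + rest

def lcm(parts):
    return reduce(lambda a, b: a * b // gcd(a, b), parts, 1)

# Maximal order of a permutation in S_10 = max lcm over partitions of 10
best = max(partitions(10), key=lcm)
print(best, lcm(best))   # → (5, 3, 2) 30
```

So a 5-cycle times a disjoint 3-cycle times a disjoint 2-cycle, e.g. (1 2 3 4 5)(6 7 8)(9 10), attains the maximal order 30.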
3. ## Re: Permutations: maximal order and odd/even
For your permutation $\pi$ you are working from the wrong end. These products are worked out left to right.
So
1->2
2->3->5
3->4
4->5->1->3
5->1->7->1
6->5->7
7->6
So you end up with
1 2 3 4 5 6 7
2 5 4 3 1 7 6
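The discrepancy with the book's answer comes down to composition convention. A Python sketch (my own helpers, not from the thread) that applies the rightmost cycle first reproduces the book's (1 4 5 6)(2 3); reading left to right instead gives the table above:

```python
def cycle_to_map(cycle, n=7):
    """Turn a cycle like (1,2,3,4,5) into a permutation dict on {1..n}."""
    m = {i: i for i in range(1, n + 1)}
    for a, b in zip(cycle, cycle[1:] + cycle[:1]):
        m[a] = b
    return m

def compose(f, g):
    """(f o g)(x) = f(g(x)) -- g acts first."""
    return {x: f[g[x]] for x in g}

def to_cycles(perm):
    """Decompose a permutation dict into disjoint cycles (fixed points omitted)."""
    seen, cycles = set(), []
    for start in perm:
        if start in seen:
            continue
        cyc, x = [], start
        while x not in seen:
            seen.add(x)
            cyc.append(x)
            x = perm[x]
        if len(cyc) > 1:
            cycles.append(tuple(cyc))
    return cycles

a = cycle_to_map((1, 2, 3, 4, 5))
b = cycle_to_map((1, 7, 6, 5))
c = cycle_to_map((1, 3, 5, 7))
pi = compose(a, compose(b, c))    # rightmost cycle acts first
print(to_cycles(pi))              # → [(1, 4, 5, 6), (2, 3)]

# Parity: a k-cycle is k-1 transpositions, so count them
parity = sum(len(cyc) - 1 for cyc in to_cycles(pi)) % 2
print("even" if parity == 0 else "odd")   # → even
```

Under this convention the product is (1 4 5 6)(2 3), which is 3 + 1 = 4 transpositions, hence even.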
4. ## Re: Permutations: maximal order and odd/even
I may be exposing my ignorance now but I worked out a set of transpositions like this:
Resulting in (1 2)(1 5)(3 4)(6 7).
I hope this is all correct.
http://mathoverflow.net/questions/90490/blow-up-along-a-subscheme-and-along-its-associated-reduced-closed-subscheme
## Blow-up along a subscheme and along its associated reduced closed subscheme
### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points)
Let $X$ be a noetherian scheme and let $Y$ be a closed subscheme of $X$. What relation is there between $\mathrm{Bl} _ {Y}(X)$ and $\mathrm{Bl} _{ Y _{\mathrm{red}}}(X)$ ?
Thanks.
-
## 2 Answers
There is no map from one blow up to the other, and definitely not an isomorphism. Please see my comments to J.C. Ottem's answer.
However, if you replace radical by integral closure, then everything is fine.
Here's what I mean, if $I$ is an ideal and $J$ is its integral closure, then you always have an everywhere defined map
$$Bl_J X \to Bl_I X.$$
This need not be an isomorphism, indeed the integral closure of $(x^2, y^2)$ is $(x^2, xy, y^2)$. The blow up of the latter ideal is the normalization of the blow up of the former.
The other way you can get a map is if $J = \sqrt I$, and also if we can write $I = J \cdot \mathfrak{a}$ for some other ideal $\mathfrak{a}$. Then the blow up of $I$ is always the blow up of $\mathfrak{a}$ pulled back to $Bl_J X$.
In general, you should expect no relation between the blow up of two ideals with the same radical unless there is some integral closure relation between them and/or one ideal is the product of the other (and something else).
-
In general they can be very different. For example take the subscheme $Y$ of $\mathbb{A}^2$ given by the ideal $(x^2,y)$. Here the blow up is covered by the two open subsets
$$U = \mbox{Spec} k[x, y][t]/(y − x^2t),\qquad V = \mbox{Spec} k[x, y][s]/(ys − x^2)$$
In particular the blow up of $Y$ is singular, whereas the blow-up of $\mathbb{A}^2$ at a point is not.
In general, even if you assume that both blow-ups are smooth, all sorts of things can happen depending on how complicated the ideal sheaf is. For example the blow-ups can have a different number of exceptional divisors and not even be related by a finite map. Even worse, every birational morphism $X'\to X$ is the blow-up of $X$ along some ideal sheaf.
-
Thanks for the example. But at least, there is a natural map from one to another? – gio Mar 7 2012 at 21:25
2
I believe there is no map from one to the other. In the example J.C. Ottem gives, the blow up of $(x^2,y)$ can be obtained as follows. Blow up $(x,y)$, then blow up another point on that first blowup (the origin on one of the usual charts), and then contract the first exceptional curve. There's no map between $Bl_{(x^2,y)}X$ and $Bl_{(x,y)} X$, at least no map over $X$. Just because you have an inclusion of Rees algebras, does not mean that there is an everywhere defined map of the blow-ups. In the given example, one of the points of the overring contracts to an irrelevant ideal. – Karl Schwede Mar 8 2012 at 4:39
You are right, Karl. Thanks. – J.C. Ottem Mar 8 2012 at 8:06
http://mathhelpforum.com/advanced-algebra/1495-velocity-falling-object.html
# Thread:
1. ## velocity of falling object
If you drop a 1.36 kg weight and it falls .1524 meters, what is its velocity when it hits the ground and how is this calculated. Ultimately I am trying to decide how much energy is released.
Thanks,
Greg
2. Originally Posted by Greg E
If you drop a 1.36 kg weight and it falls .1524 meters, what is its velocity when it hits the ground and how is this calculated. Ultimately I am trying to decide how much energy is released.
Thanks,
Greg
An object dropped near the surface of the earth experiences an approximately constant acceleration of $-g$; the minus sign is conventional, as up is normally taken as positive, and so downward acceleration is negative.
So $v=-g\cdot t+c$, and if the initial speed is $0$ then $c=0$.
Integrating again gives:
$x=-\frac{g\cdot t^2}{2}+x_0$,
where $x_0$ is the initial height of the weight, for convenience we will
take this to be $0$.
Now we are interested in what is happening when the weight has fallen
$0.1524m$. This occurs when:
$-0.1524=-\frac{g\cdot t^2}{2}$,
or:
$t=\sqrt{\frac{2\cdot 0.1524}{g}}$.
Putting this back into the equation for vertical velocity:
$v=-g \cdot \left( \sqrt {\frac{2 \cdot 0.1524}{g}} \right)=-\sqrt{g \cdot 2 \cdot 0.1524}$.
At the earth's surface, $g \approx 9.81\ m/s^2$.
We could have short-circuited this calculation by knowing that the change in potential energy when the weight drops through a height $h$ is
$PE=m \cdot g \cdot h$,
which will equal the kinetic energy of the weight when it has fallen from rest
through a distance $h$:
$KE= \frac{1}{2}m \cdot v^2$
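For the numbers in the original question, both routes give the same energy. A quick Python check (a sketch of my own, with $g$ taken as 9.81 m/s²):

```python
from math import sqrt

m, h, g = 1.36, 0.1524, 9.81   # mass (kg), drop height (m), gravity (m/s^2)

v = sqrt(2 * g * h)            # impact speed, from v^2 = 2 g h
ke = 0.5 * m * v ** 2          # kinetic energy at impact
pe = m * g * h                 # potential energy released

print(f"v = {v:.3f} m/s, KE = {ke:.3f} J, PE = {pe:.3f} J")   # v ≈ 1.729 m/s, ≈ 2.033 J
```

As expected, the kinetic energy at impact equals the potential energy released, about 2 J.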
RonL
http://en.m.wikibooks.org/wiki/Electrodynamics/Solving_Maxwell's_Equations
# Electrodynamics/Solving Maxwell's Equations
We will solve the two equations for the potentials, and put them together to generate a solution for E and B. The two equations are
$\nabla^2 \phi -\mu_0 \epsilon_0 \frac{\partial^2 \phi} {\partial t^2} = -\frac{\rho}{\epsilon_0}$
$\nabla^2 \mathbf{A} - \mu_0 \epsilon_0 \frac{\partial^2 \mathbf{A}}{\partial t^2}= -\mu_0 J$
## Retarded Time
Imagine an infinite sheet. At time 0, a current is turned on, and thus a magnetic field should be produced. What is the field at a very far point immediately after time 0? It is still 0, because the "news" that the current is turned on hasn't reached there yet. Electromagnetic information travels at the speed of light. This isn't surprising, since light is an electromagnetic phenomenon. We have already seen that electromagnetic waves travel at the speed of light; it would only make sense if all electromagnetic fields propagate at that speed.
Thus, let us introduce the quantity $t_{ret}=t-r/c$ called the retarded time. Suppose we have a fixed point P, and that it is currently time t. At another point Q, electromagnetic waves are sent. The waves arriving at P are not the ones generated at that instant, but the ones generated at $t_{ret}$. This is because light takes $PQ/c$ time to travel to P, so we must subtract it from t.
It turns out that the most general solution to the potential equations is given by:
$\phi(\mathbf{r},t)=\frac{1}{4 \pi \epsilon_0} \int \frac{\rho(\mathbf{r'},t_{ret})}{||\mathbf{r}-\mathbf{r'}||} dV$
and
$\mathbf{A}(\mathbf{r},t)=\frac{\mu_0}{4 \pi} \int \frac{\mathbf{J}(\mathbf{r'},t_{ret})}{||\mathbf{r}-\mathbf{r'}||} dV$
with $t_{ret}=t-\frac{||\mathbf{r}-\mathbf{r'}||}{c}$
Note that the potentials are given at the actual time t, while the time appearing in the integral is the retarded time. Also, the integrals are over r'. Third, note that $t_{ret}$ is a function of both r and r'; keep this in mind when doing integrals and derivatives with retarded time in it.
Last of all, note that the following equation is not correct:
$\mathbf{E}(\mathbf{r},t)=\frac{1}{4 \pi \epsilon_0} \int \frac{\rho(\mathbf{r'},t_{ret})}{||\mathbf{r}-\mathbf{r'}||^2} dV$
as might be expected from analogy with the static case. Thus, our formulas for the potentials are not trivial; they have to be checked against Maxwell's equations.
On the other hand, since we know that $\nabla \times \mathbf{A}=\mathbf{B}$ and $\mathbf{E} = -\nabla \phi - \frac{\partial \mathbf{A}}{\partial t}$, we can "easily" calculate the fields from the potentials.
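As a small numerical illustration (not part of the original text), the retarded time is simply the observation time minus the light-travel delay:

```python
from math import dist   # Euclidean distance (Python 3.8+)

c = 299792458.0  # speed of light in m/s

def t_ret(r, r_src, t):
    """Retarded time at r for a source at r_src: t minus the light-travel time."""
    return t - dist(r, r_src) / c

# "News" from the origin reaches a point 3e8 m away about one second late:
print(t_ret((3.0e8, 0.0, 0.0), (0.0, 0.0, 0.0), t=2.0))   # ≈ 0.9993 s
```

At the field point itself the delay vanishes and the retarded time is just t.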
http://math.stackexchange.com/questions/115330/it-is-possible-to-integrate-this-function/115333
# Is it possible to integrate this function?
I heard that there are some non-integrable functions and I want to ask whether this one is one of them:
\begin{equation} \large \int \frac{t}{t+1}dt \end{equation} Got this by trying to solve another function and I need to check that this is not a dead end. If it is, I will just have to find another way. It is likely that I just lack some skill and knowledge to solve it despite its simple appearance.
-
1
By writing $\frac{t}{t+1} = 1 - \frac{1}{t+1}$, you can see that $t - \ln|t+1|$ is a primitive – Joel Cohen Mar 1 '12 at 17:58
@Povylas: Let's rewrite history and assume that no function like $\log$ had ever been given a name. Then $t/(t+1)$ would be "unintegrable" in that we would not have an expression for an antiderivative in terms of named functions. – André Nicolas Mar 1 '12 at 18:15
## 3 Answers
This integral is easy to do:
$$\begin{align*} \int \frac{t}{t+1}\,dt &= \int\frac{t+1-1}{t+1}\,dt\\ &= \int\left(\frac{t+1}{t+1}-\frac{1}{t+1}\right)\,dt\\ &= \int1\,dt - \int\frac{dt}{t+1}\\ &= t - \ln|t+1| + C. \end{align*}$$
You can verify this by differentiation: $$\frac{d}{dt}\left(t - \ln|t+1| + C\right) = 1 - \frac{1}{t+1} = \frac{t+1-1}{t+1} = \frac{t}{t+1}.$$
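The differentiation check can also be done numerically; a hedged Python sketch using a central difference (the function names are mine, not from the answer):

```python
from math import log

def F(t):   # candidate antiderivative (constant C = 0)
    return t - log(abs(t + 1))

def f(t):   # integrand
    return t / (t + 1)

h = 1e-6
for t in (0.5, 2.0, -0.5, 10.0):
    numeric = (F(t + h) - F(t - h)) / (2 * h)   # central difference ≈ F'(t)
    assert abs(numeric - f(t)) < 1e-6
print("F'(t) matches t/(t+1) at all sample points")
```

Note the sample point -0.5 sits on the branch where |t+1| matters, so the absolute value in the antiderivative is doing real work.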
Note. What you have here is the integral of a rational function (a polynomial divided by a polynomial). In principle, every rational function has an elementary integral. There's even an algorithm for finding them.
To find the integral of $\frac{p(t)}{q(t)}$, where $p$ and $q$ are polynomials:
1. If $\deg(p)\geq \deg(q)$, then perform long division with remainder and rewrite the fraction as $$\frac{p(t)}{q(t)} = P(t) + \frac{r(t)}{q(t)}$$ where $P(t)$ is a polynomial, and $r(t)$ is a polynomial with $r=0$ or $\deg(r)\lt\deg(q)$. $P(t)$ can be integrated easily, so we are left with the problem of integrating rational functions $\frac{p(t)}{q(t)}$ with $\deg(p)\lt\deg(q)$.
2. Completely factor $q(t)$ into a product of linear and irreducible quadratic polynomials. This step can be hard to perform in practice! In fact, this is the only reason why I say "in principle" above, because actually factoring a polynomial can be very hard to do.
3. Use Partial Fraction Decomposition to rewrite $\frac{p(t)}{q(t)}$ as a sum of rational functions in which the denominator is a power of a linear polynomial and the numerator is a constant; or the denominator is a power of an irreducible quadratic polynomial and the numerator is linear polynomial.
4. To compute $\int\frac{A}{(at+b)^n}\,dt$, $A$ constant, $a\neq 0$, $n$ a positive integer, use the substitution $u=at+b$.
5. To compute $\int\frac{At}{(at^2+bt+c)^n}\,dt$ where $at^2+bt+c$ is irreducible quadratic, use the substitution $u=at^2+bt+c$, adjusting the numerator by a constant.
6. To compute $\int\frac{A}{at^2+bt+c}\,dt$, with $at^2+bt+c$ irreducible quadratic, complete the square, use a substitution, and use the arctangent.
7. To compute $\int\frac{A}{(at^2+bt+c)^n}\,dt$ with $n\gt 1$, $at^2+bt+c$ irreducible quadratic, complete the square, use a change of variable, and use the reduction formula $$\int\frac{dx}{(c^2\pm x^2)^n} = \frac{1}{2c^2(n-1)}\left(\frac{x}{(c^2\pm x^2)^{n-1}} + (2n-3)\int\frac{dx}{(c^2\pm x^2)^{n-1}}\right).$$
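Step 1, the polynomial long division, is easy to mechanize. A minimal sketch (coefficient lists with highest degree first; the helper is my own) applied to $t/(t+1)$:

```python
def poly_divmod(p, q):
    """Long division of polynomials given as coefficient lists, highest degree first."""
    p = list(p)
    quot = []
    while len(p) >= len(q):
        factor = p[0] / q[0]
        quot.append(factor)
        for i, c in enumerate(q):
            p[i] -= factor * c
        p.pop(0)            # leading coefficient is now zero; drop it
    return quot, p          # (quotient, remainder)

# Step 1 applied to t/(t+1): quotient 1, remainder -1, so t/(t+1) = 1 - 1/(t+1)
print(poly_divmod([1.0, 0.0], [1.0, 1.0]))   # → ([1.0], [-1.0])
```

This recovers exactly the rewrite used in the answers above: $\frac{t}{t+1} = 1 - \frac{1}{t+1}$.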
-
I really feel very embarrassed now :) This probably means that I really need a break. Thanks! – Povylas Mar 1 '12 at 18:02
Use $u = t+1$. Then $t = u-1$ and $du = dt$, so that $$\int \frac t{t+1} dt = \int \frac{u-1}u du = \int 1 du - \int \frac 1u du = u - \log u + C = t+1 - \log(t+1) + C.$$ "Unintegrable functions" are not functions with no primitive, they're just functions that we don't know how to compute the primitive by hand, i.e. they're not "expressible in terms of elementary functions".
Hope that helps,
-
$u= t+1$ : $du=dt$ $\rightarrow \displaystyle \int\frac{u-1}{u} du = \int\frac{u}{u}\;du- \int\frac{1}{u}\;du = u-\ln u = t+1 - \ln(t+1) + C$
Hope I helped.
-
http://math.stackexchange.com/questions/275646/problem-in-modular-arithmetic-using-group-theory
# Problem in modular arithmetic using group theory
This problem is from Herstein's Topics in Algebra.
I have to use the property that if a finite set $G$ is closed under an associative product and both cancellation laws hold in $G$, then $G$ is a group, to prove the following:
1. Non-zero integers modulo $p$, a prime number, form a group under multiplication $\mod p$.
2. Non-zero integers relatively prime to $n$ form a group under multiplication $\mod n$.
I am very new to modular arithmetic and group theory, and stuck badly. I am trying to do the above by assuming some number $a= pq + r$, where $q$ is an integer, so $a\equiv r \pmod p$, but I don't know how to use the hypotheses (prime, relatively prime) to solve the problem.
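Both claims, and the role of the hypotheses, can be sanity-checked by brute force using the stated closure-plus-cancellation criterion. A Python sketch (the helper is my own):

```python
from math import gcd

def check_group_axioms(elems, n):
    """Closure and cancellation under multiplication mod n.

    By the quoted property, a finite set with an associative product
    satisfying both is a group; mod-n multiplication is commutative,
    so checking left cancellation suffices.
    """
    s = set(elems)
    closed = all((a * b) % n in s for a in s for b in s)
    cancels = all(
        b == c
        for a in s for b in s for c in s
        if (a * b) % n == (a * c) % n
    )
    return closed and cancels

p = 7
print(check_group_axioms(range(1, p), p))            # True: nonzero residues mod a prime

n = 10
units = [a for a in range(1, n) if gcd(a, n) == 1]   # 1, 3, 7, 9
print(check_group_axioms(units, n))                  # True: units mod n

print(check_group_axioms(range(1, n), n))            # False: 2*5 ≡ 0 (mod 10), not closed
```

The last line shows why primality (or relative primality) is needed: modulo a composite number, products of nonzero residues can be zero, so closure fails.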
-
## 1 Answer
Hint: If $a$ and $b$ are relatively prime and $a$ divides $bc$, then $a$ divides $c$.
-
:Sorry, This is not helping. I have proved closure, and associativity. But I am getting that the above theorem is true wrt the mod of any number, not necessarily prime or relatively prime. I show it below. Let $\bar{a}$ be the congruence class of a modulo p ($a \lt p$). TPT: $\bar{a}\bar{b}=\bar{a}\bar{c}$ implies $\bar{b}=\bar{c}$. I write $\bar{a}=pn_1+a$, and similarly for others, with labels of the n different. I substitute into the given statement, and get $$(pn_1+a)(pn_3+c)=(pn_1+a)(pn_2+b)$$. Rearranging I get $$(c-b)(pn_1+a)=p(n_2-n_3)(pn_1+a)$$Which proves b congruent to c mod p – ramanujan_dirac Jan 11 at 12:24
@ramanujan_dirac Here is a counterexample: Let $p = 4$, $c = 3$, $b = 1$ and $a = 2$. So there must be something wrong with your argument. – Serkan Jan 11 at 15:44
http://physics.stackexchange.com/questions/4474/normalizable-and-non-normalizable-modes-of-gauge-fields-in-ads-cft/4476
# Normalizable and non normalizable modes of gauge fields in AdS/CFT
In Lorentzian AdS space there are both normalizable and non-normalizable solutions, and we also know (at least for scalar fields in the bulk) what they correspond to on the boundary. But I have seen the calculation only for scalar fields. Can someone please give me a reference where people have calculated these modes for gauge fields, say for the graviton field? McGreevy's lecture notes say the relation $\Delta(\Delta-D)=m^2L^2$ gets modified to $(\Delta+j)(\Delta+j-D)=m^2L^2$ for $j$-form fields. Does this mean for other fields too the normalizable and non-normalizable behavior remains the same: namely $z_0^{\Delta_+}$ and $z_0^{\Delta_-}$ as $z_0\rightarrow 0$ ($\Delta_{\pm}$ are the two solutions, of course)? How can that be?
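For what it's worth, the quoted relation is just a quadratic in $\Delta$, so the two roots $\Delta_\pm = D/2 - j \pm \sqrt{D^2/4 + m^2L^2}$ can be computed directly. A small sketch (the helper name is my own):

```python
from math import sqrt

def dims(m2L2, D, j=0):
    """Roots Delta_± of (Delta + j)(Delta + j - D) = m^2 L^2."""
    disc = sqrt(D * D / 4.0 + m2L2)
    return D / 2.0 - j + disc, D / 2.0 - j - disc

# Massless scalar (j = 0) with a D = 4 dimensional boundary:
print(dims(0.0, 4))   # → (4.0, 0.0), i.e. Delta_+ = D and Delta_- = 0
```

Setting j = 0 recovers the familiar scalar relation, with the non-normalizable mode scaling as $z_0^{\Delta_-}$ near the boundary.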
-
## 2 Answers
At the end of Sec 3.3.1 of the MAGOO review (hep-th/9905111) you will find a useful list of the relationship between conformal dimension $\Delta$ and masses $m$ for scalars, spinors, vectors, p-forms, first order (d/2)-forms, spin $3/2$ and massless spin $2$ fields along with a list of references to the literature where these various cases were analyzed.
-
From there, for example it seems for massless spin 2 particle, $\Delta=d$. What is the difference between it and putting $j=2$ in $(\Delta+j)(\Delta+j-d)=0$? Then the values of $\Delta$ are different from $d$. And also in case $\Delta=d$, does that mean it has only one mode going like $Z_0^d$? – user1349 Feb 2 '11 at 18:42
Oh..Sorry, I see..So, for example for massless spin 2, will the only one mode go like $Z_0^d$ near the boundary? In that case, is it the one which will be non-normalizable and couple to boundary fields? Also, the reference of this relation goes back to the Witten's paper (9802150, via relations in p-29 of 9904017), but I really didn't understand how to get to this relation for massless spin 2. The relation looks so like scalar case. – user1349 Feb 2 '11 at 19:32
As Jeff Harvey points out, you can find all of the original calculations for the various form fields in the references of the MAGOO article. But you are better off working the result out for yourself, and it's not a difficult calculation.
You are only concerned with the asymptotics of the fields, so you can make some simplifications. Start with the Poincaré patch of AdS (where the metric is especially easy to work with) $$ds^2 = \frac{\ell^2}{z^2}\,(dz^2 + \eta_{ab} dx^a dx^b) ~.$$ Now focus on the $z$ dependence of the field, ignoring any dependence on the boundary coordinates. Assume the field scales like $z^{\Delta}$ as $z \to 0$. Evaluate the equation of motion with this ansatz, and the condition for $\Delta$ follows. For a scalar field this calculation takes just a few lines. Working it out for massive p-forms is a bit more involved: you will need to expand the covariant derivative of a (p+1)-form field strength and then work out the z dependence of the terms involving Christoffel symbols. But the simple form of the metric in the Poincaré patch keeps this from getting too complicated, and eventually you'll get the result you quoted for $\Delta$.
Notice that this calculation works equally well for Euclidean and Lorentzian AdS. For a discussion of the differences between the two, see hep-th/9805171.
-
1
What I said above goes for the graviton, but you'll need to linearize the Einstein equation to get the equation of motion for the field. Choosing the right gauge will help simplify the intermediate steps of this calculation. You will eventually find that the equation of motion for the spin-2 graviton on the AdS background reduces to that of a massless scalar. – Robert McNees Feb 2 '11 at 19:54
Thanks Robert. So, if I have only one solution (as I can see for massless spin 2 gravitons, $\Delta=d$), then that means the solution goes like $z^\Delta$ as $z\rightarrow 0$. Is it then the non normalizable solution which will couple to the boundary fields? I think I am confusing because it looks normalizable to me in which case I don't understand how to get the source term that will couple to the boundary field! – user1349 Feb 2 '11 at 19:57
I got your comment on graviton after I typed my comment..Can you please also suggest the reference which does that (if any)? – user1349 Feb 2 '11 at 20:00
Sorry -- I don't remember a reference off the top of my head. You can work this result out by first linearizing the Einstein equation (i.e., write the metric as $g_{\mu\nu} = \bar{g}_{\mu\nu} + h_{\mu\nu}$, where $\bar{g}_{\mu\nu}$ is the AdS metric and $h_{\mu\nu}$ is the graviton) and then solving for the leading z dependence of the graviton. Like I said before, this will be easier if you pick a convenient gauge for $h_{\mu\nu}$. – Robert McNees Feb 2 '11 at 21:30
Thanks again Robert! I will definitely work it out..But can you please explain about my confusion on whether it will provide normalizable or non normalizable mode (basically my first comment after your response) and in that case what will be the source that will couple to boundary operator? – user1349 Feb 2 '11 at 23:50
http://en.wikipedia.org/wiki/Banked_turn
# Banked turn
A banked turn is a turn or change of direction in which the vehicle banks or inclines, usually towards the inside of the turn. The bank angle is the angle at which the vehicle is inclined about its longitudinal axis with respect to its path.
## Turn on flat surfaces
If the bank angle is zero, the surface is flat and the normal force is vertically upwards. The only force keeping the vehicle turning on its path is friction, or traction. This must be large enough to provide the centripetal force, a relationship which can be expressed as an inequality, assuming the car is driving in a circle of radius r:
$\mu mg > {mv^2\over r}.$
The expression on the right hand side is the centripetal acceleration multiplied by mass, the force required to turn the vehicle. The left hand side is the maximum frictional force, which equals the coefficient of friction μ multiplied by the normal force. Rearranging, the maximum cornering speed is
$v < {\sqrt{r\mu g}}.$
Note that μ can be the coefficient for static or dynamic friction. In the latter case, where the vehicle is skidding around a bend, the friction is at its limit and the inequality becomes an equation. This also ignores effects such as downforce, which can increase the normal force and cornering speed.
## Frictionless banked turn
Upper panel: Ball on a banked circular track moving with constant speed v; Lower panel: Forces on the ball. The resultant or net force on the ball found by vector addition of the normal force exerted by the road and vertical force due to gravity must equal the required force for centripetal acceleration dictated by the need to travel a circular path.
As opposed to a car riding along a flat circle, inclined edges add an additional force that keeps the car in its path and prevents it from being "dragged into" or "pushed out of" the circle. This force is the horizontal component of the car's normal force. In the absence of friction, the normal force is the only one acting on the car in the direction of the center of the circle. Therefore, as per Newton's second law, we can set the horizontal component of the normal force equal to mass multiplied by centripetal acceleration:
$N\sin \theta ={mv^2\over r}$
Because there is no motion in the vertical direction, the sum of all vertical forces acting on the system must be zero. Therefore we can set the vertical component of the car's normal force equal to its weight:
$N\cos \theta =mg$
Solving the above equation for the normal force and substituting this value into our previous equation, we get:
${mv^2\over r}= {mg\tan \theta}$
Which is equivalent to:
${v^2\over r}= {g\tan \theta}$
Solving for velocity we have:
$v= {\sqrt{rg\tan \theta}}$
This provides the velocity that in the absence of friction and with a given angle of incline and radius of curvature, will ensure that the car will remain in its designated path. The magnitude of this velocity is also known as the "rated speed" of a turn or curve.[1] Notice that the rated speed of the curve is the same for all massive objects, and a curve that is not inclined will have a rated speed of 0.
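As a numerical illustration (the example values are my own, not from the article):

```python
from math import radians, sqrt, tan

g = 9.81  # m/s^2

def rated_speed(r, theta_deg):
    """Frictionless rated speed of a banked curve: v = sqrt(r g tan(theta))."""
    return sqrt(r * g * tan(radians(theta_deg)))

# A 50 m radius curve banked at 15 degrees:
print(f"{rated_speed(50.0, 15.0):.1f} m/s")   # ≈ 11.5 m/s
```

Note that the mass cancels out, consistent with the rated speed being the same for all objects, and that a flat curve (θ = 0) gives a rated speed of 0.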
## Banked turn with friction
When considering the effects of friction on the system, once again we need to note which way the friction force is pointing. When calculating a maximum velocity for our automobile, friction will point down the incline and towards the center of the circle. Therefore we must add the horizontal component of friction to that of the normal force. The sum of these two forces is our new net force in the direction of the center of the turn (the centripetal force):
${mv^2\over r}= \mu_s N\cos \theta +N\sin \theta$
Once again, there is no motion in the vertical direction, allowing us to set all opposing vertical forces equal to one another. These forces include the vertical component of the normal force pointing upwards and both the car's weight and vertical component of friction pointing downwards:
$N\cos \theta =\mu_s N\sin \theta +mg$
By solving the above equation for mass and substituting this value into our previous equation we get:
${v^2\left(N\cos \theta -\mu_s N\sin \theta \right)\over rg}= \mu_s N\cos \theta +N\sin \theta$
Solving for v we get:
$v= {\sqrt{rg\left(\sin \theta +\mu_s \cos \theta \right)\over \cos \theta -\mu_s \sin \theta }}$
This equation provides the maximum velocity for the automobile with the given angle of incline, coefficient of static friction and radius of curvature. By a similar analysis of minimum velocity, the following equation is rendered:
$v= {\sqrt{rg\left(\sin \theta -\mu_s \cos \theta \right)\over \cos \theta +\mu_s \sin \theta }}$
The difference in the latter analysis comes when considering the direction of friction for the minimum velocity of the automobile (towards the outside of the circle). Consequently opposite operations are performed when inserting friction into equations for forces in the centripetal and vertical directions.
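The two friction formulas can be sketched numerically as follows (the function name and sample values are illustrative assumptions, not from the original):

```python
import math

def speed_limits(radius_m, bank_angle_deg, mu_s, g=9.81):
    """Return (v_min, v_max) for a banked curve with static friction mu_s."""
    th = math.radians(bank_angle_deg)
    s, c = math.sin(th), math.cos(th)
    v_max = math.sqrt(radius_m * g * (s + mu_s * c) / (c - mu_s * s))
    # Below the rated speed the car tends to slide inward; if tan(theta) <= mu_s,
    # friction alone can hold the car in place and the minimum speed is 0.
    num = s - mu_s * c
    v_min = math.sqrt(radius_m * g * num / (c + mu_s * s)) if num > 0 else 0.0
    return v_min, v_max

print(speed_limits(50, 20, 0.5))
```

With mu_s = 0, both limits collapse to the frictionless rated speed sqrt(r g tan(theta)).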
Improperly banked road curves increase the risk of run-off-road and head-on crashes. A 2% deficiency in superelevation (say, 4% superelevation on a curve that should have 6%) can be expected to increase crash frequency by 6%, and a 5% deficiency will increase it by 15%.[2] Up until now, highway engineers have been without efficient tools to identify improperly banked curves and to design relevant mitigating road actions. A modern profilograph can provide data of both road curvature and cross slope (angle of incline). A practical demonstration of how to evaluate improperly banked turns was developed in the EU Roadex III project, see the linked referenced document below.
## Banked turn in aeronautics
Douglas DC-3 banking to make a left turn.
When a fixed-wing aircraft is making a turn (changing its direction) the aircraft must roll to a banked position so that its wings are angled towards the desired direction of the turn. When the turn has been completed the aircraft must roll back to the wings-level position in order to resume straight flight.[3]
When any moving vehicle is making a turn, it is necessary for the forces acting on the vehicle to add up to a net inward force, to cause centripetal acceleration. In the case of an aircraft making a turn, the force causing centripetal acceleration is the horizontal component of the lift acting on the aircraft.
In straight, level flight, the lift acting on the aircraft acts vertically upwards to counteract the weight of the aircraft which acts downwards. During a balanced turn where the angle of bank is θ the lift acts at an angle θ away from the vertical. It is useful to resolve the lift into a vertical component and a horizontal component. If the aircraft is to continue in level flight (i.e. at constant altitude), the vertical component must continue to equal the weight of the aircraft and so the pilot must pull back on the stick a little more. The total (now angled) lift is greater than the weight of the aircraft so the vertical component can equal the weight. The horizontal component is unbalanced, and is thus the net force causing the aircraft to accelerate inward and execute the turn.
Vector diagram showing lift, weight and centripetal force acting on a fixed-wing aircraft during a banked turn.
During a banked turn in level flight the lift on the aircraft must support the weight of the aircraft, as well as provide the necessary horizontal component of force to cause centripetal acceleration. Consequently, the lift required in a banked turn is greater than that required in straight, level flight; the extra lift is obtained by increasing the angle of attack of the wing, typically by pulling back on the elevator control. The maneuver is usually complemented by an increase in power, in order to maintain airspeed.
Because centripetal acceleration is:
$a = {v^2\over r}$
Newton's second law in the horizontal direction can be expressed mathematically as:
$L\sin \theta = {mv^2\over r}$
where:
L is the lift acting on the aircraft
θ is the angle of bank of the aircraft
m is the mass of the aircraft
v is the true airspeed of the aircraft
r is the radius of the turn
In straight level flight, lift is equal to the aircraft weight. In turning flight the lift exceeds the aircraft weight, and is equal to the weight of the aircraft (mg) divided by the cosine of the angle of bank:
$L = {mg\over{\cos \theta}}$
where g is the gravitational field strength.
The radius of the turn can now be calculated:[4]
$r = {v^2\over{g \tan \theta}}$
This formula shows that the radius of turn is proportional to the square of the aircraft’s true airspeed. With a higher airspeed the radius of turn is larger, and with a lower airspeed the radius is smaller.
This formula also shows that the radius of turn is inversely proportional to the tangent of the angle of bank. With a higher angle of bank the radius of turn is smaller, and with a lower angle of bank the radius is greater.
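A short numerical sketch of the turn-radius formula (the function name and values are illustrative):

```python
import math

def turn_radius(tas_mps, bank_angle_deg, g=9.81):
    """Radius of a level, coordinated turn: r = v^2 / (g * tan(theta))."""
    return tas_mps ** 2 / (g * math.tan(math.radians(bank_angle_deg)))

# Doubling the true airspeed quadruples the radius at the same bank angle
print(round(turn_radius(100, 30)), round(turn_radius(50, 30)))
```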
## Notes
1. Beer, Ferdinand P.; Johnston, E. Russell (July 11, 2003). Vector Mechanics for Engineers: Dynamics. Science/Engineering/Math (7 ed.). McGraw-Hill. ISBN 978-0-07-293079-5.
2. Federal Aviation Administration (2007). Pilot's Encyclopedia of Aeronautical Knowledge. Oklahoma City OK: Skyhorse Publishing Inc. Figure 3–21. ISBN 1-60239-034-7.
3. Clancy, L.J, Equation 14.9
## References
Surface vehicles
• Serway, Raymond. Physics for Scientists and Engineers. Florida: Saunders College Publishing, 1996.
• Health and Safety Issues, the EU Roadex III project on health and safety issues raised by poorly maintained road networks.
Aeronautics
• Kermode, A.C. (1972) Mechanics of Flight, Chapter 8, 10th Edition, Longman Group Limited, London ISBN 0-582-23740-8
• Clancy, L.J. (1975), Aerodynamics, Pitman Publishing Limited, London ISBN 0-273-01120-0
• Hurt, H.H. Jr, (1960), Aerodynamics for Naval Aviators, A National Flightshop Reprint, Florida
http://en.wikibooks.org/wiki/A-level_Mathematics/MEI/C3/Differentiation
A-level Mathematics/MEI/C3/Differentiation
Differentiation in Core 3 (C3) is an extension of the work that you did in Core 1 and Core 2.
Differentiation
Standard Derivatives
For the C3 module, there are a few standard results for differentiation that need to be learnt. These are:
$\frac {d} {dx} \ln x = \frac {1} {x}$
$\frac {d} {dx} e^{kx} = ke^{kx}$
$\frac {d} {dx} \sin kx = k \cos kx$
$\frac {d} {dx} \cos kx = -k \sin kx$
$\frac {d} {dx} \tan kx = \frac {k} {\cos^2 kx}$
Chain Rule
$\frac {dy}{dx} = \frac {dy} {du} \frac {du}{dx}$
The Chain Rule is used to differentiate when one function is applied to another function. A typical example of this is:
$y = \sin(x^2)$
One of the ways of remembering the chain rule is: Find the derivative outside, then multiply it by the derivative inside. In the example above, this becomes:
$\frac {dy} {dx} = 2x\cos (x^2)$
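The chain-rule result above can be checked numerically with a central-difference approximation (a sketch; the helper names are illustrative):

```python
import math

def num_deriv(f, x, h=1e-6):
    """Central-difference estimate of f'(x)."""
    return (f(x + h) - f(x - h)) / (2 * h)

f = lambda t: math.sin(t ** 2)           # y = sin(x^2)
df = lambda t: 2 * t * math.cos(t ** 2)  # chain rule: dy/dx = 2x cos(x^2)

for t in (0.3, 1.0, 1.7):
    print(round(num_deriv(f, t), 4), round(df(t), 4))
```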
Product Rule
$\frac {d}{dx}uv = v\frac {du} {dx} + u\frac {dv}{dx}$
The product rule is used when two functions are multiplied together.
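For instance (a worked example added for illustration), taking $u = x^2$ and $v = \sin x$:

```latex
\frac{d}{dx}\left(x^2 \sin x\right)
  = \sin x \cdot \frac{d}{dx}\,x^2 + x^2 \cdot \frac{d}{dx}\,\sin x
  = 2x\sin x + x^2\cos x
```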
Quotient Rule
$\frac {d}{dx} \frac{u} {v}= \cfrac {v\cfrac {du} {dx} - u\cfrac {dv}{dx}} {v^2}$
The quotient rule is used when one function is divided by another. It can be derived from the product rule by writing $\frac{u}{v} = u \cdot v^{-1}$.
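For instance (a worked example added for illustration), with $u = \sin x$ and $v = x$:

```latex
\frac{d}{dx}\,\frac{\sin x}{x}
  = \frac{x\cos x - \sin x \cdot 1}{x^2}
  = \frac{x\cos x - \sin x}{x^2}
```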
Implicit Differentiation
Implicit differentiation is used when a relation is not in the simple form $y = f(x)$ but contains a mixture of x and y terms. A typical example of this is to differentiate:
$y^2 + 2y = 4x^3$
When differentiating the y components of the expression you differentiate as normal, and then multiply by $\frac {dy} {dx}$. So differentiating both sides of the above expression it becomes:
$2y\frac {dy} {dx} +2\frac {dy} {dx}= 12x^2$
Then by factorising the left hand side and cancelling, this becomes:
$\frac {dy} {dx} = \frac {6x^2} {y+1}$
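The result can be sanity-checked numerically on one explicit branch of the curve (a sketch; the function names are illustrative):

```python
import math

def y_of(x):
    # One branch of y^2 + 2y = 4x^3, solved with the quadratic formula
    return -1 + math.sqrt(1 + 4 * x ** 3)

def dydx(x):
    # Formula obtained by implicit differentiation: dy/dx = 6x^2 / (y + 1)
    return 6 * x ** 2 / (y_of(x) + 1)

h = 1e-6
for x in (0.5, 1.0, 2.0):
    numeric = (y_of(x + h) - y_of(x - h)) / (2 * h)
    print(round(numeric, 4), round(dydx(x), 4))
```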
http://nrich.maths.org/1387/index?nomenu=1
## Introduction
There is a legendary story of the sage who posed the question: 'A normal elephant has four legs; if an elephant's trunk is called a leg, how many legs does it have?' He asked a mathematician, who continued to stare at a pile of paper on which he was scribbling as he muttered: 'four and one make five'. Next to him a philosopher mused enigmatically and puffed for a few moments on his pipe before observing: 'The fact that it is called a leg, doesn't change the fact that it is not a leg, so the answer is four '. 'Excuse me,' said a passing zoologist, 'if a trunk is classified as a leg, clearly this will also apply to the tail, so it has six legs, and it's an insect'. A logician joined the conversation: 'A normal elephant has four legs, but you did not actually say that this elephant is normal, so there is insufficient evidence...'
Continuing to seek enlightenment, the sage in his wisdom passed the query on to a statistician who returned the following day asserting 'the mean is 0.33'. 'Might I ask how you came by this information?' queried the sage, concealing his innermost thoughts behind an inscrutable smile. 'The best way to solve such a question is to obtain empirical information,' replied the statistician, 'so I went to the local zoo and got the answer from the horse's mouth, so to speak. Two elephants refused to respond and the third blew his own trumpet just once.'
Still bemused the sage went along to the local school which was deeply embroiled in GCSE investigations and once again stated his problem. 'That's a very interesting question,' said the teacher.
The moral of this story is that, as Humpty Dumpty once said, 'when I use a word, it means just what I want it to mean, and nothing else'. The term 'proof' is just such a word. In different contexts it means very different things. To a judge and jury it means something established by evidence 'beyond a reasonable doubt'. To a statistician it means something occurring with a probability calculated from assumptions about the likelihood of certain events happening randomly. To a scientist it means something that can be tested -- the proof that water boils at 100°C is to carry out an experiment. A mathematician wants more -- simply predicting and testing is not enough -- for there may be hidden assumptions (that the water boiling is always carried out at normal atmospheric pressure and not, say, on the top of Mount Everest).
## Problem Solving and Convincing Arguments
When a problem is encountered, the question of providing a convincing argument to explain the solution often arises. The book 'Thinking Mathematically' by John Mason, Leone Burton and Kay Stacey has a large number of problem-solving situations. One is the problem 'into how many squares can you cut a square?'
Faced with such a question, you might begin by thinking 'as many as you like', or 'infinity'. Then you may begin to realize that a square could be cut into 4, 9 or 16 by dividing it into pieces of equal size. If you play about with possible ways of cutting a square into smaller squares, you may suddenly see that any square, including any of the smaller squares themselves, can be cut into four smaller squares. Aha! A square could be cut into four quarters and one quarter cut into four again, losing that quarter as a counted square but gaining four smaller ones - so a square can be cut into seven squares.
Figure 1: cutting a square into seven squares
Cutting any of the squares in this picture into four squares gives three extra squares. Thus it is possible to cut a square into 7 squares, 10 squares, 13 squares, 16, 19, 22, ... and so on.
Figure 2: cutting a square into ten squares
It is easy to see that IF a square can be cut into n squares THEN it can be cut into n+3 squares. It is this general result which is the key to the solution of the problem. For instance, if I could cut a square into six smaller squares, then I could do 6+3=9, 9+3=12, and so on, to get the sequence 6, 9, 12, 15, ... If I could cut a square into five smaller squares, then I could get the sequence 5, 8, 11, 14, ..., and so on. But can I ?
One attack suggested by students is to look at a picture like figure 3, and to propose that, by rubbing out a number of lines in a three by three subdivision it is possible to "glue four squares into a single square". This gives one big square and five smaller squares, making six squares in all.
Figure 3: cutting a square into six squares (one big, five small)
This can then be built on by subdividing any of these squares into four smaller, to cut a square into 6+3=9, 9+3=12, ... to get the sequence of possibilities: 6, 9, 12, 15, ...
If you could cut a square into five squares, it would be possible to get the sequence of possibilities 5, 8, 11, 14, ... Then it would be possible to do all possible numbers 4 and above, using the combination of the three sequences
4, 7, 10, 13, ...
5, 8, 11, 14, ...
6, 9, 12, 15, ...
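The n to n+3 step can be played out mechanically. Assuming only the base constructions actually exhibited so far (4, 6 and 8 subsquares) as starting points, a short script lists which counts they generate (the names are illustrative):

```python
def reachable(bases, limit=30):
    """Counts obtainable from the base constructions by repeating n -> n + 3."""
    seen = set()
    for b in bases:
        n = b
        while n <= limit:
            seen.add(n)
            n += 3
    return seen

counts = reachable({4, 6, 8})
print(sorted(counts))  # 4, and then every number from 6 up to the limit
```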
But can you cut a square into five smaller squares? One student, Paul, suggested to me that if you can do n squares you can do n-3 by joining a block of four squares together as in Figure 3. Is Paul's suggestion correct? It is certainly true for n=9, as Figure 3 shows, but is it true for all whole numbers n?
There is a well-known story of the experimental physicist who claimed to prove that 60 is divisible by every other number. He came to this conclusion by considering a sequence of cases to establish the pattern: 1,2,3,4,5,6 and then moved on to a few others at random to test out the theory : 10,12, 20, 30, and concluded that his result was experimentally verified. He was surpassed in this endeavour by an engineer who noticed that all odd numbers seemed to be prime... One - well that's an oddity, but we'll include it in - three, five, seven, good, we're getting somewhere - nine? Oh, nine... Let's leave that a moment - eleven, thirteen - fine. The exceptional case of nine must have been an experimental error.
This story, which I claim bears no relationship to any known physicist or engineer, living or dead, does illustrate the important difference between proof by looking at a number of cases and proper mathematical proof. It is not enough to consider just a number of cases, for all of them may have some hidden common assumption. For instance, we might conclude from a number of experiments that water always boils at 100o C because we never have the experience of trying to boil water on the top of Mount Everest. Scientific proof depends on the predictability of experiments: that we conjecture that when we carry out an experiment it will have a predicted outcome. Such proof is not appropriate in mathematics where we must provide a logical argument that the conclusion follows from explicitly stated assumptions.
To help the student focus on the various stages of putting up a convincing argument, 'Thinking Mathematically' suggests three stages:
1. Convince yourself.
2. Convince a friend.
3. Convince an enemy.
The idea is first to get a good idea how and why the result works, sufficient to believe its truth. Convincing oneself is, regrettably, all too easy. So pleased is the average mortal when the 'Aha!' strikes that, even if shouting 'Eureka' and running down the street in a bath towel is de rigueur, it is very difficult to believe that the blinding stroke of insight might be wrong. So the next stage is to convince a friend - another student, perhaps - which has the advantage that explaining something to someone else at least makes one sort out the ideas into some kind of coherent argument. The final stage in preparing a convincing argument, according to 'Thinking Mathematically', is to convince an enemy - a mythical arbiter of good logic who goes through every stage of an argument with a fine-tooth comb to seek out weak links.
A student might very well convince himself of the truth of the argument "IF I can cut a square into n smaller squares, THEN I can cut the square into n- 3 squares". He might even convince a friend by showing pictures such as figure 3. But an enemy might put up figure 4, where a square is cut into eight smaller squares (seven the same size, plus one bigger one). Here there is no set of four smaller subsquares in a group which can be amalgamated into one larger square to reduce the eight sub-squares to five sub-squares.
Figure 4: cutting a square into eight squares, but not into five
Does this blow, which demolishes Paul's theory, show that you cannot cut a square into five smaller squares? No it does not. It suggests that Paul's method does not work, but perhaps some other method will ...
The problem which I leave you to formulate precisely and prove has two parts:
(a) Find all numbers n such that a square can be cut into n smaller sub-squares and prove that this is actually possible for every such number n.
(b) For all the numbers n not included in part (a), prove that it is not possible to cut a square into such a number of smaller squares.
You should certainly have a go at this before moving to the next section.
## Making Precise Statements
Proof requires a careful statement of assumptions and a precise argument showing how a clearly stated result is deduced. It is surprising how often we miss the fact that a statement has implicit, unspoken assumptions. Look at the square problem. Into how many squares can I cut a square? For what numbers is this not possible?
If a square is cut into more than one square, there will be a corner of a smaller square in each corner of the original square. Thus, if there is more than one square, there must be at least four . There cannot be two or three. Perhaps you might like to try to extend this argument to cover other cases which you suspect cannot be done (if there are any...).
I have given this problem to hundreds of undergraduates over the years and we have all eventually agreed on which values of n cannot be done. It has become quite a party piece which I have also tried out with many sixth-formers.
It was ten years after I first met the problem that a perceptive fourteen year old girl in a problem-solving session came up with an original thought. She suggested that the problem had not explicitly stated that the paper could not be cut and then glued together again in a different way. Her solution for n=2 is given in figure 5.
Figure 5: sticking together bits to cut a square into two squares
This illustrates the fact that we must be extremely careful about how we phrase our assumptions. Before Figure 5 the (unspoken) assumption had been that we must make single straight line cuts to form whole subsquares and we are not allowed to cut into smaller parts (say triangles) and to stick them together again. The original problem is better specified by saying:
Square problem version 2 : A square is cut into n smaller squares by making single straight line cuts, without joining together cut parts into larger wholes. What are the possible values of n?
The exceptional cases found earlier would still be exceptions to this better phrased problem. Figure 5 would now fail to be a counter example to this because it breaks the rule about not sticking together cut parts into larger wholes.
However, Figure 5 does suggest a different problem:
Square problem version 3 : Into how many subsquares n is it possible to cut a square, if it is allowed to join cut parts into large wholes?
The answer to problem version 3 is likely to be different from that to version 2. You should see if any or all of the 'impossible' numbers from version 2 now become 'possible'. For instance, in Figure 4 we can clearly take any four of the smaller squares and move them together to glue them into one medium size square. Thus, if we allow sticking together we can re-form the square in Figure 4 into five squares of different sizes: one large, one medium, and three little ones. With a little ingenuity perhaps you can solve the case n=3 for version 3 of the problem. Perhaps now you can specify the solutions of both problems. They will be different. This shows that precision in making mathematical statements is all important.
Three men were going by train to a conference in a distant region of the United Kingdom. The engineer looked out of the window and said, 'Look, all the sheep in Scotland are black'. The theoretical physicist thought for a moment and said, 'No, there exists a field in Scotland in which all the sheep are black'. There was silence from the logical mathematician who mused for some time in the corner of the compartment before declaring 'No, there exists a field in Scotland in which all the sheep are at least half black...'
### Making appropriate deductions
Once we have got precise statements of the assumptions (P) underlying a theorem and what it is we are trying to prove (Q), then a mathematical proof of the theorem is in the form "IF P is true THEN Q is true". In everyday language the conventions are sometimes different. "If your father comes home before six o'clock then you can have some chocolate before dinner-time". Here the assumption P is "father comes home before six o'clock" and the deduction Q is "you can have chocolate before dinner-time". Presumably father brings the chocolate and if he arrives sufficiently early you can have some without spoiling your appetite. But also implicitly contained in the statement is the suggestion that IF father does not come home before six o'clock, THEN you will NOT have chocolate before dinner-time. There is often an implication in everyday language that IF P happens THEN Q will follow, but IF P FAILS THEN Q FAILS ALSO.
In mathematics such an assumption is not made. Here a proof in the form IF P THEN Q simply requires that if P is true, then Q must be true also. If P is false, then no implication as to the truth or falsehood of Q is necessary.
Consider the example:
If x > 6 then x > 3 .
In mathematics this is considered a true statement. If x is a number bigger than 6 then it must also be bigger than 3. However, consider this as separate statements, where P is " x > 6 " and Q is "x > 3 ". What happens for various values of x? If x=7 then P is true and Q is also true. In fact, when x is a number bigger than 6 then P is true and it will follow also that Q is true.
But if x=5 then P is false but Q is true, and if x=3 then P is false and Q is false. Thus when P is false, Q can be true or false. We simply have no interest what happens in this case.
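Material implication, as described above, can be tabulated in a few lines of code (a sketch; the helper name is illustrative):

```python
def implies(p, q):
    """'IF P THEN Q' is false only when P is true and Q is false."""
    return (not p) or q

# 'x > 6 implies x > 3' holds for every x, including the cases where P is false
for x in (-2, 3, 5, 7):
    print(x, implies(x > 6, x > 3))

# The converse 'x > 3 implies x > 6' fails, e.g. at x = 5
print(implies(5 > 3, 5 > 6))  # False
```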
## Common Errors in Proof
Students often make quite serious errors in proof on examination papers. As a Senior Examiner in Mechanics for many years I regularly had to mark questions which said something like this:
A particle mass M rests on a rough plane with coefficient of friction $\mu$, inclined to the horizontal at an angle $\alpha$. Show that if the particle slides down the plane then $\tan\alpha> \mu$.
What students often do is to assume $\tan\alpha> \mu$ and deduce that the particle slides. They have been asked to prove IF P THEN Q where P is "the particle slides" and Q is "$\tan\alpha> \mu$". They often prove IF Q THEN P. In this case it happens that the two things are equivalent. P happens if and only if Q happens. But the question only asks for the implication from P to Q and the students only prove the implication from Q to P.
You might feel that this is a trivial matter. But logically it is totally erroneous. In mathematics it often happens that IF P THEN Q is true but IF Q THEN P is false. For instance, it is true that IF x > 6 THEN x > 3 , but the other way round: IF x > 3 THEN x > 6 is clearly false. Thus it is important to distinguish between the two. The statement IF Q THEN P is called the converse of the statement IF P THEN Q. It is important to distinguish between the proof of a statement and the proof of its converse. One may be true and the other may be false. Another example occurred with the case of "into how many squares can I cut a square" (version 2). It is true to say that if a square can be cut into n pieces then it can be cut into n+3 pieces. The converse, that if it can be cut into n+3 pieces it can be cut into n pieces is false, as can be seen from the case n=3,5.
There is one case in A-level that (almost) everyone gets wrong. It is to do with the constant of integration: that if
$\int f(x) dx = F(x)$,
then any other integral is of the form
$F(x)+c$
where $c$ is a constant. This is usually deduced from the fact that the derivative of a constant $c$ is zero. Hence the derivative of
$F(x)+c$
is the same as the derivative of
$F(x)$.
However, the deduction is false. Let $P$ be the statement that
$G(x)=F(x)+c$
and $Q$ be the statement
$G\prime(x)=F\prime(x)$.
Then, because the derivative of a constant is zero we can deduce that IF $P$ is true THEN $Q$ is true. What we cannot do is to deduce the converse: IF $Q$ is true THEN $P$ is true.
It is actually possible to have $Q$ true and $P$ false. As an example, let
$G(x)=1/x$
and let
$F(x) = \begin{cases} 1/x + 1 & (x < 0) \\ 1/x + 2 & (x > 0) \end{cases}$
then both $G(x)$ and $F(x)$have derivative $-1/x^2$. The fact that there is a different constant added to $F(x)$ for x < 0 and x > 0 does not affect the derivative because these two parts of the domain are totally separate. (In forming the limit of $(F(x+h)-F(x))/h$, as $h$ tends to zero, when $h$ is sufficiently small, both $F(x)$ and $F(x+h)$ will have the same added constant.)
Oh, you may say, that's cheating, we don't normally meet functions like that in the calculus... No we don't. Nor do we normally have the personal experience of boiling water on the top of Mount Everest, which would prove that water doesn't always boil at 100°C.
To be sure that the mathematics will always work it is necessary to state precisely the assumptions and to take great care over the deductions. This proves to be rather hard. Indeed it tends to be the province of university pure mathematics rather than A-level.
You may find the accent on precise proof in mathematics rather esoteric. Other scientists are known to make such jibes at mathematicians. Indeed they say you can tell whether someone is an engineer, physicist or mathematician by setting fire to his wastepaper basket. The engineer will make a cursory calculation and swamp the basket with enough water to put out the fire and more. The physicist will sit down, calculate exactly how much water is needed and pour the exact quantity on the fire. The mathematician? The mathematician will sit down and calculate exactly how much water is needed.
Thus the mathematician stands accused of developing a precise theory that is devoid of application. This could not be further from the truth. In our universities computer scientists are growing increasingly worried that students no longer seem to understand the finer points of proof. This is especially true since the demise of Euclidean geometry, which was largely concerned with the ritual of deducing one statement about a geometrical figure from given assumptions. It may have serious consequences. As we use increasingly sophisticated software to run our lives we need computer scientists and programmers who can write provably correct software that does not contain horrendous bugs - unlike the kind of software that caused the stock market crash because it was designed to sell under certain conditions which occurred late one Friday, causing the computers to attempt to outdo each other as the selling fed back into the system, producing even more selling and the eventual collapse of the market.
It is therefore even more important in today's technological climate to pay attention to the niceties of well-formulated statements and logical deduction. "To prove or not to prove" is a question that can have only one answer, for proof is an essential component of technological order in the future.
Professor David Tall
Professor in Mathematical Thinking at the University of Warwick (1992-present).
Educated at Victoria School (1945-1952), The Grammar School, Wellingborough (1952-1960) and Wadham College Oxford (1960-1966) where I obtained first class honours in Mathematics, The Junior Mathematics Prize, (1963) and DPhil in Mathematics (1967). From 1966 to 1969 I was a lecturer in Mathematics at Sussex University. Since 1969, I have been on the staff of the University of Warwick, as a lecturer in mathematics with special interests in education in the Mathematics Institute (1969-1980), then within what has become the Institute of Education, being awarded a personal chair in 1992.
[This article was originally published in Mathematics Review in January 1991. NRICH has the permission of the editors and authors to reprint material from it as the Mathematics Review magazine closed down after 17 issues. Ed.]
http://mathoverflow.net/questions/87505/origin-of-the-notation-s-sigmait-in-analytic-number-theory/87506
## Origin of the notation s=\sigma+it in analytic number theory
I was wondering if the standard notation of denoting a complex variable by "$s$" had an interesting origin, or if it dates back to Riemann or Weierstrass. Almost every book in analytic number theory seems to use that notation, with
$s = \sigma + i t$
denoting its real and imaginary parts.
I shall be happy if anyone could enlighten me about it. I tried searching MO for relevant questions but couldn't find it.
See Landau's Handbuch der Lehre von der Verteilung der Primzahlen – KConrad Feb 4 2012 at 5:27
## 3 Answers
To expand on KConrad's comment: Edmund Landau's 1909 book Handbuch der Lehre von der Verteilung der Primzahlen certainly uses $\sigma = \Re s$, see the footnote on page 30 at Google books
It reads in English: "I understand $\sigma = \Re(s)$ as the real part of the complex number $s = \sigma + ti$, (and) $t = \Im(s)$ as the coefficient of $i$ in the purely imaginary part."
In skimming through Narkiewicz "The Development of Prime Number Theory", one sees a reference on p. 155 (footnote 38) to a certain R. Lipschitz, who in Crelle in 1857 "studied the series $\sum_{n=1}^\infty\exp(nui)n^{-\sigma}$ for real values of $\sigma$." I checked the reference; Lipschitz was indeed using $\sigma$.
Lipschitz is referred to several times in this section of Narkiewicz for later work on functional equations of various $L$-functions.
Riemann uses the notation $s=\frac12+it$ for $s$ on the critical line, but I cannot find any appearance of $\sigma$ in his paper. On the contrary, the notation $a+bi$ appears often there.
However, the following sentence occurs in the first paragraph of Ivic's book:
Riemann wrote $s=\sigma+it$ ($\sigma,t$ real) for the complex variable $s$, and this tradition still persists, although some authors prefer the more logical notation $s=\sigma+i\tau$.
I cannot remember where, but I vaguely recall reading that the tradition was initiated by Landau's book.
This is wrong: look at the paper and you'll see he never mentions a variable for the real part of s. In fact the only time he does use a real part, it's 1/2. – KConrad Feb 4 2012 at 5:26
Well, Riemann used the letter $s$, but it's wrong that he used $\sigma$. – KConrad Feb 4 2012 at 5:27
Sorry for the confusion. I have edited the answer. – timur Feb 4 2012 at 5:36
@timur: maybe you read that above! – J. H. S. Feb 4 2012 at 5:40
No, it was explicitly stated that Landau seems to be the first one to use $\sigma$. – timur Feb 4 2012 at 17:46
http://math.stackexchange.com/questions/182727/find-the-terms-of-the-sequence-a-n1-1n-a-n-that-are-natural-numbers
# Find the terms of the sequence $a_{n+1}=1+n/a_n$ that are natural numbers
Let's consider the sequence $(a_n)_{n\in\mathbb{N}}$, defined by the following recurrence relation: $$a_{n+1} = \begin{cases} 1 + \frac{n}{a_{n}}\quad&n\gt0\\ 1&n=0 \end{cases}$$ Find all terms of the sequence that are natural numbers.
Two things to mention here:
• first one is that $a_{n}$ goes to $\infty$ when $n$ goes to $\infty$,
• and the second point is that the product of the first $k$ terms of the sequence is an integer number.
This is all I have so far.
$a_n$ is given by $\frac{b_n}{b_{n-1}}$ where $b_n$ is the number of Young tableaux on $n$ elements. Don't know if this helps. – Cocopuffs Aug 15 '12 at 7:55
A quick computer test finds only $n\in\{1,2,3\}$ among $n\le100000$. – Harald Hanche-Olsen Aug 15 '12 at 7:55
@ Cocopuffs: noticed that in OEIS, but I don't see yet how it helps. If one may prove that $b_{n}$ is not an integer multiple of $b_{n-1}$ then it's helpful. – Chris's wise sister Aug 15 '12 at 8:01
Sorry if I wasn't clear enough. I meant to say that for $n\le100000$, $a_n$ is an integer if and only if $n\in\{1,2,3\}$. – Harald Hanche-Olsen Aug 15 '12 at 8:09
@Harald Hanche-Olsen: Perfect. Thanks! – Chris's wise sister Aug 15 '12 at 8:10
## 1 Answer
Obvious integer values are $a_1=1$, $a_2=a_3=2$. I'll show that these are the only ones.
First, we note that if $a_n<a_{n+1}$, then $$a_n(a_n-1)<a_n(a_{n+1}-1)=n<a_{n+1}(a_{n+1}-1).$$ If $a_{n-1}<a_n<a_{n+1}$, this gives us $$a_{n-1}(a_n-1)=n-1<a_n(a_n-1)<a_n(a_{n+1}-1)=n;$$ if $a_n$ was an integer, then so would $a_n(a_n-1)$, hence, $a_n$ cannot be an integer in this case. Thus, all we need to show is that $a_n$ is strictly increasing for $n\ge3$.
I'm sure there are numerous ways to prove that $a_n$ is strictly increasing for $n\ge3$, many of which will be purely technical. So far, my ideas have all centered around $a_n(a_n-1)\approx n-1/2$, which would in itself have sufficed to prove $a_n$ non-integral, and have gotten rather messy. I'll see if I can come up with a nice one.
Edit: I think I have a proof now that $a_n$ is strictly increasing for $n\ge3$.
Let $p_n(x)=x(x-1)-n$. This is increasing for $x\ge1/2$ and has positive root $x_n=1/2+\sqrt{n+1/4}$. We can then do induction on $x_{n-1}<a_n<x_n$.
Since $x_n(x_n-1)=n$ and $a_n(a_{n+1}-1)=n$, if $a_n<x_n$, then $a_{n+1}>x_n$.
If $a_n>x_{n-1}$, we get $$a_{n+1}-1=\frac{n}{a_n}<\frac{n}{x_{n-1}}<x_{n+1}-1\Rightarrow a_{n+1}<x_{n+1}$$ where the last step relies on $x_{n-1}(x_{n+1}-1)>n$ which can be shown for $n>1$ by plugging in the values.
Since $x_n$ are increasing and $x_3<a_4<x_4$, it follows by induction that $x_{n-1}<a_n<x_n$ for all $n\ge4$, hence, $a_n<x_n<a_{n+1}$. For $n=3$ we have $x_2=a_3=2<x_3$.
How did I get the idea in the first place?
I computed $a_n$ numerically (using Maple), and quickly found that $a_n(a_n-1)\approx n-1/2$. This led me to think of proving that $a_n(a_n-1)$ was not an integer since this seemed to be true by a large margin. The brute force approach, which is what I started out trying, would have been to show this by proving the approximation was sufficiently accurate. However, since I had already observed that $a_n$ was increasing, comparing $x_n(x_n-1)$ to $x_n(x_{n+1}-1)=n$ the way I did was quite apparent.
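Harald Hanche-Olsen's numerical check from the comments is easy to reproduce, and exact rational arithmetic avoids any floating-point doubt. A minimal sketch (the function name is my own):

```python
from fractions import Fraction

def integer_terms(limit):
    """Return the indices n <= limit for which a_n is an integer,
    iterating a_{n+1} = 1 + n/a_n in exact rational arithmetic."""
    a = Fraction(1)  # a_1 = 1
    hits = []
    for n in range(1, limit + 1):
        if a.denominator == 1:   # a_n is an integer
            hits.append(n)
        a = 1 + Fraction(n) / a  # a_{n+1} = 1 + n / a_n
    return hits

print(integer_terms(300))  # [1, 2, 3]
```

Only $n\in\{1,2,3\}$ show up, in agreement with the proof above.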
it's funny to see that the main core of the approach isn't that hard as I initially thought. I just missed that part! (+1) – Chris's wise sister Aug 15 '12 at 14:46
where did you get from $a_{0}=1$? – Chris's wise sister Aug 15 '12 at 14:47
thank you for your brilliant and simple approach! – Chris's wise sister Aug 15 '12 at 14:51
The $a_0=1$ was purely a figment of my imagination as $a_0$ is not defined in the original problem description. Anyway, I don't use this anywhere. – Einar Rødland Aug 15 '12 at 15:16
Absolutely awesome this proof! – Chris's wise sister Aug 15 '12 at 15:17
http://mathoverflow.net/questions/48045/why-are-matrices-ubiquitous-but-hypermatrices-rare/48171
## Why are matrices ubiquitous but hypermatrices rare?
I am puzzled by the amazing utility and therefore ubiquity of two-dimensional matrices in comparison to the relative paucity of multidimensional arrays of numbers, hypermatrices. Of course multidimensional arrays are useful: every programming language supports them, and I often employ them myself. But these uses treat the arrays primarily as convenient data structures rather than as mathematical objects. When I think of the generalization of polygon to $d$-dimensional polytope, or of two-dimensional surface to $n$-dimensional manifold, I see an increase in mathematical importance and utility; whereas with matrices, the opposite.
One answer to my question that I am prepared to acknowledge is that my perception is clouded by ignorance: hypermatrices are just as important, useful, and prevalent in mathematics as 2D matrices. Perhaps tensors, especially when viewed as multilinear maps, fulfill this role. Certainly they play a crucial role in physics, fluid mechanics, Riemannian geometry, and other areas. Perhaps there is a rich spectral theory of hypermatrices, a rich decomposition (LU, QR, Cholesky, etc.) theory of hypermatrices, a rich theory of random hypermatrices—all analogous to corresponding theories of 2D matrices, all of which I am unaware.
I do know that Cayley explored hyperdeterminants in the 19th century, and that Gelfand, Kapranov, and Zelevinsky wrote a book entitled Discriminants, Resultants and Multidimensional Determinants (Birkhäuser, Boston, 1994) about which I know little.
If, despite my ignorance, indeed hypermatrices have found only relatively rare utility in mathematics, I would be interested to know if there is some high-level reason for this, some reason that 2D matrices are inherently more useful than hypermatrices?
I am aware of how amorphous is this question, and apologize if it is considered inappropriate.
Just a guess, but it may have to do with the difficulty of defining a (canonical) product of hypermatrices; you can't view them naturally as linear maps between vector spaces, and define a product via composition. – Gordon Craig Dec 2 2010 at 13:25
If tensors are ubiquitous, and tensors are hypermatrices, then aren't hypermatrices ubiquitous? (Not a rhetorical question. I can't tell which of these statements you believe less.) – Qiaochu Yuan Dec 2 2010 at 14:20
Here is a related question (related at least in my mind). Why are groups, rings and fields far more ubiquitous than sets $S$ equipped with a function $f:S \times S \times S \to S$ having "nice" properties? – Louigi Addario-Berry Dec 2 2010 at 15:01
Furthermore, $n$-tuples are immensely more common than matrices... – Gerald Edgar Dec 2 2010 at 15:16
I think this is also linked with the fact that while in many contexts a matrix is just a special case of a hypermatrix (e.g., representation of tensors, as above observed) there are also many cases where a multi-analog is really of less interesting generality. For instance, ternary or n-ary relations are definitely less used than binary relations; hypergraphs are not so widely used as graphs; linear maps between direct sums of vector spaces are rather a particular case than a generalization of linear maps between vector spaces. – Pietro Majer Dec 2 2010 at 18:28
## 11 Answers
Note that in linear algebra matrices describe at least two different things: linear maps between vector spaces (we consider only finite-dimensional vector spaces here) and bilinear forms. When thinking of matrices as tensors, linear maps between $V$ and $W$ are elements of the space $V^* \otimes W$, whereas bilinear forms between $V$ and $W$ are elements of $V^* \otimes W^*$. Now you can easily generalize the latter case to more than two spaces, but not the former. But it is the former case where several concepts like composition (matrix multiplication), determinants, eigenvalues etc. apply. (Note that eigenvalues and determinants can be defined for bilinear forms on a vector space equipped with an inner product, but not for bilinear forms on plain vector spaces). Of course you can consider spaces like $V^* \otimes W^* \otimes X$, but elements of this space are better thought as linear maps between $V\otimes W$ and $X$ than as three-dimensional hypermatrices. So what is special about the number 2 is that there is a notion of duality for vector spaces, but no "n-ality".
Related to this, there is a very fruitful interplay between graphs and matrices, particularly with the use of eigenvalues of adjacency matrices. One can say a few things along these lines about hypergraphs, but they are (to date) much less satisfactory. – gowers Dec 2 2010 at 15:49
@Florian: Your point about duality is a great insight! – Joseph O'Rourke Dec 2 2010 at 16:11
Recent paper on spectral hypergraph theory, for eigenvalues of adjacency hypermatrices: "Spectra of Hypergraphs", Joshua Cooper, Aaron Dutle, arxiv.org/abs/1106.4856 – Roy Maclean Sep 2 2011 at 15:40
To such a complex problem, there cannot be unique answer. I see many, which all justify the tremendous interest that mathematicians have devoted so far to matrices, rather than to hypermatrices.
Ubiquity. Matrices are used by every species of mathematicians, and beyond, by a large fraction of scientists. This is perhaps the only mathematical area to enjoy this versatility. Let me provide a few examples. The matrix exponential is fundamental in differential equations (more generally in dynamical systems) and the Lie theory of groups. Symmetric matrices are used in quantum mechanics, statistics, optimisation and numerical analysis; they have deep relations with representation theory and combinatorics (see the solution of Horn's conjecture by Tao & Knutson). Positive matrices are encountered in probability and numerical analysis (discrete maximum principle). Matrix groups are used in representation theory, in number theory (including modular forms), in dynamical systems (because of symmetries). When depending on parameters, matrices enter in PDE theory as symbols.
Simplicity. The concept of matrix is by definition simpler than that of hypermatrices. It is natural that the study of matrices precedes that of HM. This argument will fade as time increases, of course.
Richness. What makes a field particularly attractive is that it involves several apparently unrelated concepts in order to produce unexpected results. This happens in matrix theory, because on the one hand, we may view matrices as linear maps (where conjugation is relevant) and on the other hand we may see them as bilinear or sesquilinear maps (where congruence is relevant). It becomes especially fruitful when we go back and forth between both points of view. This happens in the remarkable theorem that normal matrices are unitarily diagonalizable, but also in the parametrization of a Lie group by its Lie algebra via the exponential and the Hermitian square root. I am not at all aware of the theory of HM, but if they do not form naturally an algebra, I doubt that their theory could be so rich, or if it is, it will be for completely different mathematical reasons.
To temper this claim, let me say that hypermatrices have been studied (although not so deeply) under the name tensors. They are of great importance in differential geometry (Ricci curvature tensor, with the many identities named after Christoffel, Gauss, Codazzi, ...) and in its applications: general relativity, elasticity. These are undoubtedly difficult topics, where even simple problems are not well understood. To mention one of them, there is still no satisfactory description of the twice-symmetric tensors of fourth order ($a_{ijkl}=a_{jikl}=a_{ijlk}$) that satisfy the Legendre-Hadamard condition $$\sum_{i,j,k,l}a_{ijkl}x_ix_j\xi_k\xi_l\ge0,\qquad\forall x\in\mathbb R^n,\xi\in\mathbb R^d.$$ It seems to me that the use of HM is too scattered, and therefore there is no research community specializing in all their aspects. Edit. Likewise, the notion of rank, although correctly defined in the case of tensors, is hard to manipulate and to compute explicitly. This is the reason why the exact algorithmic complexity of the multiplication of matrices is still not known (the operation $(A,B)\mapsto AB$ in $M_n(k)$ may be viewed as a $3$-tensor, and its tensorial rank governs the number of operations needed in an $n\times n$ multiplication).
@Denis: I could not ask for a more knowledgeable and informative answer. Your points about richness are especially enlightening. I am grateful! – Joseph O'Rourke Dec 2 2010 at 15:25
The Legendre-Hadamard condition is a great example of what Gjergji asked in the comments above, one where the multilinear setting is much less understood than the linear case. Whereas it is well known that the convex cone generated by $\xi\otimes\xi$ "squares of vectors" in the space of square matrices is equivalent to the cone of positive semi-definite matrices, and as such that PSD cone is self-dual, an analogous statement is known to be false for Legendre-Hadamard tensors. – Willie Wong Dec 2 2010 at 15:33
In particular, it'd be great to find out what is the difference set between the Legendre-Hadamard tensors and the rank-one cone generated by elements of the form $x\otimes x\otimes\xi\otimes\xi$, or even find a self-dual cone sitting between the two convex cones. – Willie Wong Dec 2 2010 at 15:35
An awfully simplistic answer: we work on two-dimensional paper, so two-dimensional matrices are very convenient to write down and compute with, while higher-dimensional hypermatrices are not.
So while we could represent multilinear forms, tensors, etc. as hypermatrices, we often don’t, because doing so is not nearly as fruitful as representing linear maps, bilinear forms etc. as matrices. Instead, we usually use other notations when working with higher tensors by hand.
In computer algebra, the dimension of the paper is not significant, while some kinds of abstraction are harder, so in this context, higher tensors are much more often represented as hypermatrices.
I really think this is the answer. – Allen Knutson Dec 2 2010 at 18:22
And the fact that we are speaking of this now, when computers make it easier to treat hypermatrices, seems to me to confirm your thesis. – Pietro Majer Dec 2 2010 at 19:34
I do not think this is the answer. – Gil Kalai Dec 6 at 20:07
I'm sure that the comparative awkwardness of notation for hypermatrices at least partly explains their lack of popularity amongst mathematicians, and lack of popularity leads to lack of theory. So I think this answer is partly correct, but it needs to be combined with answers like Florian's that expose genuine features that differ for $D=2$ and $D>2$. – Brendan McKay Dec 6 at 22:37
I think much of what I'm about to say has been said already, but I wanted to repeat it: I don't agree with the premise of the question at all. A "hypermatrix" is simply another name for a tensor. All aspects of matrix operations that I know (multiplication, determinant, etc.) have direct generalizations to tensors. All of this is best developed and understood in the abstract setting using tensor and exterior algebras over abstract vector spaces, as well as representation theory. Moreover, tensors along with these operations are used in many settings, including but not restricted to algebraic and differential geometry. What is true is that you don't necessarily have the depth of theorems that you have for matrices, but my view is that this is because of two reasons: 1) Tensors are more general, so they don't satisfy all of the properties of matrices, and 2) Tensors are more complicated, so it takes longer to develop them to the same depth.
This is pretty much exactly the answer I would have given. I'd add one more reason: 3) it's harder to write the data of a tensor on a chalkboard or piece of paper. – Theo Johnson-Freyd Dec 2 2010 at 21:58
Theo, an excellent point! – Deane Yang Dec 2 2010 at 22:29
Theo, I just said a week ago to colleagues of mine (partly joking, but not entirely) that I specialized on matrices because I can write them on the blackboard. – Denis Serre Dec 3 2010 at 3:57
I entirely agree with you, and I think Theo makes a very good point. I don't think I've ever seen a concrete tensor... – Mozibur Ullah Dec 6 at 21:44
Bhargava explained Gauss composition of binary quadratic forms using 2x2x2-cubes, which can be identified with 2x2x2-hypermatrices on which $SL_2({\mathbb Z})^3$ acts naturally; the hyperdeterminant of this hypermatrix is the common discriminant of the three associated quadratic forms. Already Cayley realized the connection with composition of binary quadratic forms.
One reason linear algebra is so useful is that the basic notions, like rank, have so many equivalent definitions. Some are better for formulating problems, some for proving theorems, and some for doing computations. The ability to freely move between these is the key to solving many problems.
Some of these definitions of course won't make sense for hypermatrices, but many of them do. The problem is that they don't usually end up being equivalent.
For example: you can define rank one hypermatrices as outer products of vectors ("simple tensors") and define the minimum number of such terms which must be summed to yield a given hypermatrix to be the "hyperrank". But this does not properly classify all hypermatrices up to changes of basis along all the "axes" of the hypermatrix as it does for matrices (in fact the number of equivalence classes is no longer even finite). And except in very simple cases this does not agree with what you'd get by the vanishing of certain "hyperdeterminants" -- indeed, the set of hypermatrices with hyperrank at most $k$ isn't even closed.
So the things you can compute aren't the same as the things you'd like to compute, and everything ends up feeling much more ad hoc.
Of course this complexity makes for a lot of interesting things to study, but not a simple widely applicable tool every undergrad should learn. Though perhaps they should learn why they don't learn it!
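The non-closedness mentioned above can be seen concretely in the smallest interesting case. Below is a NumPy sketch (my own illustration, not part of the original answer): $W = x\otimes x\otimes y + x\otimes y\otimes x + y\otimes x\otimes x$ is the standard example of a $2\times2\times2$ tensor of rank 3, yet it is a limit of tensors built from just two rank-one terms, so "hyperrank at most 2" is not a closed condition.

```python
import numpy as np

def rank1(a, b, c):
    """Outer product a (x) b (x) c: a rank-one 2x2x2 hypermatrix."""
    return np.einsum('i,j,k->ijk', a, b, c)

x = np.array([1.0, 0.0])
y = np.array([0.0, 1.0])

# W has tensor rank 3 ...
W = rank1(x, x, y) + rank1(x, y, x) + rank1(y, x, x)

# ... yet differences of only TWO rank-one terms converge to it:
for eps in (1e-1, 1e-3, 1e-5):
    approx = (rank1(x + eps * y, x + eps * y, x + eps * y)
              - rank1(x, x, x)) / eps
    print(eps, np.abs(approx - W).max())  # error shrinks like eps
```

Expanding $(x+\varepsilon y)^{\otimes 3}$ shows the error is exactly of order $\varepsilon$, so the rank-2 locus has $W$ in its closure.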
It seems to me that there are a lot of things in mathematics that one could call hypermatrices if one were so inclined, but which people generally don't (and if they call them anything, they call them tensors). For example, one use of matrices $M$ is that they represent bilinear forms $x^T M y$ and quadratic forms $x^T M x$. Three-dimensional hypermatrices then represent trilinear forms and cubic forms (and so forth for higher dimensions), and these do appear in various places in mathematics, for example in Lie theory. The norm map on a cubic number field is also an example of a cubic form. More generally, alternating multilinear forms appear as differential forms, and the same can be said about more general types of tensors. It could be said that studying a projective hypersurface defined by a homogeneous polynomial $f(x_1, ... x_n) = 0$ is the same as studying a certain $n$-dimensional hypermatrix associated to $f$. It could be said that studying a finite-dimensional algebra or Lie algebra $A$ is the same as studying the hypermatrix giving the structure constants of its multiplication or bracket $m : A \times A \to A$.
So, as I said in the comments, I'm not sure what you mean when you say that hypermatrices are rare. I suppose you are trying to draw a distinction between basis-dependent and basis-independent ideas?
Since a matrix is just how we write down a linear map $V\to W$ from one vector space to another, it seems to me that the prevalence of matrices over hypermatrices is just a reflection of the fact that we use categories so much more often than multicategories (where a morphism has a list of objects as its domain). And I feel that the large role categories play, with morphisms that just go from one object to another, is due to the way we look at the world in terms of states and processes, of where you are now and how to get where you're going, of being and becoming.
Of course, by using duals of vector spaces we can also use matrices to represent either functionals $V\otimes W\to k$ or elements $k\to V\otimes W$, but I feel that these uses are usually no more special than their generalizations for hypermatrices.
Just a small addition to the already nice answers above.
Although the following paper: Most tensor problems are NP Hard by Hillar and Lim does not explain why tensors are not so ubiquitous, it suggests that they might continue remaining non-ubiquitous. There seem to be no easy generalizations of standard notions in the matrix case: eigenvalues, singular values, spectral norm, etc., are shown to be NP-hard to compute, even for 3-D tensors.
I think that the difference between matrices and hypermatrices is closely related to the difference between graphs and hypergraphs.
1) Many of the miracles for graphs/matrices (characterization, duality, efficient algorithms) do not extend to high dimensions. Gauss elimination and the efficiency of computing determinants are at the root of some of these miracles.
2) For modeling, in many cases graphs/matrices suffice. You can easily model a hypergraph using a graph.
Hi Joseph,
it seems that the basic difference between hypermatrices and matrices is the indexing.

Let me explain what I mean. Consider a finite graph $G=(V,E)$ (the graph structure is not important; I mention it only to give a geometric interpretation of the index) and a collection of numbers $$(A_{v,w})_{(v,w)\in V\times V}.$$

Although this list has a "two-dimensional" (bi-indexed) character, it is in fact a hypermatrix.

The way to import the results of standard matrix theory to hypermatrices is to define the operations as usual. The product, for instance, is $$(AB)_{(v,w)}=\sum_{z\in V}A_{v,z}B_{z,w}.$$

The determinant and other important objects can also be defined in this way, by replacing the group $\mathbb{S}_n$ by the automorphisms of $V$, and so on.

So I guess no interesting feature pops up that would justify giving great attention to these objects, because one can see them just as a change in the indexing process.

On the other hand, in fields like statistical mechanics and percolation they are much more natural in high-dimensional problems than the usual matrices. When we define $p_{uv}$, the probability that the edge $\{u,v\}$ is open, or when we deal with the coupling constants $J_{i,j}$ in the Ising model, all of these are hypermatrices, and their linear-algebra structure frequently arises in proofs of important correlation inequalities.
http://mathhelpforum.com/statistics/147310-coin-flip.html
# Thread:
1. ## coin flip
Two players, Jim and Tom, each are going to flip 3 fair coins. What is the probability that they will get the same number of heads?
I am reviewing for an exam and cannot figure out how to get to this answer. The key says 5/16
2. $\frac{1}{64} + \frac{9}{64} + \frac{9}{64} + \frac{1}{64} = \frac{5}{16}$
3. P(both 3 heads) = $(\frac{1}{2})^3 \times (\frac{1}{2})^3$
P(both 2 heads) = $3(\frac{1}{2})^3 \times 3(\frac{1}{2})^3$
I used 3 because you can have HHT, HTH or THH
P(both 1 head) = $3(\frac{1}{2})^3 \times 3(\frac{1}{2})^3$
Same thing here, TTH, THT, HTT
P(both 0 head) = $(\frac{1}{2})^3 \times (\frac{1}{2})^3$
Then add all of them to give 5/16
4. Let $A$ be the event that they both flip the same number of heads.
Condition on the number of heads Jim flips.
$P(A) = P(A|0)P(0) + P(A|1)P(1) + P(A|2)P(2) + P(A|3)P(3)$
$= \binom{3}{0} (1/2)^{0} (1/2)^{3}*\binom{3}{0} (1/2)^{0} (1/2)^{3} + \binom{3}{1} (1/2)^{1} (1/2)^{2}*\binom{3}{1} (1/2)^{1} (1/2)^{2}$ $+ \binom{3}{2} (1/2)^{2} (1/2)^{1}*\binom{3}{2} (1/2)^{2} (1/2)^{1} + \binom{3}{3} (1/2)^{3} (1/2)^{0}*\binom{3}{3} (1/2)^{3} (1/2)^{0}$
$= \frac{1}{8}*\frac{1}{8} + \frac{3}{8}*\frac{3}{8} + \frac{3}{8}*\frac{3}{8} + \frac{1}{8}*\frac{1}{8} = \frac{5}{16}$
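The computation above generalizes to any number of coins: $P(\text{same}) = \sum_k \binom{n}{k}^2/4^n$. A quick exact check in Python (the function name is my own):

```python
from fractions import Fraction
from math import comb

def p_same_heads(n):
    """Probability that two players, each flipping n fair coins,
    get the same number of heads: sum over k of C(n,k)^2 / 4^n."""
    return sum(Fraction(comb(n, k) ** 2, 4 ** n) for k in range(n + 1))

print(p_same_heads(3))  # 5/16
```

For $n=3$ this gives $(1+9+9+1)/64 = 5/16$, matching the answer key.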
http://mathhelpforum.com/advanced-algebra/81685-gram-schmidt-process.html
# Thread:
1. ## Gram-Schmidt process
How do you use the Gram-Schmidt process to find an orthonormal basis for the subspace of R^4 with basis {(1,1,-1,0),(0,2,0,1),(-1,0,0,1)}? Thank you.
2. Originally Posted by antman
How do you use the Gram-Schmidt process to find an orthonormal basis for the subspace of R^4 with basis {(1,1,-1,0),(0,2,0,1),(-1,0,0,1)}? Thank you.
call your three vectors $v_1,v_2,v_3,$ and apply the Gram-Schmidt process to them; the vectors you're looking for are the resulting $e_1,e_2,e_3$
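A minimal NumPy sketch of the process applied to this basis (classical Gram-Schmidt; for serious numerical work one would prefer `np.linalg.qr`):

```python
import numpy as np

def gram_schmidt(vectors):
    """Classical Gram-Schmidt: orthonormalize a list of vectors in order."""
    basis = []
    for v in vectors:
        w = v - sum(np.dot(v, e) * e for e in basis)  # subtract projections
        basis.append(w / np.linalg.norm(w))           # normalize
    return basis

vs = [np.array([1.0, 1.0, -1.0, 0.0]),
      np.array([0.0, 2.0, 0.0, 1.0]),
      np.array([-1.0, 0.0, 0.0, 1.0])]
e1, e2, e3 = gram_schmidt(vs)

E = np.vstack([e1, e2, e3])
print(np.allclose(E @ E.T, np.eye(3)))  # True: the e_i are orthonormal
```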
http://www.physicsforums.com/showthread.php?s=f0c0b1cffc26d4e3c9eedaf470b07057&p=4349345
Physics Forums
## Why is the strong Nuclear force the strongest?
1kg of copper, without electrons, would accelerate a copper nucleus in a distance of 10cm with an acceleration of about $10^{27}$ m/s$^2$, reaching ultrarelativistic speeds within less than a nanometer (neglecting the finite propagation time of the electromagnetic force here). The total energy content in the electrostatic repulsion would be 1 billion times the rest energy of the copper block and equivalent to the total energy radiated by the sun in ~1/4 seconds. This corresponds to 50 billion megatons of TNT - one billion times the yield of the strongest bomb ever built, and more than 100 times the energy of the asteroid which probably killed most of the dinosaurs.
Yeah... so thank the universe that copper has always nearly as many electrons as protons. Otherwise, it would kill all life on the surface of earth.
So the "strength" that we are talking about is related to the coupling constant (which is some inherent property of an interaction I presume) and not to charge/mass ratio right?
Quote by mfb Yeah... so thank the universe that copper has always nearly as many electrons as protons. Otherwise, it would kill all life on the surface of earth.
Did you by any chance draw inspiration from this thread? ;D
Mentor
Quote by mishrashubham So the "strength" that we are talking about is related to the coupling constant (which is some inherent property of an interaction I presume) and not to charge/mass ratio right?
The charge to mass ratio is buried in the coupling constants.
$$\alpha_G=\frac{2 \pi G m_e^2}{hc}$$
$$\alpha = \frac{2 \pi k_e e^2}{h c}$$
Quote by DaleSpam The charge to mass ratio is buried in the coupling constants. $$\alpha_G=\frac{2 \pi G m_e^2}{hc}$$ $$\alpha = \frac{2 \pi k_e e^2}{h c}$$
Oh ok, I guess I need to come back to this issue after I've understood the mathematics behind coupling constants. Thanks for all the replies.
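The two coupling constants quoted above can be evaluated numerically; plugging in standard SI values (the usual CODATA constants, assumed below) shows just how far apart the electromagnetic and gravitational couplings are:

```python
import math

h   = 6.62607015e-34     # Planck constant, J s
c   = 2.99792458e8       # speed of light, m/s
G   = 6.67430e-11        # gravitational constant, m^3/(kg s^2)
k_e = 8.9875517873e9     # Coulomb constant, N m^2/C^2
e   = 1.602176634e-19    # elementary charge, C
m_e = 9.1093837015e-31   # electron mass, kg

alpha   = 2 * math.pi * k_e * e**2 / (h * c)   # fine-structure constant, ≈ 1/137
alpha_G = 2 * math.pi * G * m_e**2 / (h * c)   # gravitational coupling for electrons

print(alpha, alpha_G, alpha / alpha_G)  # ≈ 7.30e-3, 1.75e-45, 4.2e42
```

The ratio of roughly 10^42 is exactly the "gravity is absurdly weak" gap discussed in this thread; the mass enters squared, which is where the charge-to-mass dependence hides.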
As far as I understand, the strong nuclear force is so effective because it only acts within a small radius, i.e. comparable to the radius of the average atomic nucleus. Due to the small acting area it becomes much more concentrated and thus has a significantly greater effect, hence nuclear binding events and difficulties that arise during nuclear fission.

Our current understanding as to why gravity is significantly weaker is because it acts over a theoretically unbound area. It has been proposed that this is due to the force's ability to act across and through different spacial dimensions and thus cause interactions between objects that could be on opposite 'sides' of a Universe. If there exists a graviton, it is proposed by quantum mechanics that every single fermion in existence exchanges gravitons with every single other fermion in existence. This means, theoretically, that if you move a pen in front of you one metre to the right, you will be altering the forces that are acting on Betelgeuse (obviously the effects are too insignificant to even comprehend, but quantum theory says they're there). The only implication to this seemingly limitless exchange of energy is that it becomes extremely weak in the process. So weak in fact that we can overcome the gravitation of an object of mass 5.97x10^24 kilograms by simply lifting our arm.

In conclusion, it's all about concentration. The sum of the magnitude of the fundamental forces of nature may actually be quite similar, but the area over which they're spread out has unquestionable effects on their force per area and hence their 'strength'. This is why gravity is so weak and the strong nuclear force is so much stronger.
I don't think that's correct, Jakus. The weak force acts over an even smaller distance than the strong force, yet it is extremely weak. Also, the strong force doesn't really 'fall off' with distance. Indeed, the strength of the force doesn't drop off at all! What happens is that when you try to pull the quarks apart inside an atom, at a certain point the energy required to pull them further apart is more than the energy required to simply create two new quarks. So two new quarks are created, which bind to the previous two quarks, and you have new particles. Each composite particle has no net color charge, so the strong force isn't felt further away than a few nucleons, much like the EM force isn't felt far away from a neutral atom, but still allows them to bond when they get very close.
I apologise, but I'm unable to determine a contradiction between my post and yours. It seems that we are on separate tangents to each other ^.^ I respect your position and experience on this forum and out of the two of us you're most likely to be correct, but could you emphasise the point in my post that you disagree with?
Well, let's see:
- the strength of the nuclear force is not related to its range
- the nuclear force does not have a short range
Quote by JakusLarkus Our current understanding as to why gravity is significantly weaker is because it acts over a theoretically unbound area.
This is just wrong.
It has been proposed that this is due to the force's ability to act across and through different spacial dimensions and thus cause interactions between objects that could be on opposite 'sides' of a Universe.
Who proposed the highlighted part (in relation to extra dimensions) where?
If there exists a graviton, it is proposed by quantum mechanics that every single fermion in existence exchanges gravitons with every single other fermion in existence.
That is not restricted to fermions; it applies to bosons as well. And you don't need quantum theory for that: the classical theories of gravity predict the same.
The only implication to this seemingly limitless exchange of energy is that it becomes extremely weak in the process.
How is that an implication?
So weak in fact that we can overcome the gravitation of an object of mass 5.97x10^24 kilograms by simply lifting our arm.
To do this, you use the electromagnetic force, which has an infinite range as well, and is stronger by more than 30 orders of magnitude.
The sum of the magnitude of the fundamental forces of nature may actually be quite similar
It is not.
Quote by JakusLarkus I apologise, but I'm unable to determine a contradiction between my post and yours. It seems that we are on separate tangents to each other ^.^ I respect your position and experience on this forum and out of the two of us you're most likely to be correct, but could you emphasise the point in my post that you disagree with?
Your assertion that the strength of the force depends on how far away interaction can take place. Aka how 'concentrated' it is. The weak force, being much more concentrated than the strong force should be much stronger, yet it is not. Then you'd have to compare gravitation with the EM force. Both have the same range and fall off at the same rate, but the EM force is much stronger than gravitation.
Or to put my last post in shorter words, the strong force acts within such a small range BECAUSE it is so strong, not the other way around. If it were weaker, then the energy required to separate quarks would be less and the range would be larger since the screening effect would happen further away.
Message understood, sorry about that. I'm fairly new to this website so is there any way to remove my post?
Jakus:
I don't want to pile on here, but the following statement is so apparently inaccurate that somebody needs to explain why it is so,
Our current understanding as to why gravity is significantly weaker is because it acts over a theoretically unbound area. It has been proposed that this is due to the force's ability to act across and through different spacial dimensions and thus cause interactions between objects that could be on opposite 'sides' of a Universe.
Newtonian gravity: $F = \frac{G m_1 m_2}{r^2}$

Coulomb electromagnetic force: $F = \frac{k q_1 q_2}{r^2}$
So the electrostatic force and the gravitational force BOTH act over the entire universe. They both weaken as the square of the distance [r] of separation. Neither is 'bounded' [limited] in any way. They become asymptotically weaker at great distances...larger 'r'.
We know the electrostatic force is much stronger because a bit of static electricity can pick up a paper clip...despite the gravitational force opposing that from the entire earth.
On the other hand, when you say gravity acts in 'different spacial dimensions', perhaps that is something you read about string theory. Some propose gravity is weaker in our observed spacetime because it diffuses into the additional compactified [unobservable] dimensions of such theories. So maybe that's what you meant by 'unbounded' area....
so at a minimum the language in the first sentence conflicts with that in the second for virtually all readers here.
don't give up....this is a good place to learn....and most of us take our lumps from time to time....
Quote by JakusLarkus Message understood, sorry about that. I'm fairly new to this website so is there any way to remove my post?
Not after a day or so. And don't worry, there's no need to remove it anyways. It would just confuse people who read this thread after it was deleted. Who knows, someone may have the same idea you did and having this conversation shown could help them.
Ok, thanks guys. And yeah I'm not sure why I said 'Unbound', I wanted to say 'large area'. It seems that I entered this website thinking I could have a go at answering a couple of questions and instead learnt a few things myself, so I appreciate your support. I am however, slightly insulted at some of mfb's responses.
That was not my intention, sorry. English is not my native language, and I do not like to write 100 words if I can express the same content with 10 or 1. Can you explain which parts appear slightly insulting to you?
The 'This is just wrong' remark was a bit of a kick in the nuts, but that's ok, I understand =) It's not often you get to converse with decent, respectable people on the Internet but PhysicsForums seems to be a hub for the helpful and considerate. Anyway, I'm straying away from the thread topic and the rules ask us not to so I'll depart. Thanks again, everyone.
JakusLarkus, hey, don't take that personally. Throughout the quest for knowledge we all make, and have made, mistakes; if someone says that you are wrong (theoretically or empirically), it just means your state of knowledge about the matter might need some help, not you as a human being. So it is always good not to "fall in love" with basic physical descriptions of natural laws or phenomena, or take them very personally, because later you may find out that either you were wrong at some point about them or science will have something better to offer as an explanation. We have all made mistakes; the history of science, and the history of the world as a whole, is full of them. But then again, they are there to help us become better, so every "kick in the balls" makes you stronger in this case.
Hi Jakus....you said ..
The 'This is just wrong' remark was a bit of a kick....
perhaps some posters will even be mean intentionally.....you have two choices here....if you just want to show off and somebody criticizes you, well that's just too bad.....on the other hand if you want to learn, then accept corrections when you believe them accurate....
Did you survive, learn something, and move on.....bravo, that's the key to life!!!
http://mathoverflow.net/revisions/87990/list
# Estimating Wiener process parameters
Consider a Wiener process with zero drift, infinitesimal variance $\sigma^2$, and an unknown starting value $\nu$. That is, \begin{align} Y_t \sim \mathcal{N}(\nu, t\sigma^2). \end{align}
Now, suppose that we don't observe the $Y_i$ directly, but rather have a corresponding Gaussian likelihood for each $Y_i$: $X_i$ having mean $\mu_i$ and precision $\lambda_i$.
Question 1: If we know $\sigma$ and have $X_{i\le t}$, what is the posterior on $Y_t$?
Intuitively, I think we modify each $X_{t-i}$ by adding $i\sigma^2$ to its variance, and then the combined likelihood on $Y_t$ is Gaussian with mean and precision: \begin{align} \mu^\star &= \frac{\sum_{i\le t}\mu_i\lambda_i}{\lambda^\star} \\ \lambda^\star &= \sum_{i\le t}\lambda_i. \end{align}
Question 2: If we don't know $\sigma$, but have $X_{i\le t}$, what is the posterior on $Y_t$ and $\sigma$?
It seems that we would want to estimate $\sigma$ by setting a gamma prior on $\sigma^{-2}$. (That's what we would do in the special case that each of the $X_i$ has zero variance.) I'm having trouble proceeding from here. (Doesn't this make the combined likelihood on $Y_t$ Student's t-distributed?)
http://mathhelpforum.com/statistics/81034-poisson-random-variable.html
1. ## Poisson Random Variable
The number of defective items that come out of a production line on any given day is a Poisson random variable with parameter λ=2. At the end of the day, the defective items are reworked. Each defective item can be repaired with probability 0.6 and is discarded with probability 0.4.
1. What is the probability that fewer than 3 items are discarded on a given day?
2. What is the expected number of items discarded?
2. Originally Posted by essedra
The number of defective items that come out of a production line on any given day is a Poisson random variable with parameter λ=2. At the end of the day, the defective items are reworked. Each defective item can be repaired with probability 0.6 and is discarded with probability 0.4.
1. What is the probability that fewer than 3 items are discarded on a given day?
2. What is the expected number of items discarded?
The probability of $k$ defectives in a day is:
$p(k)=f(k,\lambda)=f(k,2)$
where $f(k,\lambda)$ is the Poisson probability mass function.
Hence the probability that fewer than 3 items are discarded in a day is:
$p(\text{fewer than 3 discards})=\sum_{k=0}^{\infty} f(k,2)\sum_{r=0}^2 b(r;k,0.4)$
where $b(r;k,0.4)$ is the pmf for the binomial distribution with k trials with probability of success on a single trial of $0.4$.
CB
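A quick numerical check of the double sum above (Python, standard library only). It also confirms the standard Poisson-thinning fact: independently keeping each defective with probability 0.4 makes the number discarded Poisson with mean $\lambda p = 2 \times 0.4 = 0.8$, which answers part 2 directly:

```python
import math

def poisson_pmf(k, lam):
    return math.exp(-lam) * lam**k / math.factorial(k)

def binom_pmf(r, k, p):
    return math.comb(k, r) * p**r * (1.0 - p)**(k - r)

lam, p = 2.0, 0.4

# Double sum from the answer, truncating the rapidly vanishing Poisson tail
prob = sum(poisson_pmf(k, lam) * sum(binom_pmf(r, k, p) for r in range(3))
           for k in range(60))

# Thinning shortcut: number discarded ~ Poisson(lam * p) = Poisson(0.8)
check = sum(poisson_pmf(r, lam * p) for r in range(3))

print(round(prob, 4), round(check, 4))  # both ≈ 0.9526
```

So P(fewer than 3 discards) ≈ 0.9526 and the expected number discarded is 0.8.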
http://math.stackexchange.com/questions/101793/inclusion-exclusion-principle-question/101800
# Inclusion-exclusion principle question
1st part of my question:
I have that $$P\left(\bigcup_{i=1}^{{2^n-n}}E_i\right)$$ , how would I write it out using the inclusion-exclusion principle? I know it starts off: $$\sum_{i=1}^{2^n-n} P(E_i)+...$$ But after that Im not sure what goes next.
2nd part --- I also read somewhere that (by subadditivity), $P\left(\bigcup_{i=1}^{{2^n-n}}E_i\right) \le \sum_{i=1}^{2^n-n} P(E_i)$, but why is that the case? I dont understand how it by subadditivity the above inequality comes about.
Thanks.
Is there any reason for considering $2^n-n$ events $E_i$ rather than $n$ events which would simplify the notation? That is, do the $E_i$ have more specific meaning that you are not revealing to us? For example, ignoring a possible typographical error in the upper limit, you could actually be wanting to find the probability that two or more of $n$ events have occurred. – Dilip Sarwate Jan 23 '12 at 22:22
## 2 Answers
$$\sum_{i=1}^{2^n-n} P(E_i) - \sum_{i=2}^{2^n-n} \sum_{j=1}^{i-1} P(E_i \cap E_j) + \sum_{i=3}^{2^n-n} \sum_{j=2}^{i-1} \sum_{k=1}^{j-1} P(E_i \cap E_j\cap E_k) -\cdots$$
The key point about the limits of the sums is you want each possible combination once and the $i,j,k,\ldots$ distinct
For the second part you have
$$P\left(\bigcup_{i=1}^{{2^n-n}}E_i\right) = P(E_1)+P(E_2 \cap E_1^C)+ P(E_3 \cap E_2^C \cap E_1^C) + \cdots$$
$$\le P(E_1)+P(E_2)+ P( E_3) + \cdots = \sum_{i=1}^{2^n-n} P(E_i)$$
$$\eqalign{ P\Bigl(\bigcup_{i=1}^n E_i\Bigr) = \sum_{i\le n} P(E_i) - &\sum_{i_1<i_2}\underbrace{ P(E_{i_1}\cap E_{i_2})}_{ {\text {two at a time}}} +\sum_{i_1<i_2<i_3} \underbrace{ P(E_{i_1}\cap E_{i_2}\cap E_{i_3})}_{\text {three at a time}} - \cr &\cdots+ (-1)^{n}\sum_{i_1<i_2<\cdots<i_{n-1} } \underbrace{ P(E_{i_1}\cap\cdots\cap E_{i_{n-1}} )}_{(n-1)\text { at a time}} \cr &\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ + (-1)^{n+1}P(E_1\cap E_2\cap\cdots\cap E_n)}$$
The subscripts in the above sums are just a handy way to write, for example in the term $\sum\limits_{i_1<i_2} P(E_{i_1}\cap E_{i_2})$, "take the sum of the probabilities of intersections of two distinct events (the intersections taken without regard to order; that is, in the sum, you have only one of, e.g., $P(E_1\cap E_2)$ or $P(E_2\cap E_1) \thinspace$)".
Of course my "$n$" is your "$2^n-n$".
For your concern at the end of your post, note the formula above has negative terms.
In general, if the events $\{E_i\}$ are mutually exclusive, then $P(\cup E_i )=\sum P(E_i)$; but if the events overlap then $P(\cup E_i )\le\sum P(E_i)$. This is because the right-hand side of the preceding formula counts some probabilities more than once (namely those in the intersection of overlapping $E_i$).
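Both facts are easy to verify by brute force on a toy uniform sample space (the three events below are arbitrary sets chosen purely for illustration):

```python
from itertools import combinations

omega = set(range(10))                            # uniform sample space
events = [{0, 1, 2, 3}, {2, 3, 4, 5}, {5, 6, 7}]  # arbitrary example events

def prob(s):
    return len(s) / len(omega)

# Inclusion-exclusion: alternating sum over all non-empty subsets of events
ie = sum((-1) ** (r + 1) * prob(set.intersection(*combo))
         for r in range(1, len(events) + 1)
         for combo in combinations(events, r))

union = prob(set.union(*events))
bound = sum(prob(E) for E in events)

print(round(ie, 10), union, round(bound, 10))  # 0.8 0.8 1.1
```

The alternating sum matches the probability of the union exactly, while the subadditivity bound (1.1) overcounts the overlapping elements.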
http://en.wikipedia.org/wiki/Specific_heat
# Heat capacity
Heat capacity, or thermal capacity, is the measurable physical quantity that specifies the amount of heat required to change the temperature of an object or body by a given amount. The SI unit of heat capacity is joule per kelvin, J/K.
Heat capacity is an extensive property of matter, meaning it is proportional to the size of the system. When expressing the same phenomenon as an intensive property, the heat capacity is divided by the amount of substance, mass, or volume, so that the quantity is independent of the size or extent of the sample. The molar heat capacity is the heat capacity per mole of a pure substance and the specific heat capacity, often simply called specific heat, is the heat capacity per unit mass of a material. Occasionally, in engineering contexts, the volumetric heat capacity is used.
Temperature reflects the average randomized kinetic energy of particles in matter, while heat is the transfer of thermal energy across a system boundary into the body or from the body to the environment. Translation, rotation, and a combination of the two types of energy in vibration (kinetic and potential) of atoms represent the degrees of freedom of motion which classically contribute to the heat capacity of matter, but loosely bound electrons may also participate. On a microscopic scale, each system particle absorbs thermal energy among the few degrees of freedom available to it, and at sufficient temperatures, this process contributes to the specific heat capacity that classically approaches a value per mole of particles that is set by the Dulong-Petit law. This limit, which is about 25 joules per kelvin for each mole of atoms, is achieved by many solid substances at room temperature.
For quantum mechanical reasons, at any given temperature, some of these degrees of freedom may be unavailable, or only partially available, to store thermal energy. In such cases, the specific heat capacity is a fraction of the maximum. As the temperature approaches absolute zero, the specific heat capacity of a system also approaches zero, due to loss of available degrees of freedom. Quantum theory can be used to quantitatively predict the specific heat capacity of simple systems.
## Background
Before the development of modern thermodynamics, it was thought that heat was an invisible fluid, the so-called caloric. Bodies were capable of holding a certain amount of this fluid, hence the term heat capacity, named and first investigated by Joseph Black in the 1750s.[1] Today one instead discusses the internal energy of a system. This is made up of its microscopic kinetic and potential energy. Heat is no longer considered a fluid. Rather, it is a transfer of disordered energy at the microscopic level. Nevertheless, at least in English, the term "heat capacity" survives. Some other languages prefer the term thermal capacity, which is also sometimes used in English.
## Older units and English units
An older unit of heat is the kilogram-calorie (Cal), originally defined as the energy required to raise the temperature of one kilogram of water by one degree Celsius, typically from 15 to 16 °C. The specific heat capacity of water on this scale would therefore be exactly 1 Cal/(°C·kg). However, due to the temperature-dependence of the specific heat, a large number of different definitions of the calorie came into being. Whilst once it was very prevalent, especially its smaller cgs variant the gram-calorie (cal), defined so that the specific heat of water would be 1 cal/(K·g), in most fields the use of the calorie is now archaic.
In the United States other units of measure for heat capacity may be quoted in disciplines such as construction, civil engineering, and chemical engineering. A still common system is the English Engineering Units in which the mass reference is pound mass and the temperature is specified in degrees Fahrenheit or Rankine. One (rare) unit of heat is the pound calorie (lb-cal), defined as the amount of heat required to raise the temperature of one pound of water by one degree Celsius. On this scale the specific heat of water would be 1 lb-cal/(K·lb). More common is the British thermal unit, the standard unit of heat in the U.S. construction industry. This is defined such that the specific heat of water is 1 BTU/(°F·lb).
## Extensive and intensive quantities
An object's heat capacity (symbol C) is defined as the ratio of the amount of heat energy transferred to an object to the resulting increase in temperature of the object,
$C = \frac{\Delta Q}{\Delta T}.$
In the International System of Units, heat capacity has the unit joules per kelvin.
Heat capacity is an extensive property, meaning it is a physical property that scales with the size of a physical system. A sample containing twice the amount of substance as another sample requires the transfer of twice the amount of heat ($Q$) to achieve the same change in temperature ($\Delta T$).
For many experimental and theoretical purposes it is more convenient to report heat capacity as an intensive property - an intrinsic characteristic of a particular substance. This is most often accomplished by expressing the property in relation to a unit of mass. In science and engineering, such properties are often prefixed with the term specific.[2] International standards now recommend that specific heat capacity always refer to division by mass.[3] The units for the specific heat capacity are $[c] = \mathrm{\tfrac{J}{kg \cdot K}}$.
In chemistry, heat capacity is often specified relative to one mole, the unit of amount of substance, and is called the molar heat capacity. It has the unit $[C_\mathrm{mol}] =\mathrm{\tfrac{J}{mol \cdot K}}$.
For some considerations it is useful to specify the volume-specific heat capacity, commonly called volumetric heat capacity, which is the heat capacity per unit volume and has SI units $[s] = \mathrm{\tfrac{J}{m^{3} \cdot K}}$. This is used almost exclusively for liquids and solids, since for gases it may be confused with specific heat capacity at constant volume.
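The three intensive variants differ only by factors of molar mass and density. A sketch for liquid water (approximate values assumed: $c_p \approx 4185.5$ J/(kg·K), $M \approx 18.015$ g/mol, $\rho \approx 997$ kg/m³ near room temperature):

```python
c_p = 4185.5     # specific heat capacity of water, J/(kg K)
M   = 0.018015   # molar mass of water, kg/mol
rho = 997.0      # density of water near room temperature, kg/m^3

C_mol = c_p * M    # molar heat capacity, J/(mol K)
s     = c_p * rho  # volumetric heat capacity, J/(m^3 K)

print(C_mol, s)  # ≈ 75.4 and ≈ 4.17e6
```

The molar value of about 75 J/(mol·K) is consistent with the tabulated figure quoted later in this article (74.539 J/(mol·K) at 25 °C; the small discrepancy comes from the different reference temperatures).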
## Measurement of heat capacity
The heat capacity of most systems is not a constant. Rather, it depends on the state variables of the thermodynamic system under study. In particular it is dependent on temperature itself, as well as on the pressure and the volume of the system.
Different measurements of heat capacity can therefore be performed, most commonly either at constant pressure or at constant volume. The values thus measured are usually subscripted (by p and V, respectively) to indicate the definition. Gases and liquids are typically also measured at constant volume. Measurements under constant pressure produce larger values than those at constant volume because the constant pressure values also include heat energy that is used to do work to expand the substance against the constant pressure as its temperature increases. This difference is particularly notable in gases where values under constant pressure are typically 30% to 66.7% greater than those at constant volume.
The specific heat capacities of substances comprising molecules (as distinct from monatomic gases) are not fixed constants and vary somewhat depending on temperature. Accordingly, the temperature at which the measurement is made is usually also specified. Examples of two common ways to cite the specific heat of a substance are as follows:
• Water (liquid): cp = 4.1855 [J/(g·K)] (15 °C, 101.325 kPa) or 1 calorie/gram °C
• Water (liquid): CvH = 74.539 J/(mol·K) (25 °C)
For liquids and gases, it is important to know the pressure to which given heat-capacity data refer. Most published data are given for standard pressure. However, quite different standard conditions for temperature and pressure have been defined by different organizations. The International Union of Pure and Applied Chemistry (IUPAC) changed its recommendation from one atmosphere to the round value 100 kPa (≈750.062 Torr).[notes 1]
### Calculation from first principles
The path integral Monte Carlo method is a numerical approach for determining the values of heat capacity, based on quantum dynamical principles. However, good approximations can be made for gases in many states using simpler methods outlined below. For many solids composed of relatively heavy atoms (atomic number > iron), at non-cryogenic temperatures, the heat capacity at room temperature approaches 3R = 24.94 joules per kelvin per mole of atoms (Dulong–Petit law, R is the gas constant). Low temperature approximations for both gases and solids at temperatures less than their characteristic Einstein temperatures or Debye temperatures can be made by the methods of Einstein and Debye discussed below.
### Thermodynamic relations and definition of heat capacity
The internal energy of a closed system changes either by adding heat to the system or by the system performing work. Written mathematically we have
${\ \mathrm{d}U = \delta Q + \delta W }.$
For work as a result of an increase of the system volume we may write,
${\ \mathrm{d}U = \delta Q - P\mathrm{d}V }.$
If the heat is added at constant volume, then the second term of this relation vanishes and one readily obtains
$\left(\frac{\partial U}{\partial T}\right)_V=\left(\frac{\partial Q}{\partial T}\right)_V=C_V.$
This defines the heat capacity at constant volume, CV. Another useful quantity is the heat capacity at constant pressure, CP. With the enthalpy of the system given by
${\ H = U + PV }$
our equation for dU changes to
${\ \mathrm{d}H = \delta Q + V \mathrm{d}P },$
and therefore, at constant pressure, we have
$\left(\frac{\partial H}{\partial T}\right)_P=\left(\frac{\partial Q}{\partial T}\right)_P=C_P.$
### Relation between heat capacities
Main article: Relations between heat capacities
Measuring the heat capacity, sometimes referred to as specific heat, at constant volume can be prohibitively difficult for liquids and solids. That is, small temperature changes typically require large pressures to maintain a liquid or solid at constant volume implying the containing vessel must be nearly rigid or at least very strong (see coefficient of thermal expansion and compressibility). Instead it is easier to measure the heat capacity at constant pressure (allowing the material to expand or contract freely) and solve for the heat capacity at constant volume using mathematical relationships derived from the basic thermodynamic laws. Starting from the fundamental Thermodynamic Relation one can show
$C_p - C_V = T \left(\frac{\partial p}{\partial T}\right)_{V,N} \left(\frac{\partial V}{\partial T}\right)_{p,N}$
where the partial derivatives are taken at constant volume and constant number of particles, and constant pressure and constant number of particles, respectively.
This can also be rewritten
$C_{p} - C_{V}= V T\frac{\alpha^{2}}{\beta_{T}}\,$
where
$\alpha$ is the coefficient of thermal expansion,
$\beta_T$ is the isothermal compressibility.
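As a rough numerical illustration of this relation, the sketch below evaluates $C_{p,m} - C_{V,m} = V_m T \alpha^2/\beta_T$ for liquid water near room temperature. The property values used (molar volume, expansion coefficient, compressibility) are approximate literature figures assumed for this example, not taken from the article.

```python
# Difference C_p,m - C_V,m for liquid water near 25 °C, via
#   C_p,m - C_V,m = V_m * T * alpha^2 / beta_T.
# Property values below are approximate reference figures (assumptions).

T = 298.15          # temperature, K
V_m = 1.807e-5      # molar volume of water, m^3/mol
alpha = 2.57e-4     # coefficient of thermal expansion, 1/K
beta_T = 4.52e-10   # isothermal compressibility, 1/Pa

dC = V_m * T * alpha**2 / beta_T   # J/(mol·K)
print(f"C_p,m - C_V,m ~ {dC:.2f} J/(mol·K)")
```

The result is well under 1 J/(mol·K), which is why $C_p$ and $C_V$ are nearly equal for liquids and solids at ordinary conditions.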
The heat capacity ratio or adiabatic index is the ratio of the heat capacity at constant pressure to heat capacity at constant volume. It is sometimes also known as the isentropic expansion factor.
#### Ideal gas
For an ideal gas, the partial derivatives above can be evaluated from the equation of state (written here for one mole, with R the gas constant)[4]

$p V = R T .$

Starting from

$C_p - C_V = T \left(\frac{\partial p}{\partial T}\right)_{V} \left(\frac{\partial V}{\partial T}\right)_{p},$

the triple product rule gives $\left(\frac{\partial p}{\partial T}\right)_{V} = -\left(\frac{\partial p}{\partial V}\right)_{T}\left(\frac{\partial V}{\partial T}\right)_{p}$, so that

$C_p - C_V = -T \left(\frac{\partial p}{\partial V}\right)_{T} \left(\frac{\partial V}{\partial T}\right)_{p}^2 .$

From the equation of state,

$p =\frac{RT}{V}\quad\Rightarrow\quad\left(\frac{\partial p}{\partial V}\right)_{T}=-\frac{RT}{V^2}=-\frac{p}{V},$

$V =\frac{RT}{p}\quad\Rightarrow\quad\left(\frac{\partial V}{\partial T}\right)_{p}^2=\frac{R^2}{p^2}.$

Substituting,

$-T \left(\frac{\partial p}{\partial V}\right)_{T} \left(\frac{\partial V}{\partial T}\right)_{p}^2 = -T\left(-\frac{p}{V}\right)\left(\frac{R^2}{p^2}\right)=R,$
this equation reduces simply to Mayer's relation,
$C_p - C_V = R$
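Mayer's relation can also be checked numerically: the sketch below evaluates the two partial derivatives in $C_p - C_V = T(\partial p/\partial T)_V (\partial V/\partial T)_p$ by central finite differences on the ideal-gas equation of state for one mole. The state point chosen is arbitrary.

```python
# Numerical check of Mayer's relation C_p - C_V = R for one mole of an
# ideal gas, using central finite differences for the derivatives in
#   C_p - C_V = T (dp/dT)_V (dV/dT)_p.

R = 8.314462618                     # gas constant, J/(mol·K)

def p_of(T, V):                     # equation of state: p = RT/V
    return R * T / V

def V_of(T, p):                     # inverted: V = RT/p
    return R * T / p

T, V = 300.0, 0.024                 # arbitrary state point (K, m^3)
p = p_of(T, V)

h = 1e-3                            # finite-difference step in T
dp_dT = (p_of(T + h, V) - p_of(T - h, V)) / (2 * h)   # at constant V
dV_dT = (V_of(T + h, p) - V_of(T - h, p)) / (2 * h)   # at constant p

diff = T * dp_dT * dV_dT            # should equal R
print(diff)
```

Because both derivatives are linear in T for an ideal gas, the finite-difference result agrees with R essentially exactly.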
### Specific heat capacity
The specific heat capacity of a material on a per mass basis is
$c={\partial C \over \partial m},$
which in the absence of phase transitions is equivalent to
$c={C \over m} = {C \over {\rho V}},$
where
$C$ is the heat capacity of a body made of the material in question,
$m$ is the mass of the body,
$V$ is the volume of the body, and
$\rho = \frac{m}{V}$ is the density of the material.
For gases, and also for other materials under high pressures, there is need to distinguish between different boundary conditions for the processes under consideration (since values differ significantly between different conditions). Typical processes for which a heat capacity may be defined include isobaric (constant pressure, $dp = 0$) or isochoric (constant volume, $dV = 0$) processes. The corresponding specific heat capacities are expressed as
$c_p = \left(\frac{\partial C}{\partial m}\right)_p,$
$c_V = \left(\frac{\partial C}{\partial m}\right)_V.$
From the results of the previous section, dividing through by the mass gives the relation
$c_p - c_V = \frac{\alpha^2 T}{\rho \beta_T}.$
A parameter related to $c$ is $C/V\,$, the volumetric heat capacity. In engineering practice, $c_V\,$ for solids or liquids often signifies a volumetric heat capacity rather than a constant-volume one. In such cases, the mass-specific heat capacity (specific heat) is often explicitly written with the subscript $m$, as $c_m\,$. From the relationships above, for solids one writes
$c_m = \frac{C}{m} = \frac{c_{volumetric}}{\rho}.$
For pure homogeneous chemical compounds with an established molecular or molar mass, or for which a molar quantity can otherwise be established, heat capacity as an intensive property can be expressed on a per-mole basis instead of a per-mass basis by the following equations, analogous to the per-mass equations:
$C_{p,m} = \left(\frac{\partial C}{\partial n}\right)_p$ = molar heat capacity at constant pressure
$C_{V,m} = \left(\frac{\partial C}{\partial n}\right)_V$ = molar heat capacity at constant volume
where n = number of moles in the body or thermodynamic system. One may refer to such a per mole quantity as molar heat capacity to distinguish it from specific heat capacity on a per mass basis.
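For a concrete conversion between the per-mass and per-mole forms, the sketch below uses liquid water; the specific heat and molar mass are standard reference values assumed for the example.

```python
# Converting a per-mass heat capacity to a per-mole one: C_p,m = c_p * M.
# Values for liquid water are standard reference figures (assumptions).

c_p = 4.1813       # specific heat capacity of water, J/(g·K)
M = 18.015         # molar mass of water, g/mol

C_pm = c_p * M     # molar heat capacity at constant pressure, J/(mol·K)
print(round(C_pm, 1))   # about 75.3 J/(mol·K)
```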
### Polytropic heat capacity
The polytropic heat capacity applies to processes in which all of the thermodynamic properties (pressure, volume, temperature) change:
$C_{i,m} = \left(\frac{\partial C}{\partial n}\right)$ = molar heat capacity at polytropic process
The most important polytropic processes run between the adiabatic and the isotherm functions; the polytropic index lies between 1 and the adiabatic exponent (γ or κ).
### Dimensionless heat capacity
The dimensionless heat capacity of a material is
$C^*={C \over nR} = {C \over {Nk}}$
where
C is the heat capacity of a body made of the material in question (J/K)
n is the amount of substance in the body (mol)
R is the gas constant (J/(K·mol))
N is the number of molecules in the body (dimensionless)
k is Boltzmann’s constant (J/(K·molecule))
In the ideal gas article, dimensionless heat capacity $C^* \,$ is expressed as $\hat c$, and is related there directly to half the number of degrees of freedom per particle. This holds true for quadratic degrees of freedom, a consequence of the equipartition theorem.
More generally, the dimensionless heat capacity relates the logarithmic increase in temperature to the increase in the dimensionless entropy per particle $S^* = S / Nk$, measured in nats.
$C^* = {d S^* \over d \ln T}$
Alternatively, using base 2 logarithms, C* relates the base-2 logarithmic increase in temperature to the increase in the dimensionless entropy measured in bits.[5]
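The defining relation $C^* = dS^*/d\ln T$ can be integrated directly when $C^*$ is constant, giving $\Delta S^* = C^* \ln(T_2/T_1)$. The sketch below applies this to the illustrative case of a monatomic ideal gas heated at constant volume, for which $C^* = 3/2$:

```python
import math

# For constant dimensionless heat capacity C*, integrating
#   C* = dS*/d(ln T)
# gives the dimensionless entropy change dS* = C* ln(T2/T1), in nats.
# Illustrative case (an assumption of this sketch): monatomic ideal gas
# heated at constant volume, so C* = 3/2.

C_star = 1.5
T1, T2 = 300.0, 600.0
dS_nats = C_star * math.log(T2 / T1)   # entropy per particle, in nats
dS_bits = dS_nats / math.log(2)        # the same quantity in bits
print(dS_nats, dS_bits)
```

Doubling the temperature here raises the entropy per particle by about 1.04 nats, i.e. exactly 1.5 bits, matching the base-2 reading of $C^*$.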
### Heat capacity at absolute zero
From the definition of entropy
$T \, dS=\delta Q\,$
the absolute entropy can be calculated by integrating from zero kelvins temperature to the final temperature Tf
$S(T_f)=\int_{T=0}^{T_f} \frac{\delta Q}{T} =\int_0^{T_f} \frac{\delta Q}{dT}\frac{dT}{T} =\int_0^{T_f} C(T)\,\frac{dT}{T}.$
The heat capacity must be zero at zero temperature in order for the above integral not to yield an infinite absolute entropy, which would violate the third law of thermodynamics. One of the strengths of the Debye model is that (unlike the preceding Einstein model) it predicts the proper mathematical form of the approach of heat capacity toward zero, as absolute zero temperature is approached.
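To see concretely why a vanishing low-temperature heat capacity keeps the entropy finite, assume the Debye low-temperature form $C(T) = aT^3$. The integrand $C(T)/T = aT^2$ is then finite at $T = 0$, and the entropy integral evaluates to $aT_f^3/3 = C(T_f)/3$. A quick numerical check, with an arbitrary illustrative constant $a$:

```python
# With the Debye low-temperature form C(T) = a*T^3, the entropy
#   S(T_f) = integral from 0 to T_f of C(T)/T dT
# has integrand a*T^2, which is finite at T = 0, and evaluates to
# a*T_f^3/3.  Midpoint-rule check (a is an arbitrary illustrative value):

a = 2.0e-4            # J/(K^4·mol), illustrative constant
T_f = 20.0            # final temperature, K

n = 100000
dT = T_f / n
S = sum(a * ((i + 0.5) * dT) ** 2 * dT for i in range(n))  # C(T)/T = a*T^2
S_exact = a * T_f**3 / 3
print(S, S_exact)     # the numerical sum matches a*T_f^3/3 closely
```

Had $C(T)$ approached a nonzero constant at $T = 0$, the same integrand would diverge like $1/T$ and the integral would be infinite, violating the third law.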
### Negative heat capacity (stars)
Most physical systems exhibit a positive heat capacity. However, even though it can seem paradoxical at first,[6][7] there are some systems for which the heat capacity is negative. These include gravitating objects such as stars; and also sometimes some nano-scale clusters of a few tens of atoms, close to a phase transition.[8] A negative heat capacity can result in a negative temperature.
According to the virial theorem, for a self-gravitating body like a star or an interstellar gas cloud, the average potential energy UPot and the average kinetic energy UKin are locked together in the relation
$U_\text{Pot} = -2 U_\text{Kin}, \,$
The total energy U (= UPot + UKin) therefore obeys
$U = - U_\text{Kin}, \,$
If the system loses energy, for example by radiating energy away into space, the average kinetic energy, and with it the average temperature, actually increases. The system therefore can be said to have a negative heat capacity.[9]
A more extreme version of this occurs with black holes. According to black hole thermodynamics, the more mass and energy a black hole absorbs, the colder it becomes. In contrast, if it is a net emitter of energy, through Hawking radiation, it will become hotter and hotter until it boils away.
## Theory of heat capacity
### Factors that affect specific heat capacity
Molecules undergo many characteristic internal vibrations. Potential energy stored in these internal degrees of freedom contributes to a sample’s energy content, [10] [11] but not to its temperature. More internal degrees of freedom tend to increase a substance's specific heat capacity, so long as temperatures are high enough to overcome quantum effects.
For any given substance, the heat capacity of a body is directly proportional to the amount of substance it contains (measured in terms of mass or moles or volume). Doubling the amount of substance in a body doubles its heat capacity, etc.
However, when this effect has been corrected for, by dividing the heat capacity by the quantity of substance in a body, the resulting specific heat capacity is a function of the structure of the substance itself. In particular, it depends on the number of degrees of freedom that are available to the particles in the substance, each type of which allows substance particles to store energy. The translational kinetic energy of substance particles is the only one of the many possible degrees of freedom which manifests as temperature change, and thus the larger the number of degrees of freedom available to the particles of a substance other than translational kinetic energy, the larger will be the specific heat capacity for the substance. For example, rotational kinetic energy of gas molecules stores heat energy in a way that increases heat capacity, since this energy does not contribute to temperature.
In addition, quantum effects require that whenever energy be stored in any mechanism associated with a bound system which confers a degree of freedom, it must be stored in certain minimal-sized deposits (quanta) of energy, or else not stored at all. Such effects limit the full ability of some degrees of freedom to store energy when their lowest energy storage quantum amount is not easily supplied at the average energy of particles at a given temperature. In general, for this reason, specific heat capacities tend to fall at lower temperatures where the average thermal energy available to each particle degree of freedom is smaller, and thermal energy storage begins to be limited by these quantum effects. Due to this process, as temperature falls toward absolute zero, so also does heat capacity.
#### Degrees of freedom
Main article: degrees of freedom (physics and chemistry)
Molecules are quite different from the monatomic gases like helium and argon. With monatomic gases, thermal energy comprises only translational motions. Translational motions are ordinary, whole-body movements in 3D space whereby particles move about and exchange energy in collisions, like rubber balls in a vigorously shaken container. These simple movements in the three dimensions of space mean individual atoms have three translational degrees of freedom. A degree of freedom is any form of energy in which heat transferred into an object can be stored. This can be in translational kinetic energy, rotational kinetic energy, or other forms such as potential energy in vibrational modes. Only three translational degrees of freedom (corresponding to the three independent directions in space) are available for any individual atom, whether it is free, as a monatomic molecule, or bound into a polyatomic molecule.
As to rotation about an atom's axis (again, whether the atom is bound or free), its energy of rotation is proportional to the moment of inertia for the atom, which is extremely small compared to moments of inertia of collections of atoms. This is because almost all of the mass of a single atom is concentrated in its nucleus, which has a radius too small to give a significant moment of inertia. In contrast, the spacing of quantum energy levels for a rotating object is inversely proportional to its moment of inertia, and so this spacing becomes very large for objects with very small moments of inertia. For these reasons, the contribution from rotation of atoms on their axes is essentially zero in monatomic gases, because the energy spacing of the associated quantum levels is too large for significant thermal energy to be stored in rotation of systems with such small moments of inertia. For similar reasons, axial rotation around bonds joining atoms in diatomic gases (or along the linear axis in a linear molecule of any length) can also be neglected as a possible "degree of freedom" as well, since such rotation is similar to rotation of monatomic atoms, and so occurs about an axis with a moment of inertia too small to be able to store significant heat energy.
In polyatomic molecules, other rotational modes may become active, due to the much higher moments of inertia about certain axes which do not coincide with the linear axis of a linear molecule. These modes take the place of some translational degrees of freedom for individual atoms, since the atoms are moving in 3-D space as the molecule rotates. The narrowing of quantum mechanically determined energy spacing between rotational states results from situations where atoms are rotating around an axis that does not connect them, and thus form an assembly that has a large moment of inertia. This small difference between energy states allows the kinetic energy of this type of rotational motion to store heat energy at ambient temperatures. Furthermore, internal vibrational degrees of freedom also may become active (although usually at higher temperatures than are needed to store heat in rotational motion); these are also a type of translation, as seen from the view of each atom. In summary, molecules are complex objects with a population of atoms that may move about within the molecule in a number of different ways, and each of these ways of moving is capable of storing energy if the temperature is sufficient.
The heat capacity of molecular substances (on a "per-atom" or atom-molar basis) does not exceed the heat capacity of monatomic gases, unless vibrational modes are brought into play. The reason for this is that vibrational modes allow energy to be stored as potential energy in interatomic bonds in a molecule, which are not available to atoms in monatomic gases. Up to about twice as much energy (on a per-atom basis) per unit of temperature increase can be stored in a solid as in a monatomic gas, by this mechanism of storing energy in the potentials of interatomic bonds. This gives many solids about twice the atom-molar heat capacity at room temperature of monatomic gases.
However, quantum effects heavily affect the actual ratio at lower temperatures (i.e., much lower than the melting temperature of the solid), especially in solids with light and tightly bound atoms (e.g., beryllium metal or diamond). Polyatomic gases store intermediate amounts of energy, giving them a "per-atom" heat capacity that is between that of monatomic gases (3⁄2 R per mole of atoms, where R is the ideal gas constant), and the maximum of fully excited warmer solids (3 R per mole of atoms). For gases, heat capacity never falls below the minimum of 3⁄2 R per mole (of molecules), since the kinetic energy of gas molecules is always available to store at least this much thermal energy. However, at cryogenic temperatures in solids, heat capacity falls toward zero, as temperature approaches absolute zero.
#### Example of temperature-dependent specific heat capacity, in a diatomic gas
To illustrate the role of various degrees of freedom in storing heat, we may consider nitrogen, a diatomic molecule that has five active degrees of freedom at room temperature: the three of translational motion plus two rotational degrees of freedom. Although the constant-volume molar heat capacity of nitrogen at this temperature is five-thirds that of monatomic gases, on a per-mole-of-atoms basis it is five-sixths that of a monatomic gas. The reason for this is the effective loss of a degree of freedom due to the bond, which at this temperature does not allow storage of thermal energy. Two separate nitrogen atoms would have a total of six degrees of freedom, the three translational degrees of freedom of each atom. When the atoms are bonded the molecule will still only have three translational degrees of freedom, as the two atoms in the molecule move as one. However, the molecule cannot be treated as a point object, and the moment of inertia has increased sufficiently about two axes to allow two rotational degrees of freedom to be active at room temperature, giving five degrees of freedom. The moment of inertia about the third axis remains small, as this is the axis passing through the centres of the two atoms, and so is similar to the small moment of inertia for atoms of a monatomic gas. Thus, this degree of freedom does not act to store heat, and does not contribute to the heat capacity of nitrogen. The heat capacity per atom for nitrogen (5/2 R per mole of molecules = 5/4 R per mole of atoms) is therefore less than for a monatomic gas (3/2 R per mole of molecules or atoms), so long as the temperature remains low enough that no vibrational degrees of freedom are activated.[12]
At higher temperatures, however, nitrogen gas gains two more degrees of internal freedom, as the molecule is excited into higher vibrational modes that store thermal energy. Now the bond is contributing heat capacity, and is contributing more than if the atoms were not bonded. With full thermal excitation of bond vibration, the heat capacity per volume, or per mole of gas molecules, approaches seven-thirds that of monatomic gases. Significantly, this is seven-sixths of the monatomic gas value on a mole-of-atoms basis, so this is now a higher heat capacity per atom than the monatomic figure, because the vibrational mode enabled for diatomic gases allows an extra degree of potential-energy freedom per pair of atoms, which monatomic gases cannot possess.[13] See thermodynamic temperature for more information on translational motions, kinetic (heat) energy, and their relationship to temperature.
However, even at these high temperatures where gaseous nitrogen is able to store 7/6 of the energy per atom of a monatomic gas (making it more efficient at storing energy on an atomic basis), it still only stores 7/12 of the maximal per-atom heat capacity of a solid, meaning it is not nearly as efficient at storing thermal energy on an atomic basis as solid substances can be. This is typical of gases, and results because many of the potential bonds which might be storing potential energy in gaseous nitrogen (as opposed to solid nitrogen) are lacking, because only one of the spatial dimensions for each nitrogen atom offers a bond into which potential energy can be stored without increasing the kinetic energy of the atom. In general, solids are most efficient, on an atomic basis, at storing thermal energy (that is, they have the highest per-atom or per-mole-of-atoms heat capacity).
#### Per mole of different units
##### Per mole of molecules
When the specific heat capacity, c, of a material is measured (lowercase c means the unit quantity is in terms of mass), different values arise because different substances have different molar masses (essentially, the weight of the individual atoms or molecules). In solids, thermal energy is stored in the vibrations of the atoms. Heat capacity per mole of molecules, for both gases and solids, offers figures which may be arbitrarily large, since molecules may be arbitrarily large. Such heat capacities are thus not intensive quantities, since the quantity of mass being considered can be increased without limit.
##### Per mole of atoms
Conversely, for molecular-based substances (which also absorb heat into their internal degrees of freedom), massive, complex molecules with high atomic count—like octane—can store a great deal of energy per mole and yet are quite unremarkable on a mass basis, or on a per-atom basis. This is because, in fully excited systems, heat is stored independently by each atom in a substance, not primarily by the bulk motion of molecules.
Thus, it is the heat capacity per-mole-of-atoms, not per-mole-of-molecules, which is the intensive quantity, and which comes closest to being a constant for all substances at high temperatures. This relationship was noticed empirically in 1819, and is called the Dulong-Petit law, after its two discoverers.[14] Historically, the fact that specific heat capacities are approximately equal when corrected by the presumed weight of the atoms of solids, was an important piece of data in favor of the atomic theory of matter.
Because of the connection of heat capacity to the number of atoms, some care should be taken to specify a mole-of-molecules basis vs. a mole-of-atoms basis, when comparing specific heat capacities of molecular solids and gases. Ideal gases have the same numbers of molecules per volume, so increasing molecular complexity adds heat capacity on a per-volume and per-mole-of-molecules basis, but may lower or raise heat capacity on a per-atom basis, depending on whether the temperature is sufficient to store energy as atomic vibration.
In solids, the quantitative limit of heat capacity in general is about 3 R per mole of atoms, where R is the ideal gas constant. This 3 R value is about 24.9 J/mole.K. Six degrees of freedom (three kinetic and three potential) are available to each atom. Each of these six contributes 1⁄2R specific heat capacity per mole of atoms.[15] This limit of 3 R per mole specific heat capacity is approached at room temperature for most solids, with significant departures at this temperature only for solids composed of the lightest atoms which are bound very strongly, such as beryllium (where the value is only of 66% of 3 R), or diamond (where it is only 24% of 3 R). These large departures are due to quantum effects which prevent full distribution of heat into all vibrational modes, when the energy difference between vibrational quantum states is very large compared to the average energy available to each atom from the ambient temperature.
For monatomic gases, the specific heat is only half of 3 R per mole, i.e. (3⁄2R per mole) due to loss of all potential energy degrees of freedom in these gases. For polyatomic gases, the heat capacity will be intermediate between these values on a per-mole-of-atoms basis, and (for heat-stable molecules) would approach the limit of 3 R per mole of atoms, for gases composed of complex molecules, and at higher temperatures at which all vibrational modes accept excitational energy. This is because very large and complex gas molecules may be thought of as relatively large blocks of solid matter which have lost only a relatively small fraction of degrees of freedom, as compared to a fully integrated solid.
For a list of heat capacities per atom-mole of various substances, in terms of R, see the last column of the table of heat capacities below.
#### Corollaries of these considerations for solids (volume-specific heat capacity)
Since the bulk density of a solid chemical element is strongly related to its molar mass, there exists a noticeable inverse correlation between a solid's density and its specific heat capacity on a per-mass basis. The very approximate tendency of atoms of most elements to be about the same size, combined with the constancy of the mole-specific heat capacity (usually about 3 R per mole of atoms, as noted above), results in a good correlation between the volume of any given solid chemical element and its total heat capacity. Another way of stating this is that the volume-specific heat capacity (volumetric heat capacity) of solid elements is roughly a constant. The molar volume of solid elements is very roughly constant, and (even more reliably) so also is the molar heat capacity for most solid substances. These two factors determine the volumetric heat capacity, which as a bulk property may be striking in its consistency. For example, the element uranium is a metal which has a density almost 36 times that of the metal lithium, but uranium's specific heat capacity on a volumetric basis (i.e. per given volume of metal) is only 18% larger than lithium's.
Since the volume-specific corollary of the Dulong–Petit specific heat capacity relationship requires that atoms of all elements take up (on average) the same volume in solids, there are many departures from it, with most of these due to variations in atomic size. For instance, arsenic, which is only 14.5% less dense than antimony, has nearly 59% more specific heat capacity on a mass basis. In other words, even though an ingot of arsenic is only about 17% larger than an antimony one of the same mass, it absorbs about 59% more heat for a given temperature rise. The heat capacity ratios of the two substances closely follow the ratios of their molar volumes (the ratios of numbers of atoms in the same volume of each substance); the departure from the correlation to simple volumes in this case is due to lighter arsenic atoms being significantly more closely packed than antimony atoms, rather than being of similar size. In other words, similar-sized atoms would cause a mole of arsenic to be 63% larger than a mole of antimony, with a correspondingly lower density, allowing its volume to more closely mirror its heat capacity behavior.
#### Other factors
##### Hydrogen bonds
Hydrogen-containing polar molecules like ethanol, ammonia, and water have powerful, intermolecular hydrogen bonds when in their liquid phase. These bonds provide another place where heat may be stored as potential energy of vibration, even at comparatively low temperatures. Hydrogen bonds account for the fact that liquid water stores nearly the theoretical limit of 3 R per mole of atoms, even at relatively low temperatures (i.e. near the freezing point of water).
##### Impurities
In the case of alloys, there are several conditions in which small impurity concentrations can greatly affect the specific heat. An alloy may exhibit a marked difference in behaviour even when small amounts of one of its constituent elements act as an impurity; for example, impurities in semiconducting ferromagnetic alloys may lead to quite different specific heat properties.[16]
### The simple case of the monatomic gas
In the case of a monatomic gas such as helium under constant volume, if it is assumed that no electronic or nuclear quantum excitations occur, each atom in the gas has only 3 degrees of freedom, all of a translational type. No energy dependence is associated with the degrees of freedom that define the positions of the atoms, while the degrees of freedom corresponding to the momenta of the atoms are quadratic, and thus contribute to the heat capacity. There are N atoms, each of which has 3 components of momentum, which leads to 3N total degrees of freedom. This gives:
$C_V=\left(\frac{\partial U}{\partial T}\right)_V=\frac{3}{2}N\,k_B =\frac{3}{2}n\,R$
$C_{V,m}=\frac{C_V}{n}=\frac{3}{2}R$
where
$C_V$ is the heat capacity at constant volume of the gas
$C_{V,m}$ is the molar heat capacity at constant volume of the gas
N is the total number of atoms present in the container
n is the number of moles of atoms present in the container (n is the ratio of N and Avogadro’s number)
R is the ideal gas constant, 8.3144621(75) J/(mol·K). R is equal to the product of Boltzmann’s constant $k_B$ and Avogadro’s number
The following table shows experimental molar constant volume heat capacity measurements taken for each noble monatomic gas (at 1 atm and 25 °C):
| Monatomic gas | CV,m (J/(mol·K)) | CV,m/R |
|---------------|------------------|--------|
| He | 12.5 | 1.50 |
| Ne | 12.5 | 1.50 |
| Ar | 12.5 | 1.50 |
| Kr | 12.5 | 1.50 |
| Xe | 12.5 | 1.50 |
It is apparent from the table that the experimental heat capacities of the monatomic noble gases agree with this simple application of statistical mechanics to a very high degree.
The molar heat capacity of a monatomic gas at constant pressure is then
$C_{p,m}=C_{V,m} + R=\frac{5}{2}R$
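These two monatomic-gas results are easy to evaluate and compare against the table above:

```python
# Monatomic ideal gas: C_V,m = (3/2) R and C_p,m = C_V,m + R = (5/2) R.

R = 8.314462618            # gas constant, J/(mol·K)

C_Vm = 1.5 * R             # cf. the ~12.5 J/(mol·K) measured for He..Xe
C_pm = C_Vm + R
print(round(C_Vm, 2), round(C_pm, 2))   # 12.47 20.79
```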
### Diatomic gas
Constant volume specific heat capacity of a diatomic gas (idealised). As temperature increases, heat capacity goes from 3/2 R (translation contribution only), to 5/2 R (translation plus rotation), finally to a maximum of 7/2 R (translation + rotation + vibration)
In the somewhat more complex case of an ideal gas of diatomic molecules, internal degrees of freedom are present. In addition to the three translational degrees of freedom, there are rotational and vibrational degrees of freedom. In general, the number of degrees of freedom, f, in a molecule with na atoms is 3na:
$f=3n_a \,$
Mathematically, there are a total of three rotational degrees of freedom, one corresponding to rotation about each of the axes of three dimensional space. However, in practice only the existence of two degrees of rotational freedom for linear molecules will be considered. This approximation is valid because the moment of inertia about the internuclear axis is vanishingly small with respect to other moments of inertia in the molecule (this is due to the very small rotational moments of single atoms, due to the concentration of almost all their mass at their centers; compare also the extremely small radii of the atomic nuclei compared to the distance between them in a diatomic molecule). Quantum mechanically, it can be shown that the interval between successive rotational energy eigenstates is inversely proportional to the moment of inertia about that axis. Because the moment of inertia about the internuclear axis is vanishingly small relative to the other two rotational axes, the energy spacing can be considered so high that no excitations of the rotational state can occur unless the temperature is extremely high. It is easy to calculate the expected number of vibrational degrees of freedom (or vibrational modes). There are three degrees of translational freedom, and two degrees of rotational freedom, therefore
$f_\mathrm{vib}=f-f_\mathrm{trans}-f_\mathrm{rot}=6-3-2=1 \,$
Each rotational and translational degree of freedom will contribute R/2 in the total molar heat capacity of the gas. Each vibrational mode will contribute $R$ to the total molar heat capacity, however. This is because for each vibrational mode, there is a potential and kinetic energy component. Both the potential and kinetic components will contribute R/2 to the total molar heat capacity of the gas. Therefore, a diatomic molecule would be expected to have a molar constant-volume heat capacity of
$C_{V,m}=\frac{3R}{2}+R+R=\frac{7R}{2}=3.5 R$
where the terms originate from the translational, rotational, and vibrational degrees of freedom, respectively.
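The transition between the 5/2 R and 7/2 R plateaus can be sketched with the standard quantum-harmonic-oscillator form of the vibrational contribution, $C_\mathrm{vib} = R\,x^2 e^x/(e^x-1)^2$ with $x = \theta_\mathrm{vib}/T$. The characteristic vibrational temperature used for N2 (about 3374 K) is a standard reference value assumed for this sketch:

```python
import math

# Vibrational contribution to C_V,m for a diatomic gas, modelled as a
# quantum harmonic oscillator:
#   C_vib = R * x^2 * e^x / (e^x - 1)^2,  x = theta_vib / T.
# theta_vib ~ 3374 K for N2 is a standard reference value (assumption).

R = 8.314462618
theta_vib = 3374.0              # characteristic vibrational temperature, K

def C_Vm(T):
    x = theta_vib / T
    c_vib = R * x**2 * math.exp(x) / (math.exp(x) - 1.0)**2
    return 2.5 * R + c_vib      # translation + rotation + vibration

for T in (300.0, 1000.0, 6000.0):
    print(T, C_Vm(T) / R)       # rises from ~2.5 toward 3.5
```

At room temperature the vibrational term is negligible (x is large, the mode is "frozen out"), while at temperatures well above $\theta_\mathrm{vib}$ the heat capacity approaches the classical 7/2 R.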
Constant volume specific heat capacity of diatomic gases (real gases) between about 200 K and 2000 K. This temperature range is not large enough to include both quantum transitions in all gases. Instead, at 200 K, all but hydrogen are fully rotationally excited, so all have at least 5/2 R heat capacity. (Hydrogen is already below 5/2, but it will require cryogenic conditions for even H2 to fall to 3/2 R). Further, only the heavier gases fully reach 7/2 R at the highest temperature, due to the relatively small vibrational energy spacing of these molecules. HCl and H2 begin to make the transition above 500 K, but have not achieved it by 1000 K, since their vibrational energy-level spacing is too wide to fully participate in heat capacity, even at this temperature.
The following is a table of some molar constant-volume heat capacities of various diatomic gases at standard temperature (25 °C = 298 K):

| Diatomic gas | CV,m (J/(mol·K)) | CV,m/R |
|--------------|------------------|--------|
| H2 | 20.18 | 2.427 |
| CO | 20.2 | 2.43 |
| N2 | 19.9 | 2.39 |
| Cl2 | 24.1 | 3.06 |
| Br2 (vapour) | 28.2 | 3.39 |
From the above table, clearly there is a problem with the above theory. All of the diatomics examined have heat capacities that are lower than those predicted by the equipartition theorem, though Br2 comes close. However, as the atoms composing the molecules become heavier, the heat capacities move closer to their expected values. One of the reasons for this phenomenon is the quantization of vibrational, and to a lesser extent, rotational states. In fact, if it is assumed that the molecules remain in their lowest energy vibrational state because the inter-level energy spacings for vibration energies are large, the predicted molar constant-volume heat capacity for a diatomic molecule becomes just that from the contributions of translation and rotation:
$C_{V,m}=\frac{3R}{2}+R=\frac{5R}{2}=2.5R$
which is a fairly close approximation of the heat capacities of the lighter molecules in the above table. If the quantum harmonic oscillator approximation is made, it turns out that the quantum vibrational energy level spacings are actually inversely proportional to the square root of the reduced mass of the atoms composing the diatomic molecule. Therefore, in the case of the heavier diatomic molecules such as chlorine or bromine, the quantum vibrational energy level spacings become finer, which allows more excitations into higher vibrational levels at lower temperatures. This limit for storing heat capacity in vibrational modes, as discussed above, becomes 7R/2 = 3.5 R per mole of gas molecules, which is fairly consistent with the measured value for Br2 at room temperature. As temperatures rise, all diatomic gases approach this value.
### General gas phase
The specific heat of the gas is best conceptualized in terms of the degrees of freedom of an individual molecule. The different degrees of freedom correspond to the different ways in which the molecule may store energy. The molecule may store energy in its translational motion according to the formula:
$E=\frac{1}{2}\,m\left(v_x^2+v_y^2+v_z^2\right)$
where m is the mass of the molecule and $[v_x,v_y,v_z]$ is velocity of the center of mass of the molecule. Each direction of motion constitutes a degree of freedom, so that there are three translational degrees of freedom.
In addition, a molecule may have rotational motion. The kinetic energy of rotational motion is generally expressed as
$E=\frac{1}{2}\,\left(I_1\omega_1^2+I_2\omega_2^2+I_3\omega_3^2\right)$
where I is the moment of inertia tensor of the molecule, and $[\omega_1,\omega_2,\omega_3]$ is the angular velocity pseudo-vector (in a coordinate system aligned with the principal axes of the molecule). In general, then, there will be three additional degrees of freedom corresponding to the rotational motion of the molecule (for linear molecules, one of the inertia tensor terms vanishes and there are only two rotational degrees of freedom). The degrees of freedom corresponding to translations and rotations are called the rigid degrees of freedom, since they do not involve any deformation of the molecule.
The motions of the atoms in a molecule which are not part of its gross translational motion or rotation may be classified as vibrational motions. It can be shown that if there are n atoms in the molecule, there will be as many as $v = 3n-3-n_r$ vibrational degrees of freedom, where $n_r$ is the number of rotational degrees of freedom. A vibrational degree of freedom corresponds to a specific way in which all the atoms of a molecule can vibrate. The actual number of possible vibrations may be less than this maximal one, due to various symmetries.
For example, triatomic nitrous oxide N2O will have only 2 degrees of rotational freedom (since it is a linear molecule) and contains n=3 atoms: thus the number of possible vibrational degrees of freedom will be v = (3*3)-3-2 = 4. There are four ways or "modes" in which the three atoms can vibrate, corresponding to 1) A mode in which an atom at each end of the molecule moves away from, or towards, the center atom at the same time, 2) a mode in which either end atom moves asynchronously with regard to the other two, and 3) and 4) two modes in which the molecule bends out of line, from the center, in the two possible planar directions that are orthogonal to its axis. Each vibrational degree of freedom confers TWO total degrees of freedom, since vibrational energy mode partitions into 1 kinetic and 1 potential mode. This would give nitrous oxide 3 translational, 2 rotational, and 4 vibrational modes (but these last giving 8 vibrational degrees of freedom), for storing energy. This is a total of f = 3+2+8 = 13 total energy-storing degrees of freedom, for N2O.
For a bent molecule like water H2O, a similar calculation gives 9-3-3 = 3 modes of vibration, and 3 (translational) + 3 (rotational) + 6 (vibrational) = 12 degrees of freedom.
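The mode-counting rule above is easy to mechanize. The sketch below (plain Python; the function name is our own) counts 3 translational plus 2 or 3 rotational degrees of freedom, then adds two degrees of freedom per vibrational mode:

```python
def energy_degrees_of_freedom(n_atoms, linear):
    """Total energy-storing degrees of freedom of an n-atom molecule:
    3 translational + n_r rotational + 2 per vibrational mode
    (each vibration has a kinetic and a potential part)."""
    n_rot = 2 if linear else 3
    n_vib = 3 * n_atoms - 3 - n_rot  # maximal number of vibrational modes
    return 3 + n_rot + 2 * n_vib

print(energy_degrees_of_freedom(3, linear=True))   # N2O: 13
print(energy_degrees_of_freedom(3, linear=False))  # H2O: 12
```

The same function reproduces the diatomic result: for a linear 2-atom molecule it gives 3 + 2 + 2 = 7 degrees of freedom, consistent with the 7/2 R high-temperature heat capacity discussed earlier.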
### The storage of energy into degrees of freedom
If the molecule could be entirely described using classical mechanics, then the theorem of equipartition of energy could be used to predict that each degree of freedom would have an average energy of (1/2)kT, where k is Boltzmann's constant and T is the temperature. Our calculation of the constant-volume heat capacity would then be straightforward. Each molecule would hold, on average, an energy of (f/2)kT, where f is the total number of degrees of freedom in the molecule. Note that Nk = R if N is Avogadro's number, which is the case in considering the heat capacity of a mole of molecules. Thus, the total internal energy of the gas would be (f/2)NkT, where N is the total number of molecules. The heat capacity (at constant volume) would then be a constant (f/2)Nk, the mole-specific heat capacity would be (f/2)R, the molecule-specific heat capacity would be (f/2)k, and the dimensionless heat capacity would be just f/2. Here again, each vibrational mode contributes two degrees of freedom. Thus, a mole of nitrous oxide would have a total constant-volume heat capacity (including vibration) of (13/2)R by this calculation.
In summary, the molar heat capacity (mole-specific heat capacity) of an ideal gas with f degrees of freedom is given by
$C_{V,m}=\frac{f}{2} R$
This equation applies to all polyatomic gases, if the degrees of freedom are known.[17]
The constant-pressure heat capacity for any gas would exceed this by an additional R (see Mayer's relation, above). For example, Cp would total (15/2)R per mole for nitrous oxide.
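As a quick numerical check (a sketch, not part of the original text), the formula $C_{V,m}=(f/2)R$ together with Mayer's relation $C_{p,m}=C_{V,m}+R$ can be evaluated directly:

```python
R = 8.314  # gas constant, J/(mol*K)

def molar_heat_capacities(f):
    """Ideal-gas molar heat capacities for f energy-storing degrees of
    freedom: constant volume (f/2)R, constant pressure (f/2)R + R."""
    cv = f / 2 * R
    return cv, cv + R

cv, cp = molar_heat_capacities(13)  # N2O with all modes active: (13/2)R and (15/2)R
cv_n2, _ = molar_heat_capacities(5)  # diatomic with frozen vibration, ~20.8 J/(mol*K)
```

The f = 5 value matches the measured room-temperature Cv,m of nitrogen (20.8 J/(mol·K)) quoted in the tables, since its vibrational mode is frozen out at that temperature.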
### The effect of quantum energy levels in storing energy in degrees of freedom
The various degrees of freedom cannot generally be considered to obey classical mechanics, however. Classically, the energy residing in each degree of freedom is assumed to be continuous—it can take on any positive value, depending on the temperature. In reality, the amount of energy that may reside in a particular degree of freedom is quantized: It may only be increased and decreased in finite amounts. A good estimate of the size of this minimum amount is the energy of the first excited state of that degree of freedom above its ground state. For example, the first vibrational state of the hydrogen chloride (HCl) molecule has an energy of about 5.74 × 10−20 joule. If this amount of energy were deposited in a classical degree of freedom, it would correspond to a temperature of about 4156 K.
If the temperature of the substance is so low that the equipartition energy of (1/2)kT is much smaller than this excitation energy, then there will be little or no energy in this degree of freedom. This degree of freedom is then said to be “frozen out". As mentioned above, the temperature corresponding to the first excited vibrational state of HCl is about 4156 K. For temperatures well below this value, the vibrational degrees of freedom of the HCl molecule will be frozen out. They will contain little energy and will not contribute to the thermal energy or the heat capacity of HCl gas.
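The characteristic "freeze-out" temperature quoted for HCl can be reproduced from the excitation energy given above (a sketch; the conversion is simply T = E/k):

```python
k_B = 1.380649e-23  # Boltzmann constant, J/K

def characteristic_temperature(excitation_energy):
    """Temperature at which thermal energy kT matches the first excitation
    energy of a mode; well below this the mode is 'frozen out'."""
    return excitation_energy / k_B

T_vib_HCl = characteristic_temperature(5.74e-20)  # ~4.16e3 K for HCl vibration
```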
### Energy storage mode "freeze-out" temperatures
It can be seen that for each degree of freedom there is a critical temperature at which the degree of freedom “unfreezes” and begins to accept energy in a classical way. In the case of translational degrees of freedom, this temperature is that temperature at which the thermal wavelength of the molecules is roughly equal to the size of the container. For a container of macroscopic size (e.g. 10 cm) this temperature is extremely small and has no significance, since the gas will certainly liquefy or freeze before this low temperature is reached. For any real gas, the translational degrees of freedom may be considered to always be classical and contain an average energy of (3/2)kT per molecule.
The rotational degrees of freedom are the next to “unfreeze". In a diatomic gas, for example, the critical temperature for this transition is usually a few tens of kelvins, although with a very light molecule such as hydrogen the rotational energy levels will be spaced so widely that rotational heat capacity may not completely "unfreeze" until considerably higher temperatures are reached. Finally, the vibrational degrees of freedom are generally the last to unfreeze. As an example, for diatomic gases, the critical temperature for the vibrational motion is usually a few thousand kelvins, and thus for the nitrogen in our example at room temperature, no vibration modes would be excited, and the constant-volume heat capacity at room temperature is (5/2)R/mole, not (7/2)R/mole. As seen above, with some unusually heavy gases such as iodine gas I2, or bromine gas Br2, some vibrational heat capacity may be observed even at room temperatures.
It should be noted that it has been assumed that atoms have no rotational or internal degrees of freedom. This is in fact untrue. For example, atomic electrons can exist in excited states and even the atomic nucleus can have excited states as well. Each of these internal degrees of freedom is assumed to be frozen out due to its relatively high excitation energy. Nevertheless, for sufficiently high temperatures, these degrees of freedom cannot be ignored. In a few exceptional cases, such molecular electronic transitions are of sufficiently low energy that they contribute to heat capacity at room temperature, or even at cryogenic temperatures. One example of an electronic transition degree of freedom which contributes heat capacity at standard temperature is that of nitric oxide (NO), in which the single electron in an anti-bonding molecular orbital has energy transitions which contribute to the heat capacity of the gas even at room temperature.
An example of a nuclear magnetic transition degree of freedom which is of importance to heat capacity, is the transition which converts the spin isomers of hydrogen gas (H2) into each other. At room temperature, the proton spins of hydrogen gas are aligned 75% of the time, resulting in orthohydrogen when they are. Thus, some thermal energy has been stored in the degree of freedom available when parahydrogen (in which spins are anti-aligned) absorbs energy, and is converted to the higher energy ortho form. However, at the temperature of liquid hydrogen, not enough heat energy is available to produce orthohydrogen (that is, the transition energy between forms is large enough to "freeze out" at this low temperature), and thus the parahydrogen form predominates. The heat capacity of the transition is sufficient to release enough heat, as orthohydrogen converts to the lower-energy parahydrogen, to boil the hydrogen liquid to gas again, if this evolved heat is not removed with a catalyst after the gas has been cooled and condensed. This example also illustrates the fact that some modes of storage of heat may not be in constant equilibrium with each other in substances, and heat absorbed or released from such phase changes may "catch up" with temperature changes of substances, only after a certain time. In other words, the heat evolved and absorbed from the ortho-para isomeric transition contributes to the heat capacity of hydrogen on long time-scales, but not on short time-scales. These time scales may also depend on the presence of a catalyst.
Less exotic phase-changes may contribute to the heat-capacity of substances and systems, as well, as (for example) when water is converted back and forth from solid to liquid or gas form. Phase changes store heat energy entirely in breaking the bonds of the potential energy interactions between molecules of a substance. As in the case of hydrogen, it is also possible for phase changes to be hindered as the temperature drops, so that they do not catch up and become apparent, without a catalyst. For example, it is possible to supercool liquid water to below the freezing point, and not observe the heat evolved when the water changes to ice, so long as the water remains liquid. This heat appears instantly when the water freezes.
### Solid phase
Main articles: Einstein solid, Debye model, and Kinetic theory of solids
The dimensionless heat capacity divided by three, as a function of temperature as predicted by the Debye model and by Einstein’s earlier model. The horizontal axis is the temperature divided by the Debye temperature. Note that, as expected, the dimensionless heat capacity is zero at absolute zero, and rises to a value of three as the temperature becomes much larger than the Debye temperature. The red line corresponds to the classical limit of the Dulong-Petit law
For matter in a crystalline solid phase, the Dulong-Petit law, which was discovered empirically, states that the mole-specific heat capacity assumes the value 3 R. Indeed, for solid metallic chemical elements at room temperature, molar heat capacities range from about 2.8 R to 3.4 R. Large exceptions at the lower end involve solids composed of relatively low-mass, tightly bonded atoms, such as beryllium at 2.0 R, and diamond at only 0.735 R. The latter conditions create larger quantum vibrational energy-spacing, so that many vibrational modes have energies too high to be populated (and thus are "frozen out") at room temperature. At the higher end of possible heat capacities, heat capacity may exceed 3 R by modest amounts, due to contributions from anharmonic vibrations in solids, and sometimes a modest contribution from conduction electrons in metals. These are not degrees of freedom treated in the Einstein or Debye theories.
The theoretical maximum heat capacity for multi-atomic gases at higher temperatures, as the molecules become larger, also approaches the Dulong-Petit limit of 3 R, so long as this is calculated per mole of atoms, not molecules. The reason for this behavior is that, in theory, gases with very large molecules have almost the same high-temperature heat capacity as solids, lacking only the (small) heat capacity contribution that comes from potential energy that cannot be stored between separate molecules in a gas.
The Dulong-Petit limit results from the equipartition theorem, and as such is only valid in the classical limit of a microstate continuum, which is a high temperature limit. For light and non-metallic elements, as well as most of the common molecular solids based on carbon compounds at standard ambient temperature, quantum effects may also play an important role, as they do in multi-atomic gases. These effects usually combine to give heat capacities lower than 3 R per mole of atoms in the solid, although heat capacities calculated per mole of molecules in molecular solids may be more than 3 R. For example, the heat capacity of water ice at the melting point is about 4.6 R per mole of molecules, but only 1.5 R per mole of atoms. As noted, heat capacity values far lower than 3 R "per atom" (as is the case with diamond and beryllium) result from “freezing out” of possible vibration modes for light atoms at suitably low temperatures, just as happens in many low-mass-atom gases at room temperatures (where vibrational modes are all frozen out). Because of high crystal binding energies, the effects of vibrational mode freezing are observed in solids more often than liquids: for example the heat capacity of liquid water is twice that of ice at near the same temperature, and is again close to the 3 R per mole of atoms of the Dulong-Petit theoretical maximum.
For a more modern and precise analysis of the heat capacities of solids, especially at low temperatures, it is useful to use the idea of phonons. See Debye model. Phonons can also be applied to the heat capacity of liquids.[18]
The specific heat of amorphous materials has characteristic discontinuities at the glass transition temperature due to rearrangements that occur in the distribution of atoms.[19] These discontinuities are frequently used to detect the glass transition temperature where a supercooled liquid transforms to a glass.[20]
## Table of specific heat capacities
See also: List of thermal conductivities
Note that the especially high molar values, as for paraffin, gasoline, water and ammonia, result from calculating specific heats in terms of moles of molecules. If specific heat is expressed per mole of atoms for these substances, none of the constant-volume values exceed, to any large extent, the theoretical Dulong-Petit limit of 25 J/(mol·K) = 3 R per mole of atoms (see the last column of this table). Paraffin, for example, has very large molecules and thus a high heat capacity per mole, but as a substance it does not have remarkable heat capacity in terms of volume, mass, or atom-mol (which is just 1.41 R per mole of atoms, or less than half of most solids, in terms of heat capacity per atom).
In the last column, major departures of solids at standard temperatures from the Dulong-Petit law value of 3R, are usually due to low atomic weight plus high bond strength (as in diamond) causing some vibration modes to have too much energy to be available to store thermal energy at the measured temperature. For gases, departure from 3R per mole of atoms in this table is generally due to two factors: (1) failure of the higher quantum-energy-spaced vibration modes in gas molecules to be excited at room temperature, and (2) loss of potential energy degree of freedom for small gas molecules, simply because most of their atoms are not bonded maximally in space to other atoms, as happens in many solids.
Table of specific heat capacities at 25 °C (298 K) unless otherwise noted.
Columns: Substance | Phase | (mass) specific heat capacity cp or cm (J·g−1·K−1) | constant-pressure molar heat capacity Cp,m (J·mol−1·K−1) | constant-volume molar heat capacity Cv,m (J·mol−1·K−1) | volumetric heat capacity Cv (J·cm−3·K−1) | constant-volume atom-molar heat capacity Cv,m(atom), in units of R per atom-mol
Air (Sea level, dry, 0 °C (273.15 K)) gas 1.0035 29.07 20.7643 0.001297 ~ 1.25 R
Air (typical room conditionsA) gas 1.012 29.19 20.85 0.00121 ~ 1.25 R
Aluminium solid 0.897 24.2 2.422 2.91 R
Ammonia liquid 4.700 80.08 3.263 3.21 R
Animal tissue (incl. human)[21] mixed 3.5 3.7*
Antimony solid 0.207 25.2 1.386 3.03 R
Argon gas 0.5203 20.7862 12.4717 1.50 R
Arsenic solid 0.328 24.6 1.878 2.96 R
Beryllium solid 1.82 16.4 3.367 1.97 R
Bismuth[22] solid 0.123 25.7 1.20 3.09 R
Cadmium solid 0.231 26.02 3.13 R
Carbon dioxide CO2[17] gas 0.839* 36.94 28.46 1.14 R
Chromium solid 0.449 23.35 2.81 R
Copper solid 0.385 24.47 3.45 2.94 R
Diamond solid 0.5091 6.115 1.782 0.74 R
Ethanol liquid 2.44 112 1.925 1.50 R
Gasoline (octane) liquid 2.22 228 1.64 1.05 R
Glass[22] solid 0.84
Gold solid 0.129 25.42 2.492 3.05 R
Granite[22] solid 0.790 2.17
Graphite solid 0.710 8.53 1.534 1.03 R
Helium gas 5.1932 20.7862 12.4717 1.50 R
Hydrogen gas 14.30 28.82 1.23 R
Hydrogen sulfide H2S[17] gas 1.015* 34.60 1.05 R
Iron solid 0.450 25.1 3.537 3.02 R
Lead solid 0.129 26.4 1.44 3.18 R
Lithium solid 3.58 24.8 1.912 2.98 R
Lithium at 181 °C[23] liquid 4.379 30.33 2.242 3.65 R
Magnesium solid 1.02 24.9 1.773 2.99 R
Mercury liquid 0.1395 27.98 1.888 3.36 R
Methane at 2 °C gas 2.191 35.69 0.66 R
Methanol (298 K)[24] liquid 2.14 68.62 1.38 R
Nitrogen gas 1.040 29.12 20.8 1.25 R
Neon gas 1.0301 20.7862 12.4717 1.50 R
Oxygen gas 0.918 29.38 21.0 1.26 R
Paraffin wax C25H52 solid 2.5 (ave) 900 2.325 1.41 R
Polyethylene (rotomolding grade)[25][26] solid 2.3027
Silica (fused) solid 0.703 42.2 1.547 1.69 R
Silver[22] solid 0.233 24.9 2.44 2.99 R
Sodium solid 1.230 28.23 3.39 R
Steel solid 0.466
Tin solid 0.227 27.112 3.26 R
Titanium solid 0.523 26.060 3.13 R
Tungsten[22] solid 0.134 24.8 2.58 2.98 R
Uranium solid 0.116 27.7 2.216 3.33 R
Water at 100 °C (steam) gas 2.080 37.47 28.03 1.12 R
Water at 25 °C liquid 4.1813 75.327 74.53 4.1796 3.02 R
Water at 100 °C liquid 4.1813 75.327 74.53 4.2160 3.02 R
Water at −10 °C (ice)[22] solid 2.11 38.09 1.938 1.53 R
Zinc[22] solid 0.387 25.2 2.76 3.03 R
A Assuming an altitude of 194 metres above mean sea level (the world–wide median altitude of human habitation), an indoor temperature of 23 °C, a dewpoint of 9 °C (40.85% relative humidity), and 760 mm–Hg sea level–corrected barometric pressure (molar water vapor content = 1.16%).
*Derived data by calculation. This is for water-rich tissues such as brain. The whole-body average figure for mammals is approximately 2.9 J/(cm3·K) [27]
## Specific heat capacity of building materials
See also: Thermal mass
(Usually of interest to builders and solar designers)
Columns: Substance | Phase | cp (J/(g·K))
Asphalt solid 0.920
Brick solid 0.840
Concrete solid 0.880
Glass, silica solid 0.840
Glass, crown solid 0.670
Glass, flint solid 0.503
Glass, pyrex solid 0.753
Granite solid 0.790
Gypsum solid 1.090
Marble, mica solid 0.880
Sand solid 0.835
Soil solid 0.800
Sulphur Hexafluoride gas 0.664
Wood solid 1.7 (1.2 to 2.3)
## Notes
1. IUPAC, 2nd ed. (the "Gold Book") (1997). Online corrected version (2006–): "Standard Pressure". Besides being a round number, this had a very practical effect: relatively few people live and work at precisely sea level; 100 kPa equates to the mean pressure at an altitude of about 112 metres (which is closer to the 194–metre, world–wide median altitude of human habitation).
## References
1. Laidler, Keith J. (1993). The World of Physical Chemistry. Oxford University Press. ISBN 0-19-855919-4.
2.
3. Fraundorf, P. (2003). "Heat capacity in bits". American Journal of Physics 71 (11): 1142. arXiv:cond-mat/9711074. Bibcode:2003AmJPh..71.1142F. doi:10.1119/1.1593658.
4. D. Lynden-Bell & R. M. Lynden-Bell (Nov. 1977). "On the negative specific heat paradox". Monthly Notices of the Royal Astronomical Society 181: 405–419. Bibcode:1977MNRAS.181..405L.
5. Lynden-Bell, D. (Dec. 1998). "Negative Specific Heat in Astronomy, Physics and Chemistry". Physica A 263: 293–304. arXiv:cond-mat/9812172v1. Bibcode:1999PhyA..263..293L. doi:10.1016/S0378-4371(98)00518-4.
6. Schmidt, Martin; Kusche, Robert; Hippler, Thomas; Donges, Jörn; Kronmüller, Werner; Von Issendorff, Bernd; Haberland, Hellmut (2001). "Negative Heat Capacity for a Cluster of 147 Sodium Atoms". Physical Review Letters 86 (7): 1191–4. Bibcode:2001PhRvL..86.1191S. doi:10.1103/PhysRevLett.86.1191. PMID 11178041.
7. See e.g., Wallace, David (2010). "Gravity, entropy, and cosmology: in search of clarity" (preprint). British Journal for the Philosophy of Science 61 (3): 513. arXiv:0907.0659. Bibcode:2010BJPS...61..513W. doi:10.1093/bjps/axp048. Section 4 and onwards.
8. Reif, F. (1965). Fundamentals of statistical and thermal physics. McGraw-Hill. pp. 253–254. ISBN 07-051800-9.
9. Charles Kittel; Herbert Kroemer (2000). Thermal physics. Freeman. p. 78. ISBN 0-7167-1088-9.
10. Smith, C. G. (2008). Quantum Physics and the Physics of large systems, Part 1A Physics. University of Cambridge.
11. The comparison must be made under constant-volume conditions—CvH—so that no work is performed. Nitrogen’s CvH (100 kPa, 20 °C) = 20.8 J mol–1 K–1 vs. the monatomic gases, which equal 12.4717 J mol–1 K–1.
12. Petit A.-T., Dulong P.-L. (1819). Translation "Recherches sur quelques points importants de la Théorie de la Chaleur". Annales de Chimie et de Physique 10: 395–413.
13.
14. Hogan, C. (1969). "Density of States of an Insulating Ferromagnetic Alloy". Physical Review 188 (2): 870. Bibcode:1969PhRv..188..870H. doi:10.1103/PhysRev.188.870.
15. ^ a b c Young; Geller (2008). Young and Geller College Physics (8th ed.). Pearson Education. ISBN 0-8053-9218-1.
16. Bolmatov, D.; Brazhkin, V. V.; Trachenko, K. (2012). "The phonon theory of liquid thermodynamics". Scientific Reports 2. arXiv:1202.0459. Bibcode:2012NatSR...2E.421B. doi:10.1038/srep00421.
17. Ojovan, M. I. (2008). "Configurons: thermodynamic parameters and symmetry changes at glass transition" (PDF). Entropy 10 (3): 334–364. Bibcode:2008Entrp..10..334O. doi:10.3390/e10030334.
18. Ojovan, Michael I. (2008). "Viscosity and Glass Transition in Amorphous Oxides". Advances in Condensed Matter Physics 2008: 1. Bibcode:2008AdCMP2008....1O. doi:10.1155/2008/817829.
19. Page 183 in: Cornelius, Flemming (2008). Medical biophysics (6th ed.). ISBN 1-4020-7110-8. (also giving a density of 1.06 kg/L)
20.
21. "HCV (Molar Heat Capacity (cV)) Data for Methanol". Dortmund Data Bank Software and Separation Technology.
22. Crawford, R. J. Rotational molding of plastics. ISBN 1-59124-192-8.
23. Gaur, Umesh; Wunderlich, Bernhard (1981). "Heat capacity and other thermodynamic properties of linear macromolecules. II. Polyethylene". Journal of Physical and Chemical Reference Data 10: 119. Bibcode:1981JPCRD..10..119G. doi:10.1063/1.555636.
24. Faber, P.; Garby, L. (1995). "Fat content affects heat capacity: a study in mice". Acta Physiologica Scandinavica 153 (2): 185–7. doi:10.1111/j.1748-1716.1995.tb09850.x. PMID 7778459.
http://crypto.stackexchange.com/questions/2377/deterministic-nonces-in-ctr-mode/2378
# Deterministic nonces in CTR mode
I want to encrypt a file with AES in CTR mode. I have a 256 bit master key and the file. Given these, the encryption must be deterministic, so I can't use a random nonce in the usual way. Fortunately the master key will be unique¹.
My original plan was to simply set the nonce to 0. Assuming no collision happens when deriving the 128 bit AES key from the master key, this should be as secure as conventional CTR, where the nonce is prepended to the message.
An alternative plan is to also derive the nonce from the master key. This seems to offer two advantages:
1. It makes key–nonce pair reuse less likely, since the pair now has 256 bits rather than just 128.
2. It prevents some kinds of known-plaintext attacks, since the attacker now doesn't know the content of the counter-stream, effectively turning the nonce into a kind of secondary key.
Is there a problem with either scheme? Is the second scheme better than the first?
¹ assuming the 256 bit hashfunction I use is collision free
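To make the second scheme concrete, here is a minimal sketch of the derivation step (assuming SHA-256 as the hash; the `b"key"`/`b"nonce"` domain-separation labels are illustrative, not from the question):

```python
import hashlib

def derive_key_and_nonce(master_key):
    """Deterministically derive a 128-bit AES key and a 128-bit CTR nonce
    from a 256-bit master key, using domain-separated SHA-256."""
    key = hashlib.sha256(b"key" + master_key).digest()[:16]
    nonce = hashlib.sha256(b"nonce" + master_key).digest()[:16]
    return key, nonce
```

Because both values are pure functions of the master key, the encryption stays deterministic, and an attacker who lacks the master key no longer knows the initial counter value.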
## 1 Answer
Assuming that you can indeed guarantee that the keys will never be reused, both schemes should be secure.
The only requirement for the nonce in CTR mode is that it must be unique (and, if used directly as the initial counter value, not equal to any intermediate counter value used in the past or in the future). If you're only encrypting one message with a given key, the nonce $0$ is as unique as any other.
As you correctly note, your second scheme provides somewhat less information to an attacker who can guess some of the plaintext. (Reading between the lines in your question, I'm assuming you're not planning to store the nonce along with the ciphertext, but to re-derive it from the master key on decryption.) Whether it's "better" is hard to say — it only makes a difference if the cipher you're using is broken, and at that point it will depend on just how it's broken — but it's at least unlikely to be worse.
The AES key will be as unique as random 128 bit numbers are. It's derived from the hash of the plaintext file, optionally combined with a password. – CodesInChaos Apr 16 '12 at 10:16
Pardon for asking, but how are you going to get the hash of the plain text file before you have decrypted the cipher text? It's obviously not a good idea to append a deterministic hash of the plain text to the cipher text, since it will trivially reveal information about the plain text. – Henrick Hellström Apr 16 '12 at 10:41
@HenrickHellström You need to get that hash for an outside source, typically it's part of the download link. This is a scheme similar to freenet's CHK or tahoe-lafs immutable files. – CodesInChaos Apr 16 '12 at 11:57
@HenrickHellström How does a hash trivially reveal information about the preimage? (For a hash with "good" properties such as SHA256) – MartinSuecia Apr 16 '12 at 12:00
@MartinSuecia: Well, for instance the hash will not change if the plain text doesn't change, so the adversary will know if the file at the other end has changed or not. This is a piece of information that a properly used confidentiality mode will hide (with the exception of information about the maximum length of the plain text). – Henrick Hellström Apr 16 '12 at 12:05
http://mathoverflow.net/questions/117466/henselization-of-valued-field
## Henselization of valued field
What is the importance of henselization in valuation theory, when the rank of the valuation is greater than one? Thanks
## 1 Answer
Same as its importance in commutative algebra. Just to be clear about the definition, for a valued field $K$ with valuation ring $R$, the henselization $K^{\rm{h}}$ is defined to be the valued extension Frac($R^{\rm{h}}$) for the henselization $R^{\rm{h}}$ of $R$ in the sense of commutative algebra (and $R^{\rm{h}}$ is equipped with a preferred valuation extending the one on $R$).
This satisfies good properties as if it were a "completion" of $K$ even though it is (separable) algebraic over $K$, and it can be "approximated" using local-etale extensions of $R$; that is really the point. It satisfies Hensel's Lemma and every finite extension $F$ of $K^{\rm{h}}$ admits a unique valuation (necessarily henselian...) extending the one on $K^{\rm{h}}$ (with associated valuation ring that is the integral closure of $R^{\rm{h}}$ in $F$).
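To make the Hensel's Lemma property concrete, here is a standard textbook example (not taken from the answer above), worked out in the $7$-adic setting:

```latex
% Hensel's Lemma in \mathbb{Z}_7: f(x) = x^2 - 2 has a simple root mod 7.
f(3) = 3^2 - 2 = 7 \equiv 0 \pmod{7}, \qquad f'(3) = 6 \not\equiv 0 \pmod{7}
```

So there is a unique $x \in \mathbb{Z}_7$ with $x^2 = 2$ and $x \equiv 3 \pmod{7}$. The henselization of $\mathbb{Z}_{(7)}$, which embeds into $\mathbb{Z}_7$ and is algebraic over $\mathbb{Q}$, already contains such a root even though $\mathbb{Z}_{(7)}$ itself does not; this is the sense in which the henselization behaves like a completion while remaining algebraic.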
Thanks Ayanta. I am trying to get the idea about the Henselization and its important in valuation theory. Do you know any text book or any other material where I can read about Henselization? – Rajnish Dec 29 at 5:38
@Rajnish: To answer your reference question in a useful way it would be helpful to know the reason you are specifically interested in this rather specialized aspect of valuation theory (especially beyond the rank-1 setting). – ayanta Dec 29 at 5:55
Thanks Ayanta. I was reading the extension of valuation and suddenly appears the henselization as an immediate extension. That makes kind of hard time to get the idea for me. – Rajnish Dec 30 at 0:00
@Rajnish: If nothing is being done with it (beyond as an example of an immediate extension) then I recommend focusing on the discretely-valued case (where it's the fixed field of a decomposition group at a place on a separable closure) and ignore the topic until you have a real need to work with it. But if something is being done with it (especially beyond the rank-1 case) then what is that? Anyway, the theory of henselization of local rings beyond dvr's requires a lot of hard work to set up; Raynaud's "Anneaux locaux henseliens" Springer LNM 169 is on exactly this topic. – ayanta Dec 30 at 0:47
Thanks for your help Ayanta. – Rajnish Jan 3 at 22:58
http://mathoverflow.net/questions/112195/max-of-words-with-restricted-total-content/112722
## max # of words with restricted total content
This is the sort of problem in combinatorics that has a rather innocent look but turns out to be quite challenging - at least for a bunch of physicists! :)
Suppose we have a multiset $\mathbf{M}$ on a finite alphabet $\alpha$. Given a length $L$, what is the maximum number of different words of size $L$ we can produce (assuming, obviously, no replacement -- EDIT: i.e. no re-use of elements of $\mathbf{M}$)?
Let me further specify the problem with a simple example. Take $\alpha=\{a,b,c\}$ and $\mathbf{M}=\{a,a,a,a,b,c\}$. Taking $L=2$, we could have, for instance, $W_{1}=\{aa,bc\}$ or $W_{2}=\{ab,ac,aa\}$. In this case it is easy to see that we cannot form more than 3 different words.
Perhaps someone more familiar with combinatorics than me could give the correct phrasing to the problem!
-
I conjecture that you don't really mean $L>|M|$. – Andreas Blass Nov 12 at 18:18
Yes, sorry. Fixed. – Pluvio Nov 12 at 18:20
Nice problem! I'd suggest starting by considering the case $|\alpha|=2$ and $|M| = KL$ for some integer $K$. Then your question more or less reduces to asking if there exist $K$ distinct binary strings of length $L$ that collectively use a predetermined number of 0's and 1's. This already looks nontrivial to me, but perhaps tractable, and may give some insight into the general problem. – Timothy Chow Nov 14 at 16:13
@Pluvio The precise answer in full generality is certainly out of reach. How much do you really need to know? @Timothy $\alpha=2$ case is simple. Assume that you have $n$ zeroes and $N$ ones with $n\le N$. Try to form as many words as possible using as few zeroes as you can until you need to use more than $n$ at the next step (greedy algorithm). If you use not more than $N$ ones by the end, this is your answer. If you use more than $N$ ones, then it is $[\frac{n+N}L]$ (just take that many first words in the original list and start moving them up replacing 0 with 1 in one word every time). – fedja Nov 17 at 0:36
## 6 Answers
You can probably get pretty close to the answer with linear programming. We can restate the problem as an integer linear programming problem -- suppose $M$ has $n_1$ copies of the letter a, $n_2$ of b, and so on. There are $$K_{c_1,\dots, c_k}=\frac{(\sum_i c_i)!}{c_1!\cdots c_k!}$$ ways to make a word with $c_1$ a's, $c_2$ b's, etc. So if $x_{c_1,\dots, c_k}$ is the unknown number of words in $W$ with the given number of each type of letter, we have two sorts of constraints. First, $$x_{c_1,\dots, c_k}\le K_{c_1,\dots, c_k}$$ (all the words are distinct) and second, for each letter $i$, $$\sum_{(c_1,\dots,c_k)} c_i\, x_{c_1,\dots, c_k}\le n_i$$ (you don't use more than $n_i$ copies of letter $i$). You want to maximize $\sum x_{c_1,\dots, c_k}$ subject to these constraints and subject to the $x$'s being integers. If you allow the $x$'s to be real numbers, this is just a linear programming problem, and there are algorithms to solve it. My guess is that the solution over the reals is probably close to the solution over the integers.
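For small instances one can sidestep the LP and brute-force the integer program directly over subsets of candidate words; the following sketch (my own, exponential in the number of candidates, so toy inputs only) confirms the value 3 for the example in the question.

```python
from itertools import product, combinations
from collections import Counter

def max_words(multiset, L):
    """Brute-force maximum number of distinct length-L words formable
    from the letter multiset (exponential; small inputs only)."""
    supply = Counter(multiset)
    alphabet = sorted(supply)
    candidates = ["".join(w) for w in product(alphabet, repeat=L)]
    # Try subset sizes from the trivial upper bound |M|/L downwards.
    for k in range(len(multiset) // L, 0, -1):
        for subset in combinations(candidates, k):
            used = Counter("".join(subset))
            if all(used[c] <= supply[c] for c in used):
                return k
    return 0

print(max_words("aaaabc", 2))  # the question's example: 3
```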
-
Often similar problems are treated by the Transfer Matrix method in enumerative combinatorics; see R. Stanley's book, Sect. 4.7. In particular, 4.7.7 and 4.7.8 can in principle help here, if you reformulate your problem as a problem of enumerating length-$L$ sequences from $\alpha$ with forbidden factors (i.e. patterns). E.g. in your first example the forbidden factors are $bb$ and $cc$.
However, it looks as if in your examples you do not allow your symbols to be re-used in different words (is this what you mean by no replacement?), so this is actually something rather different. You appear to ask for the maximum number of "ordered" bins of size $L$ each, where you pack elements of $M$ in such a way that no two bins get the same ordered content. Note that it is easy to solve if the multiplicities of the elements of $\alpha$ in $M$ are small compared to $L$. In general, a greedy approach might produce good results: you try to get rid of elements of $\alpha$ with high multiplicity first, by packing as many of them as possible into one bin (e.g. you would produce your $W_2$ by packing $aa$ first, then $ab$, then $ac$). It could even happen to give you an optimal packing all the time (I am not convinced that this is a computationally difficult problem).
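A minimal sketch of this greedy idea (my own illustration, not a proven-optimal algorithm): at each step emit an unused word, formable from the remaining letters, that packs as many copies as possible of the currently most frequent letter.

```python
from collections import Counter
from itertools import product

def greedy_words(multiset, L):
    """Greedy heuristic: repeatedly emit an unused word that uses as many
    copies as possible of the most frequent remaining letter."""
    supply = Counter(multiset)
    used, out = set(), []
    while True:
        avail = [c for c in supply if supply[c] > 0]
        if not avail:
            break
        top = max(avail, key=lambda c: supply[c])  # most frequent letter
        best = None
        for w in product(sorted(avail), repeat=L):
            word = "".join(w)
            need = Counter(word)
            if word in used or any(need[c] > supply[c] for c in need):
                continue
            if best is None or word.count(top) > best.count(top):
                best = word
        if best is None:
            break
        used.add(best)
        out.append(best)
        supply -= Counter(best)
    return out

print(greedy_words("aaaabc", 2))  # ['aa', 'ab', 'ac'], the W_2 packing
```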
-
First suppose that L=2. In this case imagine that each element of $\alpha$ is the vertex of a graph and its multiplicity in M is its required degree. (Except that you also allow words like aa; moreover, ab and ba count as distinct, but I don't think this really changes the problem.) So whether we can produce |M|/L words or not depends on whether the given sequence is graphic or not. See http://mathworld.wolfram.com/GraphicSequence.html
Now if L>2, then we don't know the answer to the corresponding hypergraph problem, so probably also your problem is hard. (In the sense that it is probably NP-hard.) See http://www.math.uiuc.edu/~west/regs/hypergraphic.html
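For the $L=2$ connection above, whether a given multiplicity sequence is graphic can be decided with the Erdős–Gallai criterion; a short sketch of that standard test (my own code):

```python
def is_graphic(seq):
    """Erdos-Gallai test: can `seq` be the degree sequence of a simple graph?"""
    seq = sorted(seq, reverse=True)
    n = len(seq)
    if sum(seq) % 2:          # total degree must be even
        return False
    for k in range(1, n + 1):
        lhs = sum(seq[:k])
        rhs = k * (k - 1) + sum(min(d, k) for d in seq[k:])
        if lhs > rhs:
            return False
    return True

print(is_graphic([3, 3, 2, 2, 2]))  # True: e.g. the "house" graph
print(is_graphic([3, 1]))           # False: max degree exceeds n-1
```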
-
Interesting. I am in fact interested not in the proposition "$|M|/L$ different words of length $L$ can be formed with this given multiset" but in the *maximum* number of such different words - and yes, words with repeated letters are allowed. I also believe this is an NP-hard problem... – Pluvio Nov 12 at 20:30
If you have M/L distinct symbols, then you can achieve the maximum. The problem gets hard only for small values of L and very few distinct symbols. Even then, if you have L occurrences of each symbol, you can get L distinct words for each pair (from a partition into pairs) of symbols. So the cases where the maximum is not achievable seem pretty rare to me. Gerhard "Ask Me About System Design" Paseman, 2012.11.12 – Gerhard Paseman Nov 12 at 21:09
@Pluvio: Yes, but if it is already NP-hard to decide whether M/L words can be formed, then maximizing the number of words is also NP-hard, as M/L is a trivial upper bound. Also, if you are looking for a NP-hardness result, then maybe you should ask your question on cstheory.stackexchange.com – domotorp Nov 12 at 21:31
@Domotorp: yes, I understand. My gut feelings say that this is perhaps analytically intractable, but I am largely ignorant about combinatorics in general, so perhaps I am wrong. – Pluvio Nov 14 at 14:42
If you know L is small and M/L is large compared to the number of symbols available, you might start with crude estimates such as $A^L$ words requiring $LA^{L-1}$ symbols of each kind from an alphabet of size A. If A=2 you can use partial sums $\sum \binom{L}{i}$ to guess how many words you get using a sea of b's and $\sum i\binom{L}{i}$ many a's. Of course it gets more complex with more letters, but you could use the above result as a sort of multiplier. For exact estimates in every case, the problem is likely NP-hard, possibly NP-complete.
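A sketch of this two-letter greedy estimate (my own illustration; it assumes $a$ is the scarcer symbol, in the spirit of fedja's comment under the question): use up words containing as few a's as possible first, since there are $\binom{L}{i}$ distinct words with exactly $i$ a's.

```python
from math import comb

def binary_greedy(n_a, n_b, L):
    """Greedy word count over alphabet {a, b} with n_a a's and n_b b's,
    n_a assumed the scarcer supply: take words with few a's first."""
    words = 0
    for i in range(L + 1):
        for _ in range(comb(L, i)):   # distinct words with exactly i a's
            if i > n_a:
                return words          # scarce letter exhausted: stop
            if L - i > n_b:
                break                 # not enough b's at this i; use more a's
            n_a -= i
            n_b -= L - i
            words += 1
    return words

print(binary_greedy(3, 5, 2))  # 3: e.g. bb, ab, ba (and "aa" is blocked)
```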
-
Just for the bunch of physicists: look at the multiset as a partition, in your example {4,1,1}. Specialising to word length 2: to get a maximal pairing, use a recursion by pruning. Pair off the first part (= most frequent: the 4 a's) with one each of the remaining parts. Note that the first 2 a's can be paired together, and any excess a's need to be pruned off. Then define the residue as the decapitated partition in which the k-1 last parts are decremented by 1, with k = number of parts - first part + 1; then recurse on the residue while keeping track of the pruning count. In Mathematica 4.1:

    (* start def *)
    residu[par : {1 ...}, p_:0] := p + Mod[Length[par], 2];
    residu[par_?PartitionQ, p_:0] :=
      Block[{f = First[par], l = Length[par], n = Tr[par], temp = p},
        If[f - 1 > l, temp += f - 1 - l];
        {DeleteCases[Rest[par] -
            Table[If[k < n - 1 - (f - 1), 0, 1], {k, l - 1}], 0], temp}];
    prune[par_?PartitionQ, p_:0] :=
      If[Max[par] <= 1, residu @@ {par, p}, prune @@ residu @@ {par, p}];
    maxwords[par_?PartitionQ] := Floor[(Tr[par] - prune[par])/2];
    (* end def *)

Example: the 2+10^8'th and 3+10^8'th partitions of 100 (in reverse lexicographic order) are {20, 14, 8, 8, 6, 5, 5, 4, 4, 3, 3, 3, 3, 2, 2, 2, 2, 2, 2, 2} :: maxwords = 50 and {20, 14, 8, 8, 6, 5, 5, 4, 4, 3, 3, 3, 3, 2, 2, 2, 2, 2, 2, 1, 1} :: maxwords = 49.
Generalising to length 3 can in principle be done along the same lines, by forming trios {a,a,a}, {a,a,u}, {a,a,v}, etc., and pruning off excess a's, decapitation, tail decrement and recursion.
-
erratum: read not "k< n-1 -(f-1) " but "k< l+1 -(f-1)"
-
http://spikedmath.com/forum/viewtopic.php?p=1012
Spiked Math Forums
Where math geeks unite to discuss math and more math!
Diameter vs Radius
An enlightening discussion about pi and tau.
12 posts • Page 1 of 1
Diameter vs Radius
by Nyarly » Thu Jul 14, 2011 12:32 pm
The first point in the $\tau$ Manifesto is that it seems more natural to define the "Circle Constant" as
$\frac{C}{r}$
instead of
$\frac{C}{d},$
where $C$, $r$ and $d$ are, respectively, the circle's circumference, radius and diameter. Indeed the circle is defined as the set of points at a fixed distance (the radius) from a fixed center, so the radius is more "important" than the diameter.
My point is to forget that we are talking about circles, and consider a (planar) shape $F$, let's think of the points inside a closed simple curve. We can define its diameter as
$d(F)=\sup \{ |x-y|:x,y\in F\}$
and we can also define in a standard way (e.g. by integration) its area $A(F)$ and perimeter $P(F)$. The radius can be defined as half the diameter, but I don't see any geometrical meaning in it. If $F$ is convex then
$P(F)\leq d(F)\pi$
i.e. the perimeter of any convex shape is at most the perimeter of the circle with the same diameter. The idea behind this inequality is that for any convex set, if you draw the circle with the same diameter centered at the barycenter of the set, then the circle contains the set. (I'm not entirely sure of this; if someone has a rigorous proof he's welcome!)
Then we have the following beautiful (at least for me) variational formulation of $\pi$:
$\pi=\sup \left\{ \frac{P(F)}{d(F)} : F \text{ convex}\right\}.$
Moreover the supremum is a maximum (e.g. the circles are maximizers, but so are the so-called Reuleaux polygons). So $\pi$ is not only the "circle constant", but the "every convex shape maximal constant". This is because the diameter is an intrinsic value of ANY shape.
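A quick numerical illustration of this variational formula (my own check, using regular $n$-gons as the convex sets $F$): the ratio $P(F)/d(F)$ stays strictly below $\pi$ and tends to $\pi$ as the number of sides grows.

```python
from math import sin, cos, pi

def perimeter_over_diameter(n):
    """P(F)/d(F) for a regular n-gon with circumradius 1.
    For even n the diameter is 2R; for odd n it is the longest
    diagonal, 2R*cos(pi/(2n))."""
    perimeter = 2 * n * sin(pi / n)
    diameter = 2.0 if n % 2 == 0 else 2 * cos(pi / (2 * n))
    return perimeter / diameter

for n in (3, 4, 5, 12, 100, 1001):
    print(n, perimeter_over_diameter(n))  # always < pi; tends to pi as n grows
```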
I hope you agree with my argument and that it is correct. I'm not a native English speaker, so I'm sorry if there are any grammatical errors. Please do not kill me for those.
Nyarly
Kindergarten
Posts: 6
Joined: Thu Jul 14, 2011 10:40 am
Location: Italy
Re: Diameter vs Radius
by Chris Park » Thu Jul 14, 2011 1:39 pm
Interesting, I never thought to generalize a definition for 2d shapes other than circles in support of which constant is more natural.
Can you explain your first step, though? Where is the origin in your coordinate system containing $F$? For instance, if $F$ is a circle and the origin is at its (bary)center, wouldn't
$\sup\{|x-y|: x,y \in F\} = \text{radius}?$
Or can the origin be anywhere in $F$? Forgive me if I've got it wrong, I'm more of an engineering/physics guy and not used to set notation.
Chris Park
Kindergarten
Posts: 8
Joined: Sat Jul 09, 2011 1:25 am
Re: Diameter vs Radius
by Nyarly » Thu Jul 14, 2011 1:54 pm
Chris Park wrote:Interesting, I never thought to generalize a definition for 2d shapes other than circles in support of which constant is more natural.
I'm a Ph.D. student in the calculus of variations, so it's the first thing that I thought of
Chris Park wrote:Can you explain your first step, though? Where is the origin in your coordinate system containing $F$? For instance, if $F$ is a circle and the origin is at its (bary)center, wouldn't
$\sup\{|x-y|: x,y \in F\} = \text{radius}?$
Or can the origin be anywhere in $F$? Forgive me if I've got it wrong, I'm more of an engineering/physics guy and not used to set notation.
it doesn't matter where the origin is; you can think of the diameter of a (closed) set as the length of the longest segment contained in it. In fact $|x-y|$ is the length of the segment with endpoints $x$ and $y$! So you take every pair of points $x,y\in F$, construct the segment from $x$ to $y$ and take its length. Then you take the supremum/maximum of these lengths.
I hope it is clear.
By the way, there is an error (I don't know if I can edit the post, so I write it here) in the suggested proof of the inequality
$P(F)\leq d(F)\pi$
and I can't find a simple argument to justify it. (But I am 99.99% sure it is true).
Moreover I think that there is a similar "dual" point of view, considering the so-called "width" of a set, i.e. the width of the smallest strip which contains $F$.
Re: Diameter vs Radius
by Chris Park » Thu Jul 14, 2011 4:02 pm
Oooh, I see. I was thinking x and y were coordinates instead of vectors and the | | were absolute value instead of magnitude. I see then that the origin can be anywhere.
So if I understand the problem correctly, you want to prove $P(F)\leq d(F)\pi$ by integrating. I don't think you would integrate around the barycenter, but maybe around the midpoint of the diameter? (Bounds are zero to ... well let's just say all the way around the circle).
$P_1 = \int f(\theta)\, d\theta, \qquad P_2 = \int \frac{d}{2}\, d\theta = d(F)\pi$
but usually
$f(\theta) < \frac{d}{2}$
so
$\left(P_1 = \int f(\theta)\, d\theta\right) < \left(P_2 = \int \frac{d}{2}\, d\theta = d(F)\pi\right)$
The problem is that "usually" above is not an "always". Hmmm.
Edit: By using a drawing, it is very easy to prove that a circle maximizes the perimeter of a convex shape with a given diameter. But this drawing would be very difficult to describe!
Re: Diameter vs Radius
by Nyarly » Thu Jul 14, 2011 4:42 pm
Probably there is a proof in which you integrate something, but I think it is more related to the isoperimetric or Brunn–Minkowski inequalities.
In your proof there is a more serious problem than the fact that $f(\theta)$ can be greater than $d/2$ (this is more or less related to the fact that my "hint" for the proof is not correct). The problem is that there can be a lot of diameters (the circle has infinitely many diameters!) and the midpoints of these diameters can be different (as in the Reuleaux triangle, http://en.wikipedia.org/wiki/Reuleaux_triangle).
Yes, I think that there is an elementary proof "by drawing", but I can't visualize it. (For me it's 11:30 PM, and I stop thinking at 7 PM.)
By the way, my point about using the diameter instead of the radius is that if you give me any convex shape I can always tell you what the diameter is, but I can tell you what the radius is only if I know that it is a circle, and I find it by halving the diameter!
Edit: I tend to think of difficult things first, but the easiest shape with very distinct diameters is the equilateral triangle (the diameters are the edges).
Re: Diameter vs Radius
by Chris Park » Thu Jul 14, 2011 6:30 pm
Yes, yes! My drawing leads me to a Reuleaux triangle! My mind is blown
Basically I started by drawing the diameter of a circle and calling it (the diameter) AB. Then I tried to determine the locus of points that would not create a chord that would exceed the length of AB. Obviously this area was the intersection of two bigger circles (each of diameter 2AB) drawn around points A and B. Since this football-shaped intersection was symmetric on either side of line AB, I just looked at the bottom half. Choosing the vertex of the new region as a third point and drawing yet another circle around it yields a Reuleaux triangle.
The triangle appears to be the 'worst case scenario' in terms of drawing a shape with diameter D whose $f(\theta)$ exceeds $\frac{d}{2}$ as much as possible without changing D. Even in this shape, however, the perimeter is still only $\frac{\pi D}{\sqrt{3}}$.
I agree that this description makes diameter seem like the more natural dimension. I've actually seen it before: in engineering, sometimes one characterizes a diameter to calculate properties of fluid flow in pipes. When a pipe isn't circular, this is done as $D_{equiv} = \frac{4A}{P}$, so that even unusual shapes can be characterized with a 'diameter'.
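A tiny check of that formula (my own): $D_{equiv}=4A/P$ recovers the actual diameter for a circular pipe and the side length for a square duct.

```python
from math import pi

def d_equiv(area, perimeter):
    """Equivalent (hydraulic) diameter used in pipe-flow calculations."""
    return 4.0 * area / perimeter

d = 2.0  # circular pipe of diameter d
print(d_equiv(pi * (d / 2) ** 2, pi * d))  # recovers d = 2.0

s = 3.0  # square duct of side s
print(d_equiv(s * s, 4 * s))               # recovers s = 3.0
```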
Re: Diameter vs Radius
by Nyarly » Fri Jul 15, 2011 4:46 am
Chris Park wrote:The triangle appears to be the 'worst case scenario' in terms of drawing a shape with diameter D whose $f(\theta)$ exceeds $\frac{d}{2}$ as much as possible without changing D. Even in this shape, however, the perimeter is still only $\frac{\pi D}{\sqrt{3}}$.
The perimeter of the Reuleaux triangle (and every Reuleaux polygon) is $\pi D$; in fact it is made of three circular arcs of radius $D$ and subtended angle $\pi/3$. Then
$P=3\left(\frac{\pi}{3} D\right).$
The problem in the proof is that the circle is not the only shape for which $P=\pi D$, so, probably, the idea would be, given a shape, to find a "constant width set", i.e. a shape for which $P=\pi D$, that trivially has a bigger perimeter. An idea maybe is to take the convex hull of all the diameters of the set, and hope that the worst-case scenarios are the Reuleaux polygons! I'm starting to think that this is not an easy result...
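One can confirm $P=\pi d$ for the Reuleaux triangle numerically (my own sketch): discretize each of the three arcs into short chords and sum their lengths.

```python
from math import cos, sin, hypot, pi

def arc_length(radius, angle, steps=100000):
    """Chord-discretized length of a circular arc of given radius and angle."""
    pts = [(radius * cos(angle * k / steps), radius * sin(angle * k / steps))
           for k in range(steps + 1)]
    return sum(hypot(x2 - x1, y2 - y1)
               for (x1, y1), (x2, y2) in zip(pts, pts[1:]))

# A Reuleaux triangle of width d is bounded by three arcs,
# each of radius d subtending an angle of pi/3.
d = 1.0
perimeter = 3 * arc_length(d, pi / 3)
print(perimeter)  # ~3.14159..., i.e. pi * d
```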
Re: Diameter vs Radius
by SpikedMath » Fri Jul 15, 2011 8:45 am
I don't have a rigorous proof but there might be relevant results known already. One interesting result is Theorem B from "Some Inequalities for Convex and Star-Shaped Domains". This isn't exactly the same problem, as their "r" is different from our "r".
We can also try looking at the radius of curvature and see if that gives anything interesting.
Math - It's in you to give.
SpikedMath
Site Admin
Posts: 133
Joined: Mon Feb 07, 2011 1:31 am
Location: Canada
Re: Diameter vs Radius
by Nyarly » Fri Jul 15, 2011 1:20 pm
The inequality
$P<\pi d$
is a known fact; see e.g. the book Convex Figures by I. M. Jaglom and V. G. Boltjanskiĭ (MathSciNet http://www.ams.org/mathscinet-getitem?mr=123962 -- I don't know if it always works or if you need access to the MathSciNet database); it is exercise 7.17 and the solution is in the appendix.
It is not a difficult proof, but it isn't elementary as I hoped...
Re: Diameter vs Radius
by Chris Park » Fri Jul 15, 2011 1:49 pm
Nyarly wrote:The perimeter of the Reuleaux triangle (and every Reuleaux polygon) is $\pi D$, in fact it is made of three circular arc of radius $D$ and subtended angle $\pi/3$. Then
$P=3\left(\frac{\pi}{3} D\right).$
Sorry, my $D$ in that post is different from my $d$, which I failed to elaborate. $D$ is the diameter of the circle that circumscribes the Reuleaux triangle, and that is (correct me if I'm wrong) $\sqrt{3}d$. So when written in terms of the original diameter, I also get $\pi d$.
I think this picture tells the story pretty well. The red line is the diameter of some shape we want to draw. The black region is the locus of points that can be drawn without creating new chords longer than the diameter (excluding the original circle, which is left blue because MS paint wasn't cooperating). When you choose point C and maximize perimeter by tracing the outer edge of the remaining available points, you get a Reuleaux triangle. You can also see by symmetry that no chosen points between C and the original circle will ever increase the perimeter beyond that of the two situational extrema (which both gave pi D).
Interesting that we've just brought up the similarity of all Reuleaux n-gons in having perimeters $\pi d$. When you start looking at this subset of polygons, radius becomes a player again (since it's easy to define the radius of a Reuleaux n-gon).
Attachment: Reuleaux.jpg
Re: Diameter vs Radius
by Nyarly » Tue Jul 19, 2011 1:51 pm
Chris Park wrote:Interesting that we've just brought up the similarity of all Reuleaux n-gons in having perimeters $\pi d$. When you start looking at this subset of polygons, radius becomes a player again (since it's easy to define the radius of a Reuleaux n-gon).
Sorry, how do you define it? It's not so clear to me....
Re: Diameter vs Radius
by Chris Park » Thu Apr 12, 2012 8:06 am
(Nine months later) -- you can define it as the distance from the center to a vertex!
http://medlibrary.org/medwiki/Geometric_group_theory
# Geometric group theory
Geometric group theory is an area in mathematics devoted to the study of finitely generated groups via exploring the connections between algebraic properties of such groups and topological and geometric properties of spaces on which these groups act (that is, when the groups in question are realized as geometric symmetries or continuous transformations of some spaces).
Another important idea in geometric group theory is to consider finitely generated groups themselves as geometric objects. This is usually done by studying the Cayley graphs of groups, which, in addition to the graph structure, are endowed with the structure of a metric space, given by the so-called word metric.
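As a concrete illustration of the word metric (my own sketch, not part of the article): a breadth-first search on a Cayley graph computes word-metric balls around the identity. For $\mathbb{Z}^2$ with the standard generators the ball sizes grow quadratically, $|B(r)| = 2r^2+2r+1$, the kind of growth rate studied below.

```python
from collections import deque

def ball_sizes(generators, identity, multiply, radius):
    """BFS on a Cayley graph: |B(r)| = number of group elements within
    word-metric distance r of the identity, for r = 0..radius."""
    dist = {identity: 0}
    queue = deque([identity])
    while queue:
        g = queue.popleft()
        if dist[g] == radius:
            continue                      # don't expand past the radius
        for s in generators:
            h = multiply(g, s)
            if h not in dist:
                dist[h] = dist[g] + 1
                queue.append(h)
    counts = [0] * (radius + 1)
    for r in dist.values():
        counts[r] += 1
    for r in range(1, radius + 1):        # cumulative: sphere -> ball sizes
        counts[r] += counts[r - 1]
    return counts

# Z^2 with standard generators: quadratic growth, |B(r)| = 2r^2 + 2r + 1.
gens = [(1, 0), (-1, 0), (0, 1), (0, -1)]
mul = lambda g, s: (g[0] + s[0], g[1] + s[1])
print(ball_sizes(gens, (0, 0), mul, 5))  # [1, 5, 13, 25, 41, 61]
```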
Geometric group theory, as a distinct area, is relatively new, and became a clearly identifiable branch of mathematics in the late 1980s and early 1990s. Geometric group theory closely interacts with low-dimensional topology, hyperbolic geometry, algebraic topology, computational group theory and differential geometry. There are also substantial connections with complexity theory, mathematical logic, the study of Lie groups and their discrete subgroups, dynamical systems, probability theory, K-theory, and other areas of mathematics.
In the introduction to his book Topics in Geometric Group Theory, Pierre de la Harpe wrote: "One of my personal beliefs is that fascination with symmetries and groups is one way of coping with frustrations of life's limitations: we like to recognize symmetries which allow us to recognize more than what we can see. In this sense the study of geometric group theory is a part of culture, and reminds me of several things that Georges de Rham practices on many occasions, such as teaching mathematics, reciting Mallarmé, or greeting a friend" (page 3 in [1]).
## History
Geometric group theory grew out of combinatorial group theory that largely studied properties of discrete groups via analyzing group presentations, that describe groups as quotients of free groups; this field was first systematically studied by Walther von Dyck, student of Felix Klein, in the early 1880s,[2] while an early form is found in the 1856 Icosian Calculus of William Rowan Hamilton, where he studied the icosahedral symmetry group via the edge graph of the dodecahedron. Currently combinatorial group theory as an area is largely subsumed by geometric group theory. Moreover, the term "geometric group theory" came to often include studying discrete groups using probabilistic, measure-theoretic, arithmetic, analytic and other approaches that lie outside of the traditional combinatorial group theory arsenal.
In the first half of the 20th century, pioneering work of Dehn, Nielsen, Reidemeister and Schreier, Whitehead, van Kampen, amongst others, introduced some topological and geometric ideas into the study of discrete groups.[3] Other precursors of geometric group theory include small cancellation theory and Bass–Serre theory. Small cancellation theory was introduced by Martin Greendlinger in the 1960s[4][5] and further developed by Roger Lyndon and Paul Schupp.[6] It studies van Kampen diagrams, corresponding to finite group presentations, via combinatorial curvature conditions and derives algebraic and algorithmic properties of groups from such analysis. Bass–Serre theory, introduced in the 1977 book of Serre,[7] derives structural algebraic information about groups by studying group actions on simplicial trees. External precursors of geometric group theory include the study of lattices in Lie groups, especially the Mostow rigidity theorem, the study of Kleinian groups, and the progress achieved in low-dimensional topology and hyperbolic geometry in the 1970s and early 1980s, spurred, in particular, by Thurston's geometrization program.
The emergence of geometric group theory as a distinct area of mathematics is usually traced to the late 1980s and early 1990s. It was spurred by the 1987 monograph of Gromov "Hyperbolic groups"[8] that introduced the notion of a hyperbolic group (also known as word-hyperbolic or Gromov-hyperbolic or negatively curved group), which captures the idea of a finitely generated group having large-scale negative curvature, and by his subsequent monograph Asymptotic Invariants of Infinite Groups,[9] which outlined Gromov's program of understanding discrete groups up to quasi-isometry. The work of Gromov had a transformative effect on the study of discrete groups[10][11][12] and the phrase "geometric group theory" started appearing soon afterwards (see, e.g.,[13]).
## Modern themes and developments
Notable themes and developments in geometric group theory in the 1990s and 2000s include:
• Gromov's program to study quasi-isometric properties of groups.
A particularly influential broad theme in the area is Gromov's program[14] of classifying finitely generated groups according to their large scale geometry. Formally, this means classifying finitely generated groups with their word metric up to quasi-isometry. This program involves:
1. The study of properties that are invariant under quasi-isometry. Examples of such properties of finitely generated groups include: the growth rate of a finitely generated group; the isoperimetric function or Dehn function of a finitely presented group; the number of ends of a group; hyperbolicity of a group; the homeomorphism type of the boundary of a hyperbolic group;[15] asymptotic cones of finitely generated groups (see, e.g.,[16][17]); amenability of a finitely generated group; being virtually abelian (that is, having an abelian subgroup of finite index); being virtually nilpotent; being virtually free; being finitely presentable; being a finitely presentable group with solvable Word Problem; and others.
2. Theorems which use quasi-isometry invariants to prove algebraic results about groups, for example: Gromov's polynomial growth theorem; Stallings' ends theorem; Mostow rigidity theorem.
3. Quasi-isometric rigidity theorems, in which one classifies algebraically all groups that are quasi-isometric to some given group or metric space. This direction was initiated by the work of Schwartz on quasi-isometric rigidity of rank-one lattices[18] and the work of Farb and Mosher on quasi-isometric rigidity of Baumslag-Solitar groups.[19]
• The theory of word-hyperbolic and relatively hyperbolic groups. A particularly important development here is the work of Sela in the 1990s resulting in the solution of the isomorphism problem for word-hyperbolic groups.[20] The notion of a relatively hyperbolic group was originally introduced by Gromov in 1987[8] and refined by Farb[21] and Bowditch,[22] in the 1990s. The study of relatively hyperbolic groups gained prominence in the 2000s.
• Interactions with mathematical logic and the study of first-order theory of free groups. Particularly important progress occurred on the famous Tarski conjectures, due to the work of Sela[23] as well as of Kharlampovich and Myasnikov.[24] The study of limit groups and introduction of the language and machinery of non-commutative algebraic geometry gained prominence.
• Interactions with computer science, complexity theory and the theory of formal languages. This theme is exemplified by the development of the theory of automatic groups,[25] a notion that imposes certain geometric and language-theoretic conditions on the multiplication operation in a finitely generated group.
• The study of isoperimetric inequalities, Dehn functions and their generalizations for finitely presented groups. This includes, in particular, the work of Birget, Ol'shanskii, Rips and Sapir[26][27] essentially characterizing the possible Dehn functions of finitely presented groups, as well as results providing explicit constructions of groups with fractional Dehn functions.[28]
• Development of the theory of JSJ-decompositions for finitely generated and finitely presented groups.[29][30][31][32][33]
• Connections with geometric analysis, the study of C*-algebras associated with discrete groups and of the theory of free probability. This theme is represented, in particular, by considerable progress on the Novikov conjecture and the Baum-Connes conjecture and the development and study of related group-theoretic notions such as topological amenability, asymptotic dimension, uniform embeddability into Hilbert spaces, rapid decay property, and so on (see, for example,[34][35][36]).
• Interactions with the theory of quasiconformal analysis on metric spaces, particularly in relation to Cannon's Conjecture about characterization of hyperbolic groups with boundary homeomorphic to the 2-sphere.[37][38][39]
• Finite subdivision rules, also in relation to Cannon's Conjecture.[40]
• Interactions with topological dynamics in the contexts of studying actions of discrete groups on various compact spaces and group compactifications, particularly convergence group methods.[41][42]
• Development of the theory of group actions on $\mathbb R$-trees (particularly the Rips machine), and its applications.[43]
• The study of group actions on CAT(0) spaces and CAT(0) cubical complexes,[44] motivated by ideas from Alexandrov geometry.
• Interactions with low-dimensional topology and hyperbolic geometry, particularly the study of 3-manifold groups (see, e.g.,[45]), mapping class groups of surfaces, braid groups and Kleinian groups.
• Introduction of probabilistic methods to study algebraic properties of "random" group theoretic objects (groups, group elements, subgroups, etc.). A particularly important development here is the work of Gromov who used probabilistic methods to prove[46] the existence of a finitely generated group that is not uniformly embeddable into a Hilbert space. Other notable developments include introduction and study of the notion of generic-case complexity[47] for group-theoretic and other mathematical algorithms and algebraic rigidity results for generic groups.[48]
• The study of automata groups and iterated monodromy groups as groups of automorphisms of infinite rooted trees. In particular, Grigorchuk's groups of intermediate growth, and their generalizations, appear in this context.[49][50]
• The study of measure-theoretic properties of group actions on measure spaces, particularly introduction and development of the notions of measure equivalence and orbit equivalence, as well as measure-theoretic generalizations of Mostow rigidity.[51][52]
• The study of unitary representations of discrete groups and Kazhdan's property (T).[53]
• The study of Out(Fn) (the outer automorphism group of a free group of rank n) and of individual automorphisms of free groups. Introduction and the study of Culler-Vogtmann's outer space[54] and of the theory of train tracks[55] for free group automorphisms played a particularly prominent role here.
• Development of Bass–Serre theory, particularly various accessibility results[56][57][58] and the theory of tree lattices.[59] Generalizations of Bass–Serre theory such as the theory of complexes of groups.[60]
• The study of random walks on groups and related boundary theory, particularly the notion of Poisson boundary (see, e.g.,[61]). The study of amenability and of groups whose amenability status is still unknown.
• Interactions with finite group theory, particularly progress in the study of subgroup growth.[62]
• Studying subgroups and lattices in linear groups, such as $SL(n, \mathbb R)$, and of other Lie groups, via geometric methods (e.g. buildings), algebro-geometric tools (e.g. algebraic groups and representation varieties), analytic methods (e.g. unitary representations on Hilbert spaces) and arithmetic methods.
• Group cohomology, using algebraic and topological methods, particularly involving interaction with algebraic topology and the use of Morse-theoretic ideas in the combinatorial context; large-scale, or coarse (e.g. see [63]) homological and cohomological methods.
• Progress on traditional combinatorial group theory topics, such as the Burnside problem,[64][65] the study of Coxeter groups and Artin groups, and so on (the methods used to study these questions currently are often geometric and topological).
## Examples
The following examples are often studied in geometric group theory:
• Amenable groups
• The infinite cyclic group Z
• Outer automorphism groups Out(Fn) (via Outer space)
• Hyperbolic groups
• Mapping class groups (automorphisms of surfaces)
• Symmetric groups
• Braid groups
• Coxeter groups
• General Artin groups
• Thompson's group F
• CAT(0) groups
• Arithmetic groups
• Automatic groups
• Kleinian groups, and other lattices acting on symmetric spaces.
• Wallpaper groups
• Baumslag-Solitar groups
• Fundamental groups of graphs of groups
• Grigorchuk group
## See also
• The Ping-pong lemma, a useful way to exhibit a group as a free product
• Amenable group
• Nielsen transformation
• Tietze transformation
## References
1. P. de la Harpe, Topics in geometric group theory. Chicago Lectures in Mathematics. University of Chicago Press, Chicago, IL, 2000. ISBN 0-226-31719-6, ISBN 0-226-31721-8.
2. Stillwell, John (2002), Mathematics and its history, Springer, p. 374, ISBN 978-0-387-95336-6
3. Bruce Chandler and Wilhelm Magnus. The history of combinatorial group theory. A case study in the history of ideas. Studies in the History of Mathematics and Physical Sciences, vol. 9. Springer-Verlag, New York, 1982.
4. M. Greendlinger, An analogue of a theorem of Magnus. Archiv der Mathematik, vol. 12 (1961), pp. 94-96.
5. R. Lyndon and P. Schupp, Combinatorial Group Theory, Springer-Verlag, Berlin, 1977. Reprinted in the "Classics in mathematics" series, 2000.
6. J.-P. Serre, Trees. Translated from the 1977 French original by John Stillwell. Springer-Verlag, Berlin-New York, 1980. ISBN 3-540-10103-9.
7. M. Gromov, Hyperbolic Groups, in "Essays in Group Theory" (S. M. Gersten, ed.), MSRI Publ. 8, 1987, pp. 75-263.
8. M. Gromov, "Asymptotic invariants of infinite groups", in "Geometric Group Theory", Vol. 2 (Sussex, 1991), London Mathematical Society Lecture Note Series, 182, Cambridge University Press, Cambridge, 1993, pp. 1-295.
9. I. Kapovich and N. Benakli. Boundaries of hyperbolic groups. Combinatorial and geometric group theory (New York, 2000/Hoboken, NJ, 2001), pp. 39-93, Contemp. Math., 296, Amer. Math. Soc., Providence, RI, 2002. From the Introduction: "In the last fifteen years geometric group theory has enjoyed fast growth and rapidly increasing influence. Much of this progress has been spurred by remarkable work of M. L. Gromov [in Essays in group theory, 75--263, Springer, New York, 1987; in Geometric group theory, Vol. 2 (Sussex, 1991), 1--295, Cambridge Univ. Press, Cambridge, 1993], who has advanced the theory of word-hyperbolic groups (also referred to as Gromov-hyperbolic or negatively curved groups)."
10. B. H. Bowditch, Hyperbolic 3-manifolds and the geometry of the curve complex. European Congress of Mathematics, pp. 103-115, Eur. Math. Soc., Zürich, 2005. From the Introduction: "Much of this can be viewed in the context of geometric group theory. This subject has seen very rapid growth over the last twenty years or so, though of course, its antecedents can be traced back much earlier. [...] The work of Gromov has been a major driving force in this. Particularly relevant here is his seminal paper on hyperbolic groups [Gr]."
11. G. Elek. The mathematics of Misha Gromov. Acta Mathematica Hungarica, vol. 113 (2006), no. 3, pp. 171-185. From p. 181: "Gromov's pioneering work on the geometry of discrete metric spaces and his quasi-isometry program became the locomotive of geometric group theory from the early eighties."
12. Geometric group theory. Vol. 1. Proceedings of the symposium held at Sussex University, Sussex, July 1991. Edited by Graham A. Niblo and Martin A. Roller. London Mathematical Society Lecture Note Series, 181. Cambridge University Press, Cambridge, 1993. ISBN 0-521-43529-3.
13. M. Gromov, Asymptotic invariants of infinite groups, in "Geometric Group Theory", Vol. 2 (Sussex, 1991), London Mathematical Society Lecture Note Series, 182, Cambridge University Press, Cambridge, 1993, pp. 1-295.
14. I. Kapovich and N. Benakli. Boundaries of hyperbolic groups. Combinatorial and geometric group theory (New York, 2000/Hoboken, NJ, 2001), pp. 39-93, Contemp. Math., 296, Amer. Math. Soc., Providence, RI, 2002.
15. T. R. Riley, Higher connectedness of asymptotic cones. Topology, vol. 42 (2003), no. 6, pp. 1289-1352.
16. R. E. Schwartz. The quasi-isometry classification of rank one lattices. Institut des Hautes Études Scientifiques. Publications Mathématiques. No. 82 (1995), pp. 133-168.
17. B. Farb and L. Mosher. A rigidity theorem for the solvable Baumslag-Solitar groups. With an appendix by Daryl Cooper. Inventiones Mathematicae, vol. 131 (1998), no. 2, pp. 419-451.
18. Z. Sela, The isomorphism problem for hyperbolic groups. I. Annals of Mathematics (2), vol. 141 (1995), no. 2, pp. 217-283.
19. B. Farb. Relatively hyperbolic groups. Geometric and Functional Analysis, vol. 8 (1998), no. 5, pp. 810-840.
20. B. H. Bowditch. Treelike structures arising from continua and convergence groups. Memoirs American Mathematical Society vol. 139 (1999), no. 662.
21. Z. Sela, Diophantine geometry over groups and the elementary theory of free and hyperbolic groups. Proceedings of the International Congress of Mathematicians, Vol. II (Beijing, 2002), pp. 87-92, Higher Ed. Press, Beijing, 2002.
22. O. Kharlampovich and A. Myasnikov, Tarski's problem about the elementary theory of free groups has a positive solution. Electronic Research Announcements of the American Mathematical Society, vol. 4 (1998), pp. 101-108.
23. D. B. A. Epstein, J. W. Cannon, D. Holt, S. Levy, M. Paterson, W. Thurston. Word processing in groups. Jones and Bartlett Publishers, Boston, MA, 1992.
24. M. Sapir, J.-C. Birget, E. Rips, Isoperimetric and isodiametric functions of groups. Annals of Mathematics (2), vol 156 (2002), no. 2, pp. 345-466.
25. J.-C. Birget, A. Yu. Ol'shanskii, E. Rips, M. Sapir, Isoperimetric functions of groups and computational complexity of the word problem. Annals of Mathematics (2), vol 156 (2002), no. 2, pp. 467-518.
26. M. R. Bridson, Fractional isoperimetric inequalities and subgroup distortion. Journal of the American Mathematical Society, vol. 12 (1999), no. 4, pp. 1103-1118.
27. E. Rips and Z. Sela, Cyclic splittings of finitely presented groups and the canonical JSJ decomposition. Annals of Mathematics (2), vol. 146 (1997), no. 1, pp. 53-109.
28. M. J. Dunwoody and M. E. Sageev. JSJ-splittings for finitely presented groups over slender groups. Inventiones Mathematicae, vol. 135 (1999), no. 1, pp. 25-44.
29. P. Scott and G. A. Swarup. Regular neighbourhoods and canonical decompositions for groups. Electronic Research Announcements of the American Mathematical Society, vol. 8 (2002), pp. 20-28.
30. B. H. Bowditch. Cut points and canonical splittings of hyperbolic groups. Acta Mathematica, vol. 180 (1998), no. 2, pp. 145-186.
31. K. Fujiwara and P. Papasoglu, JSJ-decompositions of finitely presented groups and complexes of groups. Geometric and Functional Analysis, vol. 16 (2006), no. 1, pp. 70-125.
32. G. Yu. The Novikov conjecture for groups with finite asymptotic dimension. Annals of Mathematics (2), vol. 147 (1998), no. 2, pp. 325-355.
33. G. Yu. The coarse Baum-Connes conjecture for spaces which admit a uniform embedding into Hilbert space. Inventiones Mathematicae, vol 139 (2000), no. 1, pp. 201--240.
34. I. Mineyev and G. Yu. The Baum-Connes conjecture for hyperbolic groups. Inventiones Mathematicae, vol. 149 (2002), no. 1, pp. 97-122.
35. M. Bonk and B. Kleiner. Conformal dimension and Gromov hyperbolic groups with 2-sphere boundary. Geometry and Topology, vol. 9 (2005), pp. 219-246.
36. M. Bourdon and H. Pajot. Quasi-conformal geometry and hyperbolic geometry. Rigidity in dynamics and geometry (Cambridge, 2000), pp. 1-17, Springer, Berlin, 2002.
37. M. Bonk, Quasiconformal geometry of fractals. International Congress of Mathematicians. Vol. II, pp. 1349-1373, Eur. Math. Soc., Zürich, 2006.
38. J. W. Cannon, W. J. Floyd, W. R. Parry. Finite subdivision rules. Conformal Geometry and Dynamics, vol. 5 (2001), pp. 153–196.
39. P. Tukia. Generalizations of Fuchsian and Kleinian groups. First European Congress of Mathematics, Vol. II (Paris, 1992), pp. 447-461, Progr. Math., 120, Birkhäuser, Basel, 1994.
40. A. Yaman. A topological characterization of relatively hyperbolic groups. Journal für die Reine und Angewandte Mathematik, vol. 566 (2004), pp. 41-89.
41. M. Bestvina and M. Feighn. Stable actions of groups on real trees. Inventiones Mathematicae, vol. 121 (1995), no. 2, pp. 287-321.
42. M. R. Bridson and A. Haefliger, Metric spaces of non-positive curvature. Grundlehren der Mathematischen Wissenschaften [Fundamental Principles of Mathematical Sciences], vol. 319. Springer-Verlag, Berlin, 1999.
43. M. Kapovich, Hyperbolic manifolds and discrete groups. Progress in Mathematics, 183. Birkhäuser Boston, Inc., Boston, MA, 2001.
44. M. Gromov. Random walk in random groups. Geometric and Functional Analysis, vol. 13 (2003), no. 1, pp. 73-146.
45. I. Kapovich, A. Miasnikov, P. Schupp and V. Shpilrain, Generic-case complexity, decision problems in group theory, and random walks. Journal of Algebra, vol. 264 (2003), no. 2, pp. 665-694.
46. I. Kapovich, P. Schupp, V. Shpilrain, Generic properties of Whitehead's algorithm and isomorphism rigidity of random one-relator groups. Pacific Journal of Mathematics, vol. 223 (2006), no. 1, pp. 113-140.
47. L. Bartholdi, R. I. Grigorchuk and Z. Sunik. Branch groups. Handbook of algebra, Vol. 3, pp. 989-1112, North-Holland, Amsterdam, 2003.
48. V. Nekrashevych. Self-similar groups. Mathematical Surveys and Monographs, 117. American Mathematical Society, Providence, RI, 2005. ISBN 0-8218-3831-8.
49. A. Furman, Gromov's measure equivalence and rigidity of higher rank lattices. Annals of Mathematics (2), vol. 150 (1999), no. 3, pp. 1059-1081.
50. N. Monod, Y. Shalom, Orbit equivalence rigidity and bounded cohomology. Annals of Mathematics (2), vol. 164 (2006), no. 3, pp. 825-878.
51. Y. Shalom. The algebraization of Kazhdan's property (T). International Congress of Mathematicians. Vol. II, pp. 1283-1310, Eur. Math. Soc., Zürich, 2006.
52. M Culler and K. Vogtmann. Moduli of graphs and automorphisms of free groups. Inventiones Mathematicae, vol. 84 (1986), no. 1, pp. 91-119.
53. M. Bestvina and M. Handel, Train tracks and automorphisms of free groups. Annals of Mathematics (2), vol. 135 (1992), no. 1, pp. 1-51.
54. M. J. Dunwoody. The accessibility of finitely presented groups. Inventiones Mathematicae, vol. 81 (1985), no. 3, pp. 449-457.
55. M. Bestvina and M. Feighn. Bounding the complexity of simplicial group actions on trees. Inventiones Mathematicae, vol. 103 (1991), no 3, pp. 449-469 (1991).
56. Z. Sela, Acylindrical accessibility for groups. Inventiones Mathematicae, vol. 129 (1997), no. 3, pp. 527-565.
57. H. Bass and A. Lubotzky. Tree lattices. Progress in Mathematics, 176. Birkhäuser Boston, Inc., Boston, MA, 2001. ISBN 0-8176-4120-3.
58. M. R. Bridson and A. Haefliger, Metric spaces of non-positive curvature. Grundlehren der Mathematischen Wissenschaften [Fundamental Principles of Mathematical Sciences], vol. 319. Springer-Verlag, Berlin, 1999. ISBN 3-540-64324-9.
59. V. A. Kaimanovich, The Poisson formula for groups with hyperbolic properties. Annals of Mathematics (2), vol. 152 (2000), no. 3, pp. 659-692.
60. A. Lubotzky and D. Segal. Subgroup growth. Progress in Mathematics, 212. Birkhäuser Verlag, Basel, 2003. ISBN 3-7643-6989-2.
61. M. Bestvina, M. Kapovich and B. Kleiner. Van Kampen's embedding obstruction for discrete groups. Inventiones Mathematicae, vol. 150 (2002), no. 2, pp. 219-235.
62. S. V. Ivanov. The free Burnside groups of sufficiently large exponents. International Journal of Algebra and Computation, vol. 4 (1994), no. 1-2.
63. I. G. Lysënok. Infinite Burnside groups of even period. (Russian) Izvestiya Rossiyskoi Akademii Nauk, Seriya Matematicheskaya, vol. 60 (1996), no. 3, pp. 3-224; translation in Izvestiya: Mathematics, vol. 60 (1996), no. 3, pp. 453-654.
### Books and monographs
These texts cover geometric group theory and related topics.
• B. H. Bowditch. A course on geometric group theory. MSJ Memoirs, 16. Mathematical Society of Japan, Tokyo, 2006. ISBN 4-931469-35-3
• M. R. Bridson and A. Haefliger, Metric spaces of non-positive curvature. Grundlehren der Mathematischen Wissenschaften [Fundamental Principles of Mathematical Sciences], vol. 319. Springer-Verlag, Berlin, 1999. ISBN 3-540-64324-9
• Michel Coornaert, Thomas Delzant and Athanase Papadopoulos, "Géométrie et théorie des groupes : les groupes hyperboliques de Gromov", Lecture Notes in Mathematics, vol. 1441, Springer-Verlag, Berlin, 1990, x+165 pp. MR 92f:57003, ISBN 3-540-52977-2
• Michel Coornaert and Athanase Papadopoulos, Symbolic dynamics and hyperbolic groups. Lecture Notes in Mathematics. 1539. Springer-Verlag, Berlin, 1993, viii+138 pp. ISBN 3-540-56499-3
• P. de la Harpe, Topics in geometric group theory. Chicago Lectures in Mathematics. University of Chicago Press, Chicago, IL, 2000. ISBN 0-226-31719-6
• D. B. A. Epstein, J. W. Cannon, D. Holt, S. Levy, M. Paterson, W. Thurston. Word processing in groups. Jones and Bartlett Publishers, Boston, MA, 1992. ISBN 0-86720-244-0
• M. Gromov, Hyperbolic Groups, in "Essays in Group Theory" (S. M. Gersten, ed.), MSRI Publ. 8, 1987, pp. 75–263. ISBN 0-387-96618-8
• M. Gromov, Asymptotic invariants of infinite groups, in "Geometric Group Theory", Vol. 2 (Sussex, 1991), London Mathematical Society Lecture Note Series, 182, Cambridge University Press, Cambridge, 1993, pp. 1–295
• M. Kapovich, Hyperbolic manifolds and discrete groups. Progress in Mathematics, 183. Birkhäuser Boston, Inc., Boston, MA, 2001
• R. Lyndon and P. Schupp, Combinatorial Group Theory, Springer-Verlag, Berlin, 1977. Reprinted in the "Classics in mathematics" series, 2000. ISBN 3-540-41158-5
• A. Yu. Ol'shanskii, Geometry of defining relations in groups. Translated from the 1989 Russian original by Yu. A. Bakhturin. Mathematics and its Applications (Soviet Series), 70. Kluwer Academic Publishers Group, Dordrecht, 1991
• J. Roe, Lectures on coarse geometry. University Lecture Series, 31. American Mathematical Society, Providence, RI, 2003. ISBN 0-8218-3332-4
Content in this section is authored by an open community of volunteers and is not produced by, reviewed by, or in any way affiliated with MedLibrary.org. Licensed under the Creative Commons Attribution-ShareAlike 3.0 Unported License, using material from the Wikipedia article on "Geometric group theory", available in its original form here:
http://en.wikipedia.org/w/index.php?title=Geometric_group_theory
http://unapologetic.wordpress.com/2007/07/11/what-is-knot-homology/
# The Unapologetic Mathematician
## What is Knot Homology?
So I’m back from a week in Faro, Portugal, talking about various things surrounding the ideas of “knot homology”. So what is it? Well, this will be a bit of a loose treatment of the subject, and may not be completely on the mark. I like David Corfield’s idea that a mathematician is a sort of storyteller, and I’m not about to let mere history get in the way of a good história. Besides, I’ll get to most of the details in my main line sooner or later.
First I should mention the Bracket polynomial and the Jones polynomial. Jones was studying a certain kind of algebra when he realized that the defining relations for these algebras were very much like those of the braid groups. In fact, he was quickly able to use this similarity to assign to every knot diagram a Laurent polynomial — one which allows negative powers of the variable — that didn't change when two diagrams differed by a Reidemeister move. That is, it was a new invariant of knots.
The Jones polynomial came out of nowhere, from the perspective of the day’s knot theorists. And it set the whole field on its ear. From my perspective looking back, there’s a huge schism in knot theory between those who primarily study the geometry and the “classical topology” of the situation and those who primarily study the algebra, combinatorics, and the rising field of “quantum topology”. To be sure there are bridges between the two, some of which I’ll mention later. But the upshot was that the Jones polynomial showed a whole new way of looking at knots and invariants.
Immediately in its aftermath a huge number of interpretations and generalizations poured forth. One of the most influential was Louis Kauffman’s “state-sum” model: the Bracket. This is an invariant of regular isotopy instead of ambient isotopy, which basically means we throw out Reidemeister I moves. Meanwhile, I glossed over above that the Jones polynomial actually applies to oriented links, where there’s a little arrow saying “go this way around the loop”. This is a subtle distinction between the Bracket and the Jones polynomial that many authors steamroll over, but I find it important for my own reasons.
Anyhow, the Bracket also assigns a Laurent polynomial to every diagram in a way that's invariant under an appropriate collection of moves. It does this by taking each crossing and "splitting" it in two ways — turning an incoming strand to the left or the right rather than connecting straight across. For a link diagram with $n$ crossings there are now $2^n$ "states" of the diagram. Now we assign each state a "weight" and just add up the weights for all the different states. Thus: "state-sum". If we choose the rule for weighting states correctly we can make the resulting polynomial into an invariant.
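To make the state-sum concrete, here is a minimal Python sketch — my own illustration, not something from the talks — for the simplest family of diagrams: closures of the two-strand braid $\sigma_1^n$ ($n=2$ is the Hopf link, $n=3$ the trefoil). Each crossing is smoothed either vertically (the A-smoothing, under the usual sign convention) or into a cup-cap, and each of the $2^n$ states contributes $A^{a-b}(-A^2-A^{-2})^{|s|-1}$, where $a$ and $b$ count the two smoothing types and $|s|$ counts the loops of the fully smoothed diagram.

```python
from itertools import product

DELTA = {2: -1, -2: -1}  # delta = -A^2 - A^(-2), stored as {exponent: coeff}

def pmul(p, q):
    """Multiply two Laurent polynomials in A stored as {exponent: coeff}."""
    r = {}
    for e1, c1 in p.items():
        for e2, c2 in q.items():
            r[e1 + e2] = r.get(e1 + e2, 0) + c1 * c2
    return {e: c for e, c in r.items() if c != 0}

def padd(p, q):
    r = dict(p)
    for e, c in q.items():
        r[e] = r.get(e, 0) + c
    return {e: c for e, c in r.items() if c != 0}

def bracket_torus(n):
    """Kauffman bracket of the closure of the 2-strand braid sigma_1^n,
    summed over all 2^n smoothing states.  The strand segments between
    levels i and i+1 (mod n) are labelled L_i = 2i and R_i = 2i+1;
    loops are counted with a union-find over the segments."""
    total = {}
    for state in product("AB", repeat=n):
        parent = list(range(2 * n))
        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]
                x = parent[x]
            return x
        def union(x, y):
            parent[find(x)] = find(y)
        for i, s in enumerate(state):
            bl, br = 2 * i, 2 * i + 1                          # below crossing i
            tl, tr = 2 * ((i + 1) % n), 2 * ((i + 1) % n) + 1  # above crossing i
            if s == "A":     # identity smoothing: strands pass straight up
                union(bl, tl); union(br, tr)
            else:            # cup-cap smoothing (Temperley-Lieb generator e_1)
                union(bl, br); union(tl, tr)
        loops = len({find(x) for x in range(2 * n)})
        a = state.count("A")
        weight = {a - (n - a): 1}          # A^(#A - #B)
        for _ in range(loops - 1):         # times delta^(loops - 1)
            weight = pmul(weight, DELTA)
        total = padd(total, weight)
    return total
```

Running this gives $-A^4-A^{-4}$ for the Hopf link and $-A^5-A^{-3}+A^{-7}$ for the trefoil, matching the standard tables (the mirror diagrams swap $A \leftrightarrow A^{-1}$); $n=1$ returns $-A^3$, precisely the Reidemeister I factor that makes the Bracket only a regular-isotopy invariant.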
So now we flash forward from the mid-’80s to the late ’90s. Mikhail Khovanov, as a student of Igor Frenkel at Yale, becomes interested in the nascent field of categorification. Particularly, he was interested in categorifying the Lie algebra $\mathfrak{sl}_2$. That is, he needed to find a category and functors from that category to itself that satisfied certain relations analogous to the defining relations of the Lie algebra structure. He did this using some techniques from a field called “homological algebra”. I’ll eventually talk about it, but for now ask Michi.
But as it happens, this Lie algebra has a very nice category of representations. It’s a monoidal category with duals. In fact, every object is its own dual, and we can (morally, at least) build them all from a single fundamental representation. That means that it’s deeply related to the category of so-called “unoriented Temperley-Lieb” diagrams, which is (roughly) to categories with duals as the category of braids is to braided monoidal categories.
A Temperley-Lieb diagram is just a bunch of loops and arcs on the plane. The arcs connect marked points at the top and bottom of the diagram (like braid strands) while the loops just sorta float there, and none of the strands cross each other at all. So if there are no arcs, there’s just a bunch of separate loops. And we care about this because the states of a link in the definition of the Bracket are just bunches of separate loops too!
So we can take each state and read it in terms of this homological categorification of $\mathfrak{sl}_2$. And we can read a combination of states — a state sum — in such homological terms as well. So the defining relations of the bracket become “chain homotopies” — natural isomorphisms — in the homological context of the $\mathfrak{sl}_2$ categorification. Thus we have a homological categorification of the Bracket model of the Jones polynomial.
And again, it just came out of nowhere and has immediately revolutionized the field. Homology theories are hot right now. This high-level approach has been broken down by Khovanov and Rozansky into a combinatorial formulation, which knot theory groups like those at George Washington University and the University of Iowa have latched onto. The field of “Heegaard Floer homology” has been nudged closer and closer to the combinatorial Khovanov framework from its origins in analytic problems. Other knot invariants are lining up to be categorified along the same lines. And all the while the incredibly rich structure of Khovanov homology itself is being spruced up and neatened, leading to a series of clear examples to act as guideposts for those probing higher categorical structures in general.
And that’s what we just spent the last week talking about in Faro.
Posted by John Armstrong | Knot theory
## 4 Comments »
1. [...] after last week’s shake-ups I’m ready to get back into the swing of things. I mentioned yesterday something called the “Temperley-Lieb Category”, and it just so happens we’re [...]
Pingback by | July 12, 2007 | Reply
2. [...] beloved Dr. Mathochist just gave me the task of taking care of any readers prematurely interested in it while telling us all just a tad [...]
Pingback by | July 12, 2007 | Reply
3. I love reading every word of this sort of stuff. Yet no quantity of new terms expanding the lexicon of mathematical obscurantism can avoid the fact that there’s nothing to observe, react to, or act upon except the difference between I and You. I am always No.1, the authoritative asserter, and You are always No. 2, since You have no numerical reality until I recognize I and You as somehow functionally and indivisibly connected. Existential knots and entanglements ensue as ill understood social constructs based on worse yet theological constructs. Stick with your differentiated gender function if you honestly want to know what number 2 does, and I will stick with my No.1 position and function, whereby I learned where you came from and what you are for.
Comment by Rebecca Boone | August 24, 2007 | Reply
4. Thanks, Rebecca.. I think..
Basically, after the first sentence you veer off into some strange land of undefined terms and overcapitalization, which makes me think you’re reading this as if it’s some sort of post-structuralist weblog. Sorry to disappoint, but we do math here.
Comment by | August 25, 2007 | Reply
http://bayesrules.wordpress.com/
# BayesRules
Just another WordPress.com weblog
## STAT 330 November 29, 2012
December 1, 2012
We finished off the discussion of model selection/averaging.
We then went into missing data models. In some cases the fact that data are not observed can have no effect on the inference whatsoever. In other cases, the effects can be profound. Sometimes (as in Example 2) the effect only appears through a prior that connects the probability of selection to the parameter being estimated. Censoring (where you know that an observation has been made, but the answer is only that the observation is not within the range of the measurement process) and truncation (where you are only given the successful measurements and do not know how many of the measurements were unsuccessful, as in a survey of voters where the voters who hung up the phone are not reported) are examples of situations where missing data affects the likelihood function and hence the inference.
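For instance (a toy model of my own, not one of the lecture's examples): with exponential lifetimes right-censored at a fixed time $c$, an observed failure at $t$ contributes the density $\lambda e^{-\lambda t}$ to the likelihood, while a censored case contributes only the survival probability $e^{-\lambda c}$, and the MLE works out to (number of observed events) / (total exposure time).

```python
import math

def censored_exp_loglik(lam, times, censored, c):
    """Log-likelihood for exponential(rate=lam) lifetimes right-censored
    at time c: a censored case contributes log S(c) = -lam*c, an observed
    failure contributes log f(t) = log(lam) - lam*t."""
    ll = 0.0
    for t, cens in zip(times, censored):
        ll += -lam * c if cens else math.log(lam) - lam * t
    return ll

def censored_exp_mle(times, censored, c):
    """Closed-form MLE: observed events divided by total exposure time
    (failures contribute their failure time, censored cases contribute c)."""
    events = sum(1 for cens in censored if not cens)
    exposure = sum(c if cens else t for t, cens in zip(times, censored))
    return events / exposure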
We discussed a general approach to problems of this sort, where the likelihood is explicitly written to include both the observed and the missing data. The missing data are considered as latent variables, which thus require a prior and should be marginalized out. We verified that (in Example 4) this approach gives the same results as we got earlier.
In a sampling scheme, we sample on all parameters, including the latent variables that represent the missing data. We started on discussing a sampling scheme for Example 4, and will finish it next time.
## STAT 330 November 27, 2012
November 26, 2012
The next chart set is on Model Selection and Model Averaging.
The chart set about missing data can be found here.
We started by discussing how our prior belief in a hypothesis can affect our approach to the data: if a subject guesses three cards right, the hypothesis that the subject is a child who looked at the faces of the cards, or a magician who looked only at the backs of the cards, is a priori more believable than that the subject is a real psychic. We looked at the Soal-Bateman example, where it was eventually shown that the amazing results were due to cheating. I discussed a general framework whereby very unlikely data can raise a more plausible hypothesis (cheating) over a less plausible one (psychic powers are real) when we take care to include all possible hypotheses in the analysis. A student asked, how can we do this? In fact we can never consider all possible hypotheses, there are just too many of them. The best we can do is to consider all hypotheses that might be plausible enough to have their posterior probabilities raised to significance. I finished this chart set by pointing out that just doing hypothesis tests isn't really how decisions should be made. Really we should be using decision theory, and it turns out that essentially all admissible decision rules in classical (frequentist) decision theory are Bayes rules. Decision theory looks not only at the probabilities, but also at the costs of the decisions we make.
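A worked toy version of that framework (all numbers invented for illustration): three hypotheses for a subject who guesses three cards in a row from a four-card deck, where, by assumption, both a cheat and a genuine psychic would succeed every time.

```python
# Priors: cheating is rare, genuine psychic powers vastly more so.
priors = {"chance": 0.99, "cheat": 0.01, "psychic": 1e-6}
# Likelihood of three straight correct guesses under each hypothesis.
likelihood = {"chance": 0.25 ** 3, "cheat": 1.0, "psychic": 1.0}

evidence = sum(priors[h] * likelihood[h] for h in priors)
posterior = {h: priors[h] * likelihood[h] / evidence for h in priors}
```

The surprising data shift much of the probability onto cheating (posterior around 0.39), while the psychic hypothesis — though it explains the data equally well — stays negligible: the more plausible rival hypothesis soaks up the surprise.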
We then turned to model selection and model averaging. Model selection is an obvious extension of the hypothesis testing situation, except that instead of having just two models we have multiple models. The trickiest part of model selection (and averaging) is assigning priors on the parameters of the different models since if they are too vague, they will artificially favor models with fewer parameters. Unlike likelihood ratio tests, the Bayesian model selection idea can be used on non-nested models.
I also discussed approximate methods for model selection, the AIC (Akaike Information Criterion) and Schwarz's BIC (Bayesian Information Criterion, which is not particularly Bayesian since it ignores the priors on the model parameters). BIC penalizes larger models more than AIC does. A closer look at BIC shows that it reduces to the usual asymptotic form for hypothesis testing of a simple vs. a complex hypothesis.
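To make the AIC/BIC comparison concrete, here is a minimal Python sketch (a toy illustration of my own, not an example from the lecture): comparing a fair-coin model against a one-parameter biased-coin model on hypothetical data of 60 heads in 100 flips. Note that the two criteria can disagree, with BIC penalizing the extra parameter more.

```python
import math

# Toy model comparison (hypothetical data): 60 heads in 100 flips.
# M0: fair coin (no free parameters); M1: unknown bias p (one free parameter).
n, heads = 100, 60

loglik0 = n * math.log(0.5)                      # max log-likelihood under M0
p_hat = heads / n                                # MLE of the bias under M1
loglik1 = heads * math.log(p_hat) + (n - heads) * math.log(1 - p_hat)

def aic(loglik, k):
    # AIC = 2k - 2 ln L (smaller is better)
    return 2 * k - 2 * loglik

def bic(loglik, k, n):
    # BIC = k ln n - 2 ln L (smaller is better; penalty grows with n)
    return k * math.log(n) - 2 * loglik

print("AIC:", aic(loglik0, 0), aic(loglik1, 1))      # AIC prefers the biased model
print("BIC:", bic(loglik0, 0, n), bic(loglik1, 1, n))  # BIC prefers the fair coin
```

With these numbers AIC selects the larger model while BIC selects the fair coin, illustrating BIC's heavier penalty.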
I demonstrated the Zellner g-prior idea for linear models.
Model averaging is related but different. In model selection we average (marginalize) with respect to all the parameters and look at the posterior probabilities of the models. In model averaging there is a parameter that is common to all models that we are interested in, and we marginalize with respect to all the other model parameters and with respect to the models, so that all models contribute to the estimate of the parameter that we are interested in, in proportion to the posterior probabilities of the models themselves.
Finally, we looked at normal linear models. We saw that the prior on $\tau_\theta$ cannot be chosen to be a Jeffreys prior, because the resulting integral blows up at infinity.
We finished by looking at Gull’s approximation and a polynomial example that he presented. I noted that in his Figure 8, the y-axis is the log of the posterior probability, so it actually decreases much more rapidly after the peak is reached at N=10 than the figure seems to indicate.
## STAT 330 (Vacation special)
November 18, 2012
I was pointed to this discussion on Andrew Gelman’s blog. “The sample size is huge, so a p-value of 0.007 is not that impressive”. It reinforces the lesson of the past two lectures. The comments contain links to this paper and this paper that I mentioned in class.
## STAT 330 November 15, 2012
November 16, 2012
I started out discussing the cartoon that I linked to a few days ago. I pointed out that both the sensitivity and specificity of the test in the cartoon were very high (35/36, or about 97.2%). Nonetheless, the cartoon test is rather silly, and it reinforces the idea that frequentist tests only talk about what happens if you repeat them many times. The Bayesian probably knows (background information) that it is physically impossible for the Sun to go nova (it will die in an entirely different fashion, its mass is too small), and even if it were possible, the bet is an entirely safe one since if the Sun had gone nova, no one would be around to collect the bet!
I then showed a cartoon that Larry Wasserman put on his blog. Larry’s point here is that (under most circumstances) Bayesian credible intervals don’t say anything about frequentist coverage. There are no coverage guarantees. It is true that under some special circumstances, such as the Berger-Mossman example that you calculated for an assignment, it is possible for a Bayesian credible interval to have good frequentist coverage; but in this example, it was by design, and happened because Berger and Mossman used a standard objective prior. These objective priors probably will give decent coverage in most situations (but it should be checked if coverage is important to you), just as they usually give similar results in parameter-estimation problems (e.g., regression). But in general, informative priors will not necessarily have these properties.
We returned to the perihelion motion of Mercury. The bottom line here is that the “fudge factor” theory F spreads its bets over a large area of outcomes. It’s got to match the actual outcome, but it wastes prior probability on outcomes that do not pan out. On the other hand, Einstein’s theory E makes a very sharp and risky prediction. And, since the data lie close to that prediction, it wins big time, just as when a gambler bets all his chips on one outcome and that outcome is the one that happens.
I noted Berger’s “objective” prior that is symmetric about “no effect” and decreases monotonically away from “no effect”. It doesn’t support Einstein quite as much, but it provides an objective lower bound on the evidence for E.
Even if you put all the prior probability under F on the alternative hypothesis, you get probabilities that are significantly higher than the corresponding p-values. So p-values overestimate the evidence against the null.
Another danger is that the likelihood ratio (Bayes factor) in favor of the simpler hypothesis will increase proportionally to $\sqrt{n}$, so the larger the data set (for a given p-value), the more strongly the null hypothesis will be supported. Jack Good suggested a way to convert p-values to Bayes factors and posterior probabilities that, as we calculated, does a pretty good job (but it is approximate).
This led to a discussion of the Jeffreys-Lindley “paradox”, whereby you can have data that simultaneously give strong evidence in favor of the null hypothesis and a very small p-value that would reject it. I gave a real-life example that I wrote a paper on, from some parapsychology research.
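Both the $\sqrt{n}$ behavior and the Jeffreys-Lindley effect are easy to reproduce numerically. The sketch below (an illustrative normal point-null setup with a N(0,1) prior under the alternative — not the parapsychology data) holds the p-value fixed at about 0.05 and lets n grow; the Bayes factor in favor of the null grows roughly like $\sqrt{n}$.

```python
import math

# Point null H0: theta = 0 vs H1: theta ~ N(0, tau^2); data x_i ~ N(theta, 1).
# Fix the observed z-score at 1.96 (two-sided p ~ 0.05) and let n grow.
def bf01(n, z=1.96, tau=1.0):
    # Bayes factor for H0: ratio of marginal densities of xbar = z/sqrt(n).
    # Under H0, xbar ~ N(0, 1/n); under H1, xbar ~ N(0, 1/n + tau^2).
    t2 = tau * tau
    return math.sqrt(1 + n * t2) * math.exp(-0.5 * z * z * n * t2 / (1 + n * t2))

for n in (10, 100, 10_000):
    print(n, round(bf01(n), 2))
# At the same "significant" p-value, small n mildly favors H1 while
# large n strongly favors H0 -- the Jeffreys-Lindley "paradox".
```

For n = 10 the Bayes factor is below 1 (mild evidence against the null), while for n = 10,000 it exceeds 10 (strong evidence for the null), all at the same p-value.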
Finally I discussed sampling to a foregone conclusion and the Stopping Rule Principle. If you are doing frequentist analysis, you are not supposed to watch the data and stop when the data give you a small enough p-value. Frequentist theory disallows this (but people do it a lot, and the parapsychologists did it in a huge way). The good news is that Bayesian analysis does not have this defect. This means that ethical problems using frequentist principles can be avoided by using Bayesian methods. The notes discuss this.
November 14, 2012
We spent the first part of the period looking at this code example. In this example we are studying the evidence that a coin is fair or not, given that we observed 60 heads and 40 tails. The code illustrates with this simple example the idea of Reversible Jump MCMC, where we propose simultaneously a new model and a value of the parameter (which is 0.5 if the coin is fair, but uniformly distributed on (0,1) if the coin is not fair, in this – highly unrealistic – example). The code allows you to use one of three proposals for the parameter in the unfair case – the exact beta distribution based on 60 heads and 40 tails, a normal approximation, and a flat distribution. I pointed out that the first two ought to do pretty well, but that the flat distribution will often propose the parameter out in the tails of the distribution where the posterior probability is low, in which case the proposal is likely to be rejected. We ran the program and found that these predictions were borne out.
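For reference, the exact answer that such a reversible-jump sampler should converge to can be computed in closed form for this toy problem. The sketch below assumes a point mass at p = 0.5 for "fair", a uniform prior on p for "unfair", and equal prior odds (assumptions matching the description above, not the actual course code):

```python
import math

heads, tails = 60, 40
n = heads + tails

# Marginal likelihood of the counts under "fair": Binomial(n, 1/2).
m_fair = math.comb(n, heads) * 0.5 ** n

# Under "unfair" with a uniform prior on p, the counts are uniform on 0..n:
# integral of C(n,h) p^h (1-p)^(n-h) dp over (0,1) equals 1/(n+1).
m_unfair = 1 / (n + 1)

bf = m_fair / m_unfair        # Bayes factor, fair : unfair
post_fair = bf / (1 + bf)     # posterior probability of "fair" at even prior odds
print(round(bf, 3), round(post_fair, 3))
```

The Bayes factor comes out very close to 1: with these priors, 60 heads in 100 flips barely discriminates between the two models.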
We spent the rest of the class looking at various aspects of the Ockham's razor idea: that one should choose models that are as simple as possible but which still fit the data adequately. Too complex a model is likely to follow "noise" in the data, and too simple a model will not be adequate to predict the data. We looked at it from the point of view of the idea that a simple hypothesis predicts fewer outcomes than a complex one can. We saw how this worked in the case of an alleged planet that had been announced around a pulsar; in proving plagiarism by encoding errors or other unique information into maps, mathematical tables, etc.; and in how it might be used to detect cheating on multiple choice tests. We also saw how the hypothesis of copying DNA from ancestors provides evidence for evolution: e.g., pseudogenes that used to code for vitamin C production have the same defects in humans and chimpanzees, indicating descent from a common ancestor, and the redundancy of the genetic code provides independent evidence as well (there are 64 combinations of three base pairs in the genetic code, but only 20 amino acids are coded for).
I ended by describing Mercury’s perihelion motion. We will finish this example next time.
## STAT 330 November 8, 2012
November 9, 2012
Here is the link to the VPN client mentioned in class today, that allows you to connect to websites as if you were on campus.
Here are the links to the papers I mentioned this morning. First the paper by Berger and Delampady. Next, the paper by Berger and Sellke. And finally, the link to Jim Berger’s website with the Java Applet that allows you to try the thought experiment I discussed in class. The URLs for the first two have changed as jstor.org has changed their method of assigning stable URLs.
Finally, here is a link to the paper by Dellaportas, Forster and Ntzoufras, on reversible jump MCMC.
It appears that Romney has conceded Florida. This comment to today’s Nate Silver blog is very cool (difference between mean and mode).
Here’s a cartoon about frequentism vs. Bayesianism. Don’t take it too seriously. If you mouse over the picture, a hidden message appears.
I pointed out some shortcomings of classical hypothesis tests and p-values. I then outlined how Bayesian tests might be conducted, first in the context of two simple hypotheses and then in the context of one simple and one complex hypothesis. In the latter case there is an additional parameter $\theta$ which requires a prior. Then to consider just the two hypotheses we have to marginalize the posterior probability on the complex hypothesis with respect to $\theta$.
I pointed out that the results will depend sensitively on the prior, which means that Bayesian hypothesis tests must be conducted with great care. There are some results that are more robust with respect to the prior, and we will discuss them in subsequent classes. I showed an example (biased coin) and demonstrated that the results of a Bayesian hypothesis test can be very different from frequentist ones.
I briefly outlined how reversible jump MCMC can be used to evaluate the posterior probabilities of hypotheses. I’ll show you a program on Tuesday to make this more concrete. I mentioned that the same ideas can be used to compare multiple models of various number of parameters.
I discussed some other problems with p-values, in particular that they overstate the evidence against the null. I pointed to Jim Berger’s website for the Java applet (link above).
I finished with the beginning of a discussion of philosophical issues that relate to Bayesian epistemology.
## STAT 330 November 7, 2012
November 7, 2012
Nate Silver nailed it. See here. Hooray for soberly analyzed statistics.
## STAT 330 November 6, 2012
November 6, 2012
In class I mentioned an article in Scientific American by Efron and Morris on the Stein problem. Here it is! There's also a Wikipedia article on the Stein problem here.
At the start of class I mentioned an NPR story I heard indicating that more reliable answers to polling questions about elections would be obtained not by asking whom a person is going to vote for, but whom that person thinks will win the election. Here is the story, and here is a link to a related article by the respected pollster Andrew Kohut, of Pew Research.
Here are the notes on Bayesian hypothesis testing, which we started on today.
And here is the next (and final) assignment, due after the holiday break.
I continued the discussion of the Stein problem; we saw how an estimator that dominates the obvious estimator shrinks the estimated batting averages towards the common mean (this was the Efron-Morris estimator). This is very typical of shrinkage estimators, which are a common feature of hierarchical Bayes models. I mentioned that the Efron-Morris estimator (and the James-Stein estimator) are themselves inadmissible, although they are better than the naive estimator. I noted that every admissible decision rule (with exceptions concerning finiteness) is a Bayes rule. This is interesting because admissibility is a frequentist idea, and this observation unites frequentist and Bayesian ideas in the area of decision theory.
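A small simulation shows the shrinkage advantage directly. The sketch below uses an illustrative setup of my own choosing (ten normal means, unit variance); the positive-part James-Stein estimator achieves smaller total squared error than the naive componentwise estimator:

```python
import random

random.seed(1)
k, sigma = 10, 1.0
theta = [0.5 * i for i in range(k)]   # arbitrary true means (an assumption)

def james_stein(x):
    # Positive-part James-Stein, shrinking toward 0 for unit-variance data:
    # theta_hat = max(0, 1 - (k-2)/||x||^2) * x
    s = sum(v * v for v in x)
    c = max(0.0, 1 - (k - 2) / s)
    return [c * v for v in x]

trials = 5000
risk_mle = risk_js = 0.0
for _ in range(trials):
    x = [random.gauss(t, sigma) for t in theta]
    risk_mle += sum((xi - t) ** 2 for xi, t in zip(x, theta))
    risk_js += sum((ji - t) ** 2 for ji, t in zip(james_stein(x), theta))

print(round(risk_js / trials, 3), "<", round(risk_mle / trials, 3))
```

The naive estimator's average total squared error is close to k = 10, while the James-Stein estimator's is strictly smaller, for every choice of the true means.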
I then discussed several examples; one was a normal model analogous to the binomial model that we discussed the other day for the baseball batting averages. I also discussed an oil well logging problem that one of my Texas students suggested some years ago. I went through writing down the posterior probability, but I did not discuss the sampling strategy (which is in the notes). The important points are two: First, enforcing the condition that $\nu_1<\nu_2$ by including a factor of $I(\nu_1<\nu_2)$ in the likelihood; and second, using a hierarchical independence Jeffreys prior on $\tau_1$ and $\tau_2$.
Between those two examples I discussed an example that shows how a bunch of independent MCMC calculations can be combined, after the fact, into a single hierarchical model, by using Peter Müller’s “slick trick” of using the samples from the individual calculations to provide the proposals for the hierarchical model.
## STAT 330 November 1, 2012
November 1, 2012
Today we first looked at several examples of Jeffreys priors: first, known variance but unknown mean; second, known mean but unknown variance. The first was flat, the second was the usual $1/\sigma$ prior. We then looked at unknown mean and unknown variance and (with apologies) finally ground through to get $1/\sigma^2$. Jeffreys didn't like this (it is what you get for the left invariant Haar prior, which Jim Berger thinks we should not be using). Instead he favored the "independence Jeffreys prior", which is flat $\times\ 1/\sigma$.
I pointed out that none of these is perfect. There may be no underlying group structure, so those priors may not be useful for some problems. The maximum entropy priors are not invariant under coordinate transformations, meaning that if you work a problem out in one set of coordinates, you may get a result that is incompatible with the working out of the problem in a different set of coordinates. And, since the Jeffreys prior is constructed from the likelihood as a sampling distribution (but the data are integrated out), some think that it is incompatible with the Likelihood Principle.
There are other ideas for constructing priors of this sort.
I again noted that if you have actual prior information, you should use it, and illustrated it with an example from astronomy.
We then turned to hierarchical Bayes models. Here the idea is that we may introduce new parameters that are not in the likelihood via a prior that is conditioned on the new parameters. We looked at an example involving baseball batting averages (trying to predict the end-of-season batting averages based on the results of the first 45 at-bats). I pointed out that because of sampling error, the averages at the extremes might be more extreme than they really should be, so that the player with the best batting average after 45 tries might just have been lucky, whereas the one with the worst batting average might have just been unlucky. There are differences in the ability of players, to be sure, but the first few at-bats are also affected by sampling error. So we modeled the individual players as a binomial with a probability that is unique to the player, but assumed that the individual probabilities are drawn from a distribution that represents the varying abilities of all players (modeled as a beta distribution). I demonstrated a program that calculates this. We'll take this up again on Tuesday.
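A lightweight stand-in for the full hierarchical MCMC is empirical Bayes: fit the beta distribution by the method of moments and shrink each player toward the group mean. The sketch below uses made-up hit counts (not the actual Efron-Morris data):

```python
# Empirical-Bayes shrinkage for the batting-average setup (hypothetical data):
# y_i ~ Binomial(45, p_i), with p_i ~ Beta(a, b) fitted by method of moments.
n_ab = 45
hits = [18, 17, 16, 15, 14, 14, 13, 13, 12, 12, 11, 11, 10, 10, 9, 8, 7, 7]
rates = [h / n_ab for h in hits]
m = sum(rates) / len(rates)
v = sum((r - m) ** 2 for r in rates) / (len(rates) - 1)

# The observed variance is binomial noise plus between-player variation:
# v = m(1-m)/n_ab + Var(p); moment-match a Beta(a, b) to Var(p).
var_p = max(v - m * (1 - m) / n_ab, 1e-6)
s = m * (1 - m) / var_p - 1          # implied prior "sample size" a + b
a, b = m * s, (1 - m) * s

# Posterior mean for each player shrinks the raw rate toward the group mean.
shrunk = [(a + h) / (a + b + n_ab) for h in hits]
print(round(rates[0], 3), "->", round(shrunk[0], 3))    # best raw average pulled down
print(round(rates[-1], 3), "->", round(shrunk[-1], 3))  # worst raw average pulled up
```

The extremes are pulled toward the common mean, exactly the behavior described above: the apparent best hitter is partly lucky, the apparent worst partly unlucky.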
I finished with a short discussion of admissibility in frequentist decision theory.
## STAT 330 October 30, 2012
October 28, 2012
I am hoping and expecting that Sandy will not prevent me from making class on Tuesday, but it all depends on how bad the storm will be. If I cannot make it I will tweet at bayesrulez, and send email (assuming that we have power!) and if that doesn’t work I will try to contact the department and get a message posted on the blackboard.
UPDATE: I am in Burlington and there will be class today.
Here is the next set of charts, on Hierarchical Bayes models.
Nate Silver, whom I have mentioned before as a Bayesian who tries to predict election outcomes (and more…) has a new book.
Andrew Gelman has an op-ed in the NY Times today on how to interpret the probabilities that Nate (and others) are calculating. And here is another similar discussion from today’s Salon.com.
Today we talked about Maximum Entropy priors; I explained how mathematical entropy can be used to quantify the amount of information that we stand to gain by learning which of a number of states happens to be the case, when all we have is a probability distribution on those states. We would have maximum uncertainty, that is to say, minimum amount of prior information, by maximizing the entropy of a distribution, subject to constraints that reflect what we do know. That maximization is accomplished using Lagrange multipliers. In the case of a continuous distribution we must also use the calculus of variations. I gave several examples of how to do this.
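As a concrete instance of the Lagrange-multiplier machinery, here is a numerical solution of the classic Brandeis dice problem (an illustrative example, not one worked in class): maximize entropy over the faces 1..6 subject to a mean of 4.5. The maxent solution has the exponential (Gibbs) form, and the multiplier can be found by simple bisection.

```python
import math

# Maximum-entropy distribution on die faces 1..6 with mean constrained to 4.5.
# The maxent solution is p_k proportional to exp(lam * k); solve for lam so
# that the constraint E[k] = 4.5 holds.
faces = range(1, 7)

def mean_for(lam):
    w = [math.exp(lam * k) for k in faces]
    z = sum(w)
    return sum(k * wk for k, wk in zip(faces, w)) / z

lo, hi = 0.0, 5.0            # a mean above 3.5 requires lam > 0
for _ in range(100):
    mid = (lo + hi) / 2
    if mean_for(mid) < 4.5:
        lo = mid
    else:
        hi = mid
lam = (lo + hi) / 2

z = sum(math.exp(lam * k) for k in faces)
p = [math.exp(lam * k) / z for k in faces]
print([round(pk, 4) for pk in p])   # monotonically increasing in the face value
```

The resulting probabilities increase monotonically with the face value, tilting the die just enough to achieve the constrained mean while staying as uncertain as possible otherwise.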
We then took up Jeffreys priors, introduced by the statistician Harold Jeffreys. The Jeffreys prior is the square root of the determinant of the Fisher information of the likelihood function. It has the advantage that if you transform the parameters of a problem, the Jeffreys prior in the new coordinates is the prior that will give the same results as the Jeffreys prior in the original parameter set would give. So you can decide which parameters are most convenient, and then just calculate and use the Jeffreys prior in those coordinates (if you have decided that the Jeffreys prior is the right one for the problem).
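A quick sanity check for the Bernoulli case (a sketch of my own, not from the notes): computing the Fisher information directly from its definition recovers $I(p) = 1/(p(1-p))$, so the Jeffreys prior is proportional to $p^{-1/2}(1-p)^{-1/2}$, i.e. a Beta(1/2, 1/2).

```python
import math

# Fisher information for Bernoulli(p), from the definition
# I(p) = E[(d/dp log f(x|p))^2], summing over x in {0, 1}.
def fisher_info(p):
    score0 = -1 / (1 - p)     # d/dp log(1 - p), the score at x = 0
    score1 = 1 / p            # d/dp log(p),     the score at x = 1
    return (1 - p) * score0 ** 2 + p * score1 ** 2

for p in (0.1, 0.3, 0.5, 0.9):
    # Analytically I(p) = 1/(p(1-p)); the Jeffreys prior is sqrt(I(p)).
    assert abs(fisher_info(p) - 1 / (p * (1 - p))) < 1e-9
    print(p, math.sqrt(fisher_info(p)))
```

The prior blows up (integrably) at p = 0 and p = 1, reflecting that extreme values of p are easy to distinguish from the data.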
Next time I will give you some examples; then we will proceed with the next chart set.
http://mathforum.org/mathimages/index.php?title=The_Golden_Ratio&diff=31337&oldid=31282
# The Golden Ratio
### From Math Images
## Revision as of 15:56, 4 June 2012
The Golden Ratio
Fields: Algebra and Geometry
Image Created By: Azhao1
Website: The Math Forum
The golden number, often denoted by the lowercase Greek letter "phi" (φ), is numerically equal to $\frac{1 + \sqrt{5}}{2} = 1.61803399\dots = \varphi$.
The term golden ratio refers to the ratio $\varphi$ : 1. This page explores real world applications for the golden ratio, common misconceptions about the golden ratio, and multiple derivations of the golden number.
# Basic Description
The golden ratio, approximately 1.618, is called golden because many geometric figures involving this ratio are often said to possess special beauty. Be that true or not, the ratio has many beautiful and surprising mathematical properties. The Greeks were aware of the golden ratio, but did not consider it particularly significant with respect to aesthetics. It was not called the "divine" proportion until the 15th century, and was not called the "golden" ratio until the 18th century. Since then, it has been claimed that the golden ratio is the most aesthetically pleasing ratio, and claimed that this ratio has appeared in architecture and art throughout history. Among the most common such claims are that the Parthenon and Leonardo Da Vinci's Mona Lisa use the golden ratio. Even more esoteric claims propose that the golden ratio can be found in the human facial structure, the behavior of the stock market, and the Great Pyramids. However, such claims have been criticized in scholarly journals (see references at the end of the page) as wishful thinking or sloppy mathematical analysis. Additionally, there is no solid evidence that supports the claim that the golden rectangle is the most aesthetically pleasing rectangle.
### Misconceptions about the Golden Ratio
In his paper, Misconceptions about the Golden Ratio, George Markowsky investigates many claims about the golden ratio appearing in man-made objects and in nature. Specifically, he claims that the golden ratio does not appear in the Parthenon or the Great Pyramids, two of the more common beliefs. He also disputes the belief that the human body exhibits the golden ratio. To read more, click here!
## A Geometric Representation
### The Golden Ratio in a Line Segment
The golden ratio can be defined using a line segment divided into two sections, of lengths a and b, respectively. If a and b are appropriately chosen, the ratio of a to b is the same as the ratio of a + b to a and both ratios are equal to $\varphi$. The value of this ratio turns out not to depend on the particular values of a and b, as long as they satisfy the proportion. The line segment above exhibits the golden proportions.
The line segments below are all examples of the golden ratio.
In each case, $\frac{{\color{Red}\mathrm{red}}+\color{Blue}\mathrm{blue}}{{\color{Blue}\mathrm{blue}} }= \frac{{\color{Blue}\mathrm{blue}} }{{\color{Red}\mathrm{red}} }= \varphi .$
The golden rectangle is made up of line segments exhibiting the golden proportion. Remarkably, when a square is cut off of the golden rectangle, the remaining rectangle also exhibits the golden proportions. This continuing pattern is visible in the golden rectangle above.
### Triangles
The golden ratio $\varphi$ is used to construct the golden triangle, an isosceles triangle that has legs of length $\varphi$ and a base of length 1. It is shown above and to the left. Similarly, the golden gnomon has base $\varphi$ and legs of length 1. It is shown above and to the right. These triangles can be used to form regular pentagons (pictured above), pentagrams, and pentacles.
The pentacle below, generated by the golden triangle and the golden gnomon, has many side lengths proportioned in the golden ratio.
$\frac{{\color{Blue}\mathrm{blue}} }{{\color{Red}\mathrm{red}} } = \frac{{\color{Red}\mathrm{red}} }{{\color{Green}\mathrm{green}} } = \frac{{\color{Green}\mathrm{green}} }{{\color{Magenta}\mathrm{pink}} } = \varphi .$
These triangles can be used to form fractals and are one of the only ways to tile a plane using pentagonal symmetry. Two fractal examples are shown below.
# A More Mathematical Explanation
Note: understanding of this explanation requires: Algebra, Geometry
# Mathematical Representations of the Golden Ratio
## An Algebraic Representation
We may algebraically solve for the ratio $\varphi$ by observing that, by definition, the ratio satisfies the following property:
$\frac{a}{b} = \frac{a+b}{a} = \varphi$
Let $r$ denote the ratio :
$r=\frac{a}{b}=\frac{a+b}{a}$.
So
$r=\frac{a+b}{a}=1+\frac{b}{a} =1+\cfrac{1}{a/b}=1+\frac{1}{r}$.
$r=1+\frac{1}{r}$
Multiplying both sides by $r$, we get
${r}^2=r+1$
which can be written as:
$r^2 - r - 1 = 0$.
Applying the quadratic formula (which produces the solutions $x = \frac{-b \pm \sqrt{b^2-4ac}}{2a}$ for equations of the form $ax^2+bx+c=0$), we get $r = \frac{1 \pm \sqrt{5}}{2}$.
Because the ratio has to be a positive value,
$r=\frac{1 + \sqrt{5}}{2} \approx 1.61803399 \dots =\varphi$.
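As a quick numeric sanity check, a short Python sketch (illustrative only, not part of the original page) confirms that this value satisfies $r^2 = r + 1$:

```python
import math

# The positive root of r^2 - r - 1 = 0 from the quadratic formula.
phi = (1 + math.sqrt(5)) / 2

print(phi)                              # 1.618033988749895
print(math.isclose(phi ** 2, phi + 1))  # True: phi satisfies r^2 = r + 1
```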
The golden ratio can also be written as what is called a continued fraction by using recursion.
We have already solved for $\varphi$ using the following equation:
${\varphi}^2-{\varphi}-1=0$.
We can add one to both sides of the equation to get
${\varphi}^2-{\varphi}=1$.
Factoring this gives
$\varphi(\varphi-1)=1$.
Dividing by $\varphi$ gives us
$\varphi -1= \cfrac{1}{\varphi }$.
Solving for $\varphi$ gives
$\varphi =1+ \cfrac{1}{\varphi }$.
Now use recursion and substitute in the entire right side of the equation for $\varphi$ in the bottom of the fraction.
$\varphi = 1 + \cfrac{1}{1 + \cfrac{1}{\varphi } }$
Substituting in again,
$\varphi = 1 + \cfrac{1}{1 + \cfrac{1}{1 + \cfrac{1}{\varphi}}}$
$\varphi = 1 + \cfrac{1}{1 + \cfrac{1}{1 + \cfrac{1}{\cdots}}}$
This last infinite form is a continued fraction.
If we evaluate truncations of the continued fraction (the finite displays above), replacing the innermost $\varphi$ by 1, we produce the ratios between consecutive terms in the Fibonacci sequence.
$\varphi \approx 1 + \cfrac{1}{1} = 2$
$\varphi \approx 1 + \cfrac{1}{1+\cfrac{1}{1}} = 3/2$
$\varphi \approx 1 + \cfrac{1}{1 + \cfrac{1}{1 + \cfrac{1}{1} } } = 5/3$
$\varphi \approx 1 + \cfrac{1}{1 + \cfrac{1}{1 + \cfrac{1}{1+\cfrac{1}{1}}}} = 1 + \cfrac{1}{1 + \cfrac{1}{1 + \cfrac{1}{2}}} =1 + \cfrac{1}{1 + \cfrac{2}{3}} = 8/5$
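These truncations can be computed exactly with a short Python sketch (illustrative only, using exact fractions):

```python
from fractions import Fraction

def truncated_cf(levels):
    """Evaluate 1 + 1/(1 + 1/(...)) with `levels` nested ones,
    replacing the innermost phi by 1, as in the truncations above."""
    r = Fraction(1)
    for _ in range(levels):
        r = 1 + 1 / r
    return r

print([truncated_cf(n) for n in range(1, 5)])
# [Fraction(2, 1), Fraction(3, 2), Fraction(5, 3), Fraction(8, 5)]
```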
Thus we discover that the golden ratio is approximated in the Fibonacci sequence.
$1,1,2,3,5,8,13,21,34,55,89,144...\,$
| Ratio | | Decimal value |
|-----------------------------------------------------------------------------|-----------------------------------------------------------------------------|-----------------------------------------------------------------------------|
| $1/1$ | $=$ | $1$ |
| $2/1$ | $=$ | $2$ |
| $3/2$ | $=$ | $1.5$ |
| $5/3$ | $=$ | $1.66666667...$ |
| $8/5$ | $=$ | $1.6$ |
| $13/8$ | $=$ | $1.625$ |
| $21/13$ | $=$ | $1.61538462...$ |
| $34/21$ | $=$ | $1.61904762...$ |
| $55/34$ | $=$ | $1.61764706...$ |
| $89/55$ | $=$ | $1.61818182...$ |
$\varphi = 1.61803399...\,$
As you go farther along in the Fibonacci sequence, the ratio between consecutive terms approaches the golden ratio. Many real-world applications of the golden ratio are related to the Fibonacci sequence.
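A short Python sketch (illustrative only) shows this convergence numerically:

```python
import math

phi = (1 + math.sqrt(5)) / 2

# Walk far enough along the Fibonacci sequence and compare the ratio
# of consecutive terms with the golden ratio.
a, b = 1, 1
for _ in range(35):
    a, b = b, a + b

print(b / a)                      # ~1.618033988749895
print(abs(b / a - phi) < 1e-12)   # True
```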
In fact, we can prove this relationship using mathematical induction.
Since we have already shown that
$\varphi = 1 + \cfrac{1}{1 + \cfrac{1}{1 + \cfrac{1}{\cdots}}}$,
we only need to show that each of the terms in the continued fraction is the ratio of Fibonacci numbers as shown above.
First, let $x_1=1$, $x_2=1+\frac{1}{1}=1+\frac{1}{x_1}$, $x_3= 1+\frac{1}{1+\frac{1}{1}}=1+\frac{1}{x_2}$ and so on so that $x_n=1+\frac{1}{x_{n-1}}$.
These are just the same truncated terms as listed above. Let's also denote the terms of the Fibonacci sequence as $f_n=f_{n-1}+f_{n-2}$ where $f_1=1$,$f_2=1$, and so $f_3=1+1=2$, $f_4=1+2=3$ and so on.
We want to show that $x_n=\frac{f_{n+1}}{f_n}$ for all n.
First, we establish our base case. We see that $x_1=1=\frac{1}{1}=\frac{f_2}{f_1}$, and so the relationship holds for the base case.
Now we assume that $x_k=\frac{f_{k+1}}{f_{k}}$ for some $1 \leq k < n$ (This step is the inductive hypothesis). We will show that this implies that $x_{k+1}=\frac{f_{(k+1)+1}}{f_{k+1}}=\frac{f_{k+2}}{f_{k+1}}$.
By our definition of $x_n$, we have
$x_{k+1}=1+\frac{1}{x_k}$.
By our inductive hypothesis, this is equivalent to
$x_{k+1}=1+\frac{1}{\frac{f_{k+1}}{f_{k}}}$.
Now we only need to complete some simple algebra to see
$x_{k+1}=1+\frac{f_k}{f_{k+1}}$
$x_{k+1}=\frac{f_{k+1}+f_k}{f_{k+1}}$
Noting the definition of $f_n=f_{n-1}+f_{n-2}$, we see that we have
$x_{k+1}=\frac{f_{k+2}}{f_{k+1}}$
Since that was what we wanted to show, we see that the terms in our continued fraction are represented by ratios of Fibonacci numbers.
The exact continued fraction is $x_{\infty} = \lim_{n\rightarrow \infty}\frac{f_{n+1}}{f_n} =\varphi$.
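The induction can be checked exactly for the first several terms with a short Python sketch (illustrative only, using exact fractions):

```python
from fractions import Fraction

# f_1, f_2, ... as defined above (list index n holds f_{n+1}).
fib = [1, 1]
for _ in range(20):
    fib.append(fib[-1] + fib[-2])

x = Fraction(1)                   # x_1 = 1
for n in range(1, 20):
    # x_n should equal f_{n+1} / f_n
    assert x == Fraction(fib[n], fib[n - 1])
    x = 1 + 1 / x                 # x_{n+1} = 1 + 1/x_n

print("x_n = f_(n+1)/f_n verified for n = 1..19")
```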
## Proof of the Golden Ratio's Irrationality
Remarkably, the golden ratio is irrational, despite the fact that we just showed it is approximated by ratios of Fibonacci numbers. We will use proof by contradiction to show that the golden ratio is irrational.
Suppose $\varphi$ is rational. Then it can be written as a fraction in lowest terms, $\varphi = b/a$, where $a$ and $b$ are integers.
Our goal is to find a different fraction that is equal to $\varphi$ and is in lower terms. This will be our contradiction that will show that $\varphi$ is irrational.
First note that the definition $\varphi = \frac{b}{a}=\frac{a+b}{b}$ implies that $b > a$: since $a+b>b$, we have $\frac{a+b}{b} > 1$, so the equal fraction $\frac{b}{a}$ must also exceed 1.
Now, since we know
$\frac{b}{a}=\frac{a+b}{b}$
we see that $b^2=a(a+b)$ by cross multiplication. Writing this all the way out gives us $b^2=a^2+ab$.
Rearranging this gives us $b^2-ab=a^2$, which is the same as $b(b-a)=a^2$.
Dividing both sides of the equation by $(b-a)$ and $a$ gives us that
$\frac{b}{a}=\frac{a}{b-a}$.
Since $\varphi=\frac{b}{a}$, we can see that $\varphi=\frac{a}{b-a}$.
Since we have assumed that $a$ and $b$ are integers, $b-a$ must also be an integer, and it is positive because $a<b$. Furthermore, since $a<b$, the fraction $\frac{a}{b-a}$ has a smaller numerator than $\frac{b}{a}$, and since the two fractions are equal, its denominator $b-a$ must be smaller than $a$ as well, so $\frac{a}{b-a}$ is in lower terms than $\frac{b}{a}$.
Since we have found a fraction of integers that is equal to $\varphi$, but is in lower terms than $\frac{b}{a}$, we have a contradiction: $\frac{b}{a}$ cannot be a fraction of integers in lowest terms. Therefore $\varphi$ cannot be expressed as a fraction of integers and is irrational.
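The descent step in this proof, from $\frac{b}{a}$ to the equal fraction $\frac{a}{b-a}$, can be illustrated with a short Python sketch. Since $\varphi$ is irrational, no exact integer pair exists; the sketch instead applies the same map to a pair of consecutive Fibonacci numbers (which approximate $\varphi$), and the map simply walks back down the sequence:

```python
# Apply the proof's descent map (b, a) -> (a, b - a) to a Fibonacci
# pair approximating phi; each step yields the previous Fibonacci pair.
b, a = 89, 55
steps = []
while a >= 1:
    steps.append((b, a))
    b, a = a, b - a

print(steps)
# [(89, 55), (55, 34), (34, 21), (21, 13), (13, 8),
#  (8, 5), (5, 3), (3, 2), (2, 1), (1, 1)]
```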
# For More Information
• G. Markowsky, "Misconceptions about the Golden Ratio," *College Mathematics Journal*, Vol. 23, No. 1 (1992), pp. 2-19.
# References
1. ↑ "Parthenon", Retrieved on 16 May 2012.
http://math.stackexchange.com/questions/150615/references-for-kolmogorovs-strong-law-of-a-large-numbers
# References for Kolmogorov's strong law of large numbers
On the Wikipedia law of large numbers site, they mention "Kolmogorov's strong law of large numbers", which works even if the random variables are not identically distributed.
Where can I find this theorem shown and proven? I know that a reference is provided on the Wikipedia site, but that book is out of availability. Are there any other references out there?
(Interestingly, Allan Gut's book "Probability: A Graduate Course", has a theorem by the name of "Kolmogorov's strong law", but in his book, the random variables have to be identically distributed. Any ideas why this is?)
## 1 Answer
It does not work, in general, if the summands are not iid. Both "independent" and "identically distributed" can be weakened, but you can't dispense with either of them entirely and still get the result without giving up something else.
The result that they cite as "Kolmogorov's Strong Law" is not what I always refer to as Kolmogorov's Strong Law (I suspect that what I refer to as Kolmogorov's Strong Law is the same thing that Allan Gut does). The result given on Wikipedia requires a finite second moment and that $\sum_k \frac{\operatorname{Var} X_k}{k^2} < \infty$, but in exchange you can drop the requirement of being identically distributed. The proof of this version I think is actually pretty easy, the sketch being: because the series converges we can apply the Khintchine-Kolmogorov convergence theorem so that $\sum_k \frac{X_k - \mu_k}{k}$ converges almost surely, and the result follows after an application of Kronecker's Lemma.
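To illustrate (this simulation is not part of the original answer, and the specific distributions are only an assumed example): independent but not identically distributed summands satisfying the variance condition, whose centered averages still tend to 0:

```python
import random

random.seed(0)

# X_k ~ Uniform(0, k**0.3): independent, NOT identically distributed.
# mu_k = k**0.3 / 2 and Var X_k = k**0.6 / 12, so
# sum_k Var(X_k)/k^2 = (1/12) sum_k k**-1.4 < infinity,
# and therefore (1/n) sum_k (X_k - mu_k) -> 0 almost surely.
n = 200_000
total = 0.0
for k in range(1, n + 1):
    scale = k ** 0.3
    total += random.uniform(0.0, scale) - scale / 2.0  # centered summand

print(abs(total / n))  # small (close to 0)
```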
http://mathhelpforum.com/algebra/15912-simple-problem.html
# Thread:
1. ## simple problem
i finished about all of this math problem except for the ending.
i reduced the problem to 5000^-10k
what i need is how to find the value of k
2. Originally Posted by chinese_man_77
i finished about all of this math problem except for the ending.
i reduced the problem to 5000^-10k
what i need is how to find the value of k
we need an equation or something, you haven't given enough information
3. A = (ax)(10)^-kt
ax = 500
t= 10
A = 450
i got it down to 450 = 5000^-10k
4. ## Re:
RE:
Even if you set this equation equal to zero there still isn't a solution...
-qbkr21
5. its a regents question, so im pretty sure there is a solution and why would it be equal to 0 when its equal to 450
6. A = (ax)(10)^-kt
ax = 500
t= 10
A = 450
i got it down to 450 = 5000^-10k
i just dont know how to finish off the problem
7. Originally Posted by chinese_man_77
A = (ax)(10)^-kt
ax = 500
t= 10
A = 450
i got it down to 450 = 5000^-10k
we have $450 = 500 \cdot 10^{-10k}$
$\Rightarrow \frac {450}{500} = 10^{-10k}$
$\Rightarrow \log \left( \frac {9}{10} \right) = -10k$ ........do you understand this step?
$\Rightarrow k = \frac {\log \left( \frac {9}{10} \right)}{-10}$
8. nice, thanks a lot
9. Originally Posted by Jhevon
we have $450 = 500 \cdot 10^{-10k}$
$\Rightarrow \frac {450}{500} = 10^{-10k}$
$\Rightarrow \log \left( \frac {9}{10} \right) = -10k$ ........do you understand this step?
$\Rightarrow k = \frac {\log \left( \frac {9}{10} \right)}{-10}$
one more quick question... i see how you got to the end of the problem, but is there any reason why you knew it was a log problem?
10. Originally Posted by chinese_man_77
one more quick question... i see how you got to the end of the problem, but is there any reason why you knew it was a log problem?
the unknown was in the power. when we see that, we either need logs or the laws of exponents to solve the problem. logs was the appropriate choice here
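For completeness, a quick Python check of the steps above (added for illustration, not part of the original thread):

```python
import math

# 450 = 500 * 10**(-10k)  =>  k = log10(9/10) / (-10)
k = math.log10(9 / 10) / -10

print(k)                                         # ~0.0045757...
print(math.isclose(500 * 10 ** (-10 * k), 450))  # True
```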
http://math.stackexchange.com/questions/44572/how-to-find-parameters-from-an-equation/44576
# How to find parameters from an equation
The question could be "stupid" but I don't know if it is feasible or not, so please don't kill me :)
EDIT WITH NEW FORMULAS!
I have an equation like this (unfortunately in my first Q&A I cannot upload images for "spam" reasons, so I post the LaTeX version of the formula hoping it is understandable; otherwise, which kind of representation can I use?):
$$N_c = \sum_{i=1}^{max} \bigg( \frac{1}{i^s H} * N_u * i\bigg)$$
where, $H = \sum_{i=1}^{max} \frac{1}{i^s}$
If all the variables are known except one:
• "$max$" is UNKNOWN
is it possible to find the "$max$" parameter? Or is it mathematically impossible?
Thank you very much.
ADDED (based on Ross Millikan's answer): Practical example based on your approximations, to verify the correctness of the "H" formula. If I set:
• $s = 0.5$
• $max = 500$ (in this case i know also the "max" value, i want only to verify the integral)
we have:
$$H=\sum_{j=1}^{m} \frac{1}{j^s}\approx \int_1^{m} x^{-s} \; dx=\frac{1}{(1-s)x^{1-s}}\bigg| _1^{m}=\frac{1}{1-s}\left(1-\frac{1}{m^{1-s}}\right) =$$ $$= \frac{1}{1-0.5}\left(1-\frac{1}{500^{1-0.5}}\right) = 2 \times 0.95 = 1.91$$
But if I calculate the original sum I obtain:
$$H=\sum_{j=1}^{m} \frac{1}{j^s} = 43.28$$
I think there is something strange in the integral's evaluation. I think we can expand it this way:
$$H=\sum_{j=1}^{m} \frac{1}{j^s}\approx \int_1^{m} x^{-s} \; dx=\frac{x^{1-s}}{1-s}\bigg| _1^{m}= \frac{500^{0.5}}{1-0.5} - \frac{1^{0.5}}{1-0.5} = 44.72 - 2 = 42.72$$
that is a better approximation. What do you think? I don't know if it is correct, this is a "new world" for me!
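(Added for illustration, not part of the original post.) A quick Python check of the two antiderivative formulas with $s=0.5$, $m=500$:

```python
# Compare the exact sum H with the two integral formulas from the thread.
s, m = 0.5, 500

H = sum(j ** -s for j in range(1, m + 1))
first = 1 / (1 - s) * (1 - 1 / m ** (1 - s))  # x^{1-s} in the denominator: ~1.91
fixed = (m ** (1 - s) - 1) / (1 - s)          # corrected antiderivative: ~42.72

print(round(H, 2), round(first, 2), round(fixed, 2))  # 43.28 1.91 42.72
```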
If you enclose your $\LaTeX$ in dollar signs it gets rendered. Single dollar signs is inline, double gets display mode. – Ross Millikan Jun 10 '11 at 15:07
## 1 Answer
Unless there is unstated dependence on $i$, you can distribute out $H$ and $N_u$. If so you have $$N_c = \frac{N_u}{H}\sum_{i=1}^{max} \frac{1}{i^s}$$ This is a series that can be summed in terms of generalized harmonic numbers, but I am not sure that helps a lot. You can also approximate the sum by an integral $$\sum_{i=1}^{max} \frac{1}{i^s}\approx \int_1^{max} x^{-s} \; dx=\frac{1}{(1-s)}x^{1-s}\bigg| _1^{max}=\frac{1}{(1-s)}(max^{1-s}-1)$$and be pretty close. If you want the exact value you could then search.
Added: for the new version, your $H$ is just the same as the sum above, but then you need to sum again. I would presume the sums are over two different variables, so I will change the $H$ sum to go over $j$ and let $m$=max: $$H=\sum_{j=1}^{m} \frac{1}{j^s}\approx \int_1^{m} x^{-s} \; dx=\frac{x^{1-s}}{1-s}\bigg| _1^{m}=\frac{1}{1-s}\left(m^{1-s}-1\right)$$
Now your new $$N_c=\frac{N_u}{H}\sum_{i=1}^m\frac{i}{i^s}=\frac{N_u}{H}\sum_{i=1}^m\frac{1}{i^{s-1}}$$ and we can use the same approximation, as the sum is the same except for being $s-1$: $$N_c\approx N_u\frac{1-s}{2-s}\frac{m^{2-s}-1}{m^{1-s}-1}$$
@Ross Millikan, thank you for the reply. Even if I approximate the sum by an integral, I should still know the "max" value anyway, shouldn't I? – Maurizio Jun 12 '11 at 12:37
If you integrate from $1$ the integral is less than the sum. For example, if max=2, the integral includes all the area from 1 to 2, but the sum is the higher value at 1 (times the width=1). But if you add 1 (the value at $i=1$), you have an upper bound as now the integral gets the average over each interval and the sum gets the least. But it could take a lot of terms when max is large to make up the 1, so you could have a range of uncertainty. – Ross Millikan Jun 12 '11 at 13:15
I add a new version of the formula..the first was a bit wrong! – Maurizio Jun 14 '11 at 10:59
@Ross Millikan, I add a revision of your formula about "H"! I have to verify also the second part regarding "Nc"! – Maurizio Jun 14 '11 at 14:52
@Maurizio: I had gotten the $x$ in the denominator instead of the numerator and carried it through. I have corrected it. In your check of $H$ it now comes out 44.72, much closer – Ross Millikan Jun 14 '11 at 15:39