http://math.stackexchange.com/questions/178183/accidents-of-small-n/178495
|
# Accidents of small $n$
In studying mathematics, I sometimes come across examples of general facts that hold for all $n$ greater than some small number. One that comes to mind is the Abel–Ruffini theorem, which states that there is no general algebraic solution for polynomials of degree $n$ except when $n \leq 4$.
It seems that there are many interesting examples of these special accidents of structure that occur when the objects in question are "small enough", and I'd be interested in seeing more of them.
-
2
– Asaf Karagila Aug 2 '12 at 21:52
4
I don't think they're the same question. The one you linked is about large counterexamples, this is about small ones. – SiliconCelery Aug 2 '12 at 21:57
1
@SiliconCelery: All finite numbers are small, merely by the virtue that there are only finitely many smaller. For some people even countable sets are small and when talking about things like Woodin cardinals then pretty much every feasible set is nothing more than a tiny speck of insignificance. – Asaf Karagila Aug 2 '12 at 22:40
2
$\mathbb R^n$ remains connected after the removal of a point, unless $n=1$. I'm sure I don't understand the question. – Rahul Narain Aug 2 '12 at 22:45
3
– Gerry Myerson Aug 2 '12 at 23:50
## 17 Answers
You might enjoy the article The Strong Law of Small Numbers by Richard K. Guy.
He has other publications in which he partly recycles the title.
-
Another accident of small dimension: in all dimensions $\gt 4$, there are only three regular polytopes: the simplex, hypercube, and cross-polytope (the dual of the hypercube). In two dimensions there are infinitely many (all of the $n$-gons); in three dimensions you have two additional polyhedra, the dodecahedron and icosahedron; and in four dimensions there are three additional ones, the 120-cell and 600-cell (which are dual to each other) and the 24-cell (which is self-dual).
-
Thank you, this is a very nice example. – Will Aug 3 '12 at 15:16
The outer automorphism group of $S_n$ is trivial, except when $n=6$.
-
2
This is probably the "funniest" case that I know of, how it just "randomly" fails for 6. – tomasz Aug 3 '12 at 17:44
1
Out(S2)=1 is still trivial, it just isn't equal to S2. – Jack Schmidt Aug 3 '12 at 19:44
The 26 sporadic groups are also an "accidental" occurrence. Since there are only finitely many of them, all the simple groups of order larger than some $N$ fit into one of the infinite families.
-
3
– anon Aug 3 '12 at 3:40
Deciding whether a graph is $2$-colorable has an obvious polynomial-time algorithm.
Deciding whether a graph is $k$-colorable for $k \geq 3$ is already NP-Complete!
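(A minimal sketch of the easy direction, in Python: $2$-colorability is just bipartiteness, which a breadth-first search decides in linear time. The graph encoding below is only an illustrative choice.)
```
from collections import deque

def is_two_colorable(graph):
    """graph: dict mapping each vertex to an iterable of neighbours (undirected)."""
    color = {}
    for start in graph:                      # handle disconnected graphs
        if start in color:
            continue
        color[start] = 0
        queue = deque([start])
        while queue:
            u = queue.popleft()
            for v in graph[u]:
                if v not in color:           # give the neighbour the opposite color
                    color[v] = 1 - color[u]
                    queue.append(v)
                elif color[v] == color[u]:   # an odd cycle: not 2-colorable
                    return False
    return True

print(is_two_colorable({0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}))  # 4-cycle: True
print(is_two_colorable({0: [1, 2], 1: [0, 2], 2: [0, 1]}))             # triangle: False
```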
-
3
Also works for $2$-SAT and $3$-SAT – Belgi Aug 2 '12 at 22:32
1
And two-dimensional matching (P) versus three-dimensional matching (NP-complete); also exact cover by 2-sets (P) versus exact cover by 3-sets (NP-complete). – MJD Aug 3 '12 at 1:53
Similar to the answer about the $26$ sporadic groups and nontrivial $\mathrm{Out}(S_n)$ for $n=2,6$, there are a bunch of what are called exceptional isomorphisms that occur with low order/dimension. Basically, groups come in all sorts of special families (where there is a rule designating what groups are in the family and why) that are infinite, but these infinite families share a few finite cases between them.
-
I would include in this list a discussion of $G(n)$ and $g(n)$ in the Waring Problem.
The point is that when it comes to representing integers as sums of powers of non-negative integers, it seems to happen that some (smallish) integers require more powers to represent them, due to some (presumably unknown) peculiarity of small integers.
For example, in the case of representing integers as sums of cubes, it has been proved that 9 cubes are sufficient, and some numbers require 9, so that $g(3)=9$.
On the other hand, calculations suggest that almost all numbers from some point onwards are sums of at most 4 cubes (so that $G(3)$ might be 4, but this is not proved), and it appears that it is an "accident of small $n$" that there are some (smallish) numbers which require more than 4 cubes.
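A brute-force check makes this concrete. The sketch below (Python, with an arbitrary cutoff of 500; the function name is just illustrative) computes the least number of positive cubes needed for each integer and shows that only $23$ and $239$ need the full $9$:
```
def min_cubes_up_to(limit):
    """best[n] = least number of positive cubes summing to n (simple dynamic program)."""
    INF = float("inf")
    cubes = [k**3 for k in range(1, limit) if k**3 <= limit]
    best = [0] + [INF] * limit
    for n in range(1, limit + 1):
        best[n] = min(best[n - c] for c in cubes if c <= n) + 1
    return best

best = min_cubes_up_to(500)
print([n for n, b in enumerate(best) if b == 9])   # [23, 239]
print(max(best[1:]))                               # 9 -- nothing below 500 needs more
```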
-
The 'pecularity' of small integers for Waring's problem is presumably just that there aren't enough of them to go around; for large numbers there are so many possible cubes to consider for expressing them as sums of $k$ cubes that at least one combination 'should' fit, but for smaller $n$ the number of possibilities for a fit is so much smaller that it's unsurprising when some numbers don't fit at all. – Steven Stadnicki Aug 3 '12 at 0:19
You are probably right - I had always assumed it was something to do with quirks of small numbers, but your explanation seems more likely. – John Wordsworth Aug 3 '12 at 7:15
The ring of integers of $\mathbf{Q}[\sqrt d]$ is a principal ideal domain for $1 \leq d < 10$, but not for $d=10$.
-
1
$2\times5=(4+\sqrt6)(4-\sqrt6)$. – Gerry Myerson Aug 4 '12 at 2:33
Bruno, $(4+\sqrt6)(4-\sqrt6)=4^2-\sqrt6^2=16-6=10=2\times5$. – Gerry Myerson Aug 4 '12 at 5:54
– Bruno Aug 4 '12 at 20:02
Yes, you're right. I was thinking 2 was irreducible in that ring, since there's nothing of norm 2, but there are elements of norm $-2$, and they do the trick. – Gerry Myerson Aug 4 '12 at 23:38
The $n^\mathrm{th}$ cyclotomic polynomial is the minimal polynomial whose roots are the primitive $n^\mathrm{th}$ roots of unity, that is
$$\Phi_n(X) = \prod_{ {0\leq j < n} \atop {\gcd(j,n)=1}}(X - e^{2\pi i j / n})$$
The first few examples are: $$\Phi_1(X) = X - 1$$ $$\Phi_2(X) = X + 1$$ $$\Phi_3(X) = X^2 + X + 1$$
For all small $n$, all the coefficients are $\pm 1$ or $0$. But if at least three distinct odd primes divide $n$ (which requires $n\geq3\cdot5\cdot7 = 105$), other coefficients are possible.
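One way to see the first failure is to compute the coefficients directly. A small sketch, assuming SymPy is available (the helper name is made up):
```
from sympy import symbols, cyclotomic_poly, Poly

x = symbols('x')

def coeffs(n):
    # list of coefficients of the n-th cyclotomic polynomial
    return Poly(cyclotomic_poly(n, x), x).all_coeffs()

# every Phi_n with n < 105 has coefficients in {-1, 0, 1} ...
print(all(set(coeffs(n)) <= {-1, 0, 1} for n in range(1, 105)))   # True
# ... but Phi_105 picks up a -2
print(min(coeffs(105)))                                           # -2
```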
-
Orthogonal Latin squares exist for every order except 2 and 6. Euler conjectured from the small examples that they existed for any order not of the form $4k+2$, but he was mistaken.
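As a computational aside (this is not Euler's construction, just a standard illustration that assumes the order $n$ is odd): the squares $A_{ij}=(i+j)\bmod n$ and $B_{ij}=(2i+j)\bmod n$ are orthogonal, which the sketch below verifies for $n=5$.
```
def orthogonal_pair(n):
    """For odd n, two orthogonal Latin squares of order n."""
    A = [[(i + j) % n for j in range(n)] for i in range(n)]
    B = [[(2 * i + j) % n for j in range(n)] for i in range(n)]
    return A, B

def are_orthogonal(A, B):
    n = len(A)
    pairs = {(A[i][j], B[i][j]) for i in range(n) for j in range(n)}
    return len(pairs) == n * n   # every ordered pair of symbols occurs exactly once

A, B = orthogonal_pair(5)
print(are_orthogonal(A, B))      # True
```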
-
The alternating group $A_n$ is simple for $n\neq 4$.
This is related to the example given by the thread creator, since it allows $S_4$ to be solvable, thus guaranteeing that all polynomial equations of degree $4$ or less can be solved using only the field operations plus the extraction of roots.
-
$\mathbb{R}^n$ has a unique smooth structure except when $n=4$. Furthermore, the cardinality of [diffeomorphism classes of] smooth manifolds that are homeomorphic but not diffeomorphic to $\mathbb{R}^4$ is $\mathfrak{c}=2^{\aleph_0}$.
-
The sequence of the maximal number of regions formed by drawing chords between all pairs of n points in general position on the border of a circle starts 1, 2, 4, 8, 16; and then the next term, of course, is... 31. (The canonical formula is $1+{n\choose2}+{n\choose4}$.) For more details, see http://oeis.org/A000127
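A few lines of code make the break in the pattern visible (a sketch using the formula above; `math.comb` needs Python 3.8+):
```
from math import comb

def regions(n):
    # maximal number of regions cut out by chords between n points in general position
    return 1 + comb(n, 2) + comb(n, 4)

print([regions(n) for n in range(1, 11)])
# [1, 2, 4, 8, 16, 31, 57, 99, 163, 256] -- powers of two, until n = 6
```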
-
1
This seems to be an example of a formula that seems to hold for small n, but doesn't hold for slightly larger n. OP was looking for a formula/rule that holds for all numbers except for small n. – BlueRaja - Danny Pflughoeft Aug 3 '12 at 2:19
I realized that after writing this up (though it's an example of a small-$n$ trend that doesn't continue for large $n$); it's part of the reason I wrote up my other answer. I'm tempted to delete this one, but it seems to be liked... – Steven Stadnicki Aug 3 '12 at 2:23
It's fun to point out to people that the sequence contains 256; that provides extra "evidence" that it consists of powers of two! – Michael Lugo Aug 24 '12 at 21:34
Every integer is less than 100, except for $n>99$, where the pattern breaks.
-
2
This is supposed to be an answer? – J. M. Aug 3 '12 at 8:15
2
I do not understand the downvotes... This is the obvious example of a pattern that holds only for small $n$! – Mariano Suárez-Alvarez♦ Aug 3 '12 at 17:22
2
Better: Every integer is not equal to seven, except 7. – Mark Hurd Aug 3 '12 at 18:40
What about: Every natural number is positive, except $0$. – celtschk Aug 3 '12 at 20:04
I suppose I should have specified "nontrivial examples". – Will Aug 4 '12 at 19:51
I wouldn't really call them accidents, but here are two simple related results from algebra:
There exist algebraic extensions of $\mathbb{R}$ of dimension $n$ only for $n=1$ and $n=2$.
Also, it is not possible to construct a (noncommutative) division ring over $\mathbb{R}$ of finite dimension $n$ except when $n=4$ (the quaternions), by the Frobenius theorem.
-
There are cyclic numbers for all bases $>4$, except for perfect squares, and $6$.
-
Although there is an infinite family of generalized Petersen graphs with arbitrarily many vertices, there are exactly seven of these that are edge-transitive, the largest having 48 vertices.
-
|
http://mathoverflow.net/revisions/78259/list
|
## Return to Question
3 added 57 characters in body
Let $L$ be the Banach algebra of $L^1$-functions from $\mathbb{R}$ to $\mathbb{C}$ with $L^1$-norm and convolution as algebra multiplication. Assume that we knew that all the homomorphisms from $L$ to $\mathbb{C}$ are the zero map and evaluation of the Fourier transform at individual real numbers: $f \mapsto \int_{\mathbb R} f(t)e^{it\alpha}dt$ for some real $\alpha$. We may add a unit $e$ to $L$ artificially by considering the new Banach algebra $A:=L\oplus \mathbb{C}\cdot e$ with natural operations. Then the fact that any $L^1$-function whose Fourier transform is zero must be zero itself may be rephrased algebraically: the algebra $A$ is semisimple (as maximal ideals of unital Banach algebras correspond to homomorphisms to $\mathbb{C}$ by the Gelfand-Mazur theorem).
My question is whether this may be proved a priori and independently (and maybe for some wide class of commutative unital Banach algebras).
2 edited title
1
# uniqueness theorem of Fourier transform: is there algebraic proof?
Let $L$ be the Banach algebra of $L^1$-functions from $\mathbb{R}$ to $\mathbb{C}$ with $L^1$-norm and convolution as algebra multiplication. Assume that we realized that all homomorphisms from $L$ to $\mathbb{C}$ are identical zero and Fourier transform values in specific real points: $f\rightarrow \int f(t)e^{it\alpha}dt$. We may add unity $e$ to $L$ artificially, just considering new Banach algebra $A:=L\oplus \mathbb{C}\cdot e$ with natural operations. Then the fact that any $L^1$-function with zero Fourier transform must be zero itself may be rephrased algebraically: algebra $A$ is semisimple (as maximal ideals of unital Banach algebras correspond to homomorphisms to $\mathbb{C}$ by Gelfand-Mazur theorem).
My question is whether this may be proved a priori and independently (and maybe for some wide class of commutative unital Banach algebras).
|
http://mathoverflow.net/questions/109920/intersection-theory-for-g-varieties-an-action-on-the-chow-ring/109922
|
## Intersection theory for $G$-varieties - an action on the Chow ring?
Let $G$ be a reductive algebraic group. Let $X$ be a $G$-variety and consider any closed subvariety $Z$ of $X$. Since any $g\in G$ acts as an automorphism, we know that $g.Z$ is again a closed subvariety of $X$. This yields an action of $G$ on the free module of cycles of $X$ which should induce an action of $G$ on the Chow ring of $X$. The invariants of this ring should be precisely the classes that correspond to linear combinations of $G$-orbits.
Has this action been studied before? Any kind of reference would be very welcome. Thanks!
Edit: It looks like my above idea is rather futile, so let me ask more broadly: Are there any techniques or results in intersection theory specifically on $G$-varieties? Could you name some references?
-
3
The invariants are not the classes that correspond to linear combinations of $G$-orbits. The class of a point, or one point in each connected component of $X$, is invariant in the Chow ring, even if all the orbits are much larger than points. – Will Sawin Oct 17 at 16:19
3
There are also the papers of Edidin and Graham. – Damian Rössler Oct 17 at 21:36
@Damian Rössler: Could you give me the exact titles? Thanks a bunch! – Jesko Hüttenhain Oct 18 at 9:28
Edidin and Graham have several versions of their paper "Equivariant Intersection Theory" on the arXiv. You may prefer reading one of the earlier versions rather than the final published version. In fact, I suspect that is why they kept the earlier versions available. – Michael Joyce Oct 22 at 12:44
## 2 Answers
If you are interested in intersection theory of varieties with $G$-actions, then you want to study equivariant intersection theory. This theory exploits the $G$-action in a way that leads to deeper invariants than ordinary intersection theory. The three references I would recommend if you are first learning the subject are:
Fulton's lecture notes on equivariant cohomology (compiled by Dave Anderson)
Equivariant Cohomology and Equivariant Intersection Theory by Michel Brion
Equivariant Chow Groups for Torus Actions by Michel Brion
-
I assume that you are over an algebraically closed field. If $G$ is connected, the action is trivial, because any affine algebraic group is rational, so every point can be connected via a chain of images of open subsets of $\mathbb A^1$ to the identity.
-
How unfortunate. Let me ask more broadly then: Are there any particular techniques or results in intersection theory on $G$-varieties that make use of the $G$-action? In fact, let me add that to my original question. – Jesko Hüttenhain Oct 17 at 16:43
To Jesko: yes, definitely. There is a whole lot of work on homogeneous spaces, toric varieties, wonderful compactifications, spherical varieties, and so on, in which one exploits the group action to get results on the intersection theory. – Angelo Oct 17 at 18:09
|
http://mathoverflow.net/questions/44507/whats-projective-about-projective-pro-finite-groups/44531
|
## What’s “projective” about “projective pro-finite groups”?
A profinite group is said to be projective if its cohomological dimension is $\leq 1$. Is this related to some other notion of "projective"? How so?
-
## 2 Answers
A profinite group $P$ is projective if and only if any continuous group homomorphism from it to a profinite quotient group $G/H$ lifts to a continuous group homomorphism to the profinite group $G$.
-
I think you mean any continuous group epimorphism from it to a profinite quotient group $G/H$ lifts to a continuous group homomorphism to the profinite group $G$. Are you trying to talk about a similarity with projective modules? If so, we should explain why it is an epimorphism we're talking about and not a homomorphism. – Makhalan Duff Nov 2 2010 at 13:16
You should take another look at the definition of projective module. Only the map from G to G/H needs to be an epi, not the map from P to G/H. If I interpret Leonid Positselski correctly, he is simply saying that P is a projective object in the category of profinite groups. – Saul Glasman Nov 2 2010 at 13:45
Ah, yes. You're correct. Very good! – Makhalan Duff Nov 2 2010 at 14:21
@Saul: I am not quite sure about what should be meant by a projective object in a category, except if the category is abelian. One can imitate the conventional definition for abelian categories, but then one has to explain what is meant by an epimorphism in a category. Once again, there is the standard definition, but it is not applicable in the case at hand. Not every categorical epimorphism in the category of (say) finite noncommutative groups is a surjective map. If one replaces (categorical) epimorphisms with universal epimorphisms, it will finally work. – Leonid Positselski Nov 2 2010 at 20:10
It is explained in Section 7.6 of Ribes and Zalesskii, "Profinite groups". The notion is similar to the notion of a projective module. For example, free profinite groups are projective. Moreover, a profinite group is projective if and only if it is a closed subgroup of a free profinite group.
-
|
http://skullsinthestars.com/2008/01/17/optics-basics-defining-the-velocity-of-a-wave/
|
## Optics basics: Defining the velocity of a wave
Posted on January 17, 2008
How do we define how fast a wave is going? The question at first glance seems obvious. When we discussed harmonic waves in a previous post, we observed that the velocity of the wave could be measured by measuring how far one of the peaks of the wave travels in a certain amount of time. There are a number of subtle points that arise when talking about wave velocity, however, including the possibility of light traveling at faster than the ‘speed of light’! In this post we’ll try and define the velocity of a wave, and explain why the question is not so easy to answer.
The velocity v of a harmonic wave is directly related to the angular frequency ω of the wave and its wavenumber k, in the form
$v =\omega/k$.
We will call this definition of wave velocity the phase velocity of the wave, for reasons which will hopefully become clear soon. It is customary to write this velocity as a fraction of the speed of light c, and the fractional coefficient is simply the refractive index n of the material the light is passing through, i.e.
$v = c/n$.
The refractive index, and therefore the velocity of the light wave, is a property of the material medium. Ordinary glass, for instance, has a refractive index of about n = 1.5 (light travels at 2/3rds the vacuum speed); the vacuum of interstellar space has a refractive index of unity, by definition. The frequency of the wave is unchanged on entering a new material, which means that the wavelength of the wave depends on the medium.
Harmonic waves are an ideal waveform not seen in practice. How can we measure the velocity of an arbitrary wave ‘pulse’, as illustrated below?
As long as the shape of the pulse does not change, we can measure its velocity in a manner similar to the harmonic wave: follow one of the peaks of the pulse, and measure how far it goes in a given period of time.
This works just fine for waves on a string, or light fields traveling in vacuum: the shape of the pulse does not change as it travels*.
When one considers a light pulse traveling in matter, however, a problem arises: the shape of the wave changes as it travels through the medium! A simple simulated example of this is shown in the animation below:
We can see that while the bulk of the wave is traveling to the right, the ‘ups and downs’ of the wave are moving within the bulk. Now there’s no fixed feature of the wave that we can use to measure the velocity!
The origin of this shape change is what is known as dispersion. The atoms in a material respond differently to light of different frequencies, and the wavelength and hence the phase velocity of light is different at each frequency. As we have mentioned in a previous basics post, we can mathematically represent any pulse of light as a collection of harmonic waves of different amplitudes. This means that different parts of the pulse are traveling at different velocities, resulting in a change of pulse shape.
A very famous and well-known example of dispersion is illustrated on the cover of a classic Pink Floyd album:
Because the glass of the prism is dispersive, different frequencies of the incoming white light are bent at different angles on entering and leaving the prism, resulting in a separation of the colors of the light.
Dispersion is characterized mathematically by what is called a dispersion relation, a functional relationship between the frequency of a wave and its wavenumber in the medium, i.e. ω=ω(k).
What can we say about wave velocity, then? Looking at the animation above again, we note that even though individual features of the wave are changing, the entire ‘group’ of features are moving to the right with what seems to be a well-defined velocity. Can we quantify this velocity?
It turns out we can! Suppose our wave consists only of harmonic waves that lie within a range of frequencies Δω, and a corresponding range of wavenumbers Δk. We can define the group velocity of the wave as:
$v_g=\Delta \omega/\Delta k$.
Note that this definition is different from the phase velocity of the wave. The phase velocity is the ratio of the wave's center frequency to its center wavenumber, while the group velocity is the ratio of the range of frequencies in the wave to the range of wavenumbers in the wave. The group velocity is typically a reasonable measure of how fast the bulk of the wave travels through a material.
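As a concrete, non-optical illustration (an example added here, not from the original post): deep-water gravity waves have the dispersion relation ω(k) = √(gk), for which the group velocity dω/dk is exactly half the phase velocity ω/k. A short numerical check, with purely illustrative numbers:
```
import math

g = 9.81  # m/s^2; deep-water gravity waves, omega(k) = sqrt(g k)

def omega(k):
    return math.sqrt(g * k)

k = 2 * math.pi / 10.0   # wavenumber of a 10 m wavelength
dk = 1e-6

v_phase = omega(k) / k                                    # omega / k
v_group = (omega(k + dk) - omega(k - dk)) / (2 * dk)      # d omega / d k (centred difference)

print(v_phase, v_group, v_group / v_phase)   # the ratio comes out ~0.5
```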
Ordinarily, the group velocity is smaller than the vacuum speed of light. Unfortunately, though, this is not always the case! When a pulse has a center frequency which is strongly absorbed by the material (a situation known as anomalous dispersion), the group velocity can be greater than the speed of light or even a negative quantity! In these circumstances it is, like the phase velocity, a useless measure of wave velocity.
This issue with the group velocity has been known for a century now. It has been typically dismissed by noting that, in circumstances of anomalous dispersion, the wave is so strongly absorbed and distorted that one cannot define a meaningful velocity of the group. For instance, what would we call the speed of the wave based on the picture below?
The input wave had a single, well-defined peak, but the output wave has multiple peaks. Which one do we use to measure the velocity? To make a very odd analogy (I came up with it late at night), imagine sending five identical dogs single file into a forest, and timing how long they take to come out the other side. If they come out say, ten minutes later, still single file and with roughly the same separation between them, we can confidently describe the average speed of the dogs. But suppose one comes out in a minute, two more come out at five minutes, one more at ten minutes, and the last comes out an hour later! Not only is there no single time to use to calculate the speed, but we can’t even be certain that the dogs came out in the same order they entered! In such a case, a single well-defined measure of ‘velocity’ is meaningless.
This would seem to be the end of the discussion. We can use ‘group velocity’ to describe the velocity of a light pulse, except in cases of anomalous dispersion when there is no well-defined velocity. However, in 2000 a group of researchers experimentally demonstrated a medium where the group velocity is greater than the vacuum speed of light, and the pulse shape is unchanged on passing through the medium! One of the odd consequences of this is the ability to construct situations where the pulse seems to leave the medium before it fully enters it!
We’ll leave a detailed discussion of this ‘superluminal’ case for another post. We can ask right now, though: does this observation conflict with Einstein’s relativity? The answer is a (qualified) no! Once the dispersion relation of a material is known, one can demonstrate that the leading edge of any pulse cannot travel faster than the vacuum speed of light. What is the qualification to this statement? In order to demonstrate that causality is exactly satisfied, we would need to know the complete dispersion relation, from k=0 to k = infinity!** Since we can’t generate (or measure) fields with arbitrarily high frequency and/or wavenumber, there’s always the chance that there exists some ultra-high-frequency violation of causality that we are unaware of…
This post is intended to give some inkling of the challenges and difficulties of defining the velocity of a wave. There are even more challenges in such definitions, including the previously mentioned superluminal light, but also including the definition of ‘tunneling times’ for evanescent waves. We’ll take a look at these issues in later posts.
* We’re considering only one-dimensional waves here. For 3-dimensional fields, there is also natural spreading (diffraction) of the wave, but this does not change the essential observations and arguments presented above.
** This point was brought to my attention by the book Fast Light, Slow Light and Left-Handed Light, by P.W. Milonni.
This entry was posted in Optics, Optics basics.
### One Response to Optics basics: Defining the velocity of a wave
1. J Thomas says:
“We can ask right now, though: does this observation conflict with Einstein’s relativity? The answer is a (qualified) no! Once the dispersion relation of a material is known, one can demonstrate that the leading edge of any pulse cannot travel faster than the vacuum speed of light.”
But the original Michelson-Morley experiment and some others did not measure the speed of the leading edge of a pulse, did they? Didn't they measure changes in interference fringes? So if somehow the leading edges arrived at different times, but the light was still in phase, you wouldn't see a difference in interference pattern.
|
http://math.stackexchange.com/questions/204254/expected-occupied-volume-of-cubic-region-with-intersecting-cubic-objects
|
# Expected Occupied Volume of Cubic Region with Intersecting Cubic Objects
I have a simulation in which I generate $n$ small cubes with side length $w$, with random (uniformly distributed) positions inside a large cubic region with side length $S$.
The smaller cubes are allowed to intersect, but are constrained to be wholly inside the large region.
I need to determine an estimate (expected value) of the total volume occupied by the smaller cubes, taking their possible intersection into account.
I suspect the problem at "Expected occupied area of a surface covered with possibly overlapping random shapes" is related, but I do not understand how the answerer gets from the second-last to the last step. I get a different result. I would greatly appreciate any help.
Thank you in advance.
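One way to sanity-check any candidate formula is direct Monte Carlo simulation. A minimal sketch (parameter values are only illustrative); averaging its result over many independent placements of the small cubes approximates the expected occupied volume:
```
import random

def estimate_union_volume(n_cubes, w, S, n_samples=100_000, seed=0):
    """Estimate the volume covered by n_cubes axis-aligned cubes of side w,
    placed uniformly at random but wholly inside a big cube of side S."""
    rng = random.Random(seed)
    # Lower corners uniform in [0, S - w]^3 keep every small cube inside the region.
    corners = [tuple(rng.uniform(0, S - w) for _ in range(3)) for _ in range(n_cubes)]
    hits = 0
    for _ in range(n_samples):
        p = tuple(rng.uniform(0, S) for _ in range(3))
        if any(all(c[d] <= p[d] <= c[d] + w for d in range(3)) for c in corners):
            hits += 1
    return S**3 * hits / n_samples

# One random placement; average over many seeds to approximate the expectation.
print(estimate_union_volume(n_cubes=50, w=1.0, S=10.0))
```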
-
|
http://en.m.wikibooks.org/wiki/GLSL_Programming/GLUT/Cutaways
|
# GLSL Programming/GLUT/Cutaways
Cutaway drawing of the dome of the Florence cathedral by Filippo Brunelleschi, 1414-36.
This tutorial covers discarding fragments, determining whether the front face or back face is rendered, and front-face culling. This tutorial assumes that you are familiar with varying variables as discussed in the tutorial on an RGB cube.
The main theme of this tutorial is to cut away triangles or fragments even though they are part of a mesh that is being rendered. The two main reasons are: we want to look through a triangle or fragment (as in the case of the roof in the drawing to the left, which is only partly cut away), or we know that a triangle isn't visible anyway, so we can save some performance by not processing it. OpenGL supports these situations in several ways; we will discuss two of them.
### Very Cheap Cutaways
The following shader is a very cheap way of cutting away parts of a mesh: all fragments are cut away that have a positive $y$ coordinate in object coordinates (i.e. in the coordinate system in which it was modeled; see “Vertex Transformations” for details about coordinate systems). Here is the vertex shader:
```attribute vec4 v_coord;
uniform mat4 m, v, p;
varying vec4 position_in_object_coordinates;
void main()
{
mat4 mvp = p*v*m;
position_in_object_coordinates = v_coord;
gl_Position = mvp * v_coord;
}
```
And here is the fragment shader:
```varying vec4 position_in_object_coordinates;
void main()
{
if (position_in_object_coordinates.y > 0.0)
{
discard; // stop processing the fragment if y coordinate is positive
}
if (gl_FrontFacing) // are we looking at a front face?
{
gl_FragColor = vec4(0.0, 1.0, 0.0, 1.0); // yes: green
}
else
{
gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0); // no: red
}
}
```
When you apply this shader to any of the default objects, the shader will cut away half of them. This is a very cheap way of producing hemispheres or open cylinders. If you enable back-face culling:
``` glEnable(GL_CULL_FACE);
```
you will also notice that the “inside” of the object is invisible. We discuss back-face culling further below.
### Discarding Fragments
Let's first focus on the `discard` instruction in the fragment shader. This instruction basically just discards the processed fragment. (This was called a fragment “kill” in earlier shading languages; I can understand that the fragments prefer the term “discard”.) Depending on the hardware, this can be a quite expensive technique in the sense that rendering might perform considerably worse as soon as there is one shader that includes a `discard` instruction (regardless of how many fragments are actually discarded, just the presence of the instruction may result in the deactivation of some important optimizations). Therefore, you should avoid this instruction whenever possible but in particular when you run into performance problems.
One more note: the condition for the fragment `discard` includes only an object coordinate. The consequence is that you can rotate and move the object in any way and the cutaway part will always rotate and move with the object. You might want to check what cutting in view or world space looks like: change the vertex and fragment shader such that the view coordinate $y$ or world coordinate $y$ is used in the condition for the fragment `discard`. Tip: see the tutorial on shading in view space for how to transform the vertex into world space.
### Better Cutaways
You might try the following idea to improve the shader: change it such that fragments are discarded if the $y$ coordinate is greater than some threshold variable. Then introduce a uniform to allow the user to control this threshold. Tip: see the tutorial on shading in view space for a discussion of uniforms.
### Distinguishing between Front and Back Faces
As demonstrated by the shader, a special boolean variable `gl_FrontFacing` is available in the fragment shader that specifies whether we are looking at the front face of a triangle. Usually, the front faces are facing the outside of a mesh and the back faces the inside. (Just as the surface normal vector usually points to the outside.) However, the actual way front and back faces are distinguished is the order of the vertices in a triangle: if the camera sees the vertices of a triangle in counter-clockwise order, it sees the front face. If it sees the vertices in clockwise order, it sees the back face.
Our fragment shader checks the variable `gl_FrontFacing` and assigns green to the output fragment color if `gl_FrontFacing` is `true` (i.e. the fragment is part of a front-facing triangle; i.e. it is facing the outside), and red if `gl_FrontFacing` is `false` (i.e. the fragment is part of a back-facing triangle; i.e. it is facing the inside). In fact, `gl_FrontFacing` allows you not only to render the two faces of a surface with different colors but with completely different styles.
Note that basing the definition of front and back faces on the order of vertices in a triangle can cause problems when vertices are mirrored, i.e. scaled with a negative factor. We can try to turn the inside out by multiplying one (or three) of the coordinates with -1, e.g. by assigning `gl_Position` this way in the vertex shader:
``` gl_Position = mvp * vec4(-v_coord.x, v_coord.y, v_coord.z, 1.0);
```
This just multiplies the $x$ coordinate by -1. For a sphere, you might think that nothing happens, but it actually turns front faces into back faces and vice versa; thus, now the inside is green and the outside is red. (By the way, this problem also affects the surface normal vector.) Thus, be careful with mirrors!
### Culling of Front or Back Faces
We saw that we can use `glEnable(GL_CULL_FACE)` to turn on triangle culling. By default back faces are culled, as if `glCullFace(GL_BACK)` had been called; triangles are considered front-facing when their vertices appear in counter-clockwise order, as if `glFrontFace(GL_CCW)` had been called. You can cull front faces instead with `glCullFace(GL_FRONT)`. Culling back faces is useful because the inside of objects is usually invisible; thus, back-face culling can save quite some performance by avoiding the rasterization of these triangles, as explained next. Of course, we were able to see the inside with our shader because we have discarded some fragments; thus, we have to deactivate back-face culling in that case.
How does culling work? Triangles and vertices are processed as usual. However, after the viewport transformation of the vertices to screen coordinates (see “Vertex Transformations”) the graphics processor determines whether the vertices of a triangle appear in counter-clockwise order or in clockwise order on the screen. Based on this test, each triangle is considered a front-facing or a back-facing triangle. If it is front-facing and culling is activated for front-facing triangles, it will be discarded, i.e., the processing of it stops and it is not rasterized. Analogously, if it is back-facing and culling is activated for back-facing triangles. Otherwise, the triangle will be processed as usual.
### Summary
Congratulations, you have worked through another tutorial. (If you have tried one of the assignments: good job! I didn't yet.) We have looked at:
• How to activate the culling of back faces.
• How to render front-facing and back-facing triangles in different colors.
### Further Reading
If you still want to know more
• about the vertex transformations such as the model transformation from object to world coordinates or the viewport transformation to screen coordinates, you should read “Vertex Transformations”.
• about how to define uniforms, you should read the tutorial on shading in view space.
Unless stated otherwise, all example source code on this page is granted to the public domain.
|
http://physics.stackexchange.com/questions/21048/why-does-string-theory-in-case-it-is-true-have-no-divergencies
|
# Why does string theory (in case it is true) have NO divergences?
Why is string theory in 10 or 26 dimensions not divergent? Due to the high number of spacetime dimensions (10 or 26), it should have a lot of UV divergences of the form $\int k^{n}dk$, and gravity within the string-theory approach should be non-renormalizable too, or shouldn't it?
-
4
– Qmechanic♦ Feb 15 '12 at 21:08
1
... I assume You are honestly interested in what You are asking such that my +1 was not a premature slip of my mouse ;-) – Dilaton Feb 15 '12 at 22:17
You must remember that string theory trades in an infinite tower of particles for a worldsheet, and the final world-sheet sum is much milder and more nonlocal than any of the particle sums that go into it. The momentum integration is not unbounded, because high k fluctuations become worldsheet fluctuations, and at high energy they are infrared big. – Ron Maimon Apr 8 '12 at 6:31
## 1 Answer
Bosonic closed oriented string theory is divergent in flat space time, see for example Lecture 3 of D'Hoker in "Quantum Fields and Strings" Volume 2. The reason is the presence of the tachyon. To my knowledge for the NS-NS string finiteness is only known up to one loop and already difficult.
Edit: Apparently a little bit more is known for the superstring, see Lectures on Two-Loop Superstrings by D'Hoker and Phong. This seems to be the most recent result.
-
|
http://mathhelpforum.com/algebra/190070-variables-within-variables-simultaneous-equation-solutions.html
|
# Thread:
1. ## Variables within variables? Simultaneous equation solutions
I have been asked to find real values for m such that this system of equations has no solution, infinitely many solutions and exactly one solution:
$2x-2y=5$
$10x-10y=m$
At first I thought that the question wanted me to find a value for each scenario, but I could only find a value that gave infinitely many solutions: m = 25. I can see that they are essentially the same line so any value other than 25 would just shift it up or down and cause the two lines to never intersect.
I couldn't see how any other real value for m could produce any solutions, until I thought: what if m = 10y?
$10x-10y=10y$
$y = 0$ (sub into first equation)
$2x = 5$
$x = \frac{5}{2}$
Does that count? I guess there would be infinitely many ways to do that though...
Am I way off track with this question?
2. ## Re: Variables within variables? Simultaneous equation solutions
No, you're not off track. The only way there are infinitely many solutions for real values of $m$ is when $m=25$, and the only way there are no solutions is when $m\neq{25}$.
I can't see any way to find real values of m that give 'just one solution'.
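A quick check with SymPy (assuming it is available) confirms the two cases:
```
from sympy import symbols, linsolve

x, y = symbols('x y')

# m = 25: the second equation is 5 times the first, so infinitely many solutions
print(linsolve([2*x - 2*y - 5, 10*x - 10*y - 25], x, y))   # {(y + 5/2, y)}

# m != 25 (say m = 24): parallel lines, no solution
print(linsolve([2*x - 2*y - 5, 10*x - 10*y - 24], x, y))   # EmptySet
```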
|
http://scicomp.stackexchange.com/questions/5213/nonlinear-bad-constraints
|
# Nonlinear bad constraints
I have an optimization problem with a linear objective function. The constraints fall into two groups: the first set of constraints is linear, while the second set is nonlinear. The nonlinear constraints are of the form $ab-cd \geq 0$.
- The optimization problem should be an instance of a convex optimization problem, right?
- Is it polynomially solvable?
- Is it possible to transform it into a semidefinite optimization problem (SDP)?
-
## 1 Answer
No, constraints of the form $ab-cd\geq 0$ are not convex.
We can prove this by showing that the set $$\mathcal{C}\triangleq \{(a,b,c,d)\,|\,ab-cd\geq 0\}$$ fails the standard midpoint test for convex sets. That is, for $\mathcal{C}$ to be convex, given any two points $(a_1,b_1,c_1,d_1)$ and $(a_2,b_2,c_2,d_2)$ in $\mathcal{C}$, the midpoint $$(a_3,b_3,c_3,d_3)=(a_1+a_2,b_1+b_2,c_1+c_2,d_1+d_2)/2$$ must be in $\mathcal{C}$ as well. Let us choose $$(a_1,b_1,c_1,d_1)=(2,2,1,1) \quad \text{and} \quad (a_2,b_2,c_2,d_2)=(-2,-2,1,1).$$ Since $a_ib_i-c_id_i=4-1=3\geq 0$ in both cases, both points are in $\mathcal{C}$. But the midpoint $$(a_3,b_3,c_3,d_3)=(0,0,1,1)$$ is not, since $a_3b_3-c_3d_3=0-1=-1\not\geq 0$. So the set is not convex.
Thus your problem is not a convex optimization problem. This necessarily means that it cannot be represented using semidefinite programming, either.
Unless you can somehow create a new, convex model for your application, you will not be able to solve it using convex methods.
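For what it's worth, the counterexample above is easy to verify numerically; a tiny sketch in Python:
```
def g(p):
    """Constraint value ab - cd at the point p = (a, b, c, d)."""
    a, b, c, d = p
    return a * b - c * d

p1 = (2, 2, 1, 1)
p2 = (-2, -2, 1, 1)
mid = tuple((u + v) / 2 for u, v in zip(p1, p2))

print(g(p1), g(p2), g(mid))   # 3 3 -1.0 : endpoints feasible, midpoint not
```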
-
Additional nonnegativity constraints on the variables might be enough to make the feasible set convex. If the original poster could provide more detail, it's possible that we could find an SDP formulation of his problem. – Brian Borchers Apr 19 at 20:53
– Michael C. Grant Apr 19 at 21:29
|
http://scientopia.org/blogs/goodmath/2012/11/10/for-every-natural-number-n-theres-a-cantor-crank-cn/
|
# For every natural number N, there's a Cantor Crank C(n)
Nov 10 2012 Published by MarkCC under Bad Logic, Bad Math, Cantor Crankery
More crankery? of course! What kind? What else? Cantor crankery!
It's amazing that so many people are so obsessed with Cantor. Cantor just gets under peoples' skin, because it feels wrong. How can there be more than one infinity? How can it possibly make sense?
As usual in math, it all comes down to the axioms. In most math, we're working from a form of set theory - and the result of the axioms of set theory are quite clear: the way that we define numbers, the way that we define sizes, this is the way it is.
Today's crackpot doesn't understand this. But interestingly, the focus of his problem with Cantor isn't the diagonalization. He thinks Cantor went wrong way before that: Cantor showed that the set of even natural numbers and the set of all natural numbers are the same size!
Unfortunately, his original piece is written in Portuguese, and I don't speak Portuguese, so I'm going from a translation, here.
The Brazilian philosopher Olavo de Carvalho has written a philosophical “refutation” of Cantor’s theorem in his book “O Jardim das Aflições” (“The Garden of Afflictions”). Since the book has only been published in Portuguese, I’m translating the main points here. The enunciation of his thesis is:
Georg Cantor believed to have been able to refute Euclid’s fifth common notion (that the whole is greater than its parts). To achieve this, he uses the argument that the set of even numbers can be arranged in biunivocal correspondence with the set of integers, so that both sets would have the same number of elements and, thus, the part would be equal to the whole.
And his main arguments are:
It is true that if we represent the integers each by a different sign (or figure), we will have a (infinite) set of signs; and if, in that set, we wish to highlight with special signs, the numbers that represent evens, then we will have a “second” set that will be part of the first; and, being infinite, both sets will have the same number of elements, confirming Cantor’s argument. But he is confusing numbers with their mere signs, making an unjustifiable abstraction of mathematical properties that define and differentiate the numbers from each other.
The series of even numbers is composed of evens only because it is counted in twos, i.e., skipping one unit every two numbers; if that series were not counted this way, the numbers would not be considered even. It is hopeless here to appeal to the artifice of saying that Cantor is just referring to the “set” and not to the “ordered series”; for the set of even numbers would not be comprised of evens if its elements could not be ordered in twos in an increasing series that progresses by increments of 2, never of 1; and no number would be considered even if it could be freely swapped in the series of integers.
He makes two arguments, but they both ultimately come down to: "Cantor contradicts Euclid, and his argument just can't possibly make sense, so it must be wrong".
The problem here is: Euclid, in "The Elements", wrote several different collections of axioms as part of his system. One of them was the following five rules:
1. Things which are equal to the same thing are also equal to one another.
2. If equals be added to equals, the wholes are equal.
3. If equals be subtracted from equals, the remainders are equal.
4. Things which coincide with one another are equal to one another.
5. The whole is greater than the part.
The problem that our subject has is that Euclid's axiom isn't an axiom of mathematics. Euclid proposed it, but it doesn't work in number theory as we formulate it. When we do math, the axioms that we start with do not include this axiom of Euclid.
In fact, Euclid's axioms aren't what modern math considers axioms at all. These aren't really primitive ground statements. Most of them are statements that are provable from the actual axioms of math. For example, the second and third axioms are provable using the axioms of Peano arithmetic. The fourth one doesn't appear to be a statement about numbers at all; it's a statement about geometry. And in modern terms, the fifth one is either a statement about geometry, or a statement about measure theory.
The first argument is based on some strange notion of signs distinct from numbers. I can't help but wonder if this is an error in translation, because the argument is so ridiculously shallow. Basically, it concedes that Cantor is right if we're considering the representations of numbers, but then goes on to draw a distinction between representations ("signs") and the numbers themselves, and argues that for the numbers, the argument doesn't work. That's the beginning of an interesting argument: numbers and the representations of numbers are different things. It's definitely possible to make profound mistakes by confusing the two. You can prove things about representations of numbers that aren't true about the numbers themselves. Only he doesn't actually bother to make an argument beyond simply asserting that Cantor's proof only works for the representations.
That's particularly silly because Cantor's proof that the even naturals and the naturals have the same cardinality doesn't talk about representation at all. It shows that there's a 1 to 1 mapping between the even naturals and the naturals. Period. No "signs", no representations.
The second argument is, if anything, even worse. It's almost the rhetorical equivalent of sticking his fingers in his ears and shouting "la la la la la". Basically - he says that when you're producing the set of even naturals, you're skipping things. And if you're skipping things, those things can't possible be in the set that doesn't include the skipped things. And if there are things that got skipped and left out, well that means that it's ridiculous to say that the set that included the left out stuff is the same size as the set that omitted the left out stuff, because, well, stuff got left out!!!.
Here's the point. Math isn't about intuition. The properties of infinitely large sets don't make intuitive sense. That doesn't mean that they're wrong. Things in math are about formal reasoning: starting with a valid inference system and a set of axioms, and then using the inference to reason. If we look at set theory, we use the axioms of ZFC. And using the axioms of ZFC, we define the size (or, technically, the cardinality) of sets. Using that definition, two sets have the same cardinality if and only if there is a one-to-one mapping between the elements of the two sets. If there is, then they're the same size. Period. End of discussion. That's what the math says.
Cantor showed, quite simply, that there is such a mapping: pair each natural number n with the even number 2n. Every natural number gets a partner, every even number gets hit exactly once, and no number is used twice.
There it is. It exists. It's simple. It works, by the axioms of Peano arithmetic and the axiom of comprehension from ZFC. It doesn't matter whether it fits your notion of "the whole is greater than the part". The entire proof is that set comprehension. It exists. Therefore the two sets have the same size.
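If seeing a few terms of the pairing helps, here it is in a couple of lines of Python:
```
from itertools import islice, count

pairing = ((n, 2 * n) for n in count())   # n <-> 2n: every even number is hit exactly once
print(list(islice(pairing, 8)))
# [(0, 0), (1, 2), (2, 4), (3, 6), (4, 8), (5, 10), (6, 12), (7, 14)]
```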
Tags: cantor, cantor crank, infinity
36 responses so far
• Peter Krautzberger says:
In fairness, there are good mathematical theories that try to capture other notions of "size", "whole" and "part". They don't pretend to refute Cantorian set theory, of course.
• I am no expert mathematician, but the different scale of the infinities of the integers and the real numbers seems very intuitively obvious, as does the identical scale of the infinities of the integers and the even (or odd) integers.
• Yiab says:
The different sizes of the integers and the real numbers becomes a lot less intuitively obvious when you take into account the fact that the integers and the algebraic numbers (roots of polynomials with integer coefficients) are sets of the same size (here size=cardinality).
• Peter Krautzberger says:
Or to make matters worse: the set of computable numbers -- still only the size of the integers.
• Sophie says:
Let's play a round of rescue the crank. The rules are simple: Make a true and sensible statement out of what the crank said, although he obviously did not say it.
One could argue that he says that Cantors proof works well in the category of sets, where elements are so called "signs". But to talk about the even numbers as "numbers" we need to look at the category of integral domains without one, because in SET the whole point of the even numbers being even is not visible. Now in this category we have indeed that the even numbers are a true subobject of the whole numbers and the reason for this is that the whole numbers have the number one, which is a unit, while the even numbers don't have such an element and skipped it.
• Nick Johnson says:
I like to imagine that cantor, while writing up his proof, was giggling to himself and muttering "this is totally going to blow their minds".
Regarding the title, are you sure? There seem to be uncountably many cantor cranks.
• Snoof says:
It wouldn't be hard to find out. We could just hold a convention at Hilbert's Hotel. If there's anyone who can't get a room, then there must be an uncountable number of cranks.
• Invincible Irony Man says:
What's the problem, that Cantor doesn't conform to our naive intuitions enough so it must be wrong? Since when was the concept of infinity intuitive in the first place? It seems to me that if you want to know its properties, then do the math! Why assume that your unmathematical concept of infinity - ie. trying to intuit its properties without doing the math - is informed by anything other than prejudice?
• Invincible Irony Man says:
Carvalho cites Euclid’s fifth common notion "that the whole is greater than its parts".
You cite it as "The whole is greater than the part."
(You are right)
It doesn't take recourse to mathematics to see what is wrong with that!
Carvalho seems to be confusing Euclid's fifth common notion with the phrase "the whole is greater than the sum of its parts", which may sound similar to Euclid, but clearly means something different. Surely Euclid was talking about "part" (singular), not "parts" (plural).
The phrase "the whole is greater than the sum of its parts" is difficult to source definitively. I can find it attributed to John Stuart Mill (poorly) and Aristotle (for no reason at all, as far as I can tell), or it may be a mis-quote of Kurt Koffka's phrase, "The whole is other than the sum of the parts" from gestalt psychology.* One thing I am pretty sure of though, it fails pretty miserably as an axiom of mathematics, which is probably why Euclid never said it.
* One person had it, "the hull is greater than the sum of its ports" - Mr. Spock
• John says:
Carvalho is confused, but not in this. He's comparing the whole (the integers) with one part (the even integers).
The problem is that "the whole is greater than the part" comes with the unstated assumption that the part is finite. That's how Euclid always uses it. It is not necessarily true for infinite objects, but Euclid isn't interested in them.
I'm sure I've seen this used as a definition of infinite sets: a set is infinite if it is bijective with some proper subset of itself.
• Reinier Post says:
In geometry, it is true even for infinite objects, but I think you're right that Euclid was considering them. In any case, I think you hit the nail on the head. By the way, Euclid's notion of 'greater than' is captured pretty exactly in mathematics by the subset relationship, by which the set of whole numbers is indeed greater than the set of even numbers. However, it doesn't follow that the *number* of whole numbers is greater than the *number* of even numbers, because, being infinite sets, they can't be numbered with natural numbers, and what Cantor showed is that you can in fact extend the notion of numbers to number infinite sets by using the existence of bijections, but at the expense of having to assign the same number to sets and many of their subsets (e.g. the whole and even numbers). I think it's a stretch to call Carvalho a 'crank': nobody seems to have explained these missing steps, and he's trying to make up his own explanation, which boils down to: by comparing the sets with a bijection, you're only putting the *denotations* (not: signs) of the numbers into correspondence, and you're ignoring their magnitudes, and you can't do that. The problem with that explanation is that he doesn't succeed in explaining why you can't ignore them, and that is no surprise, because his feeling that you can't is based on a confusion between the magnitude of a set and the magnitudes of its elements.
• Reinier Post says:
"wasn't" - sorry
• Joseph McCauley says:
My high school students loved finding out about different infinities. (This is sooo cool!) Still, none of them succumbed to new-age ideas that, therefore, anything is possible. I had some sharp students. No cranks.
• David says:
"For every natural number N, there's a Cantor Crank C(n)"
I would have thought that you would need IRRATIONAL numbers to count the Cantor Cranks. Putting them in correspondence with rationals seems to be inviting trouble.
• clonus says:
yeah it definitely seems like there are at least as many as the reals
But, if there are N Cantor Cranks for every natural number N, are there uncountably many Cantor Cranks?
• delosgatos says:
Nope. If N_k denotes crank k corresponding to natural number N, the mapping N_k -> 1 + 2 + ... + (N-1) + k is a bijection between the cranks and the natural numbers.
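(delosgatos's formula is the familiar triangular-number pairing, and it is easy to check mechanically. A small Python sketch, with names chosen purely for illustration, that enumerates the cranks in that order and confirms the labels come out consecutive:)

```python
# Crank N_k is the k-th of the N cranks attached to the natural number N (k = 1..N).
# delosgatos's map sends N_k to 1 + 2 + ... + (N-1) + k, i.e. N*(N-1)//2 + k.
def crank_index(N, k):
    return N * (N - 1) // 2 + k

expected = 1
for N in range(1, 7):              # walk through the first few natural numbers
    for k in range(1, N + 1):      # the N cranks attached to N
        assert crank_index(N, k) == expected   # labels come out 1, 2, 3, ... with no gaps
        expected += 1
print("the first", expected - 1, "cranks receive the labels 1 through", expected - 1)
```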
• Davi says:
The refutation seems pretty clear and well founded to me.
Of course there is a mapping. Simply because the sets are the same! That is the refutation. The set of integers and the set of evens are the same set. Where's the crankery in this?
The refutation simply says that Cantor's error is not in the mapping, but in the sets. For Cantor, the set of integers is ONE set and the set of evens is ANOTHER set (different from the ONE).
The counter argument to the refutation only addresses some notions of Euclid which seem hardly to be the main case or the foundation of it.
The fact that Cantor did not address representations or signs is EXACTLY the reason why he failed to understand his mistake. When he dealt with the set of even numbers, he did not realize that this set is the same as the set of integers, BUT represented in a different way.
Maybe I'm wrong. But the refutation has nothing to do with math, simply because the error is not mathematical per se. It's simply a confusion between signs and quantities. I.e., the sign "4" is not the double of the sign "2", but the quantity "Four" is the double of quantity "Two". Cantor is mixing both and simply stating that the symbol "4" (which sometimes is used to indicate the quantity "four") is the actual double of the symbol "2" (which sometimes is used to indicate the quantity "two"). Put in simpler terms, it would be as if Cantor is saying that "D" is the double of "B", and when finding out that the number of letters is the same (have same cardinality), he deduced some strange notions for the set that contains "B" and the set that contains "D" as if they were different.
• delosgatos says:
He's talking about sets, not signs or quantities. It's not that you're wrong, it's that you're not *even* wrong.
• Davi says:
Sets of what? Signs or quantities? Call it "even"?
• delosgatos says:
Sets of elements.
• Davi says:
Exactly!
• Vicki says:
Davi,
If "four" is a quantity and "4" is an arbitrary sign, we can do this mapping by spelling out those quantities using the Latin alphabet: one, two, three, four, five....
The mapping gives us one → two
two → four
three → six
It's more tedious to do this way than with the conventional Arabic numerals, but it's the same valid mapping of the natural numbers onto the even numbers.
This is the same sort of error as asserting that a proof is valid in English but not in Portuguese, or vice versa: that's a sign of a flawed translation, not evidence that the math is different in Rio than in New York.
(You could write it as "map I onto II, map II onto IV, map III onto VI, map IV onto VIII,..." and risk new sources of confusion, but the actual statements wouldn't change.)
• Davi says:
I'm sorry, but you seem to be getting back to "mapping". Again, the problem is not in the mapping. The problem is in the sets.
It's sort of like this: You have houses. Identical to one another. Side by side. In order to differentiate the houses you decide to put a label on them. The labels are "1", "2", "3", "4".... Is it easy to see that the house labeled as "4" is not the double of the house labeled as "2"? If you change the labels of the houses to "2", "4", "6", "8"... you get neither the doubles of the houses nor the "even" houses. You get the same houses labeled in a different way.
Cantor's mistake is this: He took an infinite set labeled as "1", "2", "3"; later on, took the same infinite set labeled as "2", "4", "6", BUT he did not notice that they were the same set. Because they were labeled in a different way, Cantor thought that they were different sets but with the same "size" (cardinality). He failed to grasp that the mapping only occurred because they were the same sets!
It's like getting the sets {1,2,3} and {2,4,6} and saying that they are "trans-three-finites" sets.
• delosgatos says:
Don't you think it's rather interesting that you can throw out all the odd numbered labels on the houses labeled {1,2,3,4,5...}, and then redistribute all the remaining labels so that every house has a label and there are none left over?
Note that we didn't throw out any houses, so clearly the set of houses originally labeled with even labels is different from the set of houses ultimately labeled with even labels.
Also note that this doesn't work for any finite set of houses.
• Davi says:
Of course it's interesting. But only because infinity is interesting (hint: the left over part is where it gets interesting). As it's also interesting that you can throw out all numbered labels and substitute them for {!, @, #, $, %, ...}, such that each symbol is different from all the previous ones, and find out that all houses have labels and there are none left over.
• John says:
You appear to be claiming that the natural numbers are indistinguishable (at least, that's what appears to be the case from your identical houses). They aren't. Arithmetic would be impossible otherwise. Students could write '7' as the answer to every problem, and be right. Chaos. But fortunately, numbers have identities independent of the names we give them.
The labels are arbitrary, but once we've chosen an assignment we generally don't change it. The two sets {1, 2, 3, ...} and {2, 4, 6, ...}, if they're using the same assignment of labels to numbers, are different sets. One of them contains the number one, and the other doesn't. You're not claiming that the number one and the number two are the same, are you?
• Davi says:
From the refutation point of view, you're getting back to Cantor's confusion. You're talking about quantities and signs and mixing both. The quantity represented by the number "1" is not the same as the quantity represented by the number "2". But if I'm simply relabelling a house from "1" to "2", then yes 1 and 2 are the same. They are the same house! Only the label changed.
• John says:
But numbers are not houses, and Cantor-style proofs do not change the identity of the elements of the sets, or rely on (or indeed use) any kind of labelling. And even your houses can be distinguished, by measuring how far they are along the road (which is infinite only in one direction).
Start with two sets, { 1, 2, 3, ... } and { 2, 4, 6, ... }. They are different sets, as can be seen from the presence of 1 in the first but not the second. 1 is a number that we can recognise, and distinguish from any other (no other number is the multiplicative identity, for a start).
Now we define cardinality. Two sets have the same cardinality if we can find a bijection between them. There is no labelling going on. We're just lining up the elements side by side, and seeing that for each element in one set there is one in the other.
In this case, we can define the mapping 1→2, 2→4, 3→6, and so on. We haven't done anything to the elements of the two sets; we've just found a partner for each element from the other set. 1 is still 1, 2 is still 2. All we're saying is that if we put 1 from the first set next to 2 from the second (and similarly with the rest), then every element has a partner. You seem to be misunderstanding this bit.
Since there is a bijection, the two sets have the same cardinality.
• clonus says:
Sets are defined by their members, so {1,2,3...} and {2,4,6...} really are different. By {1,2,3..} we literally mean the set containing the natural numbers, not "a set with a bunch things labeled with numbers".
• Robin Adams says:
Just for historical interest: the discovery that an infinite set can be the same size as a proper subset is older than Cantor. Galileo noted it in his ''Two New Sciences'' (1638).
• brunosaboia says:
I can translate the Portuguese for you, if you want. Very interesting post, anyway, thanks. I myself struggled to grasp it a long time ago, but I understood it with the "bijective" approach.
• Pedro Goncalves says:
I have browsed through the book's ideas and I think it's more of a provocation than anything else. He's looking at Cantor from a philosophical point of view, not a mathematical one. Also, if we followed Olavo's method literally, anything we can define would be true just because we defined it. Again, a strictly metaphysical notion.
• Alex says:
Ok...
• Marcelo says:
I remember reading two or three "essays" by this guy Olavo de Carvalho some years ago, but they didn't have any math in them... It was all about a worldwide conspiracy trying to disrupt the sanctity of the traditional Christian family.
Crank magnetism?
http://skullsinthestars.com/2012/05/04/coherence-plasmons-and-me/
The intersection of physics, optics, history and pulp fiction
## Coherence, plasmons, and me!
Posted on May 4, 2012
I don’t often talk about my own research on this blog… heck, I don’t think I’ve ever talked about my own research here, come to think of it! I thought it would be a nice change of pace to describe a paper that recently appeared in the journal Plasmonics of which I am a co-author. The paper, titled, “Coherence converting plasmonic hole arrays”, describes how one can use an array of subwavelength-size holes in a thin metallic screen to alter the statistical properties of a light beam incident upon it! It has appeared online at Springer’s site and will be “officially” published later this year.
For those not familiar with optics, there’s a lot to unpack in even the title of the paper: What is “coherence”? What is a “plasmon”? Why do we care about “converting” coherence? Let’s take a look at each of these ideas in turn as we build an explanation of what my collaborators and I have accomplished!
Let’s start with a short review of what we mean by optical coherence. Light has long been known to possess wave properties, and like all waves it can generate interference patterns: alternating regions of high and low intensity. The archetypical example is produced in Young’s double slit experiment, in which light is transmitted through a pair of slits in an opaque screen. The light emanating from the two slits interferes and produces a pattern on a secondary screen further “downstream”.
If the experiment is prepared correctly, one sees alternating bands of light and darkness on the second screen, as shown in a simulation below.
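(If you want to play with a pattern like that yourself, a minimal Python sketch of the idealized two-slit intensity is below; the wavelength, slit separation and screen distance are arbitrary illustrative values, not the ones behind the post's figure.)

```python
import numpy as np

# Idealized Young's double slit: two mutually coherent point sources a distance d apart,
# observed on a screen a distance L away. The intensity goes as cos^2 of half the
# phase difference between the two paths.
wavelength = 500e-9   # 500 nm light (an arbitrary illustrative choice)
d = 50e-6             # slit separation (arbitrary)
L = 1.0               # slit-to-screen distance, in meters

x = np.linspace(-0.02, 0.02, 1001)              # positions on the observation screen
delta = 2 * np.pi * d * x / (wavelength * L)    # phase difference between the two paths
intensity = np.cos(delta / 2) ** 2              # bright and dark fringes for coherent light

# For completely incoherent illumination the cross term averages to zero and the fringes
# wash out, which is the situation described in the next paragraph.
print("expected fringe spacing:", wavelength * L / d, "m")
```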
However, if one does this experiment with an ordinary light source, like a light bulb, the interference pattern will almost certainly not be seen. The reason for this is that a light bulb produces light that has different wave vibrations at different points in space — the light reaching the two slits in Young's experiment is uncorrelated, or spatially incoherent. A crude illustration of the difference between correlated and uncorrelated waves is shown below; basically, correlated waves are doing the same thing, and uncorrelated waves are "doing their own thing".
We can only get interference patterns by the interference of light from highly spatially coherent sources, such as a laser or a conventional source which has been appropriately collimated and filtered.
The propagation characteristics of light strongly depend on its spatial coherence. In addition to interference effects, the spatial coherence affects the directionality of light. A coherent laser beam, for instance, creates a beam of light that is essentially in a single direction, whereas a light bulb radiates in almost every direction. There are applications where it is useful to have light which is partially coherent: highly directional, but not able to produce strong interference patterns. For instance, it has been shown that partially coherent light will propagate through atmospheric turbulence with less distortion than a comparable fully coherent beam, making partially coherent beams potentially useful for free-space optical communications systems. Any technique which allows us to modify the spatial coherence of light would therefore be an excellent tool.
This is where surface plasmons come in! Speaking technically, a surface plasmon is an electron density wave that can be excited along the surface of certain metals by light. This is illustrated schematically below.
The “+” and “-” represent lower and higher densities of electrons along the surface, respectively, and the “E” and “H” represent the directions of the electric and magnetic fields. (The right-most figure crudely illustrates how the intensity of the surface plasmon is maximum at the surface and decays away rapidly away from it.)
A plasmon is an electron density wave, which means the thing that oscillates and travels along the surface of the metal is the highs and lows of the density of electrons. This is quite analogous to the behavior of sound waves, in which the density of the air molecules is “waving”.
There are a number of important aspects of surface plasmons (to be referred to simply as “plasmons” from now on) that make them very useful for optics applications. First, plasmons have a shorter wavelength, and consequently higher momentum, than the light that excites them. This means that plasmons cannot be excited on a smooth metal surface, but only at bumps or holes on that surface. Also, once excited, those plasmons will carry energy along the surface until they are either absorbed or hit another hole and get converted back into light. Second, plasmons are coherently excited at one of these holes or bumps, which means that there is a perfect correlation between the light illuminating the hole and the plasmon wave that gets generated.
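(A side note to make the momentum mismatch quantitative: for a flat interface between a dielectric of permittivity $\varepsilon_d$ and a metal of permittivity $\varepsilon_m$, the textbook surface plasmon dispersion relation is

$$k_{\rm sp} = \frac{\omega}{c}\sqrt{\frac{\varepsilon_m \varepsilon_d}{\varepsilon_m + \varepsilon_d}},$$

and whenever $\mathrm{Re}\,\varepsilon_m < -\varepsilon_d$, as for a good metal at visible frequencies, this exceeds the wavenumber $\sqrt{\varepsilon_d}\,\omega/c$ of a freely propagating photon in the dielectric. A perfectly flat surface cannot make up the difference in momentum, which is why the bumps and holes are needed.)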
Around 2007, my postdoc advisor started to think that these surface plasmons could change the state of coherence of a light wave. How would this work? Let’s look at a cartoon version of Young’s double slit experiment, and imagine that each slit is illuminated by an independent light beam. Therefore one hole is illuminated by “apples”, and the other by “oranges”.
The “apple light” and the “orange light” cannot interfere, and after passing through the holes, that hasn’t changed — no interference pattern will be created.
But now let us suppose that the screen with the slits can support surface plasmons. When the apple light hits the slit, part of it will be directly transmitted through the hole, but some of it will convert into a surface plasmon. This “apple plasmon” will travel to the other hole, where it will convert back into apple light! A similar argument applies for the orange light, and the result is illustrated below.
The light going into the pair of slits was completely uncorrelated (apples & oranges), but the light coming out of the slits is now correlated — apple/orange & apple/orange! By essentially swapping a part of the light between the two slits, the spatial coherence of the light has been increased.
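(A quick numerical cartoon of the swapping idea, for the curious. Below, two completely uncorrelated random fields stand in for the apple light and the orange light, and each output slit carries its own field plus a fraction of the other's; the fraction 0.3 is an arbitrary illustrative number, not a coupling computed from any real plasmonic geometry.)

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
apple = rng.standard_normal(n) + 1j * rng.standard_normal(n)    # field at slit 1
orange = rng.standard_normal(n) + 1j * rng.standard_normal(n)   # independent field at slit 2

def degree_of_coherence(u, v):
    # |<u v*>| normalized by the intensities: 0 = incoherent, 1 = fully coherent
    return abs(np.mean(u * np.conj(v))) / np.sqrt(np.mean(abs(u)**2) * np.mean(abs(v)**2))

print("before swapping:", degree_of_coherence(apple, orange))    # essentially 0

eps = 0.3                      # fraction of each field delivered to the other slit
out1 = apple + eps * orange    # direct light plus the "plasmon" arriving from slit 2
out2 = orange + eps * apple
print("after swapping: ", degree_of_coherence(out1, out2))       # clearly nonzero
```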
My collaborators and I published a theoretical paper¹ on these results in 2007, titled, “Surface Plasmons Modulate the Spatial Coherence of Light in Young’s Interference Experiment.” In this work, we demonstrated that, depending on the separation of the slits and the wavelength of light, the spatial coherence of light could be increased or even decreased on passing through the pair of slits. The increase in spatial coherence was demonstrated experimentally and published later that same year.²
This work got me thinking: is it possible to change the spatial coherence of an entire beam of light using surface plasmons? In the plasmonic double slit experiment, we are only changing the spatial coherence of light at two points of a beam of light — the rest of the light is being blocked by the metal screen.
The natural thought was to imagine transmitting light through an array of holes in a metal screen.
When this array is illuminated by a beam of light, plasmons will propagate back and forth between multiple holes, and the light emanating from the collection of illuminated holes will presumably form a beam of light that has a modified state of coherence. But will it work?
Studying the interactions of plasmons bouncing around between multiple holes on a metal surface requires rather complicated numerical simulations, and things cannot be done with simple pen and paper calculations. As an intermediate step, my student Choon How and I studied³ the behavior of surface plasmons in Young’s three slit experiment!
The big question in the three slit study was whether the middle slit would hinder the propagation of plasmons to the outer holes. That is: can plasmons from the left hole “jump” the middle hole and influence the right hole?
It was found via simulation that plasmons do indeed jump the middle hole, and that the presence of the middle hole could help enhance the coherence between the outer holes. This opened the door to studying how the overall coherence of a beam of light could be enhanced by an array of holes in a plasmonic screen.
Every researcher eventually runs into an annoying project that just takes forever to get completed and written up. This study of a “coherence-converting plasmonic array” was such a project for me, and it took roughly 3 years to get it submitted for publication. It turned out to be difficult in large part for two reasons: 1. the complexity of the simulations, and 2. the challenge in defining the “overall” coherence of a beam of light.
In general, it can be tricky to perform exact simulations of the interaction of light with matter. The multi-hole plasmonic array we were interested in turned out to be a particularly difficult system to model, and in fact exact simulations were beyond the computing capabilities I had available at the time. However, it turned out that the exact results of the double slit and the triple slit cases could be reproduced quite accurately with a “toy model” of the system. For the double slit case, this toy model really involved treating the light propagation as consisting of the two parts described above: direct transmission through the slit and slit-plasmon-slit coupling. We implemented a similar model to study the interaction of light and plasmons for an array of holes; we were partially justified by the fact that others had done essentially the same thing to study different plasmonic systems.
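(Schematically, and only schematically, such a toy model treats the field leaving hole $j$ as the sum of a directly transmitted piece and plasmon contributions launched at every other hole, something of the form

$$U_{\rm out}(x_j) \;\approx\; \tau\, U_{\rm in}(x_j) \;+\; \sum_{k\neq j} \beta\, e^{i k_{\rm sp}|x_j - x_k|}\, U_{\rm in}(x_k),$$

where $\tau$, $\beta$ and $k_{\rm sp}$ stand for a direct transmission coefficient, a hole-plasmon-hole coupling strength and the plasmon wavenumber. The symbols and the precise form here are illustrative rather than the paper's actual model, but the spirit is the same: the correlations of the output fields are then computed from those of the inputs.)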
Defining the “overall” or “global” state of coherence of a light beam also turned out to be somewhat of a challenge. Spatial coherence is usually characterized between two points in a light wave, as for instance the coherence between the light emanating from the two slits in Young’s experiment. Though it is clear that some light beams are more coherent than others (a laser produces light more coherent than a light bulb), quantifying this overall coherence has never been done satisfactorily. In the end, we opted to simply look at the coherence of light emanating from multiple pairs of holes in the array; if the coherence of most or all pairs increased/decreased dramatically, it was a good sign that the overall coherence had been modified.
We studied 3 by 3 hole arrays, 4 by 4 hole arrays, and a 7-hole hexagonal array. We looked at how the degree of coherence of light emanating from the holes, called $\mu_f$, changed as a function of hole separation $d$. A sketch of the hexagonal array, and the coherence results for that hexagonal array, are shown below.
Hexagonal hole array with hole separation d.
The left figure above shows the degree of coherence — 0 for completely incoherent, 1 for fully coherent — between the holes A and C as a function of the array spacing $d$, and compares it to what the degree of coherence would look like in the absence of plasmons (dashed line). It can be seen that for certain hole separations, the spatial coherence increases dramatically. In the right figure, we see the degree of coherence compared for holes A and B and for holes A and C as a function of array spacing. It can be seen that, for certain values of $d$, the coherence increases dramatically for both hole pairs; this is an indication that the transmitted beam as a whole has had its coherence increased! For instance, at hole separation 1.1 μm, 1.6 μm and 2.2 μm, both coherence values spike sharply, almost to full coherence!
We learned a few things from the simple simulations done in this research project. First and foremost, we found that we could in fact change the global state of coherence of a light beam by the use of a plasmonic hole array. This puts me one step further to my goal of designing “coherence converting plasmonic devices”, which could be attached to the front of a light source to change its statistical properties for various applications. We also noted that, unlike the two slit and three slit cases, there does not seem to be a hole separation that results in a global decrease in coherence. For some hole separations, the coherence between particular pairs of holes can be lower than the illuminating light, but we found no unique values of $d$ for which all holes exhibited a decrease in coherence. This may change, however, if we use more complicated hole arrangements.
Future research will involve the development of exact numerical simulations of the plasmonic systems in question, as well as investigations of more complicated hole arrays.
This work is also a good example of why it is important to diversify one's research interests. I had developed strong independent backgrounds in the theories of optical coherence and surface plasmons; this made it very easy to combine my knowledge to study the coherence of surface plasmons, a topic that nobody else had investigated before!
*******************
¹ C.H. Gan, G. Gbur and T.D. Visser, “Surface plasmons modulate the spatial coherence in Young’s interference experiment,” Phys. Rev. Lett. 98 (2007), 043908.
² N. Kuzmin, G.W. ‘t Hooft, E.R. Eliel, G. Gbur, H.F. Schouten and T.D. Visser, “Enhancement of spatial coherence by surface plasmons,” Opt. Lett. 32 (2007), 445.
³ C.H. Gan and G. Gbur, “Spatial coherence conversion with surface plasmons using a three-slit interferometer,” Plasmonics 3 (2008), 111.
*******************
Gan, C., Gu, Y., Visser, T., & Gbur, G. (2012). Coherence Converting Plasmonic Hole Arrays Plasmonics DOI: 10.1007/s11468-011-9309-1
### 10 Responses to Coherence, plasmons, and me!
1. zitterbewegung says:
Stuck in traffic and reading this, pretty interesting.
• skullsinthestars says:
Thanks!
2. Nicholas Condon says:
Hmmm. The thought of using plasmonics to modify coherence had never occurred to me, so I found this very interesting. Talks and articles about plasmonics always get me feeling as if there’s a spectacular research idea in the back of my head that I can’t quite drag up to the conscious level. Hmmm.
Even though I can follow your original papers (for suitably small values of “follow”), it’s always nice to see research written up at a more accessible and informal level.
• skullsinthestars says:
Thanks! I tend to avoid writing about my own work, but I thought this was a suitably interesting result to describe.
3. Yoron says:
Sweet stuff
Any way to test this without the ‘bumps and grooves’ of matter?
An electron density can be presumed without a metal too, as I understand?
Can one make an electron density field in air?
Or in a vacuum?
• skullsinthestars says:
Any way to test this without the ‘bumps and grooves’ of matter?
Not quite sure what you mean! It’s well known both experimentally and theoretically that plasmons & light only couple on surfaces with some sort of roughness.
An electron density can be presumed without a metal too, as I understand?
Can one make an electron density field in air?
Or in a vacuum?
Surface plasmons only appear in materials with the right dielectric and conductive properties. These properties mostly appear only in metals over a limited range of frequencies. Can’t do it in vacuum — there’s nothing there, including electrons!
4. agm says:
I can imagine that building the simulations was quite challenging. Similar sorts of studies of distance dependence in geostatistics can be fiendishly difficult to calibrate and perform.
• skullsinthestars says:
The simulations weren’t *too* bad, as the system was a relatively simple one. However, the problem had to be solved “self-consistently”, so that one takes into account all of the multiple interactions possible between light and plasmons. Also, we had to determine certain fitting parameters for the simulations by other means, and the results crucially depended on those parameters!
5. Yoron says:
Ouch.
You’re right.
Didn’t think there, it was/is met wondering if ‘fields’ could interact.
But they can’t, can they?
Not in a vacuum.
It’s only in interactions with matter then.
6. Yoron says:
But then again?
Free electrons in a vacuum, with an EM field directing them, exist, don't they?
And electrons have a rest mass.
http://mathoverflow.net/questions/94195?sort=oldest
## Representing groups with two generators as graph automorphisms
Suppose we have a group $G$ which can be generated by two elements $x$, $y$. Call $H$, $K$, $L$ the subgroups of $G$ generated by $x$, $y$ and $y^{-1}x^{-1}$, respectively.
With these data, we can build up a graph $\mathcal{G}$, by declaring the vertices of $\mathcal{G}$ to be the right-cosets of $H$, $K$ and $L$, and by adding an edge between two right-cosets if they have nonempty intersection. Obviously, $G$ acts faithfully on $\mathcal{G}$.
Geometrically, $\mathcal{G}$ can be seen as a planar graph made of triangles (the vertices of each triangle correspond to right-cosets of the different subgroups $H$, $K$, and $L$). Then acting by, e.g., $x$ means rotating $\mathcal{G}$ around the vertex $H$. Furthermore, if $\mathcal{G}$ is represented on a constant-curvature surface, then the angle of such a rotation is $2\pi/|H|$. Similarly for $y$ and $y^{-1}x^{-1}$. In other words, $G$ is represented in the group of automorphisms of a regular tiling.
I heard about this construction years ago during an undergraduate class, but back then I wasn't interested (I even forgot who the lecturer was). Lately, I rediscovered it, and I've spent a few days searching the web, but I couldn't find anything resembling what I've explained above. Posting my question here is my last hope! Somehow, it looks like such a simple idea that it is hard to believe it cannot be found anywhere!
More precisely, my question is the following:
Is there a standard construction to associate a graph $\mathcal{G}$ with a group with two generators $G$, in such a way that
• $\mathcal{G}$ can be realized as a regular tiling made of triangles on a constant-curvature surface, and
• $G$ can be seen as a subgroup in the group of automorphisms of $\mathcal{G}$?
-
Something's strange here. How do you know that $\mathcal{G}$ is planar? Note that there are 2-generator groups that do not preserve a regular tiling of the plane. – HW Apr 16 2012 at 9:58
Where are you getting triangles from? On a side note, this construction is very standard, and is an instance of a Tits geometry. – Steve D Apr 16 2012 at 11:46
@Steve: The triangles are triples of cosets $Ha, Ka, La$ for all $a$. Different $a$ give different triangles provided the intersection of $H, K, L$ is trivial. There is something strange with the question: some phrases seem to mean that $H,K,L$ are finite and some phrases seem to indicate that there are many $H,K,L$. – Mark Sapir Apr 16 2012 at 12:53
Thank you very much, Steve D, you solved my problem! BTW, triangles appear as those complete subgraphs whose three vertices are right-cosets of different types. And, HW, you are right - but I had in my mind a tiling of a constant-curvature surface (plane, sphere, hyperbolic plane). Also, by "regular" I meant not made of regular triangles, but obtained by applying a rigid motion to a prescribed, not necessarily regular, triangle. Sorry for being so imprecise! – G_infinity Apr 16 2012 at 13:35
@Mark Sapir: thanks for your remark about triangles. And, yes, I forgot to mention that there are precisely three subgroups $H$, $K$, and $L$, and also that these subgroups are of finite order. – G_infinity Apr 16 2012 at 13:39
## 1 Answer
I think what you are trying to remember is the triangle group $\langle x,y\mid x^k=y^l=(xy)^m=1\rangle$ (see Wiki). Depending on whether $1/k+1/l+1/m$ is less than, equal to, or greater than 1, the group corresponds to a tessellation of the hyperbolic plane, the Euclidean plane, or the 2-sphere. The tessellation is constructed as you described.
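For a quick sanity check with small values: $(k,l,m)=(2,3,6)$ gives $1/2+1/3+1/6=1$, the Euclidean tiling by $30$-$60$-$90$ triangles; $(2,3,5)$ gives $31/30>1$, the spherical (icosahedral) case; and $(2,3,7)$ gives $41/42<1$, the classical hyperbolic $(2,3,7)$ triangle group.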
-
http://math.stackexchange.com/questions/tagged/statistics+soft-question
# Tagged Questions
1answer
48 views
### Applications of information geometry to the natural sciences
I am contemplating undergraduate thesis topics, and am searching for a topic that combines my favorite areas of analysis, differential geometry, graph theory, and probability, and that also has ...
3answers
78 views
### Statistics Workshop for High School Students
We are going to hold an introductory workshop about the statistics. The participants will be students who have just finished their 8th or 9th grade. The workshop consists of 10 two-hour sessions. The ...
1answer
30 views
### Field of mathematics which deals with similarity of a set of objects each with property variables
(Contextual word of warning: Question written by mathematical novice.) I have a large set of objects. Each object has three variables. Each variable is a number between 0 and 1. For each object in ...
0answers
27 views
### Likely related elements
I have lists of pair like so : KHLM1800, (a,b,c) KHLM1900, (a,b,d) KHLM1840, (a,b,c,d) KHLM1845, (a,b,c) TCMB9001, (a,c) . . . Naively it looks like KHLM ...
0answers
18 views
### What is the real-world intuition of a data set that exhibits a gamma distribution with shape>1?
From the special case of the exponential distribution (shape=1), we know that the interpretation of scale has to do with the rate of the success, but how can I interpret the shape? Does it mean that ...
2answers
68 views
### Intuition Of Conditional Probability Equation
I was wondering if any one of you had any intuitive insight regarding the conditional probability equation, $P(A\mid B) = \large \frac{P(A \cap B)}{P(B)}$. In my textbook, they give a mere definition, ...
1answer
23 views
### Box Plot Log Scaled?
When making box plots to represent data, is it logical to set our axis scale to logarithmic? Or does that defeat the whole point of a box plot?
4answers
187 views
### Math vs Probability vs Statistics
For a certain job interview, I gave myself a 6 in SQL and 8 in Statistics. I love math and probability but I always found significance testing and confidence intervals rather dry. What is the ...
0answers
74 views
### Can a batsman's score be predicted in the sport of cricket?
I was browsing The Signal and the Noise in bookstore and chanced upon a chapter about predictions in baseball and found out about Moneyball and sabermetrics. I understand the closest to predicting a ...
1answer
189 views
### In general, is it easier to solve problems that have random variables, or problems that are basically the same that don't [closed]
This question is asked because I don't understand how random variables will affect various math problems, and knowledgeable mathematicians would. By easier, we mean less steps If we make our own ...
2answers
48 views
### Averages and Team
I have a question: Suppose $5$ players each score an average of $10$ points per game. Then collectively, do they score on average $50$ points per game? So player 1 scores an average of 10 points ...
3answers
146 views
### How come in statistics there is very little justification for the formulas used and proofs are almost nonexistent [closed]
I don't understand why people accept certain formulas in statistics without a mathematical proof style argument. You see this a lot in statistics textbooks and unfortunately this spills over with the ...
1answer
116 views
### Name of probability distribution
Does this distribution have a name: $f(x) = yx^{y-1}$ for $0 < x<1$ and $y>0$? It looks like an exponential distribution. Or is it a nameless distribution?
4answers
1k views
### What is the purpose of the standard deviation?
I don't have any knowledge of statistics beyond high school common sense. Why is the standard deviation usually seen in combinatorics textbooks, and why is the standard deviation defined ...
0answers
316 views
### How did Target figure out a teen girl was pregnant before her father did?
First of all I do not have a mathematics degree only a B.S. in finance so please take that into account when writing an answer. Generally what type of mathematics is involved here? And specifically ...
0answers
53 views
### Sufficient statistics for Data Cleaning
If you want to do data cleaning (e.g. suppose you have 1000 data points), is finding sufficient statistics a good thing to do? Because this seems to reduce the extra noise of the data.
3answers
84 views
### z-interval and sample size what is a normal sample size
Can a Z-interval be used when the sample size is between 15-30? does the variable play a role? I'm not too sure if it makes a difference. I know it can be used if the population is a normal or large ...
2answers
130 views
### A theorem about inductive inference
In the book 'Introduction of the theory of Statistics' by Mood,Graybill,Boes (third edition)on page 220 (Chapter 6 on Sampling) you can read: 'Inductive inference is well known to be a hazardous ...
1answer
302 views
### Single number to represent a ratio?
There's probably a very simple answer to this, but I can't put my finger on it. I have two numbers. I want one to be large, and the other to be small. I'd like to identify these with a single ...
4answers
463 views
### What is the deepest / most interesting known connection between Trigonometry and Statistics?
I'm teaching both at the same time to different classes in high school, so I just wondered about this. Added by OP on 16.May.2011 (Beijing time) I mean Statistics only, without Probability. In ...
0answers
612 views
### Calculate relative contribution to percent change
Let me use a simple example to illustrate my problem. First, assume we are calculating rate r at time t such that rt = xt / yt. Furthermore, each measure has two component parts: X = xa + xb and Y = ...
1answer
186 views
### Is the above statement true for maths?
In maths, you can use something as simple as statistical analysis to intuit the theory, and in comp-science you can use simulators. Is the above statement true for maths ?
4answers
792 views
### Why does Benford's Law (or Zipf's Law) hold?
Both Benford's Law (if you take a list of values, the distribution of the most significant digit is roughly proportional to the logarithm of the digit) and Zipf's Law (given a corpus of natural ...
8answers
984 views
### Real life usage of Benford's Law
I recently discovered Benford's Law. I find it very fascinating. I'm wondering what are some of the real life uses of Benford's law. Specific examples would be great.
http://physics.stackexchange.com/questions/33676/ghosts-in-pauli-villars-regularization
# Ghosts in Pauli Villars Regularization
I'm trying to understand how Pauli–Villars regularization works. I know we add ghost particles, but I want to see more precisely how. To do this, we'll work with $\phi^3$ theory. The Lagrangian is $${\cal L} = \frac{1}{2}(\partial_\mu\phi)^2 - \frac{1}{2} m^2 \phi^2 + \frac{\lambda}{3!}\phi^3$$ We wish to calculate the correction to the scalar propagator, and we know that this integral diverges. We then introduce a Pauli–Villars ghost in the Lagrangian: $${\cal L}' = - \frac{1}{2} (\partial_\mu \phi')^2 + \frac{1}{2}M^2 \phi'^2 + \frac{\lambda}{3!}\phi'^2 \phi$$ This Lagrangian describes a field with negative norm. We can see this by calculating the contribution of the free part to the Hamiltonian, $${\cal H}' = -\frac{1}{2} {\dot \phi'}^2 - \frac{1}{2} (\nabla \phi')^2 - \frac{1}{2} M^2 \phi'^2$$ Because it has a negative norm, such a particle is called a GHOST PARTICLE. Now, from what I understand from current texts, this field has the propagator $$D_F'(x-y) = \int \frac{d^4p}{(2\pi)^4} \frac{e^{ i p (x-y) } } { p^2 - M^2 + i \epsilon}$$ This is where I'm having so much trouble. How can we prove that this is the propagator? I'm trying to use the usual method to find the propagator, and I seem to be stuck. Any help?
-
I think your propagator is missing a $-i$ (from $e^{iS}$), no? – Guy Gur-Ari Aug 8 '12 at 2:02
That is a convention. – drake Aug 8 '12 at 2:05
## 1 Answer
The ghost propagator connected with your Lagrangian has the opposite sign to the standard one. This makes the sum of both propagators well-defined.
How can we prove that this is the [free] propagator?
The free propagator is the Green function of the free equation of motion (with the suitable boundary conditions given by the $i\epsilon$ term), so applying the Klein-Gordon operator to the free propagator you must get the Dirac delta, modulo an $i$ factor which depends on the convention one is using.
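To spell that out with the propagator as written in the question (the overall sign and factor of $i$ are convention-dependent, as said): applying the Klein-Gordon operator under the integral sign gives $$(\partial_\mu\partial^\mu + M^2)\, D_F'(x-y) = \int \frac{d^4p}{(2\pi)^4}\, \frac{-p^2 + M^2}{p^2 - M^2 + i\epsilon}\, e^{ip(x-y)} \;\to\; -\int \frac{d^4p}{(2\pi)^4}\, e^{ip(x-y)} = -\,\delta^4(x-y)$$ in the limit $\epsilon\to 0$, so $D_F'$ is indeed a Green function of the Klein-Gordon operator with mass $M$, the $i\epsilon$ prescription selecting the Feynman (time-ordered) boundary conditions.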
Added: Regularizing an integral means replacing an ill-defined integral by a well-defined one. This process entails the introduction of a dimensionful parameter (and in some cases, like dimensional regularization, a dimensionless parameter as well). There are several ways to do this, and one usually (but not always) chooses the most symmetric one for the problem in question. However, none of the known methods to deal with ultraviolet divergences in relativistic QFT is physical, that is, none of them corresponds to a physical effect. Some of them improve the behaviour of the integrand for high (ultraviolet) momenta: imposing a sharp cut-off (a step function in the integrand) or a smoother one like a Gaussian $e^{-p^2/M^2}$. Likewise one can replace the propagator $\frac{1}{k^2+m^2}$ with:
$$\frac{1}{k^2+m^2}\frac{M^2}{k^2+M^2}$$ which (for $M\gg m$) is approximately equal to $$\frac{1}{k^2+m^2}-\frac{1}{k^2+M^2}$$ So one can think of the new term as equivalent to adding a very massive scalar particle with the wrong sign in the kinetic term (and therefore something unphysical). Or maybe one prefers to think of it as adding a very massive scalar particle with the wrong statistics, in order to get a minus sign in each closed loop... each interpretation may be more convenient depending on the particular diagram one is regulating, but the interpretations are unnecessary because they are not physical. What matters is to define an expression that was previously undefined. At least, until somebody finds a physical regularization. Some people think that quantum gravity (through a violation of Lorentz invariance, for example) may provide a physical regulator for the UV divergences of QFT. We do not know yet if quantum gravity is able to give us that gift.
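Spelling out the algebra behind that replacement: $$\frac{1}{k^2+m^2}-\frac{1}{k^2+M^2} = \frac{M^2-m^2}{(k^2+m^2)(k^2+M^2)} \approx \frac{1}{k^2+m^2}\,\frac{M^2}{k^2+M^2} \qquad (M\gg m),$$ so the regulated propagator falls off like $1/k^4$ rather than $1/k^2$ at large $k$, which is the improved ultraviolet behaviour one is after, at the price of introducing the unphysical scale $M$.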
-
What do you mean by "It is a convention"? I don't think that's it. We add a new field to the Lagrangian so as to cancel out the divergence in the integral. Also, I don't agree with your statement that-The ghost propagator has opposite sign to the standard one. This is because if that were true, the net contribution to the one-loop diagram is $$\int \frac{d^4k}{(2\pi)^4} \left[ \frac{i}{k^2 - m^2 + i \epsilon} \frac{ i } { (p-k)^2 - m^2 + i \epsilon} + \frac{ -i}{k^2 - m^2 + i \epsilon} \frac{ -i } { (p-k)^2 - m^2 + i \epsilon} \right]$$ That wouldn't regulate the divergence at all! – Prahar Aug 8 '12 at 2:30
I meant that for a general field one can define the propagator as a vacuum expectation value of time-ordered fields, or as $i$ times the same thing, as long as one is consistent. In PV regularization, depending on the graph one wants to regularize, one can introduce fields with the wrong sign in the kinetic term or with the wrong statistics. The purpose is to replace an ill-defined integral by a well-defined one; the rest is blah, blah. Regarding your expression, the net contribution of the P-V field in the loop should go with a minus sign to cancel the high-energy modes. – drake Aug 8 '12 at 7:16
(cont.) To this end one chooses the PV field to be a Fermi field. It has the wrong statistics but nothing happens, because it decouples and is not physical. But in this case one obviously needs a complex field and not the real one the OP wrote. I did not realize that in my answer. – drake Aug 8 '12 at 7:17
@Prahar And the graph you are considering can also be regularized changing $\phi$ by $\phi '$ in the interaction term of $\mathcal{L'}$. – drake Aug 8 '12 at 7:29
Ahh! I see now. Thanks a lot. Cleared a lot of things up. – Prahar Aug 8 '12 at 13:36
http://stats.stackexchange.com/questions/tagged/likelihood-function
Tagged Questions
The likelihood-function tag has no wiki summary.
2answers
50 views
How to incorporate costs (into logit model) of false positive, false negative, true positive, true negative if they are different costs?
How to incorporate costs (into logit model) of false positive, false negative, true positive, true negative responses, if they are different costs ? Is it possible to do that on the level of ...
0answers
30 views
Manipulating Binomial Distribution
Recently, I've been reading Yudi Pawitan's book, In All Likelihood. In the book, there's a section on profile likelihood; the methods explored in this section are subsequently applied to some data on ...
0answers
43 views
Simple question about notation
In the context of likelihood-based inference, I've seen some notation concerning the parameter(s) of interest which I've found a little confusing. For example, notation such as $p_{\theta}(x)$ and ...
1answer
46 views
Optim result highly dependent on starting value
I want to fit a standardized Student's-t distribution. The log-likelihood is given by: \begin{align*} log \mathcal{L}(\nu | l_1,...,l_n)=\sum_{i=1}^n \left( log \left( (\pi ...
1answer
42 views
Multi-parameter log likelihood of Normal distribution for two separate samples
I have been given two sets of independent random variables distributed by two different normal distributions $X_1,...,X_n \sim N(\theta_1,1)$ and $Y_1,...,Y_m \sim N(\theta_2,1)$. And have been asked ...
1answer
114 views
MLE/Likelihood of lognormally distributed interval
I have a variable set of responses that are expressed as an interval such as the sample below. ...
1answer
61 views
Maximum likelihood solution in classification problem
I have a couple of questions regarding the maximum likelihood solution in a classification problem (with only two classes $C_{1}$ and $C_{2}$) Basically, I have the following likelihood function: ...
0answers
73 views
Validity of maximising log-likelihood for maximum likelihood estimation
For reasons owing to mathematical convenience, when finding MLEs (maximum likelihood estimates), it is often the log-likelihood function---as opposed to the standard likelihood function---which is ...
1answer
115 views
What is the name of the estimator that takes the mean of likelihood?
Let $X,Y$ be input and output (observed) continuous variables in $\mathbb{R}$. Let $\{y_1,...,y_n\}$ be the set of $n$ observations. Is there a name for the estimator $\hat x = \int_{x \in X} x ...
1answer
99 views
Calculating the likelihood of time series data when there are missing data
I am trying to calculate the log-likelihood of some time series data given parameter sets estimated in BUGS. I can not figure out how to handle some missing values at random points in time. For the ...
0answers
110 views
Not quite sure where the log-likelihood function comes from
A Poisson variable $Y_i$ is believed to depend on a covariate $x_i$ and it is proposed that a log-linear model with a systematic component $a + bx_i$ is appropriate. In an experiment the following ...
1answer
146 views
Log - likelihood function, why does the summation sign vanish?
I have the log-likelihood function: $$l(p_i,y_i) = \sum_{i = 1}^n \left( \ln(p_i) + y_i \ln(1 - p_i) \right)$$ And I need to calculate the maximum likelihood estimator of $p_i$. When I do this, ...
1answer
147 views
Kolmogorov-Smirnov test in likelihood function
I want to test how well my data fits a uniform distribution and use this as one factor in a likelihood function I am constructing. Unfortunately, I have no solid basis in statistics. So far, I ...
2answers
250 views
Why does log likelihood function for a model use SSE/n and not SSE/df?
I'm trying to find out how log-likelihood function works for linear regression. I found the formula here and here. Making some experiments with it (see code below), I was quite surprised that the ...
1answer
92 views
KS, AD and loglike results
I'm using R to test some distribution families to my data. I've done KS, AD tests and determined the loglike. For one of the data the indications given by KS and AD do not agree with the ones given ...
2answers
239 views
Models for calculating consumer behavior at coffee shop
I have the occasion to sit in a Starbuck's almost every day. I have noticed there are rush hours sometimes. It's like hundreds of people decided to buy something at Starbucks at the very same time. ...
3answers
190 views
MCMC to handle flat likelihood issues
I have a quite flat likelihood leading Metropolis-Hastings sampler to move through the parameter space very irregularly, i.e. no convergence can be achieved no matter what the parameters of proposal ...
0answers
87 views
Likelihood for Poisson data
In my book, it says: Independent random variables $X_1, X_2, \dots, X_n$ are modeled by a Poisson distribution with mean $\lambda > 0$. The likelihood for $\lambda$ based on data ...
1answer
385 views
Is it okay to compare fitted distributions with the AIC?
suppose I have a data set $x_1, \ldots, x_n$ and I would fit a normal, an exponential and a uniform distribution to them. The fitting function spits out a bunch of goodness-of-fit statistics, e.g. the ...
2answers
733 views
What is the reason that a likelihood function is not a pdf?
What is the reason that a likelihood function is not a pdf (probability density function)?
3answers
419 views
What are some illustrative applications of empirical likelihood?
I have heard of Owen's empirical likelihood, but until recently paid it no heed until I came across it in a paper of interest (Mengersen et al. 2012). In my efforts to understand it, I have gleaned ...
2answers
124 views
Gaussian Process goodness of fit
Let's say I got a Gaussian Process model $M$ based on some training data. Now I get a stream of sample data of a certain batch size coming in. The GP does not model a time series, but it's trying to ...
1answer
155 views
Likelihood based model selection
Let's say I got a set of models $M = \{M_1, M_2, \dots M_n\}$. Now say I got some data $x$ and I would like to know, which model represents the data best. I know how to calculate the likelihood ...
4answers
593 views
How to rigorously define the likelihood?
The likelihood could be defined by several ways, for instance : the function $L$ from $\Theta\times{\cal X}$ which maps $(\theta,x)$ to $L(\theta \mid x)$ the random function $L(\cdot \mid X)$ we ...
1answer
170 views
Maximizing: likelihood vs likelihood ratio
Say I have an observed data set ($n_i$) and I want to obtain the best fit out of 10 data sets produced by a model dependent on a single parameter $a$ ($m_i(a)\;a=1..10$). Suppose I use a Poisson ...
6answers
875 views
Why would someone use a Bayesian approach with a 'noninformative' improper prior instead of the classical approach?
If the interest is merely estimating the parameters of a model (pointwise and/or interval estimation) and the prior information is not reliable, weak, (I know this is a bit vague but I am trying to ...
1answer
157 views
A question about hypothesis testing and maximum likelihood ratio test
Assume there are 2000 students, $m$ boys and the rest are girls. Now we take a sample of 5 students which contains only 1 boy. We claim that there are more girls than boys in the population. So what we ...
3answers
245 views
Finding the MLE for a univariate exponential Hawkes process
The univariate exponential Hawkes process is a self-exciting point process with an event arrival rate of: $\lambda(t) = \mu + \sum\limits_{t_i<t}{\alpha e^{-\beta(t-t_i)}}$ where $t_1,..t_n$ ...
0answers
220 views
Difference between likelihood principle and repeated sampling principle
In statistical inference, there are many fundamental statistical principles, such as likelihood principle and repeated sampling principle. I am wondering whether there are any other principles? And ...
0answers
300 views
Interpretation of a log likelihood function for PROC NLMIXED in SAS
I have a data set of skewed nutrient intake values, from around 7800 individuals, of whom around 3000 had two measures of daily nutrient intake (the others only had one measure), so this is a repeated ...
1answer
149 views
Property of KL-divergence
Let $p_1$ and $p_2$ be two distinct probability distributions. Define $$L(q)=D(q||p_1)-D(q||p_2)$$ where $D$ is the usual Kullback-Leibler divergence. Assume the support of $p_2$ is included in ...
0answers
156 views
Likelihood function of a Linear probability model
What is the Likelihood function of a linear probability model? I know the likelihood function is the joint probability density, but how to construct the likelihood function when we only have the ...
0answers
70 views
Is there any stochastic method to approximate the likelihood?
I am looking for a list of stochastic algorithms to approximate likelihoods.
0answers
257 views
Developing the multivariate normal likelihood function (in matrix notation)
I am searching to get a detailed development for the multivariate normal likelihood function in order to enter it to Wikipedia. Can anyone suggest to me a good reference book (or if you have it ...
0answers
95 views
Likelihood function for a multiperiod probit with autoregressive latent variable
I'd like to evaluate the likelihood function of a multperiod ordered probit model with an autoregressive random component, but i am having trouble arriving at the likelihood function. As an example ...
4answers
509 views
How is the bayesian framework better in interpretation when we usually use uninformative or subjective priors?
It is often argued that the bayesian framework has a big advantage in interpretation (over frequentist), because it computes the probability of a parameter given the data - $p(\theta|x)$ instead of ...
1answer
256 views
How can I calculate a probability from a likelihood, e.g. in the Metropolis-Hastings algorithm?
This is a follow-up to my previous question, how can I compute a posterior density estimate from a prior and a likelihood I am having difficulty understanding how it is possible to calculate the ...
3answers
209 views
What is the correct posterior when data are sufficient statistics?
Say you have N observations that are iid. $$\forall i, \quad p(X_i=x_i|\mu,\sigma,I) = \frac{1}{\sqrt{2\pi}\sigma} \exp\left(-\frac{1}{2\sigma^2}(x_i-\mu)^2\right)$$ then ...
0answers
75 views
Help evaluating a posterior probability expression
Consider $\boldsymbol{x}= [x_1,x_2,...x_n]$ and $\boldsymbol{y}= [y_1,y_2,...y_n]$ to be two multivariate Gaussians with an isotropic diagonal variance structure and uninformative priors so that: ...
2answers
241 views
Correlation as a likelihood measure
Various forms of the correlation, e.g., $r = \frac{\Sigma_i x_i * y_i}{\sigma_x \sigma_y}$ or $r = \frac{\Sigma_i (x_i-\bar{x}) * (y_i-\bar{y})}{\sigma_x \sigma_y}$ are popular similarity measures ...
1answer
423 views
What are the disadvantages of the profile likelihood?
Consider a vector of parameters $(\theta_1, \theta_2)$, with $\theta_1$ the parameter of interest, and $\theta_2$ a nuisance parameter. If $L(\theta_1, \theta_2 ; x)$ is the likelihood constructed ...
1answer
173 views
Expression for conditional density for ARCH processes
I am reading Stephen Taylor's Asset Dynamics book and came across something I didn't fully understand. For an ARCH process, the return series is modeled as $r_t = \mu_t + h_t^{1/2}z_t$ where is $z_t$ ...
2answers
1k views
Standardized Student's-t distribution
We know that density for a student-t distribution is given as \frac{\Gamma(\frac{\nu + 1}{2})}{\Gamma(\frac{\nu}{2})} \left(\frac{\lambda}{\pi\nu}\right)^{\frac{1}{2}} ...
1answer
1k views
How to calculate likelihood for a bayesian model?
I am trying to do Bayesian posterior predictive checking, whereby I calculate the DIC for my fitted model, and compare to DIC from data simulated from the fitted model. I can get the DIC out of ...
7answers
2k views
Why do people use p-values?
Roughly speaking a p-value gives a probability of the observed outcome of an experiment given the hypothesis (model). Having this probability (p-value) we want to judge our hypothesis (how likely it ...
1answer
191 views
How to estimate the likelihood function for random generator of three events?
I have a random events generator. I know in advance the set of event that can be generated (in my case I have only three possible events). The probabilities of the events are not known. I need to ...
1answer
263 views
Likelihood function of DSGE model using Kalman filter
In Frank Schorfheide's class notes on likelihood functions of DSGE models, he expresses the value of the likelihood function for a given vector of parameters $\theta$, and time series $Y^T$ as: ...
7answers
18k views
What is the difference between “likelihood” and “probability”?
The wikipedia page claims that likelihood and probability are distinct concepts. In non-technical parlance, "likelihood" is usually a synonym for "probability," but in statistical usage there is a ...
2answers
112 views
What is the correct likelihood function for an sequential, adaptive data generation process?
Consider the following sequential, adaptive data generating process for $Y_1$, $Y_2$, $Y_3$. (By sequential I mean that we generate $Y_1$, $Y_2$, $Y_3$ in sequence and by adaptive I mean that $Y_3$ is ...
http://mathoverflow.net/questions/60734/simple-connectedness-via-closed-curves-or-simple-closed-curves
Simple connectedness via closed curves or simple closed curves?
I've recently read some papers and books involving simply connected domains in Euclidean space (dimension at least 2), where domain is an open connected set. The usual definition is a (connected) set for which every continuous closed curve is (freely) contractible while some authors only require that every continuous simple closed curve is contractible. The authors who define simple connectedness using simple closed curves do so in order to use Stokes' Theorem or the Jordan curve theorem somewhere in the sequel; however, they never mention (not even with a reference) that their definition is equivalent to the usual one! My question is if there is a proof written down somewhere (with all the details) proving the equivalence (for domains in $\mathbb{R}^n$ with $n \geq 2$)? If not, does someone know of an "easy" proof using a minimal amount of knowledge, say that of a first course in topology?
Any curve in an open set of euclidean space is homotopic to a piecewise-linear curve (cover the curve with balls contained in the domain...). And in turn, any piecewise-linear curve is homotopic to a product of simple ones (product in the sense of the fundamental group). So if all simple curves are contractible, then all curves are contractible. Btw you may want to worry about base-points, and whether the homotopies (in the book you are reading) are assumed to be through simple curves or not. – Pierre Apr 5 2011 at 20:59
@Pierre: your comment seems like a correct answer to me. Would you be so kind as to leave it as such? – Pete L. Clark Apr 6 2011 at 2:55
Topologists invoke "transversality" to such questions, which refers to a series of theorems that rigorously prove intuitive statements like "every continuous function from a circle to a smooth manifold of dimension > 2 is $\epsilon$-homotopic to a smooth embedding", or "every continuous function from a circle to a smooth 2-manifold is $\epsilon$-homotopic to a finite union of simple closed curves." These theorems aren't trivial to prove, but they form the basis on which most statements like the one you ask follow. Careful expositions are found in e.g. in Hirsch's book. – Paul Apr 6 2011 at 4:04
@Pierre: Is there a reference to prove that "any piecewise-linear curve is homotopic to a product of simple ones (product in the sense of the fundamental group)". Although this statement is intuitively clear, how does one prove it carefully? (Last week I've asked a famous topologist exactly this question and he showed me carefully how to do it --- the proof is very long if one were to write it out in all details.) – simply connected Apr 6 2011 at 6:44
2 Answers
As pointed out by Pierre and Paul in comments, there are several standard ways to deal with this kind of issue. A good answer really depends on what you're assuming you start from, and where you're trying to go to. The Jordan curve theorem and Stokes' theorem are both fairly sophisticated and difficult for beginners to grasp, so it's a bit hard to see how only analyzing embedded curves is streamlining anything, except perhaps helping with people's intuitive images; but even so, it may do more harm than good.
Perhaps it's worth pointing out that this statement is false in greater generality, for instance for closed subsets of $\mathbb R^3$. Here's an example in $\mathbb R^3$: consider a sequence of ellipsoids that get increasingly long and thin; to be specific, they can have axes of length $2^{-k}$, $2^{-k}$ and $2^k$. Stack them in $\mathbb R^3$ with short axes contained in the $z$-axis, so each one touches the next in a single point, with long axes parallel to the $x$-axis, and let $X$ be their union together with the $x$-axis.
Any simple closed curve in $X$ is contained in a single ellipsoid, since to go from one to the next it has to cross a single point, so every simple closed curve is contractible.
However, a closed curve in the $yz$-cross-section that goes down one side and back up the other sides is not contractible. The fundamental group is in fact rather large and crazy.
Anyway, here are some lines of reasoning that can overcome whatever hurdle needs to be overcome:
1. PL approximation, as suggested by Pierre: this is easy, the keyword is "simplicial approximation". I'll phrase it for maps of a circle to Euclidean space as in the question, even though essentially the same construction works in far greater generality. Given an open subset $U \subset \mathbb{R}^n$ and given a map $f: S^1 \to U$, then by compactness $S^1$ has a finite cover by neighborhoods that are components of $f^{-1}$ of a ball. If $U_i$ is a minimal cover of this form, there is a point $x_i$ that is in $U_i$ but not in any other of elements of the cover; this gives a circular ordering to the $U_i$. There is a sequence of points $y_i \in U_i \cap U_{i+1}$, indices taken mod the number of elements of the cover. The line segment between $y_i$ and $y_{i+1}$ is contained in $U_i$, since balls are convex. (This generalizes readily to the statement that for any simplicial complex, there is a subdivision where the extension that is affine on each simplex has image contained in $U$. It also generalizes readily to the case that $U$ is an open subset of a PL or differentiable manifold).
2. Raising the dimension: if you take the graph of a map of $S^1$ into a space $X$, it is an embedding. If you're (needlessly) worried about integrating differential forms on non-embedded curves, pull the forms back to the graph, where the curve is embedded. If you want to map to a subset of Euclidean space with the same homotopy type, just embed the graph of the map (a subset of $S^1 \times U$) into $\mathbb R^2 \times U$. (There's a very general technique to do this, if the domain is a manifold more complicated than $S^1$, even when it's just a topological manifold, using coordinate charts together with a partition of unity to embed the manifold in the product of its coordinate charts).
3. The actual issue for integration, using Stokes' theorem etc., is regularity: to make it simple, restrict to rectifiable curves, and don't worry about embeddedness. Any continuous map into Euclidean space is easily made homotopic to a smooth curve, by convolving with a smooth bump function; the derivatives are computed by convolving with the variation of the bump, as you move from point to point. (A small numerical sketch of this smoothing step follows the list.)
4. Similarly, you can approximate any continuous map by a real-analytic function, if you convolve with a time $\epsilon$-solution of the heat equation (a Gaussian with very small variance, wrapped around the circle). This remains in $U$ if $\epsilon$ is small enough. A real analytic map either has finitely many double points, or is a covering space to its image; in either case you reduce simple connectivity to the case of simple curves.
5. Sard's theorem and transversality, as mentioned by Paul. Sard's theorem is nice and elegant and has many applications, including the statement that a generic smooth map of a curve into the plane is an immersion with finitely many self-intersection points, as is any generic smooth map of an $n$-manifold into a manifold of dimension $2n$. If the target dimension is greater than $2n$, then a generic smooth map is an embedding.
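A minimal numerical sketch of the smoothing idea in items 3 and 4 (not part of the original answer): sample a closed curve and convolve each coordinate with a narrow periodic Gaussian kernel. The function name, kernel width and sample counts below are arbitrary choices of mine.

```python
import numpy as np

def mollify_closed_curve(points, eps=0.05):
    """Smooth a closed curve (an n x d array of samples) by circular
    convolution of each coordinate with a narrow periodic Gaussian."""
    n = points.shape[0]
    t = np.arange(n) / n
    wrapped = np.minimum(t, 1 - t)          # distance to 0 around the circle
    kernel = np.exp(-(wrapped / eps) ** 2)
    kernel /= kernel.sum()
    smoothed = np.empty_like(points, dtype=float)
    for j in range(points.shape[1]):
        smoothed[:, j] = np.real(np.fft.ifft(np.fft.fft(points[:, j]) * np.fft.fft(kernel)))
    return smoothed

# a jagged approximation of a circle, smoothed without straying far from it
theta = np.linspace(0, 2 * np.pi, 400, endpoint=False)
noisy = np.c_[np.cos(theta), np.sin(theta)] + 0.01 * np.random.randn(400, 2)
smooth = mollify_closed_curve(noisy)
print(np.max(np.linalg.norm(smooth - noisy, axis=1)))  # stays small for small eps
```

For a curve inside an open set $U$, this is only an illustration of the convolution step; the point of the answer is that for small enough kernel width the smoothed curve stays inside $U$.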
It took me a few minutes to figure this out, so in case this helps anyone else to "get" the picture in the second paragraph: a "short circle" means a circle like a belt on one of the (rotationally symmetric) ellipsoids with radius equal to the semi-minor axis, say $2^{-k}$. – jc Apr 7 2011 at 1:10
@jc7: Thanks for the comment. I hadn't anticipated this way my phrasing could mislead --- I'll edit and see if I can bring forward the intended mental image. – Bill Thurston Apr 7 2011 at 9:07
The equivalence is conceptually easy: each closed curve is a union of simple closed curves. If you can contract each simple closed curve, you can contract the whole curve. Each simple closed curve also lives in the set of closed curves, so the equivalence the other direction is simple. This sort of proof shouldn't be too hard for you to construct, assuming you have the knowledge of a first course in topology. Some care might need to be taken in constructing the explicit homotopy and in dealing with a curve which has infinitely many self-intersection points, but these are both issues you should have seen in such a first course in topology.
This seems to fall short of rigorous. Suppose for instance that the image of the curve contains the unit cube in Euclidean space. How are you going to decompose it as a union of simple closed curves? – Pete L. Clark Apr 6 2011 at 2:14
http://mathoverflow.net/questions/101661?sort=newest
## Going in the direction of the gradient
First, a motivating example. Suppose $f(x)$ is convex, differentiable, with a single minimum $x^*$.
Then the differential equation $$\dot{x}(t) = -\nabla f(x(t))$$ drives $x(t)$ to $x^*$.
Now my question is about a generalization of this. Let $f(x), g(x)$ be two smooth convex functions and let ${\cal G}$ be the set of minima of $g(x)$, which we assume to be nonempty. Consider the differential equation $$\dot{x}(t) = - \frac{1}{t} \nabla f(x(t)) - \nabla g(x(t))$$ Is it true that this equation drives $x(t)$ to the minimum of $f(x)$ on ${\cal G}$? If not, would it be true if we replaced $1/t$ by a different function, say one which perhaps decays slower? Or perhaps by adding some additional conditions on $f$, e.g., strong convexity?
This statement seems to be true in a few simple examples I tried. For example, taking $g(x)=(x_1+x_2-2)^2$ and $f(x)=x_1^2+x_2^2$ and solving the resulting equation numerically, I get that solutions seem to approach $(1,1)$.
Note: I asked this on math.SE without receiving an answer
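The numerical experiment described above is easy to reproduce; here is a rough sketch (the starting point, time span and tolerances are arbitrary choices of mine, not part of the question):

```python
import numpy as np
from scipy.integrate import solve_ivp

grad_f = lambda x: 2.0 * x                                   # f(x) = x1^2 + x2^2
grad_g = lambda x: 2.0 * (x[0] + x[1] - 2.0) * np.ones(2)    # g(x) = (x1 + x2 - 2)^2

def rhs(t, x):
    return -grad_f(x) / t - grad_g(x)

# start at t = 1 to avoid the 1/t singularity; integrate to a large final time
sol = solve_ivp(rhs, (1.0, 1e4), np.array([5.0, -3.0]), rtol=1e-8, atol=1e-10)
print(sol.y[:, -1])   # approaches (1, 1), the minimizer of f on the set {x1 + x2 = 2}
```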
Do you mean instead that $x(t)$ is driven to the minimum of $g$ ? – Denis Serre Jul 9 at 9:02
No, $x(t)$ should be driven to the minimum of $f(x)$ on ${\cal G}$, which is the set of minimizers of $g$ (at least, that is what I think should be true!) For example, in the example I gave in the final paragraph, the set of minimizers of $g$ is the set $x_1+x_2=2$ and the minimum of $f$ on it is $(1,1)$ - which is where indeed the solution appears to go numerically. – robinson1 Jul 9 at 9:21
## 1 Answer
Assume $f$ is strongly convex (i.e., $f''>\varepsilon$ for some fixed $\varepsilon>0$). Then for any two solutions $x$ and $y$ we have $|x(t)-y(t)|\le \tfrac C t$. In particular if one solution converges then all of them converge to the same point.
If this point $x^*$ is not the minimum of $f$ on $\mathcal G$ then you get an immediate contradiction. (There is a draught in one direction near $x^*$ for all $t$'s.)
Thank you for answering. Could I ask for more details? I tried to understand your answer but failed. 1. Why do solutions satisfy $|x(t)−y(t)| < C/t$? 2. How do you know at least one solution converges? 3. Can you spell out the contradiction that converging to a non-minimum results in? Thanks! – robinson1 Jul 8 at 14:56
The estimate follows along the same lines as Picard's existence theorem. For the second part of argument, look at a neighborhood of $x^*$ and try to estimate the behavior of solutions for big $t$, it will slide in a direction of a fixed vector tangent to $\mathcal{G}$; that means this can not be a limit point. – Anton Petrunin Jul 8 at 20:31
http://mathhelpforum.com/discrete-math/54224-non-trivial-proof-using-pigeonhole-principle.html
1. ## Non-Trivial Proof Using Pigeonhole Principle
Hello,
Attached is a statement whose proof requires use of the Pigeonhole Principle. I came rather close to proving it on my own, but I would rather not share my sketch of the proof since it probably isn't in the right direction. In my experience, it can be misleading to see a partial solution that is, in fact, impossible to complete.
Thanks in advance to anyone willing to share their insights.
2. With your statement, I'd say: the subsequences $\{a_1,\ldots,a_k\}$ and $\{b_1,\ldots,b_l\}$ have the same sum (and so do the empty subsequences). Do you want a strict subsequence? Writing your own steps could have helped to guess what you're looking for.
My guess (since it makes my proof work): your hypothesis should be $\sum_{i=1}^k a_i + \sum_{j=1}^l b_j \leq kl$ and the numbers are non-zero (to avoid $a_i=0$ for all $i$). In this case, here is a solution: consider the differences $\delta(K,L)=\sum_{i=1}^K a_i -\sum_{j=1}^L b_j$, where $K\in\{1,\ldots,k\}$ and $L\in\{1,\ldots,l\}$. There are $kl$ of them. What about the possible values? $\delta(K,L)$ is an integer, it is less than $\sum_{i=1}^k a_i-1$ and greater than $1-\sum_{j=1}^L b_j$, so there are at most $\sum_{i=1}^K a_i+\sum_{j=1}^L b_j -1 <kl$ values for the $\delta(K,L)$'s. By the pigeonhole principle, there are two couples $(K,L)\neq (K',L')$ such that $\delta(K,L)=\delta(K',L')$. For instance, $K>K'$ (they can't be equal since it would imply that $L=L'$ because the numbers are non-zero). Then $\sum_{i=K'+1}^K a_i = \sum_{j=L+1}^{L'} b_j$ and these subsequences aren't empty. qed
3. Thanks for your help. Allow me to clarify a few things: Since I translated the question from another language, I accidentally omitted the word "non-empty". Not only that, but I forgot to note that since it is not required to find a strict subsequence, it is likely that the hypothesis in the exercise is stated incorrectly. Your own hypothesis makes perfect sense, but I interpreted it as $\sum_{i=1}^k a_i$ , $\sum_{j=1}^l b_j$ $<kl$. I am now convinced that your hypothesis is the correct one; however, I would like to find a counter-example for the statement as I interpreted it.
I, too, considered the delta-differences in your proof. The possible values under my hypothesis are any number between 2-kl and kl-2 (inclusive), which means that there are 2kl-3 such numbers, with only kl differences. This is why I thought that my direction was off; taking the absolute values of the differences limits the number to kl-1, but then the equal absolute values predicted by the Pigeonhole Principle may not necessarily be written as two regular differences that agree on the sign of the $a_i$'s.
I suspect that fairly large numbers are required to form a counter-example. I should note that in the "ideal" case where there exist $1 \leq c < l$ , $1 \leq d < k$ such that
$\forall$ $1 \leq i \leq k$ . $a_i=c$ and $\forall$ $1 \leq j \leq l$ . $b_j=d$, the proof is trivial: Any d-term subsequence of $(a_i)$ has the same sum as any c-term subsequence of $(b_j)$. I haven't had any luck finding a counter-example by brute force, but so far I have limited myself to a low range of k and l.
http://mathoverflow.net/questions/46373/infty-groupoid-of-a-infty-algebras
## $\infty-$groupoid of $A_{\infty}$ algebras
Hello,
Consider first the following $2-$groupoid of Algebras over $\mathbb{C}$. Objects are Algebras, $1-$morphisms are isomorphisms, and a $2-$morphism between the isoms $f$ and $g$ from $A$ to $B$ is an element of $B^{\times}$ such that $f(a) b = b g(a)$ for all $a \in A$.
This is a certain sub$2$groupoid of the $2-$category of categories where the objects are the linear categories $A-$mod, the $1-$morphisms are (certain) functors (I think preserving the tensor structure), and the $2-$morphisms are natural transformations. Alternatively, take linear categories with a point as object and morphisms $A$ and consider functors and natural tranformations of those.
Is there a natural analogue of this in the case of $A_{\infty}$ algebras? I was hoping to understand some systematic way of writing down all the higher morphisms in an $\infty-$groupoid of $A_{\infty}$ algebras and hoping that they are all non-trivial. However, I don't even understand the analogue of the elements $b$ from above.
Is this done in the literature somewhere where one can extract explicit formulae? One guess would be to consider $A_{\infty}-$algebras as $A_{\infty}-$categories with one object, but this does not help unless one can explicitly write down some explicit $\infty-$groupoid structure on $A_{\infty}-$categories which I guess would require one to first convert these $A_{\infty}-$categories to $\infty-$categories
--Oren
## 2 Answers
Notice that you are looking at the 2-category of algebras, bimodules and bimodule homomorphisms.
Because of this: if you regard a morphism $f : A \to B$ of algebras as an $A$-$B$ bimodule $B_f$ ($B$ equipped with the obvious right $B$-action and with left $A$-action induced by $f$) then the 2-morphisms that you are looking at are bimodule homomorphisms $B_g \to B_f$ given on $B$ by left multiplication with $b \in B$ (this trivially respects the right $B$-action, and the equation $f(a)b = b g(a)$ is precisely the condition that it also respects the left $A$-action.)
So you are looking for the $A_\infty$-version of (the maximal higher groupoid inside) the 2-category of algebras, bimodules and bimodule homomorphisms.
Now, in
Berger, Moerdijk, Resolution of coloured operads and rectification of homotopy algebras http://arxiv.org/PS_cache/math/pdf/0512/0512576v2.pdf
there is described in section 6 a model-category theoretic construction of a simplicial category whose objects are $A_\infty$-algebras, morphisms are bimodules of $A_\infty$-algebras, 2-morphisms are bimodule homomorphisms, and so on.
This simplicial category you may think of as presenting an $\infty$-category of $A_\infty$-algebras.
(Notice that this applies to $A_\infty$-algebras over any suitable enriching category, say for $A_\infty$-spaces. You are probably thinking of the standard dg-case, enriched over chain complexes, to which it applies in particular.)
Some paragraphs on this you can also find here:
http://ncatlab.org/nlab/show/model+structure+on+algebras+over+an+operad#InfCatOfMods
Excellent, I am especially eager to read about the "so on" part, thanks! Yes, I was thinking of the dg enriched case (although unfortunately when one considers curved $A_{\infty}$ algebras and curved morphisms its not quite that case anymore) Although this could be related to the Seidel construction via reinterpreting $A_{\infty}-$cats as $\infty-$cats, that sounds like a difficult path, whereas Berger, Moerdijk sounds more directly related to what I need. – Oren Ben-Bassat Nov 17 2010 at 20:25
It seems that the first new feature that is present for $A_{\infty}$ algebras as opposed to regular algebras is that the bimodule homomorphisms themselves have non-identity morphisms between them, even when the bimodules come from algebra isomorphisms. Do the bimodules between two $A_\infty$ algebras form a simplicial category in some standard way? What are the morphisms between two isomorphisms $f_{1}$ and $f_{2}$ between the $A_{1}-A_{2}$ bimodules $B_{1}$ and $B_{2}$? – Oren Ben-Bassat Nov 17 2010 at 23:32
You mean for a fixed pair of $A_\infty$-algebras, what's the $\infty$-category of bimodules betweem them? Recall from Berger-Moerdijk the strategy to get that: there is an operad whose algebas are bimodules. Pass to the correspnding model category of its homotopy algebras as described by Berger-Moerdijk. That presents the $\infty$-category of all $A_\infty$-bimodules. It comes with two functors to that of just $A_\infty$-algebras, so take the fiber over your chosen ones. ... – Urs Schreiber Nov 17 2010 at 23:46
... This is actually the precise way to get $\infty$-categories by presenting them by model categories. Regarding that simplicial category that we talked about as an $\infty$-category really requires a bit more discussion. – Urs Schreiber Nov 17 2010 at 23:47
Seidel's book Fukaya categories and Picard-Lefschetz theory, sections (1a-e), describes in explicit terms the $A_\infty$-category of non-unital functors between two fixed (small) $A_\infty$-categories, as well as the functors between functor-categories obtained by composing with a fixed functor on the left or right. This is not quite the whole story concerning composition.
Unsurprisingly, you can formulate everything as sums over trees, but the signs are nasty.
No need to climb higher: an $A_\infty$-category is already a model for a (stable) $\infty$-category. – Urs Schreiber Nov 17 2010 at 19:25
Urs: thanks! I'll remove that sentence. – Tim Perutz Nov 17 2010 at 19:43
Thanks for this reference! I also see that this issue was discussed over here: golem.ph.utexas.edu/category/2006/11/…. There they discussed how to cook up a quasicategory of $L_{\infty}$ algebras and gave some references. It would be great to even have some explicit understanding of what the invertible $2$ and $3-$morphisms look like, even in a case where $m_{4}$ and higher vanish for all the algebras involved. – Oren Ben-Bassat Nov 17 2010 at 20:05
Notice the difference between the oo-categories of homotopy algebras whose 1-morphisms are just plain morphisms and those where the 1-morphisms are more generally bimodules. For the first version a comprehensive construction and discussion of oo-categories of homotopy algebras over any suitable operad is at ncatlab.org/nlab/show/… . In the Examples-section is discussed how to generalize this to $\infty$-categories whose 1-morphisms are allowed to be bimodules. – Urs Schreiber Nov 17 2010 at 20:23
http://mathhelpforum.com/differential-equations/45773-separable-equations-please-help-print.html
# Separable Equations please help
• August 11th 2008, 01:21 PM
eawolbert
I've been trying to solve this problem for quite a while now but can't seem to get the right answer. please any pointer will be greatly appreciated.
A tank contains 50 kg of salt and 1000 L of water. A solution of a concentration 0.025 kg of salt per liter enters a tank at the rate 7 L/min. The solution is mixed and drains from the tank at the same rate.
Find the amount of salt in the tank after 2.5 hours.
Find the concentration of salt in the solution in the tank as time approaches infinity.
my dy/dt equation looks like this dy/dt = .175-7y/1000
after doing the steps my book shows I end up with a really long equation that doesn't help at all.
I know that y(0)=80 kg has to help me find c but the answer is totally wrong. I don't know where I messed up. Please help.
• August 11th 2008, 01:26 PM
Matt Westwood
Easiest way to address stuff like this is replace the numbers with letters and put the numbers back in afterwards. So you get dy/dt = a - by/d which indeed is separable and should give you a solution involving exponentials. Yes it will be a bit messy (these revolting things are). Take it slowly, do it neatly, works for me every time.
• August 11th 2008, 02:04 PM
eawolbert
I tried but I made a mess and didn't come up with the right equation. :b
What really messes me up is that extra 7y.
Can someone help me figure out the equation with respect to y? Please, I really need to see what steps I messed up. Thanks.
• August 11th 2008, 04:28 PM
mr fantastic
Quote:
Originally Posted by eawolbert
I've been trying to solve this problem for quite a while now but can't seem to get the right answer. please any pointer will be greatly appreciated.
A tank contains 50 kg of salt and 1000 L of water. A solution of a concentration 0.025 kg of salt per liter enters a tank at the rate 7 L/min. The solution is mixed and drains from the tank at the same rate.
Find the amount of salt in the tank after 2.5 hours.
Find the concentration of salt in the solution in the tank as time approaches infinity.
my dy/dt equation looks like this dy/dt = .175-7y/1000
after doing the steps my book shows I end up with a really long equation that doesn't help at all.
I know that y(0)=80 kg has to help me find c but the answer is totally wrong. I don't know where I messed up. Please help.
The DE can be re-written as
$\frac{dy}{dt} = \frac{175 - 7y}{1000}$
$\Rightarrow \frac{dt}{dy} = \frac{1000}{175-7y} = - \frac{1000}{7y - 175}$ subject to the boundary condition y(0) = 50 (or is it 80??).
If you're solving differential equations then it's expected that you can integrate things like $\frac{1000}{7y - 175}$ with respect to y.
• August 11th 2008, 05:59 PM
eawolbert
I know how to get there but after I try to find c(which is y(0)=50). I end up with this equation:
-1/7 ln(175-7y)= t/1000 + c
my c= -1/7ln(-175)
leaving
-1/7 ln(175-7y)= t/1000 -1/7ln(-175)
from here is where I mess something up and cannot find the answer.
• August 11th 2008, 07:52 PM
mr fantastic
Quote:
Originally Posted by eawolbert
I know how to get there but after I try to find c(which is y(0)=50). I end up with this equation:
-1/7 ln(175-7y)= t/1000 + c
my c= -1/7ln(-175)
leaving
-1/7 ln(175-7y)= t/1000 -1/7ln(-175)
from here is where I mess something up and cannot find the answer.
My advice is to get y in terms of t before using the boundary condition:
$\ln | 175 - 7y| = -\frac{7t}{1000} - 7C$
note the use of absolute value | | NOT () ......
$\Rightarrow 175 - 7y = e^{-\frac{7t}{1000} - 7C} = e^{-\frac{7t}{1000}} e^{- 7C} = A e^{-\frac{7t}{1000}}$
where $A = e^{- 7C}$ is just as arbitrary a constant as $C$
$\Rightarrow y = \left(175 - A e^{-\frac{7t}{1000}}\right)/7$.
Now substitute y = 50 (80??) when t = 0 and solve for A.
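For anyone who wants to check the algebra, here is a quick sympy verification. It assumes the initial amount is the 50 kg stated in the problem (not the 80 kg mentioned later), with $t$ in minutes:

```python
import sympy as sp

t = sp.symbols('t', nonnegative=True)
y = sp.Function('y')

# dy/dt = 0.175 - 7*y/1000, with 50 kg of salt at t = 0
ode = sp.Eq(y(t).diff(t), sp.Rational(175, 1000) - 7 * y(t) / 1000)
sol = sp.dsolve(ode, y(t), ics={y(0): 50})

print(sol)                                   # y(t) = 25 + 25*exp(-7*t/1000)
print(sol.rhs.subs(t, 150).evalf())          # after 2.5 h = 150 min: about 33.7 kg
print(sp.limit(sol.rhs, t, sp.oo) / 1000)    # long-run concentration: 1/40 = 0.025 kg/L
```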
http://stats.stackexchange.com/questions/tagged/statistical-significance
# Tagged Questions
Statistical significance refers to the probability that, if the true effect in the population from which the sample was drawn were 0 (or some hypothesized value), a test statistic as extreme as or more extreme than the one obtained in the sample could have occurred.
### Statistical Significance test for difference in FScore (0 answers, 16 views)
For a classification task, I have developed two methods and FScore(harmonic mean of precision and recall) of both the classes serves as the performance criteria. How can I check whether the difference ...
### Can I use a regression model with ANOVA significance greater than 0.05? (1 answer, 44 views)
I have a multiple regression using SPSS. The significance of my model in the ANOVA table is p=0.174 which is >0.05. what does this mean for my model? Can I still use it and proceed in the ...
### How to rank order performance of four systems using aggregates of pairwise assessments of system performance? (1 answer, 32 views)
I am currently designing an experiment to evaluate 4 iterations of a system. As it is rather difficult for a human to judge the output of the system by assigning it a score or several scores, I plan ...
### Stats: Relationship between Alpha and Beta (2 answers, 79 views)
My question has to do with the relationship between alpha and beta and their definitions in statistics. alpha = type I error rate = significance level under consideration that the NULL hypothesis is ...
### Statistical tool(s) to correlate perceptions with Likert-type scales (1 answer, 25 views)
In relation to perceptions of teachers, I want to measure: perception of helpfulness required level of helpfulness for the respondent to be satisfied, and how important helpfulness is (as an aspect ...
### Simple Mixed Model with 1 Fixed and 1 Random Effect (1 answer, 28 views)
I have various datasets I need to analyse regarding soil properties, all in the same fashion, with one fixed effect (which is a position along a transect, indicating different land uses). Now my main ...
### Significance testing for a group of samples (0 answers, 14 views)
For instance, I have a bar chart - in which I have 4 samples/bars. for these 4 bars, I want to check statistical significance for all combinations. How can I do it? Can unpaired t-test be used - to ...
### Correlation not significant because there is not enough variance? (2 answers, 59 views)
I have a question about correlations again. I have a dichotomous variable that I want to correlate with the another one (metric) by using the point-biserial correlation coefficient. I get a ...
### How to compare percentages for two categories from one sample? (2 answers, 77 views)
I want to compare two percentages on a single categorical variable from one sample. For example, in my data set, a variable can take on two values (i.e. A and B). Of 586 instances, 87.4% is in ...
### Statistical significance and winner of multivalent test (1 answer, 20 views)
This should be a fairly straight forward, but I can't seem to figure out the answer. I'm doing some A/B(C/D/E...) testing on a website and measuring impressions and clicks. What method should I be ...
### Pearson correlation is not significant although effect is .30 What could be the reason? (2 answers, 62 views)
I have a correlation between two variables of 0.30, hence a medium effect. but it is not statistically significant and I want to report the reasons for that. looking at the test statistic I see that ...
### Problem with p-values from A/B testing (3 answers, 115 views)
I'm pretty confused right now and the more I look at it, the more confused I get. Overthinking! I'm doing some A/B tests for my website. I'm using the significance calculator that's in this article: ...
### How to test significance of pre-post difference scores on 8 measures (1 group) (0 answers, 13 views)
I have a set of pre and post treatment scores on 8 measures. I'd like to test which of these show significant improvement after treatment. If I use within-subjects t tests it will mean carrying out 8 ...
### Is this regression significant? (2 answers, 191 views)
Hi I get this output from R summary of an lm: ...
### Two-way anova with interaction term: what is the point of a post-hoc test? (0 answers, 32 views)
I have a statistics interpretation question. I've recently performed a two-way anova to identify an interaction term between my categorical independent variables (genotype + temperature) that ...
### How to compare two datasets using metrics drawn from unknown distributions and with small sample sizes? (1 answer, 67 views)
I have two datasets consisting of metrics from several experiments. Dataset 1 is the collection of results of experiments E performed by user A on product A, repeated N times. Dataset 2 is the ...
### Looking for help with interpreting my OLS models (1 answer, 112 views)
I have been busy building an explainatory regression model for a companies' social behavior. In the table below you find the variables, ordered by theme. The ...
### Ratios of means - statistical comparison test using Fieller's theorem? (0 answers, 146 views)
I would really appreciate any suggestions with the following data analysis issue. Please read till the end as the problem at first may appear trivial, but after much researching, I assure you it is ...
### Test regression parameter against a constant in SPSS (0 answers, 19 views)
This is a pretty basic question, but I can't find an answer by searching for different statements of the same problem. Is there a straightforward way to test if a regression parameter is different ...
### Correcting Classical Linear Regression Model (CLRM) violation (0 answers, 31 views)
How would you correct for violated error term assumption of CLRM in stata. If our hypothesis is correct, our assumption of error term is violated. I am trying to correct this on stata but don't know ...
### How to prove the number of distinct distributions in a group of distributions? (1 answer, 31 views)
Let's say we had 5 distributions: A,B,C,D,E. An ANOVA test would tell us whether or not all of the means are equal, and thus a low p-value would mean at least one of the means is unlikely to be ...
### Hidden Markov model & statistical significance (0 answers, 28 views)
I am using HMM to explore the language development of one individual with six variables as input, which are trained into three sequences. In the first state, the covariance of variable A and B is ...
### Interpretation of Spearman correlation for small sample (1 answer, 64 views)
I am currently being confused by different opinions regarding the spearman correlation interpretation. Some says, $\rho<0.2$ can be ignored as small/weak relationship, some says $\rho<0.1$, for ...
### Statistical test for multiple comparison with dependent observations (MATLAB) (0 answers, 42 views)
I want to use matlab to perform a statistical test to find if there's a significantly higher mean return in any of n different sample populations. However, I can only find post hoc (although they can ...
### Test for significance of correlation matrix (1 answer, 74 views)
If one wishes to test if the correlations in a correlation matrix are statistically significant as a whole group, one can perform a likelihood ratio test of the hypothesis that the correlation matrix ...
### How to test if two populations are different with the x axis a time variable (0 answers, 26 views)
I have two populations: Class A and Class B. Both populations have two variables: ...
### What kind of statistical analysis to use for a homework paper? Anova? (0 answers, 26 views)
I am writing a paper where I have to pretend to have conducted an experiment. Each of the experimental groups and control has different people. There are 8 experimental groups and 1 control. 1a 1b ...
### Calculating differences between groups and across time (age) (0 answers, 25 views)
Let's say I have data of one variable $V$ measured for two groups of individuals ($a$ and $b$) and I want to see whether there is a significant difference between them and over time, i.e. $V_a$ stays ...
### Expected value of sums of squares of a blocking factor in a split plot design (0 answers, 60 views)
How can we show that in a split plot design, the expected value of the sum-of-squares of the blocking factor has expected value equal to $k\sigma^2_{\text{block}} + \sigma^2_{\text{error}}$? I tried ...
### Why is an ANOVA interaction no longer significant after including a covariate? [duplicate] (0 answers, 26 views)
I have initially run a 2x2x2 Anova and found some significant interaction between the independent variables. I then ran an ANCOVA where I added a covariant. I am confused with how to interpret the ...
### Resolution of a Test Method (0 answers, 4 views)
How does one calculate the resolution of a test method, that is, the smallest difference between two measurements, also called the resolving power or discrimination?
### Conservative confidence interval (1 answer, 43 views)
One month before the election, a poll of a large number of randomly selected voters showed 65% planning to vote for a certain candidate. The newspaper article reported a 90% conservative margin of ...
### Repeated 100x 10-fold cross validation, what is the sample size when doing an significance test? (0 answers, 31 views)
I iterated my 10-fold cross validation 100 times for several methods. Now I want to use a t-test to test if the results are significant. However, I'm not sure what the sample size is. Is the sample ...
### Statistical analysis of multiple algorithms over a single dataset (0 answers, 27 views)
I have a dataset $X = \{x_1, ..., x_n\}$. I also have three algorithms, $A_1, A_2, A_3$, that each take a single data point as input and produce some measure of how well they performed. If I apply ...
### Statistics test for concentration of extract effecting germination (1 answer, 18 views)
I am undertaking biology course work and am a little stuck. My experimental hypothesis is: There is a significant correlation between increasing tomato extract concentrations and reduced seed ...
### How do I test for significant difference between sets of time ordered data? (0 answers, 45 views)
I have data about activity on a game site over a period of time (~1.5 years). I want to compare a sub-population's activity between certain time segments so that I can say things like "Group A played ...
### Deciding if my mutant strains are significantly different from my wild type from data measured over time or from gradients (0 answers, 19 views)
I have a set of data which is the oxygen consumption of a wild type (WT) and a few mutants. I averaged the data and plotted graphs with each mutant and WT series and plotted a line of best fit. Is ...
### How to compare results of a questionnaire consisting of 40 likert items, before and after an intervention? (2 answers, 76 views)
I need some help! In my research (medical education field), I had 32 participants answering a questionnaire consisting of 40 likert items rating their level of confidence in different aspects of ...
### Speed up web a/b tests with sample size checkpoints (1 answer, 83 views)
Before starting an a/b test with a control and one experimental route, I can calculate a required sample size based on conversion rate estimates for both routes. I can get a good estimate of the ...
### Overall rank from multiple ranked lists (1 answer, 65 views)
I've looked through a lot of literature available online, including this forum without any luck and hoping someone can help a statistical issue I currently face: I have 5 lists of of ranked data, ...
### How do I prove that my results are significantly better in this situation? (0 answers, 36 views)
I have a test suit for an optimization problem with 270 problem instances. For all these instances I have a lower bound and a single solution of another algorithm. I do not own this algorithm and am ...
### Difference between chi statistical methods (pearson/likelyhood/linear-by-linear) (0 answers, 22 views)
Can someone briefly explain the differences between these three tests? When would you use one over the other? Which tests are associated with a test of independence, homogeneity, or goodness-of-fit? ...
### Chi squared test of homogeneity (1 answer, 38 views)
Suppose you want to test the null hypothesis: H0: p1 = 0.65, p2 = 0.27, p3 = 0.01, p4 = 0.05, p5 = 0.02 to determine whether a population's distribution matches the proposed proportions. You ...
### Area to the right or left when computing test statistic (3 answers, 51 views)
Can someone generally explain when the area to the right or area to the left is sufficient in solving a statistical problem (using z, t, or chi)? For example, in a chi-squared problem, if we're ...
### Multiple hypothesis testing and F tests? (1 answer, 67 views)
Does the multiple hypothesis testing problem apply to the calculation of an F statistic for joint significance? It seems to me that the more variables you are including in your test for joint ...
### One way ANOVA and Duncan's test (0 answers, 46 views)
I'm working on a program for statistical calculations.Always have two sets of data... I calculated all the other values from ANOVA and Duncan's test: SS Effect,df effect,MS Effect,SS Error,df ...
### Whats the biggest difference in calculating a simple regression model with/without a constant term? (0 answers, 31 views)
I have calulated an OLS with and without a constant term. However, besides that the values are different I haven`t really found anything valueable? Therefore my question is: Whats the biggest ...
### Hypothesis / Statsitical significance between continous variable and dependent binary variable (2 answers, 26 views)
Sorry I am a stats newbie, I understood that I can use the search feature, I tried to search but I am afraid that I am not using the right terms and the results returned are not quite relevant to my ...
### Evaluate Quantile Regression results and t-stats at different levels. (0 answers, 34 views)
I have a few questions regarding quantile regression and how to interpret the results. I have several independent variables and I want to figure out which combination of variables have the highest ...
### Finding value of residual (1 answer, 41 views)
Consider the LSR line: y = 1158.86 - 5.54x A researcher sought to examine the effect of average teacher salary on the average (total) SAT score. The data, measured at the state level so 50 ...
http://mathoverflow.net/questions/71206?sort=votes
## Discrete-analytic functions
I do not know if such a concept already exists, but let us consider functions which are equal to their Newton series.
We know that functions which are equal to their Taylor series are called analytic, so let us call functions that are equal to their Newton series "discrete analytic".
The formula is analogous to the Taylor series but uses finite differences instead of derivatives, so for any discrete-analytic function:
$$f(x) = \sum_{k=0}^\infty \binom{x-a}k \Delta^k f\left (a\right)$$
It is known that for a functional equation $\Delta f=F$ there are infinitely many solutions which differ by any 1-periodic function. But it appears that there is only one (up to a constant) discrete-analytic solution, i.e. all discrete-analytic solutions differ only by a constant term.
Thus I have the following questions:
• Do discrete-analytic functions express special properties on the complex plane?
• Is there a method to extend the notion of discrete analyticity to a range of functions for which the Newton series does not converge (so as to make it possible to choose the distinguished solution to the above-mentioned equation)?
For the second part of the question, as far as I know there is at least one similar attempt, Mueller's formula:
If $$\lim_{x\to{+\infty}}\Delta f(x)=0$$ then $$f(x)=\sum_{n=0}^\infty\left(\Delta f(n)-\Delta f(n+x)\right)$$
although it seems not to be universal and I do not know whether it is always useful.
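To make the definition concrete, here is a small numerical sketch of a truncated Newton series (my own illustration, not part of the question). I use $f(x)=2^x$ at $a=0$, for which $\Delta^k f(0)=1$ for every $k$ and the series converges:

```python
from math import comb

def gen_binom(x, k):
    """Generalised binomial coefficient C(x, k) for real x and integer k >= 0."""
    out = 1.0
    for i in range(k):
        out *= (x - i) / (i + 1)
    return out

def newton_series(f, x, a=0, terms=25):
    """Partial sum of the Newton forward-difference series of f around a."""
    vals = [float(f(a + n)) for n in range(terms + 1)]
    total = 0.0
    for k in range(terms + 1):
        # forward difference: Delta^k f(a) = sum_j (-1)^(k-j) C(k, j) f(a + j)
        delta_k = sum((-1) ** (k - j) * comb(k, j) * vals[j] for j in range(k + 1))
        total += gen_binom(x - a, k) * delta_k
    return total

f = lambda x: 2.0 ** x
print(newton_series(f, 2.5), f(2.5))   # the partial sum is already close to 2**2.5
```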
Also posted to math.stackexchange.com/questions/53646/… – Gerry Myerson Jul 25 2011 at 12:51
## 2 Answers
An analytic characterization of functions represented by the Newton series is known - that is a class of functions analytic on some half-plane $\operatorname{Re}x>\lambda$ and satisfying there some estimates. See
Gelʹfond, A. O. Calculus of finite differences. Translated from the Russian. International Monographs on Advanced Mathematics and Physics. Hindustan Publishing Corp., Delhi, 1971 (or several earlier Russian language editions).
On the other hand, there is a more recent attempt to develop a kind of complex analysis based on more complicated difference operators. See
Dynnikov, I. A.; Novikov, S. P. Geometry of the triangle equation on two-manifolds. Mosc. Math. J. 3, No. 2, 419-438 (2003),
and the review in http://www.zentralblatt-math.org/zmath/en/advanced/?q=an:1046.39016&format=complete
@Anatoly: why are you advising him only Russian editions? – Ilya Jul 25 2011 at 13:15
@Gortaur: If you know other appropriate material, you are welcome. – Anatoly Kochubei Jul 25 2011 at 14:12
Thanks, I looked through the Russian edition, found on the Internet. There is a section that deals with "functions that can be represented as Newton series" (he did not give a special name to the class). The properties are mostly dealing with the maximum growth rate of such functions. Another section indeed gives the conditions when an analytic function can be represented exactly with a Newton series. The question about generalizations still remains though... – Anixx Jul 25 2011 at 20:28
Also look at the book titled "Discrete calculus by analogy", written by Izadi, Aliev, and Bagirev.
http://math.stackexchange.com/questions/253818/example-of-set-which-contains-itself
# Example of set which contains itself
I am trying to understand Russell's paradox.
How can a set contain itself? Can you show an example of a set which is not the set of all sets and which contains itself?
The set of all possible things is an element of itself since it itself is a thing...but, of course, this is precisely what Russell's paradox came to point at and since then there is no set of all sets or set of all things or weird stuff like that. Thus, within the usual ZF (with AC or without) logical system, we cannot have stuff as above. – DonAntonio Dec 8 '12 at 15:37
In mathematics, sets that are elements of themselves don't come up in practice. They are just a problematic, theoretical possibility in discussions of things like RP. Various convoluted workarounds for them have nevertheless been put forward, e.g. type theory, some axioms of ZFC. But IMHO, they are overkill. You just have to make sure you cannot prove $\exists k\forall x (x\in k\leftrightarrow\neg x\in x)$ in the set theory you adopt. There are simpler, more natural ways than TT or ZFC. – Dan Christensen Dec 10 '12 at 14:42
## 4 Answers
In modern set theory (read: ZFC) there is no such set. The axiom of foundation ensures that such sets do not exist, which means that the class defined by Russell in the paradox is in fact the collection of all sets.
It is possible, however, to construct a model of all the axioms except the axiom of foundation, and generate sets of the form $x=\{x\}$. Alternatively there are stronger axioms such as the Antifoundation axiom which also imply that there are sets like $x=\{x\}$. Namely, sets for which $x\in x$.
For ordinary mathematics it makes no difference whether one assumes the axiom of foundation or not (since there is a model of ZFC within any model of ZFC minus Foundation), so there is no way to point to a particular set for which $x\in x$ is true.
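For reference (a standard restatement, added as an aside), the class at the heart of Russell's paradox is $$R=\{x : x\notin x\},$$ and asking whether $R\in R$ yields $R\in R\iff R\notin R$. This is why $R$ is a perfectly definable collection that can never be a set, regardless of whether foundation holds.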
-
And as Russell showed, well-foundedness is also closely related to typeability in a system without recursive types. – Jon Purdy Dec 8 '12 at 20:19
$x = \{ x \}$ ...
... but actually one of the axioms of ZFC (the "usual" axioms of set theory) has the immediate consequence that no set has itself as a member.
-
Aczel's anti-foundational axiom is a classic example: http://en.wikipedia.org/wiki/Aczel%27s_anti-foundation_axiom
"It states that every accessible pointed directed graph corresponds to a unique set. In particular, the graph consisting of a single vertex with a loop corresponds to a set which contains only itself as element, i.e. a Quine atom."
-
For example, $R=\{R,2,4,6,8,10,\ldots\}$; such a set can just be written down explicitly like this.
-
That looks just like something of the form: $$\{\{\cdots \{\{x\}\}\cdots \}\}$$ which seems to be frowned upon by set theorists. – Peter Tamaroff Dec 8 '12 at 15:43
1
@PeterTamaroff: Not frowned upon as much as just violating regularity. A set containing itself as an element is by definition not well-founded. Russel's paradox is a paradox in naive set theory, so without anything like the axiom of foundation. There are axiomatizations on set theory that allow regularity to be violated. One of the most famous axioms which imply that sets like the one proposed exist is Aczel's anti-foundation axiom (and its various flavours). – tomasz Dec 8 '12 at 17:24
1
@Peter: To add on tomasz' comment, regardless to well-foundedness the paradox shows that there is a definable collection which is not a set. This doesn't even have to be the collection of all well-founded sets, but it is not a set. If the universe is not well-founded it is possible that the class defined is not all the sets in the universe. – Asaf Karagila Dec 8 '12 at 17:50
OK. – Peter Tamaroff Dec 8 '12 at 17:51
|
http://mathhelpforum.com/algebra/57430-equation-help.html
|
Thread:
1. Equation help
The teacher gave this problem. She gave the weight of an old penny as 3.09 g and the weight of a new penny as 2.51 g. There are ten pennies in a container and the combined weight is 26.78 g. She wants an equation to find how many of each penny are in the container. The equation we created was 3.09x + 2.51y = 26.78, but we were told that this is incorrect because you cannot have two variables in a single equation.
Help. Mom lost.
2. You're on the right track, you just need a 2nd equation because you do have two variables to solve for. The total amount of pennies is 10, so you can also say x+y=10.
$3.09x+2.51y=26.78$
$x+y=10$
Now you either use y=10-x or x=10-y to make a substitution into the 1st equation and solve for them both.
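As an aside, here is that substitution carried out with the numbers quoted in the first post (which, per the follow-up below, were actually taken from the wrong container):
$3.09x+2.51(10-x)=26.78\implies 0.58x=26.78-25.10=1.68\implies x\approx 2.9$
so with these particular figures $x$ does not come out to a whole number of pennies; with the right numbers the same two steps give the answer.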
3. We were substituting the wrong set of numbers. There was a second container with pennies as well and I was trying to solve for x and substitute that set but it wasn't working. I guess it's been too long since I did algebra. Thanks so much for putting us on the right track.
|
http://mathoverflow.net/questions/74938/foundations-existence-of-uncountable-ordinals/74940
|
## Foundations: Existence of uncountable ordinals.
This isn't really a research question, but at least it's research-level mathematics. I'm talking with some other people about the first uncountable ordinal, and I want some facts to inform this discussion. Specifically, what useful or interesting foundations of mathematics do or don't allow one to prove the existence of an uncountable ordinal?
If you don't have a better interpretation, then for "useful", you can probably take "capable of encoding most if not all rigorous applied mathematics"; for "interesting", you can probably take "popular for study by researchers in foundations". For "existence of an uncountable ordinal", you could take "existence of a well-ordered uncountable set", "existence of a set whose elements are precisely the countable ordinals", etc.
Hopefully there is a body of known results or obvious corollaries of such, since it could be a matter of some work to apply this question to foundational system X, and I don't expect anybody to do that.
-
I think you at least need the power set axiom. – Quinn Culver Sep 8 2011 at 21:11
1
You might want to see this on math.SE math.stackexchange.com/questions/4778/… and perhaps also possibly this could be a question of interest for you as well. math.stackexchange.com/questions/46833/… – Asaf Karagila Sep 9 2011 at 5:59
Thanks, Asaf. It looks like I shouldn't neglect math.stackexchange! – Toby Bartels Sep 12 2011 at 16:24
## 2 Answers
For the existence of an uncountable set with a definable well-ordering, it suffices to have `$\mathcal P(\mathcal P(\mathbb N))$`, where $\mathcal P$ means the power set. Any countable order-type is represented by a well-ordering of $\mathbb N$, and can therefore be coded by a subset of $\mathbb N$. Call two such codes equivalent if the well-orderings they code are isomorphic. Then the equivalence classes are elements of `$\mathcal P(\mathcal P(\mathbb N))$`, they correspond naturally to the countable ordinals, and so they constitute an uncountable well-ordered set. (If you want the well-ordering itself to be a set, rather than given by a definition, and if you want it to be literally a set of ordered pairs, with ordered pairs defined in the usual Kuratowski fashion, then you'll need some more assumptions to ensure the existence of the relevant ordered pairs, etc. But if you're willing to settle for some other coding of ordered pairs, then `$\mathcal P(\mathcal P(\mathbb N))$` seems to suffice.)
In particular, the existence of an uncountable well-ordered set is well within the power of Zermelo set theory (like ZF but without the axiom of replacement). If, on the other hand, you want the set of countable ordinals and if you use von Neumann's (nowadays standard) representation of ordinals by sets, then Zermelo set theory is not enough. One natural model of Zermelo set theory is the collection of sets of rank smaller than `$\omega+\omega$`; this contains plenty of uncountable well-ordered sets, but its ordinals are just those below `$\omega+\omega$`. (The moral of this story is that, in Zermelo set theory and related systems, ordinals should not be defined using the von Neumann representation but rather as isomorphism classes of well-orderings.)
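A compact way to write the coding sketched above (an aside; fix any pairing of $\mathbb N\times\mathbb N$ with $\mathbb N$, so that a binary relation on $\mathbb N$ becomes a subset of $\mathbb N$): let $$\mathrm{WO}=\{A\subseteq\mathbb N : A\text{ codes a well-ordering of a subset of }\mathbb N\},\qquad A\sim B\iff A\text{ and }B\text{ code isomorphic orders}.$$ The quotient $\mathrm{WO}/{\sim}$ is a subset of $\mathcal P(\mathcal P(\mathbb N))$, its elements correspond to the countable ordinals, and comparing order types makes it an uncountable well-ordered set.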
-
2
To avoid a possible confusion: Some people omit the axiom schema of separation from the ZF axioms, since it can be deduced from replacement. When deleting replacement, to produce Zermelo set theory, these people should restore the separation axioms. – Andreas Blass Sep 8 2011 at 22:43
Thanks, Andreas; this is exactly what I need! – Toby Bartels Sep 12 2011 at 16:26
The number of countable ordinals is $\aleph_1$. The set of all countable ordinals is the smallest uncountable ordinal $\omega_1$. So is there such a set, or is that a proper class? Suppose you look at the set of all sets of rank less than $\omega_1$. In ZF, that actually is a set, not a proper class. If I'm not mistaken, it is a model of all of the axioms of ZF except the power set axiom. Within that model, the set of countable ordinals is a proper class. Are all of them countable within that model? I.e., each of them has an enumeration, and the question is whether such an enumeration is one of the sets in that model. I'm rusty on some of this, but if I have all of this right, then this shows you cannot get uncountability in ZF minus the power-set axiom. Does this mean you need the power-set axiom for that? I don't think so, you might be able to add some weaker axiom or other axiom that would give you that. In fact, if you just add a new axiom that says there is a set containing every countable ordinal as a member, I think that's quite a bit weaker than the power-set axiom.
-
5
$V_{\omega_1}$ does satisfy the powerset axiom, as does any $V_\alpha$ for a limit ordinal $\alpha$. That's because taking a powerset only increases rank by 1. However, $V_{\omega_1}$ fails to satisfy the replacement axiom since there are plenty of very, very large sets in $V_{\omega_1}$, but so few ordinals... – François G. Dorais♦ Sep 8 2011 at 22:08
3
@Michael: Your statement about satisfying the ZF axioms except for power set would become correct if the set `$V_{\omega_1}$` of sets of countable rank were replaced with the set `$H(\omega_1)$` of sets whose transitive closures are countable. `$H(\omega_1)$` contains the same ordinals as `$V_{\omega_1}$`, namely the countable ordinals, but in other respects `$H(\omega_1)$` is much smaller than `$V_{\omega_1}$`. – Andreas Blass Sep 8 2011 at 22:50
Thanks again, Andreas, this is a useful model to have. – Toby Bartels Sep 12 2011 at 16:27
|
http://math.stackexchange.com/questions/8876/convergence-of-sequence-of-random-variables/8878
|
# convergence of sequence of random variables
What does this expression mean: $\lim_{n\rightarrow\infty} E|X_n-X|=0$? Here $X_n$ is a sequence of random variables and $X$ is a random variable. What does this expression imply? Can I say that the sequence $X_n$ converges to $X$ in probability and almost surely?
-
## 2 Answers
This is called convergence in the mean or convergence in the $L^{1}$-norm. In general, if \begin{eqnarray} \lim_{n \to \infty} \mathbb{E}(| X_{n} - X|^{p}) = 0, \end{eqnarray} then $X_{n}$ is said to converge to $X$ in the $L^{p}$-norm (provided that $\mathbb{E}(|X_{n}|^{p})$ is finite for all $n \geq 1$). Analytically, there are nice implications of such convergence. For example, convergence in an $L^{p}$-norm implies convergence in an $L^{q}$-norm if $p \geq q$. (See http://en.wikipedia.org/wiki/Convergence_of_random_variables).
Markov's inequality states \begin{eqnarray} \mathbb{P}(|X_{n} - X| > \epsilon) \leq \epsilon^{-p} \, \mathbb{E}(|X_{n} - X|^{p}). \end{eqnarray} Thus, $L^{p}$-norm convergence implies convergence in probability.
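Spelling out the case needed for the question (this is just the displayed inequality with $p=1$): $$\mathbb{P}(|X_{n}-X|>\epsilon)\leq\epsilon^{-1}\,\mathbb{E}(|X_{n}-X|)\longrightarrow 0\quad\text{for every }\epsilon>0,$$ which is precisely convergence in probability.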
-
Wiki page mentions convergence in mean implies convergence in probability. Why is that? – user957 Nov 4 '10 at 7:39
When $\lim _{n \to \infty } E|X_n - X| = 0$, we say that $X_n$ converges in mean to $X$. It is very well known that this implies that $X_n$ converges in probability to $X$, but not that $X_n$ converges almost surely to $X$. Consider a sequence $(X_n)$ of independent random variables such that $P(X_n = 0) = 1 - 1/n$ and $P(X_n = 1) = 1/n$. Then $E|X_n|=1/n$, and hence $X_n$ converges in mean to $0$. However, since $\sum\nolimits_{n = 1}^\infty {P(X_n = 1)} = \infty$ and the $X_n$ are independent, almost surely the sequence $(X_n)$ contains infinitely many $1$'s; in particular $X_n$ does not converge to $0$ in the almost sure sense.
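A small simulation sketch of this counterexample (an illustration only, using numpy with a fixed seed; not part of the original answer):

```python
import numpy as np

# Independent X_n with P(X_n = 1) = 1/n and P(X_n = 0) = 1 - 1/n.
# E|X_n| = 1/n -> 0, so X_n -> 0 in mean, yet (by Borel-Cantelli) a sample
# path almost surely contains infinitely many 1's, so X_n does not -> 0 a.s.
rng = np.random.default_rng(0)

N = 10**6
n = np.arange(1, N + 1)
X = (rng.random(N) < 1.0 / n).astype(int)   # one draw of X_1, ..., X_N

print("E|X_n| at n = 10, 100, 1000:", 1/10, 1/100, 1/1000)   # deterministic values
print("number of 1's among X_1..X_N:", X.sum())               # roughly log(N) of them
print("largest n with X_n = 1 so far:", n[X == 1][-1])        # keeps growing as N grows
```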
-
|
http://mathoverflow.net/questions/10516/uniformization-theorem-for-riemann-surfaces/10521
|
## Uniformization theorem for Riemann surfaces
How does one prove that every simply connected Riemann surface is conformally equivalent to the open unit disk, the complex plane, or the Riemann sphere, and these are not conformally equivalent to each other?
I would like to know about different ways of proving it, and appropriate references. This is not to know the best way; but to know about various possible approaches. Therefore I wouldn't be choosing a best answer.
-
Please skip the proof that these three are not conformally equivalent! :) – Anweshi Jan 2 2010 at 20:41
## 10 Answers
As has been pointed out, the inequivalence of the three is elementary.
The original proofs of Koebe and Poincare were by means of harmonic functions, i.e. the Laplace equation ${\Delta}u = 0$. This approach was later considerably streamlined by means of Perron's method for constructing harmonic functions. Perron's method is very nice, as it is elementary (in complex analysis terms) and requires next to no topological assumptions. A modern proof of the full uniformization theorem along these lines may be found in the book "Conformal Invariants" by Ahlfors.
The second proof of Koebe uses holomorphic functions, i.e. the Cauchy-Riemann equations, and some topology.
There is a proof by Borel that uses the nonlinear PDE that expresses that the Gaussian curvature is constant. This ties in with the differential-geometric version of the Uniformization Theorem: Any surface (smooth, connected 2-manifold without boundary) carries a Riemannian metric with constant Gaussian curvature. (valid also for noncompact surfaces).
There is a proof by Bers using the Beltrami equation (another PDE).
For special cases the proof is easier. The case of a compact simply connected Riemann surface can be done by constructing a nonconstant meromorphic function by means of harmonic functions, and this is less involved than the full case. There is a short paper by Fisher, Hubbard and Wittner where the case of domains on the Riemann sphere is done by means of an idea of Koebe. (Subtle point here: Fisher et al consider non-simply connected domains on the Riemann sphere. The universal covering is a simply connected Riemann surface, but it is not obvious that it is biholomorphic to a domain on the Riemann sphere, so the Riemann Mapping Theorem does not apply).
The Uniformization Theorem lies a good deal deeper than the Riemann Mapping Theorem. The latter is the special case of the former where the Riemann surface is a simply connected domain on the Riemann sphere.
I decided to add a comment to clear up a misunderstanding. The theorem that a simply connected surface (say smooth, connected 2-manifold without boundary) is diffeomorphic to the plane (a.k.a. the disk, diffeomorphically) or the sphere, is a theorem in topology, and is not the Uniformization Theorem. The latter says that any simply connected Riemann surface is biholomorphic (or conformally equivalent; same in complex dimension $1$) to the disk, the complex plane or the Riemann sphere.
But the topology theorem is a corollary to the Uniformization Theorem. To see this, suppose $X$ is a simply connected (smooth etc.) surface. Step (1): Immerse it in $\mathbb{R}^3$ so as to miss the origin. Step (2): Put the Riemann sphere (with its complex structure!) in $\mathbb{R}^3$ in the form of the unit sphere. Step (3): For every tangent space $T_pX$ on $X$, carry the complex structure $J$ from the corresponding tangent space on the Riemann sphere by parallel transport (Gauss map) to $T_pX$. This is well-defined by choosing a basepoint and recalling that $X$ is simply connected. Step (4): Presto! $X$ is now a Riemann surface (it carries a complex structure), so it is biholomorphic to the disk or the plane or the Riemann sphere, thus diffeomorphic to one of the three.
Of course, I have glided over the question of immersing the surface in 3-space, because this is topology. Actually, I vaguely recall that there is a classification of noncompact topological surfaces by Johannsen (sp?), and no doubt the topological theorem would immediately fall out of that.
-
I learned the proof from this paper, Uniformization of Riemann Surfaces. They actually provide three methods of proof, and I found the first of these the easiest to follow. It's not too difficult to go through the proof in detail if you know a bit of topology and complex analysis. The idea is to construct a global analytic function by minimizing a Dirichlet integral. Then, by working out the properties of "flow lines" - on which the function has constant imaginary part - you can get a good idea of what the map looks like, and then show that it is either a mapping onto the Riemann sphere, the Riemann sphere with the origin removed (equiv to the complex plane) or the Riemann sphere with a line segment removed (equiv to the open unit disk).
The second method they provide relies on triangulating the surface. The proof constructs the mapping inductively on larger and larger sets of triangles, using the Riemann mapping theorem to construct maps and the Schwarz reflection principle to join them together.
They also provide a third method, based on sheaf cohomology, although I am not so familiar with this method. The idea is to construct geometric realizations of projective structures on the Riemann surface. However, it does not seem to be quite complete, and there are some unresolved problems posed at the end.
-
Do you have the link to the paper? – Ilya Nikokoshev Jan 2 2010 at 23:25
The link should be provided in my answer, if you click on it. (initially made a mistake, but I edited and fixed the link). – George Lowther Jan 2 2010 at 23:30
I've just taken a course which concluded with a sketch of the uniformisation theorem for Riemann surfaces, following the last chapter of Gamelin's Complex Analysis. The idea is:
-If the Green's function exists for your surface, use it to construct a conformal map from the surface to a bounded region in the complex plane. Now apply the Riemann mapping theorem.
-If the Green's function doesn't exist, construct a meromorphic variant called the bipole Green's function. Similar to the first case, we can use this to construct an injective map from the surface to the Riemann sphere. Now if this map misses more than one point of the Riemann sphere, simply-connectedness of the domain (and hence the image) means that the image is bounded, so we can use the Riemann mapping theorem. Otherwise the map misses one point (so the surface is conformally equivalent to the complex plane) or is surjective (conformally equivalent to the Riemann sphere).
Complex analysis is very far from my field, so I'm afraid I can't explain this any further (and I apologise for any inaccuracies in the above).
-
I was going to answer Anweshi's other complex analysis question too (predictably, the fundamental theorem of algebra turned up in our courses too) but the question seemed to have disappeared, it says "page not found" when I click on it?? – Amy Pang Jan 2 2010 at 22:53
Anweshi deleted it when he feared that it would be closed. Anweshi was harassed too much and even got a ban for asking questions. – Anweshi Jan 3 2010 at 15:24
The moderator Anton revived that question after Anweshi's request. – Anweshi Jan 3 2010 at 18:35
There is a nice book about the uniformization theorem by Henri Paul de Saint Gervais, Uniformisation des surfaces de Riemann. The author is actually a collective of French mathematicians.
-
The Riemann sphere isn't conformally equivalent to the others because it is not homeomorphic to them. :)
-
1
And the complex plane and the disc aren't because the map from the plane to the disc would be a bounded non-constant analytic function. – Ryan Budney Jan 2 2010 at 20:36
Yes of course .. :) You know what I had mainly in mind; it is the other part, that every simply connected Riemann surface is conformally equivalent to one of the above three. – Anweshi Jan 2 2010 at 20:37
I believe there is a formal proof of the classification along the following lines.
Being simply connected means that whenever you have a curve, you can always get a disk inside. If there is more than one way to glue a disk, you must have a Riemann sphere. If there is always exactly one way, you can take the picture and put it step-by-step on a complex plane. Once there, you have either the whole plane or you're inside the complement of a ray. In the latter case, you are between a disk and a disk, so there's some approximation thing that says you're also a disk.
The three cases can be distinguished by their global symmetry group: there are three different constant curvature metrics, with the curvature resp. 1, 0, -1 for the sphere, plane, and disk case. Therefore you have three slightly different symmetry groups which can be written as the isometries of quadratic forms in 3 dimensions: $SO(x^2 + y^2 + z^2)$, $SO(x^2+y^2)$, $SO(x^2+y^2-z^2)$.
-
A good reference? – Anweshi Jan 2 2010 at 20:42
Sorry, don't know any specific reference, that's just how I think about them :) – Ilya Nikokoshev Jan 2 2010 at 20:44
Interesting... How do you go about rigorously proving statements like, "If there is more than one way to glue a disk, you must have a Riemann sphere", and "If there is always exactly one way, you can take the picture and put it step-by-step on a complex plane." ? I suppose, if these two are done, then the rest is the Riemann mapping theorem .. maybe that is what Gerald Edgar above had in mind. – Anweshi Jan 2 2010 at 20:58
The first is very geometric: imagine you have circle and fill it twice by a disk. By definition, you now have a map of a sphere. Now to say that these two ways are inequivalent is exactly the same as saying that this sphere cannot be filled with a ball. So your $\pi_2$ is nontrivial; this is only possible if your whole surface is a sphere. – Ilya Nikokoshev Jan 2 2010 at 21:00
The second proceeds as follows: You take a triangulation of your surface and construct a map from your surface to the plane one triangle at a time. Because every loop can be extended to one and only one disk, you never have a contradiction. I've heard it only as a proof sketch and not actual proof though. – Ilya Nikokoshev Jan 2 2010 at 23:22
On my point of view the best modern reference is the book by J. Hubbard, Teichmuller theory and its applications.
A very interesting (and little known) proof is due to Lavrentiev, and can be found in an Appendix to the book of Goluzin, Geometric theory of functions of a complex variable.
-
One proof not yet cited is by Ricci flow. It is a proof of the differential-geometric version of Uniformization: in each conformal class of Riemannian metrics, there is a metric of constant curvature. The idea is very natural: by construction, metrics of constant curvature are fixed points of the Ricci flow, so take a general metric, evolve it by Ricci flow and show that it converges. This is essentially a work of Hamilton (reference: "The Ricci flow: an introduction", Chow, Knopf) which was completed by Chen, Lu and Tian.
I don't claim that it is the best proof of unifomization theorem, it is just a way to see in a simple case how the Ricci flow can be used to obtain classification results. (The point is that for surfaces Ricci flow has no singularity in finite time, which is not the case in 3 dimensions).
-
There is an entire book dedicated to this theorem: Uniformisation des surfaces de Riemann, by Henri Paul de Saint-Gervais (a collective pseudonym for Aurélien Alvarez, Christophe Bavard, François Béguin, Nicolas Bergeron, Maxime Bourrigan, Bertrand Deroin, Sorin Dumitrescu, Charles Frances, Étienne Ghys, Antonin Guilloux, Frank Loray, Patrick Popescu-Pampu, Pierre Py, Bruno Sévennec, Jean-Claude Sikorav). http://www.amazon.fr/Uniformisation-surfaces-Riemann-th%C3%A9or%C3%A8me-centenaire/dp/2847882332/ref=sr_1_1?s=books&ie=UTF8&qid=1344144095&sr=1-1 A lot of historical insight!
Here you find some video-lectures on the same subject by one of the authors:Etienne Ghys:
http://video.impa.br/index.php?page=escola-de-altos-estudos-the-uniformization-theorem-old-and-new
-
The book is in French, but the lectures are in English. – pi2000 Aug 5 at 6:05
It is the same book as in Ben McKay's answer (sorry), but the link to the video is new. – pi2000 Aug 5 at 6:08
Isn't this called the "Riemann mapping theorem"? At least the only nontrivial part of it. Ahlfors proves it using Montel's theorem.
For the rest:
These are not equivalent: The sphere is compact, the others not. The disk admits a bounded nonconstant analytic function, the plane does not.
-
1
How does it become a triviality, if Riemann mapping theorem is assumed? – Anweshi Jan 2 2010 at 20:40
2
The Riemann mapping theorem, as far as I know, only states that a simply connected region in the complex plane is conformally equivalent to the unit disk (unless it is the entire plane). – Harald Hanche-Olsen Jan 2 2010 at 20:42
|
http://physics.stackexchange.com/questions/4481/why-doesnt-orbital-electron-fall-into-the-nucleus-of-rb85-but-falls-into-the-n
|
Why doesn't orbital electron fall into the nucleus of Rb85, but falls into the nucleus of Rb83?
Rb83 is unstable and decays to Kr-83. Mode of the decay is electron capture. Rb85 is stable.
The nuclei Rb83 and Rb85 have the same charge. Rb85 is heavier than Rb83, but gravitation is too weak to take into account. So why does an orbital electron fall into the nucleus of Rb83? What is the reason?
-
Google for "Valley of Stability" – Georg Feb 2 '11 at 22:21
1 Answer
It is not a matter of "falling in": all s orbitals have non-trivial probability densities at the center.
It is about energy balance in the nucleus.
Kr-83 is a lower energy configuration than Rb-83 by enough to make up for the neutrino and the gamma(s). Evidently Kr-85 is not a sufficiently lower energy state than Rb-85.
-
can it be considered as elastic and inelastic collision of orbital electron and nucleus? – voix Feb 2 '11 at 19:30
1
@voix: I suspect that you are thinking in terms of classical trajectories (i.e. little billiard balls zooming around on well defined paths), and that picture is not very useful in this case. Quantum mechanics dominates here and the deal is that the electrons are---in some sense---fractionally in the nucleus all the time. The reaction $e + p \to n + \nu$ (and generally one or more subsequent gammas as the resulting nucleus rearranges itself) is allowed by that proximity. – dmckee♦ Feb 2 '11 at 19:47
|
http://math.stackexchange.com/questions/79335/definition-of-a-subspace-possible-to-express-with-mathematical-notation
|
# Definition of a Subspace - Possible to Express with Mathematical Notation?
Theorem: Let V be a vector space over the field K, and let W be a subset of V. Then W is a subspace if and only if it satisfies the following three conditions:
• The zero vector, 0, is in W.
• If u and v are elements of W, then any linear combination of u and v is an element of W
• If u is an element of W and c is a scalar from K, then the scalar product c u is an element of W
I was just wondering if there was a mathematical way of expressing these facts, which are needed to prove that a given subset is a subspace.
Here is what I've come up with I don't know if it's correct though:
• $0 \in$ W
• $u + v = v + u$ $\forall$ $u,v \in$ W
• Couldn't figure out how to properly express this.
-
2
"$u+v = v+u$ [...]" Excuse me? Everything already takes place inside the vector space $V$, so this is always going to be true. – kahen Nov 5 '11 at 22:10
Also, one generally puts quantifiers before the formula they refer to, not after. – Arturo Magidin Nov 5 '11 at 22:34
One thing to note is that your original statement is a way to mathematically express those conditions. Words have just as big a place in math as symbols. – Riley E Nov 5 '11 at 23:38
## 1 Answer
The first one is essentially correct, though you should use $\mathbf{0}$ (boldface) to distinguish it from the scalar $0$, just like the original does.
It is more common to put quantifiers first and the clause to which they apply following them, in parentheses. So you would write the second one as $$\forall u,v\in W (u+v=v+u).$$ That said, this does not correspond to the second statement above. Instead, what it says is that addition is commutative for elements of $W$ (it says: if $u$ and $v$ are any vectors in $W$, then the result of adding $u$ to $v$ is the same as the result of adding $v$ to $u$, which is not "any linear combination of $u$ and $v$ is an element of $W$").
Instead, you want to say something like (here $K$ is the field of scalars) $$\forall \alpha,\beta\in K\Biggl(\forall \mathbf{u},\mathbf{v} \Bigl( \mathbf{u},\mathbf{v}\in W\longrightarrow \alpha \mathbf{u}+\beta\mathbf{v}\in W\Bigr)\Biggr).$$ This says: for any two scalars $\alpha$ and $\beta$, and any two vectors $\mathbf{u}$ and $\mathbf{v}$, if $\mathbf{u}$ and $\mathbf{v}$ are both in $W$, then (their linear combination) $\alpha\mathbf{u}+\beta\mathbf{v}$ is also in $W$.
For the third, it's very similar to the second (in fact, I'm surprised it is included, since it is a special case of the second one, taking $\mathbf{v}=\mathbf{0}$): $$\forall c\in K\Bigl(\forall \mathbf{u}\bigl( \mathbf{u}\in W\longrightarrow c\mathbf{u}\in W\bigr)\Bigr).$$ This says: for every $c\in K$, for every vector $\mathbf{u}$, if $\mathbf{u}$ is in $W$, then $c\mathbf{u}$ is in $W$.
The above are a bit of "pidgin logical notation" rather than completely formal, because of the use of things like $\forall \alpha,\beta\in K$; but it is a common (ab)use of the notation.
If you really wanted to be more formal, the quantifiers should only have one variable each, and the conditions would be put in the statement. So the third statement could be $$\forall c\forall\mathbf{u}\Bigl( \bigl((c\in K)\land (\mathbf{u}\in W)\bigr) \longrightarrow (c\mathbf{u}\in W)\Bigr),$$ (you can put the universal quantifiers in the other order) and the second could be $$\forall\mathbf{u}\forall\mathbf{v}\forall\alpha\forall\beta\Bigl(\bigl( (\mathbf{u}\in W)\land(\mathbf{v}\in W)\land (\alpha\in K)\land (\beta\in K)\bigr)\longrightarrow (\alpha\mathbf{u}+\beta\mathbf{v}\in W)\Bigr).$$
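As a side note, the closure conditions above can be spot-checked numerically for a concrete candidate; the sketch below uses the hypothetical example $W=\{(x,y,z)\in\mathbb{R}^{3} : x+y+z=0\}$ (the subspace and all helper names are mine, not from the question):

```python
import numpy as np

rng = np.random.default_rng(0)

def in_W(w, tol=1e-9):
    # Membership test for this particular W: the coordinates must sum to 0.
    return abs(w.sum()) < tol

def random_element_of_W():
    x, y = rng.normal(size=2)
    return np.array([x, y, -x - y])      # forces x + y + z = 0

assert in_W(np.zeros(3))                 # the zero vector lies in W

for _ in range(1000):
    u, v = random_element_of_W(), random_element_of_W()
    alpha, beta = rng.normal(size=2)     # arbitrary scalars from K = R
    assert in_W(alpha * u + beta * v)    # closure under linear combinations

print("all closure checks passed")
```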
-
Thanks for the thorough explanation, I am not fully experienced yet with proof/notation based mathematics, so I wasn't sure of the proper expression to use for the axioms. – eWizardII Nov 5 '11 at 23:00
|
http://en.wikisource.org/wiki/On_the_Theory_of_Aberration_and_the_Principle_of_Relativity
|
# On the Theory of Aberration and the Principle of Relativity
On the Theory of Aberration and the Principle of Relativity (1910) by Henry Crozier Keating Plummer
Monthly Notices of the Royal Astronomical Society, Vol. 70, pp. 252-266.
On the Theory of Aberration and the Principle of Relativity.
By H. C. Plummer, M.A.
I. In a former paper (vol. lxix. p. 496) the theory of aberration has been discussed from the standpoint of ordinary optical theory. This suffices for the conclusion that beyond the astronomical effects as commonly understood no first-order optical effects can be put in evidence, and that, in the absence of greater experimental refinement, it is impossible for the observer to detect his own absolute motion in the ether. With this position the astronomer must be content. If we have reason to be dissatisfied with the results of the efforts hitherto made to determine the constant of aberration, we have little ground for taking into account the second-order effects. But on the other hand an optical experiment has been performed which cannot be reconciled with the ordinary theory, and we have been forced to admit that the theory, if not actually erroneous, can be no more than an approximation to the truth. Meanwhile the electronic theory of matter has been developed, and has embraced in its synthesis an explanation of this difficulty. The result is that the Principle of Relativity, with its far-reaching implications, has obtained a cardinal position in modern science. It is possible that there are some astronomers who are not familiar with the literature of the subject, and to whom an elementary account of the new ideas in physics may be of interest. Accordingly the subject will be approached from the purely optical side, and an attempt will be made to present the theory, which is a product of the last decade and is due chiefly to Professor Lorentz,[1] in the simplest possible way.
2. According to the principle of relativity, it is impossible to find experimental evidence of the absolute motion through the ether. Let us consider the bearing of this on a very simple experiment. Let AB and AC (fig. 1) be actual lengths l' and l at right angles to one another, and let mirrors be placed at B and C perpendicular to AB, AC, so that rays of light from A will be reflected back to A. Now suppose that the whole apparatus is carried through the ether with the velocity v in the direction AB, the velocity of light being U. The time from A to B and back again will be
$t_{1}=l'/(U-v)+l'/(U+v)=2l'U/\left(U^{2}-v^{2}\right).$
But the time by the path AC'A' in the ether will be
$t_{2}=2AC'/U=2l/\sqrt{U^{2}-v^{2}}$
Hence if $t_{1}=t_{2}$,
$l'=l\sqrt{1-v^{2}/U^{2}}$
The actual lengths l', l must thus be different if the light-times are the same. Nevertheless no difference can be detected between the lengths by any test that we can apply, or the principle of relativity will be contravened. Hence we are led to admit a universal contraction of all material systems in the direction of their motion through the ether, according to the law
$l'=l\ \cos\ \beta$, where $\sin\ \beta=v/U$.
The dimensions transverse to the motion are considered unaltered.
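For scale (an aside, not part of the original text): with the Earth's orbital speed $v\approx 30$ km/s and $U\approx 3\times 10^{5}$ km/s, the contraction amounts to $$l'/l=\sqrt{1-v^{2}/U^{2}}\approx 1-\tfrac{1}{2}(v/U)^{2}\approx 1-5\times 10^{-9},$$ a second-order quantity.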
[Fig. 1 is not reproduced in this transcription.]
If it were objected that the light-times considered above could not be determined with sufficient accuracy to support so astonishing an inference, the objection would apply only to the simple form of argument adopted. In practice the light-times can be compared with extraordinary precision by making the beams interfere on returning to A. In fact we are dealing, as it were, with a schematic representation of the Michelson-Morley experiment, the null result of which is the fundamental fact on the optical side which has to be explained.
3. The uniform contraction in one dimension of the moving material system, the possibility of which we have thus been led to entertain, suffices of itself for the discussion of some interesting optical problems. As an example, the position of the focus of a moving parabolic mirror may be chosen. Let BA1B' be the mirror (fig. 2) and F1 its focus when at rest. Let BA2B' be the figure of the mirror when in motion towards a star situated on its axis and F2 the simultaneous position of the point of the (contracted) apparatus which corresponds to F1. The incident wave-front is BCB'. If CF1 is taken as the axis of x and CB as the axis of y, the equation of BA1B' will be
$z^{2}+y^{2}=4f(x+b)$
where A1F1 = f, A1C = b. If we admit a contraction $\lambda^{-1}$ which will make CF1 = λ.CF2, the equation of BA2B' becomes
$z^{2}+y^{2}=4f\lambda(x+b/\lambda)$.
[Fig. 2 is not reproduced in this transcription.]
This is a real deformation of the figure of the mirror. But if BA3B' is the surface at which the advancing wave-front actually meets the moving mirror
$A_{2}A_{3}/v=A_{3}C/U=A_{2}C/(v+U)$
where v is the velocity of the mirror, and the virtual surface on which the wave falls becomes
$z^{2}+y^{2}=4f\lambda(1+v/U)\{x+b/\lambda(1+v/U)\}$.
This is still a paraboloid, and the wave BCB' will reach its focus F3 after a time
$t=f\lambda(U+v)/U^{2}+b/\lambda(U+v)$.
Hence no change of focus will be detected provided F2F3 = vt. Now
$\begin{array}{l} CF_{3}=f\lambda(1+v/U)-b/\lambda(1+v/U)\\ \\CF_{2}=f/\lambda-b/\lambda\end{array}$
so that
$F_{2}F_{3}=f\lambda(1+v/U)-f/\lambda+bv/\lambda(U+v)$
which is equal to vt if
$f\lambda.v(U+v)/U^{2}=f\lambda.(U+v)/U-f/\lambda$
or
$\lambda^{-2}=(U+v)\left(1/U-v/U^{2}\right)=1-v^{2}/U^{2}.$
This is the law of contraction previously obtained, and the result is a complete compensation of the optical effect due to the motion of the mirror.
When a telescope is moving directly towards a star, Veltmann’s theorem[2] shows that the motion will not affect the relative position of the focus to the first order. But a second order effect will remain outstanding, although it will be too small to be ascertained by focal settings. According to our present ideas, however, no effect of any order is to be expected, and our example shows how the compensation operates in a particularly simple case.
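As an aside, the algebra of § 3 can be verified symbolically; the sketch below (Python with sympy, variable names my own) substitutes the stated contraction into the compensation condition $F_{2}F_{3}=vt$ and confirms that it holds identically.

```python
import sympy as sp

# Symbolic check of section 3 (an aside, not part of the original paper):
# with 1/lambda^2 = 1 - v^2/U^2 the condition F2F3 = v*t is an identity.
f, b, v, U, lam = sp.symbols('f b v U lamda', positive=True)

CF3 = f*lam*(1 + v/U) - b/(lam*(1 + v/U))
CF2 = f/lam - b/lam
t   = f*lam*(U + v)/U**2 + b/(lam*(U + v))

residual = (CF3 - CF2) - v*t                      # F2F3 - v*t
residual = residual.subs(lam, U/sp.sqrt(U**2 - v**2))
print(sp.simplify(residual))                      # prints 0
```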
4. In what precedes we have contemplated only a change in one dimension of the moving system, or, as it may be expressed, a transformation of one coordinate in space. We have now to consider a related transformation to apparent or, as it is called, "local" time. Let axes be taken attached to the moving system, the measured coordinates being ξ, η, ζ. Let their position at the time t = 0 be the axes fixed in space, the corresponding coordinates being x, y, z. The motion of the system is supposed parallel to the axis of x or ξ.
We now imagine a new system of time τ, which depends not only on the absolute time t but also on the position in space. It is sufficient for our purpose to suppose that
$\tau=at-bx,\ \xi=cx-dt,\ \eta=y,\ \zeta=z.$
All optical phenomena which would be described by an observer at rest in space in terms of x, y, z and t will be described by an observer in motion in terms of ξ, η, ζ and τ. Now one result of the principle of relativity and of the constant velocity of light is that the spherical wave-front
$x^{2}+y^{2}+z^{2}=U^{2}t^{2}$
must appear to the moving observer as
$\xi^{2}+\eta^{2}+\zeta^{2}=U^{2}\tau^{2},$
for a spherical wave which actually converges to a point in reality must appear to converge to a point and to move with the velocity U. This requires
$(cx-dt)^{2}-U^{2}(at-bx)^{2}\equiv x^{2}-U^{2}t^{2},$
or
$c^{2}-b^{2}U^{2}=1,\ a^{2}-d^{2}U^{-2}=1,\ cd=abU^{2},$
which can be satisfied by
$a=c=\sec\beta,\ bU=d/U=\tan\beta.$
Here β is arbitrary, but we must also have ξ= 0 identical with x = vt, or
$v=d/c=U\ \sin\beta,$
and thus the complete transformation is determined:
$\begin{array}{l} \tau=t\ \sec\beta-x\ \tan\beta/U\\ \\\xi=x\ \sec\beta-Ut\ \tan\ \beta\\ \\\eta=y,\ \zeta=z,\ \sin\beta=v/U.\end{array}$
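As an aside, the defining property of this transformation can be checked symbolically; the sketch below (Python with sympy, variable names my own) confirms that it carries $x^{2}+y^{2}+z^{2}-U^{2}t^{2}$ into $\xi^{2}+\eta^{2}+\zeta^{2}-U^{2}\tau^{2}$ identically, so spherical wave-fronts map to spherical wave-fronts moving with velocity $U$.

```python
import sympy as sp

# Symbolic check of the transformation of section 4 (an aside, not in the paper).
x, y, z, t, U, beta = sp.symbols('x y z t U beta', real=True)

tau  = t/sp.cos(beta) - x*sp.sin(beta)/(U*sp.cos(beta))   # t*sec(beta) - x*tan(beta)/U
xi   = x/sp.cos(beta) - U*t*sp.sin(beta)/sp.cos(beta)     # x*sec(beta) - U*t*tan(beta)
eta, zeta = y, z

invariant_difference = (xi**2 + eta**2 + zeta**2 - U**2*tau**2) \
                       - (x**2 + y**2 + z**2 - U**2*t**2)
print(sp.simplify(invariant_difference))   # prints 0
```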
5. The physical interpretation of this transformation is simple. The coordinates y and z are unaltered, but we have at any given time t
$\xi_{2}-\xi_{1}=\left(x_{2}-x_{1}\right)\sec\beta.$
The measured distance between two points in the direction of motion is therefore greater than the actual distance in the ratio of sec β : 1. This accords with the idea already entertained that the corresponding dimension of the material system, including any scale which may be used for making measures, is actually diminished in consequence of the motion in the ratio cos β : 1.
At a given position in the ether
$\frac{\partial\tau}{\partial t}=\sec\beta,$
which means that a stationary clock made to synchronise with passing clocks keeping "local" time must be accelerated in the ratio sec β : 1 when compared with the standard time of space t. But on the other hand,
$\frac{d\tau}{dt}=\sec\beta-v\ \tan\beta/U=\cos\beta,$
which means that a "local" clock moving through the ether with the velocity v has a rate retarded in the ratio cos β : 1 when compared with the "standard" clock. The result of the transformation is that we have established a consistent system of apparent time which is such that if we imagine luminous clock-faces at all points of the moving system indicating local time, those which are at equal measured distances from a given station will appear to show the same time, and this a time differing from the local time of the station by an amount equal to the apparent constant distance divided by U.
6. We now see that if the laws of optics relative to the moving system, expressed in terms of ξ, η, ζ and τ, are formally the same as the laws for a stationary system, expressed in terms of x, y, z and t, there will be no possibility of detecting the fact of motion by any optical experiment made with apparatus which is carried with the system.
Now we have from the transformation of § 4,
$\begin{array}{l} \frac{d\xi}{d\tau}=\left(\frac{dx}{dt}-U\ \sin\beta\right)/\left(1-\frac{dx}{dt}\sin\beta/U\right)\\ \\\frac{d\eta}{d\tau}=\frac{dy}{dt}\cos\beta/\left(1-\frac{dx}{dt}\sin\beta/U\right)\\ \\\frac{d\zeta}{d\tau}=\frac{dz}{dt}\cos\beta/\left(1-\frac{dx}{dt}\sin\beta/U\right)\end{array}$
for the relative velocities. Let W be the absolute velocity of a ray which is parallel to the axis of x or ξ in a moving medium which has the refractive index μ when at rest. Then
$\frac{d\xi}{d\tau}=(W-v)/\left(1-Wv/U^{2}\right).$
But this is $\mu^{-1}U$ if the apparent ray-velocity is unaltered by the motion. Hence
$W-v=\mu^{-1}U\left(1-Wv/U^{2}\right)$
or
$W=\mu^{-1}U+\left(1-\mu^{-2}\right)v/\left(1+\mu^{-1}v/U\right).$
The second member on the right is the apparent drift of plane light-waves which are normal to the direction of motion. To the first order it agrees with Fresnel’s expression $(1-\mu^{-2})v$ and is consistent with Fizeau’s experiment, which cannot be performed accurately enough to verify it to a higher order. For the rest we have a general explanation of the null effect of optical experiments without supposing that the ether is carried along in the neighbourhood of the Earth without relative motion.
7. According to the electromagnetic theory a plane wave of light depends upon two periodic vectors, the components of which contain the factor
$\sin\ 2\pi w\{t-(ax+by+cz)/W\}$,
where a, b, c are the direction cosines of the wave-normal and W is the wave-velocity. To the observer moving with velocity v, the corresponding factor is
$\sin\ 2\pi w'\{\tau-(a'\xi+b'\eta+c'\zeta)/W'\}$.
The transformation of § 4 gives the relations
$\begin{array}{rl} w & =w'(\sec\beta+a'U\ \tan\beta/W')\\ \\wa/W & =w'(a'\ \sec\beta/W'+\tan\ \beta/U)\\ \\wb/W & =w'b'/W'\\ \\wc/W & =w'c'/W'\end{array}$
or, if the transformation be reversed (which only requires the sign of β to be changed),
$\begin{array}{rl} w' & =w\ \sec\beta(1-aU\sin\beta/W)\\ \\a'/W' & =(a/W-\sin\beta/U)/(1-aU\ \sin\beta/W)\\ \\b'/W' & =b\ \cos\beta/W(1-aU\ \sin\beta/W)\\ \\c'/W' & =c\ \cos\beta/W(1-aU\ \sin\beta/W)\end{array}$
The latter equations give
$1/W'^{2}=\left\{ 1-2aW\ \sin\beta/U-\left(1-a^{2}-W^{2}/U^{2}\right)\sin^{2}\beta\right\} \div W^{2}(1-aU\ \sin\beta/W)^{2},$
or
$1/W{}^{2}=\left\{ 1+2a'W'\ \sin\beta/U-\left(1-a'^{2}-W'^{2}/U^{2}\right)\sin^{2}\beta\right\} \div W'^{2}(1+a'U\ \sin\beta/W')^{2},$
which express the apparent wave-velocity in a moving medium in terms of the absolute, or vice versa. For a vacuum we put W = W' = U, and we have
$\begin{array}{l} w'=w\ \sec\beta(1-a\ \sin\beta)\\ \\a'=(a-\sin\beta)/(1-a\ \sin\beta)\\ \\b'=b\ \cos\beta/(1-a\ \sin\beta)\\ \\c'=c\ \cos\beta/(1-a\ \sin\beta).\end{array}$
8. The laws of stellar aberration and of the Doppler effect are at once deducible, as Einstein[3] has shown. For a star situated in the direction making an angle φ with the direction of motion of the observer we have a = - cos φ and a' = - cos φ', where φ' is the apparent direction. Then we see that
$\begin{array}{rl} w'= & w\ \sec\beta(1+\sin\beta\ \cos\phi)\\ \\\cos\phi'= & (\cos\phi+\sin\beta)/(1+\sin\beta\ \cos\phi)\end{array}$
These expressions can, however, be simplified. For the latter equation gives
$\sin\phi'=\cos\beta\ \sin\phi/(1+\sin\beta\ \cos\phi).$
Hence
$w'\sin\phi'=w\ \sin\phi.$
Again,
$\frac{1-\cos\phi'}{1+\cos\phi'}=\frac{(1-\sin\beta)(1-\cos\phi)}{(1+\sin\beta)(1+\cos\phi)}$
or
$\tan\frac{1}{2}\phi'=\sqrt{\frac{U-v}{U+v}}\tan\frac{1}{2}\phi,$
which puts the law of aberration in an extremely simple form. If we use wave-lengths instead of wave-frequencies we have
$\lambda'/\sin\phi'=\lambda/\sin\phi.$
This form, which connects directly the apparent wave-length with the apparent aberrational position of the star, becomes useless when φ = φ' = 0, but in this case the product of the last two equations gives in the limit
$\lambda'=\sqrt{\frac{U-v}{U+v}}\lambda,$
which is the new form of Doppler’s principle. It must he remembered that v is the motion of the observer relative to the medium, and that λ depends on the unknown velocity of the source of light relative to the medium. In some cases we may fairly assume that λ is constant, but λ as well as φ is originally unknown and, if the principle of relativity be accepted in its widest extent, remains unknowable.
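As an aside, expanding the half-angle formula of § 8 to first order in $v/U$ recovers the ordinary law of aberration, $$\phi-\phi'\approx\frac{v}{U}\sin\phi,$$ so the new law differs from the old one only in terms of the second and higher orders, as is remarked in § 11 below.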
9. The geometrical significance of the new law of aberration is interesting. In the first place we may consider the stereographic projection of the celestial sphere on the tangent plane perpendicular to the direction of motion. The form of the law of aberration
$\tan\frac{1}{2}\phi'=\sqrt{\frac{U-v}{U+v}}\tan\frac{1}{2}\phi$
shows that the effect of aberration is simply to alter the scale of the projection. But the stereographic projection is a conformal representation of the sphere. Hence actual configurations on the sphere are only changed conformally by the effect of aberration, or in other words any small area is altered only in size and not in shape. We know that aberration merely changes the scale of a photograph of a small part of the sky, and the truth of this fact now becomes independent of the velocity (however large) of the observer. Also stars which appear to lie on a circle at any one time will continue to do so permanently.
Another geometrical representation is obtained by assimilating φ' to the eccentric and φ to the true anomaly in an ellipse whose eccentricity is v/U = sin β. This means that we view the apparent celestial sphere from the centre. Then, to pass from the apparent direction of a star to its actual direction, we must imagine the sphere transformed into an ellipsoid by contracting its dimensions perpendicular to the axis along which motion takes place in the ratio of cos β : 1. The true direction will then be inferred by viewing the ellipsoid of revolution from the focus which is in advance of the centre. Conversely, we may interchange φ and φ' if we employ the other focus. We thus see that if we consider the true celestial sphere to undergo the contraction just specified, the apparent positions of the stars are given by viewing the ellipsoid from that focus which follows the centre.
10. Secular Aberration.— In what precedes, the question of all second-order effects connected with aberration has taken a new form. But we may return to the old order of ideas to consider briefly the subject of secular aberration which arises from the motion of the solar system through the ether of space, and which was first discussed by Villarceau.[4] It is a very simple matter when regarded from the old point of view, but it has led to some apparent confusion. It has been seen (vol. lxix. p. 505) that if the light which leaves a star at the time T is observed at the time t, the observed direction of the star coincides with the actual direction of a fictitious body which is supposed to start from the position of the star at time T, and to continue in motion during the time (t — T) with a velocity equal and parallel to that of the Earth at the time t.
Now the velocity of the Earth is compounded of its velocity v relative to the centre of mass of the solar system and the velocity V (supposed constant) of the system. If V' is the resultant and U the velocity of light, we have
$ES=U(t-T),\ SS'=V'(t-T),$
E (fig. 3) being the position of the Earth at time t, S the position of the star at time T, ES' the observed direction, and SS' the virtual displacement of the star. But this displacement is compounded of two, SS0 and S0S', such that
[Fig. 3 is not reproduced in this transcription.]
$SS_{0}=V(t-T),\ S_{0}S'=v(t-T).$
Let ES0 = Uτ. If then we ignore the velocity V, as in practice we do, and assume a virtual star in the permanently displaced position SO, we shall infer from observation a fictitious velocity of the Earth v' such that
$S_{0}S'=v(t-T)=v'\tau$
or
$v'/v=(t-T)/\tau=ES/ES_{0}=U/\sqrt{U^{2}+V^{2}+2UV\ \cos\phi},$
where φ is the angle between the true direction of the star and the direction of the secular motion of the solar system. Otherwise expressed, the result is as if the constant of aberration for the given star is changed in the ratio of $U:\sqrt{U^{2}+V^{2}+2UV\ \cos\phi}$. This is an elementary deduction of Villarceau’s result, and on the assumed premises no other effect is to be expected.
The subject of secular aberration was discussed by Seeliger,[5] who arrived at results which are quite inadmissible. His expressions for the effects of aberration in R.A. and declination become infinite at the pole, whereas it is obvious that there is no singularity in the neighbourhood of this point in the sky. There is no special connection between the pole and the direction of the operative vectors considered above, and such results can only be due to faulty analysis. It would have been unnecessary to refer to this paper, had not the same errors been reproduced by Weinek[6] in an elaborate memoir of recent date. A correct treatment of the question has been given by Schwarzschild[7] on the lines of the present paragraph. But of course the whole question has now entered on a new phase.
11. It is easy to see that the old and the new laws of aberration agree to the first order of small quantities. Beyond the first order, the present accuracy of astronomical observations does not encourage us to look for any means of discriminating between them. Nevertheless it is interesting to ask whether the principle of relativity, as outlined above, has robbed us of the theoretical possibility of detecting the effect of the motion of the solar system through the ether of space. This has been asserted, and we have now to find a satisfactory justification for the assertion. It may be thought that a sufficient argument is contained in the statement italicised in § 6. But this does not appear to cover the case of aberration without a more critical examination. The motion of the observer does affect his observations, even when expressed in terms of the transformed variables, and we cannot dispense with the necessity of explaining in detail how the expected compensation operates when the observations are corrected in a definite, specified manner. And the question is subtle enough to leave ample room for misapprehension. The most obvious instance of this seems worthy of attention. Let us imagine observations made simultaneously by an observer E on the Earth and an observer S on the Sun (or rather the centre of gravity of the solar system, supposed to be moving with uniform velocity through space). Referred in the usual way to a sphere (fig. 4), let S be the apex of the Sun’s ether-velocity V0, and E the apex of the Earth’s ether-velocity V1. Let P represent the true direction of a star, and P0, P1 the positions which it appears to the observers S and E respectively to occupy. Now we should have complete agreement between the two observers, and total compensation for E, if the correction applied by E had the effect of changing P1 into P0. The observer E must
[Fig. 4 omitted.]
suppose that he is moving relatively to S in the direction of some point E' on the great circle SE; and if he is to deduce the position P0 in space from the observed position of the star P1, the point E' must lie also on the great circle P0P1. Since P0, P1, E' are points on the sides of the triangle SPE, and S, E, E' are points on the sides of the triangle PP0P1, we have
$\begin{array}{ccccc} \frac{\sin SE'}{\sin EE'} & \cdot & \frac{\sin EP_{1}}{\sin P_{1}P} & \cdot & \frac{\sin PP_{0}}{\sin P_{0}S}=1\\ \\\frac{\sin P_{0}E'}{\sin P_{1}E'} & \cdot & \frac{\sin P_{1}E}{\sin PE} & \cdot & \frac{\sin PS}{\sin P_{0}S}=1.\end{array}$
Let
$PS=\phi_{0},\ P_{0}S=\phi'_{0},\ PP_{0}=\phi_{0}-\phi'_{0}$
$PE=\phi_{1},\ P_{1}E=\phi'_{1},\ PP_{1}=\phi_{1}-\phi'_{1}.$
Hence
$\frac{\sin\ SE'}{\sin\ EE'}=\frac{\sin\left(\phi_{1}-\phi'_{1}\right)}{\sin\phi'_{1}}\cdot\frac{\sin\phi'_{0}}{\sin\left(\phi_{0}-\phi'_{0}\right)}$
$\frac{\sin P_{0}E'}{\sin P_{1}E'}=\frac{\sin\phi_{1}}{\sin\phi'_{1}}\cdot\frac{\sin\phi'_{0}}{\sin\phi{}_{0}}.$
The first of these shows that
$\frac{\sin\left(\phi_{1}-\phi'_{1}\right)}{\sin\phi'_{1}}\div\frac{\sin\left(\phi_{0}-\phi'_{0}\right)}{\sin\phi'_{0}}=const.$
for all stars, which happens to be true according to the ordinary theory of aberration. But it is not true according to the "relativity" law of aberration, which gives
$\frac{\sin\left(\phi-\phi'\right)}{\sin\phi'}=\frac{\sin\beta+(1-\cos\beta)\cos\phi}{\cos\beta}.$
In fact it appears that a law of aberration, which would explain the absence of a visible effect arising from the secular motion by supposing that the corrected observation coincides in space with the standard observation, would require simultaneously
$\sin(\phi-\phi')/\sin\phi'=f_{1}(V),\ \sin(\phi-\phi')/\sin\phi=f_{2}(V),$
and this is not possible. We have then to look in another direction for the explanation of the way in which the "relativity" law effects a compensation, and for the necessary hint I am indebted to Mr. Eddington, who very kindly supplied me with a particular case which was both simple and illuminating.
12. A general proof will now be given. Let the ether-velocity of the observer S be $V_{0}=U\ \sin\beta_{0}$, and of the observer E be $V_{1}=U\ \sin\beta_{1}$, and let the angle between them be α, being measured from V0 towards V1. The components of V0 along and perpendicular to V1 are $V_{0}\cos\alpha,\ -V_{0}\sin\alpha$. Hence for E the components of the apparent velocity of S are (by § 6)
$\frac{V_{0}\cos\alpha-U\ \sin\beta_{1}}{1-V_{0}\cos\alpha\sin\beta_{1}/U},\ \frac{-V_{0}\sin\alpha\ \cos\beta_{1}}{1-V_{0}\cos\alpha\sin\beta_{1}/U}$
or
$\frac{U\left(\cos\alpha\ \sin\beta_{0}-\sin\beta_{1}\right)}{1-\cos\alpha\sin\beta_{0}\sin\beta_{1}},\ \frac{-U\ \sin\alpha\ \sin\beta_{0}\cos\beta_{1}}{1-\cos\alpha\sin\beta_{0}\sin\beta_{1}}.$
Now E must infer that his own velocity relative to S, $V_{10}=U\ \sin\beta_{10}$, has these components reversed in sign; and if θ be the angle between V1 and the resultant we have
$\sin\beta_{10}\cos\theta=\left(\sin\beta_{1}-\cos\alpha\ \sin\beta_{0}\right)/\left(1-\cos\alpha\sin\beta_{0}\sin\beta_{1}\right)$
$\sin\beta_{10}\sin\theta=\sin\alpha\ \sin\beta_{0}\cos\beta_{1}/\left(1-\cos\alpha\sin\beta_{0}\sin\beta_{1}\right)$
and we deduce
$\cos\beta_{10}=\cos\beta_{0}\ \cos\beta_{1}/\left(1-\cos\alpha\sin\beta_{0}\sin\beta_{1}\right).$
We take the z-axis perpendicular to V0 and V1 throughout, and
(i) The x-axis parallel to V0. If (λ, μ, ν) are the direction cosines of a star in its true position, S observes this star in the direction (by § 7)
$\lambda_{0}=\frac{\lambda+\sin\beta_{0}}{1+\lambda\sin\beta_{0}},\ \mu_{0}=\frac{\mu\ \cos\beta_{0}}{1+\lambda\sin\beta_{0}},\ \nu_{0}=\frac{\nu\ \cos\beta_{0}}{1+\lambda\sin\beta_{0}}.$
(ii) We turn the x-axis through the angle α to bring it into the direction of V1. The direction cosines of the star in its true position become
$\lambda\ \cos\alpha+\mu\ \sin\alpha,\ \mu\ \cos\alpha-\lambda\ \sin\alpha,\ \nu,$
and E will observe it in the direction
$\begin{array}{lr} \lambda_{1}= & \left(\lambda\ \cos\alpha+\mu\ \sin\alpha+\sin\beta_{1}\right)/\left\{ 1+\left(\lambda\ \cos\alpha+\mu\ \sin\alpha\right)\sin\beta_{1}\right\} \\ \\\mu_{1}= & (\mu\ \cos\alpha-\lambda\ \sin\alpha)\cos\beta_{1}/\left\{ 1+\left(\lambda\ \cos\alpha+\mu\ \sin\alpha\right)\sin\beta_{1}\right\} \\ \\\nu_{1}= & \nu\ \cos\beta_{1}/\left\{ 1+\left(\lambda\ \cos\alpha+\mu\ \sin\alpha\right)\sin\beta_{1}\right\} \end{array}$
(iii) We turn the x-axis through the further angle θ to bring it into the direction of V10. The direction cosines of the star in its apparent position become
$\lambda_{1}\cos\theta+\mu_{1}\sin\theta,\ \mu_{1}\cos\theta-\lambda_{1}\sin\theta,\ \nu_{1},$
and when E has corrected this position for his observed relative velocity V10 he will infer that the star lies in the direction
$\begin{array}{lr} \lambda_{10}= & \left(\lambda_{1}\ \cos\theta+\mu_{1}\ \sin\theta-\sin\beta_{10}\right)/\left\{ 1-\left(\lambda_{1}\ \cos\theta+\mu_{1}\ \sin\theta\right)\sin\beta_{10}\right\} \\ \\\mu_{10}= & (\mu_{1}\ \cos\theta-\lambda_{1}\ \sin\theta)\cos\beta_{10}/\left\{ 1-\left(\lambda_{1}\ \cos\theta+\mu_{1}\ \sin\theta\right)\sin\beta_{10}\right\} \\ \\\nu_{10}= & \nu_{1}\ \cos\beta_{10}/\left\{ 1-\left(\lambda_{1}\ \cos\theta+\mu_{1}\ \sin\theta\right)\sin\beta_{10}\right\}. \end{array}$
These have to be compared with $\lambda_{0},\ \mu_{0},\ \nu_{0}$, remembering that the axes have been turned through an angle α + θ.
13. The process of eliminating $\lambda_{1},\ \mu_{1},\ \nu_{1}$ and θ is perfectly simple and straightforward, if rather complicated. We find
$1-\left(\lambda_{1}\cos\theta+\mu_{1}\sin\theta\right)\sin\beta_{10}=\frac{\left(1+\lambda\ \sin\beta_{0}\right)\cos^{2}\beta_{1}}{\left\{ 1+\left(\lambda\ \cos\alpha+\mu\ \sin\alpha\right)\sin\beta_{1}\right\} \left(1-\cos\alpha\ \sin\beta_{0}\ \sin\beta_{1}\right)}$
whence
$\nu_{10}=\nu\ \cos\beta_{0}/\left(1+\lambda\sin\beta_{0}\right).$
Also
$\begin{array}{r} \sin\beta_{10}\left(\mu_{1}\cos\theta-\lambda_{1}\sin\theta\right)\left\{ 1+\left(\lambda\ \cos\alpha+\mu\ \sin\alpha\right)\sin\beta_{1}\right\} \left(1-\cos\alpha\ \sin\beta_{0}\ \sin\beta_{1}\right)\\ \\=\cos\beta_{1}\left\{ \mu\left(\cos\alpha\ \sin\beta_{1}-\sin\beta_{0}\right)-\left(\lambda+\sin\beta_{0}\right)\sin\alpha\ \sin\beta_{1}\right\} \end{array}$
whence
$\mu_{10}\sin\beta_{10}=\frac{\cos\beta_{0}\left\{ \mu\left(\cos\alpha\ \sin\beta_{1}-\sin\beta_{0}\right)-\left(\lambda+\sin\beta_{0}\right)\sin\alpha\ \sin\beta_{1}\right\} }{\left(1+\lambda\ \sin\beta_{0}\right)\left(1-\cos\alpha\ \sin\beta_{0}\ \sin\beta_{1}\right)}.$
Similarly a little reduction gives
$\lambda_{10}\sin\beta_{10}=\frac{\left(\lambda+\sin\beta_{0}\right)\left(\cos\alpha\ \sin\beta_{1}-\sin\beta_{0}\right)+\mu\ \sin\alpha\cos^{2}\beta_{0}\sin\beta_{1}}{\left(1+\lambda\ \sin\beta_{0}\right)\left(1-\cos\alpha\ \sin\beta_{0}\ \sin\beta_{1}\right)}.$
We come now to the interpretation of these expressions. The components of V1 along and perpendicular to V0 are V1 cos α, V1 sin α. Hence the apparent components of the relative velocity of E as observed by S are
$\frac{V_{1}\cos\alpha-U\ \sin\beta_{0}}{1-V_{1}\cos\alpha\sin\beta_{0}/U},\ \frac{V_{1}\sin\alpha\cos\beta_{0}}{1-V_{1}\cos\alpha\sin\beta_{0}/U},$
or
$\frac{U\left(\cos\alpha\ \sin\beta_{1}-\sin\beta_{0}\right)}{1-\cos\alpha\sin\beta_{0}\sin\beta_{1}},\ \frac{U\ \sin\alpha\cos\beta_{0}\sin\beta_{1}}{1-\cos\alpha\sin\beta_{0}\sin\beta_{1}}.$
Hence if χ be the angle between V0 and the resultant $V_{01}=U\ \sin\beta_{01}$, we have
$\sin\beta_{01}\cos\chi=\left(\cos\alpha\ \sin\beta_{1}-\sin\beta_{0}\right)/\left(1-\cos\alpha\ \sin\beta_{0}\ \sin\beta_{1}\right)$
$\sin\beta_{01}\sin\chi=\sin\alpha\ \cos\beta_{0}\sin\beta_{1}/\left(1-\cos\alpha\ \sin\beta_{0}\ \sin\beta_{1}\right)$
and we deduce
$\cos\beta_{01}=\cos\beta_{0}\cos\beta_{1}/\left(1-\cos\alpha\ \sin\beta_{0}\ \sin\beta_{1}\right).$
We see that $\beta_{01}=\beta_{10}$, and when we introduce $\lambda_{0},\ \mu_{0},\ \nu_{0}$ and χ into the last expressions for $\lambda_{10},\ \mu_{10},\ \nu_{10}$ we obtain simply
$\lambda_{10}=\lambda_{0}\cos\chi+\mu_{0}\sin\chi,\ \mu_{10}=\mu_{0}\cos\chi-\lambda_{0}\sin\chi,\ \nu_{10}=\nu_{0}.$
The interpretation is that the sky as seen by the observer E and referred to the direction in which he observes himself to be moving relative to S, is identical with the sky as seen by the observer S, and referred to the direction in which he observes E to be moving relative to himself. Thus the observations of E, corrected for his observed relative velocity, give the same configuration as the observations of S uncorrected, and the latter is a permanent standard so long as his velocity is uniform. But for E the whole universe is rotated in space through an angle θ + α - χ. This rotation is precisely that which is required to enable the two observers to identify the same star as occupying the position which each observer imagines to represent the apex of the motion of E relative to S, and thus it could not be detected even though the two observers were in communication. The difficulty discussed in § 11 arises from the fact that the observed velocity of E relative to S and the observed velocity of S relative to E are not in the same straight line. We find that
$\tan(\chi-\theta)=\frac{\sin\alpha\left(\cos\beta_{0}+\cos\beta_{1}\right)}{\cos\alpha\left(1+\cos\beta_{0}\cos\beta_{1}\right)-\sin\beta_{0}\sin\beta_{1}},$
which can be reduced to the form
$\tan\frac{1}{2}(\chi-\theta)=\frac{\cos\frac{1}{2}\left(\beta_{1}-\beta_{0}\right)}{\cos\frac{1}{2}\left(\beta_{1}+\beta_{0}\right)}\tan\frac{1}{2}\alpha$
Thus χ - θ is not equal to α, as it would be in ordinary kinematics.
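As a numerical cross-check of the elimination in §§ 12–13, steps (i)–(iii) and the definitions of θ and χ can be implemented directly and compared with the rotated coordinates of § 13. The following is a minimal sketch, not part of the original derivation; it assumes Python with NumPy and takes U = 1.

```python
import numpy as np

U = 1.0  # unit of velocity (the velocity of light)

def aberrate(l, m, n, sinb):
    # The aberration law of section 7: observer moving with speed U*sin(beta)
    # along the +x axis; (l, m, n) are direction cosines of the star.
    cosb = np.sqrt(1.0 - sinb**2)
    d = 1.0 + l * sinb
    return (l + sinb) / d, m * cosb / d, n * cosb / d

def rotate_axes(l, m, n, ang):
    # Turn the x-axis through the angle ang (components transform passively).
    return (l * np.cos(ang) + m * np.sin(ang),
            m * np.cos(ang) - l * np.sin(ang),
            n)

b0, b1, alpha = 0.3, 0.5, 0.8      # beta_0, beta_1, and the angle between V0 and V1
s0, s1, c0, c1 = np.sin(b0), np.sin(b1), np.cos(b0), np.cos(b1)
D = 1.0 - np.cos(alpha) * s0 * s1

rng = np.random.default_rng(0)
v = rng.normal(size=3)
lam, mu, nu = v / np.linalg.norm(v)        # a random true direction

# (i) the star as observed by S, x-axis along V0
l0, m0, n0 = aberrate(lam, mu, nu, s0)

# (ii) turn the x-axis through alpha (along V1) and apply E's aberration
l1, m1, n1 = aberrate(*rotate_axes(lam, mu, nu, alpha), s1)

# E's inferred velocity relative to S (angle theta from V1), section 12
sb10_cos = (s1 - np.cos(alpha) * s0) / D
sb10_sin = np.sin(alpha) * s0 * c1 / D
theta = np.arctan2(sb10_sin, sb10_cos)
sb10 = np.hypot(sb10_cos, sb10_sin)

# (iii) turn the x-axis through theta and correct for the observed velocity
l10, m10, n10 = aberrate(*rotate_axes(l1, m1, n1, theta), -sb10)

# S's observed velocity of E (angle chi from V0)
chi = np.arctan2(np.sin(alpha) * c0 * s1 / D, (np.cos(alpha) * s1 - s0) / D)

# the claim of section 13: E's corrected sky is S's sky turned through chi
print(np.allclose((l10, m10, n10), rotate_axes(l0, m0, n0, chi)))   # True
```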
14. We have thus given a perfectly precise meaning to the application of the principle of relativity to stellar aberration. In doing so we have excluded everything which does not belong to the purely optical aspect of the problem. But stellar aberration is not a purely optical problem. For in practice we do not actually observe the apparent motion of the Sun and use the result to correct our observed positions of the stars. The motion which we do use is derived by calculation from the theory of gravitation. Hence, if we are to be consistent, we must regard Keplerian motion as an appearance, not as a reality. And here we come in contact with the general problem of the dynamics of the electron, which in the historical sense is responsible for the introduction of the principle of relativity. In this wider problem there is a profound modification of the notion of mass, in addition to the transformation which we have admitted in the time and the spatial coordinates. The result of the work of Lorentz and others is to show that these transformations suffice to explain the complete compensation of effects arising from the motion of any system through space over the whole field of electrodynamics as well as of optics. The same will be true of gravitation if gravity can be expressed in terms of electrodynamic entities. If, on the other hand, the nature of gravity were essentially not electrodynamic, it would be possible for some deviation from the accepted laws of dynamics, owing to a motion of translation through space, to be revealed by direct astronomical observations. The possibility of such deviations and of the compensations, partial or complete, which may accompany them, places some astronomical problems in a new position. Thus Poincaré has recently reconsidered[8] the problem of Laplace concerning the finite propagation of gravitation, and has concluded, contrary to Laplace’s negative result, that the propagation of gravitation with the velocity of light can be brought into consistency with the facts of observation owing to the partial compensations which are now introduced into the theory. Such questions are in the highest degree subtle and intricate as well as interesting. What it is desired to point out is that the developments of modern physical theory concern the astronomer no less than the physicist.
1. The fundamental hypothesis concerning the contraction of matter in motion is due to Fitzgerald; the first mention of his suggestion occurs in an abstract of a paper by Sir O. Lodge (Nature, 1892, June 16). This reference I owe to Professor Whittaker.
2. Mon. Not., lxix. p. 504. Professor Whittaker considers that Veltmann’s paper added nothing to Fresnel’s original paper in Annales de Chemie, ix. p. 57 (1818). In attaching importance to Veltmann’s theorem I have followed Jamin (Cours de Physique).
3. Conn. des Temps, 1878.
4. A.N., 2610.
5. Denkschr. der Wiener Akad., Bd. 77, p. 204.
6. A.N., 3246.
7. C.R., cxl. p. 1504.
|
http://crypto.stackexchange.com/questions/6256/what-are-alternatives-to-the-random-oracle-model-for-modelling-hash-functions
|
# What are alternatives to the random oracle model for modelling hash functions? [closed]
I was looking for more realistic alternatives to the ROM for describing hash functions in theoretical proofs. I came across the common reference string model (where hash functions can be modeled as having been picked from a family of functions). Are there any other?
EDIT: http://cstheory.stackexchange.com/questions/16383/what-are-alternatives-to-the-random-oracle-model-for-modelling-hash-functions -- explains exactly what I was looking for.
-
I just commented at the cstheory version of your question. $\:$ – Ricky Demer Feb 8 at 21:26
Please don't post the same question at two StackExchange sites. (Cross-posting is discouraged on StackExchange.) Pick one to keep, and flag the other for the moderators to ask them to close it: you can click the "flag" button beneath the question to do so. – D.W. Feb 8 at 21:55
## closed as off topic by mikeazo♦Feb 13 at 13:07
## 2 Answers
Another approach is to assume that the hash function is collision-resistant, and see if you can prove your protocol is secure under this weaker assumption. For some protocols, it is possible, and then you're good. For others, it's not. (More precisely, you demonstrate that any successful attack on the protocol can be turned into an algorithm that produces collisions for the hash function.)
-
There's the common random string model (where hash functions can be modeled as having been picked from a family of functions using public coins).
There are also "whatever-tractable random oracles", where adversaries also have an oracle that finds a whatever with respect to the random oracle.
(Usually 'whatever' is one of {'preimage','second-preimage','collision'}.)
There's also something I've thought of that I've never heard of actually being used, where the oracle is drawn from a distribution of oracles such that, for any algorithm that adaptively makes a feasible number of queries to the oracle and then outputs a guessed $\:\langle \hspace{.01 in}x\hspace{.01 in},\hspace{-0.02 in}y\rangle\:$ pair, the probability that
the algorithm did not query the oracle on $\hspace{.01 in}x$ $\:$ and $\:$ the oracle's output on $x$ is $y$
is negligible. (Perhaps the "Unpredictable Oracle Model"?)
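As a concrete illustration of the lazy-sampling view that underlies all of these models, here is a minimal sketch (assuming Python; the class name and parameters are illustrative, not taken from any library): each fresh query gets an independent uniform answer, repeated queries are answered consistently, and a pair $\langle x, y\rangle$ guessed without querying $x$ is correct only with negligible probability.

```python
import os

class LazyRandomOracle:
    """Lazy-sampling view of a random oracle: each new input is assigned an
    independent uniform output, which is remembered so that repeated
    queries are answered consistently."""
    def __init__(self, out_len=32):
        self.out_len = out_len
        self.table = {}

    def query(self, x: bytes) -> bytes:
        if x not in self.table:
            self.table[x] = os.urandom(self.out_len)   # fresh uniform value
        return self.table[x]

H = LazyRandomOracle()
y = H.query(b"hello")
assert H.query(b"hello") == y   # consistency on repeated queries
# An adversary that never queries x can guess H(x) only with probability 2**(-8*out_len).
```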
-
2
Okay, just a question ... why do you manually add line breaks and complicated spaces in your answer? This seems like a lot of work for little benefit (it might even look worse on some browsers). – Paŭlo Ebermann♦ Feb 6 at 20:24
Well, the last two line breaks and the spaces between them were to aid in parsing. I put in the "purely spacing" TeX and the rest of the line breaks because I didn't realize that some (at least mobile) browsers would not display the TeX and/or break lines on their own, and so thought it would be worth making the answer look nicer (on browsers that didn't do either of those). I put in, and just kept, the spacing in other TeX because I thought and still think that any browser that does display the TeX will display it in the same way, so I think that is worth the extra work. – Ricky Demer Feb 7 at 0:24
|
http://mathhelpforum.com/calculus/92596-definite-integration.html
|
# Thread:
1. ## Definite Integration
$F(x)$ is a differentiable function such that
$F'(a-x)=F'(x)$
for all $x$ satisfying $0\leq x\leq a$.
Evaluate
$\int_{0}^aF(x)dx$
and give an example of such a function.
2. Originally Posted by pankaj
$F(x)$ is a differentiable function such that
$F'(a-x)=F'(x)$
for all $x$ satisfying $0\leq x\leq a$.
Evaluate
$\int_{0}^aF(x)dx$
and give an example of such a function.
$f'(a-x)=f'(x)$ gives us $f(x)+f(a-x)=c=\text{constant},$ since the derivative of $f(x)+f(a-x)$ is $f'(x)-f'(a-x)=0.$ put $x=a/2$ to get $c=2f(a/2).$ now substitute $x \to a-x$ to get $\int_0^af(x) \ dx = \int_0^a f(a-x) \ dx = \int_0^a (c-f(x)) \ dx.$
thus: $\int_0^a f(x) \ dx = \frac{ac}{2}=af(a/2).$ an example is $f(x)=x.$ here we have $c=a$ and $\int_0^a x \ dx = \frac{a^2}{2}=a \cdot a/2=af(a/2).$
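a quick numerical sanity check of this result (a sketch, assuming Python with NumPy; the particular $F$ is just one example satisfying $F'(a-x)=F'(x)$):

```python
import numpy as np

a = 2.0
# F'(x) = (x - a/2)**2 is symmetric about a/2, so F'(a - x) = F'(x)
F = lambda x: (x - a/2)**3 / 3 + 5.0     # the constant 5.0 is arbitrary

n = 1_000_000
xm = (np.arange(n) + 0.5) * (a / n)      # midpoint rule on [0, a]
integral = F(xm).sum() * (a / n)
print(integral, a * F(a/2))              # both are (very nearly) 10.0
```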
|
http://mathhelpforum.com/advanced-algebra/189703-verification.html
|
# Thread:
1. ## Verification?
I'm trying to verify my solution to this problem:
Suppose $a$ and $x$ are both elements in group $G$. Solve the following equations simultaneously for $x$:
$x^2=a^2$ and $x^5=e$ where $e$ is the identity element of $G$
OK. I find that $x=(a^4)^{-1}$.
I'm having trouble verifying this result. CAN I verify?
*note: $x*x=x^2$ is an example of the notation employed here.
2. ## Re: Verification?
what do you mean by verify?
3. ## Re: Verification?
Well, you can do this:
$(a^{-4})^{5}=a^{-20}=(a^{2})^{-10}\dots,$ and see if you get $e.$
I would also work by attempting to compute the order of $a.$ That might help you verify what
$(a^{-4})^{2}$ is.
4. ## Re: Verification?
$x^2 (a^2)^{-1} = e = x^5$
$(a^2)^{-1} = x^{-1}x^{-1} x^5 = x^3 = x^2 x = a^2 x$
$a^{-1}a^{-1}(a^2)^{-1}=x$
Note that $xa^4=(a^{-1}a^{-1}(a^2)^{-1})(a^4)=a^{-1}a^{-1}(a^2)^{-1}a^2aa=e$
So since inverses are unique $x=(a^4)^{-1}$
5. ## Re: Verification?
the reason for my question, in my earlier post, is that simultaneous equations like this aren't like what you may be used to, where you "plug in the value for x" and see if it works.
we aren't told what a is (or even what the group operation is), so there's not some kind of computational verification you can do.
if you have derived a value of x in terms of a correctly (using the group axioms properly), that's all the verification you need (as in the post above, and i think there was an earlier thread on this same topic).
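still, a single concrete instance can act as a sanity check (though not a verification of the general result). a minimal sketch, assuming Python and taking the multiplicative group mod 11, where $a=8$ and $x=3$ happen to satisfy both equations:

```python
# one concrete instance: the multiplicative group mod 11
p = 11
a, x = 8, 3
assert pow(x, 2, p) == pow(a, 2, p)      # x^2 = a^2
assert pow(x, 5, p) == 1                 # x^5 = e
assert x == pow(pow(a, 4, p), -1, p)     # x = (a^4)^(-1); pow(., -1, p) is the inverse mod p
print("consistent with x = (a^4)^(-1)")
```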
|
http://nrich.maths.org/564&part=
|
### 14 Divisors
What is the smallest number with exactly 14 divisors?
### Repeaters
Choose any 3 digits and make a 6 digit number by repeating the 3 digits in the same order (e.g. 594594). Explain why whatever digits you choose the number will always be divisible by 7, 11 and 13.
### Oh! Hidden Inside?
Find the number which has 8 divisors, such that the product of the divisors is 331776.
# Legs Eleven
##### Stage: 3 Challenge Level:
This problem is in two parts. The first part provides some building blocks which will help you to solve the final challenge. These can be attempted in any order. Of course, you are welcome to go straight to the Final Challenge!
Click a question from below to get started.
Question A
Choose any two numbers from the $7$ times table. Add them together. Repeat with some other examples. Notice anything interesting?
Now do the same with a different times table. What do you notice this time? Convince yourself it always happens.
Question B
Choose two digits and arrange them to make two double-digit numbers.
For example, if you choose $5$ and $2$, you can make $52$ and $25$.
Repeat with some other examples.
Notice anything interesting? Convince yourself it always happens.
Question C
Look at this sequence of numbers: $11, 101, 1001, 10001, 100001, ...$
Divide numbers in this sequence by $11$, WITHOUT using a calculator.
Notice anything interesting? Convince yourself it always happens.
FINAL CHALLENGE
Take any four-digit number, move the first digit to the 'back of the queue' and move the rest along. For example $5238$ would become $2385$. Add this new number to your original number.
Is the answer always a multiple of $11$? Can you convince yourself?
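An exhaustive check of the Final Challenge is easy to run by computer (a sketch, assuming Python; it is not a substitute for convincing yourself why it works):

```python
def rotate(n: int) -> int:
    """Move the first digit of n to the back of the queue."""
    s = str(n)
    return int(s[1:] + s[0])

assert all((n + rotate(n)) % 11 == 0 for n in range(1000, 10000))
print("the sum is always a multiple of 11")
```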
|
http://mathoverflow.net/questions/112150/games-that-never-begin
|
## Games that never begin
Games that never end play a major role in descriptive set theory. See for example Kechris' GTM.
Question: Does there exist a literature concerning games that never begin?
I have in mind two players, Alice and Bob, making alternate selections from ${\Bbb N}$, their moves indexed by increasing non-positive integers, the game terminating when Bob plays his move 0.
As for payoff sets and strategies, define these as for games that never end, mutatis mutandis.
One major difference: a pair of strategies, one for Alice, one for Bob no longer determines a unique run of the game, but rather now a set of runs, possibly empty. Even so one may still say that Alice's strategy beats Bob's if every compatible run of one strategy against the other belongs to the payoff set.
Another major difference involves the set-theoretic size of strategies. Now Alice and Bob play every move in the light of infinite history. So size considerations mean that certain familiar arguments, for example non-determined games from the axiom of choice, don't work in any obvious way?
Question: What payoff sets give determined games that never begin?
Edits: By the cold light of morning, I see that I abused the words "mutatis mutandis." A la Kechris, Alice's strategy beats Bob's if the unique run of the game falls in the payoff set. What I had in mind here was that Alice's strategy beats Bob's if the run set (consistent with both strategies) is a subset of the payoff set. Joel David Hamkins' clever answer remains trenchant, only now with the import that Alice always wins by playing a strategy with empty run sets regardless of Bob's strategy.
Joel's Alice needs an infinite memory, but if her strategy at each move consists of increasing her previous move by 1, that also necessarily produces an empty run set regardless of Bob's choice.
Possible fix 1 Alice must play an engaged strategy, a strategy that produces a nonempty run set against at least one strategy of Bob's.
Possible fix 2 The residual game at any turn only has a finite future, so one player has a winning strategy from that point on. Call a strategy rational if it requires the player to play a winning move in the residual game whenever one exists. Call a strategy strongly rational if it requires the player to play the least possible winning move whenever one exists. To avoid easy disengaged strategies that don't even reference the payoff set, insist on rational, or even strongly rational strategies.
Possible fix 3 Combine the previous fixes.
-
Regarding your fixes, consider the game with payoff set for Alice consisting of all sequences with only finitely many nonzero numbers. Bob would seem to have a winning strategy with the "always play $3$" strategy, among others, but this is actually not correct, since we may let Alice play the strategy which plays $0$, as long as the history was almost all $0$, and otherwise adds $1$ to her previous move. This strategy is strongly rational and also engaged, since it has a play when Bob plays almost all $0$s. But the only plays that accord with it are almost all $0$ and hence wins for Alice. – Joel David Hamkins Nov 12 at 18:40
Come to think of it, for this game Bob's always-play-$3$ strategy is also both rational and engaged, since it does produce a play for many strategies of Alice, and all of these are wins for Bob. So once again, both Alice and Bob have rational, engaged winning strategies. But these two strategies have no common play. – Joel David Hamkins Nov 12 at 19:03
When two strategies have no common run, Alice wins, since the empty set is a subset of the payoff. So my clarified definition rules out having winning strategies for both, right? – David Feldman Nov 12 at 20:38
Yes, that's right. So Alice alone wins the game, even though it seems that Bob's always-play-$3$ strategy is a good one. – Joel David Hamkins Nov 12 at 21:11
So I still can't yet rule out that all such games are determined, right? – David Feldman Nov 12 at 21:36
## 3 Answers
This is a very nice question.
Observation 1. Some strategies have no play that accords with them. Consequently, such a strategy for Alice is winning in any game, since every play in conformance with it is (vacuously) in her payoff set. (Similar strategies exist for Bob.)
Proof: Here, I consider a strategy to be a function mapping a game position to the number to be played. A game position is an almost-infinite sequence, with only the final finitely many digits remaining unspecified. Consider the strategy for Alice: faced with a game position of prior play, she inspects her own previous moves; if infinitely many of them were $0$, she plays $1$, and otherwise she plays $0$. There can be no play that accords with this strategy, since if the play shows Alice to have played infinitely many $0$s, then she should have been playing $1$ at any one of them; and conversely, if she had played only finitely many $0$s, then she should have started playing $0$ much earlier than she did.
Another instance is the always-add-one strategy you mentioned in response to this, and I find that to be quite elegant. If Alice plays so as to always add one to her previous move, then clearly she cannot have done this forever. This strategy makes sense in games with natural number plays, but actually one can use the same idea for binary games, where the players play $0$ or $1$, by having Alice play a (strictly longer) sequence of $n$ consecutive $1$s on her next $n$ moves (unless playing time runs out).
An earlier answer of mine (see edit history) contains another argument, using diagonalization and the axiom of choice. QED
Thus, it seems that Alice wins every game, according to the definition you have provided. But I prefer to say that both players have winning strategies, because they both have strategies such that any play that conforms with them is in their respective payoff sets.
Observation 2. If one modifies the definition of strategy so that one's moves depend only on the opponent's moves in a position, then not every pair of strategies for Alice and Bob have a conforming play.
Proof: Consider the strategy for Bob that simply copies Alice's previous move, and the strategy for Alice that plays $1$ if and only if all prior moves of Bob were $0$. There can be no conforming play for this pair of strategies, since if Bob was previously playing all zeros up to a point, then Alice should have played $1$ much earlier, and if not, then Alice must have played $1$ without cause. QED
Observation 3. There is a game for which both players have rational engaged winning strategies.
Proof: Consider the game where Alice wins every play having only finitely many $1$s. The always-play-$3$ strategy is a rational, engaged winning strategy for Bob, since it has conforming plays, every conforming play is a win for Bob, and from any position, it makes a move that is winning in the finitely remaining game. Meanwhile, Alice also has a winning strategy: to play $0$, if almost all previous moves were $0$, and otherwise add one to her previous move. This strategy is engaged, since Bob might have played $0$s, and it is strongly rational for Alice, since she is playing $0$s whenever she is in a winning position; and it is winning for Alice, since the only conforming plays are almost all $0$ and hence wins for Alice. QED
Theorem. (AC) There is a game for which neither player has winning rational engaged strategy.
Proof: This theorem will work regardless of whether one allows the strategies to depend on the full position or only on the opponent's prior moves. Let's say that two sequences are almost equal if they differ on only finitely many values. Using the axiom of choice, we may select a representative from each almost-equality class. Let $A$ be the game where Alice wins a play, if the play deviates from the representative of its class for the first time on her turn, and Bob wins if the play deviates for the first time on his turn, or not at all. The thing to notice is that if $s$ is a play of the game, then both Alice and Bob had incentive to have played differently earlier, for by making a much earlier different move, they would have caused a much earlier deviation in the play, causing them to have won earlier. Indeed, it was irrational of them not to have made the earlier move, since their opponent might have won on the next move. Thus, there can be no rational strategy for either player resulting in such a play. So neither player has a rational engaged strategy. QED
Define that a set is a tail set if it is invariant under finite modification. These are precisely the sets that are saturated with respect to the almost-equality relation.
Observation 4. In every game whose payoff set is a tail set, every strategy is rational.
Proof: The point is that when you are playing a game whose payoff set is a tail set, then the game is already won or lost when any particular move is made, since the tail equivalence class is already determined. So in such a game, no particular individual move affects the outcome of the game. QED
Finally, let me mention that your game concept reminds me of the archeological model of infinite time computation, where the infinite computation grows out of an infinite past rather than stretching into an infinite future. The idea is that, having opened up a chamber under the pyramid, you find a Turing machine, still running, with an infinite tape all filled out and with all indications that it has been running from time stretching infinitely into the past. What kind of problems are decidable in principle by such machines? For an interesting theory, assume we may find pyramids corresponding to any given program.
-
I've updated my answer with the non-determinacy theorem, and also to incorporate some of the examples from the comments. – Joel David Hamkins Nov 13 at 13:46
As a supplement to Joel's answer, you may want to look at this nice paper of Bollobas, Leader, and Walters concerning continuous games. As a starting point they discuss the classical Lion and Man game introduced by Rado. In this game there is a lion chasing a man inside the unit disk. Both have identical maximum speeds. The lion wins if he catches the man, and the man wins if he is never caught by the lion. If the lion chooses to always run directly toward the man, then he will get arbitrarily close to the man, but never catch him. On the other hand, if the lion instead moves at top speed so that he is always on the radial vector from the centre to the man, it was 'clear' that this was a winning strategy. Proof: without loss of generality, the man stays on the boundary of the disk. However, in 1952, Besicovitch exhibited an ingenious winning strategy for man! Thus, staying on the boundary is with loss of generality for man. Nonetheless, one can ask the perplexing question if lion also has a winning strategy? In this particular game, it turns out that the answer is no. But by changing the metric space, Bollobas, Leader, and Walters prove that there are games in a similar vein where both lion and man have winning strategies!
-
A slightly more tangential answer, but one which I hope is still useful: there is a well-known connection between infinite games and infintary logic. In the usual context of games with no ending, determinacy principles can be viewed as versions of De Morgan's Law for certain infinitary sentences: for $\Gamma$ a pointclass, $\Gamma$-Det is the statement that each of the disjunctions $$\forall x_0\exists x_1\forall x_2\exists x_3 . . . . ((x_0, x_1, x_2, x_3, . . . )\not\in X) \vee \exists x_0\forall x_1\exists x_2\forall x_3 . . . ((x_0, x_1, x_2, x_3, . . . ) \in X)$$ for $X\in\Gamma$ is true. Similarly, games with no beginnings should be connected to the semantics of infinitary sentences with ill-founded strings of quantifiers. In the paper "On languages with non-homogeneous strings of quantifiers" (http://www.springerlink.com/content/b6r2738460434847/), Saharon Shelah did some work on the behavior of such sentences (his semantics for these sentences is in terms of Skolem functions; it appears to avoid Joel's observation by requiring that a strategy for one player look only at moves made by the other player, but I'm not certain of this - please correct me if I'm wrong!). The main result is that "every linear string of quantifiers can be replaced by a well-ordered sequence of quantifiers," which goes some way towards reducing the study of beginningless games to the study of endless games.
However, it should be noted that non-linear "strings" (posets?) of quantifiers have also been studied (cf. "Dependence Logic"), and I have no idea what happens if we look at branching, ill-founded collections of quantifiers, or if this has been looked at in the past (although I vaguely recall a paper by either Hintikka or Vaananen on the subject, but I can't find it, so maybe it doesn't exist). I also don't know a good game-theoretic interpretation of such collections of quantifiers, but I imagine one would not be too hard to come by.
-
That is a good idea of having the strategies only depend on the opponent's play, but it doesn't avoid the kind of issues in my answer. For example, it is not true that any two such strategies have a common play: consider the strategy for Alice that plays $1$, if Bob had played infinitely many $0$s, and otherwise $0$; versus the strategy for Bob that just copies Alice's last move. There is no common play for these two strategies, since if the tail has infinitely many $0$, then it should have been all $1$s, and it not, then it should have been all $0$s. – Joel David Hamkins Nov 12 at 21:20
Ah, I think I see - in Shelah's semantics, it's not quite a game, in the normal sense, although it still feels slightly like a game. Alice (existential) plays a collection of believed-to-be Skolem functions which depend on only the relevant variables; then Bob (universal) plays a single variable assignment on the universally-quantified variables; then plugging Bob's variable assignment into Alice's "Skolem" functions gives a variable assignment (=run) for the whole thing, at which point the validity of the matrix can be checked. Does this sound correct? I'm not sure. – Noah S Nov 12 at 21:55
To the best of my knowledge, the literature on ill-founded quantifier prefixes (and, at least implicitly, the literature on beginningless discrete-time games) begins with Henkin's 1959 "Some remarks on infinitely long formulas". I don't recall if Henkin mentioned games in that paper, but the connection between games and quantifier logic was certainly well understood by that time. Henkin concluded with a remark that can at least be interpreted as saying, sometimes both players have winning strategies. – Butch Malahide May 12 at 6:31
|
http://mathhelpforum.com/calculus/194216-absolute-maximum-b.html
|
# Thread:
1. ## absolute maximum in [a,b)
Suppose that the function $f$ is continuous on $[a,b)$ and the limit $L=\lim_{x\rightarrow{b^-}}{f(x)}$ exists.
(i) Prove that if there is an $x_0\in{[a,b)}$ such that $f(x_0)>L$, then $f$ has an absolute maximum in $[a,b)$.
How to approach this question?
(ii) If there is an $x_1\in{[a,b)}$ such that $f(x_1)=L$, does $f$ necessarily have an absolute maximum in $[a,b)$?
2. ## Re: absolute maximum in [a,b)
I'm assuming L is a finite real number, if that's the case then
(i) By the definition of limit there is an $s>0$ such that if $0<|x-b|<s$ then $f(x)-L\leq |f(x)-L|< f(x_0)-L$ so that $f(x)<f(x_0)$ on $(b-s,b)$. From here it should be obvious.
(ii) If we define $f(b)=L$ then $f$ is continuous on $[a,b]$; by the extreme value theorem we have an absolute maximum $M$, and $M\geq L$. Consider the cases $M=L$, $M>L$.
3. ## Re: absolute maximum in [a,b)
Originally Posted by Jose27
I'm assuming L is a finite real number, if that's the case then
(ii) If we define $f(b)=L$ then $f$ is continuous on $[a,b]$; by the extreme value theorem we have an absolute maximum $M$, and $M\geq L$. Consider the cases $M=L$, $M>L$.
This method can also be adapted to do part (i) thus saving work by using the same idea for both.
CB
|
http://en.wikipedia.org/wiki/Matrix_equation
|
# Matrix (mathematics)
"Matrix theory" redirects here. For the physics topic, see Matrix string theory.
Specific elements of a matrix are often denoted by a variable with two subscripts. For instance, a2,1 represents the element at the second row and first column of a matrix A.
In mathematics, a matrix (plural matrices) is a rectangular array of numbers, symbols, or expressions, arranged in rows and columns.[1][2] The individual items in a matrix are called its elements or entries. An example of a matrix with 2 rows and 3 columns is
$\begin{bmatrix}1 & 9 & -13 \\20 & 5 & -6 \end{bmatrix}.$
Matrices of the same size can be added or subtracted element by element. The rule for matrix multiplication is more complicated, and two matrices can be multiplied only when the number of columns in the first equals the number of rows in the second. A major application of matrices is to represent linear transformations, that is, generalizations of linear functions such as f(x) = 4x. For example, the rotation of vectors in three dimensional space is a linear transformation. If R is a rotation matrix and v is a column vector (a matrix with only one column) describing the position of a point in space, the product Rv is a column vector describing the position of that point after a rotation. The product of two matrices is a matrix that represents the composition of two linear transformations. Another application of matrices is in the solution of a system of linear equations. If the matrix is square, it is possible to deduce some of its properties by computing its determinant. For example, a square matrix has an inverse if and only if its determinant is not zero. Eigenvalues and eigenvectors provide insight into the geometry of linear transformations.
Applications of matrices are found in most scientific fields. In every branch of physics, including classical mechanics, optics, electromagnetism, quantum mechanics, and quantum electrodynamics, they are used to study physical phenomena, such as the motion of rigid bodies. In computer graphics, they are used to project a 3-dimensional image onto a 2-dimensional screen. In probability theory and statistics, stochastic matrices are used to describe sets of probabilities; for instance, they are used within the PageRank algorithm that ranks the pages in a Google search.[3] Matrix calculus generalizes classical analytical notions such as derivatives and exponentials to higher dimensions.
A major branch of numerical analysis is devoted to the development of efficient algorithms for matrix computations, a subject that is centuries old and is today an expanding area of research. Matrix decomposition methods simplify computations, both theoretically and practically. Algorithms that are tailored to the structure of particular matrix structures, e.g. sparse matrices and near-diagonal matrices, expedite computations in finite element method and other computations. Infinite matrices occur in planetary theory and in atomic theory. A simple example is the matrix representing the derivative operator, which acts on the Taylor series of a function.
## Definition and notation
A matrix is a rectangular array of numbers or other mathematical objects, for which operations such as addition and multiplication are defined.[4] Most commonly, a matrix over a field F is a rectangular array of scalars from F.[5][6] Most of this article focuses on real and complex matrices, i.e., matrices whose elements are real numbers or complex numbers, respectively. More general types of entries are discussed below. For instance, this is a real matrix:
$\mathbf{A} = \begin{bmatrix} -1.3 & 0.6 \\ 20.4 & 5.5 \\ 9.7 & -6.2 \end{bmatrix}.$
The numbers, symbols or expressions in the matrix are called its entries or its elements. The horizontal and vertical lines in a matrix are called rows and columns, respectively.
### Size
The size of a matrix is defined by the number of rows and columns it contains. A matrix with m rows and n columns is called an m × n matrix or m-by-n matrix, while m and n are called its dimensions. For example, this is a 2 × 3 matrix of integer numbers:
$\mathbf{A} = \begin{bmatrix} 9 & 13 & 5 \\ 1 & 11 & 7 \end{bmatrix}.$
Matrices which have a single row are called row vectors, and those which have a single column are called column vectors. A matrix which has the same number of rows and columns is called a square matrix:
| Name | Size | Example | Description |
| --- | --- | --- | --- |
| Row vector | 1 × n | $\begin{bmatrix}3 & 7 & 2 \end{bmatrix}$ | A matrix with one row, sometimes used to represent a vector |
| Column vector | n × 1 | $\begin{bmatrix}4 \\ 1 \\ 8 \end{bmatrix}$ | A matrix with one column, sometimes used to represent a vector |
| Square matrix | n × n | $\begin{bmatrix} 9 & 13 & 5 \\ 1 & 11 & 7 \\ 2 & 6 & 3 \end{bmatrix}$ | A matrix with the same number of rows and columns, sometimes used to represent a linear transformation from a vector space to itself, such as reflection, rotation, or shearing. |
A matrix with an infinite number of rows or columns (or both) is called an infinite matrix. In some contexts, such as computer algebra programs, it is useful to consider a matrix with no rows or columns (0 × 0), which is called an empty matrix.
### Notation
Matrices are commonly written in box brackets:
$\mathbf{A} = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{bmatrix}.$
An alternative notation uses large parentheses instead of box brackets:
$\mathbf{A} = \left( \begin{array}{rrrr} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{array} \right).$
The specifics of symbolic matrix notation vary widely, with some prevailing trends. Matrices are usually symbolized using upper-case letters (such as A in the examples above), while the corresponding lower-case letters, with two subscript indices (e.g., a11, or a1,1), represent the entries. In addition to using upper-case letters to symbolize matrices, many authors use a special typographical style, commonly boldface upright (non-italic), to further distinguish matrices from other mathematical objects. An alternative notation involves the use of a double-underline with the variable name, with or without boldface style (e.g., $\underline{\underline{A}}$).
The entry in the i-th row and j-th column of a matrix A is sometimes referred to as the i,j, (i,j), or (i,j)th entry of the matrix, and most commonly denoted as ai,j, or aij. Alternative notations for that entry are A[i,j] or Ai,j. For example, the (1,3) entry of the following matrix A is 5 (also denoted a13, a1,3, A[1,3] or A1,3):
$\mathbf{A}=\begin{bmatrix} 4 & -7 & \color{red}{5} & 0 \\ -2 & 0 & 11 & 8 \\ 19 & 1 & -3 & 12 \end{bmatrix}$
Sometimes a matrix is referred to by giving a formula for its (i,j)th entry, often with double parenthesis around the formula for the entry, for example, if the (i,j)th entry of A were given by aij, A would be denoted ((aij)).
The set of all m-by-n matrices is denoted $\mathbb{M}$(m, n).
A common shorthand is
A = [ai,j]i = 1,...,m; j = 1,...,n or more briefly A = [ai,j]m×n
to define an m × n matrix A. Usually the entries ai,j are defined separately for all integers 1 ≤ i ≤ m and 1 ≤ j ≤ n. They can however sometimes be given by one formula; for example the 3-by-4 matrix
$\mathbf A = \begin{bmatrix} 0 & -1 & -2 & -3\\ 1 & 0 & -1 & -2\\ 2 & 1 & 0 & -1 \end{bmatrix}$
can alternatively be specified by A = [i − j]i = 1,2,3; j = 1,...,4, or simply A = ((i-j)), where the size of the matrix is understood.
Some programming languages start the numbering of rows and columns at zero, in which case the entries of an m-by-n matrix are indexed by 0 ≤ i ≤ m − 1 and 0 ≤ j ≤ n − 1.[7] This article follows the more common convention in mathematical writing where enumeration starts from 1.
## Basic operations
There are a number of basic operations that can be applied to modify matrices, namely matrix addition, scalar multiplication, transposition, matrix multiplication, row operations, and taking submatrices.[9]
### Addition, scalar multiplication and transposition
Main articles: Matrix addition, Scalar multiplication, and Transpose
Addition: The sum A+B of two m-by-n matrices A and B is calculated entrywise:
(A + B)i,j = Ai,j + Bi,j, where 1 ≤ i ≤ m and 1 ≤ j ≤ n.
$\begin{bmatrix} 1 & 3 & 1 \\ 1 & 0 & 0 \end{bmatrix} + \begin{bmatrix} 0 & 0 & 5 \\ 7 & 5 & 0 \end{bmatrix} = \begin{bmatrix} 1+0 & 3+0 & 1+5 \\ 1+7 & 0+5 & 0+0 \end{bmatrix} = \begin{bmatrix} 1 & 3 & 6 \\ 8 & 5 & 0 \end{bmatrix}$
Scalar multiplication: The scalar multiplication cA of a matrix A and a number c (also called a scalar in the parlance of abstract algebra) is given by multiplying every entry of A by c:
(cA)i,j = c · Ai,j.
$2 \cdot \begin{bmatrix} 1 & 8 & -3 \\ 4 & -2 & 5 \end{bmatrix} = \begin{bmatrix} 2 \cdot 1 & 2\cdot 8 & 2\cdot -3 \\ 2\cdot 4 & 2\cdot -2 & 2\cdot 5 \end{bmatrix} = \begin{bmatrix} 2 & 16 & -6 \\ 8 & -4 & 10 \end{bmatrix}$
Transpose: The transpose of an m-by-n matrix A is the n-by-m matrix AT (also denoted Atr or tA) formed by turning rows into columns and vice versa:
(AT)i,j = Aj,i.
$\begin{bmatrix} 1 & 2 & 3 \\ 0 & -6 & 7 \end{bmatrix}^\mathrm{T} = \begin{bmatrix} 1 & 0 \\ 2 & -6 \\ 3 & 7 \end{bmatrix}$
Familiar properties of numbers extend to these operations of matrices: for example, addition is commutative, i.e., the matrix sum does not depend on the order of the summands: A + B = B + A.[10] The transpose is compatible with addition and scalar multiplication, as expressed by (cA)T = c(AT) and (A + B)T = AT + BT. Finally, (AT)T = A.
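These operations can be illustrated with a short sketch (assuming Python with NumPy, which is not part of the article itself); the arrays reproduce the addition example above:

```python
import numpy as np

A = np.array([[1, 3, 1],
              [1, 0, 0]])
B = np.array([[0, 0, 5],
              [7, 5, 0]])

print(A + B)        # entrywise sum: [[1, 3, 6], [8, 5, 0]]
print(2 * A)        # scalar multiplication
print(A.T)          # transpose: a 3-by-2 matrix
print(np.array_equal((A + B).T, A.T + B.T))   # (A + B)^T = A^T + B^T -> True
```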
### Matrix multiplication
Main article: Matrix multiplication
Schematic depiction of the matrix product AB of two matrices A and B.
Multiplication of two matrices is defined only if the number of columns of the left matrix is the same as the number of rows of the right matrix. If A is an m-by-n matrix and B is an n-by-p matrix, then their matrix product AB is the m-by-p matrix whose entries are given by dot product of the corresponding row of A and the corresponding column of B:
$[\mathbf{AB}]_{i,j} = A_{i,1}B_{1,j} + A_{i,2}B_{2,j} + \cdots + A_{i,n}B_{n,j} = \sum_{r=1}^n A_{i,r}B_{r,j}$,
where 1 ≤ i ≤ m and 1 ≤ j ≤ p.[11] For example, the underlined entry 2340 in the product is calculated as (2 × 1000) + (3 × 100) + (4 × 10) = 2340:
$\begin{align} \begin{bmatrix} \underline{2} & \underline 3 & \underline 4 \\ 1 & 0 & 0 \\ \end{bmatrix} \begin{bmatrix} 0 & \underline{1000} \\ 1 & \underline{100} \\ 0 & \underline{10} \\ \end{bmatrix} &= \begin{bmatrix} 3 & \underline{2340} \\ 0 & 1000 \\ \end{bmatrix}. \end{align}$
Matrix multiplication satisfies the rules (AB)C = A(BC) (associativity), and (A+B)C = AC+BC as well as C(A+B) = CA+CB (left and right distributivity), whenever the size of the matrices is such that the various products are defined.[12] The product AB may be defined without BA being defined, namely if A and B are m-by-n and n-by-k matrices, respectively, and m ≠ k. Even if both products are defined, they need not be equal, i.e., generally one has
AB ≠ BA,
i.e., matrix multiplication is not commutative, in marked contrast to (rational, real, or complex) numbers whose product is independent of the order of the factors. An example of two matrices not commuting with each other is:
$\begin{bmatrix} 1 & 2\\ 3 & 4\\ \end{bmatrix} \begin{bmatrix} 0 & 1\\ 0 & 0\\ \end{bmatrix}= \begin{bmatrix} 0 & 1\\ 0 & 3\\ \end{bmatrix},$
whereas
$\begin{bmatrix} 0 & 1\\ 0 & 0\\ \end{bmatrix} \begin{bmatrix} 1 & 2\\ 3 & 4\\ \end{bmatrix}= \begin{bmatrix} 3 & 4\\ 0 & 0\\ \end{bmatrix} .$
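The non-commutativity of this example is easy to reproduce (a sketch assuming Python with NumPy):

```python
import numpy as np

A = np.array([[1, 2],
              [3, 4]])
B = np.array([[0, 1],
              [0, 0]])

print(A @ B)    # [[0 1], [0 3]]
print(B @ A)    # [[3 4], [0 0]] -- a different result: AB != BA
```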
### Other kinds of products
Besides the ordinary matrix multiplication just described, there exist other less frequently used operations on matrices that can be considered forms of multiplication, such as the Hadamard product and the Kronecker product.[13] They arise in solving matrix equations such as the Sylvester equation.
### Row operations
Main article: Row operations
There are three types of row operations:
1. row addition, that is adding a row to another.
2. row multiplication, that is multiplying all entries of a row by a constant;
3. row switching, that is interchanging two rows of a matrix;
These operations are used in a number of ways, including solving linear equations and finding matrix inverses.
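A sketch of the three row operations (assuming Python with NumPy; the matrix is arbitrary):

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])

A[1] = A[1] - 4 * A[0]     # row addition: add (-4) times row 1 to row 2
A[0] = 2 * A[0]            # row multiplication: scale row 1 by 2
A[[0, 1]] = A[[1, 0]]      # row switching: interchange the two rows
print(A)                   # [[ 0. -3. -6.], [ 2.  4.  6.]]
```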
### Submatrix
A submatrix of a matrix is obtained by deleting any collection of rows and/or columns. For example, for the following 3-by-4 matrix, we can construct a 2-by-3 submatrix by removing row 3 and column 2:
$\mathbf{A}=\begin{bmatrix} \color{red}{1} & 2 & \color{red}{3} & {\color{red} 4} \\ \color{red}{5} & 6 & {\color{red}7} & {\color{red}8} \\ 9 & 10 & 11 & 12 \end{bmatrix} \rightarrow \begin{bmatrix} 1 & 3 & 4 \\ 5 & 7 & 8 \end{bmatrix}$
The minors and cofactors of a matrix are found by computing the determinant of certain submatrices.
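The same submatrix can be extracted programmatically (a sketch assuming Python with NumPy; `np.delete` removes the given row or column):

```python
import numpy as np

A = np.arange(1, 13).reshape(3, 4)          # the 3-by-4 matrix with entries 1..12
sub = np.delete(np.delete(A, 2, axis=0),    # delete row 3 (index 2) ...
                1, axis=1)                  # ... then column 2 (index 1)
print(sub)                                  # [[1 3 4], [5 7 8]]
```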
## Linear equations
Main articles: Linear equation and System of linear equations
Matrices can be used to compactly write and work with multiple linear equations, i.e., systems of linear equations. For example, if A is an m-by-n matrix, x designates a column vector (i.e., n×1-matrix) of n variables x1, x2, ..., xn, and b is an m×1-column vector, then the matrix equation
Ax = b
is equivalent to the system of linear equations
A1,1x1 + A1,2x2 + ... + A1,nxn = b1
...
Am,1x1 + Am,2x2 + ... + Am,nxn = bm .[14]
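For a small square system, such an equation can be solved numerically (a sketch assuming Python with NumPy; the coefficients are arbitrary):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
b = np.array([3.0, 5.0])

x = np.linalg.solve(A, b)       # solves A x = b
print(x)                        # [0.8 1.4]
print(np.allclose(A @ x, b))    # True
```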
## Linear transformations
Main articles: Linear transformation and Transformation matrix
The vectors represented by a 2-by-2 matrix correspond to the sides of a unit square transformed into a parallelogram.
Matrices and matrix multiplication reveal their essential features when related to linear transformations, also known as linear maps. A real m-by-n matrix A gives rise to a linear transformation $\mathbf{R}^n \to \mathbf{R}^m$ mapping each vector x in $\mathbf{R}^n$ to the (matrix) product Ax, which is a vector in $\mathbf{R}^m$. Conversely, each linear transformation $f\colon \mathbf{R}^n \to \mathbf{R}^m$ arises from a unique m-by-n matrix A: explicitly, the (i, j)-entry of A is the ith coordinate of $f(e_j)$, where $e_j = (0,\ldots,0,1,0,\ldots,0)$ is the unit vector with 1 in the jth position and 0 elsewhere. The matrix A is said to represent the linear map f, and A is called the transformation matrix of f.
For example, the 2×2 matrix
$\mathbf A = \begin{bmatrix} a & c\\b & d \end{bmatrix}\,$
can be viewed as the transform of the unit square into a parallelogram with vertices at (0, 0), (a, b), (a + c, b + d), and (c, d). The parallelogram pictured at the right is obtained by multiplying A with each of the column vectors $\begin{bmatrix} 0 \\ 0 \end{bmatrix}, \begin{bmatrix} 1 \\ 0 \end{bmatrix}, \begin{bmatrix} 1 \\ 1 \end{bmatrix}$ and $\begin{bmatrix}0 \\ 1\end{bmatrix}$ in turn. These vectors define the vertices of the unit square.
The following table shows a number of 2-by-2 matrices with the associated linear maps of R2. The blue original is mapped to the green grid and shapes. The origin (0,0) is marked with a black point.
| | | | | |
|-----------------------------------------------------------------------------|-----------------------------------------------------------------------------|-----------------------------------------------------------------------------|-----------------------------------------------------------------------------|-----------------------------------------------------------------------------|
| Horizontal shear with m=1.25 | Horizontal flip | Squeeze mapping with r=3/2 | Scaling by a factor of 3/2 | Rotation by π/6 = 30° |
| $\begin{bmatrix} 1 & 1.25 \\ 0 & 1 \end{bmatrix}$ | $\begin{bmatrix} -1 & 0 \\ 0 & 1 \end{bmatrix}$ | $\begin{bmatrix} 3/2 & 0 \\ 0 & 2/3 \end{bmatrix}$ | $\begin{bmatrix} 3/2 & 0 \\ 0 & 3/2 \end{bmatrix}$ | $\begin{bmatrix}\cos(\pi/6) & -\sin(\pi/6)\\ \sin(\pi/6) & \cos(\pi/6)\end{bmatrix}$ |
| | | | | |
Under the 1-to-1 correspondence between matrices and linear maps, matrix multiplication corresponds to composition of maps:[15] if a k-by-m matrix B represents another linear map $g\colon \mathbf{R}^m \to \mathbf{R}^k$, then the composition g ∘ f is represented by BA since
(g ∘ f)(x) = g(f(x)) = g(Ax) = B(Ax) = (BA)x.
The last equality follows from the above-mentioned associativity of matrix multiplication.
The rank of a matrix A is the maximum number of linearly independent row vectors of the matrix, which is the same as the maximum number of linearly independent column vectors.[16] Equivalently it is the dimension of the image of the linear map represented by A.[17] The rank-nullity theorem states that the dimension of the kernel of a matrix plus the rank equals the number of columns of the matrix.[18]
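As an added illustration, the rank and the rank–nullity relation can be checked numerically; the matrix below is the 3-by-4 example from the submatrix section.

```python
import numpy as np

A = np.array([[1,  2,  3,  4],
              [5,  6,  7,  8],
              [9, 10, 11, 12]])

rank = np.linalg.matrix_rank(A)
n_cols = A.shape[1]
print(rank)            # 2 (the third row is 2*row2 - row1, so only two rows are independent)
print(n_cols - rank)   # 4 - 2 = 2, the dimension of the kernel (rank-nullity theorem)
```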
## Square matrices
Main article: Square matrix
A square matrix is a matrix with the same number of rows and columns. An n-by-n matrix is known as a square matrix of order n. Any two square matrices of the same order can be added and multiplied. The entries aii form the main diagonal of a square matrix. They lie on the imaginary line which runs from the top left corner to the bottom right corner of the matrix.
### Main types
| Name | Example with n = 3 |
|------|--------------------|
| Diagonal matrix | $\begin{bmatrix} a_{11} & 0 & 0 \\ 0 & a_{22} & 0 \\ 0 & 0 & a_{33} \\ \end{bmatrix}$ |
| Lower triangular matrix | $\begin{bmatrix} a_{11} & 0 & 0 \\ a_{21} & a_{22} & 0 \\ a_{31} & a_{32} & a_{33} \\ \end{bmatrix}$ |
| Upper triangular matrix | $\begin{bmatrix} a_{11} & a_{12} & a_{13} \\ 0 & a_{22} & a_{23} \\ 0 & 0 & a_{33} \\ \end{bmatrix}$ |
#### Diagonal or triangular matrix
If all entries outside the main diagonal are zero, A is called a diagonal matrix. If all entries above (respectively below) the main diagonal are zero, A is called a lower (respectively upper) triangular matrix.
#### Identity matrix
The identity matrix In of size n is the n-by-n matrix in which all the elements on the main diagonal are equal to 1 and all other elements are equal to 0, e.g.
$I_1 = \begin{bmatrix} 1 \end{bmatrix} ,\ I_2 = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} ,\ \cdots ,\ I_n = \begin{bmatrix} 1 & 0 & \cdots & 0 \\ 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & 1 \end{bmatrix}$
It is a square matrix of order n, and also a special kind of diagonal matrix. It is called the identity matrix because multiplication with it leaves a matrix unchanged:
$AI_n = I_mA = A$ for any m-by-n matrix A.
#### Symmetric or skew-symmetric matrix
A square matrix A that is equal to its transpose, i.e., $A = A^\mathrm{T}$, is a symmetric matrix. If instead A is equal to the negative of its transpose, i.e., $A = -A^\mathrm{T}$, then A is a skew-symmetric matrix. In complex matrices, symmetry is often replaced by the concept of Hermitian matrices, which satisfy $A^* = A$, where the star or asterisk denotes the conjugate transpose of the matrix, i.e., the transpose of the complex conjugate of A.
By the spectral theorem, real symmetric matrices and complex Hermitian matrices have an eigenbasis; i.e., every vector is expressible as a linear combination of eigenvectors. In both cases, all eigenvalues are real.[19] This theorem can be generalized to infinite-dimensional situations related to matrices with infinitely many rows and columns, see below.
#### Invertible matrix and its inverse
A square matrix A is called invertible or non-singular if there exists a matrix B such that
$AB = BA = I_n$.[20][21]
If B exists, it is unique and is called the inverse matrix of A, denoted $A^{-1}$.
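For illustration (an added sketch), NumPy can compute an inverse and verify the defining relation; the matrix is a hypothetical 2-by-2 example.

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])

A_inv = np.linalg.inv(A)
print(A_inv)                              # [[-2.   1. ]
                                          #  [ 1.5 -0.5]]
print(np.allclose(A @ A_inv, np.eye(2)))  # True: A A^{-1} = I_2
print(np.allclose(A_inv @ A, np.eye(2)))  # True: A^{-1} A = I_2
```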
#### Definite matrix
| Positive definite matrix | Indefinite matrix |
|--------------------------|-------------------|
| $\begin{bmatrix} 1/4 & 0 \\ 0 & 1/4 \\ \end{bmatrix}$ | $\begin{bmatrix} 1/4 & 0 \\ 0 & -1/4 \end{bmatrix}$ |
| $Q(x,y) = \tfrac{1}{4}x^2 + \tfrac{1}{4}y^2$ | $Q(x,y) = \tfrac{1}{4}x^2 - \tfrac{1}{4}y^2$ |
| Points such that $Q(x,y)=1$ (ellipse) | Points such that $Q(x,y)=1$ (hyperbola) |
A symmetric n×n-matrix A is called positive-definite (respectively negative-definite; indefinite), if for all nonzero vectors x ∈ $\mathbf{R}^n$ the associated quadratic form given by
$Q(\mathbf{x}) = \mathbf{x}^\mathrm{T} A \mathbf{x}$
takes only positive values (respectively only negative values; both some negative and some positive values).[22] If the quadratic form takes only non-negative (respectively only non-positive) values, the symmetric matrix is called positive-semidefinite (respectively negative-semidefinite); hence the matrix is indefinite precisely when it is neither positive-semidefinite nor negative-semidefinite.
A symmetric matrix is positive-definite if and only if all its eigenvalues are positive.[23] The table at the right shows two possibilities for 2-by-2 matrices.
Allowing as input two different vectors instead yields the bilinear form associated to A:
$B_A(\mathbf{x}, \mathbf{y}) = \mathbf{x}^\mathrm{T} A \mathbf{y}$.[24]
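As an added sketch, the definiteness of the two example matrices above can be checked from the signs of their eigenvalues, using the criterion stated in this section.

```python
import numpy as np

P = np.array([[0.25, 0.0],
              [0.0,  0.25]])    # the positive-definite example above
N = np.array([[0.25, 0.0],
              [0.0, -0.25]])    # the indefinite example above

def classify(S):
    """Classify a symmetric matrix by the signs of its eigenvalues."""
    w = np.linalg.eigvalsh(S)            # eigenvalues of a symmetric matrix (all real)
    if np.all(w > 0):
        return "positive definite"
    if np.all(w < 0):
        return "negative definite"
    return "indefinite or semidefinite"

print(classify(P))   # positive definite
print(classify(N))   # indefinite or semidefinite

# The quadratic form Q(x) = x^T S x is positive for every nonzero x when S is positive definite:
x = np.array([3.0, -2.0])
print(x @ P @ x)     # 0.25*9 + 0.25*4 = 3.25 > 0
```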
#### Orthogonal matrix
An orthogonal matrix is a square matrix with real entries whose columns and rows are orthogonal unit vectors (i.e., orthonormal vectors). Equivalently, a matrix A is orthogonal if its transpose is equal to its inverse:
$A^\mathrm{T}=A^{-1}, \,$
which entails
$A^\mathrm{T} A = A A^\mathrm{T} = I, \,$
where I is the identity matrix.
An orthogonal matrix A is necessarily invertible (with inverse $A^{-1} = A^\mathrm{T}$), unitary ($A^{-1} = A^*$), and normal ($A^*A = AA^*$). The determinant of any orthogonal matrix is either +1 or −1. A special orthogonal matrix is an orthogonal matrix with determinant +1. As a linear transformation, every orthogonal matrix with determinant +1 is a pure rotation, while every orthogonal matrix with determinant −1 is either a pure reflection, or a composition of a reflection and a rotation.
The complex analogue of an orthogonal matrix is a unitary matrix.
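For illustration (an added example), a rotation matrix satisfies the defining relations of an orthogonal matrix:

```python
import numpy as np

theta = np.pi / 6                      # a 30 degree rotation
Q = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

print(np.allclose(Q.T @ Q, np.eye(2)))        # True: Q^T Q = I
print(np.allclose(np.linalg.inv(Q), Q.T))     # True: Q^{-1} = Q^T
print(np.linalg.det(Q))                       # approximately 1.0 (a special orthogonal matrix)
```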
### Main operations
#### Trace
The trace tr(A) of a square matrix A is the sum of its diagonal entries. While matrix multiplication is not commutative as mentioned above, the trace of the product of two matrices is independent of the order of the factors:
tr(AB) = tr(BA).
This is immediate from the definition of matrix multiplication:
$\operatorname{tr}(\mathbf{AB}) = \sum_{i=1}^m \sum_{j=1}^n A_{ij} B_{ji} = \operatorname{tr}(\mathbf{BA}).$
Also, the trace of a matrix is equal to that of its transpose, i.e.,
$\operatorname{tr}(A) = \operatorname{tr}(A^\mathrm{T})$.
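As an added numerical check with hypothetical random matrices, tr(AB) = tr(BA) and tr(A) = tr(A^T):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 5))
B = rng.standard_normal((5, 3))

# tr(AB) = tr(BA) even though AB (3x3) and BA (5x5) have different sizes.
print(np.isclose(np.trace(A @ B), np.trace(B @ A)))   # True

# tr(A) = tr(A^T) for a square matrix:
C = rng.standard_normal((4, 4))
print(np.isclose(np.trace(C), np.trace(C.T)))         # True
```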
#### Determinant
Main article: Determinant
A linear transformation on R2 given by the indicated matrix. The determinant of this matrix is −1, as the area of the green parallelogram at the right is 1, but the map reverses the orientation, since it turns the counterclockwise orientation of the vectors to a clockwise one.
The determinant det(A) or |A| of a square matrix A is a number encoding certain properties of the matrix. A matrix is invertible if and only if its determinant is nonzero. Its absolute value equals the area (in R2) or volume (in R3) of the image of the unit square (or cube), while its sign corresponds to the orientation of the corresponding linear map: the determinant is positive if and only if the orientation is preserved.
The determinant of 2-by-2 matrices is given by
$\det \begin{bmatrix}a&b\\c&d\end{bmatrix} = ad-bc.$
The determinant of 3-by-3 matrices involves 6 terms (rule of Sarrus). The more lengthy Leibniz formula generalises these two formulae to all dimensions.[25]
The determinant of a product of square matrices equals the product of their determinants:
det(AB) = det(A) · det(B).
Adding a multiple of any row to another row, or a multiple of any column to another column, does not change the determinant. Interchanging two rows or two columns affects the determinant by multiplying it by −1.[27] Using these operations, any matrix can be transformed to a lower (or upper) triangular matrix, and for such matrices the determinant equals the product of the entries on the main diagonal; this provides a method to calculate the determinant of any matrix. Finally, the Laplace expansion expresses the determinant in terms of minors, i.e., determinants of smaller matrices.[28] This expansion can be used for a recursive definition of determinants (taking as starting case the determinant of a 1-by-1 matrix, which is its unique entry, or even the determinant of a 0-by-0 matrix, which is 1), which can be seen to be equivalent to the Leibniz formula. Determinants can be used to solve linear systems using Cramer's rule, where each variable is given as the quotient of the determinants of two related square matrices.[29]
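For illustration (an added sketch with hypothetical values), the product rule for determinants and Cramer's rule can be verified numerically:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
B = np.array([[0.0, 1.0],
              [2.0, 5.0]])

print(np.linalg.det(A))                  # 1*4 - 2*3 = -2
print(np.isclose(np.linalg.det(A @ B),
                 np.linalg.det(A) * np.linalg.det(B)))   # True: det(AB) = det(A) det(B)

# Cramer's rule for A x = b with b = (5, 6): x_i = det(A_i) / det(A),
# where A_i is A with column i replaced by b.
b = np.array([5.0, 6.0])
x = np.array([np.linalg.det(np.column_stack((b, A[:, 1]))),
              np.linalg.det(np.column_stack((A[:, 0], b)))]) / np.linalg.det(A)
print(x, np.linalg.solve(A, b))          # both give [-4.   4.5]
```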
#### Eigenvalues and eigenvectors
Main article: Eigenvalues and eigenvectors
A number λ and a non-zero vector v satisfying
Av = λv
are called an eigenvalue and an eigenvector of A, respectively.[nb 1][30] The number λ is an eigenvalue of an n×n-matrix A if and only if $A - \lambda I_n$ is not invertible, which is equivalent to
$\det(\mathsf{A}-\lambda \mathsf{I}) = 0.\$[31]
The polynomial $p_A$ in an indeterminate X, given by evaluating the determinant $\det(XI_n - A)$, is called the characteristic polynomial of A. It is a monic polynomial of degree n. Therefore the polynomial equation $p_A(\lambda) = 0$ has at most n different solutions, i.e., eigenvalues of the matrix.[32] They may be complex even if the entries of A are real. According to the Cayley–Hamilton theorem, $p_A(A) = 0$, that is, the result of substituting the matrix itself into its own characteristic polynomial yields the zero matrix.
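As an added sketch, the eigenvalues, the characteristic polynomial, and the Cayley–Hamilton theorem can be checked numerically for a hypothetical 2-by-2 matrix:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])

# Eigenvalues are the roots of the characteristic polynomial det(X I - A).
print(np.linalg.eigvals(A))           # approximately [-0.372  5.372]

# np.poly returns the characteristic polynomial coefficients, here X^2 - 5X - 2.
p = np.poly(A)
print(p)                              # approximately [ 1. -5. -2.]

# Cayley-Hamilton: substituting A into its own characteristic polynomial gives the zero matrix.
pA = p[0] * (A @ A) + p[1] * A + p[2] * np.eye(2)
print(np.allclose(pA, np.zeros((2, 2))))   # True
```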
## Computational aspects
Matrix calculations can often be performed with different techniques. Many problems can be solved by direct algorithms as well as by iterative approaches. For example, the eigenvectors of a square matrix can be obtained by finding a sequence of vectors $x_n$ converging to an eigenvector when n tends to infinity.[33]
To be able to choose the more appropriate algorithm for each specific problem, it is important to determine both the effectiveness and precision of all the available algorithms. The domain studying these matters is called numerical linear algebra.[34] As with other numerical situations, two main aspects are the complexity of algorithms and their numerical stability.
Determining the complexity of an algorithm means finding upper bounds or estimates of how many elementary operations such as additions and multiplications of scalars are necessary to perform some algorithm, e.g., multiplication of matrices. For example, calculating the matrix product of two n-by-n matrices using the definition given above needs $n^3$ multiplications, since for each of the $n^2$ entries of the product, n multiplications are necessary. The Strassen algorithm outperforms this "naive" algorithm; it needs only about $n^{2.807}$ multiplications.[35] A refined approach also incorporates specific features of the computing devices.
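For illustration (an added sketch), the "naive" definition-based multiplication uses three nested loops, i.e., n multiplications for each of the n² entries:

```python
import numpy as np

def naive_matmul(A, B):
    """Multiply matrices by the definition: n multiplications per entry, n^3 in total for n-by-n."""
    m, n = A.shape
    n2, p = B.shape
    assert n == n2, "inner dimensions must agree"
    C = np.zeros((m, p))
    for i in range(m):
        for j in range(p):
            for r in range(n):          # the sum over r in the defining formula
                C[i, j] += A[i, r] * B[r, j]
    return C

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))
print(np.allclose(naive_matmul(A, B), A @ B))   # True
```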
In many practical situations additional information about the matrices involved is known. An important case are sparse matrices, i.e., matrices most of whose entries are zero. There are specifically adapted algorithms for, say, solving linear systems Ax = b for sparse matrices A, such as the conjugate gradient method.[36]
An algorithm is, roughly speaking, numerically stable if small deviations in the input values do not lead to large deviations in the result. For example, calculating the inverse of a matrix via Laplace's formula (Adj(A) denotes the adjugate matrix of A)
$A^{-1} = \operatorname{Adj}(A) / \det(A)$
may lead to significant rounding errors if the determinant of the matrix is very small. The norm of a matrix can be used to capture the conditioning of linear algebraic problems, such as computing a matrix's inverse.[37]
Although most computer languages are not designed with commands or libraries for matrices, as early as the 1970s, some engineering desktop computers such as the HP 9830 had ROM cartridges to add BASIC commands for matrices. Some computer languages such as APL were designed to manipulate matrices, and various mathematical programs can be used to aid computing with matrices.[38]
## Decomposition
Main articles: Matrix decomposition, Matrix diagonalization, Gaussian elimination, and Montante's method
There are several methods to render matrices into a more easily accessible form. They are generally referred to as matrix decomposition or matrix factorization techniques. The interest of all these techniques is that they preserve certain properties of the matrices in question, such as determinant, rank or inverse, so that these quantities can be calculated after applying the transformation, or that certain matrix operations are algorithmically easier to carry out for some types of matrices.
The LU decomposition factors a matrix as a product of a lower triangular matrix (L) and an upper triangular matrix (U).[39] Once this decomposition is calculated, linear systems can be solved more efficiently, by a simple technique called forward and back substitution. Likewise, inverses of triangular matrices are algorithmically easier to calculate. Gaussian elimination is a similar algorithm; it transforms any matrix to row echelon form.[40] Both methods proceed by multiplying the matrix by suitable elementary matrices, which correspond to permuting rows or columns and adding multiples of one row to another row. Singular value decomposition expresses any matrix A as a product UDV∗, where U and V are unitary matrices and D is a diagonal matrix.
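As an added illustration, SciPy exposes the LU decomposition and the associated forward/back substitution solver; the matrix and right-hand side are hypothetical:

```python
import numpy as np
from scipy.linalg import lu, lu_factor, lu_solve

A = np.array([[4.0, 3.0],
              [6.0, 3.0]])

P, L, U = lu(A)                     # A = P L U with L lower and U upper triangular
print(np.allclose(P @ L @ U, A))    # True

# Once factored, many right-hand sides can be solved cheaply by forward/back substitution.
lu_piv = lu_factor(A)
b = np.array([10.0, 12.0])
x = lu_solve(lu_piv, b)
print(x, np.allclose(A @ x, b))     # [1. 2.] True
```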
An example of a matrix in Jordan normal form. The grey blocks are called Jordan blocks.
The eigendecomposition or diagonalization expresses A as a product $VDV^{-1}$, where D is a diagonal matrix and V is a suitable invertible matrix.[41] If A can be written in this form, it is called diagonalizable. More generally, and applicable to all matrices, the Jordan decomposition transforms a matrix into Jordan normal form, that is to say matrices whose only nonzero entries are the eigenvalues $\lambda_1$ to $\lambda_n$ of A, placed on the main diagonal and possibly entries equal to one directly above the main diagonal, as shown at the right.[42] Given the eigendecomposition, the nth power of A (i.e., n-fold iterated matrix multiplication) can be calculated via
$A^n = (VDV^{-1})^n = VDV^{-1}VDV^{-1}\cdots VDV^{-1} = VD^nV^{-1}$
and the power of a diagonal matrix can be calculated by taking the corresponding powers of the diagonal entries, which is much easier than doing the exponentiation for A itself. This can be used to compute the matrix exponential $e^A$ (a need frequently arising in solving linear differential equations), as well as matrix logarithms and square roots of matrices.[43] To avoid numerically ill-conditioned situations, further algorithms such as the Schur decomposition can be employed.[44]
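For illustration (an added sketch), the nth power and the matrix exponential of a diagonalizable matrix can be computed from its eigendecomposition; the matrix below is a hypothetical symmetric example:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])      # a diagonalizable (here symmetric) matrix

w, V = np.linalg.eig(A)         # A = V diag(w) V^{-1}
D = np.diag(w)
print(np.allclose(V @ D @ np.linalg.inv(V), A))        # True

# A^5 via the eigendecomposition: only the diagonal entries are raised to the 5th power.
A5 = V @ np.diag(w**5) @ np.linalg.inv(V)
print(np.allclose(A5, np.linalg.matrix_power(A, 5)))   # True

# The same idea gives the matrix exponential e^A = V diag(e^{w_i}) V^{-1}.
expA = V @ np.diag(np.exp(w)) @ np.linalg.inv(V)
print(expA)
```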
## Abstract algebraic aspects and generalizations
Matrices can be generalized in different ways. Abstract algebra uses matrices with entries in more general fields or even rings, while linear algebra codifies properties of matrices in the notion of linear maps. It is possible to consider matrices with infinitely many columns and rows. Another extension is tensors, which can be seen as higher-dimensional arrays of numbers, as opposed to vectors, which can often be realised as sequences of numbers, while matrices are rectangular or two-dimensional arrays of numbers.[45] Matrices, subject to certain requirements, tend to form groups known as matrix groups.
### Matrices with more general entries
This article focuses on matrices whose entries are real or complex numbers. However, matrices can be considered with much more general types of entries than real or complex numbers. As a first step of generalization, any field, i.e., a set where addition, subtraction, multiplication and division operations are defined and well-behaved, may be used instead of R or C, for example rational numbers or finite fields. For example, coding theory makes use of matrices over finite fields. Wherever eigenvalues are considered, as these are roots of a polynomial they may exist only in a larger field than that of the coefficients of the matrix; for instance they may be complex in case of a matrix with real entries. The possibility to reinterpret the entries of a matrix as elements of a larger field (e.g., to view a real matrix as a complex matrix whose entries happen to be all real) then allows considering each square matrix to possess a full set of eigenvalues. Alternatively one can consider only matrices with entries in an algebraically closed field, such as C, from the outset.
More generally, abstract algebra makes great use of matrices with entries in a ring R.[46] Rings are a more general notion than fields in that a division operation need not exist. The very same addition and multiplication operations of matrices extend to this setting, too. The set M(n, R) of all square n-by-n matrices over R is a ring called matrix ring, isomorphic to the endomorphism ring of the left R-module $R^n$.[47] If the ring R is commutative, i.e., its multiplication is commutative, then M(n, R) is a unital noncommutative (unless n = 1) associative algebra over R. The determinant of square matrices over a commutative ring R can still be defined using the Leibniz formula; such a matrix is invertible if and only if its determinant is invertible in R, generalising the situation over a field F, where every nonzero element is invertible.[48] Matrices over superrings are called supermatrices.[49]
Matrices do not always have all their entries in the same ring – or even in any ring at all. One special but common case is block matrices, which may be considered as matrices whose entries themselves are matrices. The entries need not be square matrices, and thus need not be members of any ordinary ring; but their sizes must fulfil certain compatibility conditions.
### Relationship to linear maps
Linear maps $\mathbf{R}^n \to \mathbf{R}^m$ are equivalent to m-by-n matrices, as described above. More generally, any linear map f: V → W between finite-dimensional vector spaces can be described by a matrix $A = (a_{ij})$, after choosing bases $v_1, \ldots, v_n$ of V, and $w_1, \ldots, w_m$ of W (so n is the dimension of V and m is the dimension of W), which is such that
$f(\mathbf{v}_j) = \sum_{i=1}^m a_{i,j} \mathbf{w}_i\qquad\mbox{for }j=1,\ldots,n.$
In other words, column j of A expresses the image of vj in terms of the basis vectors wi of W; thus this relation uniquely determines the entries of the matrix A. Note that the matrix depends on the choice of the bases: different choices of bases give rise to different, but equivalent matrices.[50] Many of the above concrete notions can be reinterpreted in this light, for example, the transpose matrix AT describes the transpose of the linear map given by A, with respect to the dual bases.[51]
These properties can be restated in a more natural way: the category of all matrices with entries in a field $k$ with multiplication as composition is equivalent to the category of finite dimensional vector spaces and linear maps over this field.
More generally, the set of m×n matrices can be used to represent the R-linear maps between the free modules $R^m$ and $R^n$ for an arbitrary ring R with unity. When n = m, composition of these maps is possible, and this gives rise to the matrix ring of n×n matrices representing the endomorphism ring of $R^n$.
### Matrix groups
Main article: Matrix group
A group is a mathematical structure consisting of a set of objects together with a binary operation, i.e., an operation combining any two objects to a third, subject to certain requirements.[52] A group in which the objects are matrices and the group operation is matrix multiplication is called a matrix group.[nb 2][53] Since in a group every element has to be invertible, the most general matrix groups are the groups of all invertible matrices of a given size, called the general linear groups.
Any property of matrices that is preserved under matrix products and inverses can be used to define further matrix groups. For example, matrices with a given size and with a determinant of 1 form a subgroup of (i.e., a smaller group contained in) their general linear group, called a special linear group.[54] Orthogonal matrices, determined by the condition
$M^\mathrm{T}M = I$,
form the orthogonal group.[55] Every orthogonal matrix has determinant 1 or −1. Orthogonal matrices with determinant 1 form a subgroup called special orthogonal group.
Every finite group is isomorphic to a matrix group, as one can see by considering the regular representation, in which each group element acts by a permutation matrix.[56] General groups can be studied using matrix groups, which are comparatively well-understood, by means of representation theory.[57]
### Infinite matrices
It is also possible to consider matrices with infinitely many rows and/or columns[58] even if, being infinite objects, one cannot write down such matrices explicitly. All that matters is that for every element in the set indexing rows, and every element in the set indexing columns, there is a well-defined entry (these index sets need not even be subsets of the natural numbers). The basic operations of addition, subtraction, scalar multiplication and transposition can still be defined without problem; however matrix multiplication may involve infinite summations to define the resulting entries, and these are not defined in general.
If R is any ring with unity, then the ring of endomorphisms of $M=\bigoplus_{i\in I}R$ as a right R module is isomorphic to the ring of column finite matrices $\mathbb{CFM}_I(R)$ whose entries are indexed by $I\times I$, and whose columns each contain only finitely many nonzero entries. The endomorphisms of M considered as a left R module result in an analogous object, the row finite matrices $\mathbb{RFM}_I(R)$ whose rows each only have finitely many nonzero entries.
If infinite matrices are used to describe linear maps, then only those matrices can be used all of whose columns have but a finite number of nonzero entries, for the following reason. For a matrix A to describe a linear map f: V→W, bases for both spaces must have been chosen; recall that by definition this means that every vector in the space can be written uniquely as a (finite) linear combination of basis vectors, so that, written as a (column) vector v of coefficients, only finitely many entries $v_i$ are nonzero. Now the columns of A describe the images by f of individual basis vectors of V in the basis of W, which is only meaningful if these columns have only finitely many nonzero entries. There is no restriction on the rows of A however: in the product A·v there are only finitely many nonzero coefficients of v involved, so every one of its entries, even if it is given as an infinite sum of products, involves only finitely many nonzero terms and is therefore well defined. Moreover, this amounts to forming a linear combination of the columns of A that effectively involves only finitely many of them, whence the result has only finitely many nonzero entries, because each of those columns does. One also sees that the product of two matrices of the given type is well defined (provided as usual that the column-index and row-index sets match), is again of the same type, and corresponds to the composition of linear maps.
If R is a normed ring, then the condition of row or column finiteness can be relaxed. With the norm in place, absolutely convergent series can be used instead of finite sums. For example, the matrices whose columns form absolutely convergent series form a ring. Analogously, the matrices whose rows form absolutely convergent series also form a ring.
In that vein, infinite matrices can also be used to describe operators on Hilbert spaces, where convergence and continuity questions arise, which again results in certain constraints that have to be imposed. However, the explicit point of view of matrices tends to obfuscate the matter,[nb 3] and the abstract and more powerful tools of functional analysis can be used instead.
### Empty matrices
An empty matrix is a matrix in which the number of rows or columns (or both) is zero.[59][60] Empty matrices help dealing with maps involving the zero vector space. For example, if A is a 3-by-0 matrix and B is a 0-by-3 matrix, then AB is the 3-by-3 zero matrix corresponding to the null map from a 3-dimensional space V to itself, while BA is a 0-by-0 matrix. There is no common notation for empty matrices, but most computer algebra systems allow creating and computing with them. The determinant of the 0-by-0 matrix is 1 as follows from regarding the empty product occurring in the Leibniz formula for the determinant as 1. This value is also consistent with the fact that the identity map from any finite dimensional space to itself has determinant 1, a fact that is often used as a part of the characterization of determinants.
## Applications
There are numerous applications of matrices, both in mathematics and other sciences. Some of them merely take advantage of the compact representation of a set of numbers in a matrix. For example, in game theory and economics, the payoff matrix encodes the payoff for two players, depending on which out of a given (finite) set of alternatives the players choose.[61] Text mining and automated thesaurus compilation makes use of document-term matrices such as tf-idf to track frequencies of certain words in several documents.[62]
Complex numbers can be represented by particular real 2-by-2 matrices via
$a + ib \leftrightarrow \begin{bmatrix} a & -b \\ b & a \end{bmatrix},$
under which addition and multiplication of complex numbers and matrices correspond to each other. For example, 2-by-2 rotation matrices represent the multiplication with some complex number of absolute value 1, as above. A similar interpretation is possible for quaternions,[63] and also for Clifford algebras in general.
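As an added sketch, the correspondence between complex numbers and these 2-by-2 real matrices can be checked numerically; the values of z and w are hypothetical:

```python
import numpy as np

def as_matrix(z):
    """Represent the complex number z = a + ib by the 2-by-2 real matrix [[a, -b], [b, a]]."""
    return np.array([[z.real, -z.imag],
                     [z.imag,  z.real]])

z, w = 1 + 2j, 3 - 1j

# Addition and multiplication of complex numbers match matrix addition and multiplication.
print(np.allclose(as_matrix(z) + as_matrix(w), as_matrix(z + w)))   # True
print(np.allclose(as_matrix(z) @ as_matrix(w), as_matrix(z * w)))   # True

# A unit-modulus complex number e^{i*theta} corresponds to a rotation matrix.
theta = 0.3
print(as_matrix(np.exp(1j * theta)))   # [[cos(theta), -sin(theta)], [sin(theta), cos(theta)]]
```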
Early encryption techniques such as the Hill cipher also used matrices. However, due to the linear nature of matrices, these codes are comparatively easy to break.[64] Computer graphics uses matrices both to represent objects and to calculate transformations of objects using affine rotation matrices to accomplish tasks such as projecting a three-dimensional object onto a two-dimensional screen, corresponding to a theoretical camera observation.[65] Matrices over a polynomial ring are important in the study of control theory.
Chemistry makes use of matrices in various ways, particularly since the use of quantum theory to discuss molecular bonding and spectroscopy. Examples are the overlap matrix and the Fock matrix used in solving the Roothaan equations to obtain the molecular orbitals of the Hartree–Fock method.
### Graph theory
An undirected graph with adjacency matrix $\begin{bmatrix} 1 & 1 & 0 \\ 1 & 0 & 1 \\ 0 & 1 & 0 \end{bmatrix}.$
The adjacency matrix of a finite graph is a basic notion of graph theory.[66] It records which vertices of the graph are connected by an edge. Matrices containing just two different values (0 and 1 meaning for example "yes" and "no") are called logical matrices. The distance (or cost) matrix contains information about distances of the edges.[67] These concepts can be applied to websites connected by hyperlinks, or to cities connected by roads etc., in which case (unless the road network is extremely dense) the matrices tend to be sparse, i.e., contain few nonzero entries. Therefore, specifically tailored matrix algorithms can be used in network theory.
### Analysis and geometry
The Hessian matrix of a differentiable function $f\colon \mathbf{R}^n \to \mathbf{R}$ consists of the second derivatives of f with respect to the several coordinate directions, i.e.[68]
$H(f) = \left [\frac {\partial^2f}{\partial x_i \, \partial x_j} \right ].$
At the saddle point (x = 0, y = 0) (red) of the function $f(x,y) = x^2 - y^2$, the Hessian matrix $\begin{bmatrix} 2 & 0 \\ 0 & -2 \end{bmatrix}$ is indefinite.
It encodes information about the local growth behaviour of the function: given a critical point x = (x1, ..., xn), i.e., a point where the first partial derivatives $\partial f / \partial x_i$ of ƒ vanish, the function has a local minimum if the Hessian matrix is positive definite. Quadratic programming can be used to find global minima or maxima of quadratic functions closely related to the ones attached to matrices (see above).[69]
Another matrix frequently used in geometrical situations is the Jacobi matrix of a differentiable map $f\colon \mathbf{R}^n \to \mathbf{R}^m$. If $f_1, \ldots, f_m$ denote the components of f, then the Jacobi matrix is defined as [70]
$J(f) = \left [\frac {\partial f_i}{\partial x_j} \right ]_{1 \leq i \leq m, 1 \leq j \leq n}.$
If n > m and the rank of the Jacobi matrix attains its maximal value m at a point, then, by the implicit function theorem, the equation f(x) = 0 can locally be solved there for m of the variables in terms of the remaining n − m ones.[71]
Partial differential equations can be classified by considering the matrix of coefficients of the highest-order differential operators of the equation. For elliptic partial differential equations this matrix is positive definite, which has decisive influence on the set of possible solutions of the equation in question.[72]
The finite element method is an important numerical method to solve partial differential equations, widely applied in simulating complex physical systems. It attempts to approximate the solution to some equation by piecewise linear functions, where the pieces are chosen with respect to a sufficiently fine grid, which in turn can be recast as a matrix equation.[73]
### Probability theory and statistics
Two different Markov chains. The chart depicts the number of particles (of a total of 1000) in state "2". Both limiting values can be determined from the transition matrices, which are given by $\begin{bmatrix}.7&0\\.3&1\end{bmatrix}$ (red) and $\begin{bmatrix}.7&.2\\.3&.8\end{bmatrix}$ (black).
Stochastic matrices are square matrices whose rows are probability vectors, i.e., whose entries are non-negative and sum up to one. Stochastic matrices are used to define Markov chains with finitely many states.[74] A row of the stochastic matrix gives the probability distribution for the next position of some particle currently in the state that corresponds to the row. Properties of the Markov chain like absorbing states, i.e., states that any particle attains eventually, can be read off the eigenvectors of the transition matrices.[75]
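For illustration (an added sketch with a hypothetical row-stochastic transition matrix, following the row convention of this paragraph), iterating a Markov chain approaches its stationary distribution, which is also a left eigenvector for eigenvalue 1:

```python
import numpy as np

# A hypothetical row-stochastic transition matrix: entry P[i, j] is the probability
# of moving from state i to state j, so each row sums to 1.
P = np.array([[0.7, 0.3],
              [0.2, 0.8]])

p = np.array([1.0, 0.0])        # start with all probability in state 0
for _ in range(50):
    p = p @ P                   # one step of the Markov chain
print(p)                        # approaches the stationary distribution [0.4, 0.6]

# The stationary distribution is a left eigenvector of P for eigenvalue 1.
w, V = np.linalg.eig(P.T)
stationary = np.real(V[:, np.argmax(np.isclose(w, 1.0))])
print(stationary / stationary.sum())   # [0.4 0.6]
```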
Statistics also makes use of matrices in many different forms.[76] Descriptive statistics is concerned with describing data sets, which can often be represented in matrix form, by reducing the amount of data. The covariance matrix encodes the mutual variance of several random variables.[77] Another technique using matrices is linear least squares, a method that approximates a finite set of pairs $(x_1, y_1), (x_2, y_2), \ldots, (x_N, y_N)$ by a linear function
$y_i \approx ax_i + b$, $i = 1, \ldots, N$,
which can be formulated in terms of matrices, related to the singular value decomposition of matrices.[78]
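As an added sketch with hypothetical data, such a least-squares fit can be computed by solving the corresponding matrix problem:

```python
import numpy as np

# Hypothetical data points (x_i, y_i) scattered around the line y = 2x + 1.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.1, 2.9, 5.2, 6.8, 9.1])

# Fit y ~ a*x + b by linear least squares: minimise ||M c - y|| with M = [x, 1].
M = np.column_stack((x, np.ones_like(x)))
(a, b), *_ = np.linalg.lstsq(M, y, rcond=None)
print(a, b)        # close to 2 and 1
```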
Random matrices are matrices whose entries are random numbers, subject to suitable probability distributions, such as matrix normal distribution. Beyond probability theory, they are applied in domains ranging from number theory to physics.[79][80]
### Symmetries and transformations in physics
Further information: Symmetry in physics
Linear transformations and the associated symmetries play a key role in modern physics. For example, elementary particles in quantum field theory are classified as representations of the Lorentz group of special relativity and, more specifically, by their behavior under the spin group. Concrete representations involving the Pauli matrices and more general gamma matrices are an integral part of the physical description of fermions, which behave as spinors.[81] For the three lightest quarks, there is a group-theoretical representation involving the special unitary group SU(3); for their calculations, physicists use a convenient matrix representation known as the Gell-Mann matrices, which are also used for the SU(3) gauge group that forms the basis of the modern description of strong nuclear interactions, quantum chromodynamics. The Cabibbo–Kobayashi–Maskawa matrix, in turn, expresses the fact that the basic quark states that are important for weak interactions are not the same as, but linearly related to the basic quark states that define particles with specific and distinct masses.[82]
### Linear combinations of quantum states
The first model of quantum mechanics (Heisenberg, 1925) represented the theory's operators by infinite-dimensional matrices acting on quantum states.[83] This is also referred to as matrix mechanics. One particular example is the density matrix that characterizes the "mixed" state of a quantum system as a linear combination of elementary, "pure" eigenstates.[84]
Another matrix serves as a key tool for describing the scattering experiments that form the cornerstone of experimental particle physics: Collision reactions such as occur in particle accelerators, where non-interacting particles head towards each other and collide in a small interaction zone, with a new set of non-interacting particles as the result, can be described as the scalar product of outgoing particle states and a linear combination of ingoing particle states. The linear combination is given by a matrix known as the S-matrix, which encodes all information about the possible interactions between particles.[85]
### Normal modes
A general application of matrices in physics is to the description of linearly coupled harmonic systems. The equations of motion of such systems can be described in matrix form, with a mass matrix multiplying a generalized velocity to give the kinetic term, and a force matrix multiplying a displacement vector to characterize the interactions. The best way to obtain solutions is to determine the system's eigenvectors, its normal modes, by diagonalizing the matrix equation. Techniques like this are crucial when it comes to the internal dynamics of molecules: the internal vibrations of systems consisting of mutually bound component atoms.[86] They are also needed for describing mechanical vibrations, and oscillations in electrical circuits.[87]
### Geometrical optics
Geometrical optics provides further matrix applications. In this approximative theory, the wave nature of light is neglected. The result is a model in which light rays are indeed geometrical rays. If the deflection of light rays by optical elements is small, the action of a lens or reflective element on a given light ray can be expressed as multiplication of a two-component vector with a two-by-two matrix called ray transfer matrix: the vector's components are the light ray's slope and its distance from the optical axis, while the matrix encodes the properties of the optical element. Actually, there are two kinds of matrices, viz. a refraction matrix describing the refraction at a lens surface, and a translation matrix, describing the translation of the plane of reference to the next refracting surface, where another refraction matrix applies. The optical system, consisting of a combination of lenses and/or reflective elements, is simply described by the matrix resulting from the product of the components' matrices.[88]
### Electronics
Traditional mesh analysis in electronics leads to a system of linear equations that can be described with a matrix.
The behaviour of many electronic components can be described using matrices. Let A be a 2-dimensional vector with the component's input voltage $v_1$ and input current $i_1$ as its elements, and let B be a 2-dimensional vector with the component's output voltage $v_2$ and output current $i_2$ as its elements. Then the behaviour of the electronic component can be described by B = H · A, where H is a 2×2 matrix containing one impedance element ($h_{12}$), one admittance element ($h_{21}$) and two dimensionless elements ($h_{11}$ and $h_{22}$). Calculating a circuit now reduces to multiplying matrices.
## History
Matrices have a long history of application in solving linear equations. The Chinese text The Nine Chapters on the Mathematical Art (Jiu Zhang Suan Shu), from between 300 BC and AD 200, is the first example of the use of matrix methods to solve simultaneous equations,[89] including the concept of determinants, over 1000 years before its publication by the Japanese mathematician Seki in 1683[90] and the German mathematician Leibniz in 1693. Cramer presented his rule in 1750.
Early matrix theory emphasized determinants more strongly than matrices and an independent matrix concept akin to the modern notion emerged only in 1858, with Cayley's Memoir on the theory of matrices.[91][92] The term "matrix" (Latin for "womb", derived from mater—mother[93]) was coined by Sylvester in 1850,[94] who understood a matrix as an object giving rise to a number of determinants today called minors, that is to say, determinants of smaller matrices that derive from the original one by removing columns and rows.[95] In an 1851 paper, Sylvester explains:
I have in previous papers defined a "Matrix" as a rectangular array of terms, out of which different systems of determinants may be engendered as from the womb of a common parent.[96]
The study of determinants sprang from several sources.[97] Number-theoretical problems led Gauss to relate coefficients of quadratic forms, i.e., expressions such as $x^2 + xy - 2y^2$, and linear maps in three dimensions to matrices. Eisenstein further developed these notions, including the remark that, in modern parlance, matrix products are non-commutative. Cauchy was the first to prove general statements about determinants, using as definition of the determinant of a matrix A = [$a_{i,j}$] the following: replace the powers $a_j^k$ by $a_{jk}$ in the polynomial
$a_1 a_2 \cdots a_n \prod_{i < j} (a_j - a_i)\;$,
where Π denotes the product of the indicated terms. He also showed, in 1829, that the eigenvalues of symmetric matrices are real.[98] Jacobi studied "functional determinants"—later called Jacobi determinants by Sylvester—which can be used to describe geometric transformations at a local (or infinitesimal) level, see above; Kronecker's Vorlesungen über die Theorie der Determinanten[99] and Weierstrass' Zur Determinantentheorie,[100] both published in 1903, first treated determinants axiomatically, as opposed to previous more concrete approaches such as the mentioned formula of Cauchy. At that point, determinants were firmly established.
Many theorems were first established for small matrices only; for example, the Cayley–Hamilton theorem was proved for 2×2 matrices by Cayley in the aforementioned memoir, and by Hamilton for 4×4 matrices. Frobenius, working on bilinear forms, generalized the theorem to all dimensions (1898). Also at the end of the 19th century the Gauss–Jordan elimination (generalizing a special case now known as Gauss elimination) was established by Jordan. In the early 20th century, matrices attained a central role in linear algebra,[101] partially due to their use in classification of the hypercomplex number systems of the previous century.
The inception of matrix mechanics by Heisenberg, Born and Jordan led to studying matrices with infinitely many rows and columns.[102] Later, von Neumann carried out the mathematical formulation of quantum mechanics, by further developing functional analytic notions such as linear operators on Hilbert spaces, which, very roughly speaking, correspond to Euclidean space, but with an infinity of independent directions.
### Other historical usages of the word “matrix” in mathematics
The word has been used in unusual ways by at least two authors of historical importance.
Bertrand Russell and Alfred North Whitehead in their Principia Mathematica (1910–1913) use the word “matrix” in the context of their Axiom of reducibility. They proposed this axiom as a means to reduce any function to one of lower type, successively, so that at the “bottom” (0 order) the function is identical to its extension:
“Let us give the name of matrix to any function, of however many variables, which does not involve any apparent variables. Then any possible function other than a matrix is derived from a matrix by means of generalization, i.e., by considering the proposition which asserts that the function in question is true with all possible values or with some value of one of the arguments, the other argument or arguments remaining undetermined”.[103]
For example, a function Φ(x, y) of two variables x and y can be reduced to a collection of functions of a single variable, e.g., y, by “considering” the function for all possible values of “individuals” $a_i$ substituted in place of variable x. And then the resulting collection of functions of the single variable y, i.e., $\forall a_i\colon \Phi(a_i, y)$, can be reduced to a “matrix” of values by “considering” the function for all possible values of “individuals” $b_j$ substituted in place of variable y:
$\forall b_j \forall a_i\colon \Phi(a_i, b_j)$.
Alfred Tarski in his 1946 Introduction to Logic used the word “matrix” synonymously with the notion of truth table as used in mathematical logic.[104]
## Notes
1. K. Bryan and T. Leise. The \$25,000,000,000 eigenvector: The linear algebra behind Google. SIAM Review, 48(3):569–581, 2006.
2. "How to organize, add and multiply matrices - Bill Shillito". TED ED. Retrieved April 6, 2013.
3. For example, Mathematica, see Wolfram 2003, Ch. 3.7
4. See any standard reference in group.
5. Lang 1987a, Ch. XVI.5. For a more advanced, and more general statement see Lang 1969, Ch. VI.2
6. Šolin 2005, Ch. 2.5. See also stiffness method.
7. Healy, Michael (1986), Matrices for Statistics, Oxford University Press, ISBN 978-0-19-850702-4
8. Shen, Crossley & Lun 1999 cited by Bretscher 2005, p. 1
9. Needham, Joseph; Wang Ling (1959). Science and Civilisation in China III. Cambridge: Cambridge University Press. p. 117. ISBN 9780521058018.
10. Merriam–Webster dictionary, Merriam–Webster, retrieved April, 20th 2009
11. Although many sources state that J. J. Sylvester coined the mathematical term "matrix" in 1848, Sylvester published nothing in 1848. (For proof that Sylvester published nothing in 1848, see: J. J. Sylvester with H. F. Baker, ed., The Collected Mathematical Papers of James Joseph Sylvester (Cambridge, England: Cambridge University Press, 1904), vol. 1.) His earliest use of the term "matrix" occurs in 1850 in: J. J. Sylvester (1850) "Additions to the articles in the September number of this journal, "On a new class of theorems," and on Pascal's theorem," The London, Edinburgh and Dublin Philosophical Magazine and Journal of Science, 37 : 363-370. From page 369: "For this purpose we must commence, not with a square, but with an oblong arrangement of terms consisting, suppose, of m lines and n columns. This will not in itself represent a determinant, but is, as it were, a Matrix out of which we may form various systems of determinants … "
12. Per the OED the first usage of the word "matrix" with respect to mathematics appears in James J. Sylvester in London, Edinb. & Dublin Philos. Mag. 37 (1850), p. 369: "We commence with an oblong arrangement of terms consisting, suppose, of m lines and n columns. This will not in itself represent a determinant, but is, as it were, a Matrix out of which we may form various systems of determinants by fixing upon a number p, and selecting at will p lines and p columns, the squares corresponding to which may be termed determinants of the pth order.
13. Whitehead, Alfred North; and Russell, Bertrand (1913) Principia Mathematica to *56, Cambridge at the University Press, Cambridge UK (republished 1962) cf page 162ff.
14. Tarski, Alfred; (1946) Introduction to Logic and the Methodology of Deductive Sciences, Dover Publications, Inc, New York NY, ISBN 0-486-28462-X.
1. Eigen means "own" in German and in Dutch.
## References
• Anton, Howard (1987), Elementary Linear Algebra (5th ed.), New York: Wiley, ISBN 0-471-84819-0
• Arnold, Vladimir I.; Cooke, Roger (1992), Ordinary differential equations, Berlin, DE; New York, NY: Springer-Verlag, ISBN 978-3-540-54813-3
• Artin, Michael (1991), Algebra, Prentice Hall, ISBN 978-0-89871-510-1
• Association for Computing Machinery (1979), Computer Graphics, Tata McGraw–Hill, ISBN 978-0-07-059376-3
• Baker, Andrew J. (2003), Matrix Groups: An Introduction to Lie Group Theory, Berlin, DE; New York, NY: Springer-Verlag, ISBN 978-1-85233-470-3
• Bau III, David; Trefethen, Lloyd N. (1997), Numerical linear algebra, Philadelphia, PA: Society for Industrial and Applied Mathematics, ISBN 978-0-89871-361-9
• Beauregard, Raymond A.; Fraleigh, John B. (1973), A First Course In Linear Algebra: with Optional Introduction to Groups, Rings, and Fields, Boston: Houghton Mifflin Co., ISBN 0-395-14017-X
• Bretscher, Otto (2005), Linear Algebra with Applications (3rd ed.), Prentice Hall
• Bronson, Richard (1989), Schaum's outline of theory and problems of matrix operations, New York: McGraw–Hill, ISBN 978-0-07-007978-6
• Brown, William A. (1991), Matrices and vector spaces, New York, NY: M. Dekker, ISBN 978-0-8247-8419-5
• Coburn, Nathaniel (1955), Vector and tensor analysis, New York, NY: Macmillan, OCLC 1029828
• Conrey, J. Brian (2007), Ranks of elliptic curves and random matrix theory, Cambridge University Press, ISBN 978-0-521-69964-8
• Fraleigh, John B. (1976), A First Course In Abstract Algebra (2nd ed.), Reading: Addison-Wesley, ISBN 0-201-01984-1
• Fudenberg, Drew; Tirole, Jean (1983), Game Theory, MIT Press
• Gilbarg, David; Trudinger, Neil S. (2001), Elliptic partial differential equations of second order (2nd ed.), Berlin, DE; New York, NY: Springer-Verlag, ISBN 978-3-540-41160-4
• Godsil, Chris; Royle, Gordon (2004), Algebraic Graph Theory, Graduate Texts in Mathematics 207, Berlin, DE; New York, NY: Springer-Verlag, ISBN 978-0-387-95220-8
• Golub, Gene H.; Van Loan, Charles F. (1996), Matrix Computations (3rd ed.), Johns Hopkins, ISBN 978-0-8018-5414-9
• Greub, Werner Hildbert (1975), Linear algebra, Graduate Texts in Mathematics, Berlin, DE; New York, NY: Springer-Verlag, ISBN 978-0-387-90110-7
• Halmos, Paul Richard (1982), A Hilbert space problem book, Graduate Texts in Mathematics 19 (2nd ed.), Berlin, DE; New York, NY: Springer-Verlag, ISBN 978-0-387-90685-0, MR 675952
• Horn, Roger A.; Johnson, Charles R. (1985), Matrix Analysis, Cambridge University Press, ISBN 978-0-521-38632-6
• Householder, Alston S. (1975), The theory of matrices in numerical analysis, New York, NY: Dover Publications, MR 0378371
• Krzanowski, Wojtek J. (1988), Principles of multivariate analysis, Oxford Statistical Science Series 3, The Clarendon Press Oxford University Press, ISBN 978-0-19-852211-9, MR 969370
• Itõ, Kiyosi, ed. (1987), Encyclopedic dictionary of mathematics. Vol. I-IV (2nd ed.), MIT Press, ISBN 978-0-262-09026-1, MR 901762
• Lang, Serge (1969), Analysis II, Addison-Wesley
• Lang, Serge (1987a), Calculus of several variables (3rd ed.), Berlin, DE; New York, NY: Springer-Verlag, ISBN 978-0-387-96405-8
• Lang, Serge (1987b), Linear algebra, Berlin, DE; New York, NY: Springer-Verlag, ISBN 978-0-387-96412-6
• Lang, Serge (2002), Algebra, Graduate Texts in Mathematics 211 (Revised third ed.), New York: Springer-Verlag, ISBN 978-0-387-95385-4, MR1878556
• Latouche, Guy; Ramaswami, Vaidyanathan (1999), Introduction to matrix analytic methods in stochastic modeling (1st ed.), Philadelphia, PA: Society for Industrial and Applied Mathematics, ISBN 978-0-89871-425-8
• Manning, Christopher D.; Schütze, Hinrich (1999), Foundations of statistical natural language processing, MIT Press, ISBN 978-0-262-13360-9
• Mehata, K. M.; Srinivasan, S. K. (1978), Stochastic processes, New York, NY: McGraw–Hill, ISBN 978-0-07-096612-3
• Mirsky, Leonid (1990), An Introduction to Linear Algebra, Courier Dover Publications, ISBN 978-0-486-66434-7
• Nering, Evar D. (1970), Linear Algebra and Matrix Theory (2nd ed.), New York: Wiley, LCCN 76-91646
• Nocedal, Jorge; Wright, Stephen J. (2006), Numerical Optimization (2nd ed.), Berlin, DE; New York, NY: Springer-Verlag, p. 449, ISBN 978-0-387-30303-1
• Oualline, Steve (2003), Practical C++ programming, O'Reilly, ISBN 978-0-596-00419-4
• Press, William H.; Flannery, Brian P.; Teukolsky, Saul A.; Vetterling, William T. (1992), "LU Decomposition and Its Applications", Numerical Recipes in FORTRAN: The Art of Scientific Computing (2nd ed.), Cambridge University Press, pp. 34–42
• Punnen, Abraham P.; Gutin, Gregory (2002), The traveling salesman problem and its variations, Boston, MA: Kluwer Academic Publishers, ISBN 978-1-4020-0664-7
• Reichl, Linda E. (2004), The transition to chaos: conservative classical systems and quantum manifestations, Berlin, DE; New York, NY: Springer-Verlag, ISBN 978-0-387-98788-0
• Rowen, Louis Halle (2008), Graduate Algebra: noncommutative view, Providence, RI: American Mathematical Society, ISBN 978-0-8218-4153-2
• Šolin, Pavel (2005), Partial Differential Equations and the Finite Element Method, Wiley-Interscience, ISBN 978-0-471-76409-0
• Stinson, Douglas R. (2005), Cryptography, Discrete Mathematics and its Applications, Chapman & Hall/CRC, ISBN 978-1-58488-508-5
• Stoer, Josef; Bulirsch, Roland (2002), Introduction to Numerical Analysis (3rd ed.), Berlin, DE; New York, NY: Springer-Verlag, ISBN 978-0-387-95452-3
• Ward, J. P. (1997), Quaternions and Cayley numbers, Mathematics and its Applications 403, Dordrecht, NL: Kluwer Academic Publishers Group, ISBN 978-0-7923-4513-8, MR 1458894
• Wolfram, Stephen (2003), The Mathematica Book (5th ed.), Champaign, IL: Wolfram Media, ISBN 978-1-57955-022-6
### Physics references
• Bohm, Arno (2001), Quantum Mechanics: Foundations and Applications, Springer, ISBN 0-387-95330-2
• Burgess, Cliff; Moore, Guy (2007), The Standard Model. A Primer, Cambridge University Press, ISBN 0-521-86036-9
• Guenther, Robert D. (1990), Modern Optics, John Wiley, ISBN 0-471-60538-7
• Itzykson, Claude; Zuber, Jean-Bernard (1980), Quantum Field Theory, McGraw–Hill, ISBN 0-07-032071-3
• Riley, Kenneth F.; Hobson, Michael P.; Bence, Stephen J. (1997), Mathematical methods for physics and engineering, Cambridge University Press, ISBN 0-521-55506-X
• Schiff, Leonard I. (1968), Quantum Mechanics (3rd ed.), McGraw–Hill
• Weinberg, Steven (1995), The Quantum Theory of Fields. Volume I: Foundations, Cambridge University Press, ISBN 0-521-55001-7
• Wherrett, Brian S. (1987), Group Theory for Atoms, Molecules and Solids, Prentice–Hall International, ISBN 0-13-365461-3
• Zabrodin, Anton; Brezin, Édouard; Kazakov, Vladimir; Serban, Didina; Wiegmann, Paul (2006), Applications of Random Matrices in Physics (NATO Science Series II: Mathematics, Physics and Chemistry), Berlin, DE; New York, NY: Springer-Verlag, ISBN 978-1-4020-4530-1
### Historical references
• Bôcher, Maxime (2004), Introduction to higher algebra, New York, NY: Dover Publications, ISBN 978-0-486-49570-5 , reprint of the 1907 original edition
• Cayley, Arthur (1889), The collected mathematical papers of Arthur Cayley, I (1841–1853), Cambridge University Press, pp. 123–126
• Dieudonné, Jean, ed. (1978), Abrégé d'histoire des mathématiques 1700-1900, Paris, FR: Hermann
• Hawkins, Thomas (1975), "Cauchy and the spectral theory of matrices", Historia Mathematica 2: 1–29, doi:10.1016/0315-0860(75)90032-4, ISSN 0315-0860, MR 0469635
• Knobloch, Eberhard (1994), "From Gauss to Weierstrass: determinant theory and its historical evaluations", The intersection of history and mathematics, Science Networks Historical Studies 15, Basel, Boston, Berlin: Birkhäuser, pp. 51–66, MR 1308079
• Kronecker, Leopold (1897), in Hensel, Kurt, Leopold Kronecker's Werke, Teubner
• Mehra, Jagdish; Rechenberg, Helmut (1987), The Historical Development of Quantum Theory (1st ed.), Berlin, DE; New York, NY: Springer-Verlag, ISBN 978-0-387-96284-9
• Shen, Kangshen; Crossley, John N.; Lun, Anthony Wah-Cheung (1999), Nine Chapters of the Mathematical Art, Companion and Commentary (2nd ed.), Oxford University Press, ISBN 978-0-19-853936-0
• Weierstrass, Karl (1915), Collected works 3
http://physics.stackexchange.com/questions/39016/how-did-one-get-the-defining-equation-of-probability-current-and-conservation-of?answertab=oldest
# How did one get the defining equation of probability current and conservation of probability current and density?
I'm reading the Wikipedia page for the Dirac equation, where the probability density is given as
$$\rho=\phi^*\phi$$
and this density is convected according to the probability current vector
$$J = -\frac{i\hbar}{2m}(\phi^*\nabla\phi - \phi\nabla\phi^*)$$
with the conservation of probability current and density following from the Schrödinger equation:
$$\nabla\cdot J + \frac{\partial\rho}{\partial t} = 0$$
The question is, how did one get the defining equation of the probability current vector? It seems that in most texts, this was just given as a rule, yet I am thinking, there must be somehow reasons for writing the equation like that..
Also, why is the conservation equation - the last equation - is kept?
-
## 1 Answer
The probability current is just that - the rate and direction that probability flows past a point. It is analogous to electric current or to a fluid current, and the continuity equation is the same as for those concepts.
For example, if the probability current is high on the left-hand side of a region and low on the right-hand side, more probability is flowing in from the left than out from the right, and the total probability for the particle to be found in that region is increasing.
To calculate this, the probability that a particle is found in a region is
$$\int_{region} \phi^* \phi \,\, \mathrm{d}x$$
The time derivative of this is the rate that the probability for the particle to be in that region changes.
$$\int_{region} \left(\frac{\partial \phi^*}{\partial t} \phi + \phi^*\frac{\partial \phi}{\partial t}\right) \,\,\mathrm{d}x$$
We know what the time derivative of $\phi$ is, though, from the Schrodinger equation. If you plug that in and assume the potential is real, this simplifies to
$$\frac{i \hbar}{2m} \int_{region} \left(\phi^* (\nabla^2 \phi) - (\nabla^2\phi^*) \phi \right)\mathrm{d}x$$
If you integrate this by parts (or apply the divergence theorem), you see it's minus the outward flux of the probability current through the surface bounding the region: the probability inside grows exactly when the net current flows inward. Thus the probability current is a flow of probability the same way the electric current is a flow of charge.
The continuity equation is just the differential form of this same relation. Since we had to use the Schrodinger equation to find $\frac{\partial \phi}{\partial t}$, we've shown that the continuity equation follows from Schrodinger's equation.
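As a quick numerical check (a minimal sketch; the wave packet, grid, and constants are arbitrary choices, not taken from any text), note that for $\phi(x)=e^{-x^2/2}e^{ikx}$ the formula above gives $J=\frac{\hbar k}{m}|\phi|^2$, which a finite-difference computation reproduces:

```python
import numpy as np

# phi(x) = exp(-x^2/2) exp(i k x) on a 1D grid; for this packet the
# probability current should equal (hbar k / m) |phi|^2.
hbar = m = 1.0
k = 2.0
x = np.linspace(-5.0, 5.0, 2001)
dx = x[1] - x[0]

phi = np.exp(-x**2 / 2) * np.exp(1j * k * x)
dphi = np.gradient(phi, dx)                      # finite-difference d(phi)/dx

J_numeric = (-1j * hbar / (2 * m)) * (np.conj(phi) * dphi - phi * np.conj(dphi))
J_exact = (hbar * k / m) * np.abs(phi) ** 2

print(np.max(np.abs(J_numeric.real - J_exact)))  # small discretization error
```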
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 6, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9528270959854126, "perplexity_flag": "head"}
|
http://topologicalmusings.wordpress.com/tag/propositional-calculus/
|
Todd and Vishal’s blog
Topological Musings
## Propositional Calculus – the basics
January 3, 2008 in Math Topics, Propositional Calculus | Tags: conjunction, disjunction, logical equivalence, material implication, negation, Propositional Calculus, propositional logic, propositions, statements | by Vishal | Leave a comment
Let’s see if we can build this from ground up. We first define a statement (or sometimes, a proposition) to be a meaningful assertion that is either true or false. Well, meaningful means we should be able to say for sure if a statement is true or false. So, something like “Hello, there!” is not counted as a statement but “the sun is made of butter” is. The latter is evidently false but the former is neither true nor false. Now, it can get quite cumbersome after a while if we keep using statements such as “the sun is made of butter” every time we need to use them. Thus, it is useful to have variables, or to be precise, propositional variables, to denote all statements. We usually prefer to use $p, q, r$ and so on for such variables.
Now, all of this would be rather boring if we had just symbols such as $p, q, r$ etc. to denote statements. Thus, a statement like “Archimedes was a philosopher” is not that interesting in itself. In fact, all the statements (in our formal system) would be “isolated” ones in the sense that we wouldn’t be able to logically “connect” one statement to another. We want to be able to express sentences like “$x = -2$ and $y=2$“, “$(x = -2)$ implies $(x^2 = 4)$” and so on. So, we add something called logical connectives (also called operator symbols) to the picture. There are four basic ones: $\wedge$ (conjunction), $\vee$ (disjunction), $\rightarrow$ (material implication), which are all of arity 2 and $\neg$ (negation) which is of arity 1. Using these logical connectives, we can now form compound statements such as $p \wedge q$ (i.e. $p$ and $q$), $p \vee q$ (i.e. $p$ or $q$), $\neg p$ (i.e. $\mbox{not} (p)$), and $p \rightarrow q$ (i.e. $p$ implies $q$.) Note that each of $\wedge, \vee$ and $\rightarrow$ requires two propositional variables in order for it to make any sense; this is expressed by saying their arity is 2. On the other hand, $\neg$ has arity 1 since it is applied to exactly one propositional variable.
We also introduce another logical operator called logical equivalence ($\equiv$,) which has arity 2. It is really convenient to have logical equivalence on hand, as we shall see later. We say $p \equiv q$ if and only if “$(p \rightarrow q) \mbox{ and } (q \rightarrow p)$“. What this basically means is, if $p$ is true then so is $q$ and if $q$ is true then so is $p$. Another equivalent way of saying this is, if $p$ is true then so is $q$ and if $p$ is false then so is $q$.
Before we proceed further, we make a few observations. First, if $p$ and $q$ are propositional variables, then by definition each of those is either true or false. Formally speaking, the truth value of $p$ or $q$ is either true or false. This is equally true of the compound statements $p \wedge q,\, p \vee q,\, p \rightarrow q$ and $\neg p$. Of course, the truth values of these four compound statements depend on $p$ and $q$. We will delve into this in the next post.
Second, we don’t really need all the four basic operators. Two of those, viz. $\rightarrow$ and $\neg$ suffice for all logical purposes. This means all statements involving $\wedge$ and/or $\vee$ can be “converted” to ones that involve only $\rightarrow$ and $\neg$. However, we can also choose the “minimal” set $\{ \wedge, \,\neg \}$, instead, for the purpose for which we chose the minimal set $\{ \rightarrow, \neg \}$. In fact, there are lots of other possible combinations of operators that can serve our purpose equally well. Which minimal set of operators we choose depends sometimes on personal taste and at other times on practical considerations. So, for example, while designing circuits in the field of computer hardware, the minimal operator set that is used is $\{ \downarrow \}$. In fact, all that’s really needed is this particular operator set. Here $p \downarrow q \equiv \neg (p \wedge q)$.
So, what have we got so far? Well, we have a formal notion of a statement (or proposition.) We have access to propositional variables ($p, \, q, \, r$, etc.) that may be used to denote statements. We know how to create the negation of a given statement using the $\neg$ logical connective. We also know how to “connect” any two statements using conjunction, disjunction and material implication that are symbolically represented by the logical connectives $\wedge, \, \vee$ and $\rightarrow$, respectively. And, lastly, given any two statements $p$ and $q$, we have defined what it means for the two to be logically equivalent (which is symbolically represented by $\equiv$) to each other. Indeed, $p \equiv q$ if and only if ($p \rightarrow q \mbox{ and } q \to p$).
We shall see in the later posts that the above “small” formal system (for propositional calculus) we have built thus far is, in fact, quite powerful. We can, indeed, already employ quite a bit of it in “ordinary” mathematics. But, more on this, later!
## Propositional Calculus
December 24, 2007 in Propositional Calculus | Tags: predicate calculus, Propositional Calculus, relational algebra | by Vishal | Leave a comment
I wish to use this part of the blog to quickly go through the basic elements of propositional calculus, and then later move on to predicate calculus in another part of the blog, followed by the fundamentals of relational algebra in yet another part. I might then go through the problem of query optimization in RDBMS after that. Let’s see how far this goes.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 62, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.935735285282135, "perplexity_flag": "head"}
|
http://mathoverflow.net/questions/29090/direct-construction-of-the-integers/80221
|
## Direct construction of the integers
Is there a direct construction of the integers which does not involve taking any quotients? I am of course aware of the usual construction. I am also aware of the nice axiomatic characterization of the integers.
I am most interested in a direct construction. I am sure that one could probably use a disjoint union of $\mathbb{N}$ and $\mathbb{N}^{+}$ to construct $\mathbb{Z}$. But this involves 2 intermediate constructions (as well as dealing with cases).
Edit: by direct construction, I mean something like the Peano construction for $\mathbb{N}$, seen as the inductive type built from $0$ and $\mathit{succ}$. Then one also constructs the operations of addition, multiplication, etc. Another way to think of it: suppose you wanted to have a datatype of 'integers' in a lambda calculus which only allows inductive constructions and no quotients, how would you do it?
-
1
The free group on one generator? Or do you want to define multiplication, as well? Maybe you should provide an example of a "direct" construction of something else to show what you have in mind. – Gerald Edgar Jun 22 2010 at 14:13
4
Strings of symbols, from a three-letter alphabet (representing digits 0, 1, -1; thought of as base 3 expansions). All but finitely many digits must be zero. Define operations essentially as in grade-school. Is that what you want for "direct"? I took balanced ternary, since you don't want to start with positive integers... – Gerald Edgar Jun 22 2010 at 14:18
@Gerald: isn't your second construction 'redundant' (some numbers are multiply represented) so that you would need to take a quotient? Yes, that is direct enough. – Jacques Carette Jun 22 2010 at 14:33
2
0 (zero), S (successor) , P (predecessor) Add the axioms PS(x)=SP(x)=x. – Kaveh Aug 1 2010 at 7:33
4
Could you explain your motivation? Are you trying to get an efficient implementation of the integers, or something that works well in a proof assistant, or something that is mathematically elegant, or what? – Andrej Bauer Nov 7 2011 at 14:17
## 5 Answers
Informally speaking, taking the limit of two's complement as the number of bits goes to $\infty$, the integers are just the eventually constant binary sequences (which are naturally represented by finite binary sequences). For this to work, said sequences must start with the least significant bit, i.e., $1001011\overline{0}$ is interpreted as $2^0+2^3+2^5+2^6$ and $1001010\overline{1}$ is interpreted as $2^0+2^3+2^5-2^7$. The arithmetic and ordering of these strings is natural (and efficient for microprocessors when we restrict from $\mathbb{Z}$ to, say, $\{-2^{63},\ldots,2^{63}-1\}$).
The above can be reinterpreted as the following less direct construction. If $R$ is the inverse limit of rings $\lim_{\infty\leftarrow n}\mathbb{Z}/2^n\mathbb{Z}$, then the diagonal map $\Delta\colon\mathbb{Z}\rightarrow R$ given by $m\mapsto \lim_{\infty\leftarrow n}(m\mod 2^n)$ is an injective ring homomorphism. [Edit: The image is characterized as the set of $\vec x\in R$ for which the truth value of $x(n+1)=x(n)$ is eventually constant.] Moreover, the ordering of $\mathbb{Z}$ is coded via $m\geq 0\Leftrightarrow(m\mod 2^n: n\in\mathbb{N})$ is eventually constant.
Update: I couldn't resist the temptation to write a functional programming implementation.
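To make the idea concrete, here is a minimal sketch of the encoding described above (an illustration only, not the implementation linked in the update; the helper names are made up): an integer is a finite list of bits, least significant bit first, together with the bit that repeats forever.

```python
def to_bits(m, width=8):
    """m as `width` two's-complement bits, least significant first, plus the repeating bit."""
    repeating = 0 if m >= 0 else 1
    return [(m >> i) & 1 for i in range(width)], repeating

def from_bits(bits, repeating):
    """Finite bits contribute +2^i; a repeating tail of 1s contributes -2^len(bits)."""
    return sum(b << i for i, b in enumerate(bits)) - (repeating << len(bits))

# round-trips over the whole 8-bit two's-complement range
assert all(from_bits(*to_bits(m)) == m for m in range(-128, 128))
```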
-
There is a paper here http://www.denison.edu/academics/departments/mathcs/fressola_paper.pdf that seems to do what you want to achieve.
-
It definitely answers the question that I asked. I guess I really wanted to ask for the result to have a more natural ordering. More like balanced ternary. – Jacques Carette Nov 6 2011 at 13:43
You could try base -2 representations, also called negabinary strings. These are finite strings drawn from the alphabet `$\{ 0, 1\}$`, starting with 1 (except when zero or empty, depending on your choice of convention), where we weight places by powers of $-2$. You have unique representations, and reasonably straightforward arithmetic operations.
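For concreteness, here is a minimal sketch of encoding and decoding base $-2$ strings (an illustration only; the helper names are made up):

```python
def to_negabinary(m):
    if m == 0:
        return "0"
    digits = []
    while m != 0:
        m, r = divmod(m, -2)
        if r < 0:              # force the digit into {0, 1}
            r += 2
            m += 1
        digits.append(str(r))
    return "".join(reversed(digits))

def from_negabinary(s):
    return sum(int(d) * (-2) ** i for i, d in enumerate(reversed(s)))

assert all(from_negabinary(to_negabinary(m)) == m for m in range(-100, 101))
```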
-
Is it not enough to modify Peano's construction? An idea (which is different from the one linked by Iii) might be the following: Peano's construction makes use of a function $succ(n)$ which verifies the classical properties:
1. There is no $n$ such that $0=succ(n)$
2. $succ(n)=succ(m)$ implies $n=m$
3. If $0\in A$ and $succ(n)\in A$ for all $n\in A$, then $A=\mathbb N$
Maybe it is possible to characterize $\mathbb Z$ by making use of two (different) functions, $prec(\cdot)$ and $succ(\cdot)$, related by $prec(succ(n))=succ(prec(n))=n$. Of course, now the first property cannot be true; the second property above has to be required for both $prec$ and $succ$; and, finally, the third property has to be replaced with the following
Induction on $\mathbb Z$: If $A\subseteq \mathbb Z$ contains at least one element and, moreover, for any $a\in A$ one has $prec(a),succ(a)\in A$, then $A=\mathbb Z$.
Should work.
-
2
This seems to be the same as what Kaveh suggested in a comment on the question. The OP objected there that the axiom prec(succ(n)) = succ(prec(n)) = n induces a quotient. That is, the integers will not be simply the terms built from succ, pred, and 0 but rather equivalence classes of such terms. – Andreas Blass Nov 6 2011 at 23:38
Andreas is correct: I was looking for a quotient-free construction. And axioms 1-3 above are really another way of stating Goguen's "no junk, no confusion" axioms. – Jacques Carette Nov 7 2011 at 0:40
OK, sorry, I missed that comment. – Valerio Capraro Nov 7 2011 at 7:08
I would say: the free group on one element. I guess you can translate this into a series of first-order axioms. Notice that multiplication comes for free as composition of endomorphisms of the group.
Addendum: Prompted by the comment below, I am not thinking about the usual description of the free group through a chain of $1$'s and $-1$'s but about the universal property.
Let me give some specifics. A group is a tuple $(G,m,e,i)$ with $G$ a set, $m \colon G \times G \to G$ a map, $e \in G$ and $i \colon G \to G$ satisfying certain commutativities that amount to the defining properties of a group (associativity, $e$ is the neutral element and $i(g)$ is the inverse of the element $g \in G$). A free group on one element is such a tuple $(F, \cdot , 1, op)$ satisfying that for any choice of a $g \in G$ from a group $(G,m,e,i)$ there is one and only one homomorphism $(F, \cdot , 1, op) \to (G,m,e,i)$ taking $1$ to $g$. My suggestion was to translate this description into a series of first-order formulas.
Addendum 2: I have just realized that this way the description is second-order.
-
But this still has quotients (eg $xy^{-1}y=x$) which is what the OP was trying to avoid. – Richard Rast Nov 7 2011 at 13:45
@Richard Rast. Thanks for your comment. I didn't notice the first comment in the question either. However I wonder if there is a description of the free group that does not go through a quotient, something using the universal property. – Leo Alonso Nov 7 2011 at 15:57
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 59, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9436745047569275, "perplexity_flag": "head"}
|
http://mathhelpforum.com/calculus/165624-convert-polar-integrate-question-hints-appreciated.html
|
# Thread:
1. ## convert to polar and integrate question. (Hints appreciated)
I was able to solve the integral below however when I draw the region and attempt to switch to polar I can't figure out what to set the upper and lower limits of r at.
Original:
integrate 1 dydx where x = 0 to sqrt(6) and y = -x to x
The solution is 6.
However, I must convert to polar for this exercise.
After I drew the region. (A large triangle that can be viewed as two separate triangles. Both with legs of length sqrt(6) and hypotenuse of sqrt(12). These triangles would meet at the x-axis and be in quadrants I and IV.) I was able to determine that theta went from -pi/4 to pi/4. However I cannot figure out what to do for the limits of integration with r.
So the new integral needs to be r drdt (let t be theta) r = ? to ? and t = -pi/4 to pi/4. Any hints on where to go from here?
2. Originally Posted by EuptothePiI1
I was able to solve the integral below however when I draw the region and attempt to switch to polar I can't figure out what to set the upper and lower limits of r at.
Original:
integrate 1 dydx where x = 0 to sqrt(6) and y = -x to x
The solution is 6.
However, I must convert to polar for this exercise.
After I drew the region. (A large triangle that can be viewed as two separate triangles. Both with legs of length sqrt(6) and hypotenuse of sqrt(12). These triangles would meet at the x-axis and be in quadrants I and IV.) I was able to determine that theta went from -pi/4 to pi/4. However I cannot figure out what to do for the limits of integration with r.
So the new integral needs to be r drdt (let t be theta) r = ? to ? and t = -pi/4 to pi/4. Any hints on where to go from here?
The vertical line on the right has the equation $x=\sqrt{6}$, but also remember that $x=r\cos(\theta)$; combining these two we get
$\displaystyle r\cos(\theta)=\sqrt{6} \iff r=\frac{\sqrt{6}}{\cos(\theta)}$
3. Howdy. I tried to implement that into my solution but I must have messed up in another section of my process. I attempted to solve the integral using dr dt where r is from 0 to sqrt(6)/cos(t) and t is from -pi/4 to pi/4. This leaves us with 2sec^3(t) from -pi/4 to pi/4 at the end which gives us zero. I feel as if I missed something of great importance. Can you see where I went wrong?
4. Originally Posted by EuptothePiI1
Howdy. I tried to implement that into my solution but I must have messed up in another section of my process. I attempted to solve the integral using dr dt where r is from 0 to sqrt(6)/cos(t) and t is from -pi/4 to pi/4. This leaves us with 2sec^3(t) from -pi/4 to pi/4 at the end which gives us zero. I feel as if I missed something of great importance. Can you see where I went wrong?
$\displaystyle \int_{-\frac{\pi}{4}}^{\frac{\pi}{4}}\int_{0}^{\frac{\sqrt{6}}{\cos(\theta)}} r\,dr\,d\theta = \int_{-\frac{\pi}{4}}^{\frac{\pi}{4}} \frac{1}{2}\left( \frac{\sqrt{6}}{\cos(\theta)}\right)^2 d\theta = 3\int_{-\frac{\pi}{4}}^{\frac{\pi}{4}}\sec^{2}(\theta)\,d\theta = 3\tan(\theta)\bigg|_{-\frac{\pi}{4}}^{\frac{\pi}{4}} = 6$
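If it helps, here is a quick numerical cross-check (a minimal sketch using SciPy, with the bounds taken from the setup above); both forms should come out as approximately 6:

```python
import numpy as np
from scipy import integrate

# Cartesian: x from 0 to sqrt(6), y from -x to x, integrand 1
cart, _ = integrate.dblquad(lambda y, x: 1.0,
                            0.0, np.sqrt(6),
                            lambda x: -x, lambda x: x)

# Polar: theta from -pi/4 to pi/4, r from 0 to sqrt(6)/cos(theta), integrand r
polar, _ = integrate.dblquad(lambda r, t: r,
                             -np.pi / 4, np.pi / 4,
                             lambda t: 0.0,
                             lambda t: np.sqrt(6) / np.cos(t))

print(cart, polar)   # both approximately 6.0
```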
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 4, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9655073285102844, "perplexity_flag": "head"}
|
http://mathoverflow.net/questions/109989?sort=oldest
|
## Concentration results for inner products of two independent random gaussian vectors
Hi,
I wanted to know if there are standard results on concentration of absolute value of inner products of two random vectors. Thus if $X, Y \in R^m$ are two independent random vectors with each entry distributed as $\mathcal{N}(0, 1/m)$, then how can we bound the following probability expression: $P ( | X^T Y | > \epsilon )$ ? Here, $\epsilon > 0$ is a given constant that is small.
-
## 4 Answers
Since you're trying to bound the sum of zero-mean i.i.d. RVs, I would recommend you try to develop a Chernoff bound: $$\Pr(X^TY>\epsilon)\leq \inf_{s\geq 0}(e^{-s\epsilon }(Ee^{sZ})^m)$$ where $Z=X_1Y_1$ is distributed according to a Normal Product distribution. I haven't carried out the calculation in full but I believe the moment generating function $Ee^{sZ}$ can be computed in closed form using the expression (6) for $K_0$ found here.
As to tightness of the bound, notice that $$\Pr(X^TY>\epsilon)=\Pr(\sum_{i=1}^m\hat{Z}_i>m\epsilon)$$ where the $\hat{Z}_i$ are i.i.d. and each one is the product of two independent standard ($\mathcal{N}(0,1)$) Gaussian RVs. It is a standard Large Deviations result that such probability goes to zero exponentially fast as $m\to\infty$ for every constant $\epsilon>0$. I am 99% sure that the Chernoff bound always yields the correct exponential rate (but not the correct coefficient of the leading exponent).
-
1
How good are those bounds when $\epsilon$ is small? – Douglas Zare Oct 18 at 10:10
Good question - see my edit above (since the question was about concentration results it is reasonable to assume $m\to\infty$) – Yair Carmon Oct 18 at 15:23
If $m=2$ then this is a Laplace distribution. Equivalently, the distribution of the determinant of a $2\times2$ matrix with IID centered normal entries is a Laplace distribution. See whuber's comment.
A Laplace distribution is also the difference of two IID exponentials. So, if $m$ is even, then the inner product can be written as a sum of $m/2$ IID Laplace distributions, or the difference of two IID gamma distributions. See "tight bounds on probability of sum of laplace random variables" for the density function as a single sum.
-
Thanks for the answers.
@Yair Carmon: Will work on computing the characteristic function. I am quite sure Chernoff bound should be tight enough for my purpose. Thnx !!
-
An alternative method is to exploit the rotational invariance of the Gaussian. You can write $$X^T Y = |X| \left( \left(\frac{X}{|X|}\right)^T Y \right).$$ Because $Y$ is rotationally invariant, the inner product is now independent of $X$, and in fact just has distribution $N(0,1/m)$. Now let $C>1$ be an arbitrary parameter. We can bound the probability $X^T Y > \epsilon$ by the probability one of the following two events occur.
1. $\left(\frac{X}{|X|}\right)^T Y \geq \frac{\epsilon}{C}$. Since this inner product has distribution $N(0,1/m)$, and assuming $\epsilon \sqrt{m}/C$ tends to infinity, this occurs with probability $1-\Phi\left(\frac{\epsilon \sqrt{m}}{C}\right)=(1+o(1))\,\frac{C}{\epsilon\sqrt{2 \pi m}} \exp\left(-\frac{\epsilon^2 m}{2C^2}\right)$, where $\Phi$ is the standard normal CDF.
2. $|X| \geq C$. The norm of a Gaussian vector is well studied, and it is standard (see, for example, Chapter 2 of these notes) that $|X|$ is tightly concentrated around its expectation. For example, applying Corollary 2.3 of the linked notes gives that the probability this occurs is at most $\exp(-\frac{1}{4} (1-\frac{1}{C^2})^2 m)$.
For $\epsilon$ bounded away from $0$ you can choose $C$ to optimize the sum of the two terms getting a bound that is exponential in $m$ but with a non-optimal exponent. If $\epsilon$ is tending to $0$ with $m$, then the first term is dominant. That term remains small so long as $\epsilon$ is much larger than $\sqrt{\frac{\log m}{m}}$.
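As a sanity check, here is a minimal Monte Carlo sketch (the value of $\epsilon$ and the sample sizes are arbitrary choices) estimating $P(|X^TY|>\epsilon)$ for entries distributed $\mathcal{N}(0,1/m)$; the estimate should drop rapidly as $m$ grows, consistent with the exponential bounds above.

```python
import numpy as np

rng = np.random.default_rng(0)
eps, trials = 0.3, 50_000

for m in (10, 20, 40, 80):
    X = rng.normal(0.0, 1.0 / np.sqrt(m), size=(trials, m))
    Y = rng.normal(0.0, 1.0 / np.sqrt(m), size=(trials, m))
    inner = np.einsum('ij,ij->i', X, Y)          # row-wise inner products
    print(m, np.mean(np.abs(inner) > eps))       # empirical tail probability
```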
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 37, "mathjax_display_tex": 3, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9376266002655029, "perplexity_flag": "head"}
|
http://nrich.maths.org/5951/note
|
# One of Thirty-six
Can you find the chosen number from this square using the clues below?
1. The number is odd.
2. It is a multiple of three.
3. It is smaller than $7\times 4$.
4. Its tens digit is even.
5. It is the greater of the two possibilities.
You might like to print off this sheet of the problem.
### Why do this problem?
One of Thirty Six encourages pupils to apply their knowledge of number properties in a logical way. The challenge in this problem is to decide in which order the information in the clues is useful.
### Possible approach
A nice way to start a lesson which focused on this problem would be to play a version of the "What's my rule?" game. Decide on a number property, for example odd numbers, and draw a large circle on the board. Explain to the children that they have to work out your rule by suggesting just, for example, ten numbers. If the number they suggest fits your rule, you write it in the circle. If not, write it outside the circle. After ten suggestions, can they work out the rule? (You might have to alter the number of suggestions they're allowed!) Play this game a few times, perhaps asking individuals to come to the board and be the one to choose the rule.
You could then look at this problem all together and ask pairs of children to try and solve it. Emphasise that after a specified length of time you will be asking them HOW they went about solving it rather than just wanting to know the answer. It would be useful for each pair to have a copy of the problem - you could print off this sheet. It might be worth stopping them after just a few minutes to share good ways of keeping track of what they're doing. Some children might suggest crossing out or circling numbers on the grid, for example.
Once the children have worked on this problem (and possibly the extension too), the key point to bring out in a whole class discussion is the idea that the first clue was not necessarily the most useful to start with. Invite pairs to describe how they found the solution, emphasising where choices were made as to which clue they used next.
### Key questions
How did you go about finding the solution?
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9617635607719421, "perplexity_flag": "middle"}
|
http://physics.stackexchange.com/questions/56014/parallel-circuits-overall-resistance-decreases-with-additional-resistor/56015
|
# Parallel circuits - Overall resistance decreases with additional resistor
Let's say that there is a parallel circuit with two identical resistors in parallel with each other. If a third resistor, identical to the other two, is added in parallel with the first two, the overall resistance decreases.
Why does this overall resistance decrease?
-
– Qmechanic♦ Mar 10 at 18:27
1
This question is straight out of chapter 19 of the third edition of Matter & Interactions by Chabay and Sherwood. It is associated with figure 19.48. Since it's a direct plea for a homework question answer, I think this question should be closed. There is at least one other one for which I similarly recommend the same. – user11266 Mar 10 at 22:51
– Chris Mar 11 at 0:27
Please take a look at other questions that have been closed for this very reason. – user11266 Mar 11 at 1:06
## 5 Answers
For resistors $R_1, R_2, \dots, R_N$ in parallel, the equivalent resistance $R_e$ is given by $$\frac{1}{R_e} = \frac{1}{R_1} + \frac{1}{R_2} + \cdots + \frac{1}{R_N}$$ If two resistors with equal resistance $R_1 = R_2 = R$ are in parallel, then this gives $$R_e^{(2)} = \frac{R_1R_2}{R_1+R_2} = \frac{R^2}{2R} = \frac{R}{2}$$ If three identical resistors $R_1 = R_2 = R_3 = R$ are in parallel, then the equivalent resistance is $$R_e^{(3)} = \frac{R_1R_2R_3}{R_1R_2 + R_2R_3 + R_3R_1} = \frac{R^3}{3R^2} = \frac{R}{3}$$ In fact, for $N$ identical resistors one has $$\frac{1}{R_e^{(N)}} = \frac{N}{R}$$ so that $$R_e^{(N)} = \frac{R}{N}$$ and therefore the resistance decreases with the addition of each successive resistor in parallel.
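As a quick illustration (a minimal sketch; the resistor values are arbitrary), computing the equivalent resistance directly shows that every added branch lowers it, even a branch with a very large resistance:

```python
def parallel(*resistances):
    """Equivalent resistance of resistors connected in parallel."""
    return 1.0 / sum(1.0 / r for r in resistances)

R = 100.0
print(parallel(R, R))           # 50.0
print(parallel(R, R, R))        # 33.33...
print(parallel(R, R, R, 1e6))   # still slightly lower: any extra branch helps
```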
-
This answer doesn't answer the question that was asked; it merely restates the question in mathematical terms and provided no mechanism. Besides, it's a homework question from Matter & Interactions. – user11266 Mar 10 at 22:00
3
@JoeH The question was "Why does this overall resistance decrease?" which, in my opinion, is sufficiently vague that it affords the following interpretation, which is the one I took: "Given that we know how to mathematically determine the resistance of parallel resistors, how do we show that adding a resistor to a set of resistors in parallel will decrease its resistance?" I respect that you may have interpreted the question differently, and I'm glad you added a distinct response. Given that I received the check, perhaps my interpretation was closer to the intent of the author? – joshphysics Mar 10 at 22:27
I didn't think the question was vague at all. The OP is asking questions straight out of the M&I textbook and there is a unique story line which the authors follow. If the instructor is implementing M&I correctly, then any answer that includes resistance won't be acceptable. Look for other questions by this same poster. This particular question is from chapter 19 and is associated with figure 19.48. – user11266 Mar 10 at 22:50
2
@JoeH I think you make fair points. I did, in fact, restate the question in mathematical language, and I also did answer the question having restated it in that form. I feel that both my response, and yours, are useful. Cheers^2! – joshphysics Mar 10 at 23:08
1
YAY! We agree on something! :) – user11266 Mar 10 at 23:10
Why does this overall resistance decrease?
A more elegant, sophisticated way to see why is through the notion of duality.
In electric circuit theory, conductance (the reciprocal of resistance) is dual to resistance. Other dual pairs are:
voltage - current
series - parallel
inductance - capacitance
Thevenin - Norton
and so on ...
For example, consider Ohm's Law: $v = iR$. The dual is: $i = vG$
You probably intuitively understand that adding a resistor in series increases the total resistance.
The dual of this is adding a conductance in parallel increases the total conductance.
But, if the conductance increases, the reciprocal, i.e., the resistance, decreases.
Mathematically:
Conductances in parallel add:
$G_{total}=G_1 + G_2 + G_3 = \dfrac{1}{R_1} + \dfrac{1}{R_2} + \dfrac{1}{R_3} = \dfrac{1}{R_{total}}$
or
$R_{total} = \dfrac{1}{\dfrac{1}{R_1} + \dfrac{1}{R_2} + \dfrac{1}{R_3}}$
Now, it's clear that adding another resistor in parallel, increases the denominator thus decreasing the total resistance.
-
The reason adding resistors in parallel decreases overall resistance is that adding resistors in parallel increases the effective cross sectional area of the circuit. Current is proportional to cross sectional area, so the overall current drawn from the battery (assuming a DC circuit) increases. However, adding a resistor in parallel doesn't change the potential difference across the new resistor or the other existing parallel resistors (that follows from the very definition of parallel). If the circuit obeys the macroscopic Ohm's law ($\Delta V = IR$), then resistance is the quotient of potential difference and current. Potential difference doesn't change when adding a new parallel resistor, but current increases so the quotient decreases. The accepted answer is mathematically correct, but doesn't actually address your question.
-
Suppose we have a simple circuit with only one resistor. Assume the voltage across the circuit is 10V, and the current is, say, 2 Amps. Now assume we add an additional component to the circuit in parallel as in the following diagram:
The voltage across each component is not reduced by this action, and now we have a 10V pull over both components in the circuit.
If we consider the first component, we can use the formula $V = I_1R_1$ to find the current to be $I_1 = \frac{V}{R_1}$ (this will equal 2 Amps, just as in the original circuit). Similarly, we can find the current through the second component using the formula $V=I_2R_2$ and therefore $I_2 = \frac{V}{R_2}$. The voltage here is the same as before, because adding additional resistors in parallel does not reduce the pulling power of the battery.
The total current in the circuit is then obtained by adding the current flowing in each of the components, to give $I_T = I_1 + I_2 = \frac{V}{R_1} + \frac{V}{R_2} = V(\frac{1}{R_1} + \frac{1}{R_2})$.
Now we can think of the two resistors as one big resistor, of resistance $R_T$, and use the formula $V = I_TR_T$.
Therefore, $R_T = \frac{V}{V(\frac{1}{R_1} + \frac{1}{R_2})} = \frac{1}{(\frac{1}{R_1} + \frac{1}{R_2})}$, which is the formula used in one of the above answers.
We now can see intuitively why the resistance has decreased. This is because adding the second resistor allowed for an additional current $I_2$, which combines with the original current, $I_1$, to form a larger current than before. When current increases over a constant voltage, we can say that the total resistance of the circuit has decreased.
-
I almost didn't answer, but maybe it can help someone better understand the accepted answer.
Some people will cringe at the thought of this, but I often find myself comparing electricity to plumbing. It works reasonably well in your question and helps me visualize joshphysics' correct answer.
It is the same as a large tub filled with holes. Adding more drain pipes will cause the tub to drain faster. In this case, the drain pipes are more resistors. Adding a tiny drain hole (representing an additional resistor of large value) will undeniably result in more current flow.
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 20, "mathjax_display_tex": 5, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9492347836494446, "perplexity_flag": "middle"}
|
http://physics.stackexchange.com/questions/45785/why-isnt-temperature-measured-in-units-of-energy/45787
|
# Why isn't temperature measured in units of energy?
Temperature is the average of the kinetic energies of all molecules of a body. Then, why do we consider it a different fundamental physical quantity altogether [K], and not an alternate form of energy, with a dimensional formula derived from three initial fundamental quantities, length [L], mass [M] and time [T]?
-
– Steve B Dec 3 '12 at 16:10
– Nathaniel May 5 at 2:23
2
Oh, sorry, I just realised this question is older than that one. – Nathaniel May 5 at 2:24
## 5 Answers
Temperature is nothing other than energy per degree of freedom. It is purely for historical reasons that energy per degree of freedom is measured in Kelvin, and not in, say, micro-eV. It is just that these systems of units got fixed and became widely used before the statistical meaning of temperature became clear.
For the same reason, mass is measured in kg and not in, say, Tera-eV.
If you would correct all of this, and apply more rational choices of units, you would end up with a natural system of units. This is what many physicists do in their professional lives. In such a system constants like the speed of light and Boltzmann's constant end up as being defined equal to unity. This makes it clear that these are not constants of nature, but man-made artifacts caused by the use of clumsy systems of units. In that respect Boltzmann's constant k is no different than the constant measuring the number of cubic inches in a gallon.
-
Particle physicists indeed do like to express temperature in the units of energy, usually in their most favorite unit of energy, a gigaelectronvolt.
Kelvins are used for historical reasons as well as for the sake of having reasonable numbers in everyday conditions. Before the temperature-energy relationship $$E \sim kT$$ was realized in the late 19th century ($k=1.38\times 10^{-23} {\rm J/K}$), people used various temperature scales such as the Celsius scale (they still do). The Celsius scale divides the interval between the freezing and boiling points of water into 100 parts (percent of the length of the interval). The linearity is given by the volume of an ideal gas at fixed pressure, $V\sim T$.
Later, it was appreciated that the freezing point isn't a terribly fundamental or special temperature worth the label "zero", so a shifted Celsius scale, the Kelvin scale, was defined. It's still convenient to use the Celsius degrees because it's sensible to talk about temperatures near 0 and not 300 kelvins.
Today, we know it's sensible to express the temperature via the equivalent energy per degree of freedom (times two), $E=kT$, but it's still reasonable to use kelvins and Celsius degrees because the room temperature is of order $10^{-20}$ joules.
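For concreteness, a minimal sketch of the conversion (the constants are the exact SI values; the helper name is made up):

```python
k_B = 1.380649e-23      # Boltzmann constant, J/K
eV = 1.602176634e-19    # electronvolt, J

def kT_in_eV(T_kelvin):
    """Thermal energy kT expressed in electronvolts."""
    return k_B * T_kelvin / eV

print(kT_in_eV(300))    # ~0.026 eV at room temperature
print(eV / k_B)         # ~11600 K: the temperature whose kT is 1 eV
```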
-
From the phenomenological point of view of thermodynamics, the unit of temperature is somewhat arbitrary and has been fixed by historical 'accident' - as long as $$Q = T\cdot\Delta S$$ ends up being an energy, everything works out.
From the point of view of statistical mechanics, it makes sense to make the entropy $S$ unitless, same as the probabilities in terms of which it is defined.
This corresponds to setting Boltzmann's constant $k_B=1$ and makes temperature an energy. This as well as further unifications (like setting speed of light $c=1$ to unify the dimensions of space and time as well as mass and energy) are features of 'natural' systems of units.
-
You actually explained why already by yourself and well done for noticing the problem. Dimensional analysis does show that temperature and energy are not the same.
Just to convince ourselves of this, let's look at the ideal gas equation for a single mole (i.e. number of moles n=1 below) of ideal gas:
$$pV=nRT=RT$$
With $nR=8.31\ \mathrm{m^2\,kg\,s^{-2}\,K^{-1}}$, which looks like the units we use for energy divided by those for temperature (i.e. the units are $\mathrm{m^2\,kg\,s^{-2}\,K^{-1}}=\mathrm{J\,K^{-1}}$).
$$\frac{pV}{R}=T$$
What happens to the units now? The units in this expression do not amount to units of energy. This means $\text{Energy}\neq\text{Temperature}$ and is what we expected.
As others have said in various ways, temperature is a function of state. The number of variables the function of state depends on tells us the number of degrees of freedom it has; these are the independent variables of the function of state, which has the general form
$$g(x_1,x_2,\dots,x_n)$$
where the subscript $n$ is a positive integer giving the number of degrees of freedom. The empirical temperature is a single-valued result, independent of time, obtained when we examine our function of state. Consider the function of the state variables volume, $V$, and pressure, $p$, represented as follows:
$$\Phi(p,V)=T_{empirical}$$
Proof:
Proof of the existence of temperature (the derivation is from C. J. Adkins, Equilibrium Thermodynamics, and is paraphrased a bit here to illustrate the concept), added for anyone who wants to convince themselves of what I just said.
The condition for thermal equilibrium is described by the zeroth law of thermodynamics, that is:
If two bodies A and B are each in equilibrium with a third, C, then they are also in equilibrium with each other.
Consider a simple example where we model a thermodynamic process involving fluid of fixed mass (i.e. independent of mass).
The condition for thermodynamic equilibrium of two arbitrary states, with body A at state 1 and body C at state 3, is some relation (you may have plotted a p-V indicator diagram before, so it may help to picture one now)
$$F_1(p_1,V_1,p_3,V_3)=0$$
Next, the condition for equilibrium of body B at state 2 and C at state 3:
$$F_2(p_2,V_2,p_3,V_3)=0$$
Solving both for $p_3$:
$$p_3=f_1(p_1,V_1,V_3)$$
$$p_3=f_2(p_2,V_2,V_3)$$
and equating the two:
$$p_3=f_1(p_1,V_1,V_3)=f_2(p_2,V_2,V_3)$$
Solve for $p_1$. The reasoning for this change of variables did actually confuse me for a little while, but it is again just a matter of being comfortable manipulating expressions.
$$p_1=g(V_1,p_2,V_2,V_3)$$
But the zeroth law tells us that A and B are then in equilibrium with each other, i.e.
$$F_3(p_1,V_1,p_2,V_2)=0$$
which can be solved for $p_1$ as
$$p_1=f_3(V_1,p_2,V_2)$$
This expression for $p_1$ does not involve $V_3$, so the $V_3$-dependence in
$$f_1(p_1,V_1,V_3)=f_2(p_2,V_2,V_3)$$
must cancel, and the relation can be written in the form
$$\Phi_1(p_1,V_1)=\Phi_2(p_2,V_2)$$
This gives, as the condition for thermal equilibrium, the existence of an empirical temperature that is a function of $p$ and $V$.
Hence the result expressed earlier.
$$\Phi(p,V)=T_{empirical}$$
This should convince you that temperature and energy are not the same thing!
-
Downvoter care to explain? – Magpie May 5 at 6:41
Concerning the first question, I think the first dimensional argument why temperature and energy are not the same is a good one, and indeed it could be that this is all what the OP wanted since the question is titled "Dimensional Analysis". – Dilaton May 8 at 11:05
the rest confuses me a little bit. For example, as Johannes said in his answer, temperature is the kinetic energy of a degree of freedom, so I think it should not depend on the number of degrees of freedom as I see it. In a system with different kinds of degrees of freedom in extreme nonequilibrium each degree of freedom can have its own temperature. In thermodynamics and statistical mechanics, the temperature is the intensive quantity (it does not depend on the size of the system) which corresponds to the energy of the system, so they are related. – Dilaton May 8 at 11:10
When deriving the equilibrium state by maximizing the entropy, the temperature is the Lagrange multiplier corresponding to the energy. – Dilaton May 8 at 11:10
1
I saw ;-) I do also see what you mean now about the degrees of freedom bit now. I think I misused the word depend there. I'll change it. – Magpie May 9 at 20:57
Kinetic temperature is the average of the kinetic energies. The thermodynamic concept of temperature $T$ is more general.
Temperature measures the partial ratio of energy to entropy changes $T\equiv (\partial E / \partial S)$. Temperature cannot be considered an equivalent of energy, because the concept of entropy is needed for the definition as well. Notice from the definition that energy is an extensive quantity whereas temperature is an intensive quantity. It is possible to have a composite system with a given energy, but without a thermodynamic temperature (e.g. a composite system made of two thermally isolated solids: one hot and the other cold)
Using the fundamental equation of thermodynamics we find that $T=T(E,V,N,\dots)$, which implies that the concept of temperature surpasses that of energy. For instance, temperature can change whereas energy remain constant.
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 15, "mathjax_display_tex": 18, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9428455233573914, "perplexity_flag": "head"}
|
http://thescamdog.wordpress.com/2011/11/03/graphing-inequalities/
|
# Zero-Knowledge Proofs
## Graphing Inequalities
November 3, 2011 by John Scammell
Last week, I mentioned to a group of teachers that I had never come up with a good way to teach kids where to shade when graphing an inequality. Vicky from one of our local high schools shared her method with me. It’s pretty nice.
Vicky gives her students an inequality like $2x-y \le 7$
She asks them to each find two coordinates that satisfy the inequality, and then plot them on a giant grid at the front of the room. When 30 kids come up and plot points, it will look something like this.
From this graph, it becomes pretty obvious that there is a line involved, and which side of the line we should shade. It also becomes obvious that one kid made a mistake.
We could extend this method to quadratic inequalities. If the students were given the inequality $y>x^2+3x-2$ , we could ask students to find ordered pairs that satisfy the inequality, and plot them on a grid at the front. It might look like this.
Students could then have conversations about which of the shading should include the boundary, and which should not, and how to deal with that.
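If you want to preview the activity without a class of thirty, here is a minimal sketch (the random points and window are arbitrary choices) that scatters points according to whether they satisfy $2x-y \le 7$ and draws the boundary line:

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
x = rng.uniform(-10, 10, 300)
y = rng.uniform(-10, 10, 300)
ok = 2 * x - y <= 7                      # which points satisfy the inequality

plt.scatter(x[ok], y[ok], s=10, label="satisfies 2x - y <= 7")
plt.scatter(x[~ok], y[~ok], s=10, marker="x", label="does not")
plt.plot([-10, 10], [2 * (-10) - 7, 2 * 10 - 7], "k--", label="boundary 2x - y = 7")
plt.legend()
plt.show()
```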
Posted in Math 20-1 | Tagged Math 20-1 | 3 Comments
### 3 Responses
1. Love this – I used something similar to this (because I went through so many GReader posts last night, I couldn’t remember it exactly). I had students choose a point in our grid (both x and y went from -6 to 6) and test it in the inequality to see if it worked. We plotted the ones that did and then I went through with them the process. The second and third times I did it with my students, I had them plot the points on their example paper before we actually graphed the inequality on the same grid. In my last class, right after I had students graph the line but before we shaded the half that worked, one of my students who normally struggled said “Oh, I get it!” even before we talked about how to determine which half to shade. That was really cool! Thanks for sharing!
2. Nice, simple method to help students understand what is usually just a procedure to them (1. Graph line (or parabola or …). 2. Dashed line if… solid line if…. 3. Test a coordinate point. 4. Shade appropriately.).
If we can get our students to understand the idea, I am confident they will have no trouble remembering the procedure, since they can just recreate it themselves.
3. [...] reading John Scammell’s recent post on linear inequalities, I realized that I had it backwards. I’d begin by graphing the line. I’d explain that a [...]
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 2, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9717864990234375, "perplexity_flag": "middle"}
|
http://mathoverflow.net/questions/57410?sort=newest
|
## Rate of convergence of smooth mollifiers
How does one figure out/prove the rate of convergence (in some norm) of mollifiers given a function bounded in some other norm (say Sobolev space, Besov space)? Also, is there a dimensional analysis heuristic which will predict what the rate will be?
For example, it is true that
`$$ ||u - u^{(\epsilon)}||_{L^3} \leq C \epsilon^\alpha ||u||_{B_3^{\alpha,\infty}} $$`
where the norm on the right hand side is a Besov space norm. (This fact is used in Constantin, E and Titi's paper on Onsager's Conjecture for Solutions to Euler's Equation).
-
## 1 Answer
You can do something with simple scaling as long as you work on the full space. Assume that $|\cdot|$ and $\|\cdot\|$ are two shift-invariant (semi)norms and for the scaling operator $T_a f(x)=f(x/a)$ you have $|T_af|=a^t|f|$, $\|T_a f\|=a^s\|f\|$. Then, if you do the $\delta$-mollifying on $f$, it is equivalent to the $a\delta$-mollifying on $T_a f$, which means that if any inequality of the form $|f-f_\delta|\le C\delta^r\|f\|$ exists at all, you must have $a^t=a^r a^s$, i.e., $r=t-s$. Now, to check that the inequality is there, you just need to check that $\|f\|$ dominates the deviation of $f$ from some faithfully reproduced function in the norm $|\cdot|$. Note that we need the homogeneous spaces for such scaling tricks, so the Sobolev norm will really mean the $L^p$ norm of the highest derivative involved, not the full sum of $L^p$-norms of the previous derivatives.
Example: $C$ and $C^k$ (with uniform bounds on the entire line). If we have any chance to get anything at all, the speed is $\delta^k$ by scaling. We have this chance realized if and only if our mollifier (which I assume to be compactly supported for simplicity) has first $k-1$ moments correct (i.e., reproduces the polynomials of degree up to $k-1$ precisely).
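As a numerical illustration of the scaling heuristic (a minimal sketch; the box kernel, test functions, and grid are arbitrary choices), one can estimate empirical convergence rates: for the Lipschitz function $|x|$ the sup-norm error scales like $\delta$, while for the smooth $\cos x$ a symmetric kernel gives $\delta^2$.

```python
import numpy as np

# Box mollifier of width 2*delta: f_delta(x) = (F(x+delta) - F(x-delta)) / (2*delta),
# where F is an exact antiderivative of f, so only the mollification error is measured.
def sup_error(f, F, delta, n_x=4001):
    x = np.linspace(-1.0, 1.0, n_x)
    f_delta = (F(x + delta) - F(x - delta)) / (2.0 * delta)
    return np.max(np.abs(f_delta - f(x)))

cases = [
    ("|x| (Lipschitz)", np.abs, lambda x: x * np.abs(x) / 2.0),
    ("cos x (smooth)  ", np.cos, np.sin),
]
for name, f, F in cases:
    errs = np.array([sup_error(f, F, d) for d in (0.1, 0.05, 0.025)])
    print(name, errs, "observed rates:", np.log2(errs[:-1] / errs[1:]).round(2))
```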
-
So I see how to get the exponent. I'm not sure what you meant about how to prove it. (Like what is a faithfully reproduced function..?) Are you saying that it suffices to bound $|f - f_1| \leq C ||f||$? – Phil Isett Mar 5 2011 at 14:12
"Faithfully reproduced" just means $f_\delta=f$. The usual spherical cap mollifiers reproduce constants and linear functions faithfully but have a bias on quadratic polynomials. That's why you cannot go beyond $C^2$ and $\delta^2$ with them. The way you stated it is fine too. – fedja Mar 5 2011 at 18:26
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 26, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9289411902427673, "perplexity_flag": "head"}
|
http://mathoverflow.net/questions/16534/different-ways-of-proving-that-two-sets-are-equal/22198
|
## Different ways of proving that two sets are equal
I'm not sure if this is a soft question, or should be community wiki.
I was explaining to a student how to prove that two sets were equal using what I called the 'oldest trick in the book': to show that $A = B$, prove $A \subseteq B$ and $B \subseteq A$. This got me thinking: what are the other ways of showing that two sets are equal. There's of course the bijection method (establish a 1-1 onto correspondence), but I couldn't think of others off the top of my head.
Are there many more general-ish techniques for proving two sets equal ?
-
6
Showing two sets are equal is a different thing than showing there is a bijection. Do you really mean the former or the latter? – Ryan Budney Feb 26 2010 at 18:11
ah fair enough. I really mean the former. although you can use a bijection to prove equality – Suresh Venkat Feb 26 2010 at 18:26
7
Community wiki? – Qiaochu Yuan Feb 26 2010 at 19:28
I was debating this. I looked over the meta.MO link on CW, and thought that Andrew Stacey's comment on direct vs indirect experience was relevant here, in that people are sharing direct experiences of their techniques, rather than just citing references. But I'm fine either way. – Suresh Venkat Feb 26 2010 at 20:19
The only way to use a bijection to show 2 sets are equal is to use the identity map between them.So I don't really see how this is different from using containment. – Andrew L Apr 22 2010 at 17:44
## 11 Answers
To show that a set $A$ is equal to $\mathbb{N}$ use the amazing method of mathematical induction.
-
5
And similarly, you can use transfinite induction to test equality of a subset of any well-ordered set. – Pete L. Clark Feb 26 2010 at 18:51
In a similar spirit to the dense+closed answer, there are some proofs where to show that a subset of a connected space is the whole space, one shows that it is non-empty, open and closed. An example of this is the proof that a connected open subset of $\mathbb{R}^n$ is path-connected (the set of points you can reach from $x$ is open and non-empty and its complement is open). Another is the proof of the identity theorem in complex analysis (the set where your two analytic functions have the same Taylor expansion locally and therefore agree locally is obviously open, and it is also closed because the set of points where the nth derivatives agree is obviously closed, and the set where they have the same Taylor expansion is therefore an intersection of closed sets).
-
1
In PDE, this is known as the continuity method. – Terry Tao Feb 26 2010 at 21:56
5
Another example of this is when $U$ is an open subset containing the identity element of a connected topological group, then $U$ generates the group. – Akhil Mathew Feb 26 2010 at 22:08
If $A$ and $B$ are both finite, it suffices to show:
(i) $A \subseteq B$ and (ii) `$\# A \geq \#B$`.
There are variations on this when the sets have more structure e.g.:
If $A$ and $B$ are finite-dimensional vector spaces over a field $K$, it suffices to show that one is contained in the other and that they have the same dimension.
For "naked" infinite sets, I am tempted to say that I know of no way to show equality other than directly from the definition: every element of $A$ is an element of $B$ and conversely. So I'm interested to hear what others have to say. [Edit: I feel that Qiaochu's answer successfully meets this challenge: it is another way to show equality, and it is both obvious and useful.]
-
1
I like the vector space example. that's the kind of answer I was looking for ! – Suresh Venkat Feb 26 2010 at 18:36
7
In fact, you only need #A >= #B above. – Douglas S. Stones Feb 26 2010 at 23:15
This is an extremely general answer, but often it's easiest to show that there exists some other set (or structured set) $C$ such that $A = C$ and $B = C$. For example, one way to show that two vector spaces are the same is to exhibit a common basis of both.
-
2
+1: OK, you win. I guess you could even show $A = C_1 = C_2 = \ldots = C_N = B$. Or the same with a well-ordered index set of $C_i$'s (with a maximal element) and a transfinite induction argument... – Pete L. Clark Feb 26 2010 at 21:17
5
I should mention a somewhat sophisticated example of this technique so it doesn't sound like too much of a tautology: one proves identities between modular forms by showing that they span the same subspace of the space of modular forms of some weight and level, and one way to do that is to verify that the two spans are both equal to the subspace of modular forms satisfying such-and-such properties at certain points. The same kind of technique is behind computer algebra methods to prove identities between holonomic generating functions. – Qiaochu Yuan Feb 26 2010 at 21:25
7
One runs this principle in reverse for combinatorics: Given a single $C$, describe it first as $A$, then as $B$, and count $A$ and $B$ separately to deduce (hopefully) a combinatorial identity. – L Spice Feb 26 2010 at 22:19
One may show that $A$ is dense in $B$ and $A$ closed. For instance, if $A \subset \mathbb{C}^n$ is an algebraic variety that contains $\mathbb{Z}^n$, it is all of $\mathbb{C}^n$ (of course, I am using the Zariski topology here).
One way of showing this density is to use the Hahn-Banach theorem (an example is the Muntz-Szasz theorem, cf. here, and there are other such applications such as the Stone-Weierstrass theorem that can be proved similarly). If every continuous linear functional vanishing on $B$ vanishes on $A$ and $A,B$ are linear subspaces of a Banach space, then $A$ is dense in $B$.
-
To expand on Pete's comment where we have additional structure, in any context where duality makes sense, to show that $A=B$, it suffices to show that they both have the same duals. Of course this does not answer the question, since we still need a way of showing that their duals are equal, but sometimes this may be easier. Examples of duals are when $A$ and $B$ are both subsets of a common universe $U$ and the dual is just the complement. Or, if $A$ and $B$ are both subspaces of $R^n$ and the dual is the orthogonal complement.
Edit: Although I'm not an expert, I'm guessing there are examples coming from topological structure as well. For example, to show that a subset $A$ of a topological space $X$ is equal to $X$, it suffices to show that $A$ is closed and contains a dense subset of $X$.
-
1
This isn't true if A and B are Banach spaces. You need a context where duality is an involution. – Qiaochu Yuan Feb 26 2010 at 19:27
Agreed. I should have made this explicit in my non-technical definition of duality. In my defense, both my cited examples are involutions. – Tony Huynh Feb 26 2010 at 20:44
I guess Akhil and I answered at the same time, so see his answer and ignore my edit. – Tony Huynh Feb 26 2010 at 21:12
And sometimes you show that vector subspaces $A,B$ of $V$ are the same by proving $A \subseteq B$ and $A^* \subseteq B^*$. – Noam D. Elkies Aug 9 2011 at 0:23
This won't work for bare sets, but for most things with additional structure (groups, modules, etc.), the first isomorphism theorem is often used to show that some set (with structure) is equal to a quotient of other sets.
For sets with G-action, the orbit-stabilizer theorem is used for the same purpose.
-
1
A more impressive example in my opinion: kth roots are unique in the class of finite L-structures. By a homorphism counting argument, Lovasz shows that if A^k and B^k are isomorphic as L-structures (on a finite set) then A is isomorphic to B as L-structures. Gerhard "Ask Me About System Design" Paseman, 2010.02.26 – Gerhard Paseman Feb 27 2010 at 0:17
In bijective combinatorics, there is another method. Say, both $A$ and $B$ are subsets of a large set $X$ which is split into a "positive" and a "negative" part: $X = X_+ \sqcup X_-$, such that $A,B \subset X_+$. Suppose $\alpha, \beta$ are two sign-reversing involutions on $X$, i.e. such that $\alpha(X_-) \subset X_+$ and $\beta(X_-) \subset X_+$. Suppose further that $A$ is the set of fixed points of $\alpha$ and $B$ is the set of fixed points of $\beta$. Then, obviously, $|A| = |X_+|-|X_-| = |B|$. Moreover, the action of the infinite dihedral group $D_\infty = \langle \alpha,\beta\rangle$ on $X$ gives a bijection between $A$ and $B$ (take $a \to b$ if they lie in the same orbit).
This idea is known as the "Garsia-Milne involution principle" and can be used to construct bijections proving various partition identities (see here). Other places where a version of this principle comes up are Zagier's famous proof of Fermat's theorem on sums of two squares, and Doyle & Conway's famous "Division by three" paper (read the "division by 2" section first to understand the connection).
-
To show that two sets are equal, show that both satisfy a condition $P$ for which it is known that there exists a unique set $X$ with $P(X)$. What I really have in mind here is to use the uniqueness part of universality. For example, if two arrows to a limit (in some category) have the same compositions with the limiting cones, then they are the same. We can think of these arrows as sets (e.g., in ZFC without urelements), and so we proved that two sets are the same. (In any case, we can restrict attention to $\mathbf{Set}$ where arrows (function) are sets.)
As another example, if a functor $V:A\to X$ is known to create $J$-limits, and two cones $\nu$ and $\tau$ in $A$ happen to satisfy $V\nu=V\tau=$ a limiting cone in $X$, then $\nu=\tau$.
-
In measure theory, one often wants to show some property $P$ is true of all the sets in some $\sigma$-algebra $\mathcal{B}$. So one sets $\mathcal{A}$ to be all those sets in $\mathcal{B}$ for which $P$ holds, and tries to show $\mathcal{A} = \mathcal{B}$. $\mathcal{A} \subset \mathcal{B}$ is obvious. The reverse is hard to show directly because the sets in $\mathcal{B}$ are usually hard to characterize explicitly. So one often uses something like the Dynkin $\pi$-$\lambda$ theorem. One finds a subset $\mathcal{C} \subset \mathcal{A}$ which is closed under intersection (a $\pi$-system) and with $\sigma(\mathcal{C}) = \mathcal{B}$. Then one shows that $\mathcal{A}$ is a $\lambda$-system (closed under set subtraction and increasing countable union), usually by showing that these operations preserve the property $P$. Using the theorem, one concludes that $\mathcal{B} \subset \mathcal{A}$.
This is really in the spirit of the "closed and dense" technique from topology / analysis.
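A standard concrete instance of this pattern (added here as an illustration; it is not part of the original answer): to prove that two probability measures $\mu$ and $\nu$ on $(\mathbb{R}, \mathcal{B}(\mathbb{R}))$ with the same distribution function are equal, let $P$ be the property $\mu(E) = \nu(E)$, take $\mathcal{C} = \{(-\infty, a] : a \in \mathbb{R}\}$, which is a $\pi$-system with $\sigma(\mathcal{C}) = \mathcal{B}(\mathbb{R})$, and check that $\mathcal{A} = \{E : \mu(E) = \nu(E)\}$ is a $\lambda$-system; that last step uses only countable additivity and the finiteness of the two measures.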
-
Uh,isn't this just a convoluted way of doing the standard containment proof? – Andrew L Apr 22 2010 at 22:26
Here's a generalization of Pete's answer. If $A$ and $B$ are subsets of some measure space with $A \subseteq B$ and $\mu(A) \ge \mu(B)$, then $A$ and $B$ are equal up to a set of measure 0, which is good enough for some purposes.
-
2
Wow! I didn't know $(-1,1)$ and $(-1,0)\cup(0,1)$ were equal! Thanks! – Gerald Edgar Apr 22 2010 at 15:48
4
You're welcome; it's a surprisingly useful fact. I'm not sure what I was thinking there, but I'll edit my answer into something that's actually true. – Mark Meckes Apr 22 2010 at 16:37
I remember now what I was thinking in my original answer: I meant for A to be closed, not open. – Mark Meckes Apr 22 2010 at 16:44
3
But I already used your result to prove the Riemann Hypothesis. Now I have to send in a correction for that paper... – Gerald Edgar Apr 22 2010 at 17:08
1
Are you sure you should do that? mathoverflow.net/questions/22071/… – Mark Meckes Apr 22 2010 at 17:13
show 3 more comments
http://mathoverflow.net/questions/45429/the-category-of-abelian-group-objects/45434
## The category of abelian group objects
Let $C$ be a category, say with finite products. What can be said about the category $Ab(C)$ of abelian group objects of $C$? Is it always an abelian category? If not, what assumptions on $C$ have to be made? What happens when $C$ is the category of smooth proper geometrically integral schemes over some locally noetherian scheme $S$?
For example if $C=Set$, we get of course the abelian category of abelian groups. If $C=Ring/R$ for some ring $R$, then we get the abelian category $Mod(R)$ (cf. nlab). In general, I have already trouble to show that $Hom(A,B) \times Hom(B,C) \to Hom(A,C)$ is linear in the left coordinate if $A,B,C$ are abelian group objects.
-
Wow, thanks for all these answers :) – Martin Brandenburg Nov 9 2010 at 18:26
## 4 Answers
If $\mathscr{C}$ is regular (resp. exact in the Barr sense) then for any algebraic theory $T$ the category $T$-$Alg(\mathscr{C})$ of internal $T$-algebras is regular (resp. exact). In particular, for $\mathscr{C}$ exact and $Ab$ = "commutative group theory" we have that $Ab(\mathscr{C})$ is exact; it is also additive, hence abelian (a category is abelian iff it is additive and exact):
Given $f, g: A\to B$ in $Ab(\mathscr{C})$, one gets $f+g$ in either of the following two ways:
1) applying the Yoneda evaluation $h^X,\ X\in \mathscr{C}$ to $f, g: A\to B$ (and using the Yoneda Lemma).
2) $f+g: A \xrightarrow{(f, g)} B\times B \xrightarrow{+} B$
-
No, it fails badly in general. A simple example might be where $C = Top$: topological abelian groups do not form an abelian category. For example, this category is not balanced.
In general one will need some exactness assumptions on $C$; I'll have to check into this carefully later (have to run now), but I think Barr-exactness of $C$ (regular, and equivalence relations are kernel pairs of their coequalizers) may be close to an ideal assumption.
-
The abelian group objects in any elementary topos form an abelian category, see P.T. Johnstone's "Topos Theory", Theorem 8.11 (page 259).
-
I'll point out that toposes are Barr-exact, so all three answers that have appeared so far are in confluence. – Todd Trimble Nov 9 2010 at 15:24
If $C$ is any additive category then every object has a unique structure as an abelian group object so $Ab(C)=C$; but typically $C$ is not abelian. For example, this applies to the category of free abelian groups. One can also think about triangulated categories, which are usually not abelian, although a nice theorem of Freyd gives a canonical embedding in an abelian category. One example that has been studied extensively is the category of spectra in the sense of stable homotopy theory. Similarly, one can consider abelian group objects in the homotopy category of spaces, otherwise known as commutative H-spaces.
The question also says:
In general, I have already trouble to show that $Hom(A,B)\times Hom(B,C)\to Hom(A,C)$ is linear in the left coordinate
Surely this is formal? I have drawn the diagram but sadly I cannot get MathJax to display it.
Update:
Just to be clear about notation, I'll write $\mathcal{C}(X,Y)$ for morphism sets in $\mathcal{C}$, and $Hom(A,B)$ for morphism sets in $Ab(\mathcal{C})$. An object $A\in Ab(\mathcal{C})$ has a natural abelian group structure on $\mathcal{C}(T,A)$ for all $T\in\mathcal{C}$. Naturality means that $q\circ p+r\circ p=(q+r)\circ p$ for all $p:S\to T$ and $q,r:T\to A$. Now let $B$ be another object of $Ab(\mathcal{C})$. A morphism in $Ab(\mathcal{C})$ from $A$ to $B$ is just a morphism $f:A\to B$ in $\mathcal{C}$ with the property that $f\circ(p+q)=f\circ p+f\circ q$ for all $T$ and all $p,q\in\mathcal{C}(T,A)$. Now suppose we have such morphisms $f,g:A\to B$ and $h,k:B\to C$. We then have
$$(f+g)\circ(p+q) = f\circ(p+q) + g\circ(p+q) = f\circ p + g\circ p + f\circ q + g\circ q = (f+g)\circ p + (f+g)\circ q$$
(using the naturality of addition, the homomorphism property of $f$ and $g$, and then naturality again). This shows that $f+g$ is again a homomorphism. A similar argument shows that $h\circ f$, $h\circ g$ and $h\circ(f+g)$ are homomorphisms. We have $h\circ(f+g)=h\circ f+h\circ g$ by the homomorphism property of $h$. We also have $(h+k)\circ f=h\circ f+k\circ f$ by the naturality of addition.
-
2
Neil, if you ever have trouble getting mathjax to display a diagram or something, just leave it as far as you can get it, and someone here will come along and fix it. – Harry Gindi Nov 9 2010 at 16:22
Concerning the edit: Ah of course, I used the hom sets of C instead of the sets of group hom. ... Thanks. – Martin Brandenburg Nov 9 2010 at 22:41
http://mathoverflow.net/revisions/39501/list
## Return to Answer
As has been noted in the comments, your definition of "free group on $S$" is not quite right. The map $g\colon S\to F_S$ is fixed, and is part of the "free group" (that is, the free group on $S$ is the pair $(F_S,g)$, with $g\colon S\to F_S$ a set-theoretic map). The universal property is that for every set map $f\colon S\to G$ into an arbitrary group, there exists a unique homomorphism $\varphi\colon F_S\to G$ such that $g = \varphi f$. But $f$ is not allowed to depend on $g$.
It is not hard to see that no such "cofree group" can exist on sets with more than one element. Suppose that you have a set $S$ with more than one element, and a "cofree group" on $S$, $C_S$, together with a set-theoretic map $f\colon C_S\to S$ such that for every group $G$ and every set-theoretic map $g\colon G\to S$, there exists a unique homomorphism $\varphi\colon G\to C_S$ such that $f = g\varphi$. Let $a\in S$ be different from $f(e)$; then the map $g\colon G\to S$ with $g(x)=a$ cannot factor through $C_S$.
As for the case $S=\{s_0\}$, uniqueness of $\varphi$ forces $C_S$ to be the trivial group, because both the zero map and the identity on $C_S$ would satisfy the universal property relative to $f$.
The free group construction is the left adjoint of the underlying set functor. In general, left adjoints respect colimits and right adjoints respect limits; that is why the underlying set of a product of groups is the set-theoretic product of the underlying sets (underlying set is the right adjoint, so it respects limits like the product), and why the free group on the disjoint union of two sets is the free product of the free groups on the two sets (disjoint union being the coproduct in $Sets$, free product the coproduct in $Groups$, and coproduct being a colimit). As James Borger notes, if you had a dual of the free group construction, it would be the right adjoint of the underlying set functor and would therefore have to respect colimits. So that the underlying set of a free product of groups would have to be the disjoint union of the underlying sets of the groups. This does not occur, so no such object can exist.
P.S. As has also been pointed out in the comments, the universal property is nice and all, and can prove uniqueness, but in general one needs either very high-power categorical/universal algebraic theorems to deduce existence, or one must actually construct the objects in some way. In the case of free groups, while there are many constructions (e.g., as a "big direct product"; see the reference I'm about to give), it is via words or other equivalent constructions (e.g., the fundamental group of a bouquet of circles) that one can get a better handle on them. But if you like universal constructions (nothing wrong with that!) I recommend taking a look at George Bergman's An Invitation to General Algebra and Universal Constructions. It has three different constructions of the free group in Chapter 2.
http://physics.stackexchange.com/questions/32576/particle-accelerators-and-heisenberg-uncertainty-principle/32658
# particle accelerators and Heisenberg uncertainty principle
In accelerators we shoot very high momentum particles at each other to probe their structure at very small length scales. Has that anything to do with the HUP that addresses the spread of momentum and space?
Related: when we accelerate a proton to exactly, say, 1 GeV, then we know its momentum exactly. But for high-momentum particles the de Broglie wavelength also shrinks, so the particle's position becomes more precise. But that would violate the HUP.
What goes on with high momentum particles and their momentum spread?
thanks
-
Good question. My curiosity with this is the role that interaction (via EM) with the accelerator machine itself plays in the spread of the wavefunction. It seems that within a certain limit, you couldn't reliably "shoot" particles from the accelerator at a target due to quantum limits. That would require releasing it into an empty field (I think) in the first place so one role that the babysitting of the particle's trajectory by EM fields plays would be to keep the particle's wave not much larger than the quantum limit. – AlanSE Jul 22 '12 at 18:30
## 4 Answers
The reason accelerators increase in energy is in order to be able to probe smaller distances, the smaller the wavelength the more details as with optics.
The de Broglie wavelength does not describe the location of the particle in space time, that is the function of the wave packet, as elucidated in this note of Hans de Vries: the de Broglie wavelength is a consequence of the HUP.
Those protons in the accelerator are wave packets and do not have a unique frequency; because of the HUP, it is not possible to have exactly 1 GeV energy.
-
Thanks for all the answers! After studying them and some more thinking on my own part, here is my answer to my questions. (Again, any responses highly welcome.)
1. Particles with higher momentum have a shorter de Broglie wavelength. The de Broglie wavelength has nothing to do with HUP. The spacing between the wavecrests is shorter for particles with higher momentum. You get a higher resolution, so to speak. But if you have a precise momentum, no matter how high, the probabilities to find the particle in space are represented by sine waves. They are only shorter/denser for higher momentum. (And lower, so that total probability is one.)
2. To get localized wavepackets you need a superposition of many sine waves, i.e. a great uncertainty in your momentum. But how great? According to the HUP, deviations of the order of h are enough. For particles with large momentum as in particle accelerators, for momenta that are many orders higher than h, already small deviations will localize the particle in space. So the reasoning in point 1. does not matter much. The high resolution waves only work when the momentum is very precise, precise to the order of h.
3. Due to relativity and the speed limit c, and the connection of space and momentum measurement through HUP, there must also be a limit to the localization and narrowing of particles. If momentum reaches relativistic limits, particle creation takes place.
-
The thing is,
$\Delta p \, \Delta x \ge \frac{\hbar}{2}$ (Check out the size of $h$ versus $1 \text{ GeV}$)
Quick answer: it is an order of magnitude problem
Long answer:
When you have a particle accelerator you don't accelerate your proton to exactly one GeV, because at one GeV $\hbar$ is insignificantly small and the change to the velocity is even more insignificant due to relativistic effects (at velocities near $c$, particles only get a little bit faster even for a lot of energy). Besides that, the proton (assuming a ring accelerator) is constantly losing energy due to it being a charged particle in a magnetic field. And a proton consists of a quark–gluon mixture, so defining where it is would be very interesting. Electrons, which currently have no lower bound on their size, are a better subject for this kind of question.
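To put rough numbers on the order-of-magnitude point (an editorial sketch, not part of the original answer; the 0.1% momentum spread is just an assumed illustrative figure):

```python
# Minimum position uncertainty for a proton whose 1 GeV/c momentum is known to 0.1%,
# from Delta_x >= hbar / (2 * Delta_p).
hbar = 1.054571817e-34                        # J*s
GeV_per_c = 1.602176634e-10 / 2.99792458e8    # 1 GeV/c expressed in kg*m/s
delta_p = 1e-3 * GeV_per_c                    # assumed 0.1% momentum spread
print(hbar / (2 * delta_p))                   # ~1e-13 m -- negligible on accelerator scales
```

Even a very modest fractional momentum spread already allows the wave packet to be localized far more tightly than any beam dimension, which is the sense in which $\hbar$ is "insignificantly small" here.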
-
Hi miceterminator. Welcome to Physics.SE. We have the MathJax rendering engine active on the site which allows you to write mathematics in a LaTeX alike form and this is the preferred method. I've done this one for you and taken the opportunity to make some minor changes. – dmckee♦ Jul 22 '12 at 18:11
Note that Bremstrahlung losses go down rapidly for more massive particles and thus are much lower in protons than in electrons, so the argument you make on that basis is not actually correct. – dmckee♦ Jul 22 '12 at 18:12
Yes, Bremsstrahlung is lower for protons, so there is of course a drawback in using electrons. The argument with Bremsstrahlung however was aimed at the exactness of a 1 GeV proton energy (which is actually pretty low, it is just a little bit more than the rest mass): because you are constantly losing energy you cannot know the exact proton energy. – miceterminator Jul 23 '12 at 6:41
Notice that the Heisenberg uncertainty relationship involves only the precision with which the two quantities are known, not their magnitude. In $$\Delta p \Delta x \ge \frac{\hbar}{2}$$ we see $\Delta p$ but not $p$.
That means that the magnitude of the momentum makes no difference at all as far as the uncertainty principle is concerned.
-
http://mathoverflow.net/questions/11904?sort=newest
## A polynomial map from ℝ^n to ℝ^n mapping the positive orthant onto ℝ^n?
Question: Is there a polynomial map from $\mathbb{R}^n$ to $\mathbb{R}^n$ under which the image of the positive orthant (the set of points with all coordinates positive) is all of $\mathbb{R}^n$?
Some observations:
My intuition is that the answer must be 'no'... but I confess my intuition for this sort of geometric problem is not very well-developed.
Of course it is relatively easy to show that the answer is 'no' when n=1. (In fact it seems like a nice homework problem for some calculus students.) But I can't seem to get any traction for n>1.
This feels like the sort of thing that should have an easy proof, but then I remember feeling that way the first time I saw the Jacobian conjecture... now I'm wary of statements about polynomial maps of $\mathbb{R}^n$!
-
1
See mathoverflow.net/questions/38868/… for some apparently minimal degree maps when n>1. – jc Sep 16 2010 at 2:24
## 1 Answer
The map $z\in\mathbb C\mapsto z^4\in\mathbb C$, when written out in coordinates, is a polynomial map which sends the closed first quadrant to the whole of $\mathbb R^2$---and by considering cartesian products you get the same for $\mathbb R^{2n}=\mathbb C^n$.
Later: as observed in a comment by Charles, this can be turned into a solution for the open quadrant by composing with a translation, as in $z\in\mathbb C\mapsto (z-z_0)^4\in\mathbb C$ with $z_0$ in the open first quadrant.
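As a quick numerical sanity check (an editorial sketch in Python, not from the original thread; the choice $z_0 = 1+i$ is just one arbitrary point of the open first quadrant):

```python
import numpy as np

# Push random points of the open first quadrant through z -> (z - z0)^4 and
# check that the images land in all four quadrants of the plane.
rng = np.random.default_rng(0)
z0 = 1 + 1j
x = rng.uniform(1e-3, 10, 100_000)
y = rng.uniform(1e-3, 10, 100_000)
w = (x + 1j * y - z0) ** 4

counts = [
    int(np.sum((w.real > 0) & (w.imag > 0))),
    int(np.sum((w.real < 0) & (w.imag > 0))),
    int(np.sum((w.real < 0) & (w.imag < 0))),
    int(np.sum((w.real > 0) & (w.imag < 0))),
]
print(counts)  # every quadrant receives a substantial number of points
```

This only illustrates (not proves) the covering: the translated quadrant contains the closed first quadrant, whose arguments sweep $[0,\pi/2]$ and hence $[0,2\pi]$ after raising to the fourth power.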
-
This yields maps for all even dimensions. Is there a simple answer to the odd-dimensional case? – jc Jan 15 2010 at 19:44
7
Yes, because you can use this trick for any pair of coordinates in $\mathbb{R}^n$, regardless of the parity of $n$, and then compose until positivity has been eliminated for all coordinates. – Greg Kuperberg Jan 15 2010 at 19:45
The question asks about the open orthant, not the closed one! – Alberto García-Raboso Jan 15 2010 at 19:48
6
Then take $(z-z_0)^4$ and it should be patched, when $z_0$ is in the open first orthant. – Charles Siegel Jan 15 2010 at 19:50
1
Thanks very much indeed, this is a great answer! (Now I wonder what can be said about the form of such polynomial maps.) – Louis Deaett Jan 15 2010 at 20:38
show 1 more comment
http://www.physicsforums.com/showthread.php?t=522627
## Divergence in spherical polar coordinates
I took the divergence of the function $\frac{1}{r^2}\widehat{r}$ in the spherical coordinate system and immediately got the answer as zero, but when I do it in Cartesian coordinates I get the answer as $\frac{5}{r^3}$.
For $\widehat{r}$ I used $(x\hat{i}+y\hat{j}+z\hat{k})/(x^2+y^2+z^2)^{1/2}$.
What am I missing?
What you're missing is the handling of the singularity at r=0.
Recognitions: Homework Help Hi Idoubt! If I take the cartesian derivative with respect to x I get: $${\partial \over \partial x}{x \over (x^2+y^2+z^2)^{3/2}} = {-2 x^2+y^2+z^2 \over (x^2+y^2+z^2)^{5/2}}$$ Of course the derivatives wrt to y and z have to be added yet... How did you get 5/r3?
sorry I made a silly error in my calculation
$$\frac{\partial}{\partial x}\, x(x^2+y^2+z^2)^{-\frac{3}{2}} = (x^2+y^2+z^2)^{-\frac{3}{2}} - 3x^2(x^2+y^2+z^2)^{-\frac{5}{2}}$$
When I add the $y$ and $z$ parts it cancels and becomes zero, so the divergence of $\frac{1}{r^2}\widehat{r}$ is zero.
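The cancellation can also be checked symbolically (an editorial sketch using Python's sympy, not part of the original thread):

```python
import sympy as sp

x, y, z = sp.symbols('x y z', real=True)
r = sp.sqrt(x**2 + y**2 + z**2)
F = sp.Matrix([x, y, z]) / r**3            # the field r_hat / r^2 in Cartesian components
div_F = sum(sp.diff(F[i], v) for i, v in enumerate((x, y, z)))
print(sp.simplify(div_F))                  # 0   (valid away from the origin)
```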
The reason that confused me was that electric fields can have a non-zero divergence. Is that because of the singularity at $r=0$?
Recognitions:
Homework Help
Quote by Idoubt The reason that confused me was Electric fields can have a non zero divergence. Is that because of the singularity at r=0?
Not every electric field is generated by exactly 1 symmetrical charge.
And in practice there is no singularity, since the charge would have to be a point charge for that.
Isn't the field due to just one charge assumed to be a field created by a point charge? (I mean, it makes no difference if we make that assumption, right?) But, how can an electric field have a non-zero divergence then?
Recognitions: Homework Help Yes. It would only make a difference when you get very close to where the point charge is supposed to be. Take for instance 2 point charges and you'll have non-zero divergence.
But when we take two charges we no longer have the $\frac{1}{r^2}$ function (unless we approximate at large distances). It is my understanding that even the electric field of a point charge has non-zero divergence, is it not so?
Recognitions: Homework Help Right on both counts. :) My point about point charge is unrelated to divergence. Only that a real electric field has no singularity.
My question is, how can a field which is a scalar multiple of the function $\frac{1}{r^2}\hat{r}$ have a divergence when I just proved that the divergence of this function is zero?
Recognitions: Science Advisor The divergence is not 0, it's $4 \pi \delta^{(3)}(\vec{x})$. It's not too tricky to prove with help of Green's first integral theorem applied to the whole space with a little sphere around the origin taken out and then taking its radius to 0.
Quote by vanhees71 The divergence is not 0, it's $4 \pi \delta^{(3)}(\vec{x})$. It's not too tricky to prove with help of Green's first integral theorem applied to the whole space with a little sphere around the origin taken out and then taking its radius to 0.
This by definition is a singularity isn't it?
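A numerical illustration of the $4\pi\delta^{(3)}(\vec{x})$ statement (an editorial sketch, not from the thread): the flux of $\frac{1}{r^2}\hat{r}$ through a sphere of any radius is $4\pi$, so by the divergence theorem the divergence must be concentrated entirely at the origin.

```python
import numpy as np
from scipy.integrate import dblquad

def flux_through_sphere(R):
    # On a sphere of radius R, F . n_hat = 1/R^2 and dA = R^2 sin(theta) dtheta dphi,
    # so the R-dependence cancels and the integrand is just sin(theta).
    val, _ = dblquad(lambda theta, phi: np.sin(theta),
                     0, 2 * np.pi,                      # phi range (outer)
                     lambda phi: 0, lambda phi: np.pi)  # theta range (inner)
    return val

print(flux_through_sphere(1.0), flux_through_sphere(10.0), 4 * np.pi)  # all ~12.566
```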
Quote by Idoubt This by definition is a singularity isn't it?
In physics it's often modelled thusly AAMOI:
I know not relevant but...
Quote by Galron In physics it's often modelled thusly AAMOI: I know not relevant but...
greek to me :)
Quote by Idoubt greek to me :)
meh a and b are just constants and in this case F(X) is obviously the Function of X.
The Greek symbol that looks like an O with a hat means Tensor.
E(X) I would assume means the Energy of the system in question in terms of the vector.
Quote by Galron meh a and b are just constants and F(X) is obviously the Function of X. E(X) being the Energy of the system.
greek with a few constants and a function of x now =)
Quote by Idoubt greek with a few constants and a function of x now =)
If you've done matrices you'll know a tensor is expressed as a matrix of a matrix.
A vector in this case is simply a matrix with a scalar.
A vector space is a mathematical structure formed by a collection of vectors: objects that may be added together and multiplied ("scaled") by numbers, called scalars in this context. Scalars are often taken to be real numbers, but one may also consider vector spaces with scalar multiplication by complex numbers, rational numbers, or even more general fields instead. The operations of vector addition and scalar multiplication have to satisfy certain requirements, called axioms, listed below. An example of a vector space is that of Euclidean vectors which are often used to represent physical quantities such as forces: any two forces (of the same type) can be added to yield a third, and the multiplication of a force vector by a real factor is another force vector. In the same vein, but in more geometric parlance, vectors representing displacements in the plane or in three-dimensional space also form vector spaces. Vector spaces are the subject of linear algebra and are well understood from this point of view, since vector spaces are characterized by their dimension, which, roughly speaking, specifies the number of independent directions in the space. The theory is further enhanced by introducing on a vector space some additional structure, such as a norm or inner product. Such spaces arise naturally in mathematical analysis, mainly in the guise of infinite-dimensional function spaces whose vectors are functions. Analytical problems call for the ability to decide if a sequence of vectors converges to a given vector. This is accomplished by considering vector spaces with additional data, mostly spaces endowed with a suitable topology, thus allowing the consideration of proximity and continuity issues. These topological vector spaces, in particular Banach spaces and Hilbert spaces, have a richer theory. Historically, the first ideas leading to vector spaces can be traced back as far as 17th century's analytic geometry, matrices, systems of linear equations, and Euclidean vectors. The modern, more abstract treatment, first formulated by Giuseppe Peano in the late 19th century, encompasses more general objects than Euclidean space, but much of the theory can be seen as an extension of classical geometric ideas like lines, planes and their higher-dimensional analogs. Today, vector spaces are applied throughout mathematics, science and engineering. They are the appropriate linear-algebraic notion to deal with systems of linear equations; offer a framework for Fourier expansion, which is employed in image compression routines; or provide an environment that can be used for solution techniques for partial differential equations. Furthermore, vector spaces furnish an abstract, coordinate-free way of dealing with geometrical and physical objects such as tensors. This in turn allows the examination of local properties of manifolds by linearization techniques. Vector spaces may be generalized in several directions, leading to more advanced notions in geometry and abstract algebra.
http://en.wikipedia.org/wiki/Vector_space
So to steal more wiki imagery a tensor is:
http://mathhelpforum.com/discrete-math/28213-how-reduce-simple-logical-expression.html
# Thread:
1. ## How to reduce simple logical expression?
If I had the following logical expression, how would I simplify it? I need to be sure that negations appear only applied to predicates (that is, so that no negation is outside a quantifier or an expression involving logical connectives). In the expression below, ~ means "not," $\vee$ means "or," and $\wedge$ means "and."
$\sim \left[ \forall x \left( \exists y \forall z P(x,y,z) \wedge \exists z \forall y P(x,y,z) \right) \right]$
I'm fairly certain that the not can be distributed inwards to achieve:
$\left[ \sim \left( \forall x ( \exists y \forall z P(x,y,z)) \right) \right] \vee \left[ \sim \left( \exists z \forall y P(x,y,z) \right) \right]$
... but I am not sure how to proceed from there.
Thanks in advance!
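One mechanical way to see the rest of the process is to push the negation recursively through quantifiers and connectives until it sits on the atoms (negation normal form). A small illustrative sketch (an editorial addition, not from the thread; formulas are represented as nested tuples, and the predicate name `P` is the only assumed atom):

```python
# Formulas as nested tuples: ('forall', v, f), ('exists', v, f), ('and', f, g),
# ('or', f, g), ('not', f), or an atom such as ('P', 'x', 'y', 'z').
ATOMS = {'P'}

def push_not(f, negate=False):
    """Return an equivalent formula in which negation is applied only to atoms."""
    op = f[0]
    if op in ATOMS:
        return ('not', f) if negate else f
    if op == 'not':
        return push_not(f[1], not negate)
    if op in ('and', 'or'):
        new_op = ({'and': 'or', 'or': 'and'}[op] if negate else op)   # De Morgan
        return (new_op, push_not(f[1], negate), push_not(f[2], negate))
    if op in ('forall', 'exists'):
        new_op = ({'forall': 'exists', 'exists': 'forall'}[op] if negate else op)
        return (new_op, f[1], push_not(f[2], negate))
    raise ValueError(f'unknown connective: {op}')

# ~[ forall x ( exists y forall z P(x,y,z)  and  exists z forall y P(x,y,z) ) ]
phi = ('not', ('forall', 'x',
       ('and',
        ('exists', 'y', ('forall', 'z', ('P', 'x', 'y', 'z'))),
        ('exists', 'z', ('forall', 'y', ('P', 'x', 'y', 'z'))))))

print(push_not(phi))
# ('exists', 'x', ('or',
#    ('forall', 'y', ('exists', 'z', ('not', ('P', 'x', 'y', 'z')))),
#    ('forall', 'z', ('exists', 'y', ('not', ('P', 'x', 'y', 'z'))))))
```

The key rules it encodes are $\sim\forall x\,\varphi \equiv \exists x\,\sim\varphi$, $\sim\exists x\,\varphi \equiv \forall x\,\sim\varphi$, and De Morgan's laws for $\wedge$ and $\vee$.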
http://mathoverflow.net/questions/47278?sort=votes
Is there such a thing as the sigma-completion of a Boolean algebra?
Hi all,
Suppose that $\mathcal{B}$ is a Boolean algebra. Is there a way to extend $\mathcal{B}$ to a smallest Boolean algebra $\mathcal{B}'$ that contains an isomorphic copy of $\mathcal{B}$ and is countably complete, i.e. every countable subset of $\mathcal{B}'$ has a least upper bound in $\mathcal{B}'$? By "smallest" I mean that the inclusion $i: \mathcal{B} \hookrightarrow \mathcal{B}'$ has the obvious universal property, i.e. for every homomorphism $f$ from $\mathcal{B}$ to a countably complete Boolean algebra $\mathcal{C}$ there exists a unique homomorphism $g: \mathcal{B}' \to \mathcal{C}$ such that $g \circ i = f$ (it would be nice if $g$ turned out to commute with countable sups too). If no such $\mathcal{B}'$ exists, is there some other useful definition of "smallest" countably complete Boolean algebra containing $\mathcal{B}$?
If it makes any difference, I'm mostly interested in the special case where $\mathcal{B}$ is a direct limit of a sequence of finite Boolean algebras.
Edit: Thanks very much for the replies, it's a shame I can only mark one as the answer. It will take me a while to absorb the various references I've been given, so if I run into difficulty I'll bump the thread with an edit.
Edit 2: Bumping with followup question, please see my answer below.
-
5 Answers
The short answer is "yes", and it's a special case of a much, much more general theorem on relatively free algebraic constructions.
In other language, you are asking whether the underlying functor from countably complete Boolean algebras to Boolean algebras has a left adjoint. The more general question is whether, given a homomorphism $\phi: S \to T$ between two monads on $Set$, the evident underlying functor
$$Set^\phi: Set^T \to Set^S$$
from the category of $T$-algebras to the category of $S$-algebras has a left adjoint. For this I'll direct your attention to this nLab article.
Of course, we have to know that countably complete Boolean algebras can be described as algebras of a monad on $Set$, but this too follows from general theory. I'll refer you to another nLab article for this; the article is not complete but it should give the idea. The upshot is that for any algebraic theory with only a small set of operations of each arity, there is a corresponding monad on $Set$ whose algebras are the models of the theory. The general constructions go back to work in the sixties, due to Lawvere, Linton, and others.
Edit: I'll remark that had you said "complete" instead of "countably complete", then the answer would have been no. In fact, the underlying functor from complete Boolean algebras to sets has no left adjoint; this is mentioned for instance in Categories for the Working Mathematician. But in your case, the theory is generated by a set of operations and equations, and all is well.
-
... but the link I mention below gives a completion functor with a useful adjointness property, albeit not the most obvious one. – Neil Strickland Nov 24 2010 at 23:31
Apropos of what, Neil? Are you referring to the corollary at the bottom of the page? It is not a morphism of complete Boolean algebras, although it is I reckon an answer to one of OP's questions (where the extension need not preserve countable joins). – Todd Trimble Nov 24 2010 at 23:51
@Todd: yes, I am referring to the Corollary at the bottom of Johnstone's page 108 and the associated remarks on page 109, which cover essentially the same ground as your discussion with Joel below. – Neil Strickland Nov 25 2010 at 10:39
Every Boolean algebra $\mathbb{B}$ embeds densely in its completion as a Boolean algebra, which is a complete Boolean algebra (more than just countably complete). The completion $\bar\mathbb{B}$ can be constructed as the regular open algebra, the set of all regular open subsets of $\mathbb{B}-\{0\}$, where the topology is generated by the basic open sets consisting of lower cones. A set is regular open if it is the interior of its closure.
Indeed, the completion operation is extremely general, and is used pervasively in set theory in the context of the forcing technique. Every separative partial order $\mathbb{P}$, and this includes any Boolean algebra (minus $0$), embeds densely in its regular open algebra, and this is always a complete Boolean algebra. This fact is the main connection between the poset-based account of forcing and the Boolean-algebra based account of forcing.
The wikipedia link above lists several universal properties of this completion.
-
But this is not a free construction that the OP was asking about (see his universal property). There is a theorem due to Gaifman and Hales that free complete Boolean algebras do not exist. – Todd Trimble Nov 24 2010 at 23:46
Every homomorphism of $\mathbb{B}$ to a complete boolean algebra $\mathbb{C}$ extends uniquely to a homomorphism of $\bar\mathbb{B}$ into $\mathbb{C}$. Isn't this what we should want? – Joel David Hamkins Nov 24 2010 at 23:55
We're both right. :-) You are answering a question where one doesn't demand that the extension preserve arbitrary joins, but be just a homomorphism of Boolean algebras, yes? I was referring to morphisms of complete Boolean algebras, where the condition is that the extension preserve arbitrary joins. This is where Gaifman-Hales comes in. – Todd Trimble Nov 25 2010 at 0:03
@Todd : I never knew this was due to Gaifman (and Hales). Any chance you know a reference? – Andres Caicedo Nov 25 2010 at 0:17
1
What I am saying is that if $f$ respects those infinitary joins that exist in $\mathbb{B}$, an obvious necessary requirement, then the extension of $f$ to the completion $\bar\mathbb{B}$ will respect all infinitary joins. And this seems to be exactly what the OP wants, right? – Joel David Hamkins Nov 25 2010 at 2:49
show 20 more comments
You can also look at III.3.11 in Johnstone's book on Stone Spaces.
-
This question was answered in topological terms by J. Vermeer in The smallest basically disconnected preimage of a space. Topology Appl. 17 (1984), no. 3, 217–232. See here for a review and here for the paper.
-
Hi all,
I've just come up with something that's relevant to the question I asked here. I'm bumping the thread partly in case anyone else cares, and partly in case (as is more likely) I've made an error and someone can point it out. Anyway: I believe I can prove that the $\sigma$-algebra generated by a Boolean algebra $A$ (in the sense of Todd's answer, i.e. the image of a left adjoint to the forgetful functor from $\sigma$-algebras to Boolean algebras) has a rather natural representation, namely as the $\sigma$-field generated by the double dual of $A$, i.e. the smallest $\sigma$-field containing all the clopen subsets of the dual space of $A$. Here is the proof:
Let $A$ be a Boolean algebra, let $A^\star$ be its dual Boolean space and $A^{\star \star}$ the dual algebra of its dual space, i.e. the set of clopen subsets of $A^\star$. Let $\bar{A}$ be the $\sigma$-algebra of Baire sets in $A^\star$, i.e. the $\sigma$-field of subsets of $A^\star$ generated by $A^{\star \star}$. Let $\alpha: A \cong A^{\star \star}$ be the canonical isomorphism, and let $\eta: A \to \bar{A}$ be the composition of $\alpha$ with the inclusion.
Suppose given a $\sigma$-algebra $B$ and a homomorphism (of Boolean algebras) $h: A \to B$. Define $B^\star$, $B^{\star \star}$, $\bar{B}$ and $\beta: B \cong B^{\star \star}$ as before. By Theorem 41, p. 376 of [1], $B^\star$ is a $\sigma$-space, i.e. the closure of every open Baire set is open. By Theorem 42, p. 381, there is a $\sigma$-homomorphism $\phi: \bar{B} \to B^{\star \star}$ such that $\phi$ maps every clopen set to itself.
$\beta h \alpha^{-1}$ is a homomorphism $A^{\star \star} \to B^{\star \star}$, so by duality there is a unique continuous function $f: B^\star \to A^\star$ such that $f^{-1} P = \beta h \alpha^{-1} (P)$ for every $P \in A^{\star \star}$. It is easy to see that $f^{-1} S$ is a Baire set whenever $S$ is, so define
$f^\star : \bar{A} \to \bar{B}$; $S \mapsto f^{-1} S$.
$f^\star$ is clearly a $\sigma$-homomorphism. Let
$\bar{h} \equiv \beta^{-1} \phi f^\star: \bar{A} \to B$.
Then one may check, using the defining property of $f$ and the fact that $\phi$ maps clopen sets to themselves, that $\bar{h} \eta = h$. The uniqueness of $\bar{h}$ with this property follows from the fact that the range of $\eta$ generates $\bar{A}$.
So there's the alleged proof; I can't see anything wrong with it but the result strikes me as being "too good to be true", and if it is true then I'm surprised I didn't see any reference to it online before I started this thread. So I'll be grateful if anyone can spot a mistake.
[1] Steven Givant and Paul Halmos, Introduction to Boolean Algebras, Springer 2009
-
The unit of the Stone adjunction $\beta$ is not in general $\sigma$-continuous, so why is the factorization $\overline{h}$ $\sigma$-continuous? – G. Rodrigues Mar 20 2012 at 20:45
http://physics.stackexchange.com/questions/12906/the-role-of-metric-in-the-wave-equation?answertab=active
# The role of metric in the Wave Equation
The wave equation is often written in the form
$$(\partial^2_t-\Delta)u=0,$$
involving the Laplace-Beltrami operator $\Delta$. However, the Laplace-Beltrami operator $\Delta$ is defined only in the presence of Riemannian metric $g$. Indeed, one can think of $\Delta u$ as of difference between $u$ and its mean value in the balls, and the notion of ball only makes sense in the presence of a metric.
What is the physical meaning of metric $g$ here? My guess is it's connected with the property of isotropy of space: in the vacuum, the waves should propagate uniformly in all directions, and this situation corresponds to Euclidean metric. The case of general Riemannian (= ‘perturbed Euclidean’) metric should then correspond to the propagation of waves in a medium of some sort, so that the property of ambient space isotropy is not satisfied, hence balls in it do not look like usual ‘Euclidean’ balls, and wave equation takes its general form $(\partial^2_t-\Delta)u=0$.
Is my guess true? If not, what is the correct interpretation of metric in the general wave equation?
-
2
Good question but I think you essentially answered it yourself: the correct geometrical way of thinking about Laplacian is as an averaging operator over a ball given by your metric. But since this is an infinitesimal statement, you can actually regard the deformed ball simply as an ellipsoid with length and orientation of axes determined by the eigenvalue problem of the metric. – Marek Jul 28 '11 at 19:47
I like this question/thinking. However I'm confused about the second equation. Shouldn't that one be the vector wave equation? – whoplisp Jul 28 '11 at 21:38
@whoplisp Two equations in the original post are identical. It is an equation for time-dependent scalar field u. What made you think of equation for vector field? – Akater Jul 28 '11 at 22:43
If you go into the Maxwell equations with anisotropic medium you don't generally end up with a scalar equation involving u but you actually have 6 equations for $E_x$, $E_y$, $E_z$, $B_x$, $B_y$ and $B_z$. Those are coupled in non-homogeneous space. – whoplisp Jul 29 '11 at 7:28
I was talking about abstract wave equations, not Maxwell equations. However, this remark is very interesting; I have to think about it. Do you mean that physically meaningful scalar fields sometimes lose their invariant properties in an anisotropic medium, and one is then forced to deal with vector fields? – Akater Jul 29 '11 at 15:47
show 1 more comment
## 1 Answer
The interpretation of the metric as anisotropy may or may not be physical, depending on your point of view. The main problem with this interpretation is that it requires two distinct metrics. One of the lasting influences of general relativity in other fields of physics is the idea of treating objects by thinking about their intrinsic geometry. So when you are dealing with the propagation of a wave in a medium, and you have a Riemannian metric $g$ with respect to which the wave propagates exactly like $\partial_t^2 - \triangle$, you want to use that Riemannian metric $g$ as the preferred one when you are studying the equations. You are of course free to impose a background coordinate system and attach to it a secondary metric, but that secondary metric can be argued to be unphysical in the situation, if it does not actually affect the physics of the situation, but rather only the way you present your measurements.
Example: consider the case where your metric is $4 dx^2 + dy^2 + dz^2$. If you are only given this metric, you would not say that "one unit" in the $x$ direction is the same length as "one unit" in the $y$ direction. In fact, you would be tempted to redefine your coordinate system with $x' = 2x$ (so that $4\,dx^2 = dx'^2$) such that the metric looks more symmetric. In general, given any symmetric, positive definite bilinear form on a vector space, you can always find an orthonormal basis. So if the relevant geometry for the physics is described by a single Riemannian metric, then there is automatically a notion of infinitesimal isotropy built into the physics. (This is related to the notion of local Lorentz covariance in relativity, as well as to the notion of frame bundles in Riemannian geometry.)
The important thing to keep in mind is that the spectral theorem for positive definite bilinear forms can be stated in the following way:
Given any finite-dimensional real vector space $V$ and any two positive definite symmetric bilinear forms $A$ and $B$ on $V$, we can find a basis $\{ e_1, e_2, \ldots, e_n\}$ in which $A$ and $B$ are simultaneously diagonalized.
In other words, if you have a physical problem in which there are two distinct natural Riemannian metrics used in the equations, what you have is that the two parts of the physics corresponding to the use of the two Riemannian metrics are anisotropic relative to each other. Another way of saying the same thing is that the wave propagation in a medium that is governed by $\partial_t^2 - \triangle$ is still uniform in all directions, it is just that now the correct notion of directions should be the one given by the Riemannian metric.
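To make the simultaneous-diagonalization statement concrete, here is a minimal numerical sketch (an editorial illustration, not part of the original answer; the random matrices $A$ and $B$ simply stand in for two positive definite forms):

```python
# Minimal sketch: simultaneously diagonalize two positive definite symmetric forms.
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)

def random_spd(n):
    """Return a random symmetric positive definite n x n matrix."""
    M = rng.normal(size=(n, n))
    return M @ M.T + n * np.eye(n)

A, B = random_spd(4), random_spd(4)

# Solve the generalized eigenproblem A v = lambda B v; the columns of V are
# B-orthonormal, i.e. V.T @ B @ V = I, and V.T @ A @ V = diag(w).
w, V = eigh(A, B)

print(np.allclose(V.T @ B @ V, np.eye(4)))   # True
print(np.allclose(V.T @ A @ V, np.diag(w)))  # True
```

In the basis given by the columns of $V$, both forms are diagonal, which is the sense in which the two "metrics" can at most be anisotropic relative to each other.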
Another example: imagine you are driving around on a plane. Because of friction (and other dissipative forces), to maintain constant velocity you need to keep the accelerator pedal depressed. Now above you there is a pigeon. Because of drag, for it to maintain constant air velocity it also needs to keep putting in energy. Now there is a wind blowing at the height where the pigeon flies. From your point of view, the pigeon's motion is anisotropic: maintaining the same ground velocity with the wind and against the wind requires different amounts of energy input. But from the pigeon's point of view, your motion is anisotropic: the pigeon's "stationary" coordinate system would be one that is being blown around by the wind!
With that said, from the point of view of studying the wave equation in a medium, the notion of ambient space isotropy is not all that important. So while the interpretation is perfectly valid, it is not something that you necessarily need to keep in mind. When one studies wave propagation in a non-linear medium (keywords: crystal optics, elastic waves), the notion of anisotropy more often refers to the fact that the longitudinal components of a wave and the various polarisations of the transverse components of a wave can have anisotropic propagation behaviour relative to each other. As whoplisp wrote in his comments, the general case is governed by a system of equations that cannot be reduced to just the simple scalar wave equation.
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 28, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.946500301361084, "perplexity_flag": "head"}
|
http://math.stackexchange.com/questions/67535/dimension-of-the-v-e-of-the-solutions-of-an-homogeneous-ode
|
# dimension of the v.e of the solutions of an homogeneous ODE
I was studying Simmons's ODE book, and he says that the Wronskian ($W$) is $0$ iff the solutions are not LI. He even presents a proof, but a counterexample showing that $W = 0$ does not imply LD is $$\left| x \right|, \ x$$ and since he uses that result to prove the theorem, and it is not well established, I cannot follow it, in addition to not really understanding what he did.
Anyway, I want to know some proof of this fact, the fact that the solutions form a vector space (easy) of dimension $n$ (difficult to me T_T).
Thanks.
If someone knows some book to study ODE (for a beginner), I'll appreciate it.
-
1
You have 8 more questions with no accepted answers. This fact may discourage people to answer you ( – Ilya Sep 25 '11 at 21:48
The pair $|x|,x$ is not a counterexample. Either your interval has $0$ in which case the wronskian doesn't exist there, or it doesn't include $0$ and then they are actually linearly dependent. However, the pair $x^2,x|x|$ both have continuous derivatives, their wronskian vanishes everywhere, yet they are linearly independent on intervals that include $0$. Generally, $W=0$ can only guarantee linear dependence if the functions are analytic. (I don't think the other direction requires analyticity, though.) – anon Sep 25 '11 at 23:01
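A quick numerical check of anon's example (an editorial sketch, not part of the original thread):

```python
# Sketch: evaluate the Wronskian of f(x) = x^2 and g(x) = x|x| on a grid.
# Both have continuous derivatives: f'(x) = 2x and g'(x) = 2|x|.
import numpy as np

x = np.linspace(-2.0, 2.0, 401)
f, fp = x**2, 2*x
g, gp = x*np.abs(x), 2*np.abs(x)

W = f*gp - g*fp                      # Wronskian determinant
print(np.max(np.abs(W)))             # 0.0 at every grid point

# Yet no single constant c gives g = c*f on the whole interval:
print(g[x > 0][0]/f[x > 0][0], g[x < 0][0]/f[x < 0][0])   # +1 vs -1
```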
What does "v.e" in the title mean? I don't see any pair of neighboring words in the question that even start with those letters. I suppose "LI" means linearly independent, but what is "LD"? – Henning Makholm Sep 25 '11 at 23:03
@Henning: Obviously LD means linearly *de*$\text{}$pendent. But I have no idea what v.e means either. It's possible OP mistyped "VS" for vector space (as 'e' and 's' are right next to each other on the keyboard). And I think OP's asking (1) how the mentioned proof could be valid given the stated 'counterexample' (which I addressed), (2) for help understanding the proof, and (3) for other ODE books for a beginner. – anon Sep 25 '11 at 23:15
@anon, of course "linearly dependent". It was not as easy to figure out because it is not clearly a predicate where it appears. – Henning Makholm Sep 25 '11 at 23:20
## 1 Answer
You might take a look at Birkhoff and Rota, "Ordinary Differential Equations". It treats the Wronskian in section II.3.
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 11, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9606814980506897, "perplexity_flag": "middle"}
|
http://mathforum.org/mathimages/index.php?title=Change_of_Coordinate_Systems&diff=4429&oldid=4352
|
# Change of Coordinate Systems
### From Math Images
## Revision as of 10:11, 15 June 2009
Change of Coordinates
Field: Algebra
Created By: Brendan John
The same object, here a disk, can look completely different depending on which coordinate system is used.
# Basic Description
It is a common practice in mathematics to use different coordinate systems to solve different problems. An example of a switch between coordinate systems follows: suppose we take a set of points in regular x-y Cartesian Coordinates, represented by ordered pairs such as (1,2), then multiply their x-components by two, meaning (1,2) in the old coordinates is matched with (2,2) in the new coordinates.
Under this transformation, a set of points would be stretched out in the horizontal x-direction since each point becomes further from the vertical y-axis (except for points originally on the y-axis, which remain on the axis). A set of points that was originally contained in a circle in the old coordinates would be contained by a stretched-out ellipse in the new coordinate system, as shown in the top two figures of this page's main image.
Many other such transformations exist and are useful throughout mathematics, such as mapping the points in a disk to a rectangle.
# A More Mathematical Explanation
Some of these mappings can be neatly represented by vectors and matrices, in the form
$A\vec{x}=\vec{x'}$
Where $\vec{x}$ is the coordinate vector of our point in the original coordinate system and $\vec{x'}$ is the coordinate vector of our point in the new coordinate system.
For example the transformation in the basic description, doubling the value of the x-coordinate, is represented in this notation by
$\begin{bmatrix} 2 & 0 \\ 0 & 1 \\ \end{bmatrix}\vec{x} = \vec{x'}$
As can be easily verified.
The ellipse that is tilted relative to the coordinate axes is created by a combination of rotation and stretching, represented by the matrix
$\begin{bmatrix} 2\cos(\theta) & -\sin(\theta)\\ 2\sin(\theta) & \cos(\theta) \end{bmatrix}\vec{x} = \vec{x'}$
Some very useful mappings cannot be represented in matrix form, such as mapping points from Cartesian Coordinates to Polar Coordinates. Such a mapping, as shown in this page's main image, can map a disk to a rectangle. Each origin-centered ring in the disk consists of points at constant distance from the origin and angles ranging from 0 to $2\pi$. These points create a vertical line in Polar Coordinates. Each ring at a different distance from the origin creates its own line in the polar system, and the collection of these lines creates a rectangle.
The transformation from Cartesian coordinates to Polar Coordinate can be represented algebraically by
$\begin{bmatrix} r\\ \theta\\ \end{bmatrix} = \begin{bmatrix} \sqrt{x^2 + y^2}\\ \arctan{y/x}\\ \end{bmatrix}$
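A short numerical sketch of the two maps above (an editorial addition, assuming NumPy); note that code usually uses the quadrant-aware arctan2 rather than $\arctan(y/x)$, which is only valid for $x > 0$:

```python
# Sketch: apply the x-stretching matrix and the Cartesian-to-polar map to a point.
import numpy as np

A = np.array([[2.0, 0.0],
              [0.0, 1.0]])
p = np.array([1.0, 2.0])
print(A @ p)                         # [2. 2.]  -- (1,2) maps to (2,2)

def to_polar(x, y):
    """Return (r, theta); arctan2 handles all quadrants, unlike arctan(y/x)."""
    return np.hypot(x, y), np.arctan2(y, x)

print(to_polar(1.0, 2.0))            # (sqrt(5), ~1.107 rad)
```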
### Three-Dimensional Coordinates
In 3 dimensions, similar coordinate systems and transformations between them exist. Three common systems are rectangular, cylindrical and spherical coordinates:
• Rectangular Coordinates use standard $x,y,z$ coordinates, where each coordinate is a distance on a coordinate axis.
• Cylindrical Coordinates use $r,\theta,z$, where $r, \theta$ are the same as two-dimensional polar coordinates and z is distance from the x-y plane.
• Spherical Coordinates use $\rho, \theta, \phi$, where $\rho$ is the distance from the origin, $\theta$ is rotation from the positive x-axis as in polar coordinates, and $\phi$ is rotation from the positive z-axis.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 14, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8951036334037781, "perplexity_flag": "middle"}
|
http://www.physicsforums.com/showthread.php?t=582300
|
Physics Forums
## Capacitors - Conservation of Charge vs Conservation of Energy
I am a TA for a physics teacher. I wrote a problem that the students did in the lab quiz. The students tried to use conservation of energy instead of conservation of charge, which I used. Both methods seem sound to me, but they produce different answers. I need help figuring out which method is correct, and why the other one is incorrect.
1. The problem statement, all variables and given/known data
S2 is now opened after C1 has been fully charged and S1 is then closed. What is the final voltage?
2. Relevant equations
$C = \frac{Q}{V}$
$U = \frac{1}{2} C V^{2}$
3. The attempt at a solution
$V_{0} = \text{initial voltage} = 3V$
$C_{0} = \text{initial capacitance} = C_{1} = 470 \mu F$
$Q_{0} = \text{initial charge}$
$Q_{1} = \text{final charge on } C_{1}$
$Q_{2} = \text{final charge on } C_{2}$
$V^{'} = \text{final voltage}$
Conservation of Charge:
$Q_{0} = C_{0} V_{0}$
$Q_{1} = C_{1} V^{'}$
$Q_{2} = C_{2} V^{'}$
$Q_{0} = Q_{1} + Q_{2} = C_{1} V^{'} + C_{2} V^{'} = (C_{1} + C_{2}) V^{'}$
$V^{'} = \frac{Q_{0}}{C_{1} + C_{2}} = \frac{C_{0} V_{0}}{C_{1} + C_{2}} = \frac{(470 \times 10^{-6} F) (3V)}{470 \times 10^{-6} F + 100 \times 10^{-6} F} \approx \textbf{2.474 V}$
Conservation of Energy:
$U_{1} = \frac{1}{2} C_{1} (V^{'})^2$
$U_{2} = \frac{1}{2} C_{2} (V^{'})^2$
$U_{0} = U_{1} + U_{2} = \frac{1}{2} C_{1} (V^{'})^2 + \frac{1}{2} C_{2} (V^{'})^2 = \frac{1}{2}(V^{'})^{2}(C_{1} + C_{2})$
$\frac{1}{2} C_{1} V_{0}^{2} = \frac{1}{2}(V^{'})^{2}(C_{1} + C_{2})$
$C_{1} V_{0}^{2} = (V^{'})^{2}(C_{1} + C_{2})$
$V^{'} = V_{0} \sqrt{\frac{C_{1}}{C_{1} + C_{2}}} = (3V) \sqrt{\frac{470 \times 10^{-6} F}{470 \times 10^{-6} F + 100 \times 10^{-6} F}} \approx \textbf{2.724 V}$
The answers are inconsistent. Which one (if at all) is correct?
Quote by vineel: [the full problem statement quoted above]
What are your own thoughts on the matter?
gneill, I personally feel that my method (conservation of charge) is the correct one. I feel as if conservation of energy does not apply for some reason. I am not sure at all, so I am posting this question.
Quote by vineel: [vineel's reply above]
We can't just give you an answer here (Forum rules), but we can help to a solution...
I suggest that you try a thought experiment. Suppose your circuit was the same as before but includes a resistance R between the two capacitors. Clearly some energy will be lost as heat in the resistor when S1 is closed and the current flows.
Write the equation for the current I(t) in the circuit and then find the energy lost by integrating $I^2R$ from time 0 to infinity. What's the resulting expression? What does it depend upon?
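A small numerical sketch of the bookkeeping involved (an editorial addition, not part of the thread's hints; the component values are the ones from the problem):

```python
# Sketch: the two candidate final voltages and the stored-energy bookkeeping
# for C1 = 470 uF charged to 3 V, then connected to an uncharged C2 = 100 uF.
C1, C2, V0 = 470e-6, 100e-6, 3.0

V_charge = C1*V0 / (C1 + C2)                 # conservation of charge
V_energy = V0 * (C1 / (C1 + C2))**0.5        # "conservation of energy"

U0      = 0.5*C1*V0**2                       # initial stored energy
U_final = 0.5*(C1 + C2)*V_charge**2          # energy stored at V_charge

print(V_charge, V_energy)                    # ~2.474 V vs ~2.724 V
print(U0, U_final, U0 - U_final)             # final < initial: some energy is unaccounted for
```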
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 38, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9458034634590149, "perplexity_flag": "middle"}
|
http://www.physicsforums.com/showthread.php?t=616301
|
Physics Forums
## Simplify Condition for Chord Length Equals Angle?
Consider the following:
On a circle of radius 1, two points are marked: P1 and P2.
Two lines are drawn from the center of the circle:
one from the center to P1,
the other from the center to P2.
The angle between these two lines is $\theta$.
One more line is drawn: from P1 directly to P2. In other words, this third line is a chord on this circle.
For the special condition that the length of this chord equals the angle, find a simple expression.
i.e. – find a simple expression for $\theta$ given the special condition that chord length = angle = $\theta$
- - -
So far, all the expressions that I have worked out mix terms of $\theta$ and either sin($\theta$) or cos($\theta$); I have not been able to find an expression simply in terms of $\theta$, sin($\theta$), or cos($\theta$).
For example, following is one of my approaches:
Bisect the angle $\theta$, which also divides the chord in half.
The chord length is $\theta$.
But this value is also 2 sin($\theta$/2)
Equating these two expressions: 2 sin($\theta$/2) = $\theta$ or sin($\theta$/2) = $\theta$/2
I cannot find a way to simplify this expression further.
Any suggestions?
You're almost there. The only x that satisfies sin(x)=x is zero, so theta/2=0, so theta=0 is the simple expression you're looking for. To see geometrically why it's true, use the fact that on the unit circle, the arclength subtended by an angle is equal to the measure of that angle in radians. So you have two paths connecting P1 and P2: one of them is the chord, and the other is the arc along the circle. But you're requiring that their lengths are equal. The chord is a straight line, and the arc is not, so that can't happen unless they both have zero length.
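A short numerical confirmation (an editorial addition) that $2\sin(\theta/2)=\theta$ has no solution with $0<\theta\le 2\pi$:

```python
# Sketch: confirm numerically that 2*sin(theta/2) = theta has no positive solution.
import numpy as np

theta = np.linspace(1e-6, 2*np.pi, 100000)
h = 2*np.sin(theta/2) - theta        # chord length minus angle (= arc length)

print(np.all(h < 0))                 # True: the chord is strictly shorter for theta > 0
```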
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 17, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.92146235704422, "perplexity_flag": "middle"}
|
http://physics.stackexchange.com/questions/tagged/metric-tensor%20gravity
|
# Tagged Questions
3answers
114 views
### Relation between the determinants of metric tensors
Recently I have started to study the classical theory of gravity. In Landau, Classical Theory of Field, paragraph 84 ("Distances and time intervals") , it is written We also state that the ...
0answers
156 views
### Penrose Conformal diagram for flat 2-dim Lorentz space-time
I have the following metric $$ds^2 ~=~ Tdv^2 + 2dTdv,$$ defined for $$(v,T)~\in~ S^1\times \mathbb{R},$$ e.g. $v$ is periodic. This is the according Penrose diagram: Question 1) Is the ...
2answers
120 views
### Most suitable metric for the Solar system?
If I wanted to solve the Einstein equations for the solar system, which choice of $g_{\mu\nu}$ and $T_{\mu\nu}$ is more suitable? I thought about using a Schwarzschild metric near each planet, but ...
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 3, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9088115096092224, "perplexity_flag": "middle"}
|
http://mathhelpforum.com/differential-geometry/205248-if-lie-group-abelian-its-lie-algebra-has-lie-bracket-zero.html
|
# Thread:
1. ## If a Lie group is abelian, its Lie algebra has Lie bracket = zero
Hello
I'm looking for tips (and just tips) on how to prove the proposition in the title: If a Lie group is abelian, its Lie algebra has Lie bracket = zero. I know I have to prove that the Lie bracket of the left invariant vector fields (to which the Lie algebra is isomorphic) is zero. But that is as far as I can get. I can't find a way to relate the commutativity of the Lie group with the Lie bracket.
Thanks in advance.
2. ## Re: If a Lie group is abelian, its Lie algebra has Lie bracket = zero
It depends on how the bracket of the Lie algebra is being defined.
If it's defined via the adjoint map ad = derivative of Ad, then note that, for all a, Ad_a is the identity for a commutative group.
If it's defined via the exponential map, then use that exp(v1)exp(v2) = exp(v2)exp(v1) for a commutative group, to show that the second order term of f in exp(v1)exp(v2) = exp(f(v1, v2)) that defines the bracket is both symmetric and skew symmetric (and bilinear), hence is 0.
3. ## Re: If a Lie group is abelian, its Lie algebra has Lie bracket = zero
The Lie bracket on the Lie algebra is determined by the isomorphism between the left invariant vector fields on the Lie group G, with the usual Lie bracket [X,Y], and the Lie algebra itself, which associates each V in the Lie algebra with the vector field $X^V$ given by $X_g^V = (L_g)_1 V$. So I don't understand either of those approaches, but I'm very thankful anyway. If you have any suggestion on how to approach the problem from this angle, please share.
4. ## Re: If a Lie group is abelian, its Lie algebra has Lie bracket = zero
Originally Posted by ModusPonens
The Lie bracket on the Lie algebra is determined by the isomorphism between the left invariant vector fields on the Lie group G, with the usual Lie bracket [X,Y], and the Lie algebra it self, which associates each V in the Lie algebra with $X_g^V(L_g)_1V$ in the set of the vector fields.
If it's known (defined) by such an isomorphism, then you need only show that the bracket is always 0 in the simpler case to show that it's always 0 in the other case. Presumably, the simpler case is "left invariant vector fields on the Lie group G, with the usual Lie bracket [X,Y]". Unfortunately, what exactly "the usual Lie bracket [X,Y]" means there isn't clear. How is your book defining that?
Is it the operation of vector fields on test functions in the manifold sense? (i.e. [X,Y](f) = X(Y(f)) - Y(X(f)) ?)
5. ## Re: If a Lie group is abelian, its Lie algebra has Lie bracket = zero
$[X,Y]_p=(X.Y_1-Y.X_1)(p)(\partial/\partial x_1)_p + ... + (X.Y_n-Y.X_n)(p)(\partial/\partial x_n)_p$ , so we're probably speaking of the same thing.
6. ## Re: If a Lie group is abelian, its Lie algebra has Lie bracket = zero
The dots mean composition, that is, $X.Y_1=X(Y_1)$ .
7. ## Re: If a Lie group is abelian, its Lie algebra has Lie bracket = zero
I futzed around with this yesterday and, while proving this on the other definitions of bracket is easy, doing it in the direct way, straight from the manifold definitions - without any recourse to flows, one parameter subgroups, exp map, the Lie derivative, adjoint map, etc., seems rather cumbersome - and I'm still not done. That's because looking at the flows, Lie derivative, etc, naturally provide a place to exploit commutativity, whereas the direct manifold-vector type definitions don't reveal anything about the Lie multiplication until you've drilled in deep. I'll leave it to you to complete this, if you like. Basically, I drilled down far enough to where the abelian-ness is obviously the cause of the brackets being 0. It looks like a short argument from there, but I'm done thinking about it.
I suspect it might be easier to prove that your definition of Lie brackets is equivalent to one of the many other definitions, and then prove that abelian=>[,]=0 from that other definition.
Of course, it's certainly possible that there's an easier way, still keeping just to the definitions, that I've simply overlooked. And of course, I certainly could've made a mistake somewhere.
$\text{Fix a chart } (U, \phi = (x_1, x_2, ... x_n)) \text{ about } e \in G.$
$\text{Define left translation by }g \in G \text{ as } L_g : G \rightarrow G \text{ by } L_g(h) = gh.$
$\text{Define smooth functions } c_{i, j} \text{ on } U \text{ by } T_eL_g(\partial_i|_e) = \sum_j c_{i, j}(g) \partial_j|_g \in T_gG.$
$\text{For any vector } \tilde{y} = \sum_i \tilde{y_i} \partial_i|_e \in T_eG \text{, there's a unique left invariant vector field having that value in } T_eG \text{, given by }$
$Y(g) = T_eL_g(\tilde{y}) = \sum_i \tilde{y_i} T_eL_g(\partial_i|_e) \in T_gG. \text{ Since } L_e = id_G, \text{ get } Y(e) = \tilde{y}.$
$\text{On } U \text{, } Y|U = \sum_i y_i \partial_i \in \Gamma(TU) \text{, and the functions } y_i \text{ are entirely determined by } \tilde{y}\text{ via: }$
$Y(g) = T_eL_g(\tilde{y}) = \sum_i \tilde{y_i} T_eL_g(\partial_i|_e) = \sum_{i} \tilde{y_i} \left\{\sum_j c_{i, j}(g) \partial_j|_g \right\}$
$= \sum_{j} \left\{\sum_{i} \tilde{y_i} c_{i, j}(g) \right\} \partial_j|_g = \sum_{j} y_j(g) \partial_j|_g \text{ so that }$
$y_j(g) = \sum_{i} \tilde{y_i} c_{i, j}(g) \text{, or simply } y_j = \sum_{i} \tilde{y_i} c_{i, j}$.
$\text{Now, compute the bracket of left invariant vector fields } Y, W \text{ at } e \text{ based on their values at } T_eG:$
$[Y, W]|_e = \sum_j \left\{ \sum_i ( \ y_i(e) \partial_i|_e (w_j) - w_i(e) \partial_i|_e (y_j) \ ) \right\} \partial_j|_e$
$= \sum_{i, j} \left\{ \ y_i(e) \partial_i|_e \left(\sum_{k} w_k(e) c_{k, j}\right) - w_i(e) \partial_i|_e \left(\sum_{k} y_k(e) c_{k, j}\right) \ \right\} \ \partial_j|_e$
$= \sum_{i, j, k} \left\{ \ y_i(e) w_k(e) \partial_i|_e ( c_{k, j}) - y_k(e) w_i(e) \partial_i|_e ( c_{k, j}) \right\} \ \partial_j|_e$
$= \sum_{i, j, k} \left\{ \ [ \ y_i(e) w_k(e) - y_k(e) w_i(e) \ ] \ \partial_i|_e ( c_{k, j}) \ \right\} \ \partial_j|_e$
$= \sum_{j} \left\{ \sum_{i, k} \ [ \ y_i(e) w_k(e) - y_k(e) w_i(e) \ ] \ \partial_i|_e ( c_{k, j}) \right\} \ \partial_j|_e.$
$\text{Now, since } (y_i(e) w_k(e) - y_k(e) w_i(e)) \text{ is skew-symmetric in } i \text{ and } k, \text{if } \partial_i|_e ( c_{k, j})$
$\text{were symmetric in } i \text{ and } k \text{, then the product would be skew symmetric, and so the double sum would be } 0.$
$\text{Thus if } \partial_i|_e ( c_{k, j}) = \partial_k|_e ( c_{i, j}) \ \forall i, j, k \in \{1, 2, ... n\}, \text{ then }$
$(y_i(e) w_k(e) - y_k(e) w_i(e)) \partial_i|_e ( c_{k, j}) = -(y_k(e) w_i(e) - y_i(e) w_k(e)) \partial_k|_e ( c_{i, j}) \ \forall i, j, k,$
$\text{and so } \sum_{i, k} \ (y_i(e) w_k(e) - y_k(e) w_i(e)) \partial_i|_e ( c_{k, j}) = 0 \ \forall j \in \{1, 2, ... n\},$
$\text{and so } [Y, W]|_e = 0.$
$\text{So if can show that } G \text{ abelian implies } \partial_i|_e ( c_{k, j}) = \partial_k|_e ( c_{i, j}) \ \forall i, j, k \in \{1, 2, ... n\},$
$\text{then will have proven that } G \text{ abelian implies that the brackets at } T_eG \text{ of any left invariant vector fields are } 0.$
$\text{Now } c_{i,s}(g) = \sum_j c_{i, j}(g) \delta_j^s = \sum_j c_{i, j}(g) \partial_j|_g [x_s] = ( \ T_eL_g(\partial_i|_e) \ )[x_s]\text{, so }$
$c_{i,s}(g) = \partial_i|_e[x_s \circ L_g]. \text{ Thus } \ \forall g \in U, \ \forall i, j, \ c_{i,j}(g) = \partial_i|_e[x_j \circ L_g].$
$\text{Thus want to show that } G \text{ abelian implies }$
$\partial_k|_e \{ g \mapsto \partial_i|_e[x_j \circ L_g] \} = \partial_i|_e \{ g \mapsto \partial_k|_e[x_j \circ L_g] \} \ \forall i, j, k \in \{1, 2, ... n\}.$
$\text{(I've written it as } \partial_k|_e \{ g \mapsto \partial_i|_e[x_j \circ L_g] \} \text{ to emphasize which function } \partial_k|_e \text { acts on.)}$
$\text{Thus want to show that } G \text{ abelian implies }$
$\partial_k|_e \partial_i|_e[x_j \circ L_g] = \partial_i|_e \partial_k|_e[x_j \circ L_g] \ \forall i, j, k \in \{1, 2, ... n\}.$
$\text{Define } \phi : G \times G \rightarrow G \text{ by } \phi(u_1, u_2) = u_2u_1 = L_{u_2}u_1.$
$\text{Then } (x_j \circ L_g)(u_1) = x_j(L_g(u_1)) = (x_j \circ \phi) (u_1, g),$
$\text{so } \partial_i|_e [x_j \circ L_g] = \partial_i|_{u_1=e} [x_j \circ L_g(u_1)] = \partial_i|_{u_1=e}[(x_j \circ \phi) (u_1, g)],$
$\text{so } \partial_k|_e \{ g \mapsto \partial_i|_e [x_j \circ L_g] \} = \partial_k|_{u_2 = e} \partial_i|_{u_1=e}[(x_j \circ \phi) (u_1, u_2)],$
$\text{So, } \ \forall i, j, k \in \{1, 2, ... n\}, \text{ define }b_{i, j, k} =\partial_k|_{u_2 = e} \partial_i|_{u_1=e}[(x_j \circ \phi) (u_1, u_2)].$
$\text{The problem thus reduces to showing that: }$
$G \text{ abelian implies } b_{i, j, k} = b_{k, j, i} \forall i, j, k \in \{1, 2, ... n\}.$
If I've made no mistake, this does "drive" the problem down to where commutativity makes the brackets 0. I'll let you figure that out. I'm sick of this. It looks like you just use that partials commute, then use that phi is symmetric, and you're done. But I'm done - I'll leave that for you to think about.
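For a sanity check, here is a small symbolic computation (an editorial sketch on a toy abelian group of my own choosing, not taken from the thread) that follows the recipe above, building the $c_{i,j}$, the left-invariant fields and their coordinate bracket, and finds the bracket to be zero:

```python
# Sketch: verify the claim on a concrete 2-parameter abelian Lie group,
# G = R^2 with product (g1,g2)*(u1,u2) = (g1+u1, g2+u2+g1*u1) and identity e = (0,0).
import sympy as sp

u1, u2, g1, g2 = sp.symbols('u1 u2 g1 g2', real=True)

# Left translation L_g(u) = g*u, written in coordinates.
L = [g1 + u1, g2 + u2 + g1*u1]

# c[i][j](g) = d/du_i (x_j o L_g) evaluated at u = e.
c = [[sp.diff(L[j], ui).subs({u1: 0, u2: 0}) for j in range(2)] for ui in (u1, u2)]

# Left-invariant fields Y, W with arbitrary values at the identity.
yt1, yt2, wt1, wt2 = sp.symbols('yt1 yt2 wt1 wt2', real=True)
y = [yt1*c[0][j] + yt2*c[1][j] for j in range(2)]   # y_j(g)
w = [wt1*c[0][j] + wt2*c[1][j] for j in range(2)]   # w_j(g)

# Coordinate formula for the bracket: [Y,W]_j = sum_i (y_i d_i w_j - w_i d_i y_j).
gvars = (g1, g2)
bracket = [sp.simplify(sum(y[i]*sp.diff(w[j], gvars[i]) - w[i]*sp.diff(y[j], gvars[i])
                           for i in range(2)))
           for j in range(2)]
print(bracket)   # [0, 0]
```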
8. ## Re: If a Lie group is abelian, its Lie algebra has Lie bracket = zero
Wow! Thanks a lot for your work! I really appreciate it.
I never thought it would be that hard. I'll read it when I have the time, since I'm close to going to bed.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 41, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.947833240032196, "perplexity_flag": "head"}
|
http://climateaudit.org/2009/06/02/does-steig-understand-north-et-al-1982/?like=1&_wpnonce=d19428bcb3
|
by Steve McIntyre
## Steig’s “Tutorial”
In his RC post yesterday – also see here – Steig used North et al (1982) as supposed authority for retaining three PCs, a reference unfortunately omitted from the original article. Steig also linked to an earlier RC post on principal components retention, which advocated a completely different “standard approach” to determining which PCs to retain. In yesterday’s post, Steig said:
The standard approach to determining which PCs to retain is to use the criterion of North et al. (1982), which provides an estimate of the uncertainty in the eigenvalues of the linear decomposition of the data into its time varying PCs and spatial eigenvectors (or EOFs).
I’ve revised this thread in light of a plausible interpretation of Steig’s methodology provided by a reader below – an interpretation that eluded me in my first go at this topic. Whatever the merits of this supposedly “standard approach”, it is not used in MBH98 nor in Wahl and Ammann.
Steig provided the figure shown below with the following comments:
The figure shows the eigenvalue spectrum — including the uncertainties — for both the satellite data from the main temperature reconstruction and the occupied weather station data used in Steig et al., 2009. It’s apparent that in the satellite data (our predictand data set), there are three eigenvalues that lie well above the rest. One could argue for retaining #4 as well, though it does slightly overlap with #5. Retaining more than 4 requires retaining at least 6, and at most 7, to avoid having to retain all the rest (due to their overlapping error bars). With the weather station data (our predictor data set), one could justify choosing to retain 4 by the same criteria, or at most 7. Together, this suggests that in the combined data sets, a maximum of 7 PCs should be retained, and as few as 3. Retaining just 3 is a very reasonable choice, given the significant drop off in variance explained in the satellite data after this point: remember, we are trying to avoid including PCs that simply represent noise.
Figure 1. Eigenvalues for AVHRR data.
As I observed earlier this year here, if you calculate eigenvalues from spatially autocorrelated random data on a geometric shape, you get eigenvectors occurring in multiplets (corresponding to related Chladni patterns/eigenfunctions). The figure below shows the eigenvalues for spatially autocorrelated data on a circular disk. Notice that, after the PC1, lower eigenvalues occur in pairs or multiplets. In particular, notice that the PC2 and PC3 have equal eigenvalues.
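A small simulation sketch (an editorial addition; the unit disk, the exp(-d/L) correlation and the length scale L = 0.3 are arbitrary illustrative choices) reproduces this multiplet structure:

```python
# Sketch: eigenvalues of a spatially autocorrelated covariance on a disk form multiplets.
import numpy as np

rng = np.random.default_rng(1)

# Sample random points in the unit disk (keep the first 1000 that fall inside).
pts = rng.uniform(-1, 1, size=(2000, 2))
pts = pts[np.hypot(pts[:, 0], pts[:, 1]) <= 1.0][:1000]

# Exponential decorrelation with distance.
d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
C = np.exp(-d / 0.3)

evals = np.linalg.eigvalsh(C)[::-1]          # descending order
print(evals[:8] / evals.sum())               # note the near-equal pairs after PC1
```

With these settings the second and third eigenvalues come out nearly equal, as do later pairs, which is the degeneracy discussed below.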
North et al 1982 observed that PCs with “close” but not identical eigenvalues could end up being “statistically inseparable”, just as PCs with identical eigenvalues. North:
An obvious difficulty arises in physically interpreting an EOF if it is not even well-defined intrinsically. This can happen for instance if two or more EOFs have the same eigenvalue. It is easily demonstrated that any linear combination of the members of the degenerate multiplet is also an EOF with the same eigenvalue. Hence in the case of a degenerate multiplet one can choose a range of linear combinations which .. are indistinguishable in terms of their contribution to the average variance…Such degeneracies often arise from a symmetry in the problem but they can be present for no apparent reason (accidental degeneracy.)
As the reader observes, if you have to take everything with overlapping confidence intervals, then there are only a few choices available. You can cut off after 3 PCs or 7 PCs – as Steig observes in his discussion. Not mentioned by Steig are other cut-off points within North’s criteria: after 1, 9, 10 and 13 PCs, the latter being, by coincidence, the number in Ryan’s preferred version. North’s criterion gives no guidance as between these 4 situations.
In addition, if you assume that there is negative exponential decorrelation, then the eigenvalues from a Chladni situation on an Antarctic-shaped disk are as follows – with the first few eigenvalues separated purely by the Chladni patterns, rather than anything physical – a point made on earlier occasions, where we cited Buell’s cautions in the 1970s against practitioners using poorly understood methodology to build “castles in the clouds” (or perhaps in this case, in the snow).
Eigenvalues from 300 random gridcells.
As an experiment, I plotted the first 200 eigenvalues from the AVHRR anomaly matrix as against the first 200 eigenvalues assuming negative exponential spatial decorrelation from randomly chosen gridcells. (I don’t have a big enough computer to do this for all 5509 gridcells and I’m satisfied that the sampling is a sensible way of doing the comparison). A comparison of observed eigenvalues to those from random matrices is the sort of thing recommended by Preisendorfer.
Comparison of Eigenvalues for AVHRR and Random data. Proportion of variance of first 200 eigenvalues shown on log scale .
This is actually a rather remarkable pattern. It indicates that the AVHRR data has less loading in the first few PCs than spatially autocorrelated data from an Antarctic-shaped disk and more loading in the lower order PCs.
This entry was written by Steve McIntyre, posted on Jun 2, 2009 at 11:53 AM, filed under General, Steig et al 2009 and tagged chladni, North, separable, Steig.
### 49 Comments
1. Steve J
In North et al., the error bar width is given by Eq. 24. It’s proportional to each particular eigenvalue. If you look closely at Fig. 4 of North, the error bar widths are smaller for smaller magnitude eigenvalues than for larger magnitude eigenvalues. The widths don’t vary much because the largest eigenvalue is only 40% greater than the smallest, but the variation is there.
In Steig’s figure (in his RC “tutorial” post and above), the error bar widths appear equal in spite of the fact that the ratio of the largest to the smallest eigenvalues are about 50 for the satellites and 100 for the weather stations. Whatever is being used to compute the error bar widths, it isn’t Eq. 24 of North. However, Steig’s comments seem to imply that his formula is the one being used to compute how many PCs to use.
• Ryan O
Re: Steve J (#1), He’s using a log scale on the graph, which is why they look the same.
• Steve J
Re: Ryan O (#2), You’re right. My mistake.
2. Steve McIntyre
I’ve edited this slightly, cropping Steig’s diagram, to show the overlap of eigenvalues for the PC2 and PC3 – which are thus not “statistically separable” (regardless of the scale of the graph.)
3. richard
It seems to me that he’s saying that wherever you draw the line in retaining PC’s, it needs to be between “statistically separable” eigenvalues. So you can’t just take 2 PC’s because 2 and 3 aren’t separable. But you can take just 3, because PC 3 and PC 4 are separable.
Likewise, PC 4, 5, 6 and 7 aren’t separable. But 7 is from 8 and 3 is from 4. So you can choose either 3 PC’s or 7 PC’s.
So in other words, Steig is claiming that PC 2 and 3 are separable from the rest not from each other, which may be an idiosyncratic usage, but I’m not sure he’s confused about North.
Am I missing something?
-rv
• Steve McIntyre
Re: richard (#5),
Re-reading Steig’s excerpt, I think that you’re right about your interpretation of what Steig’s doing. It’s not an approach that is “standard” in principal components literature – Ryan’s listing of PC retention methods does not include it. It’s not an approach that Mann used in MBH, though he also said that he used the standard method.
Your interpretation opens up an interesting point in terms of Ryan’s PC=13 reconstruction. The first 13 PCs are “separable” according to the above diagram.
I’m revising the thread to correct my reading of the article (and will post any deleted material in a comment.)
4. Posted Jun 2, 2009 at 12:46 PM | Permalink | Reply
The whole business about North and ‘standard’ seems far too general for PCA. Certainly there are processes with multiple axes of variance which can violate these criteria. I can’t get my head around any concept of a pre-determined number of PC’s which guarantees a good result. Some methods come close.
Steve’s hypothesis of random distributions on a map of antarctica is an extension of the broken stick method (as far as I understand) with the added complexity that the continental shape and auto-correlation is important in generating random data.
Ryan showed me this link before which describes a number of methods and the limitations. Sorry if everyone has already seen it.
http://labs.eeb.utoronto.ca/jackson/pca.pdf
• Steve McIntyre
Re: Jeff Id (#6),
Jeff, the linked article has a pretty exhaustive survey of stopping rules, but, as far as I can tell, none of them correspond to Steig’s “standard” approach, which is not set out in that form in North et al 1982. Despite all my reading about PCs over the past few years, I’d never seen Steig’s “standard” method before in the statistical literature – though perhaps it subsists within the climate subculture.
5. richard
Without having read North in full, I’m also interested in what it has to say about chained inseparable eigenvalues (to make up a term). In other words, PC 4 is not “statistically separable” from PC 5 and PC 5 is not separable from PC 6, but PC 4 is separable from PC 6.
The key insight of inseparability is the fact that a linear combination of the (1 or more) inseparable eigenvectors is also an eigenvector with the same eigenvalue (within margins of error). But in the case above, will linear combinations of PC 4, 5, and 6 be eigenvectors? And if so, what will their eigenvalues be?
• RomanM
Re: richard (#7),
You are quite correct in the statement that any linear combination of eigenvectors corresponding to the same eigenvalue is also an eigenvector (corresponding to that same eigenvalue).
In such a situation, the PCs are calculated to form an orthogonal basis for the vector space spanned by the eigenvectors corresponding to that common eigenvalue. However, the answer is not unique because any rigid rotation (i.e. one which retains the orthogonality) of those basis vector PCs will produce a new set of PCs.
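(Editorial aside: a tiny numerical illustration of this point, with an arbitrary rotation angle.)

```python
# Sketch: with a repeated eigenvalue, rotated eigenvectors are still eigenvectors.
import numpy as np

A = np.diag([3.0, 1.0, 1.0])          # eigenvalue 1 has a 2-dimensional eigenspace

v1 = np.array([0.0, 1.0, 0.0])
v2 = np.array([0.0, 0.0, 1.0])

t = 0.7                                # any rotation angle within the eigenspace
u1 = np.cos(t)*v1 + np.sin(t)*v2
u2 = -np.sin(t)*v1 + np.cos(t)*v2

print(np.allclose(A @ u1, 1.0*u1), np.allclose(A @ u2, 1.0*u2))   # True True
```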
• richard
Re: RomanM (#10),
Right, that makes perfect sense to me when there are 3 PCs that share an eigenvalue (within error bars). But I was specifically talking about a situation where the first and second are “the same” (i.e. inseparable) and the second and third are the same, but the first and third are separable.
When you are talking true equality of eigenvalues, this is not a problem, either all three are equal or they aren’t. So you might have three eigenvectors with eigenvalues of 0.5.
But when you extend it as North did to include error bars, you can get the situation I described. To extend my example, you might get eigenvalues of 0.6, 0.5 and 0.4, all plus or minus 0.075. How does this affect the situation given that the first and third are separable?
With more inseparable PCs (as in Steig), the problem gets bigger. If you have 7 PCs, each inseparable from the next, the first might have a very different eigenvalue than the seventh.
• Ryan O
Posted Jun 2, 2009 at 2:02 PM | Permalink
Re: richard (#12), The problem with blindly using North is that he does not define how close eigenvalues have to be to be considered “separable”. The closer they are together, the more likely they represent a mixed mode. But this is not necessarily the case, as the loads of small eigenvalues show.
.
In my opinion, an approximate truncation point should be determined based on the purpose of the analysis (i.e., whether you are simply looking to reduce the size of a data set or whether you are looking to extract individual components for analysis), using some of the heuristic selection tools (like in the Jackson article) as a guide, and supplementing with North to minimize the chance of splitting important modes.
• RomanM
Posted Jun 2, 2009 at 2:45 PM | Permalink
Re: richard (#12),
You are confusing the sample PCs with the population PCs.
Statisticians would model a situation with a population from which the multivariate sample of observations was taken. That population would have some sort of correlation structure. In the simplest case, it might be that all of the variables are independent and have equal variability. In that case, all of the population eigenvalues would be exactly equal to one.
However, the sample would display various levels of (spurious) correlation and it would very likely be that the eigenvalues of the sample correlation matrix would all be different (although most of those differences might be small). Various tests have been devised to decide whether the differences in the sample eigenvalues represent real differences in the population. Such tests should be viewed as giving conclusions of either “could be the same (i.e. I can’t tell the two eigenvalues apart)” or “the eigenvalues are different (i.e. in the Steig vocabulary, “separable”)”.
The interpretation in the case you describe, for example, is “I can’t tell that 1 and 2 differ and I can’t tell that 2 and 3 are different, but 1 and 3 are far enough apart to decide that they are not the same”. Even when all of the population eigenvalues are different, it is quite possible that I might not be able to decide this for each comparison because the amount of information in the sample is too small.
This sort of conclusion can also arise quite often when doing multiple comparisons for factors in an analysis of variance as well. It should not be viewed as contradictory.
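(Editorial aside: a quick simulation of exactly this situation, with all population eigenvalues equal to one yet the sample eigenvalues scattering widely.)

```python
# Sketch: even when all population eigenvalues equal 1 (iid variables), the
# sample correlation matrix's eigenvalues scatter well away from 1.
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(100, 20))         # 100 observations of 20 independent variables
evals = np.linalg.eigvalsh(np.corrcoef(X, rowvar=False))[::-1]
print(evals[:5], evals[-5:])           # largest ~2, smallest ~0.3, none exactly 1
```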
6. Steve McIntyre
I’ve edited the above post in light of comments 5 and 7. I substantially reorganized the thread, adding some new material and deleted the following comment:
In February, I’d noticed and been puzzled by the use of the term “statistically separable” in Steig et al (2009) – as opposed to the more usual “statistically significant”. I traced this usage back to North et al (1982) and observed a couple of months ago that it deals with an entirely different problem than Steig used it for in yesterday’s post. Indeed, the diagram in Steig’s post demonstrates exactly the opposite of what Steig claims – it demonstrates that the PC2 and PC3 are not “statistically separable” according to North et al (1982) criteria.
Steig et al 2009 stated:
The first three principal components are statistically separable and can be meaningfully related to important dynamical features of high-latitude Southern Hemisphere atmospheric circulation
I’ve observed elsewhere that there is considerable reason to interpret PC1-PC3 as no more than Chladni patterns. Interpreting North et al 1982 literally, the PC2 and PC3 are not statistically separable and the statement in Steig et al 2009 is not correct as it stands. However, reader richard has convinced me that Steig had a different point in mind – and, rather than parsing North et al 1982, I’ve followed this interpretation in the restatement above, so that comments are at least apples and apples as much as possible.
7. Nick Moon
I’ve been following the various Steig et al 2009 threads for some time, and one thing really puzzles me. I wish I knew a bit more statistics but given that I don’t – how is any use of PCA justified at all?
We have a load of data – it’s a measure of temperature (or something close to it). For simple people (like myself) there is a simple way forward: use the temperature data. If you use the temperature data you can then plot it and perhaps observe a trend (like temperature going up or going down). Sorry, this is all a bit basic compared to the level you statisticians work at.
However, in this paper, apparently, instead of using the temperature we use the first n PCs. How can this ever be an improvement? Instead of using the actual temperature data – which we actually have – we put it through some statistical mincing machine and use some numbers that come out of the other end.
In a sausage factory, if you want to know the length of a sausage, you put it against a ruler and measure it. You don't mince it back up and then measure the length of the mince.
In the most famous (or infamous) example of PCA discussed here, MBH98, there was at least some physical argument for using PCA. (I am about to say something positive about MBH98 – here of all places – weird!) It could be argued that tree ring growth isn’t directly a measure of temperature. It’s a measure of how tree friendly conditions were at that time. That includes temperature but may also include other things, nutrients, water table, rainfall, CO2 levels, the absence of deer whatever. In MBH98, tree-ring growth wasn’t used – instead the first PC was used. There is a justification for this – the 1st PC represented the strongest of many signals that the tree was measuring. This was assumed to be temperature.
So my question is this. What physically does the sum of the first 3 PCs represent? It can’t be temperature – to get that you have to add all the PCs back together. And what physically do the remaining PCs represent? And is it OK to throw that information away?
• Gerald Machnee
Re: Nick Moon (#14),
In MBH98, tree-ring growth wasn’t used – instead the first PC was used. There is a justification for this – the 1st PC represented the strongest of many signals that the tree was measuring. This was assumed to be temperature.
I do not see how you can say there was justification for this, if you read all the papers under Hockey Stick Studies on the left in this blog.
• Nick Moon
Re: Gerald Machnee (#35),
I didn’t mean to suggest that it was a valid justification. Perhaps I should say, it’s how they justified it to themselves. But I remain confused as to how you can choose to ignore the actual temperature data that you have and instead use something derived from, but clearly different from, the temperature.
With MBH98, you could claim (not saying it would be true – but you can make the claim) that PC1 was close to temperature – being the strongest component of tree-ring growth. You could also choose to argue that the remaining PCs represented other factors – not temperature.
I can’t see in this case how you can make any such ‘justification’ for using PCs.
• Gerald Machnee
Posted Jun 3, 2009 at 7:02 AM | Permalink
Re: Nick Moon (#36),
I will not discuss it as this has been gone over by Steve many times, but the Team used it as a display of temperature and got it into IPCC.
• Steve McIntyre
Re: Nick Moon (#14),
Nick, your question is reasonable enough, but, as Gerald says, this isn’t a good thread to interject it. In addition, if you want an explanation of how PCs applied to tree ring networks are supposed to yield something sensible, you really should ask Michael Mann or Caspar Ammann. The methodology more or less arrives from outer space in their articles with the only argument being how many PCs to retain. And the issue is always couched with bristlecones in the background. If you remove Graybill bristlecones, then no one really cares about PCs anyway. The entire debate is about magic thermometers – but this has been discussed in many threads and I don’t want to spend time on it right now.
8. scp
I’m still stuck on the concept of “overfitting”. I thought I understood PCAs, eigenvalues, eigenvectors and similar mysticisms when I took Numerical Linear Algebra class. It seems I was wrong.
For Steig to try to apply the idea of overfitting the order of a polynomial equation to the number of PCAs just seems like a complete misfit to me(?). As I recall, adding more PCAs always gets you at least a little closer to the matrix that you’re reconstructing. After some number of PCAs, the incremental change from using more might be imperceptible, but adding PCAs is never going to take you further from the goal in the way increasing the order of a polynomial equation does, is it? It seems to me that overfitting isn’t really even a relevant idea in the context of PCA. Am I missing something?
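scp's point can be checked directly. The following small sketch uses a random matrix rather than the AVHRR data: the truncated-SVD reconstruction error of a fixed matrix can only decrease as more PCs are retained, so whatever danger exists here, it is not the polynomial-regression kind of overfitting at this step.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 20))        # stand-in for the data matrix being reconstructed

U, s, Vt = np.linalg.svd(X, full_matrices=False)
for k in range(1, 21):
    Xk = (U[:, :k] * s[:k]) @ Vt[:k, :]  # rank-k reconstruction from the first k PCs
    print(k, round(np.linalg.norm(X - Xk), 4))   # Frobenius error, non-increasing in k
```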
• RomanM
Re: scp (#15),
I started writing a response to your post several hours ago as follows:
No, you may still understand your linear algebra ( I’m giving you the benefit of the doubt ).
I will try to interpret Dr. Steig’s lesson so that it makes sense (somebody correct me if I missed the boat):
…
After a while, I realized that I could NOT frame the regression example in a meaningful fashion in the context of the Steig et al. (2009) paper. It just didn’t fit! Of course, if there was a place in the RegEM procedure where they did PCA regression, i.e. used the PCs (which were limited to 3 by the “standard rule”) as predictors for imputing the missing values, I could have pointed to it and said, “Yes, there is the danger of overfitting because you have too many predictors!”. But that is not the case anywhere in the procedure.
Perhaps Dr. Steig or whoever came up with the example could explain to me how it relates to what was done – I am always open to learning new things. (For quite a while I lurked more than I posted. It was only when I became more comfortable that I actually had learned enough to have a handle on what climate statistics was about that I decided I would add a comment or two.)
Frankly, by limiting the number of PCs a priori to three, the only thing that seems sure is that they would pretty much no longer have enough information to possibly describe the complexities of a large area with a multitude of topographies and diverse climate influences – the “smearing” referred to by Jeff and Ryan. Overfitting indeed!
• Ryan O
Re: RomanM (#23), I almost responded to the overfitting part at RC, but the only thing I could come up with was, “That doesn’t really apply.” It does apply quite nicely to the RegEM portion, but Steig didn’t seem to take issue with our RegEM. This is odd, since the results are more sensitive to RegEM settings than retained AVHRR PCs. In the end, I decided it wasn’t that important, since our verification stats indicate that overfitting is not a major problem.
.
Also – I’m not sure if anyone else realizes this – but Steig automatically assumed you need to set regpar equal to the number of retained AVHRR PCs. I don’t think it ever occurred to them to look at the ground station imputation as a separate animal from the AVHRR data – and they are quite different. That could be why they disregarded using the higher-order PCs. Keeping regpar the same as the AVHRR PCs leads to massive overfitting during RegEM. I compared his graphs of trends by PCs to my table I generated for the “Antarctic Tiles” post and they match rather well to the regpar = # PCs entries.
.
Re: Steve McIntyre (#25), I note the implication in Smith: As long as your EOFs are based on densely populated data (like the AVHRR data), you almost can’t retain too many. I’ve done reconstructions using up to 25 PCs. The answers don’t change. I should try more, just to see what happens.
.
If, however, you form the EOFs out of sparsely populated data, then I imagine the fit between the sample EOFs and population EOFs is subject to increased sampling error, leading to less faithful representations in the higher-order modes.
• Posted Jun 2, 2009 at 7:12 PM | Permalink
Re: Ryan O (#26),
If, however, you form the EOFs out of sparsely populated data, then I imagine the fit between the sample EOFs and population EOFs is subject to increased sampling error, leading to less faithful representations in the higher-order modes.
I think this is an important point, it’s almost like overfitting is a misnomer. It’s more of a convergence to one of many local minima in my mind. Overfitting implies a more exact solution with too many dof, however when the data is this sparse it has a different connotation to me. Regularization is designed to help EM converge to a reasonable solution with more extensive missing data. In this case half is missing and the premise is that there are going to be cases when the solution is unreasonable.
Since we have a situation where we know reasonable spatial covariance/distance doesn’t occur until at least 7 PCs, and we also know RegEM ttls really distorts the signal at that point (extreme fluctuations resembling overfitting), other regularization is required. One of the things Ryan has solved through his piecemeal optimization is the stabilization of the result at higher PCs (I’m sure Dr. Steig has missed this). I’ve been working on a post about that but we are currently in intermission in the Stanley Cup finals so it will have to wait. Go Wings.
9. Peter D. Tillman
North et al:
An obvious difficulty arises in physically interpreting an EOF if it is not even well-defined intrinsically….
For fellow statistics dummies, EOF = Empirical Orthogonal Functions, http://en.wikipedia.org/wiki/Empirical_orthogonal_functions
–from our own handy http://climateaudit101.wikispot.org/Glossary_of_Acronyms
Cheers — Pete Tillman
10. Jason
We should not forget that statistical procedures are merely a tool for scientific understanding; not an end in and of themselves.
Do PCs 4 through 13 contain important information about the geographic distribution of Antarctic temperature trends? Or are they just noisy residue?
It’s not clear to me that Steig et al have ever argued that PCs 4 through 13 are noise. Their own comments seem to suggest otherwise.
If so, it hardly seems relevant which authority is cited for excluding the extra PCs. Their exclusion only serves to obscure the underlying physical reality by attributing much of the very rapid peninsular warming to regions that have actually gotten colder or displayed very modest warming trends.
11. Craig Loehle
It seems to me irrelevant what a “standard” method says about retaining PCs when RyanO shows that the results change completely with different numbers of PCs retained. In this case they are clearly not “noise” and have not been shown to be “noise” by Steig et al. That is, the result (the temperature map) is NOT converging with the addition of more PCs but is unstable with this addition.
12. PhilH
It is finally beginning to penetrate this non-mathematical, non-statistician’s brain that, strangely enough in this day and age, although we, or someone, may have observed that parts of West Antarctica have trended warmer in recent years, and that thermometers in Central and Eastern Antarctica have shown cooling during the same period, no one “knows,” and, given the limits of our technical resources, together with the remoteness, the size, and apparent persistency of the place, may never “know” whether these basically unobserved and non-verifiable Antarctic conditions, and therefore trends, are any sort of valid evidence for or against the theory of man-made climate change. Or any other kind of climate change. Statements to the contrary, be they in “peer” reviewed journals or on blogs, and no matter how calculated, are nothing more than guesses; and we have no way of knowing whether they are “educated” guesses or not. Antarctica is the head of the pin.
• stan
Re: PhilH (#20),
what he said.
Stats or no stats — there isn’t enough information about vast areas of the continent to make any kind of intelligent claim about trends. This study is a farce, not because of what was or wasn’t done with however many PCs, but because the study purported to make a conclusory statement when there isn’t enough information to do so.
13. Michael Jankowski
Steig says that “One could argue for retaining #4 as well, though it does slightly overlap with #5.” However, in his Figure 1 “eigenvalue spectrum – including error bars,” #4 does NOT overlap with #5.
14. Steve McIntyre
I mentioned Smith et al 1996 recently in connection with SST. This is an extremely important article regarding the use of PCs for interpolating climate fields and I urge readers to consult it. I should have mentioned it in my post.
The problem in Smith et al – reconstructing SST in periods with sparse data – is really rather similar to Steig and it’s surprising that it wasn’t kept more firmly in view. In Steig’s recent post at RC, instead of reviewing Smith et al 1996, he merely provides an arm-waving argument using an example from regression that is at best an analogy.
Smith et al refer to North et al 1982 and the problem of separability. Their eigenvalue plots look a lot like the Antarctic AVHRR plots. Nonetheless, they consider a very large number of modes (PCs), as shown in the following table:
As noted previously, they mention that failure to include sufficient modes can cause problems.
• AnonyMoose
Re: Steve McIntyre (#22),
So by claiming that North promotes a few-PC statistical cutoff point, Steig is ignoring deeper analysis? Smith went to the trouble of comparing various PCs against observations, in Smith’s case many regional comparisons in order to find which PCs in each region contain the regional signals. It’s possible that by treating the continent as a single region, Steig’s first 3 PCs are driven by something less relevant to temperature than they should be. If Steig had applied Smith’s regional analysis, he might have been able to create better regional approximations. So he might be using something affected by barometric pressure at the higher continental altitudes, and smearing peninsula temperatures across the entire continent instead of limiting the proper peninsula patterns to the peninsula.
Admittedly, Steig has to deal with having much less data and many fewer locations than Smith et al did. For a back-of-the-envelope estimate, what Steig did is reasonable. But because the rough estimate doesn’t show dramatically different results, it shouldn’t have been published and something more detailed should have been done.
• Steve McIntyre
Re: AnonyMoose (#41),
I’m not so sure that Smith’s SST data for the 19th century is any better than Antarctic station data in the 1960s. I’d be surprised if it was. The enterprises are similar.
Steig promoted his calculation as follows:
“People were calculating with their heads instead of actually doing the math,” Steig said. “What we did is interpolate carefully instead of just using the back of an envelope. While other interpolations had been done previously, no one had really taken advantage of the satellite data, which provide crucial information about spatial patterns of temperature change.”
And you say that his calculation is OK for the “back of the envelope”.
15. Steve McIntyre
Two more quotes from Smith et al 1996 – I’ve mentioned the 2nd one before. I don’t see how Steig’s 3 PCs can be construed as being “standard” under Smith et al 1996.
We have discussed refinements to the technique in the tropical Pacific that allow it to be applied directly to COADS super observations using 24 EOFs. It was initially surprising to us that the higher modes could give useful skill even though they were not individually separated. However our results show the higher modes do contribute important combined skill…
A compromise choice of a much lower number of fitted modes for an earlier period may lead to another difficulty, however. This is the situation in which one or more well sampled structures are spanned by higher order modes not retained for the reconstruction fit. In this instance the observations will either be ignored or misinterpreted…. It is not a problem over our period because the relatively dense data compared to earlier periods allowed us to always use a large number of modes without overfitting.
16. Steve McIntyre
Ryan O (#26),
Also – I’m not sure if anyone else realizes this – but Steig automatically assumed you need to set regpar equal to the number of retained AVHRR PCs. I don’t think it ever occurred to them to look at the ground station imputation as a separate animal from the AVHRR data – and they are quite different. That could be why they disregarded using the higher-order PCs. Keeping regpar the same as the AVHRR PCs leads to massive overfitting during RegEM
Hmmm. Very interesting.
There seem to be good reasons to carry more PCs from the AVHRR and no valid reason to stop at PC=3. I agree that there are reasons not to increase regpar. Actually, I can think of an argument for regpar stopping at 1.
As he doesn’t seem to understand the ingredients, I guess he didn’t realize that the mixture might not work very well.
• Michael Jankowski
Re: Steve McIntyre (#27), so what is the rationale behind selecting a value for regpar?
And going back to my comment in #21, why does Steig claim that #4 and #5 overlap when the uncertainty bars in the exhibit clearly do not? 4 PCs seems to make more sense from “Figure 1″ than 3. And now that I look closer, 6 PCs seems to make even more sense, and possibly 7. Or even 10, where the weather station eigenvalue has a pretty big drop-off between #10 and #11, and the eigenvalue change between #10 and #11 for the satellite data looks like it may be “statistically separable.”
You don’t get “statistical separability” in the weather station eigenvalues until you get to the difference between #7 and #8 – where there is also “statistical separability” in the satellite eigenvalues, too. So even if stopping at 3 PCs is somehow justified based on the satellites alone, does it not matter that the weather stations aren’t hitting “noise” yet and should have more PCs?
17. RomanM
I think that Dr. Steig has it somewhat backwards. One has to remember that it is the values of the satellite PCs that are BEING predicted in the pre-satellite era and not the other way around. If anything can cause overfitting, it is having too many stations doing the predicting and (unless I am missing a step where different weights are used for each of the stations and PCs – I don’t recall seeing any weights specified), each of the stations and each of the PCs has an equal say so the stations tend to predominate. For that and reasons of blackbox non-linearity, I doubt TTLS can be trusted anyway.
I can’t see any theoretical problem with increasing the number of PCs calculated from the original data to input into RegEM. Put in more satellite information and let the procedure sort it out. Jeff’s observation about local minima with higher values of regpar is likely correct (and exacerbated further through the enormous number of imputed values in the process).
In retrospect, although the personal attacks by his henchmen were pretty animated, I thought his entire defence of the analysis was shaky.
• Ryan O
Re: RomanM (#30),
I think that Dr. Steig has it somewhat backwards. One has to remember that it is the values of the satellite PCs that are BEING predicted in the pre-satellite era and not the other way around.
.
Dammit! I was hoping to bust that one out myself and impress people. Hahaha.
.
Seriously, though, I am presently completing my description of Steig’s method (yes – actually drafting notes for a manuscript) where I describe his method as extrapolation. Has important consequences when interpreting the data.
• Ryan O
Re: Ryan O (#31), Err, that should be “interpreting the results”.
• Posted Jun 2, 2009 at 9:26 PM | Permalink | Reply
Re: RomanM (#30),
I don’t know that anyone has looked at Ridge regression yet as an alternative. If high order RegEM ttls overweights low influence PCs in this case, do you have any thoughts on ridge regularization for minimizing the low influence PCs? As I understand it, the PCs are forced to follow an exponentially decreasing order of importance, but I’ve only done a bit of reading on it. I don’t want to waste too much time without good reason but it might be a more appropriate regularization.
SteveM
I also have been looking at R Kriging functions. I’m not sure they will provide a better baseline than area weighting but Kriging is well known and IMO a lot better than all this EM fanciness. If you had any advice, it would be helpful.
• Steve McIntyre
Re: Jeff Id (#33),
Jeff, I wouldn’t bother wasting any time with RegEM – ridge. There’s something a bit wrongheaded in their enterprise that this sort of thing won’t patch – I’ve got a post in the works which may elucidate things a bit.
• RomanM
Re: Jeff Id (#33),
Although I don’t have much experience with ridge regression, it is apparent that there are difficulties with bias in the results and with the ability to calculate confidence bounds and do testing. Some of this inherent difficulty seems to stem from the fact that one is simply trying to squeeze too many estimates from limited data. As in PTTLS, there is also the fact that the method leaves itself open to “manipulation” through the choice of parameters by the analyst.
18. Steve McIntyre
Jeff, in a regression setting, ridge regression is a “blend” of the partial least squares coefficients (i.e. correlation-weighted) and OLS coefficients:
$\hat{\beta}_{RR}= (1-k) \hat{\beta}_{OLS}+k\hat{\beta}_{PLS}$
They can be viewed as a path in $R^m$ (where you have m coefficients) from $\hat{\beta}_{PLS}$ to $\hat{\beta}_{OLS}$, a perspective that is consistent with Stone and Brooks 1990, about which I’ve written before.
The truncated SVD coefficients also make a path from $\hat{\beta}_{PLS}$ to $\hat{\beta}_{OLS}$. In truncated SVD, the matrix $(X^TX)^{-1}$ is used in the form $V_k S_k^{-2} V_k^T$, where $V_k$ is the k-truncated eigenvector matrix from the SVD of $X=USV^T$.
When k=n, you have $\hat{\beta}_{SVD:n}= (VS^{-2}V^T)X^Ty=\hat{\beta}_{OLS}$.
You can construe k=0 as yielding:
$\hat{\beta}_{SVD:0}= (I) X^Ty=\hat{\beta}_{PLS}$
The intermediate values yield line segments that will follow a path in $R^m$ that is a little different from the ridge regression path, but they will be “close” to one another in the sense that the coefficients become increasingly “overfitted”. The term “overfitted” is not as operational a term as one would like as it seems like a value judgment. In practical terms, the behavior of the inverse borehole coefficients was a good illustration as they become increasingly unstable as k increases. Operationally, this means that $||\beta_{SVD:k}||$ increases as k increases, a point made in Fierro as well.
In the barplot terms shown in the post, my guess is that “overfitted” coefficients will have increasingly wild mixtures of positive and negative weights.
I’m still trying to picture the path of TTLS coefficients in these terms. I think that they are going from the PC1 coefficients (k=1) to the TLS coefficients (k=n+1). I think that the TLS coefficients are going to be “close” to the OLS coefficients in most cases that we’re going to encounter and that the introduction of TLS is the introduction of a more elaborate and murkier technique by people that don’t really understand what they’re doing.
If you went to regpar=1, I think that you’d end up (in a backdoor route) to coefficients that are the same as the GLS coefficients – which I’m pretty sure could be shown to be closer to the maximum likelihood estimate than regpar=3 under most circumstances.
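As an illustration of the two coefficient "paths" described above (a sketch on synthetic data, not a calculation on the Antarctic data): both the truncated-SVD coefficients, as k grows, and the ridge coefficients, as the penalty shrinks, move from heavily damped solutions toward OLS, and the coefficient norm grows along the way.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 100, 8
X = rng.standard_normal((n, m)) @ np.diag([3, 2, 1.5, 1, 0.5, 0.2, 0.1, 0.05])
y = X @ rng.standard_normal(m) + rng.standard_normal(n)

U, s, Vt = np.linalg.svd(X, full_matrices=False)

# Truncated-SVD path: beta_k = V_k S_k^-2 V_k^T X^T y ; k = m reproduces OLS.
for k in range(1, m + 1):
    Vk, sk = Vt[:k, :].T, s[:k]
    beta_k = Vk @ np.diag(1.0 / sk**2) @ Vk.T @ X.T @ y
    print("TSVD  k =", k, " ||beta|| =", round(np.linalg.norm(beta_k), 3))

# Ridge path: beta = (X^T X + alpha I)^-1 X^T y, from heavy to light penalty.
for alpha in [1e3, 1e1, 1e-1, 1e-3]:
    beta_r = np.linalg.solve(X.T @ X + alpha * np.eye(m), X.T @ y)
    print("ridge alpha =", alpha, " ||beta|| =", round(np.linalg.norm(beta_r), 3))
```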
19. Steve McIntyre
It’s also not the case that more PCs is necessarily “deeper”. I’m not absolutely sold on the idea that going past 1 PC is a good idea. If our ultimate interest is an Antarctic trend, then the statistical model is more or less T + noise – and for that sort of model, something that’s more like an average or a PC1 might well be better than starting down the lower PCs.
But if you use lower order PCs, then surely Steig should have considered relevant literature like Smith et al 1996, which warns against the precise situation that occurs. Smith et al 1996 is even cited by coauthor Mann in MBH98 and Smith’s methods are discussed at length in Schneider 2001. This is not an esoteric reference. It’s the sort of thing that maybe he should discuss in his classes on PCs.
• Michael Jankowski
Re: Steve McIntyre (#43), In the case of satellite data, it sure looks like a case could be made that PC1 is where the analysis should stop. Whether that means to truly just use PC1 or if PCA should just be avoided for this type of instance altogether, I don’t know.
• Ryan O
Re: Michael Jankowski (#44) and Re: Steve McIntyre (#43),
.
The answer to that depends on what you are trying to do. If you are attempting to obtain an overall trend for the continent, 1 PC may work quite well. If you are attempting to obtain geographic information about temperature, 1 PC does not provide the needed resolution.
• Steve McIntyre
Posted Jun 3, 2009 at 3:39 PM | Permalink
Re: Ryan O (#45), Yep, we obviously don’t disagree on this.
• D. Patterson
Posted Jun 3, 2009 at 4:29 PM | Permalink
Re: Ryan O (#45),
If you are attempting to obtain an overall trend for the continent
Given the fact Antarctica has significantly different climate zones with divergent climate trends, don’t Steig’s methods and/or a PCA with any number of PCs give inherently incorrect results? Don’t the climate zones need to be classed with a k-PCA at a minimum, with special handling to prevent the low variability and the small number of interior Antarctic sites being disregarded in the transformations?
• Ryan O
Posted Jun 3, 2009 at 5:25 PM | Permalink
Re: D. Patterson (#47), Actually, #48 pretty much answered that question. I believe Steve M has several good expositions on this relative to both his MBH work and what he’s done with Steig. The simple answer is that the extracted modes are unlikely to be interpretable as physical processes. The modes simply explain, in order of most to least, the amount of variation in the original data set. The more you extract, the greater geographic resolution you have (and, also, the more noise you include). There’s always a tradeoff, but in this case – where the idea is to simply reduce the AVHRR set to a reasonable size for computation – the only penalty for extracting too many is that your computer runs longer before returning an answer.
• JS
Re: Steve McIntyre (#43),
If that was truly your interest, principal components would not be my first thought; it would be frequency domain analysis where you chop off the higher frequency components. There at least what you pull out has a meaningful interpretation that is robust (although only with infinite data if you want to be technical!). The first principal component is just an orthogonal component that has the greatest weight. It is not fixed a priori and you can choose multiple rotations of your basis that affect the results. PCs have no inherent meaning and arguments about what they actually are never seem to rise above hand-waving in these contexts. (In engineering and experimental sciences you have a chance of attaching robust meaning to a PC, but in others the chance seems vanishingly small.)
|
http://mathoverflow.net/revisions/90893/list
|
http://projecteuclid.org/DPubS?service=UI&version=1.0&verb=Display&handle=euclid.jmsj/1261734945
Chevalley shows the following related statement (Remarque p. 39): let $p$ be a prime, $K$ a field of characteristic different from $p$, and $\zeta$ a $p^e$-th primitive root of $1$ in some algebraic extension of $K$, where $e\geq 1$ is any integer. Assume moreover that $-1$ is a square in $K$ if $p=2$. Then if an element $x\in K(\zeta)$ is a $p^e$-th power, then it is already a $p^e$-th power in $K$.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 30, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.7214380502700806, "perplexity_flag": "head"}
|
http://www.nag.com/numeric/cl/nagdoc_cl23/html/C02/c02intro.html
|
# NAG Library Chapter Introduction: c02 – Zeros of Polynomials
## 1 Scope of the Chapter
This chapter is concerned with computing the zeros of a polynomial with real or complex coefficients.
## 2 Background to the Problems
Let $f\left(z\right)$ be a polynomial of degree $n$ with complex coefficients ${a}_{i}$:
$f(z) \equiv a_0 z^n + a_1 z^{n-1} + a_2 z^{n-2} + \cdots + a_{n-1} z + a_n, \qquad a_0 \neq 0.$
A complex number ${z}_{1}$ is called a zero of $f\left(z\right)$ (or equivalently a root of the equation $f\left(z\right)=0$), if
$f(z_1) = 0.$
If ${z}_{1}$ is a zero, then $f\left(z\right)$ can be divided by a factor $\left(z-{z}_{1}\right)$:
$f(z) = (z - z_1) f_1(z)$ (1)
where ${f}_{1}\left(z\right)$ is a polynomial of degree $n-1$. By the Fundamental Theorem of Algebra, a polynomial $f\left(z\right)$ always has a zero, and so the process of dividing out factors $\left(z-{z}_{i}\right)$ can be continued until we have a complete factorization of $f\left(z\right)$:
$f(z) \equiv a_0 (z - z_1)(z - z_2)\cdots(z - z_n).$
Here the complex numbers ${z}_{1},{z}_{2},\dots ,{z}_{n}$ are the zeros of $f\left(z\right)$; they may not all be distinct, so it is sometimes more convenient to write
$f(z) \equiv a_0 (z - z_1)^{m_1}(z - z_2)^{m_2}\cdots(z - z_k)^{m_k}, \qquad k \leq n,$
with distinct zeros ${z}_{1},{z}_{2},\dots ,{z}_{k}$ and multiplicities ${m}_{i}\ge 1$. If ${m}_{i}=1$, ${z}_{i}$ is called a simple or isolated zero; if ${m}_{i}>1$, ${z}_{i}$ is called a multiple or repeated zero; a multiple zero is also a zero of the derivative of $f\left(z\right)$.
If the coefficients of $f\left(z\right)$ are all real, then the zeros of $f\left(z\right)$ are either real or else occur as pairs of conjugate complex numbers $x+iy$ and $x-iy$. A pair of complex conjugate zeros are the zeros of a quadratic factor of $f\left(z\right)$, $\left({z}^{2}+rz+s\right)$, with real coefficients $r$ and $s$.
Mathematicians are accustomed to thinking of polynomials as pleasantly simple functions to work with. However, the problem of numerically computing the zeros of an arbitrary polynomial is far from simple. A great variety of algorithms have been proposed, of which a number have been widely used in practice; for a fairly comprehensive survey, see Householder (1970). All general algorithms are iterative. Most converge to one zero at a time; the corresponding factor can then be divided out as in equation (1) above – this process is called deflation or, loosely, dividing out the zero – and the algorithm can be applied again to the polynomial ${f}_{1}\left(z\right)$. A pair of complex conjugate zeros can be divided out together – this corresponds to dividing $f\left(z\right)$ by a quadratic factor.
Whatever the theoretical basis of the algorithm, a number of practical problems arise; for a thorough discussion of some of them see Peters and Wilkinson (1971) and Chapter 2 of Wilkinson (1963). The most elementary point is that, even if ${z}_{1}$ is mathematically an exact zero of $f\left(z\right)$, because of the fundamental limitations of computer arithmetic the computed value of $f\left({z}_{1}\right)$ will not necessarily be exactly $0.0$. In practice there is usually a small region of values of $z$ about the exact zero at which the computed value of $f\left(z\right)$ becomes swamped by rounding errors. Moreover, in many algorithms this inaccuracy in the computed value of $f\left(z\right)$ results in a similar inaccuracy in the computed step from one iterate to the next. This limits the precision with which any zero can be computed. Deflation is another potential cause of trouble, since, in the notation of equation (1), the computed coefficients of ${f}_{1}\left(z\right)$ will not be completely accurate, especially if ${z}_{1}$ is not an exact zero of $f\left(z\right)$; so the zeros of the computed ${f}_{1}\left(z\right)$ will deviate from the zeros of $f\left(z\right)$.
A zero is called ill-conditioned if it is sensitive to small changes in the coefficients of the polynomial. An ill-conditioned zero is likewise sensitive to the computational inaccuracies just mentioned. Conversely a zero is called well-conditioned if it is comparatively insensitive to such perturbations. Roughly speaking a zero which is well separated from other zeros is well-conditioned, while zeros which are close together are ill-conditioned, but in talking about ‘closeness’ the decisive factor is not the absolute distance between neighbouring zeros but their ratio: if the ratio is close to one the zeros are ill-conditioned. In particular, multiple zeros are ill-conditioned. A multiple zero is usually split into a cluster of zeros by perturbations in the polynomial or computational inaccuracies.
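The ill-conditioning of multiple zeros is easy to see numerically. The following short sketch is purely illustrative and is not part of the NAG Library (it uses numpy.roots; the functions listed in Section 3 serve the same purpose): perturbing the constant coefficient of $(z-1)^4$ by a relative $10^{-8}$ splits the quadruple zero into a cluster of radius about $10^{-2}$.

```python
import numpy as np

coeffs = np.poly([1.0, 1.0, 1.0, 1.0])   # coefficients of (z - 1)^4: [1, -4, 6, -4, 1]
perturbed = coeffs.copy()
perturbed[-1] *= 1 + 1e-8                # tiny relative change in the constant term

print(np.roots(coeffs))                  # four computed roots very close to 1
print(np.roots(perturbed))               # a cluster of radius ~ (1e-8)**0.25 = 1e-2
```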
## 3 Recommendations on Choice and Use of Available Functions
All zeros of cubic:
- real coefficients: nag_cubic_roots (c02akc)

All zeros of polynomial:
- complex coefficients, modified Laguerre's method: nag_zeros_complex_poly (c02afc)
- real coefficients, modified Laguerre's method: nag_zeros_real_poly (c02agc)

All zeros of quartic:
- real coefficients: nag_quartic_roots (c02alc)
None.
## 5 References
Householder A S (1970) The Numerical Treatment of a Single Nonlinear Equation McGraw–Hill
Peters G and Wilkinson J H (1971) Practical problems arising in the solution of polynomial equations J. Inst. Maths. Applics. 8 16–35
Thompson K W (1991) Error analysis for polynomial solvers Fortran Journal (Volume 3) 3 10–13
Wilkinson J H (1963) Rounding Errors in Algebraic Processes HMSO
|
http://cstheory.stackexchange.com/questions/6704/is-the-integer-factorization-problem-harder-than-rsa-factorization-n-pq
|
# Is the integer factorization problem harder than RSA factorization: $n = pq$?
This is a cross-post from math.stackexchange.
Let FACT denote the integer factoring problem: given $n \in \mathbb{N},$ find primes $p_i \in \mathbb{N},$ and integers $e_i \in \mathbb{N},$ such that $n = \prod_{i=0}^{k} p_{i}^{e_i}.$
Let RSA denote the special case of factoring problem where $n = pq$ and $p,q$ are primes. That is, given $n$ find primes $p,q$ or NONE if there is no such factorization.
Clearly, RSA is an instance of FACT. Is FACT harder than RSA? Given an oracle that solves RSA in polynomial time, could it be used to solve FACT in polynomial time?
(A pointer to literature is much appreciated.)
Edit 1: Added the restriction on computational power to be polynomial time.
Edit 2: As pointed out in the answer by Dan Brumleve, there are papers arguing both for and against RSA being as hard as FACT. I found the following papers so far:
D. Boneh and R. Venkatesan. Breaking RSA may be easier than factoring. EUROCRYPT 1998. http://crypto.stanford.edu/~dabo/papers/no_rsa_red.pdf
D. Brown: Breaking RSA may be as difficult as factoring. Cryptology ePrint Archive, Report 2005/380 (2006) http://eprint.iacr.org/2005/380.pdf
G. Leander and A. Rupp. On the Equivalence of RSA and Factoring regarding Generic Ring Algorithms. ASIACRYPT 2006. http://www.iacr.org/archive/asiacrypt2006/42840239/42840239.pdf
D. Aggarwal and U. Maurer. Breaking RSA Generically Is Equivalent to Factoring. EUROCRYPT 2009. http://eprint.iacr.org/2008/260.pdf
I have to go through them and find a conclusion. Could someone aware of these results provide a summary?
-
1
if I remember correctly, computing $\phi(n)$ or finding out $d$ is equivalent to factoring, but as such there might be some way that RSA is weaker than factoring. In short, solving RSA may not imply solving the factoring problem. No formal proofs are known for them being equivalent (as far as I know). – singhsumit May 24 '11 at 8:28
1
Mohammad, why is FACT not reducible to RSA? – Dan Brumleve May 24 '11 at 10:13
1
Maybe I am misunderstanding something basic. How to show that the existence of an algorithm to factor a semiprime in polynomial time doesn't imply the existence of an algorithm to factor a number with three prime factors in polynomial time? – Dan Brumleve May 24 '11 at 10:28
6
How do you know that is what it amounts to? – Dan Brumleve May 24 '11 at 10:51
7
If there is no poly-time reduction between the two stated problems, then it's going to be hard to show this, right? To prove no poly-time reduction can exist requires that we prove $P\neq NP$. – Fixee May 25 '11 at 3:39
## 3 Answers
I found this paper entitled Breaking RSA May Be Easier Than Factoring which argues for a "no" answer and states that the problem is open.
-
Thanks! I found several other papers with related titles, cross-references. I will post links below. (Edit: links below are ugly. I can't get proper formatting in comments.) – user17 May 28 '11 at 3:27
D. Boneh and R. Venkatesan. Breaking RSA may be easier than factoring. EUROCRYPT 1998. crypto.stanford.edu/~dabo/papers/no_rsa_red.pdf D. Brown: Breaking RSA may be as difficult as factoring. Cryptology ePrint Archive, Report 205/380 (2006) eprint.iacr.org/2005/380.pdf D. Aggarwal and U. Maurer. Breaking RSA Generically Is Equivalent to Factoring. EUROCRYPT 2009. eprint.iacr.org/2008/260.pdf G. Leander and A. Rupp. On the Equivalence of RSA and Factoring regarding Generic Ring Algorithms. ASIACRYPT 2006. iacr.org/archive/asiacrypt2006/42840239/42840239.pdf – user17 May 28 '11 at 3:27
1
I read the abstracts, and the Aggarwal and Maurer paper seems to be about a slightly differerent problem (factoring a semiprime vs. computing the phi function?) The others say explicitly that the problem is open. I suppose it still is unless there is a result more recent than 2006? – Dan Brumleve May 28 '11 at 3:45
My stated interpretation of the Aggarwal and Maurer paper is certainly incorrect (if we can factor the modulus then obviously we know its phi value). But I'm still not sure exactly what it is claiming. – Dan Brumleve May 28 '11 at 4:21
As far as I can see, an efficient algorithm for factoring semiprimes (RSA) does not automatically translate into an efficient algorithm for factoring general integers (FACT). However, in practice, semiprimes are the hardest integers to factor. One reason for this is that the maximum size of the smallest prime is dependent on the number of factors. For an integer $N$ with $f$ prime factors, the maximum size of the smallest prime factor is $\lfloor N^\frac{1}{f} \rfloor$, and so (via the prime number theorem) there are approximately $\frac{f N^\frac{1}{f}}{\log(N)}$ possibilities for this. Thus increasing $f$ decreases the number of possibilities for the smallest prime factor. Any algorithm which works by successively reducing this space of possibilities will then work best for large $f$ and worst for $f=2$. This is borne out in practice, as many classical factoring algorithms are much faster when the number being factored has more than 2 prime factors.
Further, the General Number Field Sieve, the fastest known classical factoring algorithm, and Shor's algorithm, the polynomial time quantum factoring algorithm, work equally well for non-semiprimes. In general, it seems much more important that the factors be coprime than that they be prime.
I think part of the reason for this is that the decision version of factoring semiprimes is most naturally described as a promise problem, and the only ways of removing the promise of the input being semiprime are to either
1. introduce an indexing on the semiprimes (which in itself I suspect is as hard as factoring them), or
2. generalise the problem to include non-semiprimes.
It seems likely that in the latter case the most efficient algorithm would solve FACT as well as RSA, though I have no proof of this. However, a proof is a little too much to ask for, since, given an oracle for RSA, proving that it cannot efficiently solve FACT amounts to proving that $P\neq NP$.
Lastly it is worth pointing out that RSA (the cryptosystem, not the factoring problem you defined above) trivially generalizes beyond semi-primes.
-
2
Joe, I think it would be reasonable to assume that factoring is not in $P$ (and therefore $P \neq NP$) for this question (and then the answer would not imply a break-through complexity result as you stated in the last paragraph). – Kaveh♦ May 26 '11 at 4:06
@Kaveh: I don't think that is enough. We want to show whether or not $P^{RSA} = P^{FACT}$. This question has different answers depending on the assumptions you make. Imagine that in reality P=NP (actually we only need FACT in P, but I wanted to emphasise the connection to P v NP), but we make the assumption that FACT is not in P. Then it is possible to prove that $P^{RSA} = P^{FACT}$ by exhibiting a polynomial time algorithm for the reduction, or to prove that $P^{RSA} \neq P^{FACT}$ by exhibiting a polynomial time algorithm for RSA and using the assumption about the complexity of FACT. – Joe Fitzsimons May 26 '11 at 13:19
I interpreted the question as "$FACT \in P^{RSA}?$". Then if $FACT \in P$ then the answer is trivially yes. So we can assume that $FACT \notin P$, and if we are making a false assumption, then of course we can derive anything. :) – Kaveh♦ May 27 '11 at 3:59
@Kaveh: I believe the two statements of the problem are equivalent in this case. My point is that that it is only potentially possible to prove that $FACT \in P^{RSA}$ without first deciding P vs NP, and not the converse. – Joe Fitzsimons May 28 '11 at 0:10
Wouldn't the following be a polynomial time algorithm for RSA: Start walking through the integers starting with 2 and stopping at sqrt(n) or even n if we want because it's still O(n) steps. (If we reach sqrt(n), return NONE because n is itself prime.) At each step test for whether i divides n, which takes O((ln n)^2) time using school division. For the first number found that divides n, we know it is prime, so call it p. And we also have q (which needs to be tested) from the same division. Now if q is prime (same procedure), we return (p, q) and if not we return NONE. Won't that work and run in O(n * (ln n)^2) time?
-
3
"Polynomial time" in this case means that it should run it a polynomial of log(n), not a polynomial of n. – Dan Brumleve May 24 '11 at 18:07
To add to Dan's comment, if $n$ is the input, then $log(n)$ is the size of the input (roughly, the number of bits in $n$). – user17 May 24 '11 at 18:34
ok, tx! now i understand what the difficulty is. – marshallf May 26 '11 at 2:59
5
To be fair this is a very common mistake. I remember being confused about the same thing when I was first learning about numerical algorithms. – Huck Bennett May 26 '11 at 3:54
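To make the point in the comments concrete, here is a sketch of the procedure proposed above: the outer loop performs roughly $\sqrt{n}$ divisions, and $\sqrt{n} = 2^{b/2}$ for a $b$-bit input, so the running time is polynomial in $n$ but exponential in the input size $\log n$ (for a 1024-bit RSA modulus it would be on the order of $2^{512}$ steps).

```python
def rsa_trial_division(n):
    """Return (p, q) if n = p*q with p, q prime, else None.  Runs in ~sqrt(n) steps."""
    i = 2
    while i * i <= n:
        if n % i == 0:
            p, q = i, n // i          # i is the smallest factor of n, hence prime
            j = 2
            while j * j <= q:         # check that the cofactor q is prime
                if q % j == 0:
                    return None
                j += 1
            return (p, q)
        i += 1
    return None                        # n is prime (or 1): no factorization n = p*q

print(rsa_trial_division(15))          # (3, 5)
print(rsa_trial_division(101 * 103))   # (101, 103)
```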
|
http://mathhelpforum.com/advanced-algebra/76557-quadratic-forms.html
|
# Thread:
1. ## Quadratic Forms???
I'm not really sure if this is to do with quadratic forms or not, but tbh I have no idea of where to start with this question. I think it has something to do with diagonalising a matrix somewhere along the line, but apart from that I'm clueless.
by using suitable transformations express
$x^2+x-8+5xy-6y+2y^2 = 0$
in the form
$AX^2+BY^2=1$
2. ## General Equation of a Conic
Hello rebirthflame
Originally Posted by rebirthflame
I'm not really sure if this is to do with quadratic forms or not, but tbh I have no idea of where to start with this question. I think it has something to do with diagonalising a matrix somewhere along the line, but apart from that I'm clueless.
by using suitable transformations express
$x^2+x-8+5xy-6y+2y^2 = 0$
in the form
$AX^2+BY^2=1$
What you are being asked to do here is to transform a general equation of the second degree into a standard form for a conic. The usual method, starting with
$ax^2 + 2hxy + by^2 + 2gx + 2fy + c = 0$
is to let
$x = X\cos\theta - Y\sin\theta$
$y = X\sin\theta + Y\cos\theta$
which corresponds to a rotation of the axes through an angle $\theta$, in order to eliminate the term in $xy$. When you do this, it leads to the value of $\theta$ given by
$\tan 2\theta = \frac{2h}{a-b}$
You then complete the squares for $X$ and $Y$ to get the equation in the required form.
In the example you've been given:
$\tan 2\theta = -5$
But this doesn't seem to give straightforward (i.e. easy) values for $\sin\theta$ and $\cos\theta$ ( $\sin\theta = \sqrt{\frac{1 + \sqrt{26}}{2\sqrt{26}}}$, for instance).
Are you sure you have the right numbers here?
Grandad
The numbers are definitely right. I've had a word with my lecturer and he says, like you, that I need to transform the curve so that we get the equation in a quadratic form; then he said something about completing squares and then using eigenvalues or diagonalising matrices. Your method looks like it would work, however I also know there is more than one way to do this. You are looking at eliminating the xy term, which won't be a quadratic form. I know what to do when I get to the quadratic form, I think. It's just getting there and choosing the suitable transformation that is the problem.
4. your quadratic form can be written as a matrix multiplication as
$x^2+5xy+2y^2 = \begin{bmatrix}x & y\end{bmatrix}\begin{bmatrix}1 & \frac{5}{2} \\ \frac{5}{2} & 2\end{bmatrix}\begin{bmatrix}x \\ y\end{bmatrix}$
We can diagonalize that matrix by finding its eigenvalues and eigenvectors. Its characteristic equation is $\left|\begin{array}{cc}1-\lambda & \frac{5}{2} \\ \frac{5}{2} & 2-\lambda\end{array}\right|= \lambda^2- 3\lambda+ 2- \frac{25}{4}$ $= \lambda^2- 3\lambda- \frac{17}{4}= 0$
You can find the eigenvalues by solving that equation using the quadratic formula. They are not, as already pointed out, "easy" numbers: something like, I think $\frac{3}{2}\pm \frac{\sqrt{26}}{2}$. Find the eigenvectors corresponding to those and construct the matrix P having those eigenvectors as columns. $P^{-1}AP$ will be the diagonal matrix having the eigenvalues as entries.
5. Originally Posted by HallsofIvy
your quadratic form can be written as a matrix multiplication as
$x^2+5xy+2y^2 = \begin{bmatrix}x & y\end{bmatrix}\begin{bmatrix}1 & \frac{5}{2} \\ \frac{5}{2} & 2\end{bmatrix}\begin{bmatrix}x \\ y\end{bmatrix}$
We can diagonalize that matrix by finding its eigenvalues and eigenvectors. Its characteristic equation is $\left|\begin{array}{cc}1-\lambda & \frac{5}{2} \\ \frac{5}{2} & 2-\lambda\end{array}\right|= \lambda^2- 3\lambda+ 2- \frac{25}{4}$ $= \lambda^2- 3\lambda- \frac{17}{4}= 0$
You can find the eigenvalues by solving that equation using the quadratic formula. They are not, as already pointed out, "easy" numbers: something like, I think $\frac{3}{2}\pm \frac{\sqrt{26}}{2}$. Find the eigenvectors corresponding to those and construct the matrix P having those eigenvectors as columns. $P^{-1}AP$ will be the diagonal matrix having the eigenvalues as entries.
This would be correct if my equation was in the quadratic form; however I can't just discount the other terms that are in the equation, can I? I think I have a slightly better understanding of what it is I am supposed to do now.
I need to transform the curve to get rid of all the terms apart from the ones of the form $ax^2+bxy+cy^2$; this will centre my curve about the origin as far as I understand. Then I can diagonalize the matrix as HallsofIvy has done; this rotates the curve so that it is of the form $AX^2+BY^2=1$.
I could be completely wrong but I feel this is what I'm supposed to do. I'm OK with everything after getting it into a quadratic form; it's just getting to that point which is troubling me so much. I've looked on the internet but I can't find any way of transforming a polynomial of the form I have at the start
$x^2+x-8+5xy-6y+2y^2 = 0$ into a quadratic form.
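For what it's worth, a sketch of the two-step reduction described above (translate to the centre to remove the linear terms, then diagonalise), with sympy doing the bookkeeping; u and v below are simply the translated coordinates:

```python
import sympy as sp

x, y, u, v = sp.symbols('x y u v')
f = x**2 + x - 8 + 5*x*y - 6*y + 2*y**2

# Step 1: the centre is the critical point of f; translating there removes x and y terms.
centre = sp.solve([sp.diff(f, x), sp.diff(f, y)], [x, y])
print(centre)                                   # {x: 2, y: -1}
g = sp.expand(f.subs({x: u + centre[x], y: v + centre[y]}))
print(g)                                        # u**2 + 5*u*v + 2*v**2 - 4

# Step 2: diagonalise the quadratic part, exactly as in HallsofIvy's post.
M = sp.Matrix([[1, sp.Rational(5, 2)], [sp.Rational(5, 2), 2]])
lams = list(M.eigenvals())                      # 3/2 - sqrt(26)/2 and 3/2 + sqrt(26)/2
A, B = [sp.simplify(lam / 4) for lam in lams]   # divide by the constant 4 moved to the RHS
print(A, B)
```

So, with these numbers, $A$ and $B$ come out as $\frac{3+\sqrt{26}}{8}$ and $\frac{3-\sqrt{26}}{8}$ (up to swapping); since one of them is negative, the curve $AX^2+BY^2=1$ is a hyperbola rather than an ellipse.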
|
http://mathoverflow.net/questions/27749/what-are-some-correct-results-discovered-with-incorrect-or-no-proofs/27774
|
## What are some correct results discovered with incorrect (or no) proofs?
Many famous results were discovered through non-rigorous proofs, with correct proofs being found only later and with greater difficulty. One that is well known is Euler's 1737 proof that
$1+\frac{1}{2^2}+\frac{1}{3^2}+\frac{1}{4^2}+\cdots =\frac{\pi^2}{6}$
in which he pretends that the power series for $\frac{\sin\sqrt{x}}{\sqrt{x}}$ is an infinite polynomial and factorizes it from knowledge of its roots.
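(To sketch the idea being referred to, non-rigorously: the series $\frac{\sin\sqrt{x}}{\sqrt{x}}=1-\frac{x}{3!}+\frac{x^2}{5!}-\cdots$ vanishes exactly when $x=n^2\pi^2$, so Euler wrote
$1-\frac{x}{3!}+\frac{x^2}{5!}-\cdots=\prod_{n=1}^{\infty}\left(1-\frac{x}{n^2\pi^2}\right),$
and comparing the coefficients of $x$ on the two sides gives $\sum_{n=1}^{\infty}\frac{1}{n^2\pi^2}=\frac{1}{3!}=\frac{1}{6}$, which is the displayed sum.)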
Another example, of a different type, is the Jordan curve theorem. In this case, the theorem seems obvious, and Jordan gets credit for realizing that it requires proof. However, the proof was harder than he thought, and the first rigorous proof was found some decades later than Jordan's attempt. Many of the basic theorems of topology are like this.
Then of course there is Ramanujan, who is in a class of his own when it comes to discovering theorems without proving them.
I'd be interested to see other examples, and in your thoughts on what the examples reveal about the connection between discovery and proof.
Clarification. When I posed the question I was hoping for some explanations for the gap between discovery and proof to emerge, without any hinting from me. Since this hasn't happened much yet, let me suggest some possible explanations that I had in mind:
Physical intuition. This lies behind results such as the Jordan curve theorem, Riemann mapping theorem, Fourier analysis.
Lack of foundations. This accounts for the late arrival of rigor in calculus, topology, and (?) algebraic geometry.
Complexity. Hard results cannot be proved correctly the first time, only via a series of partially correct, or incomplete, proofs. Example: Fermat's last theorem.
I hope this gives a better idea of what I was looking for. Feel free to edit your answers if you have anything to add.
-
5
I was thinking also of stuff like Witten. – Steve Huntsman Jun 11 2010 at 1:01
10
In Tom Hales account of Jordan's proof, he states that there is essentially no problem with Jordan's original proof, and that claims to the contrary are themselves wrong or based on misunderstandings. As far as I can tell, he is correct, and there is no reason to impugn Jordan's original proof. (See "Jordan's proof of the Jordan curve theorem" at math.pitt.edu/~thales/papers ) – Emerton Jun 11 2010 at 2:49
1
@Emerton. I stand corrected. Maybe Jordan's proof should be in the same category as Heegner's: thought to be incorrect, but essentially correct when properly understood. – John Stillwell Jun 11 2010 at 3:09
10
A further remark: I think that is important to distinguish between polishing an argument, or perhaps interpreting it in terms of contemporary language and formalism, which will almost always be required when reading arguments (especially subtle ones) from 100 or more years ago, and genuinely incomplete arguments. As an example of the latter, one can think of Riemann's arguments with the Dirichlet principle, where this result was simply taken as an axiom. Additional work was genuinely required to validate the Dirichlet principle, and thus complete Riemann's arguments. – Emerton Jun 11 2010 at 5:59
3
I would argue that (although it came after the drive for rigor had already started thanks to Cantor, Weierstrass, et al.) the dawn of modern statistical and quantum physics had a great deal to do with the consolidation of rigor throughout mathematics. Indeed, ergodic theory and functional analysis owe a great deal to these disciplines, and neither could have existed in the time of (say) Euler because the approach to mathematics was different. – Steve Huntsman Jun 11 2010 at 12:32
## 35 Answers
Renormalizations in QFT
Renormalizations, understood as discarding perturbative corrections to masses and charges, were not easily accepted, even by their inventors, because they are so obviously anti-mathematical. Renormalization remains a prescription, lucky in some rare cases and wrong in the others.
In Physics we use a perturbation theory where the perturbation is supposed to be small but it is "big" in QFT. First we write down a non perturbed Hamiltonian, let's say:
$\hat H_0 = -\frac{\hbar^2}{2m_e}\frac{d^2}{dx^2} + \hat{V}_0 (x)$ (1)
Everything in it is quite physical, including the electron mass. Then we "develop" our theory and include, as we think, a small interaction that also has a kinetic and a potential term:
$\hat H_{int} = -\epsilon\frac{d^2}{dx^2} + \hat{V}_1 (x)$ (2)
The kinetic term shifts the particle mass; that is obvious. But our mass is already good in (1), and any shift of it worsens agreement with experiment. Discarding this correction "restores" the right kinetic part of the Hamiltonian, and taking $\hat{V}_1$ into account improves agreement with experiment. So the discarding practice became a part of QFT calculations.
Appearance of a kinetic perturbative term is due to our misunderstanding interactions. Some part of interactions cannot be treated perturbatively but should be present in the zeroth-order approximation. Discarding is a very bad practice. For (2) it may luckily work, but for other our guesses of interactions it can be more complicated and be just "non renormalizable".
Although shown on a simplest example, the renormalizations in QFT have nothing else in their meaning but repairing a wrongly guessed Hamiltonian via repairing the corresponding solutions. Normally it is difficult to see explicitly that some part of guessed interaction, namely a "self-action" term, is of a kinetic nature. That is why presently they "explain" renormalizations differently.
A correct theory development should not include kinetic perturbative terms. Then the perturbative series will be reasonable, in my opinion.
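To make the mass-shift point concrete, here is a small numerical sketch (an illustration of my own, not part of the original answer): a one-dimensional Schrödinger operator is discretized on a grid, the purely kinetic perturbation $-\epsilon\, d^2/dx^2$ is added, and the low-lying spectrum is compared with the exact oscillator spectrum at the rescaled mass $m^* = m/(1+2m\epsilon)$. The grid, the value of $\epsilon$, and the harmonic potential $x^2/2$ are arbitrary choices for the demo (ħ = 1).
```
import numpy as np

# Toy quantum-mechanics illustration: H0 = -(1/(2m)) d^2/dx^2 + x^2/2,
# perturbed by the purely kinetic term -eps d^2/dx^2.
m, eps = 1.0, 0.1
x = np.linspace(-8.0, 8.0, 1001)
h = x[1] - x[0]

def spectrum(kin_coeff, nlevels=3):
    """Lowest eigenvalues of -kin_coeff d^2/dx^2 + x^2/2 by finite differences."""
    main = 2.0 * kin_coeff / h**2 + x**2 / 2.0
    off = -kin_coeff / h**2 * np.ones(len(x) - 1)
    H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
    return np.linalg.eigvalsh(H)[:nlevels]

# Perturbed spectrum ...
E_pert = spectrum(1.0 / (2.0 * m) + eps)
# ... agrees with the exact oscillator spectrum at the rescaled mass m* = m/(1 + 2 m eps)
m_star = m / (1.0 + 2.0 * m * eps)
E_exact = (np.arange(3) + 0.5) / np.sqrt(m_star)
print(E_pert)     # approximately [0.548, 1.643, 2.739]
print(E_exact)
```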
Results in complexity theory such as $P \neq NP$.
Philosophically, it makes sense that there is a difference between verification and search, and no one has discovered a counterexample.
(Note that this result is not, strictly speaking, known to be correct. However, it is believed to be correct and is routinely used as if it were simply true. No one, as far as I can tell, would ever begin a proof in complexity theory by assuming $P = NP$.)
Collatz Conjecture
The 3x+1 problem and its generalizations
The 3x+1 problem, also known as the Collatz problem, the Syracuse problem, Kakutani's problem, Hasse's algorithm, and Ulam's problem, concerns the behavior of the iterates of the function which takes odd integers n to 3n+1 and even integers n to n/2. The 3x+1 Conjecture asserts that, starting from any positive integer n, repeated iteration of this function eventually produces the value 1.
The 3x+1 Conjecture is simple to state and apparently intractably hard to solve. It shares these properties with other iteration problems, for example that of aliquot sequences and with celebrated Diophantine equations such as Fermat's last theorem. Paul Erdos commented concerning the intractability of the 3x+1 problem: "Mathematics is not yet ready for such problems." Despite this doleful pronouncement, study of the 3x+1 problem has not been without reward. It has interesting connections with the Diophantine approximation of the binary logarithm of 3 and the distribution mod 1 of the sequence {(3/2)^k : k = 1, 2, ...}, with questions of ergodic theory on the 2-adic integers, and with computability theory - a generalization of the 3x+1 problem has been shown to be a computationally unsolvable problem. In this paper I describe the history of the 3x+1 problem and survey all the literature I am aware of about this problem and its generalizations.
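The iteration itself is trivial to experiment with. Here is a minimal sketch (mine, not from the survey quoted above) that verifies the conjecture for small starting values and reports the longest trajectory seen:
```
def collatz_steps(n):
    """Steps of the 3x+1 map (n -> 3n+1 if n is odd, n -> n/2 if even) until 1 is reached."""
    steps = 0
    while n != 1:
        n = 3 * n + 1 if n % 2 else n // 2
        steps += 1
    return steps

# Every starting value below 100000 reaches 1; print the one with the longest trajectory.
longest = max(range(1, 100_000), key=collatz_steps)
print(longest, collatz_steps(longest))
```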
According to the title, these should be "correct results". Probably conjectures not yet proved (and refereed) should not be included in the list. – Gerald Edgar Aug 27 2011 at 17:37
http://mathoverflow.net/questions/10404/stable-categories-as-spectral-categories
## Stable ∞-categories as spectral categories
Let C be a stable ∞-category in the sense of Lurie's DAG I. (In particular I do not assume that C has all colimits.) Then C does have all finite colimits, the suspension functor on C is an equivalence, and C is enriched in Spectra in a way I don't want to make too precise (basically the Hom functor C^op × C → Spaces factors through Spectra and there are composition maps on the level of spectra).
Now suppose instead that C is an ∞-category which has all finite colimits and comes equipped with an enrichment in Spectra in the above sense. One can show easily that C then has a zero object which allows us to define a suspension on C. Suppose it is an equivalence. Is C then a stable ∞-category? Moreover, is the enrichment on C the one which comes from the fact that it is a stable ∞-category?
## 1 Answer
According to Corollary 8.28 in DAG I a pointed $\infty$-category is stable iff it has finite colimits and the suspension functor is an equivalence.
Huh, so the enrichment in spectra is unnecessary: one only needs an enrichment in pointed spaces (to get a zero object). I wonder whether with the spectral enrichment, one can weaken one of the other hypotheses. But I don't see how. – Reid Barton Jan 2 2010 at 2:40
http://math.stackexchange.com/questions/234189/relation-between-x-and-y/234219
# Relation between X and Y
If two random variables $X$ and $Y$ are such that $$E(X)+E(Y)=0, \tag{1}$$ $Var(X)=Var(Y)$, and $1+r=0$, where $r$ is the correlation coefficient between $X$ and $Y$, then what is the relation between $X$ and $Y$? [Here $E(X)$ denotes the expectation of $X$.]
I do not really know how to proceed. This is all that I could gather:
$Var(X)=Var(Y)\implies E(X^2)-E^2(X)=E(Y^2)-E^2(Y)$ which means that $E(X^2)=E(Y^2)$ on account of $(1)$.
From the other condition I get $r=-1$, i.e. $E(XY)-E(X)E(Y)=-Var(X)=-Var(Y)$.
But after that I seem lost; can anyone please point out how I can establish a relation between $X$ and $Y$?
The correlation can be written as $$\frac {E(XY)-E(X)E(Y)}{\sqrt {Var(X)}\sqrt {Var(Y)}}=-1.$$ So $E(X^2)=E(Y^2)$ gives us that $$\frac {E(XY)+E^2(X)}{Var(X)}=-1.$$ Does this help? – Charlie Nov 10 '12 at 15:01
## 1 Answer
$\begin{align} Var[Y+X]&=Var[Y]+Var[X]+2Cov[X,Y]\\ &=Var[Y]+Var[X]-2\sqrt{Var[X]Var[Y]} \; \because \; r=-1 \text{ gives } Cov[X,Y]=-\sqrt{Var[X]Var[Y]} \\ &=0 \; \because \; Var[X]=Var[Y] \end{align}$
A random variable with zero variance is almost surely a constant.
$\therefore X+Y=c$
but $\because E[X]+X[Y]=0$ we have that $E[X+Y]=E[X]+E[Y]=E[c]=c=0$.
$\therefore X=-Y$
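A quick numerical sanity check of this conclusion (a sketch of my own, not part of the original answer): if $Y=-X$, the three hypotheses hold and $X+Y$ has zero variance.
```
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(loc=1.0, scale=2.0, size=100_000)
y = -x                             # then E(X)+E(Y)=0, Var(X)=Var(Y), r=-1
print(np.corrcoef(x, y)[0, 1])     # -1.0
print(np.var(x + y))               # 0.0: X+Y is the constant 0
```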
I didn't understand why $X[Y]$ ? – Charlie Nov 10 '12 at 16:00
@Charlie sorry typo – Amatya Nov 10 '12 at 16:07
Ah! Okay, thanks! – Charlie Nov 10 '12 at 16:07
http://mathhelpforum.com/calculus/155180-derivative-question.html
# Thread:
1. ## Derivative Question?
Hi All!
I am new to this topic and not sure if it is correct place to post this.
I know how to do a simple derivative like $x^3-x-1$: for $x^3$ we multiply by the power and subtract 1 from the power, which gives $3x^2$; the $x$ becomes 1, and the derivative of 1 is 0, so the answer is
$3x^2-1$
I don't know how to calculate the derivative of $y = \frac{1}{\sqrt{x+1}}$.
Thanks in advance
2. Originally Posted by mrsenim
Hi All!
I am new to this topic and not sure if it is correct place to post this.
I know how to do a simple derivative like $x^3-x-1$: for $x^3$ we multiply by the power and subtract 1 from the power, which gives $3x^2$; the $x$ becomes 1, and the derivative of 1 is 0, so the answer is
$3x^2-1$
Don't know how to calculate derivative of
$y = \frac{1}{\sqrt{x+1}}$
Thanks in advance
Chain rule works best here IMO.
Let $u = x+1$. This should be pretty easy to solve for $\frac{du}{dx}$
Thus we get $y = \frac{1}{\sqrt{u}}$.
Remembering the laws of exponents this is equal to $y = u^{-1/2}$ which differentiates in the same way as the positive integers so you can find $\frac{dy}{du}$.
Here's where the chain rule comes in:
$\frac{dy}{dx} = \frac{dy}{du} \times \frac{du}{dx}$
You should have an expression for both of those terms.
3. $\frac{1}{\sqrt{x + 1}} = (x + 1)^{-\frac{1}{2}}$.
Now you need to use the Chain Rule.
4. Originally Posted by Prove It
$\frac{1}{\sqrt{x + 1}} = (x + 1)^{-\frac{1}{2}}$.
Now you need to use the Chain Rule.
I tried to solve it as follows
$\frac{1}{\sqrt{x + 1}} = (x + 1)^{-\frac{1}{2}}$
Now multiplying by -1/2 and subtracting 1 from power
$= {-\frac{1}{2}}[(x + 1)^{-\frac{3}{2}}]$
$= {-\frac{1}{2}}[\frac{1}{(x + 1)^{\frac{3}{2}}}]$
Is it correct?
Don't know what is chain rule.
5. Originally Posted by mrsenim
I tried to solve it as follows
$\frac{1}{\sqrt{x + 1}} = (x + 1)^{-\frac{1}{2}}$
Now multiplying by -1/2 and subtracting 1 from power
$= {-\frac{1}{2}}[(x + 1)^{-\frac{3}{2}}]$
$= {-\frac{1}{2}}[\frac{1}{(x + 1)^{\frac{3}{2}}}]$
Is it correct?
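For what it is worth, the computation above is correct: here $u = x+1$, so $du/dx = 1$ and the chain rule contributes no extra factor. A quick check with a computer algebra system (an illustrative sketch, not part of the original thread):
```
import sympy as sp

x = sp.symbols('x')
y = 1 / sp.sqrt(x + 1)
print(sp.diff(y, x))   # -1/(2*(x + 1)**(3/2))
```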
http://math.stackexchange.com/questions/66381/how-to-show-in-mathbbrn-is-the-intersection-of-any-number-of-closed-sets
# How to show that in $\mathbb{R}^{n}$ the intersection of any number of closed sets is closed?
I have been trying to come up with a proof, but I think I am making leaps in logic and assuming things that I'm possibly trying to prove. If anyone could help correct (or completely discard) what I have done so far and help me towards a rigorous proof, I would greatly appreciate it. This is what I have come up with so far.
Let $U_1,U_2,...,U_n$ be subsets of $\mathbb{R^{n}}$ and closed, define
$U:=\bigcap _{i=1}^nU_i$
And the boundary of $U$ as $B:=CL(U)\cap CL(\mathbb{R^{n}}-U)$
WLG take $U_1,U_2,...,U_m$ where $m\leq n$ such that for some $W_1,W_2,...,W_m$ where $W_i\subset U_i$
$B:=\bigcup _{i=1}^mW_i$
Take $x\in B$ $\Rightarrow x\in \bigcup _{i=1}^mW_i$ $\Rightarrow x\in U_i$ for some $i=1,2,...,m$ $\Rightarrow x\in \bigcap _{i=1}^nU_i \Rightarrow x\in U$
Therefore $B\subset U$ therefore $U$ is closed
Pretty sure this is wrong, but I thought it would be good to show you my train of thought
So you want to show that the union? (title says intersection) of a finite collection of closed sets is closed? What is your definition of closed? – Henno Brandsma Sep 21 '11 at 14:23
In a topological space the intersection of any family of closed sets is closed, even infinite families. – lhf Sep 21 '11 at 14:25
@HennoBrandsma: Sorry, just corrected it.. intersection is correct. My definition of closed a closed set is one that contains all it's closure points, e.g. i'm trying to show that the boundary of $U$ is a subset of $U$ – Freeman Sep 21 '11 at 14:25
## 2 Answers
The intersection of any number of closed subsets is always closed.
Let $X$ be a topological space. Suppose $C_i$ are closed sets ($i\in I$ any index set). Set $U_i=\{x\in X\mid x\notin C_i\}$ the complement of $C_i$ is an open set.
The union of any number of open sets is open, therefore $U=\bigcup_{i\in I} U_i$ is open.
We have that $\{x\in X\mid x\notin U\}$ is closed. This is exactly $\bigcap_{i\in I}C_i$, since $x\notin U$ if and only if for all $i\in I$, $x\notin U_i$, if and only if for all $i\in I$, $x\in C_i$, if and only if $x\in\bigcap_{i\in I}C_i$.
Thus the intersection of closed sets is closed.
Brilliant, that's much more elegant than what I was trying to do. – Freeman Sep 21 '11 at 14:39
Suppose the definition of a closed set is being taken as "a set which contains all its accumulation points". Then examine an accumulation point of the intersection. All of the points in the intersection are contained in each one of the original collection of closed sets. Looking at the closed sets one by one, because they are closed they all contain the accumulation point we have identified. Because they all contain that point, it is also contained in the intersection, and the intersection is therefore closed.
http://stochastix.wordpress.com/2012/08/03/towards-the-cantor-set/
# Rod Carvalho
## Towards the Cantor set
Today we will “construct” the standard (ternary) Cantor set [1]. We start with the closed unit interval $\mathcal{C}_0 := [0,1]$, and then remove its open middle third $]1/3, 2/3[$ to obtain $\mathcal{C}_1 := [0, 1/3] \cup [2/3, 1]$. We then remove the open middle thirds of each of these two intervals to obtain
$\mathcal{C}_2 := [0, 1/9] \cup [2/9, 3/9] \cup [6/9, 7/9] \cup [8/9, 1]$
and, continuing this process ad infinitum, we obtain a nested sequence of compact sets $\mathcal{C}_0 \supset \mathcal{C}_1 \supset \mathcal{C}_2 \supset \dots$. Note that $\mathcal{C}_n$ is the union of $2^n$ intervals, each of length $3^{-n}$. We refer to the following set
$\mathcal{C} := \displaystyle\bigcap_{n=0}^{\infty} \mathcal{C}_n$
as the standard (ternary) Cantor set. This set is most interesting, indeed, since it is uncountable and has Lebesgue measure zero [1].
__________
In Haskell
In Haskell, a closed interval $[a, b]$ can be represented by an ordered pair $(a,b)$. Each set $\mathcal{C}_n$ can be represented by a list of ordered pairs, where each pair represents a closed interval. We create a function $f$ that takes $\mathcal{C}_n$ and returns $\mathcal{C}_{n+1} = f ( \mathcal{C}_n )$. We will work with arbitrary-precision rational numbers, not floating-point numbers.
The following Haskell script lazily generates the sequence of sets:
```
import Data.Ratio

type Interval = (Rational, Rational)

-- remove the middle third of an interval
removeMiddleThird :: Interval -> [Interval]
removeMiddleThird (a,b) = [(a,b'),(a',b)]
  where b' = (2%3)*a + (1%3)*b
        a' = (1%3)*a + (2%3)*b

-- define function f
f :: [Interval] -> [Interval]
f intervals = concat $ map removeMiddleThird intervals

-- create list of sets
sets :: [[Interval]]
sets = iterate f [(0,1)]

-- define Lebesgue measure
measure :: Interval -> Rational
measure (a,b) = b - a
```
Note that we used function iterate to generate the sequence of sets. Here is a GHCi session:
```
*Main> -- take first 4 sets
*Main> take 4 sets
[[(0 % 1,1 % 1)],[(0 % 1,1 % 3),(2 % 3,1 % 1)],
[(0 % 1,1 % 9),(2 % 9,1 % 3),(2 % 3,7 % 9),(8 % 9,1 % 1)],
[(0 % 1,1 % 27),(2 % 27,1 % 9),(2 % 9,7 % 27),(8 % 27,1 % 3),
(2 % 3,19 % 27),(20 % 27,7 % 9),(8 % 9,25 % 27),(26 % 27,1 % 1)]]
*Main> -- compute measure of C_0
*Main> sum $ map measure (sets !! 0)
1 % 1
*Main> -- compute measure of C_1
*Main> sum $ map measure (sets !! 1)
2 % 3
*Main> -- compute measure of C_2
*Main> sum $ map measure (sets !! 2)
4 % 9
*Main> -- compute measure of C_3
*Main> sum $ map measure (sets !! 3)
8 % 27
```
This is arguably the most useless Haskell script ever written.
__________
References
[1] Charles Chapman Pugh, Real Mathematical Analysis, Springer-Verlag, New York, 2002.
http://mathhelpforum.com/discrete-math/164526-combinatorics.html
# Thread:
1. ## Combinatorics
1.Given $n$ equally spaced points on the circumference of a circle(the vertices of a regular n-gon).Find number of ways to select three points such that the triangle formed by joining them is isosceles.
2.Find number of ways to select 3 vertices from a polygon of sides 2n+1 such that the centre of the polygon lies inside the triangle.
3.An operation $*$ on a set $A$ is said to be binary,if $x*y\in A$,for all $x,y\in A$, and it is said to be commutative if $x*y=y*x$ for all $x,y\in A$.Now if $A=\big\{a_{1},a_{2},a_{3},...,a_{n}\big\}$ then find the following
(i)Total number of binary operations on $A$
(ii)Total number of binary operations on $A$ such that $a_{i}*a_{j}\neq a_{i}*a_{k}$ if $j\neq k$
(iii)Total number of binary operations on $A$ such that $a_{i}*a_{j}<a_{i}*a_{j+1},\forall i,j$
2. Originally Posted by pankaj
1.Given $n$ equally spaced points on the circumference of a circle(the vertices of a regular n-gon).Find number of ways to select three points such that the triangle formed by joining them is isosceles.
Select any one of the n points to be the vertex where the two congruent sides intersect. There are n ways to do that.
What you do next depends upon whether n is even or odd. If n is odd, there are now (n-1)/2 points to the "left" of the chosen vertex, (n-1)/2 on the "right". Choose any one of the (n-1)/2 points on the "left" (there are (n-1)/2 ways to do that) and the point on the "right" that completes the isosceles triangle is automatically selected as well.
If n is even, there are n/2 points on either side so there are n/2 ways, rather than (n- 1)/2, to choose the second vertex.
2.Find number of ways to select 3 vertices from a polygon of sides 2n+1 such that the centre of the polygon lies inside the triangle.
Very similar. There are 2n+1 ways to select the first vertex, leaving (2n+1-1)/2= n points on either side. Choosing any of the n points on the "right" and any of the n points on the left guarantees that the center of the polygon will lie within the triangle.
3.An operation $*$ on a set $A$ is said to be binary,if $x*y\in A$,for all $x,y\in A$, and it is said to be commutative if $x*y=y*x$ for all $x,y\in A$.Now if $A=\big\{a_{1},a_{2},a_{3},...,a_{n}\big\}$ then find the following
(i)Total number of binary operations on $A$
(ii)Total number of binary operations on $A$ such that $a_{i}*a_{j}\neq a_{i}*a_{k},if j\neq k$
(iii)Total number of binary operations on $A$ such that $a_{i}*a_{j}<a_{i}*a_{j+1},\forall i,j$
3. Hello, pankaj!
I have a clumsy start on the first one . . .
1. Given $\,n$ equally spaced points on the circumference of a circle (the vertices
of a regular $\,n$-gon). .Find the number of ways to select 3 points such that
the triangle formed by joining them is isosceles.
I found three cases to consider . . .
[1] $\,n$ is even: . $n = 2k$
Number the vertices: . $a_1,a_2,a_3,\hdots a_{2k}$
. . $\begin{array}{cccccccccc} &&&&& a_{_1} \\ && a_{_{2k}} &&&&&& a_{_2} \\ a_{_{2k-1}} &&&&&&&&& a_{_3} \\ \\ * &&&&&&&&& * \\ \\ * &&&&&&&&& * \\ \\ && a_{_{k+1}} &&&&&& a_{_{k-1}} \\ &&&&& a_{_k} \end{array}$
Using any of the $\,n$ points as the vertex of the isosceles triangle,
. . there are $k-1 \,=\,\frac{n-2}{2}$ choices for the base vertices.
The number of isosceles triangles is:
. . $f(n) \;=\;n\left(\dfrac{n-2}{2}\right) \:=\:\dfrac{n(n-2)}{2}\:\text{ for even }n$ .[1]
~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~
[2] $\,n$ is odd: $\,n \:=\:2k-1$
Number the vertices: . $a_1,a_2,a_3, \hdots a_{2k-1}$
. . $\begin{array}{cccccccccc} &&&&& a_{_1} \\ && a_{_{2k-1}} &&&&&& a_{_2} \\ a_{_{2k-2}} &&&&&&&&& a_{_3} \\ \\ * &&&&&&&&& * \\ \\ * &&&&&&&&& * \\ \\ && * &&&&&& * \\ &&&& a_{_{k+1}} && a_{_k} \end{array}$
Using any of the $\,n$ points as the vertex of the isosceles triangle,
. . there are $k-1 \:=\:\frac{n-1}{2}$ choices for the base vertices.
The number of isosceles triangles is:
. . $f(n) \;=\;n\left(\dfrac{n-1}{2}\right) \;=\;\dfrac{n(n-1)}{2}\:\text{ for odd }n$ .[2]
~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~
[3] $\,n$ is a multiple of 3: . $n \:=\:3k$
Then some of the isosceles triangles are equilateral
. . resulting in an overcount in the above formulas.
Therefore, if $\,n$ is a multiple of 3,
. . formulas [1] and [2] must be reduced by $\tfrac{2n}{3}$ (each of the $\tfrac{n}{3}$ equilateral triangles has been counted three times).
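Whatever form the final formulas take, they are easy to sanity-check by brute force. Here is a short script (an editorial addition, not part of the thread) that counts isosceles (including equilateral) triangles directly from the chord lengths:
```
from itertools import combinations
from math import sin, pi

def isosceles_count(n, tol=1e-9):
    """Brute-force count of isosceles (including equilateral) triangles
    with vertices among n equally spaced points on a circle."""
    # chord length between two vertices depends only on the gap between their indices
    chord = [2 * sin(pi * k / n) for k in range(n)]
    count = 0
    for i, j, k in combinations(range(n), 3):
        d = sorted([chord[j - i], chord[k - j], chord[k - i]])
        if d[1] - d[0] < tol or d[2] - d[1] < tol:
            count += 1
    return count

for n in range(4, 13):
    print(n, isosceles_count(n))
```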
4. Originally Posted by pankaj
3.An operation $*$ on a set $A$ is said to be binary,if $x*y\in A$,for all $x,y\in A$, and it is said to be commutative if $x*y=y*x$ for all $x,y\in A$.Now if $A=\big\{a_{1},a_{2},a_{3},...,a_{n}\big\}$ then find the following
(i)Total number of binary operations on $A$
(ii)Total number of binary operations on $A$ such that $a_{i}*a_{j}\neq a_{i}*a_{k},if j\neq k$
(iii)Total number of binary operations on $A$ such that $a_{i}*a_{j}<a_{i}*a_{j+1},\forall i,j$
For (i) we count the number of functions from $A\times A\to A$.
For (ii) lets take an example: $A=\big\{a_{1},a_{2},a_{3}\big\}$.
We can assign $a_1\circ a_1$ in 3 ways.
Then we can assign $a_1\circ a_2$ in 2 ways.
Finally we can assign $a_1\circ a_3$ in 1 way.
But there are three first terms. That gives a total of $(3!)^3$
You do the general case.
Part (iii) is ill defined. I think that $<$ must be a total order on the set $A$. Otherwise how could we define $a_n\circ a_n$?
http://math.stackexchange.com/questions/3913/the-right-weigh-to-do-integrals?answertab=active
# The right “weigh” to do integrals
Back in the day, before approximation methods like splines became vogue in my line of work, one way of computing the area under an empirically drawn curve was to painstakingly sketch it on a piece of graphing paper (usually with the assistance of a French curve) along with the axes, painstakingly cut along the curve, weigh the cut pieces, cut out a square equivalent to one square unit of the graphing paper, weigh that one as well, and reckon the area from the information now available.
One time, when we were faced with determining the area of a curve that crossed the horizontal axis thrice, I did the careful cutting of the paper, and made sure to separate the pieces above the horizontal axis from the pieces below the horizontal axis. Then, my boss suddenly scooped up all the pieces and weighed them all.
I argued that the grouped pieces should have been weighed separately, and then subtract the weights of the "below" group from the "above" group, while my boss argued that we were calculating the total area, and thus, not separating the pieces was justifiable.
Which one of us was correct?
@J. M.: I guess you, in the end, did it like according to your boss? – AD. Sep 3 '10 at 3:48
Well, I was young and stupid back then, so I let the boss do it his way (I couldn't help but feel I was right, but hey, he's the boss). Now I am no longer so young, but as to whether the other condition still applies is apparently still a matter of contention amongst my peers. – J. M. Sep 3 '10 at 5:44
*sigh* If there were only a way to accept two answers... – J. M. Sep 3 '10 at 9:58
I had to upvote just because of the title :) – anon Sep 3 '10 at 17:18
This is really cool! – bobobobo Nov 8 '10 at 1:14
## 3 Answers
you are correct. Actually, the integral is equal to: "(total above weight) minus (total below weight)", which I'm sure is what you meant. What your boss is calculating is actually $$\int_a^b |f(x)| dx$$
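For a concrete illustration (a sketch of my own, not part of the answer), take $f(x)=\sin x$ on $[0,\,3\pi/2]$: the signed area is $1$, while the total unsigned area is $3$.
```
import numpy as np
from scipy.integrate import quad

f = np.sin
a, b = 0.0, 1.5 * np.pi

signed, _ = quad(f, a, b)                                      # "above minus below": about 1.0
unsigned, _ = quad(lambda x: abs(f(x)), a, b, points=[np.pi])  # weighing all pieces together: about 3.0
print(signed, unsigned)
```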
In my opinion, this is not really a math question. Which procedure is correct depends what you're going to do with your calculation.
As a matter of definition, the integral indeed measures the signed area (positive area minus negative area), as you suggest. So your approach is computing an approximation to the definite integral $\int_a^b f$.
But maybe you want the total (unsigned) area. E.g. if you're going to lay concrete along (some real-world space corresponding to) the region bounded by the curve and the x-axis then surely you want the total area -- there's no such thing as negative concrete.
Without knowing what you're using the calculation for, it's impossible to say. (Essentially, you're asking us: "Which of these is mathematically correct: $A-B+C$ or $A+B+C$?" Of course it depends upon what you're trying to do.) I would like to think that your boss knew what the point of it all was, so without further information I guess I would trust him.
In fact, your story arouses my curiosity. I suppose you're not putting us on, but weighing paper cutouts is just about the last method I would ever think of for computing area (aren't you going to need a very sensitive scale or an awfully big piece of paper to get anywhere with this?). How long ago are we talking? What was the job? You don't have to answer these questions, but it would be interesting to know...
My curiosity is also roused. As soon as I read this question, I started wondering about the points posed in Pete's last paragraph. – Derek Jennings Sep 3 '10 at 9:25
I wouldn't even dream of putting anyone on on a mathematics Q&A site, Pete. :) As I recall, it had something to do with the integral of the set of empirically-defined measurements, since they're correlated with a certain quantity we were trying to determine from our data. Without a plausible functional form, and none of us knowing about things like splines back then (hey, I only barely learned calculus in school!), it was what was written in the "manual". We had, however, an exquisitely sensitive Sartorius analytical balance, which was meant for weighing very light objects. – J. M. Sep 3 '10 at 9:43
– J. M. Sep 3 '10 at 9:48
1
@J.M.: very interesting. As a lifetime academic my ideas about how things might be done in the real world are rather...theoretical. – Pete L. Clark Dec 3 '10 at 5:55
I guess we speak of the area "under a curve" when the piece of curve is above the $x$ axis. Sign does matter in the definition of the proper integral, so I would say you were right, and your boss was wrong. Doing integrals by "weighing" things should allow for negative weights.
http://physics.stackexchange.com/questions/32777/poisson-structure-comes-from-hamiltonian?answertab=votes
# Poisson structure comes from hamiltonian?
I am interested in studying quantization, but it seems I am lacking the basics of classical mechanics. Any help would be appreciated.
I would first like to ask what is necessary to have a sympletic/Poisson structure in a classical mechanical system (e.g. an isolated particle moving on some Riemannian manifold $Q$). Do we need an evolution rule to begin with?
For example, if we have some ("nice"?) Lagrangian structure (this gives an evolution rule), we can define the momenta and get to Hamilton's equations, and build a Hamiltonian flow on the phase space $P = T^*Q$. This gives us a Poisson structure for the algebra of smooth observables $C^\infty(P,\mathbf{R})$.
But many books depart from a Poisson manifold $(P, \{\,,\,\})$ and input the Hamiltonian later, or even suggest it can be a parameter, some $h\in C^\infty(P,\mathbf{R})$. This suggests that the Poisson structure is, in fact, more of a kinematic entity, rather than a dynamic one (means it has no a priori relation with the rule of how the system evolves). Is that right? On a practical example, how do we even know what the Poisson structure would be, if we don't even know what are the momenta?
Also, as I see, the fact that one can use a Poisson manifold also comes from the fact that a point in phase space will determine the state (give a solution curve for the motion by initial value formulation) only if we restric ourselves to evolution equations of second order, which means some restriction on the Lagrangian, no?
Lastly: For quantization of a Poisson manifold, do we need the Hamiltonian beforehand? Or: quantization is kinematic or dynamic?
Thank youuu!
Sympletic structure arises naturally on the cotangent bundle $T^*Q$ of the configuration space $Q$. Nothing needed, it's already there. Not even Riemannian structure (kinetic energy) on $Q$, $Q$ should be just a smooth manifold. That's the naturallity in math. – Yrogirg Jul 24 '12 at 19:16
## 3 Answers
I too have been trying to understand meaning of quantization, and have had similar kinds of questions as you are asking. So I would like to share my own understanding regarding these.
First of all, when we encounter problems in classical mechanics it is almost never the case that we are handed a symplectic or Poisson manifold together with a smooth Hamiltonian function on it, so that all we have to do is carry out formal, well-defined mathematical steps to find the equations of motion. Usually finding the phase space itself is part of the problem, and in some cases (good examples, I think, come from field theories) it can be a nontrivial problem.
On the other hand, if some mathematician gives you a classical mechanics problem, s/he will at least provide you with the data {Phase space, Hamiltonian}, because mathematically this data is necessary for defining the problem. Here by phase space I mean a symplectic manifold (or maybe a Poisson manifold with a given Poisson structure).
Now coming to your questions :
I would first like to ask what is necessary to have a sympletic/Poisson structure in a classical mechanical system (e.g. an isolated particle moving on some Riemannian manifold Q). Do we need an evolution rule to begin with?
If you want to work on $TQ$ ("velocity space") then here too you can work in Hamiltonian (rather than action) formulation, but your symplectic structure here will depend upon Lagrangian. Also this I think would put some conditions (call them conditions A) on Lagrangian so that you can have a "good" symplectic structure on $TQ$ (i.e. a nice Hamiltonian formulation of the problem on $TQ$) . As of now I don't know what these conditions are but it should not be very difficult to find them from expression of symplectic form in terms of Lagrangian.
If, on the other hand, you want to work on $T^*Q$ then here symplectic structure will not depend upon the Hamiltonian. That is why I think people usually study Hamiltonian formulation of classical mechanics on $T^*Q$ rather than on $TQ$ because here symplectic structure is fixed once and for all and thus to study different physical systems you only need to change your Hamiltonian. In order to go from $TQ$ to $T^*Q$ one needs Legendre transformation and for this to be possible Lagrangian needs to satisfy some conditions (I am not sure what (if any) is the relation of these conditions with conditions A mentioned above; though its likely that they are related).
But many books depart from a Poisson manifold $(P,{,})$ and input the Hamiltonian later, or even suggest it can be a parameter, some $h\in C^\infty(P,R)$. This suggests that the Poisson structure is, in fact, more of a kinematic entity, rather than a dynamic one (means it has no a priori relation with the rule of how the system evolves). Is that right? On a practical example, how do we even know what the Poisson structure would be, if we don't even know what are the momenta?
From what we saw above it should be clear that symplectic structure on Phase space $T^*Q$ is independent of Hamiltonian. On $T^*Q$ we know what position is and what momentum is and that is all that's needed to define a symplectic structure.
Also, as I see, the fact that one can use a Poisson manifold also comes from the fact that a point in phase space will determine the state (give a solution curve for the motion by initial value formulation) only if we restric ourselves to evolution equations of second order, which means some restriction on the Lagrangian, no?
If the classical problem is given in Lagrangian formulation then (as mentioned above) there should be some conditions that the Lagrangian must satisfy in order for the problem to have a Hamiltonian formulation. On the other hand, on a phase space it would be mathematically fine to choose any smooth function as your Hamiltonian.
Lastly: For quantization of a Poisson manifold, do we need the Hamiltonian beforehand? Or: quantization is kinematic or dynamic?
You don't need to know Hamiltonian for quantizing your phase space. Quantization treats all observables on equal footing. However there are some subtleties due to which not all observables may get quantized. So some of the steps of quantization may require you to refer to your Hamiltonian just to make sure that at least your Hamiltonian function can be assigned to a corresponding quantum operator.
The last part is just not true. You can't "quantize" a classical phase space independent of extra structure, since the quantum commutation relations are not invariant under general canonical transformations. If [x,p]=i, it isn't true that any other classical functions of x and p whose Poisson bracket is 1 have the same commutation relation, because of operator ordering issues. – Ron Maimon Jul 25 '12 at 3:53
@Ron Maimon Thanks for your correction. Ya, choice of polarization in geometric quantization method may require extra structure on phase space. But still Hamiltonian doesn't have much role to play here. Right ? – user10001 Jul 25 '12 at 4:24
If I was sure, I would have written an answer. The problem is that classical mechanics is more symmetrical than quantum mechanics, and QM picks out certain canonical pairs as "truly canonical" (in that their commutation relation is really a c-number, equal to their Poisson bracket, equal to one), and other variables which look just as canonically conjugate classically are no good as quantum canonical pairs. This is something which makes the program of "quantization" impossible as mathemticians imagine it, although it can be done to leading semiclassical order. – Ron Maimon Jul 25 '12 at 4:30
I agree you don't need a Hamiltonian to write down q and p and identify p as the derivative with respect to q. This is all that you are doing when "quantizing". But you do need to know which way is q and which way is p (although you can interchange the two, or rotate, you can't do a general nonlinear transformation). I will think what this extra structure is, it's like a foliation of some kind. – Ron Maimon Jul 25 '12 at 4:34
@RonMaimon: One can also argue that QM is more symmetric than CM, as the classical commutation relations are not invariant under general unitary mappings. – Arnold Neumaier Jul 25 '12 at 13:32
Yes, the Poisson structure determines part of the kinematic set-up. A choice of relevant observables spanning a Lie algebra under the PB (a Heisenberg algebra for a multiparticle system, $iso(3)$ for rigid motions, etc.) makes up the remainder of the kinematics. In a Lagrangian framework, for Lagrangians at most quadratic in the first derivatives and not containing higher derivatives, the PB is determined by the derivative part of the Lagrangian, hence is far from fixing the dynamics.
One needs the Poisson structure in order that the dynamics is determined by the Hamiltonian.
Quantization is kinematic, too, and has nothing to do with particular Hamiltonians. (The quantum-classical correspondence of the Hamiltonian is anyway valid only up to $O(\hbar)$, as the same classical $H$ is the classical limit $\hbar\to 0$ of many quantum $H$s, and as $\hbar$ is a constant in Nature.)
You are speaking about systems with quadratic Lagrangians only, this is a subset of the systems that interest the OP, but it is the one where the Lagrangian formulation is easiest, and it does include most field theories people study. – Ron Maimon Jul 25 '12 at 16:24
@RonMaimon: Thanks. I edited my answer to reflect that. – Arnold Neumaier Jul 25 '12 at 17:12
The symplectic structure is independent of the Hamiltonian, but often a Hamiltonian gives us a hint to which symplectic structure we should use.
Here's what I mean in more detail. The way I intuit a symplectic structure is as a polarization of the 2n coordinates of the manifold into n positions and n conjugate momenta. This shouldn't depend on the Hamiltonian, which just tells me the energy of a configuration, but the Hamiltonian will be easier to manipulate with a good choice of polarization.
A symplectic structure is specified by a nondegenerate real closed 2-form $\omega$. We can relate this to the Poisson bracket as follows. Given a function $f:M\rightarrow\mathbb{R}$ on the phase space $M$, since $\omega$ is nondegenerate we can find a unique vector field $X_f$ such that $\omega(X_f, -)=df(-)$ as 1-forms. Then given another $g:M\rightarrow\mathbb{R}$, we can define $\{f,g\}=\omega(X_f,X_g)$.
So far everything is time-independent. In this symplectic perspective, the notion of time should be a flow on $M$, so that given a function $f$ on $M$ as before, we can form a 1-parameter family of such functions $f(t)$ such that $\frac{df}{dt}$ depends only on $f$. One way of doing this is to specify a function $H:M\rightarrow\mathbb{R}$ and declare $\frac{df}{dt}=\{H,f(t)\}$. We could also take a 1-parameter family of Hamiltonians $H(t)$ and do the same thing.
Finally, it is important to note that our phase space is often not a cotangent bundle when we have symmetries or constraints. However, there is a theorem of Darboux which says that the symplectic form always looks locally like the natural symplectic form on a cotangent bundle.
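As a small concrete illustration of the canonical case (a sketch of my own, using the sign convention $\dot f=\{f,H\}$, which is the reverse ordering of the bracket written above): on $T^*\mathbb{R}$ with coordinates $(q,p)$ the bracket is fixed once and for all, and a Hamiltonian is needed only when we ask for time evolution. The harmonic-oscillator Hamiltonian below is just an arbitrary choice for the demo.
```
import sympy as sp

q, p = sp.symbols('q p', real=True)
m, k = sp.symbols('m k', positive=True)

def pb(f, g):
    """Canonical Poisson bracket on T*R with coordinates (q, p)."""
    return sp.diff(f, q) * sp.diff(g, p) - sp.diff(f, p) * sp.diff(g, q)

print(pb(q, p))            # 1 -- part of the kinematics, no Hamiltonian involved
H = p**2 / (2 * m) + k * q**2 / 2   # dynamics enters only through this choice
print(pb(q, H))            # dq/dt = p/m
print(pb(p, H))            # dp/dt = -k*q
```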
The local statement is obvious, but the global statement eluded me--- what's an example of a global phase space which is not a cotangent bundle globally? – Ron Maimon Jul 27 '12 at 7:10
The trivial example is the two-sphere. Its symplectic structure is given by the volume form, but as $S^2$ is compact it could not be a cotangent bundle. Its the classical (and quantum mechanical, in the language of geometric quantum mechanics) phase space of the spin degrees of freedom. – altertoby Jul 27 '12 at 17:38
http://mathoverflow.net/questions/60030/estimates-for-symmetric-functions/60279
## Estimates for Symmetric Functions
Let $z_1,z_2,\ldots,z_n$ be i.i.d. random variables in the unit circle. Consider the polynomial $$p(z)=\prod_{i=1}^{n}{(t-z_i)}=t^n+a_{1}t^{n-1}+\cdots+a_{n-2}t^2+a_{n-1}t+a_n$$ where the $a_i$ are the symmetric functions $$a_{1}=(-1)\sum_{i=1}^{n}{z_{i}}\hspace{0.3cm},\quad a_{2}=(-1)^2\sum_{1\leq i< j\leq n}{z_{i}z_{j}}\hspace{0.3cm} \quad\ldots\quad a_{n}=(-1)^{n}z_{1}z_{2}\ldots z_{n}.$$
How can we estimate the random variable $Z$ defined as $$Z=\sum_{j=1}^{n}{|a_{j}|}$$ asymptotically as $n\to\infty$?
It is not very difficult to estimate $|\sum_{j=1}^{n}{a_{j}}|$ by estimating $\log p(1)$ via the CLT. However, $Z$ seems to be much more difficult. Any idea of what can work here?
Update: If we look at the central symmetric function $$a_{\lfloor n/2 \rfloor}=\text{sum of the products of $\lfloor n/2 \rfloor$ distinct $z_{i}$'s},$$ it is not hard to see that it has uniformly distributed phase in $(-\pi,\pi]$.
However, its magnitude is blowing up extremely fast!
Does anyone knows how to compute the limit distribution of $|a_{\lfloor n/2 \rfloor}|$ under the appropriate normalizations?
Thanks!
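For numerical experimentation, here is a Monte Carlo sketch (an illustrative addition, assuming the $z_i$ are uniformly distributed on the unit circle, consistent with the update above) that samples $n^{-1/2}\log Z$:
```
import numpy as np

rng = np.random.default_rng(1)

def log_Z(n):
    z = np.exp(2j * np.pi * rng.random(n))   # i.i.d. uniform points on the unit circle
    coeffs = np.poly(z)                      # [1, a_1, ..., a_n]
    return np.log(np.abs(coeffs[1:]).sum())

for n in (50, 200, 800):
    samples = np.array([log_Z(n) / np.sqrt(n) for _ in range(200)])
    print(n, samples.mean(), samples.std())
```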
Have you tried any techniques from Stein's Method? – Alex R. Mar 30 2011 at 21:51
If I haven't made some stupid mistake in my estimates (done when I was driving from Madison to Milwaukee), one can show that $n^{-1/2}\log Z$ tends in distribution to $\max_{|z|=1}\Re\sum_{k\ge 1}\frac{\xi_k}k z^k$ where $\xi_k$ are i.i.d. standard complex normals. Since the latter series converges uniformly a.s., we get some nontrivial distribution on $(0,+\infty)$ for which we can do good tail estimates and a lot of other things though for the life of mine I cannot tell the exact formula for its density. I'll try to expand this comment into a full answer when I have at least some free time. – fedja Mar 31 2011 at 2:41
## 1 Answer
OK, here is my argument (sorry for the delay).
First of all, $Z$ is essentially the maximum of the absolute value of the polynomial $P(z)=\prod_j(1-z_jz)$ on the unit circumference (up to a factor of $n$, but it is not noticeable on the scale we are talking about).
Second, the maximum of the absolute value of a (trigonometric) polynomial of degree $K$ can be read from any $AK$ uniformly distributed points on the unit circumference $\mathbb T$ (say, roots of unity of degree $AK$) with relative error of order $A^{-1}$.
Now let $\psi(z)=\log(1-z)$. We want to find the asymptotic distribution of $\max_z n^{-1/2}\Re\sum_j \psi(zz_j)$ where the $z_j$ are i.i.d. random variables uniformly distributed on the unit circumference. Since this is the logarithm of $|P(z)|$, the maximum can be found using $10n$ points.
Decompose $\psi(z)$ into its Fourier series $-\sum_{k\ge 1}\frac 1kz^k$. Then, formally, we have $n^{-1/2}\sum_j \psi(zz_j)=-\sum_k\left(n^{-1/2}\sum_j z_j^k\right)\frac{z^k}{k}$. It is tempting to say that the random variables $\xi_n,k=n^{-1/2}\sum_j z_j^k$ converge to the uncorrelated standard complex Gaussians $\xi_k$ by the CLT in distribution and, therefore, the whole sum converges in distribution to the random function $F(z)=\sum_k\xi_k\frac{z^k}k$, so $n^{-1/2}\log Z$ converges to $\max_z\Re F(z)$ (the $-$ sign doesn't matter because the limiting distribution is symmetric). This argument would be valid literally if we had a finite sum in $k$ but, of course, it is patently false for the infinite series (just because if we replace $\max$ by $\min$, we get an obvious nonsense in the end). Still, it can be salvaged if we do it more carefully.
Let $K$ run over the powers of $2$. Choose some big $K_0$ and apply the above naive argument to $\sum_{k=1}^{K_0}$. Then we can safely say that the first $K_0$ terms in the series give us essentially the random function $F_{K_0}(z)$ which is the $K_0$-th partial sum of $F$ when $n$ is large enough.
Our main task will be to show that the rest of the series cannot really change the maximum too much. More precisely, it contributes only a small absolute error with high probability.
To this end, we need
Lemma: Let $f(z)$ be an analytic in the unit disk function with $f(0)=0$, $|\Im f|\le \frac 12$. Then we have $\int_{\mathbb T}e^{\Re f}dm\le \exp\left(2\int_{\mathbb T}|f|^2dm\right)$ where $m$ is the Haar measure on $\mathbb T$.
Proof: By Cauchy-Schwartz, $$\left(\int_{\mathbb T}e^{\Re f}dm\right)^2\le \left(\int_{\mathbb T}e^{2\Re f}e^{-2|\Im f|^2}dm\right)\left(\int_{\mathbb T}e^{2|\Im f|^2}dm\right)$$ Note that if$|\Im w|\le 1$, we have $e^{\Re w}e^{-|\Im w|^2}\le \Re e^w$. So the first integral does not exceed $\int_{\mathbb T}\Re e^{2f}dm=\Re e^{2f(0)}=1$. Next, $e^s\le 1+2s$ for $0\le s\le\frac 12$, so $\int_{\mathbb T}e^{2|\Im f|^2}dm\le 1+4\int_{\mathbb T}|\Im f|^2dm\le 1+4\int_{\mathbb T}|f|^2dm$. Taking the square root turns $4$ into $2$ and it remains to use that $1+s\le e^s$
The immediate consequence of Lemma 1 is a Bernstein type estimate for $G_K(z)=\sum_{k\in (K,2K]}\left(n^{-1/2}\sum_j z_j^k\right)\frac{z^k}{k}$ $$P(\max|\Re G_K|\ge 2T)\le 20Ke^{-T^2K/9}$$ if $0\le TK\le \sqrt n$, say.
Indeed, just use the Bernstein trick on the independent random shifts of $g_K(z)=\sum_{k\in (K,2K]}\frac{z^k}{k}$: $$E e^{\pm t\Re G_K(z)}\le \left(\int_{\mathbb T}e^{\Re tn^{-1/2}g_K}dm\right)^n\le e^{2t^2/K}$$ for every $t\le \sqrt n/2$ (we used the Lemma to make the last estimate) and put $t=\frac{TK}{3}$. After that read the maximum from $10K$ points with small relative error and do the trivial union bound.
Choosing $T=K^{-1/3}$, we see that we can safely ignore the sum from $K=K_0$ to $K=\sqrt n$ if $K_0$ is large enough. Now we are left with $$G_K(z)=\sum_{k\ge \sqrt n}\left(n^{-1/2}\sum_j z_j^k)\right)\frac{z^k}{k}$$ to deal with. Recall that all we want here is to show that it is small at $10n$ uniformly distributed points. Again, if $g(z)=\sum_{k\ge \sqrt n}\frac{z^k}{k}$, we have $|\Im g|\le 10$, say so we can use the same trick and get $$P(\max_{10n\text{ points}}|\Re G|\ge 2T)\le 20n e^{n^{-1/2}t^2-tT}$$ if $0\le t\le \sqrt n/20$, say. Here we do not need to be greedy at all: just take a fixed small $T$ and choose $t=\frac{2\log n}T$.
Now, returning to your original determinant problem, we see that the norm of the inverse matrix is essentially $Z/D$ where $D=\min_i\prod_{j:j\ne i}|z_i-z_j|$. We know the distribution of $\log Z$ and we have the trivial Hadamard bound $D\le n$. This already tells you that the typical $\lambda_1$ is at most $e^{-c\sqrt n}$. The next logical step would be to investigate the distribution of $\log D$.
Thanks fedja, it makes sense. – ght Apr 1 2011 at 13:39
http://mathhelpforum.com/calculus/63244-stuck-inverse-transform.html
# Thread:
1. ## Stuck on Inverse Transform
$\dfrac{1}{2}\cdot\dfrac{s+3}{s^2+2s+2}$
I am getting stuck trying to find the inverse Laplace transform... I've checked it against the tables, and nothing's looking familiar... Am I missing a trick here?!
Thank you in advance!
2. How about express it in terms of partial fractions first then take the inverse transform of those terms? I would think we could treat the complex roots just like regular constants and work it through and get an expression containing those complex numbers. If it were part of a DE problem, then I'd separate the real and complex components of the solution and just like the case when you solve linear DEs, the real and imaginary components are the two real solutions to the DE. Think so anyway.
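Another route that avoids complex partial fractions is to complete the square: $s^2+2s+2=(s+1)^2+1$ and $s+3=(s+1)+2$, so $$\frac{1}{2}\,\frac{s+3}{s^2+2s+2}=\frac{1}{2}\left[\frac{s+1}{(s+1)^2+1}+\frac{2}{(s+1)^2+1}\right]\;\longmapsto\;\frac{1}{2}e^{-t}\left(\cos t+2\sin t\right).$$ A quick symbolic check (an illustrative sketch, not part of the original thread):
```
import sympy as sp

s, t = sp.symbols('s t', positive=True)
F = sp.Rational(1, 2) * (s + 3) / (s**2 + 2 * s + 2)
print(sp.inverse_laplace_transform(F, s, t))
# expect exp(-t)*(cos(t) + 2*sin(t))/2, possibly multiplied by a Heaviside step
```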
http://psychology.wikia.com/wiki/Confidence_interval
# Confidence limits (statistics)
In statistics, a confidence interval (CI) or confidence bound is an interval estimate of a population parameter. Instead of estimating the parameter by a single value, an interval likely to include the parameter is given. Thus, confidence intervals are used to indicate the reliability of an estimate. How likely the interval is to contain the parameter is determined by the confidence level or confidence coefficient. Increasing the desired confidence level will widen the confidence interval.
For example, a CI can be used to describe how reliable survey results are. In a poll of election voting-intentions, the result might be that 40% of respondents intend to vote for a certain party. A 95% confidence interval for the proportion in the whole population having the same intention on the survey date might be 36% to 44%. All other things being equal, a survey result with a small CI is more reliable than a result with a large CI and one of the main things controlling this width in the case of population surveys is the size of the sample questioned. Confidence intervals and interval estimates more generally have applications across the whole range of quantitative studies.
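As a concrete sketch of how such an interval is computed (an illustrative addition; the sample size of 600 is a hypothetical value chosen only so that the result roughly matches the 36% to 44% interval quoted above), the usual normal-approximation interval for a proportion is $\hat p \pm z_{0.975}\sqrt{\hat p(1-\hat p)/n}$:
```
from math import sqrt
from scipy.stats import norm

p_hat, n = 0.40, 600                    # 40% of a hypothetical 600 respondents
z = norm.ppf(0.975)                     # about 1.96 for a 95% interval
half_width = z * sqrt(p_hat * (1 - p_hat) / n)
print(p_hat - half_width, p_hat + half_width)   # roughly 0.36 to 0.44
```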
If a statistic is presented with a confidence interval, and is claimed to be statistically significant, the underlying test leading to that claim will have been performed at a significance level of 100% minus the confidence level of the interval. If that test has produced a type I error, the statistic and its confidence interval will bear no relationship to the underlying parameter.
## Brief explanation
For a given proportion p (where p is the confidence level), a confidence interval for a population parameter is an interval that is calculated from a random sample of an underlying population such that, if the sampling was repeated numerous times and the confidence interval recalculated from each sample according to the same method, a proportion p of the confidence intervals would contain the population parameter in question. In unusual cases, a confidence set may consist of a collection of several separate intervals, which may include semi-infinite intervals, and it is possible that an outcome of a confidence-interval calculation could be the set of all values from minus infinity to plus infinity.
Confidence intervals are the most prevalent form of interval estimation. Interval estimates may be contrasted with point estimates and have the advantage over these as summaries of a dataset in that they convey more information – not just a "best estimate" of a parameter but an indication of the precision with which the parameter is known.
Confidence intervals play a similar role in frequentist statistics to the credibility interval in Bayesian statistics. However, confidence intervals and credibility intervals are not only mathematically different; they have radically different interpretations.
Confidence regions generalise the confidence interval concept to deal with multiple quantities. Such regions can indicate not only the extent of likely estimation errors but can also reveal, for example, whether an estimate that is too large for one quantity is likely to go together with one that is too large for the other. See also confidence bands.
In applied practice, confidence intervals are typically stated at the 95% confidence level.[3] However, when presented graphically, confidence intervals can show several confidence levels, for example 50%, 95% and 99%.
## Theoretical basis
### Definition
#### Confidence Intervals as random intervals
Confidence intervals are constructed on the basis of a given dataset: x denotes the set of observations in the dataset, and X is used when considering the outcomes that might have been observed from the same population, where X is treated as a random variable whose observed outcome is X = x. A confidence interval is specified by a pair of functions u(.) and v(.) and the confidence interval for the given data set is defined as the interval (u(x), v(x)). To complete the definition of a confidence interval, there needs to be a clear understanding of the quantity for which the CI provides an interval estimate. Suppose this quantity is w. The property of the rules u(.) and v(.) that makes the interval (u(x),v(x)) closest to what a confidence interval for w would be, relates to the properties of the set of random intervals given by (u(X),v(X)): that is treating the end-points as random variables. This property is the coverage probability or the probability c that the random interval includes w,
$c=\Pr(u(X)<w<v(X)).$
Here the endpoints U = u(X) and V = v(X) are statistics (i.e., observable random variables) which are derived from values in the dataset. The random interval is (U, V).
#### Confidence intervals for inference
For the above to provide a viable means to statistical inference, something further is required: a tie between the quantity being estimated and the probability distribution of the outcome X. Suppose that this probability distribution is characterised by the unobservable parameter θ, which is a quantity to be estimated, and by other unobservable parameters φ which are not of immediate interest. These other quantities φ in which there is no immediate interest are called nuisance parameters, as statistical theory still needs to find some way to deal with them.
The definition of a confidence interval for θ for any number α between 0 and 1 is an interval
$(u(X), v(X))$
for which
$\Pr_{X;\theta,\varphi}(u(X)<\theta<v(X)) = 1 - \alpha\text{ for all }(\theta,\varphi)\,$
and u(X) and v(X) are observable random variables, i.e. one need not know the value of the unobservable quantities θ, φ in order to know the values of u(X) and v(X).
The number 1 − α (sometimes reported as a percentage 100%·(1 − α)) is called the confidence level or confidence coefficient. Most standard books adopt this convention, where α will be a small number. Here ${\Pr}_{X;\theta,\varphi}$ is used to indicate the probability when the random variable X has the distribution characterised by $(\theta,\varphi)$. An important part of this specification is that the random interval (U, V) covers the unknown value θ with a high probability no matter what the true value of θ actually is.
Note that here ${\Pr}_{X;\theta,\varphi}$ need not refer to an explicitly given parameterised family of distributions, although it often does. Just as the random variable X notionally corresponds to other possible realisations of x from the same population or from the same version of reality, the parameters $(\theta,\varphi)$ indicate that we need to consider other versions of reality in which the distribution of X might have different characteristics.
#### Intervals for random outcomes
Confidence intervals can be defined for random quantities as well as for fixed quantities as in the above. See prediction interval. For this, consider an additional single-valued random variable Y which may or may not be statistically dependent on X. Then the rule for constructing the interval(u(x), v(x)) provides a confidence interval for the as-yet-to-be observed value y of Y if
$\Pr_{X,Y;\theta,\varphi}(u(X)<Y<v(X)) = 1-\alpha\text{ for all }(\theta,\varphi).\,$
Here ${\Pr}_{X,Y;\theta,\varphi}$ is used to indicate the probability over the joint distribution of the random variables (X, Y) when this is characterised by parameters $(\theta,\varphi)$.
#### Approximate confidence intervals
For non-standard applications it is sometimes not possible to find rules for constructing confidence intervals that have exactly the required properties. But practically useful intervals can still be found. The coverage probability $c(\theta,\varphi)$ for a random interval is defined by
${\Pr}_{X;\theta,\varphi}(u(X)<\theta<v(X))=c(\theta,\varphi)$
and the rule for constructing the interval may be accepted as providing a confidence interval if
$c(\theta,\varphi)\approx 1-\alpha$ for all $(\theta,\varphi)$
to an acceptable level of approximation.
#### Comparison to Bayesian interval estimates
A Bayesian interval estimate is called a credible interval. Using much of the same notation as above, the definition of a credible interval for the unknown true value of θ is, for a given α[4],
${\Pr}_{\Theta|X=x}(u(x)<\Theta<v(x))=1-\alpha.$
Here Θ is used to emphasize that the unknown value of $\theta$ is being treated as a random variable. The definitions of the two types of intervals may be compared as follows.
• The definition of a confidence interval involves probabilities calculated from the distribution of X for given $(\theta,\varphi)$ (or conditional on these values) and the condition needs to hold for all values of $(\theta,\varphi)$.
• The definition of a credible interval involves probabilities calculated from the distribution of Θ conditional on the observed values of X=x and marginalised (or averaged) over the values of $\Phi$, where this last quantity is the random variable corresponding to the uncertainty about the nuisance parameters in $\varphi$.
Note that the treatment of the nuisance parameters above is often omitted from discussions comparing confidence and credible intervals but it is markedly different between the two cases.
In some simple standard cases, the intervals produced as confidence and credible intervals from the same data set can be identical. They can differ markedly when moderate or strong prior information is included in the Bayesian analysis.
### Desirable properties
When applying fairly standard statistical procedures, there will often be fairly standard ways of constructing confidence intervals. These will have been devised so as to meet certain desirable properties, which will hold given that the assumptions on which the procedure relies are true. In non-standard applications, the same desirable properties would be sought. These desirable properties may be described as: validity, optimality and invariance. Of these, "validity" is most important, followed closely by "optimality". "Invariance" may be considered as a property of the method of derivation of a confidence interval rather than of the rule for constructing the interval.
• Validity. This means that the nominal coverage probability (confidence level) of the confidence interval should hold, either exactly or to a good approximation.
• Optimality. This means that the rule for constructing the confidence interval should make as much use of the information in the data-set as possible. Recall that one could throw away half of a dataset and still be able to derive a valid confidence interval. One way of assessing optimality is by the length of the interval, so that a rule for constructing a confidence interval is judged better than another if it leads to intervals whose widths are typically shorter.
• Invariance. In many applications the quantity being estimated might not be tightly defined as such. For example, a survey might result in an estimate of the median income in a population, but it might equally be considered as providing an estimate of the logarithm of the median income, given that this is a common scale for presenting graphical results. It would be desirable that the method used for constructing a confidence interval for the median income would give equivalent results when applied to constructing a confidence interval for the logarithm of the median income: specifically the values at the ends of the latter interval would be the logarithms of the values at the ends of the former interval.
### Methods of derivation
For non-standard applications, there are several routes that might be taken to derive a rule for the construction of confidence intervals. Established rules for standard procedures might be justified or explained via several of these routes. Typically a rule for constructing confidence intervals is closely tied to a particular way of finding a point estimate of the quantity being considered.
#### Sample statistics
This is closely related to the method of moments for estimation. A simple example arises where the quantity to be estimated is the mean, in which case a natural estimate is the sample mean. The usual arguments indicate that the sample variance can be used to estimate the variance of the sample mean. A naive confidence interval for the true mean can be constructed centered on the sample mean with a width which is a multiple of the square root of the sample variance.
#### Likelihood theory
Where estimates are constructed using the maximum likelihood principle, the theory for this provides two ways of constructing confidence intervals or confidence regions for the estimates.
#### Estimating equations
The estimation approach here can be considered as both a generalization of the method of moments and a generalization of the maximum likelihood approach. There are corresponding generalizations of the results of maximum likelihood theory that allow confidence intervals to be constructed based on estimates derived from estimating equations.
#### Via significance testing
If significance tests are available for general values of a parameter, then confidence intervals/regions can be constructed by including in the 100p% confidence region all those points for which the significance test of the null hypothesis that the true value is the given value is not rejected at a significance level of 100(1 − p)%.
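As a rough illustration of this duality (an added sketch, not from the original text; it assumes a two-sided z-test for a normal mean with known σ and borrows the numbers from the margarine-filling example below), the set of hypothesised means that is not rejected coincides with the usual confidence interval:

```python
import numpy as np
from scipy.stats import norm

sigma, n, xbar, alpha = 2.5, 25, 250.2, 0.05

def rejected(mu0):
    """Two-sided z-test of H0: mu = mu0 at level alpha."""
    z = (xbar - mu0) * np.sqrt(n) / sigma
    return abs(z) > norm.ppf(1 - alpha / 2)

# All hypothesised values that are NOT rejected form the confidence interval.
grid = np.linspace(248, 252, 4001)
mask = np.array([not rejected(mu0) for mu0 in grid])
accepted = grid[mask]
print(accepted.min(), accepted.max())   # approximately (249.22, 251.18), as in the example below
```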
## Examples
### Practical example
A machine fills cups with margarine, and is supposed to be adjusted so that the mean content of the cups is close to 250 grams of margarine. Of course it is not possible to fill every cup with exactly 250 grams of margarine. Hence the weight of the filling can be considered to be a random variable X. The distribution of X is assumed here to be a normal distribution with unknown expectation μ and (for the sake of simplicity) known standard deviation σ = 2.5 grams. To check if the machine is adequately adjusted, a sample of n = 25 cups of margarine is chosen at random and the cups weighed. The weights of margarine are $X_1,\dots,X_{25}$, a random sample from X.
To get an impression of the expectation μ, it is sufficient to give an estimate. The appropriate estimator is the sample mean:
$\hat \mu=\bar X=\frac{1}{n}\sum_{i=1}^n X_i.$
The sample shows actual weights $x_1,\dots,x_{25}$, with mean:
$\bar x=\frac {1}{25} \sum_{i=1}^{25} x_i = 250.2\,\mathrm{grams}$.
If we take another sample of 25 cups, we could easily expect to find values like 250.4 or 251.1 grams. A sample mean value of 280 grams however would be extremely rare if the mean content of the cups is in fact close to 250g. There is a whole interval around the observed value 250.2 of the sample mean within which, if the whole population mean actually takes a value in this range, the observed data would not be considered particularly unusual. Such an interval is called a confidence interval for the parameter μ. How do we calculate such an interval? The endpoints of the interval have to be calculated from the sample, so they are statistics, functions of the sample $X_1,\dots,X_{25}$ and hence random variables themselves.
In our case we may determine the endpoints by considering that the sample mean $\bar X$ from a normally distributed sample is also normally distributed, with the same expectation μ, but with standard error $\sigma/\sqrt{n} = 0.5$ (grams). By standardizing we get a random variable
$Z = \frac {\bar X-\mu}{\sigma/\sqrt{n}} =\frac {\bar X-\mu}{0.5}$
The above expression standardizes the variable, and it is this standardized variable that we use to calculate the 95% confidence interval. Here μ is the unknown population mean, σ = 2.5 grams is the known standard deviation, n = 25 is the sample size, and $\bar X$ is the sample mean, whose observed value here is 250.2 grams. To calculate a confidence interval we first need to pick a value of α; since we are interested in a 95% confidence interval, we set α = 0.05, so 1 − α = 0.95. It is then possible to find numbers −z and z, independent of μ, between which Z lies with probability 1 − α. So we have:
$P(-z\le Z\le z) = 1-\alpha = 0.95.$
The number z follows from the cumulative distribution function of the standard normal distribution, which applies here because we standardized Z (see also probit). Therefore:
$\Phi(z) = P(Z \le z) = 1 - \frac{\alpha}2 = 0.975\,,$
$z=\Phi^{-1}(\Phi(z)) = \Phi^{-1}(0.975) = 1.96\,,$
and then we get:
$0.95 = 1-\alpha=P(-z \le Z \le z)=P \left(-1.96 \le \frac {\bar X-\mu}{\sigma/\sqrt{n}} \le 1.96 \right)$
$=P \left( \bar X - 1.96 \frac{\sigma}{\sqrt{n}} \le \mu \le \bar X + 1.96 \frac{\sigma}{\sqrt{n}}\right)$
$=P\left(\bar X - 1.96 \times 0.5 \le \mu \le \bar X + 1.96 \times 0.5\right)$
$=P \left( \bar X - 0.98 \le \mu \le \bar X + 0.98 \right).$
This might be interpreted as follows: with probability 0.95 the procedure produces a confidence interval whose stochastic endpoints enclose the parameter μ; it does not mean that a particular calculated interval contains μ with probability 95%. The stochastic endpoints are
$\bar X - 0.98$
and
$\bar X + 0.98.$
Every time the measurements are repeated, there will be another value for the mean $\bar X$ of the sample. In 95% of the cases μ will be between the endpoints calculated from this mean, but in 5% of the cases it will not be. The actual confidence interval is calculated by entering the measured weights in the formula. Our 0.95 confidence interval becomes:
$(\bar x - 0.98;\bar x + 0.98) = (250.2 - 0.98; 250.2 + 0.98) = (249.22; 251.18).\,$
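The same interval can be obtained in a couple of lines of code; the following is a small sketch added for illustration (it assumes scipy is available):

```python
import math
from scipy.stats import norm

xbar, sigma, n, alpha = 250.2, 2.5, 25, 0.05
z = norm.ppf(1 - alpha / 2)                  # 1.96 to two decimals
half_width = z * sigma / math.sqrt(n)        # about 0.98 grams
print(xbar - half_width, xbar + half_width)  # about (249.22, 251.18)
```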
This interval has fixed endpoints, where μ might be in between (or not). There is no probability of such an event. We cannot say: "with probability (1 − α) the parameter μ lies in the confidence interval." We only know that by repetition in 100(1 − α) % of the cases μ will be in the calculated interval. In 100α % of the cases however it doesn't. And unfortunately we don't know in which of the cases this happens. That's why we say: "with confidence level 100(1 − α) % μ lies in the confidence interval."
A figure accompanying the original article shows 50 realisations of a confidence interval for a given population mean μ. If we randomly choose one realisation, the probability is 95% that we end up having chosen an interval that contains the parameter; however, we may be unlucky and have picked one that does not. We will never know; we are stuck with our interval.
### Theoretical example
Suppose X1, ..., Xn are an independent sample from a normally distributed population with mean μ and variance σ2. Let
$\overline{X}=(X_1+\cdots+X_n)/n,\,$
$S^2=\frac{1}{n-1}\sum_{i=1}^n\left(X_i-\overline{X}\,\right)^2.$
Then
$T=\frac{\overline{X}-\mu}{S/\sqrt{n}}$
has a Student's t-distribution with n − 1 degrees of freedom. Note that the distribution of T does not depend on the values of the unobservable parameters μ and σ2; i.e., it is a pivotal quantity. If c is the 95th percentile of this distribution, then
$\Pr\left(-c<T<c\right)=0.9.\,$
(Note: "95th" and "0.9" are correct in the preceding expressions. There is a 5% chance that T will be less than −c and a 5% chance that it will be larger than +c. Thus, the probability that T will be between −c and +c is 90%.)
Consequently
$\Pr\left(\overline{X}-cS/\sqrt{n}<\mu<\overline{X}+cS/\sqrt{n}\right)=0.9\,$
and we have a theoretical (stochastic) 90% confidence interval for μ.
After observing the sample we find values $\overline{x}$ for $\overline{X}$ and s for S, from which we compute the confidence interval
$[\overline{x}-cs/\sqrt{n},\overline{x}+cs/\sqrt{n}]\,$,
an interval with fixed numbers as endpoints, of which we can no longer say that there is a certain probability it contains the parameter μ: either μ is in this interval or it is not.
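A brief computational sketch of this procedure (added for illustration, with an invented sample; scipy's t distribution supplies the percentile c):

```python
import numpy as np
from scipy.stats import t

x = np.array([2.1, 1.9, 2.4, 2.0, 2.3, 1.8, 2.2, 2.0])   # hypothetical sample
n = len(x)
xbar, s = x.mean(), x.std(ddof=1)        # ddof=1 gives the n - 1 denominator

c = t.ppf(0.95, df=n - 1)                # 95th percentile gives a 90% interval
half_width = c * s / np.sqrt(n)
print(xbar - half_width, xbar + half_width)
```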
## Relation to hypothesis testing
While the formulations of the notions of confidence intervals and of statistical hypothesis testing are distinct they are in some senses related and to some extent complementary. While not all confidence intervals are constructed in this way, one general purpose approach to constructing confidence intervals is to define a 100(1−α)% confidence interval to consist of all those values θ0 for which a test of the hypothesis θ=θ0 is not rejected at a significance level of 100α%. Such an approach may not always be available since it presupposes the practical availability of an appropriate significance test. Naturally, any assumptions required for the significance test would carry over to the confidence intervals.
It may be convenient to make the general correspondence that parameter values within a confidence interval are equivalent to those values that would not be rejected by an hypothesis test, but this would be dangerous. In many instances the confidence intervals that are quoted are only approximately valid, perhaps derived from "plus or minus twice the standard error", and the implications of this for the supposedly corresponding hypothesis tests are usually unknown.
## Meaning and interpretation
For users of frequentist methods, various interpretations of a confidence interval can be given.
• The confidence interval can be expressed in terms of samples (or repeated samples): "Were this procedure to be repeated on multiple samples, the calculated confidence interval (which would differ for each sample) would encompass the true population parameter 90% of the time." [5] Note that this need not be repeated sampling from the same population, just repeated sampling [6].
• The explanation of a confidence interval can amount to something like: "The confidence interval represents values for the population parameter for which the difference between the parameter and the observed estimate is not statistically significant at the 10% level"[7]. In fact, this relates to one particular way in which a confidence interval may be constructed.
• The probability associated with a confidence interval may also be considered from a pre-experiment point of view, in the same context in which arguments for the random allocation of treatments to study items are made. Here the experimenter sets out the way in which they intend to calculate a confidence interval and know, before they do the actual experiment, that the interval they will end up calculating has a certain chance of covering the true but unknown value. This is very similar to the "repeated sample" interpretation above, except that it avoids relying on considering hypothetical repeats of a sampling procedure that may not be repeatable in any meaningful sense.
In each of the above, the following applies. If the true value of the parameter lies outside the 90% confidence interval once it has been calculated, then an event has occurred which had a probability of 10% (or less) of happening by chance.
Users of Bayesian methods, if they produced an interval estimate, would by contrast want to say "My degree of belief that the parameter is in fact in this interval is 90%" [8]. See Credible interval. Disagreements about these issues are not disagreements about solutions to mathematical problems. Rather they are disagreements about the ways in which mathematics is to be applied.
## Meaning of the term confidence
There is a difference in meaning between the common usage of the word 'confidence' and its statistical usage, which is often confusing to the layman. In common usage, a claim to 95% confidence in something is normally taken as indicating virtual certainty. In statistics, a claim to 95% confidence simply means that the researcher has seen something occur that only happens one time in twenty or less. If one were to roll two dice and get double six, few would claim this as proof that the dice were fixed, although statistically speaking one could have 97% confidence that they were. Similarly, the finding of a statistical link at 95% confidence is not proof, nor even very good evidence, that there is any real connection between the things linked.
When a study involves multiple statistical tests, some laymen assume that the confidence associated with the individual tests is the confidence one should have in the results of the study itself. In fact, the results of all the statistical tests conducted during a study must be judged as a whole in determining what confidence one may place in the positive links it produces. If a study involving 40 statistical tests at 95% confidence is performed, about two of the tests can be expected to return false positives purely by chance. If three links are found, three or more false positives would be expected to arise by chance roughly 32% of the time, so the confidence that can be placed in those links 'as the result of the survey' is far lower than the 95% attached to each individual test.
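The arithmetic behind this point can be sketched as follows (an added illustration that treats the 40 tests as independent, an assumption the text does not state explicitly):

```python
from scipy.stats import binom

n_tests, alpha = 40, 0.05
print(n_tests * alpha)                   # 2.0 false positives expected on average
print(1 - binom.cdf(2, n_tests, alpha))  # ~0.32: chance of three or more false positives by chance alone
```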
## Confidence intervals in measurement
The results of measurements are often accompanied by confidence intervals. For instance, suppose a scale is known to yield the actual mass of an object plus a normally distributed random error with mean 0 and known standard deviation σ. If we weigh 100 objects of known mass on this scale and report the values ±σ, then we can expect to find that around 68% of the reported ranges include the actual mass.
If we wish to report values with a smaller standard error value, then we repeat the measurement n times and average the results. Then the 68.2% confidence interval is $\pm \sigma/\sqrt{n}$. For example, repeating the measurement 100 times reduces the confidence interval to 1/10 of the original width.
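A quick simulation sketch (added here for illustration, with an invented true mass and error standard deviation) shows that roughly 68% of the reported ±σ/√n ranges do cover the true value:

```python
import numpy as np

rng = np.random.default_rng(1)
true_mass, sigma, n_repeats = 10.6, 0.1, 5   # hypothetical values

hits = 0
reports = 10_000
for _ in range(reports):
    measurements = rng.normal(true_mass, sigma, size=n_repeats)
    m, se = measurements.mean(), sigma / np.sqrt(n_repeats)
    hits += (m - se < true_mass < m + se)

print(hits / reports)   # close to 0.683
```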
Note that when we report a 68.2% confidence interval (usually termed standard error) as v ± σ, this does not mean that the true mass has a 68.2% chance of being in the reported range. In fact, the true mass is either in the range or not. How can a value outside the range be said to have any chance of being in the range? Rather, our statement means that 68.2% of the ranges we report using ± σ are likely to include the true mass.
This is not just a quibble. Under the incorrect interpretation, each of the 100 measurements described above would be specifying a different range, and the true mass supposedly has a 68% chance of being in each and every range. Also, it supposedly has a 32% chance of being outside each and every range. If two of the ranges happen to be disjoint, the statements are obviously inconsistent. Say one range is 1 to 2, and the other is 2 to 3. Supposedly, the true mass has a 68% chance of being between 1 and 2, but only a 32% chance of being less than 2 or more than 3. The incorrect interpretation reads more into the statement than is meant.
On the other hand, under the correct interpretation, each and every statement we make is really true, because the statements are not about any specific range. We could report that one mass is 10.2 ± 0.1 grams, while really it is 10.6 grams, and not be lying. But if we report fewer than 1000 values and more than two of them are that far off, we will have some explaining to do.
It is also possible to estimate a confidence interval without knowing the standard deviation of the random error. This is done using the t distribution, or by using non-parametric resampling methods such as the bootstrap, which do not require that the error have a normal distribution.
## Confidence intervals for proportions and related quantities
See also: Margin of error
See also: Binomial proportion confidence interval
An approximate confidence interval for a population mean can be constructed for random variables that are not normally distributed in the population, relying on the central limit theorem, if the sample sizes and counts are big enough. The formulae are identical to the case above (where the sample mean is actually normally distributed about the population mean). The approximation will be quite good with only a few dozen observations in the sample if the probability distribution of the random variable is not too different from the normal distribution (e.g. its cumulative distribution function does not have any discontinuities and its skewness is moderate).
One type of sample mean is the mean of an indicator variable, which takes on the value 1 for true and the value 0 for false. The mean of such a variable is equal to the proportion that have the variable equal to one (both in the population and in any sample). This is a useful property of indicator variables, especially for hypothesis testing. To apply the central limit theorem, one must use a large enough sample. A rough rule of thumb is that one should see at least 5 cases in which the indicator is 1 and at least 5 in which it is 0. Confidence intervals constructed using the above formulae may include negative numbers or numbers greater than 1, but proportions obviously cannot be negative or exceed 1. Additionally, sample proportions can only take on a finite number of values, so the central limit theorem and the normal distribution are not the best tools for building a confidence interval. See "Binomial proportion confidence interval" for better methods which are specific to this case.
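For concreteness, here is a small sketch of the naive normal-approximation interval for a proportion (an added illustration; as noted above, the methods on the "Binomial proportion confidence interval" page are preferable in practice):

```python
from math import sqrt
from scipy.stats import norm

successes, n = 40, 100            # e.g. 40 of 100 respondents say "yes"
p_hat = successes / n
z = norm.ppf(0.975)               # 95% confidence
se = sqrt(p_hat * (1 - p_hat) / n)
print(p_hat - z * se, p_hat + z * se)   # roughly (0.30, 0.50)
```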
## References
1. ↑ Goldstein, H., & Healey, M.J.R. (1995). "The graphical presentation of a collection of means." Journal of the Royal Statistical Society, 158, 175-77.
2. ↑ Wolfe R, Hanley J (Jan 2002). If we're so different, why do we keep overlapping? When 1 plus 1 doesn't make 2. CMAJ 166 (1): 65–6.
3. ↑ Zar, J.H. (1984) Biostatistical Analysis. Prentice Hall International, New Jersey. pp 43-45
4. ↑ Bernardo JE, Smith, Adrian (2000). Bayesian theory, 259, New York: Wiley.
5. ↑ Cox DR, Hinkley DV. (1974) Theoretical Statistics, Chapman & Hall, p49, 209
6. ↑ Kendall, M.G. and Stuart, D.G. (1973) The Advanced Theory of Statistics. Vol 2: Inference and Relationship, Griffin, London. Section 20.4
7. ↑ Cox DR, Hinkley DV. (1974) Theoretical Statistics, Chapman & Hall, p214, 225, 233
8. ↑ Cox DR, Hinkley DV. (1974) Theoretical Statistics, Chapman & Hall, p390
• Fisher, R.A. (1956) Statistical Methods and Scientific Inference. Oliver and Boyd, Edinburgh. (See p. 32.)
• Freund, J.E. (1962) Mathematical Statistics Prentice Hall, Englewood Cliffs, NJ. (See pp. 227–228.)
• Hacking, I. (1965) Logic of Statistical Inference. Cambridge University Press, Cambridge
• Keeping, E.S. (1962) Introduction to Statistical Inference. D. Van Nostrand, Princeton, NJ.
• Kiefer, J. (1977) "Conditional Confidence Statements and Confidence Estimators (with discussion)" Journal of the American Statistical Association, 72, 789–827.
• Neyman, J. (1937) "Outline of a Theory of Statistical Estimation Based on the Classical Theory of Probability" Philosophical Transactions of the Royal Society of London A, 236, 333–380. (Seminal work.)
• Robinson, G.K. (1975) "Some Counterexamples to the Theory of Confidence Intervals." Biometrika, 62, 155–161.
http://physics.stackexchange.com/questions/31888/flow-of-freezing-liquid-or-a-melting-solid
# Flow of freezing liquid or a melting solid
If a liquid is freezing, is the equation of continuity violated? As the liquid flows, some portion of it gets frozen, so the mass of the flowing liquid keeps dropping. Similarly, when a molten fluid flows over a solid, its mass may increase because the solid over which it flows also melts. In either of these flows, can we assume that div u = 0, where u is the velocity vector?
-
## 1 Answer
The continuity equation is not violated in either of the situations you describe above. The generic continuity equation for some scalar quantity (such as density) can be written as
$\frac{\partial \psi}{\partial t} + \nabla \cdot (\psi\mathbf{u}) = \sigma$, (1)
where $\psi$ is some conserved scalar quantity, $\mathbf{u}$ is the velocity field and $\sigma$ is the generation of $\psi$ per unit volume per unit time (the source term). This source term is introduced when you are adding to or taking away from the substance under consideration (or indeed changing its state).
For an ordinary incompressible flow (with the fluid not changing state), the continuity equation can be written as
$\frac{\partial \rho}{\partial t} + \nabla \cdot (\rho\mathbf{u}) = 0$. (2)
Taking the density of the fluid to be constant we find (as you point out)
$\nabla \cdot \mathbf{u} = 0$. (3)
However, for the situations described above, this simplified version will not model the flow correctly, as the density is not constant. As long as you model the substance with the correct continuity equation, nothing will be violated. In fact, for the cases you speak of, you can still treat the molten metal or melting ice as the same continuous substance, but instead of the simplified version of the continuity equation (the divergence-free condition you have stated, equation (3)), you would need to use equation (2). To model the flow only, where you would treat the solidification as a change in volume/mass, you would also need to include the source term (equation (1)).
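As a small sanity check of the divergence-free condition (3), here is an added sympy sketch with two invented velocity fields (not taken from the question):

```python
import sympy as sp

x, y, z = sp.symbols('x y z')

def divergence(u):
    return sp.diff(u[0], x) + sp.diff(u[1], y) + sp.diff(u[2], z)

u_rot = (y, -x, sp.Integer(0))   # rigid rotation: an incompressible field
u_exp = (x, y, z)                # uniform expansion: not divergence-free

print(divergence(u_rot))   # 0 -> equation (3) holds
print(divergence(u_exp))   # 3 -> a density change or source term is implied
```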
I hope this helps.
-
Thanks, your explanation was indeed helpful. I was actually looking at a flow of a liquid under gravity, for which $\vec{u}=w\vec{e}_z$. If eqn(3) was valid then w would be independent of z and Navier-Stokes equation will be $0=-\nabla{p}+\eta\Delta\vec{u}+\rho\vec{g}$. However, in case of a melting flow, I would have to use the general form of continuity equation (1) and I will no longer conclude that $w$ is a function of $x$ and $y$ alone. Am I thinking correctly? – Amey Joshi Jul 21 '12 at 11:46
$w$ here is a scalar. This is giving magnitude to the unit vector $\mathbf{e}_{z}$. The equation $\mathrm{div}\;\mathbf{u} = 0$ is just saying that the net flux through the surface of the relevant volume is zero. As you have written the definition of $\mathbf{u}$ above, it has no dependency on the x or y coords. If $w = w(x, y)$ then $\mathbf{u} = w(x, y)\mathbf{e}_{z}$; changing the type of flow considered will not necessarily change your definition of $w$. Also note that the Navier-Stokes eqn. is a conservation of momentum eqn., while the continuity eqns. for density above are conservation of mass... – Killercam Jul 21 '12 at 12:19
http://math.stackexchange.com/questions/33788/fxyfx-y-2fxfy?answertab=active
# $f(x+y)+f(x-y)=2[f(x)+f(y)]$
Here is the problem: Find all continuous $f:\mathbb{R}\rightarrow\mathbb{R}$ which satisfy $$f(x+y)+f(x-y)=2[f(x)+f(y)]\;\;\;(1).$$
Here is my attempt:
Fix $\delta>0$ and let $C=\int_{0}^{\delta}2f(y)dy.$
Then $$\begin{align*} 2\delta f(x)+C&=\int_{0}^{\delta}2[f(x)+f(y)]dy\\ &=\int_{0}^{\delta}f(x+y)+f(x-y)dy\\ &=\int_{x}^{x+\delta}f(y)dy+\int_{x-\delta}^{x}f(y)dy\\ &=\int_{x-\delta}^{x+\delta}f(y)dy. \end{align*}$$
Now since $f$ is continuous, the last expression is a differentiable function of x and thus the first expression must also be differentiable; hence $f$ is differentiable. By induction, f is infinitely differentiable.
Differentiating (1) first with respect to y, we arrive at: $$f'(x+y)-f'(x-y)=2f'(y)\;\;\;(2).$$
Differentiating once more with respect to x, we have: $f''(x+y)=f''(x-y),$ so $f''$ is constant. It follows that $f(x)= ax^2+bx+c$ are the only potential solutions.
Substituting $x=y=0$ in (1) and (2) implies $f(0)=f'(0)=0$; hence $f(x)=ax^2\;\;\;(a\in\mathbb{R})$.
It is easy to check that all such $f$ are indeed solutions. $\square$
Is my proof correct?
-
3
Very nice approach. – Andres Caicedo Apr 19 '11 at 5:46
Nice indeed! I was thinking more along the lines of when y = 0, we get f(y) = f(y) = 0... – The Chaz 2.0 Apr 19 '11 at 5:49
How do you get $f(-y)$ in $\int_{x}^{\delta+x}f(y)+f(-y)dy$? It seems wrong, since $f(-y)$ has arguments near $-x$ instead of $x$. – joriki Apr 19 '11 at 5:53
@joriki: You are correct; I have fixed the error. – bobobinks Apr 19 '11 at 6:03
@Mike: I am aware of the approach you suggested; in fact, in most cases it is the better approach as it allows one to reach the same conclusion with weaker hypotheses, i.e, monotonicity away from 0 or boundedness on a closed interval. I was simply looking for an alternate method. In any case, thank you for your suggestion. – bobobinks Apr 19 '11 at 6:19
show 1 more comment
## 1 Answer
You might be interested in problem 12 in "functional equations" by Marko Radovanović
The problem states: Find all functions $f,g,h\colon \mathbb R\to\mathbb R$ that satisfy $f(x+y)+g(x-y)= 2h(x)+2h(y)$. The solution is quite short and beautiful (as is typical with that kind of problems.)
Turns out the only solutions are $$h(x) = \alpha x^2 + \beta x + b$$ $$f(x) = \alpha x^2 + 2\beta x + 4 b- a$$ $$g(x) = \alpha x^2 + a$$
For some choice of constants. Specializing to the case where $f=g=h$ yields $\beta = 0$ and $a=b=4b-a$, so $f(x)=g(x)=h(x)=\alpha x^2$.
Edit: I just noted that the author apparently reduces this problem to something very similar to the given problem (the sign on the left hand side is different) and then states it is trivial with the given technique. I think it's not that difficult, something like this might work on his problem --- maybe it provides a shortcut here too (?):
• Show that $f(0) = 0$ and $f(-x) = f(x)$
• Assume that $f(1)=f(-1) = \alpha$, find $f(2)=f(1+1)=4\alpha$
• Find that $f(n) = f((n-1)+1) = n^2 \alpha$ with induction.
• Proceed to deduce that $f(\tfrac{n}{m}) = \frac{n^2 \alpha}{m^2}$.
• Use continuity.
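As a quick sanity check (an added sketch, not part of the argument above), sympy confirms that $f(x)=\alpha x^2$ satisfies the original equation:

```python
import sympy as sp

x, y, a = sp.symbols('x y a')
f = lambda t: a * t**2

lhs = f(x + y) + f(x - y)
rhs = 2 * (f(x) + f(y))
print(sp.simplify(lhs - rhs))   # 0, so f(x) = a*x**2 is indeed a solution
```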
-
+1 Very nice reference. – Adrián Barquero Apr 19 '11 at 7:16
http://mathforum.org/mathimages/index.php?title=Problem_of_Apollonius&diff=33162&oldid=32746
# Problem of Apollonius
### From Math Images
Fields: Geometry and Fractals
Image Created By: Paul Nylander
Website: Fractals
This is an example of a fractal that can be created by repeatedly solving the Problem of Apollonius.
# Basic Description
Image by: Wikipedia
The problem of Apollonius involves trying to find a circle that is tangent to three objects: points, lines, or circles in a plane. The most famous of these is the case involving three different circles in a plane, as seen in the picture to the left. The given three circles are in red, green, and blue, while the solution circle is in black.
Apollonius of Perga posed and solved this problem in his work called Tangencies. Sadly, Tangencies has been lost, and only a report of his work by Pappus of Alexandria is left. Since then, other mathematicians, such as Isaac Newton and Descartes, have been able to recreate his results and discover new ways of solving this interesting problem.
The problem usually has eight different solution circles that are tangent to the three given circles in the plane. For all eight solutions to exist, the given circles must not be tangent to each other, overlapping, or contained within one another.
Given three points, the problem only has one solution. In the cases of one line and two points; two lines and one point; and one circle and two points, the problem has two solutions. Four solutions exist for the cases of three lines; one circle, one line, and one point; and two circles and one point. There are eight solutions for the cases of two circles and one line; and one circle and two lines, in addition to the three circle problem.
# A More Mathematical Explanation
There are many different ways of solving the problem of Apollonius. The few that are easiest to understand include using an algebraic method or an inverse geometry method.
### Algebraic Method
This method only uses math up to the level of understanding quadratic equations. We will proceed by setting up a system of quadratic equations and solving for the radius, r, of the unknown circle.
We start by labeling the centers of the given circles $(x_1,y_1)$, $(x_2,y_2)$, and $(x_3,y_3)$. We will call the center of the unknown circle $(x,y)$. $r_1$, $r_2$, and $r_3$ are the radii of the given circles.
From this we are able to write our equations:
• $(x - x_1)^2 + (y - y_1)^2 = (r \pm r_1)^2$
• $(x - x_2)^2 + (y - y_2)^2 = (r \pm r_2)^2$
• $(x - x_3)^2 + (y - y_3)^2 = (r \pm r_3)^2$
Next we are able to expand each of the equations to see better how they can relate to each other.
Expanding gives us:
• $(x^2+y^2-r^2)-2xx_1-2yy_1\pm 2rr_1+(x_1^2+y_1^2-r_1^2)=0$
• $(x^2+y^2-r^2)-2xx_2-2yy_2\pm 2rr_2+(x_2^2+y_2^2-r_2^2)=0$
• $(x^2+y^2-r^2)-2xx_3-2yy_3\pm 2rr_3+(x_3^2+y_3^2-r_3^2)=0$
We can now look at the equations and see how we can subtract them from each other. So we will take the second and third equation minus the first equation.
Second minus first gives us:
• $2(x_1-x_2)x+2(y_1-y_2)y+2(\pm r_1 \pm r_2)r=(x_1^2+y_1^2-r_1^2)-(x_2^2+y_2^2-r_2^2)$
Third minus first gives us:
• $2(x_1-x_3)x+2(y_1-y_3)y+2(\pm r_1 \pm r_3)r=(x_1^2+y_1^2-r_1^2)-(x_3^2+y_3^2-r_3^2)$
For the sake of simplicity, we'll define some new variables. Let
$a_2=2(x_1-x_2)$; $b_2=2(y_1-y_2)$ ; $c_2=2(\pm r_1 \pm r_2)$ ; $d_2=(x_1^2+y_1^2-r_1^2)-(x_2^2+y_2^2-r_2^2)$
$a_3=2(x_1-x_3)$; $b_3=2(y_1-y_3)$; $c_3=2(\pm r_1 \pm r_3)$; $d_3=(x_1^2+y_1^2-r_1^2)-(x_3^2+y_3^2-r_3^2)$
Now our two equations can be written as
$a_2 x+b_2 y +c_2 r=d_2$
$a_3 x+b_3 y +c_3 r=d_3$
Since this is a simple linear system of equations, we can solve it for x and y in terms of r.
Solving the first equation for x:
$a_2x=d_2-c_2r-b_2y \rightarrow x=\frac{d_2-c_2 r -b_2 y}{a_2}$
Substituting that in to the second equation allows us to find y in terms of known values and r:
$a_3\left(\frac{d_2-c_2 r -b_2 y}{a_2} \right)+b_3 y +c_3 r =d_3$
$a_3 d_2-a_3c_2 r -a_3 b_2 y+a_2 b_3 y +a_2 c_3 r =d_3 a_2$
$y(a_2 b_3-a_3 b_2)=a_2 d_3-a_3 d_2 +(a_3 c_2 -a_2 c_3)r$
• $y=\frac{a_2 d_3-a_3 d_2 +(a_3 c_2 -a_2 c_3)r}{a_2 b_3-a_3 b_2}$.
Rather than substituting back in to find x, it is actually simpler to go through the same process we used to find y to get x in terms of known values and r.
First, we solve the first equation for y.
$b_2y=d_2-c_2r-a_2x \rightarrow y=\frac{d_2-c_2 r -a_2 x}{b_2}$
Plugging into the second equation gives us
$a_3 x+b_3\left(\frac{d_2-c_2 r -a_2 x}{b_2}\right) +c_3 r =d_3$
$a_3 b_2 x+b_3 d_2 -b_3 c_2 r-a_2 b_3 x +b_2 c_3 r=b_2 d_3$
$x(a_3 b_2-a_2 b_3)=b_2 d_3-b_3 d_2 +(b_3 c_2-b_2 c_3) r$
• $x=\frac{b_2 d_3-b_3 d_2 +(b_3 c_2-b_2 c_3) r}{a_3 b_2-a_2 b_3}$.
With $x=\frac{b_2 d_3-b_3 d_2 +(b_3 c_2-b_2 c_3) r}{a_3 b_2-a_2 b_3}$ and $y=\frac{a_2 d_3-a_3 d_2 +(a_3 c_2 -a_2 c_3)r}{a_2 b_3-a_3 b_2}$, we plug in values for the a's, b's and c's (which we get from the original information about the centers and radii of the three circles) to calculate x and y. Using our very first equations for the circles, we can then solve for r.
Let's pick three circles and find the mutually tangent ones. Let's choose some coordinates and radii for our three circles.
Take $(x_1,y_1,r_1)=(0,3,2)$, $(x_2,y_2,r_2)=(-2,-2,1)$ and $(x_3,y_3,r_3)=(3,-3,3)$. These three circles are shown below.
Now we want to calculate the a's, b's, and d's first.
$a_2=2(x_1-x_2)=2(0--2)=4$
$a_3=2(x_1-x_3)=2(0-3)=-6$
$b_2=2(y_1-y_2)=2(3--2)=10$
$b_3=2(y_1-y_3)=2(3--3)=12$
$d_2=(x_1^2+y_1^2-r_1^2)-(x_2^2+y_2^2-r_2^2) =(0^2+3^2-2^2)-(2^2+2^2-1^2)=(9-4)-(4+4-1)=-2$
$d_3=(x_1^2+y_1^2-r_1^2)-(x_3^2+y_3^2-r_3^2) =(0^2+3^2-2^2)-(3^2+3^2-3^2)=(9-4)-(9+9-9)=-4$
Calculating the c terms requires a bit more thought since there are the $\pm$ signs. The choice of these signs determines which circle we are solving for. We simply must be consistent in all of our applications of signs for a given r. For the first example, let's simply take all of the plus signs. Then
$c_2=2(r_1+r_2)=2(2+1)=6$
$c_3=2(r_1+r_3)=2(2+3)=10$.
Now we can calculate x and y for this first circle.
$x=\frac{b_2 d_3-b_3 d_2 +(b_3 c_2-b_2 c_3) r}{a_3 b_2-a_2 b_3}=\frac{(10)(-4)-(12)(-2)+(12(6)-10(10))r}{-6(10)-4(12)}=\frac{-16-(28)r}{-108}$
$x=\frac{1}{27}(4+7r)$.
$y=\frac{a_2 d_3-a_3 d_2 +(a_3 c_2 -a_2 c_3)r}{a_2 b_3-a_3 b_2}=\frac{(4)(-4)-(-6)(-2)+ (-6(6)-4(10))r}{12(4)-(10)(-6)}=\frac{-28-(76)r}{108}$
$y=\frac{-7-19r}{27}$.
Now we can return to one of our first equations to find r.
$(x-x_1)^2+(y-y_1)^2=(r+r_1)^2$
Note that here the sign of $r_1$ is positive. That is because we took the positive sign when we solved for the c values. Plugging in values, we get
$\left(\frac{1}{27}(4+7r)\right)^2+\left(\frac{-7-19r}{27}-3 \right)^2=(r+2)^2$
This equation is quadratic in r, so it can be solved using the quadratic formula (or a graphing calculator, if you prefer). When the dust clears, we get $r\approx 4.729$ (the other value that comes out of the quadratic formula does not work when plotted).
Now we can plot this circle with center $\left(\frac{1}{27}(4+7(4.729)),\frac{-7-19(4.729)}{27} \right)$. It is shown below in red, with the original circles in black.
We can see that it is indeed tangent to the three original circles!
This process can be repeated choosing different signs for the different r values in the c coefficients to find the other seven circles.
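The whole procedure can be scripted. The following rough Python sketch (an added illustration using sympy, following the article's formulas and sign convention literally) reproduces the worked example above and loops over the eight sign choices; solutions with non-real or non-positive radius for a given sign pattern are skipped, and no claim is made here that every sign pattern yields a valid tangent circle:

```python
import itertools
import sympy as sp

# Centres and radii of the three given circles, as in the worked example above.
(x1, y1, r1), (x2, y2, r2), (x3, y3, r3) = (0, 3, 2), (-2, -2, 1), (3, -3, 3)

a2, b2 = 2*(x1 - x2), 2*(y1 - y2)
a3, b3 = 2*(x1 - x3), 2*(y1 - y3)
d2 = (x1**2 + y1**2 - r1**2) - (x2**2 + y2**2 - r2**2)
d3 = (x1**2 + y1**2 - r1**2) - (x3**2 + y3**2 - r3**2)

r = sp.symbols('r')

for s1, s2, s3 in itertools.product([1, -1], repeat=3):
    c2 = 2*(s1*r1 + s2*r2)
    c3 = 2*(s1*r1 + s3*r3)
    # x and y in terms of r, using the formulas derived above
    x = (b2*d3 - b3*d2 + (b3*c2 - b2*c3)*r) / (a3*b2 - a2*b3)
    y = (a2*d3 - a3*d2 + (a3*c2 - a2*c3)*r) / (a2*b3 - a3*b2)
    eq = sp.Eq((x - x1)**2 + (y - y1)**2, (r + s1*r1)**2)
    for sol in sp.solve(eq, r):
        if sol.is_real and sol > 0:
            print((s1, s2, s3), sp.N(sol, 5))   # the (1, 1, 1) case reproduces r ~ 4.729
```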
## Apollonian Gasket
The Apollonian gasket is an example of one of the earliest studied fractals and was first constructed by Gottfried Leibniz. It can be constructed by solving the problem of Apollonius iteratively. It was a precursor to Sierpinski's Triangle, and in a special case, it forms Ford Circles.
Constructing the gasket begins with three mutually tangent circles. By solving this case of the problem of Apollonius we know that there are two other circles that are tangent to the three given circles. We now have five circles from which to start again.
Repeat the process with two of the original circles and one of the newly generated circles. Again, by solving Apollonius' problem we can find two circles that are tangent to this new set of three circles, although we already know one of the two solutions for this set: it is the remaining one of the three circles that we started with.
Repeating this process over and over again with each set of three mutually tangent circles will create the Apollonian gasket.
## Interactive Applet
The original page embeds an interactive applet (https://www.cs.drexel.edu/~rlw82/Canvas/ResizeCircles/) in which you can drag the circles around to have solutions drawn, and drag the red circles to resize them.
## References
Math Pages, Apollonius' Tangency Problem
MathWorld, Apollonius' Problem
Wikibooks , Apollonian fractals
## Mathematica Programs
Anna created several programs that solve the problem of Apollonius in the case of three non-tangent, non-intersecting circles that the user inputs. The programs were written in Mathematica 7, but are likely compatible with Mathematica 5 and/or 6.
The second program allows the user to choose which solutions to plot in groups of two. Click here to download this program
http://mathoverflow.net/questions/49054/rainbow-matchings-in-random-graphs
## Rainbow matchings (in random graphs)
Suppose we have an $(n,n)$-bipartite graph with edges colored with $k$ colors. Is anything known about the existence of rainbow matchings (i.e. a matching that uses each color exactly once, for $k=n$) for a random bipartite graph (e.g. that for $k$ colors and more than $f(k,n)$ edges we get a rainbow matching with $p \rightarrow 1$)?
In the noncolored case, Hall's theorem makes proving this kind of result relatively simple, since we are interested in the non-existence of a "no matching possible" witness (i.e. a subset that violates Hall's condition), and we can use the union bound to bound the probability from above (for $A_k$ = "the k-th subset is a witness", bound $\mathbb{P}(\cup A_k)$). However, there is no simple condition of this kind equivalent to the existence of a rainbow matching.
-
Where does the colouring come from? I guess every edge is coloured uniformly at random? – Andrew D. King Dec 11 2010 at 18:45
Yes, the color of each edge is chosen uniformly at random. – Marcin Kotowski Dec 11 2010 at 19:07
Do you mean that $k=n$? (otherwise I do not see how to interpret the "uses each color exactly once" requirement). – fedja Dec 12 2010 at 3:24
@fejda I took that to mean it doesn't need to be a perfect matching. Consider the formulation of this as a question on the existence of an independent systems of representatives. Construct a graph with vertex set $(V_1, \ldots, V_k)$ corresponding to edges of colour $i$. Now make vertices $v_i \in V_i$ and $v_j \in V_j$ adjacent precisely if the corresponding edges share an endpoint in the bipartite graph. What you want is a matching hitting all the colours, i.e. a stable set intersecting each $V_i$ for $1 \le i \le k$. – Andrew D. King Dec 12 2010 at 3:50
I left & deleted a comment due to confusion about reading the problem statement. It was a little confusing for me but I eventually understood you are picking a $f(k, n)$-edge subgraph of $K_{n, n}$ uniformly at random, and then assigning each edge one of $k$ colours uniformly at random; you want to compute $f$ so there is a $k$-edge rainbow matching w.h.p. Finally, you are considering only the asymptotics of $n \to \infty$, since otherwise "w.h.p" makes no sense. – Dave Pritchard Dec 12 2010 at 10:34
## 3 Answers
Isn't this very much related to the problem of a transversal in a Latin square? Suppose we have an $(n,n)$ bipartite graph with $n$ edge colors, such that every vertex has one edge of each color. This is equivalent to an $n\times n$ Latin square. A rainbow matching is a transversal of the Latin square. There is a conjecture (due to Ryser) that every Latin square with $n$ odd has a transversal, that is, a perfect rainbow matching. For even $n$, the conjecture (due to Brualdi) is that it has a partial transversal of length $n-1$ (i.e., a rainbow matching of cardinality $n-1$). To indulge in a little self-promotion, the best known result is that there exists a partial transversal of length $n -O(\log^2 n)$. There are also a number of results about transversals and partial transversals in near-Latin squares, which will probably be relevant to rainbow matching questions.
I guess the relevant Latin square question would be: does a random Latin square have a transversal with high probability? I know extensive calculations have been done which suggest that the answer is yes. I don't know whether anybody has proven this.
-
Note that the "each vertex has one edge of each color" is not part of the original problem, so the "relevant Latin square question" is not a re-statement of what was discussed earlier. (But, of course similar methods could in principle apply, and they are indeed very much related.) – Dave Pritchard Dec 13 2010 at 11:10
@Dave: Thanks for pointing that out. I really should have put it in my answer. – Peter Shor Dec 13 2010 at 13:56
(This is not an answer to your question.)
Let $M$ be the $n\times n$ matrix with coefficients in $k[x_1, \ldots, x_n]$, whose $ij$-th entry is the variable $x_{color(i,j)}$. Then it is sufficient to prove that with high probability $det(M)$ has nonzero coefficient in $x_1\cdot\ldots\cdot x_n$ for $n$ sufficiently large (i.e., $det(M)$ is nonzero in $k[x_1, \ldots, x_n]/(x_1^2, \ldots, x_n^2)$). So maybe we could try to look at the partial differentials of $\det M$ and find a suitable witness this way? I have no idea if this can be useful.
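As an illustration of this criterion (my own sketch, not code from the thread), one can build the matrix symbolically with sympy and read off the coefficient of $x_1\cdots x_n$ in the determinant; a nonzero coefficient certifies a rainbow perfect matching, though the converse may fail because terms of opposite sign can cancel.

```python
import random
import sympy as sp

def rainbow_determinant_coefficient(colour):
    """colour[i][j] in {0, ..., n-1} is the colour of edge (i, j).
    Returns the coefficient of x_0*x_1*...*x_{n-1} in det(M), where
    M[i][j] = x_{colour[i][j]}. Nonzero => a rainbow perfect matching exists."""
    n = len(colour)
    xs = sp.symbols(f"x0:{n}")
    M = sp.Matrix(n, n, lambda i, j: xs[colour[i][j]])
    det = sp.expand(M.det())
    return sp.Poly(det, *xs).coeff_monomial(sp.Mul(*xs))

n = 4
colour = [[random.randrange(n) for _ in range(n)] for _ in range(n)]
print(rainbow_determinant_coefficient(colour))
```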
-
I think that you need to formulate a more specific question. For fixed $k$, the $n$ is fairly irrelevant. Let $g(n)$ be a non-decreasing function which increases to infinity but exceedingly slowly (such as the inverse Ackermann function); then $f(n,k)=g(n)$ yields $k$ disjoint unequally colored edges with probability going to 1 (although for $k=6$, $n$ will have to be unspeakably huge before $g(n)>5$).
At the other extreme, let $p_n$ be the probability that a random edge coloring of $K_{nn}$ (with n colors) yields a rainbow matching (with $n$ edges). I would have guessed that $\lim_{n \rightarrow \infty}p_n=1$, and maybe that is true, but the small numbers point in the other direction. $p_1=1$, $p_2=\frac{7}{8}=87.5\%$ and $p_3=\frac{5090}{6561}=77.58\%$.
This still leaves a large middle ground with open questions (and even the $k=n$ case is not settled by what I wrote). Later: indeed, it appears (see the comments below) that right after $n=3$ it moves decisively towards $1$.
-
Actually, with the first part, you can deduce $f(n, k) = \Theta(k \log k)$ as $n \to \infty$ for any fixed $k$ since the edges become disjoint w.h.p, and then it reduces to the coupon collector problem with $k$ coupons. – Dave Pritchard Dec 12 2010 at 10:26
The second question, whether $f(n, n)$ exists, is quite interesting! (I.e., whether $K_{n, n}$ with a random $n$-colouring a.a.s has a rainbow perfect matching.) It looks to me that the expected number of rainbow perfect matchings is $(n!)^2/n^n$ which is huge; nonetheless the first thing I tried to prove that 1 exists a.a.s, the second moment method, doesn't seem to help. – Dave Pritchard Dec 12 2010 at 11:07
@Dave good call. For $n=4$ it seems (from random trials) that one gets a rainbow perfect matching somewhat more than 99.55% (but probably less than 99.56%) of the time. – Aaron Meyerowitz Dec 12 2010 at 15:31
@Dave and for the first part, I agree that it is the coupon collector problem so $k \log k$ is relevant. One would need $f(n,k)=k\log k +\alpha_{n}k$ with $\alpha_n$ going to infinity as $n$ does. – Aaron Meyerowitz Dec 12 2010 at 15:36
Thanks for the clarification; I missed earlier, but see now, that without the $\omega(n)$ term, the probability of a bad set of coupons is a fixed positive number depending on $k$ but not $n$, so it's really needed. The calculation is great too. – Dave Pritchard Dec 13 2010 at 11:12
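For readers who want to experiment, here is a quick Monte Carlo sketch (mine, not from the thread) for the model discussed in these comments, in which every edge of $K_{n,n}$ independently receives a uniformly random colour from $n$ colours. It estimates the probability of a rainbow perfect matching and the expected number of such matchings, the latter of which should be close to $(n!)^2/n^n$.

```python
import random
from itertools import permutations
from math import factorial

def count_rainbow_pms(colour, n):
    """Number of perfect matchings of K_{n,n} that use n distinct colours."""
    return sum(1 for p in permutations(range(n))
               if len({colour[i][p[i]] for i in range(n)}) == n)

def estimate(n, trials=5000):
    hits = total = 0
    for _ in range(trials):
        colour = [[random.randrange(n) for _ in range(n)] for _ in range(n)]
        c = count_rainbow_pms(colour, n)
        hits += c > 0
        total += c
    return hits / trials, total / trials

for n in (3, 4, 5):
    p_hat, mean_count = estimate(n)
    print(n, p_hat, mean_count, factorial(n) ** 2 / n ** n)  # last two should be close
```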
http://math.stackexchange.com/questions/5420/iterative-refinement-algorithm-for-computing-expx-with-arbitrary-precision?answertab=oldest
# Iterative refinement algorithm for computing exp(x) with arbitrary precision
I'm working on a multiple-precision library. I'd like to make it possible for users to ask for higher precision answers for results already computed at a fixed precision. My $\mathrm{sqrt}(x)$ can pick up where it left off, even if $x$ changes a bit, because it uses Newton-Raphson. But $\exp(x)$ computed using a Maclaurin series or continued fraction has to be computed from scratch.
Is there an iterative refinement (i.e. Newton-Raphson, gradient descent) method for computing $\exp(x)$ that uses only arithmetic and integer roots?
(I know Newton-Raphson can solve $\log(y)-x=0$ to compute $\exp(x)$. I am specifically not asking for that. Newton-Raphson can also solve $\exp(y)-x=0$ to compute $\log(x)$. Note that each requires the other. I have neither right now as an arbitrary-precision function. I have arithmetic, integer roots, and equality/inequality tests.)
-
I don't believe there's a refinement scheme for $\exp(z)$. The best I've seen, assuming you're using scaling+squaring, is to have to redo series/CF computations on a version of $z\cdot2^{-n}$ with greater precision, and then square $n$ times. – J. M. Sep 25 '10 at 6:01
Not sure if it helps, I know there is an algorithm to compute kth digit of pi (in base 16). Perhaps something like that exists for exp(x) in base 10/whatever base you are working in. Spigot algorithm is the name I believe. (not sure) – Aryabhata Sep 25 '10 at 13:41
There is a spigot algorithm for $e$, but I don't see how it generalizes to $\exp(x)$. – J. M. Sep 25 '10 at 23:32
## 1 Answer
There is an algorithm for computing $\log_2(x)$ that might suit you. Combine that with the spigot algorithm for $e$, and you can get $\ln(x)$. From there, you can use Newton-Raphson to get $\exp(x)$.
I don't know if this roundabout way ends up doing any better than just recomputing.
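To make the last step concrete, here is a hedged sketch (not from the answer) of the Newton-Raphson refinement for $\exp(x)$ once an arbitrary-precision $\ln$ is available; mpmath's `log` stands in for a routine built from a $\log_2$ algorithm and a spigot value of $e$. Each iteration of $y \leftarrow y\,(1 + x - \ln y)$ roughly doubles the number of correct digits, so a previously computed low-precision value can be refined rather than recomputed from scratch.

```python
from mpmath import mp, mpf, log

def refine_exp(x, y0, steps=5):
    """Refine y0 ~ exp(x) by Newton's method on f(y) = ln(y) - x:
    y <- y - f(y)/f'(y) = y*(1 + x - ln(y)). Convergence is quadratic,
    so each step roughly doubles the number of correct digits."""
    y = mpf(y0)
    for _ in range(steps):
        y = y * (1 + x - log(y))
    return y

mp.dps = 50                  # now ask for 50 decimal digits
x = mpf("1.25")
rough = mpf("3.49")          # pretend this came from an earlier low-precision run
print(refine_exp(x, rough))  # converges to exp(1.25) = 3.49034295746...
```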
-
http://mathoverflow.net/questions/34967/basis-for-l-inftyr/35004
## Basis for L_infty(R)
Let $V$ be the Banach space of bounded sequences of reals with the sup norm. Does there exists a subset $B$ of $V$ such that
• Linear Independence: For all functions $c$ in $\mathbb{R}^B$, if $\sum_{b \in B} c(b) \cdot b = 0$, then $c$ is identically zero.
• Spanning Set: For all vectors $v$ in $V$, there exists a function $c$ in $\mathbb{R}^B$ such that $\sum_{b \in B} c(b) \cdot b = v$.
If so, is an explicit such $B$ known?
-
The Banach space V with which you start would seem to be $\ell^\infty_{\mathbb R}$ and not $L^\infty({\mathbb R})$ -- as it happens, the two are (non-isometrically) isomorphic as Banach spaces, but this is not trivial and the isomorphism is slightly mysterious. Could you clarify whether you had one or the other in mind when you asked this question? – Yemon Choi Aug 9 2010 at 4:50
Why does the set of $e_j=(0,\ldots,0,1,0,\ldots)$ not work? – Leandro Aug 9 2010 at 4:51
Yemon, I had the sequence space in mind. Leandro, that's not a spanning set. – Ricky Demer Aug 9 2010 at 4:55
Are you requiring these functions to be zero for all but finitely many $b$? If so, this is just the statement that every vector space has a basis. I think you can't find an explicit basis. – Kevin Ventullo Aug 9 2010 at 5:24
No, and that's why I made my question more explicit than the topic. – Ricky Demer Aug 9 2010 at 5:38
## 2 Answers
The space $\ell^\infty_R$ does not have even an M-basis; i.e., a biorthogonal set $(x_t,x_t^*)$ such that the span of the $x_t$ is dense and the $x_t^*$ are total (Lindenstrauss, late 1960s IIRC), so it has nothing like a Schauder basis. Later I proved [PAMS 26. no. 3 467-468 (1970)] that $\ell^\infty$ also does not have an M-basis. However, each of these spaces does have a biorthogonal set $(x_t,x_t^*)$ such that the span of the $x_t$ is dense. This is in my paper with W.J. Davis [Studia Math. 45 173-179 (1973)].
-
What does "the $x_t^*$ are total" mean? – Ricky Demer Aug 9 2010 at 14:53
Separate points. – Bill Johnson Aug 9 2010 at 15:03
It exists for any linear space (the Banach structure is not essential here); it is called a Hamel basis. No explicit construction (without use of the Axiom of Choice) exists (and, I guess, this may be provable in some sense - that the existence of a Hamel basis in every linear space implies the axiom of choice or something similar).
-
No, a Hamel base is what you would get if you required the cs to be zero for all but finitely many b. – Ricky Demer Aug 9 2010 at 5:59
If $c$ is not assumed to be finitely supported, then how do you understand this sum? Should it be absolutely convergent series in almost every point? – Fedor Petrov Aug 9 2010 at 6:09
See en.wikipedia.org/wiki/… – Ricky Demer Aug 9 2010 at 6:12
So, $B$ is not countable, but $c$ is countably supported and the convergence is unconditional in $L_{\infty}$ norm. Right? – Fedor Petrov Aug 9 2010 at 6:20
Ricky: regarding the Wikipedia page, my mistake. I still think the question might be a little clearer if you said explicitly that you were talking about unconditional convergence of the "series" - especially to indicate that you are talking about something a little different from a Schauder basis – Yemon Choi Aug 9 2010 at 8:49
http://physics.stackexchange.com/questions/12309/electric-field-outside-a-capacitor/12316
# Electric field outside a capacitor
I know that the electric field outside of a capacitor is 0, and I know it is easy to calculate using Gauss's law. We create a cylindrical envelope that encloses the same amount of charge (of opposite signs) on each plate.
My question is: why can't I pick an envelope which includes only part of one of the plates? Gauss's law states, specifically, that I can pick any envelope I want.
Note,
I encountered this question a couple of years ago and I got an answer which I was not completely happy with.
-
The net field outside an actual capacitor can never be zero. There will always be a fringe field that is too frequently neglected in textbooks. Such idealizations are common, but are also dangerous in that they create misconceptions that are extremely difficult to correct later on. – user11266 Aug 18 '12 at 16:54
## 3 Answers
Outside two infinite parallel plates with opposite charge the electric field is zero, and that can be proved with Gauss's law using any possible Gaussian surface imaginable. However, it might be extremely hard to show if you don't choose the Gaussian surface in a smart way.
The usual way you'd show that the electric field outside an infinite parallel-plate capacitor is zero, is by using the fact (derived using Gauss's law) that the electric field above an infinite plate, lying in the $xy$-plane for example, is given by $$\vec{E}_1=\frac{\sigma}{2\epsilon_0}\hat{k}$$ where $\sigma$ is the surface charge density of the plate. If you now put another plate with opposite charge, i.e. opposite $\sigma$, some distance below or above the first one, then that contributes its own electric field, $$\vec{E}_2=-\frac{\sigma}{2\epsilon_0}\hat{k}$$ in the region above it. Since the electric field obeys the principle of superposition, the net electric field above both plates is zero. The same happens below both plates, while between the plates the electric field is constant and nonzero.
Your way of doing it is a little more tricky, but again gives the same answer. For example, if you choose the Gaussian surface to have an hourglass shape with different radii for the two sides, then indeed the net charge enclosed is not zero. However, when you calculate the total electric flux through that surface, you have to be careful to realize that there is nonzero electric field between the two plates, and therefore there is a nonzero flux through the part of the Gaussian surface that lies between the plates. That flux, of course, has to be accounted for. Assuming that you know the electric field inside the capacitor, $\vec{E}_\text{inside}$, you can do the integral $\oint\vec{E}_\text{inside}\cdot d\vec{A}$ for such a Gaussian surface (it's not that hard actually), and you find that the flux through the part of the surface that lies between the plates is exactly equal to $q_{\text{enclosed}}/\epsilon_0$. Thus, the net flux through the part of the Gaussian surface that lies outside the plates has to be zero, proving, after a little thought, that the electric field outside the capacitor is zero.
The final answer for $\vec{E}$ never depends on the Gaussian surface used, but the way to get to it always does. That's why the Gaussian surface has to be chosen in a smart way, i.e. in a way that makes the calculation of $\oint\vec{E}\cdot d\vec{A}$ easy.
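As a numerical complement (my own sketch, not part of the answer), one can approximate two finite plates by grids of point charges and compare the field between the plates with the field outside on the axis; for finite plates the exterior field does not vanish exactly, but it comes out much smaller than the interior field.

```python
import numpy as np

def plate_points(z, side=1.0, n=40):
    """Grid of n*n unit point charges modelling a square plate at height z."""
    xs = np.linspace(-side / 2, side / 2, n)
    X, Y = np.meshgrid(xs, xs)
    return np.column_stack([X.ravel(), Y.ravel(), np.full(X.size, z)])

def Ez(point, charges, qs):
    """z-component of the field at `point` from point charges (Coulomb constant = 1)."""
    r = point - charges
    d3 = np.linalg.norm(r, axis=1) ** 3
    return np.sum(qs * r[:, 2] / d3)

d = 0.05                                   # plate separation, small compared with the side
plus, minus = plate_points(+d / 2), plate_points(-d / 2)
charges = np.vstack([plus, minus])
qs = np.concatenate([np.ones(len(plus)), -np.ones(len(minus))])

inside = Ez(np.array([0.0, 0.0, 0.0]), charges, qs)
outside = Ez(np.array([0.0, 0.0, 0.5]), charges, qs)
print(inside, outside, abs(outside / inside))   # |outside| is a small fraction of |inside|
```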
-
You can pick such a box. However, this box alone does not allow you to determine the electric field. Gauss' law states that the total flux is equal to the charge enclosed times $4\pi$. It doesn't tell you where that flux leaves the box. There could be a lot here, none there, a negative amount at a third place, etc.
In some situations, symmetry allows us to use Gauss' law to find the electric field. A Gaussian box enclosing both plates of a parallel-plate capacitor is symmetric with respect to a reflection through a plane through the middle of the capacitor. It is also symmetric with respect to rotations about an axis perpendicular to the plates (ignoring edge effects). The rotational symmetry lets us say that the electric field must point along the axis perpendicular to the plates. The reflection symmetry tells us that the electric field must be the same through both sides of the box parallel to the plates. That electric field must be zero because the box has no net charge in it.
A box around a single plate does not have the same symmetries. It still has the rotational symmetry, so the electric field must be perpendicular to the plates. However, we don't have the reflection symmetry any more, so the electric field can be different strengths on different sides of the box.
We know there must be net flux through the box because the box encloses net charge. Using the fact that the electric field is zero outside the capacitor, we can deduce that the flux through a box enclosing only one plate passes entirely through the side of the box that lies inside the capacitor. Hence, the electric field inside the capacitor must be $4\pi\rho$ (with $\rho$ the surface charge density, in Gaussian units).
-
Why do we know that the electric field is 0? The intuition is clear but is there something more formal than that? – Yotam Jul 15 '11 at 19:55
Unless it's a capacitor with infinite plates, the field outside isn't exactly zero. – David Zaslavsky♦ Jul 15 '11 at 20:56
It's not only nonzero outside; the path integral over an outer curve is exactly equal to the integral over an inner line but of the opposite sign, so the total integral (over a closed path) is equal to zero. – Vladimir Kalitvianski Jul 15 '11 at 21:04
@David Right. We're neglecting edge effects. @Yotam We know it's zero because we used Gauss' law and the symmetry argument. If you want you could integrate Coulomb's law, but it wouldn't teach you much new. @Vladimir Good point. – Mark Eichenlaub Jul 15 '11 at 22:46
@Mark: yeah, I was responding to Yotam just to point out that for a real (finite) capacitor, there's no formal way to show that the external field is zero, because it isn't. – David Zaslavsky♦ Jul 15 '11 at 23:08
The whole premise of this question is wrong. No application of Gauss's law can prove that the electric field outside a capacitor is zero, because it isn't necessarily zero. Here is another situation that satisfies Gauss's law
````-----> - + ----->
-----> - + ----->
-----> - + ----->
-----> - + ----->
-----> - + ----->
````
The arrows are electric field lines extending to infinity, while the +'s and -'s are two uniform 2D sheets of charge. You can check that Gauss's law is satisfied for every possible Gaussian surface.
When people say "the electric field is zero outside a capacitor", they are assuming there is no other cause of electric fields besides the capacitor itself. In the example above, if you took the "capacitor" away, there would be a uniform electric field everywhere in space. Why is that field there? Who knows, but it's certainly not because of the capacitor! In other words, they are talking about the electric field created by the capacitor itself.
The proper analysis of a capacitor ASSUMES that the field is zero at infinity and then uses different gaussian surfaces to prove that the field remains zero everywhere outside the capacitor.
-
http://physics.stackexchange.com/questions/3228/is-there-any-thing-other-than-time-that-triggers-a-radioactive-atom-to-decay
# Is there any thing other than time that “triggers” a radioactive atom to decay?
Say you have a vial of tritium and monitor their atomic decay with a geiger counter. How does an atom "know" when it's time to decay? It seems odd that all the tritium atoms are identical except with respect to their time of decay.
-
Does it have anything to do with langevin noise forces ? – New Horizon Jun 3 '11 at 18:23
I think you would be interested in the first part of this Feyman lecture where he explains the idea that nature is not deterministic (with typical Feynman bluntness: "if you don't like it, go find another universe.") - vega.org.uk/video/programme/45 – Michael Edenfield Aug 7 '12 at 20:41
## 6 Answers
Actually, all the atoms are identical. The time at which an atom is observed to decay is not an intrinsic property of that atom, but rather an effect of quantum mechanics. For any given time bin, there is some finite amplitude for a transition to a decayed state, which in turn corresponds to a finite probability, since the particle(s) emitted will escape from the system once such a state is reached. This also means that the process is irreversible, due to the open nature of the system. This works in the same way as atomic transitions when atoms emit photons (see the relevant Wikipedia page).
For each undecayed atom, in each time bin there is a fixed probability $p$ of transitioning to the decayed state (independent of which bin, and depending only on the bin size). Thus between time $t$ and $t+\Delta t$ there is a fixed probability $\Delta p = \lambda \Delta t$ of transitioning to the decayed state for any given atom. So if we have $N(t)$ undecayed nuclei at time $t$, then at time $t+\Delta t$ we should have $N(t+\Delta t) = (1-\lambda\Delta t)N(t)$. Rearranging this and taking the limit $\Delta t \to 0$ we obtain $dN/dt = -\lambda N$. Solving this equation yields the total number of nuclei left undecayed at time $t$ as $N(t) = N(0) e^{-\lambda t}$.
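A tiny simulation (mine, not part of the answer) makes the point explicit: if every atom, in every small time step, has the same decay probability $\lambda\,\Delta t$, the exponential law emerges with no atom "knowing" anything about its decay time.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(n0=100_000, lam=0.1, dt=0.01, steps=2000):
    """Every surviving atom decays in each step with probability lam*dt."""
    n = n0
    for _ in range(steps):
        n -= rng.binomial(n, lam * dt)   # number of decays in this step
    return n

n_final = simulate()
t = 0.01 * 2000                                   # total simulated time
print(n_final, 100_000 * np.exp(-0.1 * t))        # simulated vs N(0) e^{-lambda t} ~ 13533.5
```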
Anyway, the point to take from all this is simply that the atoms are all identical and decay by a purely random process.
UPDATE: I forgot to mention that decay probability can be increased, for example via collision with another particle for the right energy, and this is exactly how fission based nuclear bombs work. Here though, again, there is nothing special about the particular atom decaying, and it is simply the particles involved in the collision that have the increased decay probability. (I must admit that I have pared this picture right down to the basics as otherwise it would need to be a far more technical discussion).
-
Thanks Joe. I realize that decay rate can be modeled by assuming a random decay process. But making that assumption doesn't really explain why the process is "random" in the first place. I guess Nature simply likes to play dice...and I need to get used to the odds! – BuckyBadger Jan 18 '11 at 11:04
Great answer, Joe. And yes, BuckyBadger, the fact that processes occur randomly and only the probabilities may be predicted from the inherent properties of the physical systems is one of the main lessons of quantum mechanics. There are no "hidden variables" that would secretly decide when a particle should decay. A wave function of a neutron evolves into a wave function that also contains a "proton+electron+antineutrino" state multiplied by complex number $c$. Whenever you look whether the neutron is still there, the probability is $|c|^2$ that it has already decayed. – Luboš Motl Jan 18 '11 at 11:08
I'd suggest being careful with your language here in two ways. 1) "Intrinsic property" risk sounding like there is a number store in the nucleus (i.e. a hidden variable) but experiments on Bell's inequality show clear that there are no local hidden variables; 2) you don't really "increase the probability that the system will change with strong fields or neutron interactions, etc... rather you change the system to a different state that has a shorter halflife. Just semantic nits. – dmckee♦ Jan 18 '11 at 16:46
Not sure I agree with your fission argument. Absorbing a neutron changes the isotope into another one, which is unstable to fission, but that is different from increasing the probability of say alpha decay. B.T.W we could build a classical box with a small hole in it, put a gas molecule into the box, and the box in a vacuum, and the molecule escaping would be a probablistic event, so QM isn't even needed to get such an effect. – Omega Centauri Jan 18 '11 at 19:21
@dmckee: I said it was -not- an intrinsic property, and I had hidden variables specifically in mind. As regards the increased probability, your point is exactly the reason for the disclaimer immediately following it. To explain it properly would require quite a technical discussion about nuclear structure and interaction cross-sections. – Joe Fitzsimons Jan 18 '11 at 19:38
You can trigger decay of certain nuclei with gamma rays, just like you can stimulate emission of photons from excited atoms with incoming radiation. You can even make a bomb if that is your kind of thing. Induced emission
-
I think if even there were a "trigger variable" for each atom, one would need to randomize it anyway to describe an ensemble of decaying atoms.
On the other hand, in case of atoms there is a stimulated emission - with help of photons coherent with the "future" photon. This shows that the "environment" is somewhat important. As soon as the environment is complicated and is hard to control, one can loosely think that the random character of decays is due to random character of the "triggering QM environment".
-
Yes, thanks Vladimir. And Joe mentioned above that "stimulated radioactive decay" is also possible! – BuckyBadger Jan 18 '11 at 13:08
It seems to be widely assumed that the observed click in the geiger counter corresponds to the instantaneous decay of a particular tritium atom. I don't know if I'm just pointing out the obvious, but I'm quite sure this correspondence has never been explicitly demonstrated. Quantum mechanics tells us there is a certain flux of electrons emanating from the vial of tritium; that there is a certain frequency of clicks in a geiger counter; and that if analyzed, the sample of tritium may be separated into two streams, one of which turns out to be helium. These are three different phenomena, none of which can be easily correlated with any of the others.
To put it plainly, all we can say about your sample of tritium is that the atoms are in a superposition of states. When they are observed individually, they are found to be in one or the other atomic state - tritium or He3. There is no experiment I know of where we can identify the moment when a particular tritium atom changed state.
-
Thanks, Helder. It's too bad that this website doesn't promote more discussion. I feel like after the first day or two, the topics just disappear from the radar. – Marty Green Jan 8 at 20:12
There are more reasons than simple random time, as we can see in this image.
(M. Yamamoto et al., Journal of Environmental Radioactivity, 2006, 86, 110-131) (from Wikipedia)
EDIT add 1
This data is based in sediments. As sediments trace the environment conditions, they act as a proxy.
The next image shows the variation of the decay rate of the radioactive isotope 32Si, and is found in
"Evidence for Correlations Between Nuclear Decay Rates and Earth-Sun Distance" by Jenkins et al., 2008.
Later the Earth-Sun distance connection was dismissed, but the data are still valid.
Similar data about neutrino and WIMP seasonal variance can be found.
EDIT add 1 end
EDIT add 2
a related paper, in the opposite direction can be found here
"Evidence against correlations between nuclear decay rates and Earth-Sun distance"
"We have reexamined our previously published data .. We find no evidence for such correlations"
They used ratios, and this null result is to be expected if both the samples and the 'presumed' reference are affected by the same nuclear processes. IMO, in this situation a counting procedure is better than using ratios.
If the Jenkins proposal were correct, it is very unlikely that the alpha, beta-minus, beta-plus, and electron-capture decays of all radioactive isotopes would be affected in quantitatively the same way. Thus the ratios of counts observed from two different isotopes would also be expected to show annual variations.
In order to minimize the influence of any changes in detector and/or electronics performance, we analyzed ratios of gamma-ray peak areas from the isotope of interest and those from a reference isotope whose half life was well known.
EDIT add 2 end
There are seasonal variations (diurnal and annual) in radioactive processes:
1. Radon, Lead, etc ...
2. Neutron production on reactors (at lab and in space missions RTGs)
3. Neutrino
4. DM - WIMPs
I don't know what the experts say about the actual explanations. I think that they don't know the whys. I'm chasing data labeled with the time, date, and geographical location of the 'crime'.
Can someone help, please?
Are the atoms of the same isotope equal?
I can not agree with the most voted answer, by Joe:
"Actually, all the atoms are identical. The time at which it is observed to decay is not an intrinsic property of a given atom, but rather an effect of quantum mechanics."
Since when does quantum mechanics have effects? QM does not produce any effect; QM describes what we see at a statistical level.
We see that at a particular time moment, a particular atom decayed, and not the other. It has to be an intrinsic property of that particular atom that made it decay at that precise moment.
I do not know of any experiment that tried to measure how equal or distinct a group of similar atoms can be. The community hopes that they are identical. I'm skeptical about this issue and take nothing for granted, to the point that I would say 'they are different, one from the others'.
-
Think of the fermi exclusion principle. If there were individuality in the electrons, for example, it would be at some level assigned by a quantum number different for each one, and therefore there could be no fermi exclusion principle, something against experimental evidence. – anna v May 12 '11 at 11:56
@anna : yes I know. But an extremely tiny difference in mass can happen without trouble. (Really I'm not sure that such a difference exists, but as a test was not attempted, AFAIK, a skeptical person like me will have to do the devil's advocate work) – Helder Velez May 12 '11 at 13:08
The diagram from WP above is the variation of Pb210 concentration in a sediment core! No connection to some seasonal influence on half life! (Which would be 1 : 10 by the way in this example!) Do You misuse anything for Your crackpottery? – Georg May 12 '11 at 13:11
I dropped this sentence from the Answer: (I've theoretical reasons to think that they are more different than expected). I've added another graph about Si-32 and Ra-226 variability, direct measures I suspect. @Georg I'm presenting DATA, and asking for more data. Before this data was gathered by patient experimentalists it was a strong belief that such variability could not exist. Instead of calling me 'a crackpot', which I don't like, you can assume a much more interesting position: find data to invalidate those studies, or the data that I'm trying to find, or candidate explanations. – Helder Velez May 12 '11 at 14:41
This answer is lacking some accuracy, but the idea in itself is not crackpot at all. Firstly, disregard the first graph, the second graph is the real evidence and sufficiently makes the argument. The thinking is that the distance from the sun, and thus the neutrino flux (which is known to reach the sample) is possibly correlated with a change in measured decay rate. This isn't some correction to "radioactive decay", but just another level of physical complexity. A reaction we THOUGHT was just decay is now decay+(infrequent interaction). Nothing controversial here. – AlanSE May 31 '11 at 3:30
Sir Isaac Newton struggled with exactly this question, in the context of optics, and the best he could come up with was a theory of superluminal waves associated with light particles. (The question Newton asked was essentially the same question as yours: If a light beam is 20% absorbed into a glass surface and 80% reflected, and if light is particulate, then how can one particle acting alone make a correct decision?) The whole business of selecting one from a possible set for no apparent reason can be used for quantum computing. It can effectively compute a "diagonalisation of a matrix". Let's look more closely at the meaning of random. Random is the limit of compressibility of a pattern, the removal of all predictability. The pattern as a whole has this character, so it is a feature of the set. A radioactive particle does not have the knowledge of which you speak; in fact it is missing information. It is forbidden to carry predictive knowledge. No information can be projected from any part of the sequence, present or future, to determine any part of the sequence. In this sense, every event in the sequence has no extractable knowledge about its position in the sequence. It is a safe spy, unable to betray its fellows.
This does not even answer the question, but gives a better way to think about it. What we see as contrived about randomness should instead be seen as a natural default. Einstein protests that God does not play dice, but he is possibly complaining that if God does play dice, then that means God will always be withholding information.
-
http://www.timgittos.com/learning/mit-single-variable-calculus/week-3/
# MIT OCW Single Variable Calculus - Week 3
## Implicit differentiation
Implicit differentiation is a technique that allows you to differentiate functions that you may not even be able to write down explicitly.
\[
\frac{\delta}{\delta x}x^a = ax^{a-1}
\]
So far we’ve covered the case where $$a$$ is an integer: $$a = 0, \pm 1, \pm 2, \ldots$$. Now we’re going to consider the case where $$a$$ is a rational number: $$a = \frac{m}{n}$$, where m & n are integers.
Example 1
\[
\begin{align}
y &= x^{\frac{m}{n}} \qquad \text{(1)} \\
y^n &= x^m \qquad \text{(2)}
\end{align}
\]
Now, we apply $$\frac{\delta}{\delta x}$$ to equation (2). We don't apply it to equation (1) because, right now, we don't know how to differentiate it. We do know how to differentiate (2).
\[
\begin{align}
\frac{\delta}{\delta x} y^n &= \frac{\delta}{\delta x}x^m \\
\left( \frac{\delta}{\delta y}y^n \right) \frac{\delta y}{\delta x} &= mx^{m-1} \qquad \text{(by the chain rule)} \\
ny^{n-1}\frac{\delta y}{\delta x} &= mx^{m-1} \\
\frac{\delta y}{\delta x} &= \frac{m x^{m-1}}{n y^{n-1}} \\
&= \frac{m}{n} \frac{x^{m-1}}{(x^\frac{m}{n})^{n-1}} \\
&= a x^{m-1-(n-1)\frac{m}{n}} \\
&= a x^{m-1-m+\frac{m}{n}} \\
&= a x^{-1 + \frac{m}{n}} \\
&= a x^{a-1}
\end{align}
\]
NOTE: THE CHAIN RULE WILL CAUSE YOU ISSUES
Example 2
\[
x^2 + y^2 = 1
\]
This is defining $$y$$ as a function of $$x$$ implicitly. That means it can be rearranged to express $$y$$ in terms of $$x$$.
Solving for y:
\[
\begin{align}
y^2 &= 1 - x^2 \\
y &= \pm \sqrt{1 - x^2} \\
y &= (1-x^2)^{\frac{1}{2}} \qquad \text{considering positive branch only} \\
y' &= \frac{1}{2}(1 - x^2)^{-\frac{1}{2}}(-2x) \\
y' &= (1 - x^2)^{-\frac{1}{2}}(-x) \\
y' &= \frac{-x}{\sqrt{1 - x^2}}
\end{align}
\]
The above is the explicit differentiation.
Following is the implicit method:
\[
\begin{align}
\frac{\delta}{\delta x} (x^2 + y^2 &= 1) \\
2x + 2yy' &= 0 \qquad \text{(by the chain rule)} \\
y' &= -\frac{2x}{2y} \\
y' &= -\frac{x}{y} \\
\end{align}
\]
The results of the explicit and implicit methods are the same, and the implicit method is better because it does not force us to treat the positive and negative branches of $$y$$ separately.
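(A quick check of both computations with sympy; the library is my addition, not part of the lecture.)

```python
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')(x)

# Implicit: differentiate x^2 + y^2 - 1 = 0 with respect to x and solve for y'
yprime = sp.solve(sp.diff(x**2 + y**2 - 1, x), sp.diff(y, x))[0]
print(yprime)                                             # -x/y(x)

# Explicit (positive branch): y = sqrt(1 - x^2)
explicit = sp.diff(sp.sqrt(1 - x**2), x)
print(sp.simplify(explicit - (-x / sp.sqrt(1 - x**2))))   # 0, so the two agree
```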
Example 3
Consider the equation $$y^4 + xy^2 – 2 = 0$$.
We can solve it explicitly (using the quadratic formula, with $$y^2$$ as the unknown):
\[
\begin{align}
y^2 &= \frac{-x \pm \sqrt{x^2 - 4(-2)}}{2} \\
y &= \pm \sqrt{\frac{-x \pm \sqrt{x^2+8}}{2}} \\
\end{align}
\]
Notice the 4 different roots of the equation (because it’s to the 4th power).
It’s very complex and time consuming.
Compare this to the implicit method:
\[
\begin{align}
\frac{\delta}{\delta x}(y^4 + xy^2 -2 = 0) \\
4y^3y' + y^2 + x(2yy') - 0 &= 0 \\
(4y^3 + 2xy)y' + y^2 &= 0 \\
(4y^3 + 2xy)y' &= -y^2 \\
y' &= \frac{-y^2}{4y^3 + 2xy}
\end{align}
\]
To solve $$y'$$ in terms of $$x$$, we'd need to put the result of the explicit method (so far) into the implicit method. The two go hand in hand.
Although the implicit method hides the complexity of solving a quartic equation, that complexity still exists, and at some point we're going to have to deal with it. That means going through all 4 possible roots to find the answer.
## Inverse functions
The inverse of a function is the function that gets us back to our original arguments. For example:
\[
y = \sqrt{x} \quad \text{for } x > 0, \qquad y^2 = x
\]
if we define these in terms of functions:
\[
f(x) = \sqrt{x}, \qquad g(y) = y^2 = x
\]
In general:
\[
\begin{gathered}
y = f(x), \qquad g(y) = x \\
g(f(x)) = x, \qquad g = f^{-1}, \qquad f = g^{-1}
\end{gathered}
\]
Consider the case of graphing a function and its inverse. The inverse can be obtained by swapping the $$x$$ and $$y$$ values, which in effect mirrors the function across the line $$x = y$$.
Implicit differentiation allows us to find the derivative of any inverse function, provided we know the derivative of the function.
Example 1:
Consider the function:
\[
y = tan^{-1}x
\]
which can be rearranged/simplified by taking $$tan$$ of each side:
\[
tan\;y = x
\]
This can be graphed as follows:
Recall that
\[
\begin{align}
\frac{\delta}{\delta y} tan\;y &= \frac{\delta}{\delta y} \frac{sin\;y}{cos\;y} \\
&= \frac{1}{cos^2y}
\end{align}
\]
\[
\begin{align}
\frac{\delta}{\delta x} (tan\;y &= x) \\
(\frac{\delta}{\delta y} tan\;y)\frac{\delta y}{\delta x} &= 1 \\
\frac{1}{cos^2y} y' &= 1 \\
y' &= cos^2y
\end{align}
\]
Where we apply $$\frac{\delta}{\delta x}$$ to both sides (that's why the $$x$$ turns into a 1). However, we wanted to find $$\frac{\delta}{\delta x}tan^{-1}x$$. We can simplify this by expressing $$cos\;y$$ in terms of its defining ratio: in a right triangle with angle $$y$$, opposite side $$x$$ and adjacent side 1 (so that $$tan\;y = x$$), the hypotenuse is $$\sqrt{1 + x^2}$$. This gives us:
\[
\begin{align}
cos\;y &= \frac{1}{\sqrt{1 + x^2}} \\
cos^2y &= \frac{1}{1 + x^2}
\end{align}
\]
and then we substitute in the above:
\[
\begin{align}
\frac{\delta}{\delta x}tan^{-1}x &= cos^2(tan^{-1}x) \\
&= \frac{1}{1 + x^2}
\end{align}
\]
Example 2:
\[
\begin{align}
y &= sin^{-1}x \\
sin y &= x \\
(cos\;y)y' &= 1 \\
y' &= \frac{1}{cos\;y} \\
y' &= \frac{1}{\sqrt{1 - sin^2y}} \\
y' &= \frac{1}{\sqrt{1 - x^2}} \\
\therefore \frac{\delta}{\delta x}sin^{-1}x &= \frac{1}{\sqrt{1 - x^2}}
\end{align}
\]
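(A small numerical sanity check, not part of the lecture: central finite differences of $$tan^{-1}$$ and $$sin^{-1}$$ against the formulas just derived.)

```python
import math

def central_diff(f, x, h=1e-6):
    return (f(x + h) - f(x - h)) / (2 * h)

for x in (0.1, 0.5, 0.9):
    print(central_diff(math.atan, x), 1 / (1 + x**2))            # derivative of tan^-1
    print(central_diff(math.asin, x), 1 / math.sqrt(1 - x**2))   # derivative of sin^-1
```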
## Exponentials & Logarithms
First, let's review exponents.
Consider some base, $$a$$, such that $$a > 0$$. We know that $$a^0 = 1$$, $$a^1 = a$$, $$a^2 = a \times a$$. Generally:
\[
a^{x_1 + x_2} = a^{x_1}a^{x_2}
\]
This is known as the law of exponents. From this we can derive:
\[
(a^{x_1})^{x_2} = a^{x_1x_2}
\]
Likewise:
\[
a^{\frac{m}{n}} = \sqrt[n]{a^m}
\]
$$a^x$$ is defined for all x by continuity. We can calculate things like $$a^\pi$$ and $$a^{\sqrt{2}}$$ using the rules above.
Example graph of $$y = 2^x$$:
The ultimate goal is to be able to solve
\[
\frac{\delta}{\delta x}a^x
\]
We can begin by going back to first principles:
\[
\begin{align}
\frac{\delta}{\delta x}a^x &= \lim_{\Delta x \to 0} \frac{a^{x + \Delta x} - a^x}{\Delta x} \\
&= \lim_{\Delta x \to 0} \frac{a^x a^{\Delta x} - a^x}{\Delta x} \\
&= \lim_{\Delta x \to 0} a^x \frac{a^{\Delta x} - 1}{\Delta x} \\
&= a^x \lim_{\Delta x \to 0} \frac{a^{\Delta x} - 1}{\Delta x}
\end{align}
\]
We can move $$a^x$$ out of the limit because it does not depend on $$\Delta x$$; it is constant as far as the limit is concerned.
Next, lets define a new variable, and express the above equation in a new way:
\[
M(a) = \lim_{\Delta x \to 0} \frac{a^{\Delta x} - 1}{\Delta x}
\]
so that
\[
\frac{\delta}{\delta x}a^x = M(a)a^x
\]
If we recall the start of the course, we can recognise $$M(a)$$ as the slope of a tangent line. From here, we can work out the slope of $$a^x$$. First, plug $$x = 0$$ into the above:
\[
\begin{align}
\left. \frac{\delta}{\delta x}a^x \right|_{x = 0} &= M(a)a^0 \\
&= M(a)
\end{align}
\]
So we can see that the slope of $$a^x$$ at $$x = 0$$ is $$M(a)$$. What is $$M(a)$$?
Evaluating $$M(a)$$ directly is difficult. Let's sidestep the question and instead define $$e$$ as the number such that $$M(e) = 1$$.
The consequences of having a number such as $$e$$ are as follows:
\[
\begin{align}
\frac{\delta}{\delta x}e^x &= M(e)e^x \\
&= e^x
\end{align}
\]
which is based on the above derivation.
Also note that $$M(e) = 1$$ means the slope of $$e^x$$ at $$x = 0$$ is 1. That is, the slope of the tangent line at $$x = 0$$ is 1.
How do we know that a number such as $$e$$ exists?
Without defining $$e$$, we can show graphically that it must exist.
Consider the graph of the function $$y = 2^x$$, and compare the tangent line at $$x = 0$$ ($$M(2)$$) to the secant line through the points $$(0,1), (1,2)$$:
Notice the slope $$M(2)$$ of the tangent line is less than the slope of the secant.
Now consider the graph of the function $$y = 4^x$$ and compare $$M(4)$$ against the same secant:
Notice the slope $$M(4)$$ of the tangent line is greater than the slope of the secant.
This shows that somewhere between the bases of 2 and 4, there is a number where $$M(a) = 1$$.
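(A numerical version of this argument, not from the lecture: estimating $$M(a)$$ for a few bases shows $$M(2) < 1 < M(4)$$, with the crossover near $$a \approx 2.718$$.)

```python
import math

def M(a, dx=1e-8):
    """Numerical estimate of M(a) = lim (a^dx - 1)/dx as dx -> 0."""
    return (a**dx - 1) / dx

for a in (2, math.e, 3, 4):
    print(a, M(a))   # M(2) ~ 0.69 < 1, M(e) ~ 1, M(4) ~ 1.39 > 1
```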
To fit this concept together with where we left the derivation off, we need to use the natural log.
If $$y = e^x$$, then $$ln\;y = x$$. That is, the natural log is the inverse of $$e^x$$.
A refresher on logarithms:
- $$ln(x_1x_2) = ln\;x_1 + ln\;x_2$$
- $$ln\;1 = 0$$
- $$ln\;e = 1$$
Next, we need to cover computing the derivative of a logarithm. Remember from implicit differentiation that we can compute the derivative of a function if we know the derivative of its inverse. That's the rationale behind finding this derivative.
First, lets define $$w = ln\;x$$. Using the above rules of $$ln$$ and $$e$$ we can show that:
\[
\begin{align}
e^w &= x \\
\frac{\delta}{\delta x}e^w &= \frac{\delta}{\delta x} x \\
&= 1 \\
(\frac{\delta}{\delta w}e^w)(\frac{\delta w}{\delta x}) &= 1 \\
e^w \frac{\delta w}{\delta x} &= 1 \\
\frac{\delta w}{\delta x} &= \frac{1}{e^w} \\
&= \frac{1}{x}
\end{align}
\]
See that $$\frac{\delta}{\delta w}e^w = e^w$$ based on the assumption that $$M(e) = 1$$.
For further info, check the lecture notes for this lecture.
Therefore we get:
\[
\frac{\delta}{\delta x}ln\;x = \frac{1}{x}
\]
To summarize, we have these two rates of change:
\[
\frac{\delta}{\delta x} e^x = e^x
\]
and
\[
\frac{\delta}{\delta x} ln\;x = \frac{1}{x}
\]
With these two derivatives in hand, we can return to the task of solving $$M(a)$$, and so differentiate $$a^x$$. There are two methods we can use to solve any exponential.
Method 1:
Convert the exponential to base $$e$$:
\[
\begin{align}
a^x &= (e^{ln\;a})^x \\
&= e^{x\;ln\;a}
\end{align}
\]
and then differentiate:
\[
\begin{align}
\frac{\delta}{\delta x}a^x &= \frac{\delta}{\delta x}e^{x\;ln\;a} \\
&= (ln\;a)e^{x\;ln\;a} \\
&= (ln\;a)a^x
\end{align}
\]
and we can see therefore that $$M(a) = ln\;a$$.
The above works because of 2 things, and can be illustrated as follows:
\[
\frac{\delta}{\delta x}e^{3x} = 3e^{3x}
\]
The 3 is the derivative of $$3x$$, and is multiplied by the derivative of $$e^{3x}$$, which is $$e^{3x}$$.
The same is occuring above. $$ln\;a$$ is a constant, and so we differentiate out the $$x$$ and multiply it by the derivative of $$e$$, which is itself.
Method 2:
The second method involves logarithmic differentiation.
Sometimes it’s easier to differentiate the logarithm of a function instead of the function itself.
Let's assume some function $$u$$ (a function of $$x$$), and compute:
\[
\begin{align}
\frac{\delta}{\delta x} ln\;u &= \left(\frac{\delta}{\delta u} ln\;u\right)\left(\frac{\delta u}{\delta x}\right) \\
&= \frac{1}{u} \frac{\delta u}{\delta x} \\
&= \frac{u'}{u}
\end{align}
\]
where we switch notation in the last step.
It’s easy to see where to go now from here to solve for $$a^x$$:
\[
\begin{align}
u &= a^x \\
ln\;u &= x\;ln\;a \\
\frac{u'}{u} &= \frac{\delta}{\delta x}(x\;ln\;a) = ln\;a \\
(a^x)' &= u' = (ln\;a)u = (ln\;a)a^x
\end{align}
\]
Example 2 of method 2
An example with both a moving exponent and a moving base.
Let’s consider the function:
\[
v = x^x
\]
Solve $$\frac{\delta}{\delta x} v$$ using logarithmic differentiation:
\[
\begin{align}
ln\;v &= x\;ln\;x \\
(ln\;v)' &= ln\;x(x)' + x(ln\;x)' \\
&= ln\;x + x \cdot \frac{1}{x} \\
&= ln\;x + 1 \\
\frac{v'}{v} &= ln\;x + 1 \\
v' &= v(ln\;x + 1) \\
&= x^x(1 + ln\;x)
\end{align}
\]
Where on the second line, we apply the product rule.
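(Checking the result with sympy, which is my addition rather than part of the lecture.)

```python
import sympy as sp

x = sp.symbols('x', positive=True)
print(sp.diff(x**x, x))   # x**x*(log(x) + 1), matching the result above
```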
Example 3
We’re going to evaluate:
\[
\lim_{n \to \infty}\left(1 + \frac{1}{n}\right)^n
\]
\[
ln \left( \left(1 + \frac{1}{n} \right)^n \right) = n\;ln\left(1 + \frac{1}{n}\right)
\]
here, we’re going to assign $$\Delta x = \frac{1}{n}$$, which makes $$n = \frac{1}{\Delta x}$$:
\[
n\;ln \left( 1 + \frac{1}{n} \right) = \frac{1}{\Delta x} (ln(1 + \Delta x) - ln\;1)
\]
here, we’re adding 0, in the form of $$ln\;1$$.
This results in an equation that matches the form of another equation we know:
\[
\begin{align}
\lim_{\Delta x \to 0}\frac{ln(1 + \Delta x) - ln\;1}{\Delta x} &= \frac{\delta}{\delta x}ln\;x \quad \text{at } x = 1 \\
&= \frac{1}{x} \quad \text{at } x = 1 \\
&= 1
\end{align}
\]
Now we can just work back:
\[
\begin{align}
\lim_{n \to \infty}\left( 1 + \frac{1}{n} \right)^n &= e^{\left[ ln \left[ \lim_{n \to \infty} \left( 1 + \frac{1}{n} \right)^n \right] \right]} \\
&= e^1 \\
&= e
\end{align}
\]
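(A quick numerical look at this limit, not part of the lecture.)

```python
import math

for n in (10, 1_000, 100_000, 10_000_000):
    print(n, (1 + 1 / n) ** n)   # 2.5937..., 2.7169..., 2.71826..., 2.7182816...
print(math.e)                    # 2.718281828459045
```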
http://www.physicsforums.com/showthread.php?p=1434796
## Hollow Earth, Hollow Gravity
I have just finished reading a very detailed book that has half convinced me that the earth (and other planetary bodies) are hollow. I know this sounds ridiculous, and if someone here gives a good answer why not, I'll change my mind (and get very annoyed that I read a 500-page book that can be disproved!), but I can't think of a good reason to disprove it.
I read the previous topic on this site on the hollow earth theory, but no-one raised the points that were in this book so I’ll have a go.
You can see the book at: http://www.amazon.com/gp/reader/0620...15#reader-link , it's written by Jan Lamprecht, who says the book is a Lateral Thinking exercise along the lines advocated by Kurt Godel.
When I started reading I had three problems I could see with the hollow earth concept;
1. Gravity
2. Seismology
3. The origin of volcanoes
I found his rebuttals to these points hard to flaw, which he answered in the very first section:
1. Newton's Law of Gravity; one of the most useful mathematical formulae ever devised. This little formula has made space travel and the exploration of the Solar System possible. It made satellites possible. . . . Scientists use this little formula to gain an understanding of galaxies far away, and indeed the behaviour of the universe as a whole. It is now more than 300 years since Newton devised this little formula; and we still do not know what causes gravity.
Newtonian gravity is accurately measured and proven within the bounds of the solar system. However, Newtonian gravity remains untested in other areas. All we have is a formula. This formula has been used to determine the mass of the Earth. This is based on the concept that each mass of M inside the Earth exerts an attractive force of F. We do not know the valid range for Newtonian gravity. Inside Newton's formula is G. G is the "universal gravitational constant". It is assumed, and assumed is the correct word here, that each mass of M exerts the same force of F regardless of where in the universe it may be placed. It is also assumed that each mass of M exerts the same force F whether it lies on the surface of the Earth or whether it be deep inside the Earth. When using the Cavendish balance to determine the mass of the Earth, it is assumed that each particle exerts a fixed force upon all others. This assumption rules out the very real possibility that particles near the surface of a planet might exert a force greater than those deep down. The key to all of our gravity is the mass of the Earth. If the mass of the Earth is wrong, then so are our estimates for those of other bodies. If the mass of the Earth has been overstated, then it follows that the masses of all other bodies in the solar system have also been overstated. If the Earth is hollow, then so too is every other planet in the solar system.
How can we be sure that the Earth really has the mass accorded it by Newtonian gravity? As gravity is so unbelievably weak, is an experiment using two lead balls really representative of the entire Earth? No, of course not. There is electrical charge to account for, and also magnetic and electromagnetic forces, which are a lot stronger than gravity and which the current theory does not take into account.
[He then proceeds to go into detail about Newton’s shell theorem, which is too mathematically complex for me to understand, but I think proves that gravity is always 0 at the centre of any spherical body]
..then seismology
2. The only "reliable" method we have of knowing what goes on in the Earth beneath our feet comes from the science of Seismology. However, there are many examples of actual findings being different from what was predicted. The science of seismology contains two very broad assumptions which no one has ever been able to verify: 1. The speed of seismic waves beneath the Earth is ultimately inferred from our understanding of the structure of the Earth based on Newtonian Gravity. We have no way of being certain that these waves really are reaching these depths or travelling at these speeds. 2. We cannot be sure that speed changes are due to the changing constitution of the Earth. Our view of the inner Earth might be very skewed. Much of the predicted structure changes have never turned out to be real. If we find such errors at depths of just a few kilometres, how much less can we trust our ideas when dealing with rock which is hundreds and perhaps thousands of miles beneath the surface?
The deepest man has ever gone into the earth's crust is 25 miles; there are still hundreds of miles to go until we can have any sort of proof.
..then geology
3. What do we really know about the Earth's interior? And how trustworthy is our knowledge of it? Many people (mistakenly) think that the lava which pours out of volcanoes comes from a large reservoir of molten material which makes up the greater part of the Earth. Scientists have discovered that lava comes from within the Earth's crust. The lava comes from approximately 20 miles down. The existence of lava does not affect the passage of earthquake (seismic) waves. This indicates to scientists that the crust is largely solid. So where does the heat come from which melts the rock locally? Scientists have advanced two theories. Some say that the melting is due to high concentrations of radioactive elements in a particular area. These decaying radioactive elements generate enough heat to melt rock. Much lava is slightly radioactive and that lends support to this theory. Other geologists have argued that shearing and faulting are adequate heat generating mechanisms via friction. The evidence supports both theories. Lava cannot possibly be rising from the centre of the Earth as some may be tempted to think. It would cool down and become solid on its long, slow journey upwards. Lava is therefore a surface phenomenon and does not in any way reflect what the Earth is like 50 or 100 or more miles down.
The author then gives some reasons to believe the earth may be hollow. He mentioned what seismologists call the 'shadow zone' which is the large area in the centre of the earth that no P or S waves ever penetrate, and he has concluded that it is more likely that they cannot penetrate that because there is nothing there to penetrate (seismology does not work in gas). He also explains the earth’s magnetic field with a counter rotating dynamo effect which involves a burning hydrogen core. He postulates that as Hydrogen is the lightest element, due to Newton’s shell theorem making g = 0 at the centre, it would naturally diffuse to the centre and would burn in a reaction similar to the suns.
I was surprised to read a review of the book from Dr Tom Van Flandern (astronomer, formerly U.S. Naval Observatory) who said: “For merely showing us all that the inferred density profile of Earth's interior is not a unique solution of seismic data -- an important constraint for all theoreticians working in that area -- the book had already made itself worthwhile.”
And also a review from Richard Baum (Director Mercury & Venus Sections, British Astronomical Association) which said: “I must say you have stored your book with an enormous amount of information; much quite surprising, all stimulating. Essentially you are not only obliging us to take a fresh look at things but to observe from an unsuspected different position - the presumed impossible.”
It all seemed quite scientifically sound to me, so I decided to put it in this debunking section, as I can't debunk it, especially points 1, 2 and 3 that he makes.
Is there anything that really puts a nail in the coffin of this theory? I'd be eager to hear it.
some diagrams: http://www.bibliotecapleyades.net/ti...ra_hueca_9.htm
How did the earth come to be hollow then? It is rather implausible that the earth somehow formed so that it is, has been, or will be hollow.
Quote by bel How did the earth come to be hollow then? It is rather implausible that the earth somehow formed so that it is, has been, or will be hollow.
That is a valid point, but he did cover that
You would have the centripetal force from the spin, and he says magnetism from pole to pole may affect the interior formation. There are also electrical currents, which have been proven to flow through the earth and which may affect the interior.
The current theory, of gradual accumulation of mass from asteroids and dust, does not really explain how the huge reaction at the centre started in the first place, or how it is sustained.
He says that over time, as the earth accumulated more mass, it grew more spherical as the strength of gravity increased, and when bodies form into spherical bodies, as Newton's shell theorem says (http://en.wikipedia.org/wiki/Shell_theorem), gravity starts to cancel out directly in the centre, and the planet forms into a hollow structure. Architects have already shown the structure is sustainable, like a Dyson Sphere (http://en.wikipedia.org/wiki/Dyson_sphere) or a Globus Cassus (http://en.wikipedia.org/wiki/Globus_Cassus).
## Hollow Earth, Hollow Gravity
Quote by ZeuZ It is assumed, and assumed is the correct word here, that each mass of M exerts the same force of F regardless of where in the universe it may be placed. It is also assumed that each mass of M exerts the same force F whether it lies on the surface of the Earth or whether it be deep inside the Earth.
The assumption of translational symmetry is one of the most basic assumptions of physics without which physics is generally meaningless. If this assumption is challenged, other physical laws should not be invoked to further an argument. And Newton's theory of universal gravitation follows from $$F=ma$$.
I don't think he is saying we need to challenge translational symmetry at all; he is pointing out that there are other forces that affect particles apart from gravity, and all of these forces are in fact much stronger than gravity, so they may in fact play a much bigger role than gravity does in planetary formation. In other words it's not just the mass of particles that affects their formation, it's also the charge and other local electromagnetic forces. He also suggests that gravity may invert, which would put the centre of gravity as a ring around the crust, and objects inside the cavity would be attracted out to the crust as well as objects outside the crust. I'm not so sure about 'inverting gravity'; I can see gravity cancelling at the centre, but not fully inverting like he suggests.
If the earth were hollow, then wouldn't that hugely affect the orbit of the earth around the sun, and of the moon around the earth? Whatever happened to Occam's razor? Given what we know about planet formation, it is a lot more likely that the earth has matter inside it than that it is hollow.
Quote by ZeuZ I’ll change my mind, (and get very annoyed that I read a 500 page book that can be disproved!)
You should be annoyed, but with yourself as well as the author. Why did you waste your time on this tripe? I'm not going to spend a whole lot of time debunking this, but here goes.
When I started reading I had three problems I could see with the hollow earth concept: 1. Gravity, 2. Seismology, 3. The origin of volcanoes.
That's a good start. Add the Earth's magnetic field, cosmology, Occam's scalpel, ...
Newtonian gravity is accurately measured and proven within the bounds of the solar system. However, Newtonian gravity remains untested in other areas. All we have is a formula. This formula has been used to determine the mass of the Earth.
First, Newtonian gravity is not exactly correct. General relativity is a better model, and as far as we can see it works elsewhere exactly as it does here. That bit about not knowing whether gravity works outside the solar system is the purest of BS.
We also have the density of rocks and the size of the Earth, which serve as a very good check on the Earth's mass as deduced from satellite orbits and the Moon's orbit. If the Earth were hollow, it would have to be made of something much denser than rock to account for those orbits.
The Earth is very, very small compared to the sun. The Earth's mass contributes almost nothing to the orbits of the other planets. Their orbits are independent checks against the value of the universal gravitational constant based on observations of the moon and satellites.
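For illustration, a rough back-of-the-envelope version of that check (a sketch with assumed textbook values, not anything from the thread): the Moon's orbit fixes the Earth's mass, and dividing by the Earth's volume gives a mean density well above that of surface rock, which is the opposite of what a hollow planet would require.

```python
import math

G = 6.674e-11      # m^3 kg^-1 s^-2, gravitational constant
a = 3.844e8        # m, mean Earth-Moon distance (assumed textbook value)
T = 27.32 * 86400  # s, sidereal month (assumed textbook value)

# Kepler's third law (Moon's own mass neglected): M ~ 4 pi^2 a^3 / (G T^2)
M = 4 * math.pi**2 * a**3 / (G * T**2)

R = 6.371e6                        # m, mean Earth radius
rho = M / (4/3 * math.pi * R**3)   # implied mean density

print(f"Earth mass from the Moon's orbit: {M:.2e} kg")   # ~6.0e24 kg
print(f"Implied mean density: {rho:.0f} kg/m^3")         # ~5500, about twice crustal rock (~2700)
```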
Finally, the gravitational attraction inside the supposed hollow earth would be zero. Supposed inhabitants inside the hollow earth would live in a near zero-g environment.
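A minimal sketch of why, stated here for reference and assuming only Newtonian gravity: for a thin uniform shell of mass $m$ and radius $a$, the potential is

$$\Phi(r) = -\frac{Gm}{a} \ \ (r<a), \qquad \Phi(r) = -\frac{Gm}{r} \ \ (r>a),$$

so inside the shell the potential is constant and $\mathbf{g} = -\nabla\Phi = 0$ everywhere in the cavity, not just at the exact centre; any "hollow earth" built from concentric shells inherits the same property.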
..then seismology
The author is completely full of it here. A hollow earth would have a very distinct seismological ring. We use seismological data to "see" to the core of the Earth. Either seismologists have conspired to cover up the hollow earth, or the author is lying.
..then geology
More BS. Please read up on the incredible amount of science that describes how our earth was formed and how we know about the inside of the Earth.
Quote by bel And Newton's theory of universal gravitation follows from $$F=ma$$.
You aren't helping. Newton's theory of universal gravitation does not follow in any way from Newton's laws of motion.
Quote by ZeuZ That is a valid point, but he did cover that You would have the centripetal force from the spin...
Clearly, the centripetal force could not create a spherical shell, since it is nonexistent at the poles. It could only ever create a doughnut-shaped body, like the rings of Saturn. But like rings, such things are typically gravitationally unstable and tend to collect into single, solid bodies (aka, moons).
Regarding his other issues with gravity:
The key to all of our gravity is the mass of the Earth. If the mass of the Earth is wrong, then so are our estimates for those of other bodies. If the mass of the Earth has been overstated, then it follows that the masses of all other bodies in the solar system have also been overstated. If the Earth is hollow, then so too is every other planet in the solar system.
Correct - therefore the calculated mass of the earth could not possibly be wrong. You can't have a hollow gas giant (or sun, for that matter), and we've sent space probes to every planet and many, many moons. Obviously, our understanding of gravity must be correct.
I love self-debunking crackpots!
Quote by ZeuZ 1. Newton's Law of Gravity; one of the most useful mathematical formulae ever devised. This little formula has made space travel and the exploration of the Solar System possible. It made satellites possible. . . . Scientists use this little formula to gain an understanding of galaxies far away, and indeed the behaviour of the universe as a whole. It is now more than 300 years since Newton devised this little formula; and we still do not know what causes gravity. Newtonian gravity is accurately measured and proven within the bounds of the solar system. However, Newtonian gravity remains untested in other areas. All we have is a formula. This formula has been used to determine the mass of the Earth. This is based on the concept that for each mass of M inside the Earth, it exerts an attractive force of F. We do not know the valid range for Newtonian gravity. Inside Newton's formula is G. G is the "universal gravitational constant". It is assumed, and assumed is the correct word here, that each mass of M exerts the same force of F regardless of where in the universe it may be placed. It is also assumed that each mass of M exerts the same force F whether it lies on the surface of the Earth or whether it be deep inside the Earth. When using the Cavendish balance to determine the mass of the Earth, it is assumed that each particle exerts a fixed force upon all others. This assumption rules out the very real possibility that particles near the surface of a planet might exert a force greater than those deep down. The key to all of our gravity is the mass of the Earth. If the mass of the Earth is wrong, then so are our estimates for those of other bodies. If the mass of the Earth has been overstated, then it follows that the masses of all other bodies in the solar system have also been overstated. If the Earth is hollow, then so too is every other planet in the solar system. How can we be sure that the Earth really has the mass accorded it by Newtonian gravity? As gravity is so unbelievably weak, is an experiment using two lead balls really representative of the entire Earth? No, of course not. There is electrical charge to account for, and also magnetic forces and electromagnetic forces, that are a lot stronger than gravity, that the current theory does not take into account. [He then proceeds to go into detail about Newton's shell theorem, which is too mathematically complex for me to understand, but I think it proves that gravity is always 0 at the centre of any spherical body]
All this proves is that Jan Lamprecht is ignorant of the physics necessary to write this book. Newtonian gravitation is not exactly accurate even within the Solar System - even GPS satellites rely on General Relativity.
..then seismology 2. The only "reliable" method we have of knowing what goes on in the Earth beneath our feet comes from the science of Seismology. However, there are many examples of actual findings being different from what was predicted. The science of seismology contains two very broad assumptions which no one has ever been able to verify: 1. The speed of seismic waves beneath the Earth is ultimately inferred from our understanding of the structure of the Earth based on Newtonian Gravity. We have no way of being certain that these waves really are reaching these depths or travelling at these speeds. 2. We cannot be sure that speed changes are due to the changing constitution of the Earth. Our view of the inner Earth might be very skewed. Much of the predicted structure changes have never turned out to be real. If we find such errors at depths of just a few kilometres, how much less can we trust our ideas when dealing with rock which is hundreds and perhaps thousands of miles beneath the surface? The fact that the deepest man has ever gone into the earths crust is 25 miles, theres still hundreds of miles to go until we can have any sort of proof.
This is almost entirely misinformation. The speeds of seismic waves have been computed with little or no recourse to "Newtonian Gravity". The speeds of these waves have been experimentally measured and the numbers agree with the theory. Furthermore, in the last couple of decades, people have used this same technique to calculate phonon velocities in multilayered semiconductor materials, which agree excellently with experimental measurements.
References:
Lord Rayleigh, Proc. London Math. Soc. 17, 4 (1887)
K. Sezawa and K. Kanai, Bull. Earthquake Res. Hist. 17, (1939)
J. G. Scholte, Proc. Acad. Sci. Amsterdam 45, 20 (1942)
R. A. Phinney, Bull. Seismol. Soc. Am. 51, 527 (1961)
W. Pilant, Bull. Seismol. Soc. Am. 62, 285 (1972)
I. V. Ponomarev and A. L. Efros, Phys. Rev. B 63, 165305 (2001)
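As a concrete illustration of what those measured speeds depend on - elastic moduli and density, not gravity - here is a small sketch with representative (assumed) values; the key point is that the S-wave speed vanishes when the material has no shear strength, which is exactly what the shadow-zone argument exploits:

```python
import math

def vp(K, mu, rho):
    """P-wave speed from bulk modulus K, shear modulus mu, density rho (SI units)."""
    return math.sqrt((K + 4 * mu / 3) / rho)

def vs(mu, rho):
    """S-wave speed; zero when the material cannot support shear (mu = 0, i.e. a fluid)."""
    return math.sqrt(mu / rho)

rock   = dict(K=50e9, mu=30e9, rho=2700)   # representative crustal rock (assumed values)
liquid = dict(K=2.2e9, mu=0.0, rho=1000)   # e.g. water: no shear strength

print(f"rock  : vp = {vp(**rock)/1e3:.1f} km/s, vs = {vs(rock['mu'], rock['rho'])/1e3:.1f} km/s")
print(f"liquid: vp = {vp(**liquid)/1e3:.1f} km/s, vs = {vs(liquid['mu'], liquid['rho'])/1e3:.1f} km/s")
```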
..then geology 3. What do we really know about the Earth's interior? And how trustworthy is our knowledge of it? Many people (mistakenly) think that the lava which pours out of volcanoes comes from a large reservoir of molten material which makes up the greater part of the Earth. Scientists have discovered that lava comes from within the Earth's crust. The lava comes from approximately 20 miles down. The existence of lava does not affect the passage of earthquake (seismic) waves. This indicates to scientists that the crust is largely solid. So where does the heat come from which melts the rock locally? Scientists have advanced two theories. Some say that the melting is due to high concentrations of radioactive elements in a particular area. These decaying radioactive elements generate enough heat to melt rock. Much lava is slightly radioactive and that lends support to this theory. Other geologists have argued that shearing and faulting are adequate heat generating mechanisms via friction. The evidence supports both theories. Lava cannot possibly be rising from the centre of the Earth as some may be tempted to think. It would cool down and become solid on its long, slow journey upwards. Lava is therefore a surface phenomenon and does not in any way reflect what the Earth is like 50 or 100 or more miles down.
This is also completely nonsensical. Measured lava temperatures (from hundreds of volcanoes) range from about 650C to about 1200C. Temperatures in the Mantle are estimated to be about 900C. There's no reason for magma (I sure hope the author didn't call it 'lava', 'cause then he'd also be revealing his ignorance of middle school geography) to cool significantly below this temperature on its way up.
The author then gives some reasons to believe the earth may be hollow. He mentioned what seismologists call the 'shadow zone' which is the large area in the centre of the earth that no P or S waves ever penetrate, and he has concluded that it is more likely that they cannot penetrate that because there is nothing there to penetrate (seismology does not work in gas). He also explains the earth’s magnetic field with a counter rotating dynamo effect which involves a burning hydrogen core. He postulates that as Hydrogen is the lightest element, due to Newton’s shell theorem making g = 0 at the centre, it would naturally diffuse to the centre and would burn in a reaction similar to the suns.
More crackpottery! For one thing, the gravitational field is zero everywhere inside the hollow - not just at the very center. Secondly, to burn hydrogen in a thermonuclear reaction (like in the sun) you need a mass of at least 8% the mass of our sun. The mass of the earth is itself about 50,000 times too small. And if you believe in a hollow earth, then it's even smaller still.
See: Very low mass stars, black dwarfs and planets
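A quick arithmetic check of the order of magnitude (assumed textbook masses; the precise factor depends on which fusion threshold one adopts):

```python
M_sun   = 1.989e30   # kg (assumed value)
M_earth = 5.972e24   # kg (assumed value)

threshold = 0.08 * M_sun            # rough minimum mass for sustained hydrogen fusion
print(f"threshold / M_earth ~ {threshold / M_earth:,.0f}")
# => a few tens of thousands: the Earth is orders of magnitude too light to burn hydrogen
```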
I was surprised to read a review of the book from Dr Tom Van Flandern (astronomer, formerly U.S. Naval Observatory) who said: “For merely showing us all that the inferred density profile of Earth's interior is not a unique solution of seismic data -- an important constraint for all theoreticians working in that area -- the book had already made itself worthwhile.”
Dr. van Flandern, once a respectable astronomer, is now a Relativity denying conspiracy theorist.
And also a review from Richard Baum (Director Mercury & Venus Sections, British Astronomical Association) which said: “I must say you have stored your book with an enormous amount of information; much quite surprising, all stimulating. Essentially you are not only obliging us to take a fresh look at things but to observe from an unsuspected different position - the presumed impossible.”
This is an excerpt from a private email to the author, who seems to have a personal relationship with R M Baum - it is not a public review.
Quote by ZeuZ ..then geology 3. What do we really know about the Earth's interior? And how trustworthy is our knowledge of it? Many people (mistakenly) think that the lava which pours out of volcanoes comes from a large reservoir of molten material which makes up the greater part of the Earth. Scientists have discovered that lava comes from within the Earth's crust. The lava comes from approximately 20 miles down. The existence of lava does not affect the passage of earthquake (seismic) waves. This indicates to scientists that the crust is largely solid. So where does the heat come from which melts the rock locally? Scientists have advanced two theories. Some say that the melting is due to high concentrations of radioactive elements in a particular area. These decaying radioactive elements generate enough heat to melt rock. Much lava is slightly radioactive and that lends support to this theory. Other geologists have argued that shearing and faulting are adequate heat generating mechanisms via friction. The evidence supports both theories. Lava cannot possibly be rising from the centre of the Earth as some may be tempted to think. It would cool down and become solid on its long, slow journey upwards. Lava is therefore a surface phenomenon and does not in any way reflect what the Earth is like 50 or 100 or more miles down.
This fellow hasn't studied geology past high school, and even then probably wasn't paying attention.
He's right that there isn't (much) melted material in the deep earth, but this should surprise no one (I'll forgive the fact that he called it lava, when the correct term is magma. Lava is an eruptive product and is only called that on the surface). The mantle is a plastically deforming solid (think silly putty) and does not melt due to the extreme pressure on it. I don't know where he gets his ideas of what scientists say* causes lava melt, but he's fairly far off the mark. For the major causes, firstly, yes heating is an option, but not from localised radioactive decay or faults, but from convecting material upwelling from the core-mantle boundary. Secondly, decompression of material at ocean ridges, continental rifts etc allows melting. Thirdly, addition of volatiles like at subduction zones can lower the melting point of the mantle.
Let's think about his claim that lava (magma!) rising from the core would cool and set. Firstly, barring perhaps some amount of magma at the ultra low velocity zone next to the outer core, as I've already said the mantle is largely solid and deforms as a plastic. This material setting is irrelevant. If it were a liquid, S-waves would be unable to penetrate it, and yet they do. Again, he doesn't know what he's talking about. As for a hot plastically deforming solid rising, that is plausible. The earth is a very poor conductor of heat, and so conduction is a very slow method of removing heat from the core. Since the mantle can behave as a fluid, convection therefore is the major way in which heat is transferred. Seismology, by the way, can in fact pick up columns of low density material rising up through the mantle...
I'm not at all sure how he feels this is evidence for a hollow earth anyway... What does it have to do with it?
The author then gives some reasons to believe the earth may be hollow. He mentioned what seismologists call the 'shadow zone' which is the large area in the centre of the earth that no P or S waves ever penetrate, and he has concluded that it is more likely that they cannot penetrate that because there is nothing there to penetrate (seismology does not work in gas).
OK, let's examine this claim. Here is your S-wave shadow zone. These S-waves can't penetrate liquid, so we get this pattern. The same would be seen if there was nothing to penetrate, I suppose. But what about the P-waves? If there were a hollow space, these would surely not be transmitted either, and we would see more or less the same pattern. Instead we see this pattern. They are transmitted through. I doubt this would happen if the earth was hollow.
*There's nothing that screams bulls*** more than the phrase "scientists have discovered" followed by no citations of any kind.
Quote by D H That's a good start. Add the Earth's magnetic field, cosmology, Occam's scalpel, ...
He explained that the magnetic field is produced by dynamo theory: using the magnetohydrodynamic equations it can be shown that the flow of conducting ionised material (a plasma) in the interior of an object can continuously regenerate the magnetic fields of planetary and stellar bodies, similar to the action observed in the sun. In terms of cosmology, he could not see why gravity should be the predominant force in the universe when it is so weak. He also said there is no reason why the burning core started burning.
Quote by D H Finally, the gravitational attraction inside the supposed hollow earth would be zero. Supposed inhabitants inside the hollow earth would live in a near zero-g environment.
Correct, that would be because of the shell theorem. I was wondering, is gravity at all related to pressure inside planets, or are they completely independent variables?
Quote by D H The author is completely full of it here. A hollow earth would have a very distinct seismological ring. We use seismological data to "see" to the core of the Earth. Either seismologists have conspired to cover up the hollow earth, or the author is lying.
He questions the data of seismology, and even proposes an alternative interpretation of seismological data:
http://www.bibliotecapleyades.net/ti...ra_hueca_9.htm
Quote by D H More BS. Please read up on the incredible amounts of science that describes how our earth was formed and how we know about the inside of the Earth.
It's all derived from seismology though, is it not?
The only other method of mapping the underground is HAARP technology, but that's military-owned and the data is not publicly available.
Quote by russ_watters You can't have a hollow gas giant (or sun, for that matter), and we've sent space probes to every planet and many, many moons. Obviously, our understanding of gravity must be correct. I love self-debunking crackpots!
He wrote that he doubts that NASA, or the RKA, would have the authority or the integrity to rubbish Newtonian gravity when they first started to explore the solar system. It's similar to the (alleged) moon landing hoax: whether you believe it or not, NASA could tell us whatever they want and we would not know any better. Every textbook in the world would have to be changed, the whole basis of the mass of the earth and many other ideas would have to be changed, which would make a lot of scientists look very foolish. It's quite a typical conspiracy theorist idea; where's the evidence? - they covered it up, etc.
He also suggests that this is why not many other countries have successful space programs!
Quote by Gokul43201 This is almost entirely misinformation. The speeds of seismic waves have been computed with little or no recourse to "Newtonian Gravity". The speeds of these waves have been experimentally measured and the numbers agree with the theory. Furthermore, in the last couple of decades, people have used this same technique to calculate phonon velocities in multilayered semiconductor materials, which agree excellently with experimental measurements.
I agree, I found the fact that seismologists could have got the internal constitution of the earth so utterly wrong highly unlikely. Of course HE would again claim that fear of looking stupid would have prevented seismologists accepting this interpretation of the data.
I fail to see what multilayered semiconductor materials have to do with proving seismology; could you explain, please?
Also, there are very genuine doubts about how much seismology can actually tell us about the inside of the earth.
references:
http://www.pubmedcentral.nih.gov/art...?artid=1084633
Problems of Seismology (Structural Review)
http://www.cosis.net/abstracts/EAE03...03-J-07580.pdf
British Geological Survey, West Mains Road, Edinburgh, EH9 3LA, UK
Quote by Gokul43201 This is also completely nonsensical. Measured lava temperatures (from hundreds of volcanoes) range from about 650C to about 1200C. Temperatures in the Mantle are estimated to be about 900C. There's no reason for magma (I sure hope the author didn't call it 'lava', 'cause then he'd also be revealing his ignorance of middle school geography) to cool significantly below this temperature on its way up.
Estimated being the key word here. He is not arguing that there are no hot magma areas - the number of volcanoes at plate boundaries shows that clearly, and they probably get even hotter than 1200C - he is saying that they are a localised phenomenon and do not necessarily reflect the temperature deep down, as most people believe.
Quote by Gokul43201 More crackpottery! For one thing, the gravitational field is zero everywhere inside the hollow - not just at the very center. Secondly, to burn hydrogen in a thermonuclear reaction (like in the sun) you need a mass of at least 8% the mass of our sun. The mass of the earth is itself about 50,000 times too small. And if you believe in a hollow earth, then it's even smaller still.
Gravity would be 0 everywhere in the hollow, which is a problem for him, but it seems strange to me that there is no gravity at the centre, as I always thought that was why the pressure was so high.
He resolved the thermonuclear reaction by using some of the latest ideas in plasma cosmology, which is largely ignored by mainstream cosmologists, because it says that the Sun is a charged body that acts electrically. It also proposes that the Sun is surrounded by a plasma cell that stretches far out - many times the radius of Pluto - and that the sun may be fuelled externally by ionised plasma provided by the inaptly named 'solar wind' (it should be called an 'ionized gas'; the solar wind is actually constituted of very fast moving ionized particles), and he quoted some questions from Lewis E. Franks, PhD, Stanford University, that I could not answer:
So one of the most basic questions that ought to arise in any discussion of the Sun is: why does our Sun have a corona? Why is it there? It serves no purpose in a fusion-only model, nor can such models explain its existence. And why is it millions of degrees hotter than the surface of the sun? This is highly unexpected for a body whose heat should radiate uniformly from the core.
Quote by matthyaouw OK, let's examine this claim. Here is your S-wave shadow zone. These S-waves can't penetrate liquid, so we get this pattern. The same would be seen if there was nothing to penetrate, I suppose. But what about the P-waves? If there were a hollow space, these would surely not be transmitted either, and we would see more or less the same pattern. Instead we see this pattern. They are transmitted through. I doubt this would happen if the earth was hollow.
http://earthsci.org/education/teache...veloclayer.gif - it's a nice diagram, but the problem is that anything near the core is pure hypothesis from seismology; the only thing you really know is where the P-wave went in and where it came out. You can speculate where it travelled inside, but not prove it. The waves could have been refracted around the hollow cavity due to the decrease in pressure, which would give similar results.
It's not as if we can set up accurate seismological experiments on another planet to test the accuracy of seismology, so seismology apparently works well now, and is allegedly mathematically provable, but the math could very well be based on wrong assumptions about the internal constitution.
Just out of curiosity, is there a reason you are giving this crackpot that much credibility (not to mention, time and effort) when compared with other reputable sources? Zz.
This is, as so many other crackpot theories, undebunkable. If you put everything one knows in doubt, if you put everything that is said in doubt, for a variety of reasons, then you can always come to a self-consistent, or at least undebunkable, set of statements. It is in fact a philosophical truth that there is no ultimate truth that you can demonstrate. You always need to start *somewhere* to show something. So if, from the start, about everything we (think to) know is put in doubt, every attempt at debunking will consistently be put in doubt too.

As such, I think that "can we show you to be wrong" is not the right way to tackle these statements, because this will be an endless chase of argument-building followed by putting in doubt the premises. The creationists follow in fact the same path.

The scientific question is: can that hollow-earth guy make a falsifiable prediction with his hollow-earth theory that is in contradiction with the standard view, and where an experiment will show his theory right and the standard view wrong? Or has he set up a hollow-earth-that-looks-as-if-it-were-filled theory, where all our knowledge is modified in such a way as to accommodate a hollow-earth-that-is-not-to-be-detected-to-be-hollow-in-our-paradigm? In that case, he might just have invented a peculiar world view which is scientifically equivalent to the standard view. In that case, his theory is a fun philosophical game, which doesn't have any advantage (or disadvantage) over the standard view. But the problem with this kind of game is that one usually has an "expanding frontier of necessary new theories" when one tracks them logically, and that the inventor of the theory has limited himself to only a small domain where his system seems not self-contradictory.
I've had a more thorough look at the guy's alternative explanations, and looking at these diagrams, things don't seem to add up.

Conventional model: http://www.bibliotecapleyades.net/im...ahueca/P_6.gif
Hollow earth: http://www.bibliotecapleyades.net/im...ahueca/P_7.jpg

A novel explanation for the wave paths and distribution, but do the travel times match? In the hollow earth model, the waves arriving in the shadow zone take a much shorter path than those emerging close to 180 degrees from the epicentre, and they also avoid the hypothesised zone where density decreases with depth, and thus should arrive much sooner than those that emerge further from the epicentre, as well as being considerably stronger. Also note how the conventional model has an area where both PKP and PKIKP waves arrive. That means two separate arrival times for waves, which again isn't accounted for in the hollow earth model. Also, isn't it possible to identify boundaries by reflection seismology as well as refraction? If we can pick up the inner core with reflected waves, that would pretty much can his whole idea.
And let's not forget the results from the geophysical neutrinos that were detected by KamLAND back in 2005 [1]. I'd like to see this crackpot account for that. Zz. [1] T. Araki et al., Nature v.436, p.499 (2005).
Quote by ZeuZ In terms of cosmology he could not see why gravity should be the predominant force in the universe when it is so weak. Also he said there is no reason why the burning core started burning.
Yes, electromagnetism is orders of magnitude stronger than gravity, but electromagnetic forces can be attractive or repulsive and even if you can have a locally attractive or repulsive force, in a large enough sample they cancel each other.
Gravity is only attractive, so it adds continually when more and more mass is added.
Strong and weak interactions are even stronger than electromagnetism, but they vanish so fast with increasing distance that they are only active in the interior of atomic nuclei.
There is no mystery at all that gravity is the dominant force in the universe.
About the burning core, there is no mystery either. Gravity is responsible.
You must have seen the experiment in high school of the rise in temperature when you compress a gas. When a cloud of gas and dust is compressed by gravity to form a star or a planet, its temperature rises. If the mass is big enough, the temperature is sufficient to start fusion reactions and you have a star. If the mass is smaller you have a planet, and the temperature may be enough to melt rocks.
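A rough order-of-magnitude sketch of that statement (a scaling estimate with assumed textbook values, not a stellar-structure calculation):

```python
G   = 6.674e-11   # m^3 kg^-1 s^-2
k_B = 1.381e-23   # J/K
m_p = 1.673e-27   # kg, proton mass

def compression_temperature(M, R):
    """Characteristic temperature reached when a mass M is compressed by its own gravity to radius R."""
    return G * M * m_p / (k_B * R)

print(f"Sun-like cloud : ~{compression_temperature(1.99e30, 6.96e8):.0e} K")  # ~1e7 K, enough for fusion
print(f"Earth-like body: ~{compression_temperature(5.97e24, 6.37e6):.0e} K")  # ~1e4 K, hot rock but no fusion
```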
Smaller bodies, like asteroids, have not enough mass to be stable under the weak gravitational force, so they are subject mainly to electromagnetic interactions.
Quote by ZapperZ And let's not forget the results from the geophysical neutrinos that were detected by KamLAND back in 2005 [1]. I'd like to see this crackpot account for that. [1] T. Araki et al., Nature v.436, p.499 (2005).
Look at this official page on the Stanford site for this experiment. They are clearly not claiming to measure the internal constitution of the earth; they are detecting electron antineutrinos produced by natural radioactivity in the Earth.
They say on that page, "The Earth can be split into 5 basic regions according to the seismic data", which shows they are using seismology as if it were a proven fact, and they are obviously never going to conclude anything about the structure of the earth, as all they are measuring is the radiogenic power of particles coming from the earth.
They also say on that site, "The deepest borehole is ~12km, 1/500 of the Earth's radius.", so if there are still thousands of miles to explore, I'd say being confident that seismology can accurately predict the exact constitution, temperature, pressure and density of the Earth is naive.
No one has pointed out an alternative way of measuring the Earth's interior apart from (potentially flawed) seismology.
(they even have a picture of a hollow earth on their site, if you look at the left sphere.)
I'm glad you brought up that paper, as it clearly shows matthyaouw is wrong when he says
Quote by matthyaouw I don't know where he gets his ideas of what scientists say* causes lava melt, but he's fairly far off the mark. For the major causes, firstly, yes heating is an option, but not from localised radioactive decay or faults, but from convecting material upwelling from the core-mantle boundary.
As on the Stanford site it quite clearly says that:
"The radioactive isotopes inside the Earth generate heat. In particular, decays of the daughter nuclei in the decay chains of 238U and 232Th, and 40K generate most of the radiogenic heat produced. According to some of the mantle convection models, these two numbers, 44TW (or 31TW) for the total heat dissipation rate from the Earth, and 19TW for radiogenic heat production rate should be similar.
As these radioactive isotopes beta-decay, they produce antineutrinos. So, measuring these antineutrinos may serve as a crosscheck of the radiogenic heat production-rate"
So actually Jan is quite near the mark when he says heat is produced by radioactive isotopes in the Earth's crust.
Quote by Gokul43201 This is also completely nonsensical. Measured lava temperatures (from hundreds of volcanoes) range from about 650C to about 1200C. Temperatures in the Mantle are estimated to be about 900C. There's no reason for magma to cool significantly below this temperature on its way up.
Estimated, yes; proved, no. You have also assumed that seismology is accurate in measuring the temperature of the Earth's interior in the first place, as seismology is the only method of knowing what is under our feet more than 20 miles down.
Quote by CEL Strong and weak interactions are even stronger than electromagnetism, but they vanish so fast with increasing distance that they are only active in the interior of atomic nuclei. There is no mystery at all that gravity is the dominant force in the universe... Smaller bodies, like asteroids, have not enough mass to be stable under the weak gravitational force, so they are subject mainly to electromagnetic interactions.
I agree about asteroids being affected more by electromagnetic interactions. In fact some cosmologists believe that the tails of comets are electrical discharges as the comet travels rapidly through the local star's electric field (or "ion wind"). If you look at these pictures of comets it is quite obviously electrical in nature. The details of this observed phenomenon are here.
I know that the strong nuclear force and the weak force are too short-ranged to act on a scale as large as planets, as they only affect the subatomic level, but he didn't mention the weak or strong force. He specifically mentioned magnetism, charge and electricity. We all know what happens when you apply an electric field to, for example, an electron: it will experience a force in a certain direction (F = Ee), and similarly with a magnet. If you are using a magnet to pick something up, the WHOLE of the earth is pulling down on it, yet you are able to overcome this with something in your hand. I think it is actually quite logical to assume that magnetism and charge play a part in planetary formation since they are so much stronger and dominant in the universe we observe.
Quote by CEL About the burning core, there is no mystery either. Gravity is responsible. You must have seen the experiment in high school of the rise in temperature when you compress a gas. When a cloud of gas and dust is compressed by gravity to form a star or a planet, its temperature rises. If the mass is big enough, the temperature is sufficient to start fusion reactions and you have a star. If the mass is smaller you have a planet, and the temperature may be enough to melt rocks.
That's what I want to know, because I'm not sure it is. Newton's shell theorem says gravity is 0 at the centre, and in fact at the centre you would be pulled outwards in every direction, so I can't imagine there would be much pressure there, since all the forces are acting outwards on you.
Is there a relationship between pressure and gravity inside planets, or are they completely independent variables? It seems odd that they would be independent, as the alleged reason the pressure is so high at the centre is gravity, yet gravity = 0 at the centre.
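A minimal sketch of the standard relation between the two, assuming a uniform-density sphere (this is just textbook hydrostatic equilibrium, not an argument from the book): pressure at a given depth is set by the weight of everything above it, so it keeps increasing all the way down even though $g(r)$ itself falls to zero at the centre:

$$\frac{dP}{dr} = -\rho\, g(r), \qquad g(r) = \frac{4\pi G \rho\, r}{3} \;\Rightarrow\; P(r) = \frac{2\pi G\rho^2}{3}\left(R^2 - r^2\right),$$

$$P_{\mathrm{centre}} = \frac{3GM^2}{8\pi R^4} \approx 1.7\times 10^{11}\ \mathrm{Pa}\ \text{for Earth-like values of } M \text{ and } R.$$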
http://physics.stackexchange.com/questions/39447/off-center-vortex?answertab=active
# Off-center vortex
If we have an off-center vortex with strength $n$ in a superfluid contained in a cylindrical container (at a distance $r$ from the center), the angular momentum is determined from the following integral: $$\int r\, dr\, d\theta\, dz\, \rho\, r\, v(r)$$ Now suppose the shortest distance from the vortex to the edge of the container is $r_e$, and we ignore the variation of density $\rho$ in the core. How should the integral be evaluated? Is it correct to just take the position of the vortex as the new center and integrate all the way up to $r_e$, or does the vortex still have influence in the regions outside the circle with radius $r_e$?
-
http://mathoverflow.net/questions/46358/arithmetic-groups-vs-zariski-dense-discrete-subgroups
## arithmetic groups VS. Zariski dense discrete subgroups?
Assume that $G$ is a semi-simple linear algebraic group defined over $\mathbb{Q}$, which is $\mathbb{Q}$-simple, and that $G(\mathbb{R})$ is non-compact, without $\mathbb{R}$-factors of rank 1. Then by Margulis's work, the arithmetic subgroups of $G(\mathbb{R})$ are the same as the discrete lattices in $G(\mathbb{R})$. Here a lattice is a discrete subgroup $\Gamma$ such that the quotient $\Gamma\backslash G(\mathbb{R})$ is of finite volume with respect to the measure deduced from the left Haar measure. In particular, an arithmetic subgroup in $G(\mathbb{R})$ is Zariski dense in $G_\mathbb{R}$.
Conversely, suppose a discrete subgroup $\Gamma$ in $G(\mathbb{R})$ is given such that $\Gamma$ is also dense in $G_\mathbb{R}$ for the Zariski topology; what conditions should one impose to make it arithmetic? Should I assume $\Gamma$ to be finitely generated, or stable under certain actions such as $\mathrm{Aut}(\mathbb{R}/\mathbb{Q})$? I feel that such results are more or less available in the literature, but I'm far from an expert in this field.
Many thanks!
-
The group Aut($\mathbf{R}$) is trivial. – BCnrd Nov 17 2010 at 14:46
Here, by arithmetic subgroup I mean a subgroup of $G(\mathbb{R})$ that is commensurable with some congruence subgroup; the latter is given in terms of the $\mathbb{Q}$-structure for $G$, namely $\Gamma$ is commensurable with some $\Gamma'=K\cap G(\mathbb{Q})$ with $K$ a compact open subgroup of $G(\mathbb{A}_f)$, $\mathbb{A}_f$ being the ring of finite adeles of $\mathbb{Q}$. Since $G(\mathbb{Q})$ is often too large (cf. real approximation), $\Gamma$ is not that large; rather it contains a cofinite subgroup of $G(\mathbb{Z})$, provided the latter is reasonably defined. – genshin Nov 17 2010 at 15:17
1
Suppose $G = {\rm{SO}}(q)$ for non-deg. quad. space $(V,q)$ over $\mathbf{Q}$ that is pos.-def. at the real place, and let $G' = {\rm{SO}}(q')$ be another, where $(V,q)$ and $(V',q')$ are not $\mathbf{Q}$-isomorphic but $\dim V = \dim V'$, so $G'_{\mathbf{R}} \simeq G_{\mathbf{R}}$. Then $G'$ is a different $\mathbf{Q}$-structure on $G_{\mathbf{R}}$, and provides lots of discrete subgroups of $G(\mathbf{R}) = G'(\mathbf{R})$ that aren't arithmetic relative to the $\mathbf{Q}$-structure $G$ yet pass all "algebraic" tests. Margulis' result includes existence of suitable $\mathbf{Q}$-structure. – BCnrd Nov 17 2010 at 15:41
## 2 Answers
Arbitrary Zariski-dense subgroups in a semisimple group can be very small from a real-analytic point of view. It seems that algebra cannot distinguish between "small" and "large" Zariski-dense subgroups, so most criteria to distinguish between the two have a strong non-algebraic flavour. (Of course one can also characterize arithmetic groups algebraically, but this has even less to do with the line of argument you seem to suggest.) From a dynamical point of view, the key difference between lattices and arbitrary Zariski-dense subgroups is that the former act transitively on the product of the Furstenberg boundary of the ambient Lie group with itself ("double ergodicity"). This is a sort of "largeness" property. There are various ways to capture this property, the most systematic way seems to me the concept of a generalized Weyl group due to Bader and Furman.
-
An arithmetic subgroup is a lattice. This is due, in characteristic zero, to Borel and Raghunathan (MR0147566) and, in positive characteristic, to Harder and Behr.
-
The OP is well-aware of this, since it is stated in the first paragraph of the question (in a more precise form). The question being posed (I think) is to give "algebraic" and/or "group-theoretic" conditions (not involving measure) that imply arithmeticity for a discrete subgroup of $G(\mathbf{R})$ that is Zariski-dense in $G_{\mathbf{R}}$. – BCnrd Nov 17 2010 at 15:14
I wonder if an alternative description is possible, so that discrete Zariski dense subgroups $\Gamma$ would be close to being arithmetic. Of course I should not directly require $\Gamma$ to be also a lattice, which would be reduced back to Margulis' theorem. – genshin Nov 17 2010 at 15:20
2
Well, there is also a characterization which is totally algebraic. It is in a paper by Lubotzky and Venkataramana: A group theoretical characterisation of S-arithmetic groups in higher rank semi-simple groups. This does not use even the Zariski density, since you do not assume your group to be linear. – Keivan Karai Nov 17 2010 at 15:33
Plus, I do not think that Zariski dense subgroups are "close" in any sense to being arithmetic. Venkataramana has a very nice paper in which he constructs a variety of Zariski dense subgroups. – Keivan Karai Nov 17 2010 at 15:35
1
Dear Keivan: great, the constructions in that paper of Venkataramana are contained in arithmetic groups relative to a specified $\mathbf{Q}$-structure. So that seems to settle things in the negative (even if it is assumed that $\Gamma$ is commensurable with $\Gamma \cap G(\mathbf{Q})$). – BCnrd Nov 17 2010 at 16:16
show 1 more comment
http://mathoverflow.net/questions/102536/axioms-for-riemann-zeta-function/102596
## Axioms for Riemann $\zeta$ function
Are there any set of axioms that completely characterize the Riemann zeta function? i.e. like Ressayre axioms for the exponential function in an exponential field or functional equations.
-
Do you allow axioms like: "analytic except for a pole at 1" ?? – Gerald Edgar Jul 18 at 13:35
Yes, of course. Analyticity is first-order. – Math-player Jul 18 at 13:44
...I mean after invoking Cauchy-Riemann conditions. – Math-player Jul 19 at 9:55
## 4 Answers
Perhaps you are looking for something like Hamburger's Theorem?
It states, essentially, that the only Dirichlet series with a finite number of singularities satisfying the same functional equation as the zeta-function is the zeta-function. You can find the details in Titchmarsh's book.
Googling I found the following link: http://www.mat.univie.ac.at/~esiprpr/Zetaproc/patterson.pdf
-
I don't know if this is in the spirit you're looking for, but there is the Selberg class -- an attempt at axiomatizing $L$-functions, requiring a Dirichlet series, functional equation of a certain type, analyticity, and an Euler product (typically) -- and it would be possible to impose extra conditions to isolate the zeta function. In particular, all degree 1 elements are known to come from Dirichlet L-functions (this was proved by Kaczorowski and Perelli, and then reproved by Soundararajan). Thus, requiring the degree and the conductor to both be 1 should isolate the zeta function.
-
If it is possible, I would like to remove any conditions related to the infinite sum. Giving an infinite sum completely characterizes the zeta function, but we are unable to express it in first-order terms, I suppose. – Math-player Jul 18 at 13:37
Fair enough. It's also worth noting that Hamburger's theorem, as mentioned by Micah and Stopple, is the better way of stating my answer, since the conditions that the degree and conductor are 1 restrict the functional equation to being exactly the one satisfied by zeta, and is thus subject to Hamburger's result. – rlo Jul 18 at 18:18
Robert: Do any (or all) of the proofs which classify degree one elements in the Selberg class in some way assume the existence of an Euler product? – Micah Milinovich Jul 18 at 23:29
1
As I stated it, yes, that is necessary. However, Kaczorowski and Perelli define what they call the extended Selberg class, where there is no Euler product, and they provide a classification of the elements of degree up to 1. Degree 0 elements are Dirichlet polynomials satisfying a certain symmetry condition, and degree 1 elements are linear combinations of the product of a degree 0 element with a (potentially shifted) Dirichlet L-function. Sound's proof also works for this class, and shows that the Dirichlet coefficients are periodic, so that multiplicativity implies Dirichlet character. – rlo Jul 19 at 2:29
Thanks for the clarification... – Micah Milinovich Jul 19 at 19:20
Hamburger's Theorem (see Titchmarsh 'Theory of the Riemann Zeta Function' $\S$ 2.13) is in some sense an axiomatic characterization of $\zeta(s)$ among all Dirichlet series by its functional equation. It says:
Let $f(s)=\sum_n a_n n^{-s}$ be a Dirichlet series absolutely convergent for $\sigma>1$ such that for some polynomial $P(s)$, $G(s)=P(s)f(s)$ is an integral function of finite order. Suppose $$f(s)\Gamma(s/2)\pi^{-s/2}=g(1-s)\Gamma((1-s)/2)\pi^{-(1-s)/2}$$ where $g(1-s)=\sum_n b_n n^{-(1-s)}$ is absolutely convergent for $\sigma<-\alpha<0$. Then $$f(s)=C\zeta(s)$$ for some constant $C$.
Hamburger's theorem was a motivation for Hecke's study of Dirichlet series with functional equations generally, leading to his work on automorphic forms.
-
1
oops, too late. – Stopple Jul 18 at 15:54
There are in fact at least two axiomatic characterizations of $\zeta(s).$ One of them is given by Hecke and one of them was given by Hamburger.
Hamburger's Theorem states: Suppose $D(s)$ is a Dirichlet series, convergent in some half-plane and whose coefficients have polynomial growth. If 1) There exists a polynomial $P(s)$ so that $P(s)D(s)$ is entire and of finite order.
2) $D(s/2)$ is also a Dirichlet series. That is, the coefficients of $D(s)$ are supported on squares.
3) If $\xi(s)= \pi^{-s}\Gamma(s)D(s)$ then $\xi(1/2-s)=\xi(s).$
Then $D(s)=C\zeta(2s).$
Hecke's version states: Suppose $D(s)$ is a Dirichlet series, convergent in some half-plane and whose coefficients have polynomial growth. If 1) $(s-1/2)D(s)$ is entire and of finite order.
2) The coefficients of $D(s)$ have arbitrary support.
3) If $\xi(s)= \pi^{-s}\Gamma(s)D(s)$ then $\xi(1/2-s)=\xi(s).$
Then $D(s)=C\zeta(2s).$
A little extra information: These theorems cannot be combined. The so-called "Big Mac Theorem" where 1) There exists a polynomial $P(s)$ so that $P(s)D(s)$ is entire and of finite order.
2) The coefficients of $D(s)$ have arbitrary support.
3) If $\xi(s)= \pi^{-s}\Gamma(s)D(s)$ then $\xi(1/2-s)=\xi(s).$
produces infinitely many linearly independent Dirichlet series!
Sources:
http://www.rowan.edu/open/depts/math/HASSEN/Papers/paper1.pdf
-
http://mathhelpforum.com/calculus/56072-differentiation-rules-2-a.html
# Thread:
1. ## differentiation rules #2
I need help on these problems. Thanks in advance.
1. Use differentials to estimate the amount of paint needed to apply a coat of paint .05 cm thick to a hemispherical dome with diameter 50 m.
2. Use a linear approximation or differentials to estimate the given number. L(x) = f(a) + f'(a)(x-a). I have tried to use the formula and plugged in the numbers but I still couldn't figure out the correct answer.
a) e^-.015
b) tan 44 degrees
2. 1. Use differentials to estimate the amount of paint needed to apply a coat of paint .05 cm thick to a hemispherical dome with diameter 50 m.
$dV = 2\pi r^2 \, dr$
$dV = 2\pi (25 \, m)^2 \, (.0005 \, m)$
dV will be in cubic meters
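Plugging the numbers in (my own arithmetic, for reference): $dV = 2\pi(25)^2(0.0005) \approx 1.96$ cubic meters, so roughly two cubic meters of paint.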
2. Use a linear approximation or differentials to estimate the given number. L(x) = f(a) + f'(a)(x-a). I have tried to use the formula and plugged in the numbers but I still couldn't figure out the correct answer.
a) $e^{-.015}$
use the line tangent to $y = e^x$ at $(0,1)$
$m = f'(0) = e^0$
$y - e^0 = e^0(x - 0)$
$y - 1 = x$
$y = x + 1$
$e^{-.015} \approx -.015 + 1 = .985$
b) tan 44 degrees
note that derivatives for trig functions are only valid for angles in radians.
$f(x) = \tan{x}$
$f'(x) = \sec^2{x}$
use line tangent to the point $\left(\frac{\pi}{4},1\right)$
$m = \sec^2\left(\frac{\pi}{4}\right) = 2$
$y - 1 = 2\left(x - \frac{\pi}{4}\right)$
$y = 2\left(x - \frac{\pi}{4}\right) + 1$
$\tan\left(\frac{44\pi}{180}\right) \approx 2\left(-\frac{\pi}{180}\right) + 1 = 1 - \frac{\pi}{90}$
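Not part of the original working, but a quick numerical sanity check of both approximations (a sketch in Python):

```python
import math

# tangent-line estimates from above vs. the exact values
approx_exp = 1 - 0.015               # L(x) = x + 1 evaluated at x = -0.015
exact_exp  = math.exp(-0.015)

approx_tan = 1 - math.pi / 90        # tangent line at pi/4, evaluated at 44 degrees
exact_tan  = math.tan(math.radians(44))

print(f"e^(-0.015): approx {approx_exp:.6f}, exact {exact_exp:.6f}")    # 0.985000 vs 0.985112
print(f"tan(44 deg): approx {approx_tan:.6f}, exact {exact_tan:.6f}")   # 0.965093 vs 0.965689
```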
http://mathoverflow.net/questions/57255/probability-distribution-for-several-variables
## probability distribution for several variables
The Fokker-Planck equation for several variables is :
$\frac{\partial W}{\partial t} = L_{FP}W$
where
$L_{FP} = -\frac{\partial}{\partial x_i}D_i(\{x\})+\frac{\partial^2}{\partial x_i \partial x_j}D_{ij}(\{x\}).$
The summation convention for Latin indices is used here. The drift vector $D_i$ and the diffusion tensor $D_{ij}$ generally depend on the $N$ variables $x_1,\ldots,x_N = \{x\}$. The Fokker-Planck equation is an equation for the distribution function $W(\{x\},t)$.
According to [Risken 1989 ch6], If drift & diffusion coefficients do not depend on time & $D_{ij}$ is positive definite everywhere & if the drift coefficient has no singularities, a stationary solution $W_{st}$
$L_{FP} W_{st} = 0$,
may exist.
If one solves the above equation, a possible stationary solution can be
$W_{st} =\frac{a}{D_{ij}}\exp\left(\int^{x_j}_0 \frac{D_i}{D_{ij}}\, dt_j\right)$
where $a$ is a normalization constant. Now I want to expand this probability distribution for $i=1,2$. If I use the Einstein summation convention, it becomes
$W_{st} =\frac{a}{D_{11}}\exp\left(\int^{x_1}_0 \frac{D_1}{D_{11}} dt_1\right)+\frac{a}{D_{12}}\exp\left(\int^{x_2}_0 \frac{D_1}{D_{12}} dt_2\right)+\frac{a}{D_{21}}\exp\left(\int^{x_1}_0 \frac{D_2}{D_{21}} dt_1\right)+\frac{a}{D_{22}}\exp\left(\int^{x_2}_0 \frac{D_2}{D_{22}} dt_2\right)$.
It seems very strange to me. Is it really a correct probability distribution, or have I made a mistake somewhere? And if it is correct, how can I normalize it? Can anyone help?
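For intuition, here is a minimal numerical sketch of the standard one-variable case ($N = 1$) — not the two-variable expansion above — assuming an Ornstein–Uhlenbeck drift $D_1(x) = -x$ and constant diffusion $D_{11} = 1$, for which the stationary solution $W_{st}(x) = (a/D_{11})\exp\left(\int_0^x D_1/D_{11}\, dx'\right)$ should reduce to a Gaussian:

```python
import numpy as np

# One-variable analogue (N = 1), for intuition only.
# Assumed coefficients: drift D_1(x) = -x, diffusion D_11(x) = 1.
x = np.linspace(-5.0, 5.0, 2001)
D1 = -x
D11 = np.ones_like(x)

# cumulative trapezoid rule for the integral, measured from x = 0 (x[1000] = 0)
f = D1 / D11
cum = np.concatenate(([0.0], np.cumsum(0.5 * (f[1:] + f[:-1]) * np.diff(x))))
cum -= cum[1000]

W = np.exp(cum) / D11
W /= np.sum(0.5 * (W[1:] + W[:-1]) * np.diff(x))    # normalise to total mass 1

gaussian = np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)
print(np.max(np.abs(W - gaussian)))   # small: agrees with the exact N(0,1) density
```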
-
Is this the same question as the one you asked on math.stackexchange.com? – Mariano Suárez-Alvarez Mar 3 2011 at 15:28
@Mariano: yes. I repeat it here with complete details to see if expert mathematicians can help me. – SMH Mar 3 2011 at 15:32
1
@SMH: a five-hour wait is not anywhere near long enough. For what it's worth, quite a few professional mathematicians monitor math.stackexchange, but they tend to live all around the world. At least wait a day before you give up on math.SE. Also, there's some rather fundamental misunderstanding going on here with the second-to-last equation written above. You should take a long hard look at Fabian's comments and answer to your question there: math.stackexchange.com/questions/24793/… – Willie Wong Mar 3 2011 at 16:21
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 11, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9084141850471497, "perplexity_flag": "middle"}
|
http://nrich.maths.org/1853/note
|
### Matching Fractions Decimals Percentages
An activity based on the game 'Pelmanism'. Set your own level of challenge and beat your own previous best score.
### Weekly Problem 49 - 2007
What is the mean of 1.2 recurring and 2.1 recurring?
# Repetitiously
### Why do this problem?
This problem introduces the idea that a decimal can 'go on forever'. This idea is subtle and interesting and confusing.
### Possible approach
"I'm thinking of two numbers, I add $6$ to the smaller one to get the larger one, OR I could multiply the smaller one by $4$ to get to the larger one. What are my numbers?"
Continue with more examples to wake everyone up and establish that a difference and a ratio define two unknowns (but not necessarily in those words).
Include examples that lead to simple fractions, e.g. $d=5$ $r=3$, and insist on fraction, not decimal answers.
Put $0.222222\ldots$ and $2.22222 \ldots$ on the board, and ask how to get from the smaller to the larger in two ways. Work through the processes for finding these "unknowns", alert to all opportunities for students to talk about the meaning of these recurring decimals.
Ask students to choose and work on their own pairs of related recurring decimals, from those in the problem, or later, to make up their own. It's easy for students to verify their final fraction on a calculator.
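If a computer is available instead of a calculator, the same check can be scripted; here is a minimal sketch in Python using the standard fractions module (the particular decimals $0.222\ldots$ and $2.222\ldots$ from the problem are assumed):

```python
from fractions import Fraction

# Candidate fraction for 0.2222...: from 10x - x = 2 we get x = 2/9.
small = Fraction(2, 9)
print(float(small))        # 0.2222222222222222 -- matches the calculator display

# The pair in the problem: the larger number is the smaller plus 2 ...
large = small + 2          # 20/9 = 2.2222...
# ... and it is also exactly 10 times the smaller one.
print(large / small)       # 10
```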
### Key questions
• What does $0.22222222 \ldots$ actually mean? How many decimal places are there?
• Would multiplying by $10$ give a decimal that ended? Why not?
• Can I do $2.2222222 \ldots$ divided by $0.222222222 \ldots$ on a calculator?
### Possible extension
• What is the fraction equivalent of $0.999999 \ldots$? $0.4999999 \ldots$?
• How would you find the fraction for $0.225225225 \ldots$ or $0.222522252225 \ldots$?
• Do you think that all recurring decimals will correspond to a fraction?
• Can you work out which fractions will correspond to a recurring decimal and which fractions will not?
### Possible support
Experimentation with a calculator for small numbers can help students to get into the problem.
Students could be asked to catalogue decimal equivalents of many common fractions, classifying the decimals as terminating, recurring and "no obvious repeats". This data set can be used to check work later, or to suggest recurring decimals to convert back. Encourage students to classify and describe families of decimals with clear recurring patterns (e.g. ninths and elevenths).
Encourage students to spot patterns and then to make a conjecture about the result when dividing two recurring fractions.
$\frac{2.2}{0.22}, \frac{2.22}{0.222}, \frac{2.222}{0.2222}, \frac{2.2222}{0.22222},$ etc.
Can they extend this to the second part of the question?
The NRICH Project aims to enrich the mathematical experiences of all learners. To support this aim, members of the NRICH team work in a wide range of capacities, including providing professional development for teachers wishing to embed rich mathematical tasks into everyday classroom practice. More information on many of our other activities can be found here.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 15, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9263160228729248, "perplexity_flag": "middle"}
|
http://math.stackexchange.com/questions/24344/help-understanding-tensoring-of-exact-sequences?answertab=votes
|
# Help understanding tensoring of exact sequences
I'm reading the proof of the following fact:
Let A be a ring, a an ideal of A, M an A-module. Then $(A/a) \otimes_{A} M$ is isomorphic to M/aM.
So the solution is to tensor the exact sequence $0 \rightarrow a \rightarrow A \rightarrow A/a \rightarrow 0$ with M.
This induces an exact sequence:
$a \otimes_{A} M \rightarrow A \otimes_{A} M \rightarrow (A/a) \otimes_{A} M \rightarrow 0$
One question:
Why is $a \otimes_{A} M \cong aM$? Why is this "clear and obvious", i.e. how do we see it quickly?
I just started reading about the tensor product of modules and have trouble seeing why such isomorphisms are natural. I'm sure there's a "quick" way to see them, so why is this clear? Is there any other way to think about the tensor product besides being the quotient of a free module by a set of generators?
Thanks
-
Your induced sequence is not exact on the left. – curious Mar 1 '11 at 2:59
So I am not sure you want $a\otimes M \cong aM$ – curious Mar 1 '11 at 3:13
2
Rather, the image of the map from $a\otimes M$ to $M$ is $aM$. – curious Mar 1 '11 at 3:17
2
Just to reiteratre the comment of @curious: curious is correct. The sequence you get after tensoring up is typically not exact on the left, and it is not true in general that $a \otimes M \cong a M.$ What is true, as Soarer explains in an answer below, is that $a\otimes M$ maps onto $a M$ under the natural map. (Think about the case $A =\mathbb Z$, $a = 2 \mathbb Z$, and $M = \mathbb Z/2 \mathbb Z$ to see how exactness on the left can fail, and (hence) why you don't always get an isomorphism.) – Matt E Mar 1 '11 at 3:47
BTW, as to your last question: yes, the other way to think about the tensor product is via its defining universal mapping property. This may seem very pie-in-the-sky at first, but with practice it gives you a good handle on things. In particular, it forces you to think about the functorial properties of your homomorphisms, which is a healthy practice. – Pete L. Clark Mar 1 '11 at 4:04
## 4 Answers
Since $a \otimes_{A} M \rightarrow A \otimes_{A} M \rightarrow (A/a) \otimes_{A} M \rightarrow 0$ is exact,
the second function in the sequence $g:A \otimes_{A} M \rightarrow (A/a) \otimes_{A} M$ is surjective. Moreover, the image of $f:a \otimes_{A} M \rightarrow A \otimes_{A} M$, that is $f(a \otimes_{A} M)$, is the kernel of $g$.
By the first theorem of isomorphism for modules,
$(A \otimes_{A} M)/f(a \otimes_{A} M) \cong (A/a) \otimes_{A} M$ (1)
Now, if $h: S \rightarrow T$ is an isomorphism of modules, and $S'$ is a submodule of $S$, we have $S/S'\cong T/h(S')$ (2)
which can be proved by applying again the theorem of isomorphism to the projection function $S\rightarrow T/h(S')$.
Since $h:A \otimes_{A} M\rightarrow M$ , defined by $h(a\otimes_{A} m)=am$ is an isomorphism and $h(f(a \otimes_{A} M))=aM$, it follows by (2) that
$(A \otimes_{A} M)/f(a \otimes_{A} M) \cong M/aM$ (3)
Finally, (1) and (3) prove your requirement.
Another proof can be given using the universal property of the tensor product mentioned above.
-
1
$a \otimes_A M \to A \otimes_A M$ may NOT be injective. – Soarer Mar 1 '11 at 5:38
Thanks. I modified it. – Theta30 Mar 1 '11 at 6:27
thanks! – user6495 Mar 3 '11 at 19:18
The isomorphism $A \otimes_A M \to M$ is by sending $a \otimes m \to am$. Under this isomorphism, the image of $\mathfrak{a} \otimes M$ would be $\mathfrak{a}M$.
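A minimal worked instance of the failure mode mentioned in the comments (taking $A = \mathbb{Z}$, $\mathfrak{a} = 2\mathbb{Z}$, $M = \mathbb{Z}/2\mathbb{Z}$ as an assumed example): since $2\mathbb{Z} \cong \mathbb{Z}$ as a $\mathbb{Z}$-module, $\mathfrak{a} \otimes_A M \cong \mathbb{Z} \otimes_{\mathbb{Z}} \mathbb{Z}/2\mathbb{Z} \cong \mathbb{Z}/2\mathbb{Z} \neq 0$, yet its image in $M$ is $\mathfrak{a}M = 2 \cdot (\mathbb{Z}/2\mathbb{Z}) = 0$. So the map $\mathfrak{a} \otimes_A M \to M$ kills the nonzero element $2 \otimes 1$; it is not injective, and $\mathfrak{a} \otimes_A M \not\cong \mathfrak{a}M$ — only its image equals $\mathfrak{a}M$.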
-
That the natural map $a \otimes_A M \rightarrow aM$ be an isomorphism is very far from being "clear and obvious". As others have noted, it is not generally true. But even more, it may be viewed as the entire obstruction to the functor "tensoring with $M$" being exact. In other words, if for all ideals $a$ of $A$ (in fact it suffices even to restrict to finitely generated ideals) the map $a \otimes_A M \rightarrow aM$ is an isomorphism, then the module $M$ is flat and tensoring any injection with $M$ gives an injection. See for instance $\S 3.11$ of my commutative algebra notes, where I state and prove this (well-known) result, which I call the Tensorial Criterion For Flatness.
-
Thanks, I still don't see why the conclusion of the problem follows, namely why $M/aM \cong (A/a) \otimes_{A} M$? Sorry, I'm new to this stuff. The solution is based on the hint in Atiyah's book. – user6495 Mar 1 '11 at 4:35
As mentioned by curious, the induced sequence is in general not exact on the left, though it is exact at the other two places. But as both curious and Soarer point out, the image of the map on the left is $aM$ under the isomorphism $A\otimes_A M \cong M$, and then exactness at the other two places gives you the isomorphism that you want.
is there any other way to think about the tensor product besides being the quotient of a free module by a set of generators?
Yes! In fact, there is a much more useful, if more abstract, way to think of tensor products. Let $A$ be a commutative ring, and let $M,N,Z$ be $A$-modules. (Analogous statements hold over noncommutative rings, but the formulation is not quite as clean.) An $A$-bilinear map $b:M\times N\to Z$ is a map of sets such that for each $m\in M$ and each $n\in N$, the maps $b(m,-):N\to Z$ and $b(-,n):M\to Z$ are $A$-linear.
Bilinear maps appear all over the place; you can think of them as "generalized multiplications" -- for any $A$-algebra $R$, its multiplication $R\times R\to R$ is $A$-bilinear. Note that a bilinear map $M\times N\to Z$ is not $A$-linear with respect to the usual $A$-module structure on $M\times N$. That's too bad, because it's always nice to have things be linear. To "correct" this, the tensor product $M\otimes_A N$ is exactly the $A$-module such that bilinear maps $M\times N\to Z$ are the same thing as linear maps $M\otimes_A N\to Z$.
More precisely, $M\otimes_A N$ is characterized by the following universal property: if $Q$ is an $A$-module and $b:M\times N\to Q$ is an $A$-bilinear map, then there is a unique $A$-linear map $\tilde b:M\otimes_A N\to Q$ such that $\tilde b(m\otimes n) = b(m,n)$ for all $(m,n)\in M\times N$.
So, for example, if $R$ is an $A$-algebra, then the multiplication of $R$ is an $A$-linear map $R\otimes_A R\to R$. Indeed, this lets you phrase the axioms of an associative algebra completely in terms of linear maps, which is a useful thing to do if, for example, you want to talk about "algebra objects" in categories other than the category of $A$-modules.
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 82, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9399868845939636, "perplexity_flag": "head"}
|
http://physics.stackexchange.com/questions/tagged/kinematics?page=4&sort=newest&pagesize=15
|
Tagged Questions
The description of the movement of bodies by their position, velocity, acceleration (and possibly higher time derivatives, such as, jerk) without concern for the underlying dynamics/forces/causes.
3answers
80 views
Estimate quarter mile time
I need to estimate a drag race quarter mile time given the car's weight, bhp and preferably the drive (FWD, RWD, 4WD). I know $v(t) = ds/dt$ and $a(t) = dv/dt = d^2s/dt^2$, but how can I get the ...
1answer
107 views
Do observers at different speeds perceive other speeds differently?
I was told that a plane takes less time to travel from China to the US than in the other direction, due to the rotation of the Earth. I suspect this is incorrect, however. From this scenario I ...
2answers
278 views
Time and distance where velocity is a function of time
I'm creating a video game set in space in which spaceships travelling between places accelerate until they hit a maximum velocity. They then travel at that velocity until a time when they need to ...
0answers
129 views
Meeting Of 2 People On a circular track [closed]
Two persons are running on a circular track either in the same direction or in the opposite direction, indefinitely. The speed of both of them is given to you. Speed will be positive in clockwise ...
2answers
377 views
Does an airplane's speed include the speed of the Earth?
After take off, does airplane's speed include Earth's movement/speed? Do airplanes turn with Earth movement/rotation?
3answers
362 views
Finding instantaneous speed in MPH given acceleration, RPM and gear ratio
I am still trying to figure out how to (semi) accurately model instantaneous speed after having found acceleration. I have found that at higher RPMs, the resultant acceleration will be lower. I was ...
2answers
836 views
Finding acceleration for a car after finding torque
I am trying to calculate the acceleration of a vehicle after finding the torque $\tau$. Assuming my Horse Power is 130 and my RPM is 1000, I calculated torque as: \tau = \frac{130 \times ...
1answer
114 views
Why does motion perpendicular to the direction of someone's approach not affect the distance between them?
I was reading about the the four turtles/bugs math puzzle Four bugs are at the four corners of a square of side length D. They start walking at constant speed in an anticlockwise direction at all ...
1answer
252 views
Relationship between acceleration, velocity and position
I'm learning some applications for equation of motion. But I'm failing to relate velocity, acceleration and position. If $v=\frac{dr}{dt}$ and $a=\frac{dv}{dt}$, why $a$ is $\frac{d^2r}{dt^2}$ ...
1answer
348 views
How do I determine a component of relative velocity?
At a particular instant, a stationary observer on the ground sees a package falling with speed v1 at an angle to the vertical. To a pilot flying horizontally at constant speed relative to the ...
2answers
338 views
Finding Distance an Object Travels Up an Incline After Launch
I've been doing a review for an introductory physics course final. I have a question on one problem though. Here is the problem: A mass (M=2kg) is placed in front of a spring with k=900N/m, ...
2answers
316 views
Kinematics - concept question
A child tosses a ball directly upward. Its total time in the air is T. Its maximum height is H. What is its height after it has been in the air a time T/4? Neglect air resistance. Ok so I know that ...
2answers
156 views
Should the cold drink come out of a free falling bottle with an open lid?
If I hold an open bottle and perform free fall, say sky diving, along with it from, say 10000ft, while I'm holding the bottle vertically upwards should the cold drink come out of the bottle? Kindly ...
1answer
11k views
Solving for initial velocity required to launch a projectile to a given destination at a different height
I need to calculate the initial velocity required to launch a projectile at a given angle from point A to point B. The only force acting on the projectile after launch will be gravity – zero air ...
3answers
2k views
Difference b/w Kinetics & Kinematics w/concrete example
(I know whether I understand this or not doesn't matter much to my work & study but am just curious.) I still can't differentiate in my head kinetics and kinematics (similar thread is found but ...
5answers
1k views
Why does work equal force times distance?
My 'government-issued' book literally says: Energy is the capacity to do work and work is the product of net force and the 1-dimensional distance it made a body travel while constantly affecting ...
2answers
556 views
What is displacement? Position relative to a reference point or change of position
What is the "official" or most useful definition of displacement in the context of kinematics? There are two common ones: Displacement is the length and direction of a line from a fixed reference ...
1answer
134 views
Looking for a way to simplify a physics formula [closed]
I have the following physics formula: $$d = \frac{1}{2} at^2$$ where d is equal to half (at) squared where: d is distance a is acceleration t is time I need to simplify this to get the ...
2answers
302 views
2D - Kinematics - Linkage System using Vector Algebra
I have this question that I don't know how to solve correctly: My question is, how do I find $V_B$? I will find the angular velocities myself, but I want to know the method to get $V_B$. I know ...
1answer
1k views
How do kinetic energy and linear momentum relate?
It took me quite a long time to click my gears in place and even then I'm not sure it's completely correct. The problem is that I need to understand these concepts (physics concepts; not just these ...
1answer
140 views
A practical deceleration question
My friend is a U.S. Army paratrooper. Today, through an unfortunate series of events, he was jerked out of a C-17 traveling at 160 knots by his reserve parachute. First-hand accounts describe it as he ...
1answer
77 views
A Satellite's Perspective
If a planet is spinning east to west and there is a satellite spinning from west to east... Can the satellite travel at a speed sufficient to make the planet appear, from the vantage point of the ...
1answer
1k views
Question on Projectile Motion equation [closed]
A golf ball is shot into the air from the ground. If the initial horizontal velocity is 20m/s and the initial vertical velocity is 30m/s, what is the horizontal distance the ball will travel ...
3answers
250 views
Differential squared vs. differential of squared
Why it is said that $$\frac{dx^2}{dt^2}=\upsilon^2$$ I can only understand the following one: $$\left (\frac{dx}{dt} \right)^2=\upsilon^2$$ Edit: Excerpt from Landau's Mechanics: Execrpt from ...
1answer
690 views
Trajectory of projectile thrown downhill
I'm teaching myself mechanics, and set out to solve a problem determining the optimum angle to throw a projectile when standing on a hill, for maximum range. My answer seems almost plausible, except ...
2answers
189 views
Find vector position in time give this graph?
Graph is position x position. There are 3 points, $A$, $B$ and $C$. $A(0,2)$ $B(4,2)$ $C(6,0)$ Particle travels from $A$ to $B$ and from $B$ to $C$ at constant $v = 2 ~m/s$. Find vector ...
2answers
123 views
Trains travelling at different speeds towards a station C
Here's the question: Consider two stations A and B located 100 kilometer apart. There is a station C, located between A and B. Now trains from station A and B start moving towards station C at ...
3answers
1k views
Maximum range of a projectile launched from elevation “dumbed down”
I am trying to conceptually understand why the angle which produces the greatest range for a projectile launched with an elevation is not 45 degrees. I have exhausted all other options, and I hope that ...
2answers
195 views
What is the displacement of an accelerated and relativistic object?
Displacement in an accelerated classical object is: $$s=ut+\frac {at^2}{2}$$ What is the displacement of an accelerated relativistic object? In Newtonian mechanics there are two types of ...
1answer
1k views
Simple Projectile Motion Question
A volcano erupts 50m below the sea level. A rock leaves the crater at 20 m/s at an angle 30 deg with the vertical line. The rock has a mass of 15kg. IGNORE WATER RESISTANCE. It gets out of the water, ...
1answer
627 views
What's wrong with this equation for harmonic oscillation?
The question: A particle moving along the x axis in simple harmonic motion starts from its equilibrium position, the origin, at t = 0 and moves to the right. The amplitude of its motion is ...
2answers
256 views
Does acceleration at an angle to velocity change the direction?
During an interval of time, a tennis ball is moved so that the angle between the velocity and the acceleration of the ball is kept at a constant 120º. Which statement is true about the tennis ball ...
1answer
648 views
How do you calculate instantaneous velocity in projectile motion?
An object is thrown horizontally with a velocity of 30 m/s from the top of a tower. It undergoes a constant downward acceleration of 10 m/s2. The magnitude of its instantaneous velocity after 4.0 ...
2answers
210 views
Kinematics Problem
The question asks me to find the angular velocity. Now I do not want you to solve my homework, I want explanation please. It states that the acceleration of point P is \$\vec{a}= -3.02 \vec{i} ...
1answer
578 views
How do I know an initial speed of a thrown object using the max height [closed]
The simulation being referred to is in box2d An object is thrown to the max height of $h$ with gravity of $g$, what is it initial speed? I tried the following: $v = v_0 - g t$ $0 = v_0 - g t$ \$t ...
2answers
299 views
Can these figures demonstrating the safety of “Archery Tag” arrows be correct?
There is a new sport called "Archery Tag" that involves shooting opponents with foam-tipped arrows fired out of a real bow. The official Archery Tag web site presents data that claims to show the ...
3answers
174 views
If my acceleration is -1 ($a=-1\:\rm{m/s^2}$) and I'm standing in the infinite ($x_0=\infty \:\rm m$), could I reach the point $x=0\:\rm m$?
I'm standing in the infinite where $x_0=\infty \:\rm m$. If I have a negative acceleration, could I reach the point $x=0\:\rm m$? Would it be possible to calculate how long would take to reach the ...
1answer
221 views
Calculating the launch angle of a horizontal launch (mechanics) [closed]
I need some help with the following question: A smooth spherical object (the first object) is projected horizontally from a point of vertical height H = 26.38 metres above horizontal ground with a ...
2answers
457 views
What will be the relative speed of the fly? [duplicate]
It has happened many times and I have ignored it every time. Yesterday it happened again. I was travelling in a train and saw a fly (insect) flying near my seat. The train was running at a speed of ...
3answers
361 views
A freefalling body problem, only partial distance and time known
Well, I've been trying to figure out a problem which I imposed on myself, so no literal values included. Unfortunately, my brain is not cooperating. The problem states: What is the height from ...
1answer
606 views
How to find acceleration given position and velocity?
Sorry for this very simple question but I am still very new to the laws of motion. I am dealing with 2-dimensional vectors in my programming environment and I'm following these slides to learn about ...
1answer
324 views
Question of force of friction on incline plane [closed]
An object of mass $4\text{ kg}$ starts at rest from the top of a rough inclined plane of height $10\text{ m}$. If the speed of the object at the bottom of the inclined plane is $10\text{ m/s}$, and ...
1answer
102 views
Simplifying some math for an ant-on-rubber-band problem
OK, I've been doing this problem for fun (it's a great problem, BTW!): http://www.physics.harvard.edu/academics/undergrad/probweek/prob76.pdf Here is the solution: ...
2answers
368 views
Sign of Velocity for a Falling Object
I'm working on a homework problem in Mathematica. We have to graph the height and the velocity of a function given an initial height and initial velocity. However, when I do the graph for the velocity ...
2answers
766 views
Why do far away objects appear to move slowly in comparison to nearby objects?
When we are sitting in a moving train, nearby stationary objects appear to go backwards... in terms of physics we can use the formula velocity of object with respect ...
1answer
892 views
How to find speed from Newton's law?
I have a question about Newton's law. The question says block A (mass 2.25 kg) rests on a tabletop. It is connected by a horizontal cord passing over a light, frictionless pulley to hanging block ...
1answer
369 views
The acceleration of a particle moving only on a horizontal plane is given by a= 3ti +4tj, [closed]
The acceleration of a particle moving only on a horizontal plane is given by a= 3ti +4tj, where a is in meters per second-squared and t is in seconds. At t = 0s, the position vector r= (20.0 m)i + ...
1answer
135 views
What is the origin of the naming convention for position functions?
In physics, position as a function of time is generally called d(t) or s(t). Using "d" is pretty intuitive, however I haven't ...
2answers
487 views
Calculating Impact Velocity Given Displacement and Acceleration
Assume a car has hit a wall in a right angled collision and the front bumper has been displaced 9 cm. The resulting impact is 25g. Also, it is evident by skid marks that the car braked for 5m with ...
2answers
74 views
Acceleration: Value Disparity?
If we consider a ball moving at an acceleration of $5ms^{-2}$, over a time of 4 seconds, the distance covered by the ball in the first second is $5m$. In the 2nd second it will be $5 + 5 = 10m$. In the ...
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 36, "mathjax_display_tex": 4, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9332777857780457, "perplexity_flag": "middle"}
|
http://mathhelpforum.com/differential-geometry/160014-proof-openness-subset-function-subset.html
|
# Thread:
1. ## Proof on Openness of a Subset and a Function of This Subset
Dear all,
I am having trouble with understanding the intuition behind the following theorem and certain elements of the proof to this theorem.
Therefore, could someone kindly answer my questions on the proof to this theorem on continuity, all written below?
Thank you very much!
scherz0
---
2. Originally Posted by scherz0
Dear all,
I am having trouble with understanding the intuition behind the following theorem and certain elements of the proof to this theorem.
Therefore, could someone kindly answer my questions on the proof to this theorem on continuity, all written below?
Thank you very much!
scherz0
---
1) This follows from the very definition of S...
2) If $f(x)\in U$ , then by definition, $x\in S$ ...
3) This is the very definition of continuity in the general context of topology: a function is continuous iff
the inverse image of an open (closed) set is open (closed)
Tonio
3. Originally Posted by tonio
1) This follows from the very definition of S...
2) If $f(x)\in U$ , then by definition, $x\in S$ ...
3) This is the very definition of continuity in the general context of topology: a function is continuous iff
the inverse image of an open (closed) set is open (closed)
Tonio
Thank you for your response, Tonio.
May I ask kindly if you could elaborate on 2)?
I understand from the definition of S that $x\in S \Rightarrow f(x)\in U$.
But how does the converse hold true? It is written above that $f(x)\in U \Rightarrow x\in S$, yet $f(x)$ isn't defined in terms of anything else in the theorem or proof.
4. Originally Posted by scherz0
Thank you for your response, Tonio.
May I ask kindly if you could elaborate on 2)?
I understand from the definition of S that $x\in S \Rightarrow f(x)\in U$.
But how does the converse hold true? It is written above that $f(x)\in U \Rightarrow x\in S$, yet $f(x)$ isn't defined in terms of anything else in the theorem or proof.
As it's defined: $S:=\{x\in\mathbb{R}^n\,;\,f(x)\in U\}$ , so $f(x)\in U$ for some $x\in\mathbb{R}^n$ means that $x\in S$ ...
There is no direct or converse here: $f(x)\in U\Longleftrightarrow x\in S$ , by definition of $S$
Tonio
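For a concrete instance — assuming the (unquoted) theorem being discussed is the standard statement that $S:=\{x\in\mathbb{R}^n\,;\,f(x)\in U\}$ is open whenever $f$ is continuous and $U$ is open — take $f:\mathbb{R}^2\to\mathbb{R}$, $f(x)=\|x\|^2$ and $U=(-\infty,1)$. Then $S$ is the open unit disc: given $x\in S$, continuity of $f$ at $x$ gives a $\delta>0$ with $f(B(x,\delta))\subseteq U$, i.e. $B(x,\delta)\subseteq S$, which is exactly the openness of $S$.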
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 16, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.94939124584198, "perplexity_flag": "head"}
|
http://mathhelpforum.com/calculus/202792-how-prove-derivative-x-n-nx-n-1-a.html
|
# Thread:
1. ## How to prove that the derivative of x^n=nx^(n-1)
How do I simplify the limit as h approaches 0 of ((x+h)^n-x^n)/h to prove that the derivative of x^n=nx^(n-1)?
Thanks!
2. ## Re: How to prove that the derivative of x^n=nx^(n-1)
Originally Posted by citcat
How do I simplify the limit as h approaches 0 of ((x+h)^n-x^n)/h to prove that the derivative of x^n=nx^(n-1)?
${\left( {x + h} \right)^n} - {x^n} = \sum\limits_{k = 1}^n {\binom{n}{k}{x^{n - k}h^k}}$
$\frac{{\sum\limits_{k = 1}^n {\binom{n}{k}{x^{n - k}}{h^k}} }}{h} = \sum\limits_{k = 1}^n {\binom{n}{k}{x^{n - k}}{h^{k - 1}}}$
3. ## Re: How to prove that the derivative of x^n=nx^(n-1)
Another way. Let y = x + h; then y - x = h. We have $y^n - x^n=(y-x)(y^{n-1}+y^{n-2}x+\dots+yx^{n-2}+x^{n-1})$. Now $y^{n-1}+\dots+x^{n-1}\to nx^{n-1}$ as $y\to x$.
4. ## Re: How to prove that the derivative of x^n=nx^(n-1)
an "informal proof"
$[(x+h)^n - x^n]/h =$
$(x^n + nx^{n-1}h + (\text{other stuff with } h^2 \text{ terms}) - x^n)/h =$
$(nx^{n-1}h)/h + h^2(\ldots\text{stuff}\ldots)/h =$
$nx^{n-1} + h(\ldots\text{stuff}\ldots)$.
no matter what the "...stuff..." is, it is clearly not infinite at any particular x, so we have:
$\lim_{h \to 0} \frac{(x+h)^n - x^n}{h} = nx^{n-1} + (0)(...\text{limit of stuff}...) = nx^{n-1}$.
Plato's post makes this argument rigorous, but this gives you "the general idea".
or, you can use induction:
base case: n = 1
then $\lim_{h \to 0} \frac{(x+h) - x}{h} = \lim_{h \to 0} \frac{h}{h} = \lim_{h \to 0} 1 = 1 = x^0$.
assume that for n = k-1:
$\lim_{h \to 0} \frac{(x+h)^{k-1} - x^{k-1}}{h} = (k-1)x^{k-2}$.
then:
$\lim_{h \to 0} \frac{(x+h)^k - x^k}{h} = \lim_{h \to 0} \frac{(x+h)(x+h)^{k-1} - (x+h)x^{k-1} + (x+h)x^{k-1} - x(x^{k-1})}{h}$
$= \lim_{h \to 0}\left[ (x+h)\left(\frac{(x+h)^{k-1} - x^{k-1}}{h}\right)\right] + \lim_{h \to 0}\left[x^{k-1}\left(\frac{(x+h) - x}{h}\right)\right]$
$= \left(\lim_{h \to 0} (x+h) \right) \left(\lim_{h \to 0} \frac{(x+h)^{k-1} - x^{k-1}}{h}\right) + \left(\lim_{h \to 0} x^{k-1} \right) \left( \lim_{h \to 0} \frac{(x + h) - x}{h} \right)$
$= \left(\lim_{h \to 0} (x+h) \right)((k-1)x^{k-2}) + \left(\lim_{h \to 0} x^{k-1} \right)(1)$, by our induction hypothesis, and the base case,
$= (x)((k-1)x^{k-2}) + x^{k-1} = (k-1)x^{k-1} + x^{k-1} = (k-1+1)x^{k-1} = kx^{k-1}$, and we are done.
5. ## Re: How to prove that the derivative of x^n=nx^(n-1)
Originally Posted by emakarov
Another way. Let y = x + h; then y - x = h. We have $y^n - x^n=(y-x)(y^{n-1}+y^{n-2}x+\dots+yx^{n-2}+x^{n-1})$. Now $y^{n-1}+\dots+x^{n-1}\to nx^{n-1}$ as $y\to x$.
Is the above proof valid if, for example, $n = \sqrt{2}$?
6. ## Re: How to prove that the derivative of x^n=nx^(n-1)
No, it isn't.
Another proof, for n an integer, would be inductive: if $n = 0$, then $x^0 = 1$ is a constant, so its derivative is $0 = 0x^{0-1}$. Assume that, for some k, the derivative of $x^k$ is $kx^{k-1}$. Then we can differentiate $x^{k+1} = x(x^k)$ by the product rule: the derivative is $(x)'(x^k) + (x)(x^k)' = 1(x^k) + x(kx^{k-1}) = x^k + kx^k = (k+1)x^k$.
For k not an integer, we can use "logarithmic differentiation". If $y = x^k$, then $\ln(y) = \ln(x^k) = k\ln(x)$. Now, we need the fact that the derivative of $\ln(x)$ is $1/x$ (which is why we typically start with integer powers of x). The derivative of the left side is $\frac{1}{y}\frac{dy}{dx}$ and the derivative of the right side is $\frac{k}{x}$. Since $y = x^k$, we have $\frac{1}{x^k}\frac{dy}{dx} = \frac{k}{x}$. From that, we have $\frac{dy}{dx} = \frac{k}{x}x^k = kx^{k-1}$.
7. ## Re: How to prove that the derivative of x^n=nx^(n-1)
Originally Posted by citcat
How do I simplify the limit as h approaches 0 of ((x+h)^n-x^n)/h to prove that the derivative of x^n=nx^(n-1)?
Thanks!
You can use the Binomial Expansion to prove the result for positive integer values of n. Then you can use the chain rule to extend this result to other values of n.
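If you want to see the limit numerically before (or after) proving it, here is a quick sketch in Python (the particular values of x and n are arbitrary choices):

```python
# The difference quotient ((x+h)^n - x^n)/h should approach n*x^(n-1) as h -> 0.
x, n = 1.7, 5
exact = n * x**(n - 1)                  # 5 * 1.7^4 = 41.7605
for h in (1e-1, 1e-3, 1e-5):
    dq = ((x + h)**n - x**n) / h
    print(h, dq, dq - exact)            # the gap shrinks roughly in proportion to h
```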
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 30, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8877978324890137, "perplexity_flag": "head"}
|