http://gauravtiwari.org/tag/ordered-pair/
# MY DIGITAL NOTEBOOK
A Personal Blog On Mathematical Sciences and Technology
# Tag Archives: Ordered pair
## A Trip to Mathematics, Part III: Relations and Functions
Read these statements carefully:
‘Michelle is the wife of Barack Obama.’
‘John is the brother of Nick.’
‘Robert is the father of Mary.’
‘Ram is older than Laxman.’
‘Mac is the product of Apple Inc.’
After reading these statements, you will realize that the first noun of each sentence is somehow related to the other. We say that each noun stands in a RELATIONSHIP to the other: Michelle is related to Barack Obama, as his wife. John is related to Nick, as his brother. Robert is related to Mary, as her father. Ram is related to Laxman in terms of age (seniority). Mac is related to Apple Inc. as a product. Such relations are also used in mathematics, with slight variations: letters or numbers are used in place of the nouns, and mathematical relations are used between them. Some good examples of relations are:
is less than
is greater than
is equal to
is an element of
belongs to
divides
etc. etc.
Some examples of regular mathematical statements which we encounter daily are:
4<6 : 4 is less than 6.
5=5 : 5 is equal to 5.
3|6 : 3 divides 6.
For general use, we can represent such a statement as:
”some x is related to y”
Here the phrase ‘is related to’ is nothing but a particular mathematical relation. For mathematical convenience, we write ”x is related to y” as $x \rho y$. Here x and y are two objects taken in a certain order, and they can also be written as the ordered pair (x,y).
$(x,y) \in \rho$ and $x \rho y$ mean the same thing and will be treated as the same in further reading. If $\rho$ represents the relation of motherhood, then $\mathrm {(Jane, \ John)} \in \rho$ means that Jane is the mother of John.
All the relations we discussed above were between two objects (x,y), and thus they are called binary relations: $(x,y) \in \rho \Rightarrow \rho$ is a binary relation between x and y. Similarly, $(x,y,z) \in \rho \Rightarrow \rho$ is a ternary (3-ary) relation on the ordered triple (x,y,z). In general, $(x_1, x_2, \ldots, x_n) \in \rho \Rightarrow \rho$ is an n-ary relation working on the n-tuple $(x_1, x_2, \ldots, x_n)$.
We shall now discuss binary relations more rigorously, since they are of solid importance in the process of defining functions and also in higher studies. In a binary relation $(x,y) \in \rho$, the set of first objects of the ordered pairs is called the domain of the relation ρ and is defined by
$D_{\rho} := \{x \mid \mathrm{for \ some \ y, \ (x,y) \in \rho} \}$, and the set of second objects is called the range of the relation ρ and is defined by $R_{\rho} := \{y \mid \mathrm{for \ some \ x, \ (x,y) \in \rho} \}$.
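As a concrete illustration (my own sketch, not part of the original post), a binary relation between finite sets can be modeled as a set of ordered pairs, and the domain and range can then be read off directly:

```python
# A binary relation "x is related to y" as a set of ordered pairs (illustrative).
rho = {(1, 2), (1, 3), (4, 2)}

domain_rho = {x for (x, y) in rho}   # D_rho: all first components
range_rho  = {y for (x, y) in rho}   # R_rho: all second components

print(domain_rho)  # {1, 4}
print(range_rho)   # {2, 3}
```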
There is one more thing to discuss about relations, and that is the equivalence relation.
A relation is an equivalence relation if it satisfies three properties: it is symmetric, reflexive and transitive.
That is, if a relation is symmetric, reflexive and transitive, then it is an equivalence relation. You might be wondering what these terms (symmetric, reflexive and transitive) mean here. Let me explain them separately:
A relation is symmetric: Consider three sentences “Jen is the mother of John.”; “John is the brother of Nick.”; and “Jen, John and Nick live in a house together.”
In the first sentence, Jen has the relationship of motherhood to John. But can John have the same relation to Jen? Can John be the mother of Jen? The answer is obviously NO! Relations of this type are not symmetric. Now consider the second statement. John has a brotherhood relationship with Nick. But can Nick have the same relation to John? Can Nick be the brother of John? The answer is simply YES! Thus, both the sentences “John is the brother of Nick.” and “Nick is the brother of John.” say the same thing. We may say that both are symmetric sentences, and here the relation of ‘brotherhood’ is symmetric in nature. Again, LIVING WITH is also symmetric (it's your task to see why).
Now let us write the above short discussion in general mathematical form. Let X and Y be two objects (numbers, people, or any living or non-living things) with a relation ρ between them. Then we write that X is related to Y by the relation ρ, or X ρ Y.
And if ρ is a symmetric relation, we may also say that Y is related to X by ρ, or Y ρ X.
So, in one line: $X \rho Y \iff Y \rho X$.
A relation ρ is reflexive if every X is related to itself: $X \rho X$. Consider the statement “Jen, John and Nick live in a house together.” once again. Is the relation of living together reflexive? How do we check? Ask: does Jen live with Jen? Yes! Jen lives there.
A relation is transitive if, for objects X, Y, Z: whenever X is related to Y by the relation and Y is related to Z by the relation, then X is also related to Z by the same relation.
i.e., $X \rho Y \wedge Y \rho Z \Rightarrow X \rho Z$. For example, the relationship of brotherhood is transitive. (Why?) Now we are able to define the equivalence relation.
We say that a relation ρ is an equivalence relation if following properties are satisfied: (i) $X \rho Y \iff Y \rho X$
(ii) $X \rho X$
(iii) $X \rho Y \wedge Y \rho Z \Rightarrow X \rho Z$.
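To make the three properties concrete, here is a small illustrative check (my own sketch, not from the post) for relations on a finite set, using the ‘lives with’ example:

```python
def is_reflexive(rel, elements):
    return all((x, x) in rel for x in elements)

def is_symmetric(rel):
    return all((y, x) in rel for (x, y) in rel)

def is_transitive(rel):
    return all((x, z) in rel
               for (x, y1) in rel for (y2, z) in rel if y1 == y2)

# "Lives in the same house as" on {Jen, John, Nick}: everyone relates to everyone.
people = {"Jen", "John", "Nick"}
lives_with = {(a, b) for a in people for b in people}

print(is_reflexive(lives_with, people))  # True
print(is_symmetric(lives_with))          # True
print(is_transitive(lives_with))         # True -> an equivalence relation
```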
Functions: Let f be a relation (we are using f in place of the earlier ρ) consisting of ordered pairs $(x,y)$ with $x \in X, \ y \in Y$. We can write x f y, a relation. This relation is called a function if and only if for every x there is exactly one y related to it; I mean to say that if $xfy_1$ is true and $xfy_2$ is also true, then always $y_1=y_2$. This definition is standard, but it has some drawbacks, which we shall discuss at the beginning of Real Analysis.
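Continuing the same illustrative sketch, the single-valuedness condition is easy to test for a finite relation:

```python
def is_function(rel):
    # A relation is a function when no x is paired with two different y's.
    seen = {}
    for x, y in rel:
        if x in seen and seen[x] != y:
            return False
        seen[x] = y
    return True

print(is_function({(1, 2), (2, 3)}))  # True
print(is_function({(1, 2), (1, 3)}))  # False: 1 relates to both 2 and 3
```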
Many synonyms for the word ‘function’ are used at various stages of mathematics, e.g. transformation, map or mapping, operator, correspondence. As already said, in the ordered pair (x,y), x is an element of the domain X of the function, and y is an element of the co-domain Y; the set of y-values actually attained is called the range of the function.
Here I will stop. I don't want a post to be so long (especially when writing on basic mathematics) that readers find it boring. The intermediate mathematics of functions is planned to be discussed in Calculus, and the advanced part in Functional Analysis. Please note that I am regularly revising older articles and trying to maintain their accuracy and completeness. If you feel that there is any fault or incompleteness in a post, then please leave a comment on it. If you are interested in writing a guest article on this blog, kindly email me at mdnb[at]live[dot]in.
###### Must Read
• Equivalence relations (gowers.wordpress.com)
http://math.stackexchange.com/questions/114340/what-does-following-matrix-says-geometrically?answertab=active
# What does the following matrix say geometrically?
Let $M\subset \mathbb C^2$ be a hypersurface defined by $F(z,w)=0$. Then for some point $p\in M$, I've $$\text{ rank of }\left( \begin{array}{ccc} 0 &\frac{\partial F}{\partial z} &\frac{\partial F}{\partial w} \\ \frac{\partial F}{\partial z} &\frac{\partial^2 F}{\partial z^2} &\frac{\partial^2 F}{\partial z\partial w} \\ \frac{\partial F}{\partial w} &\frac{\partial^2 F}{\partial w\partial z} & \frac{\partial^2 F}{\partial w^2} \\ \end{array} \right)_{\text{ at } p}=2.$$
What does it mean geometrically? Can anyone give a geometric picture near $p$?
Edit: Actually I was reading about Levi flat points and Pseudo-convex domains. I want to understand the relation between these two concepts. A point p for which the rank of the above matrix is 2 is called Levi flat. If the surface is everywhere Levi flat then it is locally equivalent to $(0,1)\times \mathbb{C}^n$, so I have many examples....but what will happen for others for example take the three sphere in $\mathbb{C}^2$ given by $F(z,w)=|z|^2+|w|^2−1=0$. This doesn't satisfy the rank 2 condition. Can I have precisely these two situations?
Hint: try some examples, e.g. $F=z^2+w^2-2$ at $(1,1)$. If one first derivative is zero and the other is nonzero, you get another example, e.g. $F=e^{zw}-1$ at $(0,1)$. – bgins Feb 28 '12 at 7:31
@bgins, hmm.. thanks.. But actually i was reading Levi flat point and Pseudo-convex domain, I want to understand relation of these two concept.... The point $p$ for which above matrix rank is 2 is called is levi flat....If surface is everywhere leviflat then it is locally equivalent to $(0,1)\times \mathbb C^n$.... so i have many examples....but what will happen for others for example take two sphere in $\mathbb C^2$ that is $F(z,w)= |z|^2+|w|^2-1=0$ this doesn't satisfy rank 2 condition.... can i have precisely these two situation .. – zapkm Feb 28 '12 at 7:57
So you got an answer to mathoverflow.net/questions/88782/…, but not to mathoverflow.net/questions/85178/every-where-levi-flat, I see. – bgins Feb 28 '12 at 8:06
@PradipMishra: I edited the question slightly. I hope that I didn't change the meaning of your last sentence (or of anything else for that matter). I don't have time to think about the answer at the moment, but the definition I am familiar with involves the Levi form. A hypersurface in $\mathbb{C}^2$ has a complex line in its tangent bundle (i.e. the subspace of its tangent space that is invariant under multiplication by $i$). If this is an integrable distribution, then we say it is Levi flat. – Sam Lisi Mar 2 '12 at 23:29
## 2 Answers
Here is a partial answer: I will give a geometric interpretation of Levi flatness/pseudoconvexity. To fix some notation, let $j$ be the endomorphism of the tangent bundle of $\mathbb{C}^2$ induced by its complex structure. (I'm being a bit pedantic; normally we say it is the complex structure, but I want to make it very clear what I am describing.)
If you have a real hypersurface $\Sigma$ in $\mathbb{C}^2$, its tangent bundle has a preferred complex line bundle inside of it. This consists of those vectors $v$ in $T\Sigma$ such that $j v$ is also in $T\Sigma$. Let $\xi$ be this subbundle. We say that $\Sigma$ is Levi-flat if this distribution is (locally) integrable in the sense of Frobenius.
So what does this mean, geometrically? Suppose that $\Sigma$ is Levi-flat in an open neighbourhood of $p \in \Sigma$. Then, by the Frobenius integrability theorem, you can find a local function $G \colon \Sigma \to \mathbb{R}$ whose level sets have $j$ invariant tangent spaces, i.e. the level set is a complex (local) submanifold of $\mathbb{C}^2$. Again, since we are working locally, this allows you to describe the neighbourhood of $p$ as being of the form $(-\epsilon, \epsilon) \times D^2(\epsilon)$, where $D^2$ is the disk in $\mathbb{C}$.
Levi convexity is a bit harder to explain without appealing to the Levi form. See the reference I gave in the comments above for some definitions and discussion of the concept. In particular, a convex hypersurface in $\mathbb{C}^2$ is Levi convex.
The key fact about flatness/convexity has to do with holomorphic disks whose boundaries are in $\Sigma$. If $\Sigma$ is flat, you can foliate $\Sigma$ locally by such disks. If $\Sigma$ is strictly pseudoconvex, then only the boundary of the disk touches $\Sigma$, the interior of the disk is forced to lie in the interior region bounded by $\Sigma$. (For instance, think of the unit sphere $S^3$ as the typical example of a pseudoconvex hypersurface. Any holomorphic disk with boundary in $S^3$ lives inside the unit ball -- furthermore, only its boundary is allowed to touch the $S^3$.)
In an example like the one you gave, the complex line is $\ker dF \cap \ker dF \circ j$. You then want to compute the two form $\omega := -d (dF \circ j)$ on a pair of (nonzero) vectors $v, jv$, $v \in \xi$. If this is positive, then it is pseudoconvex (at this point). If it is zero, it is Levi-flat.
Let $p=(z_0,w_0)$ and define $G(z,w)=F(z,w)-(z_0,w_0)$. Then the matrix is $$\left( \begin{matrix} G & G_z & G_w \cr G_z & (G_z)_z & (G_z)_w \cr G_w & (G_w)_z & (G_w)_w \cr \end{matrix} \right)_{\text{at }p}$$ Since $G(p)=0$. Is that any help?
I think you mean $G(z,w)= F(z,w)- F(z_0,w_0)$... I will reply after thinking about whether this yields something fruitful. Thanks. – zapkm Feb 28 '12 at 8:17
the person who has upvoted this answer, and @bgins, will you please explain this... I will be very grateful... – zapkm Mar 1 '12 at 11:49
$G(p)$ is just shorthand $G(z_0,w_0)$. I'm afraid my comment was only very elementary, I don't have the background to answer your question (sorry if saying "Hint" was misleading, this was before you had mentioned Levi flatness & pseudoconvexity). However, writing the matrix this way, it seems to show that the (affine/projective) tangent approximations to $G$, $G_z$ & $G_w$ at $p$ are linearly dependent (rank $<3$). Do you have any good references for this stuff? The best I've found so far are some MIT OCW lecture notes. – bgins Mar 1 '12 at 12:13
http://mathoverflow.net/questions/32011/direct-proof-of-irrationality/32030
## Direct proof of irrationality?
There are plenty of simple proofs out there that $\sqrt{2}$ is irrational. But does there exist a proof which is not a proof by contradiction? I.e. which is not of the form:
Suppose $a/b=\sqrt{2}$ for integers $a,b$.
[deduce a contradiction here]
$\rightarrow\leftarrow$, QED
Is it impossible (or at least difficult) to find a direct proof because ir-rational is a negative definition, so "not-ness" is inherent to the question? I have a hard time even thinking how to begin a direct proof, or what it would look like. How about:
$\forall a,b\in\mathbb{Z} \ \exists \epsilon > 0$ such that $|a^2/b^2 - 2| > \epsilon$
Not sure why the LaTeX worked in the preview, but not the post! Let me retype that last line in ASCII: For all integers a,b, there exists epsilon such that |a^2/b^2 - 2| > epsilon – RubeRad Jul 15 2010 at 14:59
Please see wikipedia before asking. – Abhishek Parab Jul 15 2010 at 15:16
See Andrej Bauer's post - math.andrej.com/2010/03/29/… – François G. Dorais♦ Jul 15 2010 at 15:37
See also gowers.wordpress.com/2010/03/28/… (A. Bauer's post is linked there, and there are additional references.) – Andres Caicedo Jul 15 2010 at 16:32
For those needing more inducement to click through than just links: Andrej's post discusses the point raised in the OP's last paragraph, the difference between classical “proof by contradiction” — “to prove $\varphi$, assume $\lnot \varphi$ and derive absurdity” — and the intuitionistically valid “to prove $\lnot \varphi$, assume $\varphi$ and derive absurdity”, which at a formal level (either classically or intuitionistically) any proof of a negative statement must essentially boil down to. Gowers' post discusses some related issues rather more informally, with a wide range of examples. – Peter LeFanu Lumsdaine Jul 15 2010 at 21:11
## 9 Answers
Below is a simple direct proof that I found as a teenager:
THEOREM $\;\rm r = \sqrt{n}\;$ is integral if rational, for $\;\rm n\in\mathbb{N}$.
Proof: $\;\rm r = a/b,\;\; \gcd(a,b) = 1 \implies ad-bc = 1\;$ for some $\rm c,d \in \mathbb{Z}$, by Bezout
so: $\;\rm 0 = (a-br) (c+dr) = ac-bdn + r \implies r \in \mathbb{Z} \quad\square$
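To make the one-line computation explicit (a small added step, using only the facts already on the page): since $\rm r = a/b$ gives $\rm a - br = 0$, and $\rm r^2 = n$, expanding yields $$\rm (a-br)(c+dr) = ac + (ad-bc)\,r - bd\,r^2 = ac - bdn + r,$$ using $\rm ad - bc = 1$; hence $\rm 0 = ac - bdn + r$, i.e. $\rm r = bdn - ac \in \mathbb{Z}$.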
Nowadays my favorite proof is the 1-line gem using Dedekind's conductor ideal - which, as I explained at length elsewhere, beautifully encapsulates the descent in ad-hoc "elementary" irrationality proofs.
That's really slick too; I'll have to make some time to read the link... – RubeRad Jul 15 2010 at 15:57
So your proof of irrationality of $\sqrt{2}$ is follows: Assume $\sqrt{2}$ was rational. Then my theorem implies that $\sqrt{2}$ is integral, which is obviously impossible since no integer squares to $2$. QED. But this is a proof by contradiction isn't it? – Rasmus Jul 15 2010 at 16:31
Your theorem is neat, though. – Rasmus Jul 15 2010 at 16:34
But every proof does the same at some point - even if it's far down the line in some chain of lemmas, e.g. see the Gowers link above. – Bill Dubuque Jul 15 2010 at 16:47
Yes, of course there are many variations. But they are all *mechanically* derivable from the more conceptual ideal-based proofs, as I explain at length in the sci.math linked indirectly above. Thus they're all essentially equivalent and the key idea goes way back to Dedekind. I could easily write a program that could generate all of the known "elementary" proofs by unwinding the conceptual proofs, eliminating higher-order concepts like ideals, modules, and directly inlining lemmas, etc. Such elementary proofs are not novel in any way. – Bill Dubuque Jul 15 2010 at 22:39
Wikipedia has a constructive proof. You can bound $\sqrt 2$ away from $p/q$.
Thanks, that exactly does it! (now if I could only reproduce the "easy calculation...") – RubeRad Jul 15 2010 at 15:52
I think there's some confusion going on here. The standard proof of the irrationality of $\sqrt 2$ is constructive, as noted in the comments above, pointing to Andrej's article: math.andrej.com/2010/03/29/… The "constructive" proof on Wikipedia is proving a result that is stronger from a constructive perspective. It's proving that $\sqrt 2$ and $a/b$ are "apart". You still need to derive a contradiction, or use a lemma that does so, in order to show that these values are distinct. – Dan Piponi Jul 15 2010 at 18:07
You have a point. This only bears on the constructivist distinction between 'It is not the case that $\sqrt 2$ is rational' and '$\sqrt 2$ is irrational'. The first of these means that the assumption that $\sqrt 2$ is rational leads to a contradiction. The second means that for any rational $p/q$, $\sqrt 2$ is apart from $p/q$ by a distance dependent on $p$ and $q$. – David Corfield Jul 15 2010 at 18:35
I'm pretty sure that with the usual definitions, 'It is not the case that $\sqrt 2$ is rational' is equivalent to '$\sqrt 2$ is irrational', constructively or not. The distinction I'm making hinges on the difference between 'apartness' and 'inequality'. The latter doesn't imply the former constructively as we don't have the familiar classical trichotomy. – Dan Piponi Jul 15 2010 at 20:36
Exercise 10 p. 62 of Constructive Analysis (Bishop and Bridges): Construct a real number that is not rational and not irrational. (A real number $x$ is irrational if $x \neq r$ for each rational number $r$.) A real number is a certain sequence of rationals. To prove it irrational, show that it is not equal to any $p/q$ considered as a constant sequence. To show inequality of two reals, show that entries at some point in the sequences are far enough apart. – David Corfield Jul 16 2010 at 8:37
Rational numbers have finite continuous fractions.
$\sqrt{2}=1+1/(\sqrt{2}+1)=1+1/(2+1/(\sqrt{2}+1))=\dots$ Then the continuous fraction is not finite: $$1+\cfrac{1}{2+\cfrac{1}{2+\cfrac{1}{2+\dotsb}}}$$
The geometric proof (not the one in Wikipedia), the one that proves $\sqrt{2}$ is not commensurable with $1$ is also direct (and is essentially the same as the continuous fraction).
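As a numerical illustration of the non-terminating expansion (my own sketch; the recurrence below assumes $n$ is not a perfect square):

```python
from math import isqrt

# Continued fraction of sqrt(n) via the exact integer recurrence for quadratic surds.
def cf_sqrt(n, terms):
    a0 = isqrt(n)
    m, d, a = 0, 1, a0
    out = [a0]
    for _ in range(terms - 1):
        m = d * a - m
        d = (n - m * m) // d
        a = (a0 + m) // d
        out.append(a)
    return out

print(cf_sqrt(2, 8))  # [1, 2, 2, 2, 2, 2, 2, 2] -- the 2s repeat forever
```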
These are continued (not continuous) fractions. – Robin Chapman Jul 16 2010 at 18:07
Spanish tricked me. We use the same word. Thanks. – Franklin Nov 4 2010 at 1:58
From the viewpoint of prime factorization, integers are products of powers of primes in which the exponents are non-negative integers. Rational numbers are products of powers of primes in which the exponents can be any integer. In both cases, any given prime can only appear once. This, of course, means that when you square a rational number, all the exponents will be even numbers. Since 2 is not an even power of a prime, it cannot be the square of a rational number.
Slight clarification: POSITIVE integers are products of powers of primes in which the exponents are natural numbers, and POSITIVE rational numbers are products of powers of primes in which the exponents can be any integer. – Sridhar Ramesh Jan 13 2012 at 22:25
Below is a direct proof that if $p,q,n$ are positive integers with $\gcd(p,q)=1$ and $p^2=nq^2$ then $q=1$ (so $n=p^2$). I would count that as a direct proof that $$\lbrace n \mid \sqrt{n}\in \mathbb{Q} \rbrace=\lbrace0,1,4,9,16,\cdots\rbrace$$ Given that $\gcd(p,q)=1$ there are integers $s,t$ with $ps+qt=1.$ Cube and regroup to get $p^2(ps+3qt)s^2+q^2(3ps+qt)t^2=1.$ Given that $p^2=nq^2$ we then have $nq^2(ps+3qt)s^2+q^2(3ps+qt)t^2=1$ so that $q$ divides 1. QED
Later: As Andres points out, it suffices to square. The cubing shows that $\gcd(p,q)=1$ implies $\gcd(p^2,q^2)=1$. Of course if $p,q,n$ are integers and we already know $\frac{p^2}{q^2} \ne n$ then it follows that $|\frac{p^2}{q^2}-n| \ge \frac{1}{q^2}$. I wanted a direct argument that if $p,q,n$ are positive integers with $\gcd(p,q)=1$ and $q \ge 2$ then $|\frac{p^2}{q^2}-n| \ge \frac{1}{q^2}$. I think that could be done, but in this situation one wants to keep a proof short.
In my opinion, the vast majority of "indirect proofs" are actually direct proofs of something else. But that is another story.
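For readers who want the cube-and-regroup identity verified, here is a quick symbolic check (my own illustration, not part of the answer):

```python
from sympy import symbols, expand, factor

p, q, s, t, n = symbols('p q s t n')

regrouped = p**2*(p*s + 3*q*t)*s**2 + q**2*(3*p*s + q*t)*t**2
print(expand((p*s + q*t)**3 - regrouped))  # 0: the regrouping is exact

# Replacing p**2 by n*q**2 exhibits the factor q**2, so q**2 divides 1.
print(factor(regrouped.subs(p**2, n*q**2)))
# q**2*(n*p*s**3 + 3*n*q*s**2*t + 3*p*s*t**2 + q*t**3)
```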
@Aaron: This is nice, but why do you need to cube? Squaring suffices: $nq^2s^2+2psqt+q^2t^2=1$. – Andres Caicedo Dec 9 2010 at 23:06
A book on logic whose title and the identity of whose author escape me at the moment said that not all proofs by contradiction are indirect proofs. The idea is that when proving an inherently negative statement---one that asserts non-existence of something---one can proceed only by contradiction, which in that case constitutes a direct proof. I'll see if I can find it.
Observe that if $\sqrt 2$ is rational, then there is some positive integer $q$ such that $q\sqrt 2$ is an integer. Since the positive integers are well ordered, we may suppose that $q$ is the smallest such number.
We next observe that since $1 < \sqrt 2 < 2$, we have $\sqrt 2 - 1 < 1$, and consequently $q(\sqrt 2 - 1) = q\sqrt 2 - q$ is less than $q$. Let us call this new number $r$, and observe that it too is a positive integer. But now $r\sqrt 2$ is also an integer, since $r\sqrt 2 = (q\sqrt 2 - q)\sqrt 2 = 2q - q\sqrt 2$. In short, $r$ is a positive integer less than $q$, and $r\sqrt 2$ is an integer. But we said that $q$ was the smallest positive integer with this property, and so we have a contradiction.
The nice thing about this proof is how easily it generalizes. Let us denote by $\lfloor\sqrt n\rfloor$ the integer part of $\sqrt n$. For example, since the square root of 5 is approximately 2.236, the integer part is 2. For any n that is not a perfect square, we may prove that $\sqrt n$ is irrational exactly as above by considering $q(\sqrt n - \lfloor\sqrt n\rfloor)$. (On the other hand, if n is a perfect square, so that $\sqrt n = \lfloor\sqrt n\rfloor$, then there is no contradiction.)
More generally still, if x is a rational but not integral zero of a monic integer polynomial of degree d, let q be the least positive integer so that $qx^j$ is integral for all j < d. Then, considering $q(x - n)$ where n is an integer with n < x < n + 1, we get a contradiction. In other words, we have proved that every rational “algebraic integer” is an integer.
There seems to be a problem with your code (the first paragraph is cut off.) – Andres Caicedo Jul 15 2010 at 16:33
Yes, Andres, thanks, don't know what the problem was but it's fixed now. – Dick Palais Jul 15 2010 at 16:37
According to your first sentence I believe that you provide a proof by contradiction, which is not what the OP has asked for. – Rasmus Jul 15 2010 at 16:40
In fact it generalizes quite widely to show that any PID/Dedekind domain is integrally-closed. It's just a specialization of the 1-line proof by way of principality of the conductor (denominator) ideal which I mentioned above. See the link in my post for much further discussion. – Bill Dubuque Jul 15 2010 at 16:53
I thought it worth emphasis for the reader since rediscoverers often think that such proofs are novel - even professional mathematicians - even number theorists! E.g. Estermann [rediscovered](bit.ly/9Dxh04) such an elementary proof in 1975 and often boasted that it was "the first new proof since Pythagoras" and this claim was supported by some other number-theorists, e.g. Niven. – Bill Dubuque Jul 15 2010 at 19:11
Here's another take on Bill's integrality theorem, using existence and uniqueness of fractions in lowest terms:
If $\sqrt{2} = p/q$ is in lowest terms, then $2/1 = p^2/q^2$ is also in lowest terms. Hence $p^2 = 2$ and $q^2 = 1$.
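The step "if $p/q$ is in lowest terms then so is $p^2/q^2$" rests on $\gcd(p,q)=1 \Rightarrow \gcd(p^2,q^2)=1$; a brute-force check of small cases (my own illustration):

```python
from math import gcd

# gcd(p, q) == 1 implies gcd(p**2, q**2) == 1: squaring keeps fractions reduced.
for p in range(1, 30):
    for q in range(1, 30):
        if gcd(p, q) == 1:
            assert gcd(p * p, q * q) == 1
print("checked all reduced p/q with p, q < 30")
```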
So why does $p/q$ being in lowest terms entail that $p^2/q^2$ is? If "in lowest terms" means having no common factors save units, then this implication doesn't hold in all integral domains. – Robin Chapman Dec 9 2010 at 16:50
Forgive me - I was tempted by some of these answers to offer the most elementary and shortest proofs I know, valid in Z, and using only facts acceptable to laypersons, that sqrt(2) is irrational. Since all high school students presumably learn the rational root theorem, it is odd e.g. that few US high school algebra books conclude that the only rational roots of X^2 - 2, are integer factors of 2. – roy smith Dec 9 2010 at 19:40
Although the first argument used unique factorization, the second only uses integral closure. I guess the general statement would be that an element of a given domain having no square root, still has none in the fraction field. Are there domains that are integrally closed for solutions of quadratic equations but not in general? – roy smith Dec 9 2010 at 21:03
@Robin But it does in Bezout Domains. Given that $s,t$ exist with $ps+qt=1$, cube both sides and regroup: $p^2(ps+3qt)s^2+q^2(3ps+qt)t^2=1.$ – Aaron Meyerowitz Dec 9 2010 at 21:17
Robin, on reflection, I don't quite understand your remark. Say we want to prove an element of a given domain is a square if and only if it is a square in its field of fractions. Bill's proof above uses that gcd(a,b) exists and is a linear combination of a,b, which holds only in a pid. My first argument above works more generally in a ufd, and my second one still more generally in an integrally closed domain. I do not know of a proof that works in a yet more general domain. Thus I asked if there is a domain not integrally closed, but closed "for squares". Do you know one?? regards, roy – roy smith Dec 12 2010 at 18:50
Most common axiom systems I've seen are a list of $\forall$ and $\exists$ axioms. If you look at a minimal underlying logic, most of the common rules for transforming these axioms shouldn't change the $\forall$ or $\exists$ into a $\neg \exists$. So you could fix a logic system, and argue that the only method that results in a $\neg \exists$ statement is the equivalent of proof by contradiction.
You haven't fixed a logic system in your original question, but the "proof by direct substitution" method won't be sufficient.
http://physics.stackexchange.com/questions/tagged/special-relativity?page=2&sort=active&pagesize=30
Tagged Questions
The special theory of relativity describes the motion and dynamics of objects moving at significant fractions of the speed of light.
- How do we measure the range of distant objects despite relativistic effects? (3 answers, 81 views): When we observe astronomical objects like distant galaxies there are several complicating factors for estimating the distance: Relativistic speed result in length contraction Relativistic speed ...
- The status / acceptance of block time? (1 answer, 93 views): What is the current status or acceptance of block time as it relates to Einstein's theory of relativity? Has quantum mechanics ruled it out or is it still the favored view of the world? Perhaps there ...
- Relativity of Simultaneity (2 answers, 156 views): Relativity of Simultaneity seems to be about OBSERVING two events simultaneously (please correct me if I am wrong). However, as long as the two events are separated by a distance (any distance) then ...
- What is Relativistic Navier-Stokes Equation Through Einstein Notation? (1 answer, 91 views): Navier-Stokes equation is non-relativistic, what is relativistic Navier-Stokes equation through Einstein notation?
- Special Theory of relativity on electromagnetic waves (2 answers, 61 views): Since time slows down and length contracts, when we travel almost at speed of light, if the speed of light (or EM waves) remains same and the wavelength of light remains same, do we measure the ...
- Doppler shift of radio signals to an accelerating observer (2 answers, 140 views): Suppose a man leaves from Earth to a star which is 1000 light years away. He accelerates to a velocity such that the entire trip lasts a year, from the reference frame of the rocket. Now lets pretend ...
- Relativistic Computation? (1 answer, 55 views): Is it possible to employ relativity to develop computational technology? Here is a really basic example: Build a Computer and Feed it the Problem (say the problem is projected to take 10 years to ...
- Live feed from a Rocket traveling near the speed of light? (1 answer, 46 views): Okay, odd question popped up in my physics class today. If a rocket ship is traveling at .99c for 1 year, and is streaming a video at 30 frames/sec to earth, how would the earth feed be affected? ...
- if i want action to be positive number then it require that $\tau_i$ be bigger than $\tau_f$, isn't it true? [closed] (1 answer, 45 views): the action is the length of the geodesic $S=-E_o\int_i^f d\tau$ we get an action that is minimised for the correct path. if i want action to be positive number then it require that $\tau_i$ be ...
- drift velocity of electrons in a superconductor (1 answer, 187 views): is there a formula for the effective speed of electron currents inside superconductors? The formula for normal conductors is: $$V = \frac{I}{nAq}$$ I wonder if there are any changes to this ...
- Integration by parts to derive relativistic kinetic energy (3 answers, 151 views): I have come across a weird integration during derivation of relativistic kinetic energy. Our professor states that i can get RHS out of LHS using integration by parts: $\int\limits_0^x \!\dots$ ...
- What is the exact mechanism by which time dilates? (4 answers, 171 views): What is the exact mechanism by which time dilates for a fast moving object? Can the time dilation be explained by any theory other than relativity?
- Stuff can't go at the speed of light - in relation to what? [duplicate] (3 answers, 102 views): We all know that stuff can't go faster than the speed of light - it's length becomes negative and all kinds of weird stuff happens. However, this is in relation to what? If two objects, each moving ...
- Rate of spontaneous tachyon emission (1 answer, 58 views): It's not possible for an electron to emit or absorb a photon without the presence of a third particle such as an atomic nucleus; without the third particle, it's impossible for such a process to ...
- Faraday tensor, antisymmetric rank two (2 answers, 74 views): $F^{\mu\nu}$ is defined in http://www.lecture-notes.co.uk/susskind/special-relativity/lecture-7/relativistic-lorentz-force/ How to show that $F^{\mu\nu}F_{\mu\nu}$ is ...
- Understanding bending light beam perpendicular to motion (1 answer, 65 views): I'm just reading a book about gravity. An example it gives is a spaceship accelerating. A beam of light travelling at right angles to the direction of movement of the spaceship enters it via a small ...
- Can something travel faster than light if it has always been travelling faster than light? (3 answers, 312 views): I know there are zillions of questions about faster than light travel, but please hear me out. According to special relativity, it is impossible to accelerate something to the speed of light. However, ...
- Organic Proliferation In Terms of Speed [closed] (1 answer, 44 views): OK for 5 stars...Dealing with Einsteins theory of special relativity: Here is the question: As observed on earth, a certain type of bacterium is known to double in number every 24.0 hours. Two ...
- Faraday tensor, antisymmetric electromagnetic tensor (1 answer, 55 views): I want to write $F^{\mu \nu}F_{\mu \nu}$ in terms of $F_{\mu \nu}F^{\mu \nu}$. How to do it?
- Having trouble seeing the similarity between these two energy-momentum tensors (3 answers, 134 views): Leonard Suskind gives the following formulation of the energy-momentum tensor in his Stanford lectures on GR (#10, I believe): $T_{\mu \nu}=\partial_{\mu}\phi \partial_{\nu}\phi-\frac{1}{2}g_{\mu\dots}$ ...
- Cancelling special & general relativistic effects (1 answer, 110 views): We know that for a GPS we need to make a correction for both general and special relativity: general relativity predicts that clocks go slower in a higher gravitational field (the clock aboard a GPS ...
- What is the process that gives mass to free relativitic particles? (2 answers, 81 views): When a free particle move in space with a known momentum and energy then what is the physical process that gives mass to that free (relativistic) particle? What is role does the Higgs field in that ...
- If there's a light ray and it's turned to a new location by a certain angle (1 answer, 28 views): Imagine that there's a light ray, with source at point A, and it's directed towards point B (which is very far from point A) and it continues for a huge distance. How will an observer at point B ...
- Relativistic Doppler effect derivation (2 answers, 108 views): This is about a step in a derivation of the expression for the relativistic Doppler effect. Consider a source receding from an observer at a velocity $v$ along the line joining the two. Light is ...
- Does an accelerating spaceship move backwards due to length contraction? (2 answers, 112 views): Let's assume I have a spaceship in front of me let's say at 1000000km distance. Now let's assume I have also a stationary wall just behind the spaceship at 999999km. Initially the spaceship's speed is ...
- Proton-proton collisions (1 answer, 93 views): I have a question about proton-proton collisions at the LHC. Firstly, the 4-momentum $p^\mu=(E/c,\vec{p})$ can be represented as $p^\mu =(m_T \cosh \Psi,\ p_T \cos \phi,\ p_T \sin \phi,\ m_T c \sinh\dots$ ...
- Relativistic solution for Zeno's stadium paradox? (0 answers, 38 views): The stadium Zeno paradox (not the same paradox from the Quantum-Zeno-Effect, but the same Zeno) gives a paradox about time, when two runners move toward a standing person from different directions. ...
- Does entanglement not immediately contradict the theory of special relativity? (3 answers, 388 views): Does entanglement not immediately contradict the theory of special relativity? Why are people still so convinced nothing can travel faster than light when we are perfectly aware of something that ...
- $\frac{dt}{d\tau}=\gamma$ in special relativity (1 answer, 61 views): I hope this is not too silly a question: We often see $$\frac{dt}{d\tau}=\gamma=\frac{1}{\sqrt{1-v^2}},$$ taking $c=1$. Problem: I don't understand why... In the Minkowski metric, using the ...
- Field Tensor and classical limits (1 answer, 36 views): I would be very grateful if someone would kindly explain this generalization of the Lorentz force law to the special relativity domain. Please bear with me. Classically, the Lorentz force law is ...
- How is the classical twin paradox resolved? (4 answers, 2k views): I read a lot about the classical twin paradox recently. What confuses me is that some authors claim that it can be resolved within SRT, others say that you need GRT. Now, what is true (and why)?
- Relativistic interaction: gamma + proton = delta (0 answers, 64 views): We have a proton at rest, and there's an incident photon that is absorbed by the proton producing the excited state "delta". Photon energy: $\hbar \omega$, Proton rest Energy: $m_p c^2$, Delta rest ...
- Consequences of Third Postulate of Special Relativity [closed] (0 answers, 60 views): Consequences of SR arise from two postulates. know as this abstract states: "relativistic action is limited to planck's constant", and maybe we've to consider it as the possible third postulate of ...
- Wavefront emitted by bodies at traveling near the velocity of light (1 answer, 40 views): I studied that no body can travel with the velocity of light. But, assuming that when a body moves nearly velocity of light, will it obey length contraction law of Einstein or will it emit the same ...
- Relativity of simultaneity - An example (1 answer, 56 views): I am trying to understand the relativity of simultaneity in different frames, and I am trying to work out an example. Suppose along the x-axis there are two points 2000m apart. Event A happens at t=0 ...
- Does the Pauli exclusion principle instantaneously affect distant electrons? (10 answers, 6k views): According to Brian Cox in his A night with the Stars lecture, the Pauli exclusion principle means that no electron in the universe can have the same energy state as any other electron in the ...
- Can dark matter be relativistic dust? (2 answers, 109 views): As far as I know the mass of an observed object increases as it approaches the speed of light. Is it possible that the excess mass called "dark matter" is due to relativistic dust? Surely, stars ...
- Do objects have energy because of their charge? (2 answers, 118 views): My gut feeling tells me things should have energy because of their charge, like they have energy because of their mass. Is this possible? Has it been shown? If not then what is missing to make such ...
- Is there absolute proof that an object cannot exceed the speed of light? (3 answers, 154 views): Have any known experiments ruled out travelling faster than the speed of light? Or is this just a widely accepted theory?
- What truly is mass, and is there a direct way to measure it? (3 answers, 145 views): We know a mass of an object of one kilogram as an object that weighs W = mg = 9.8 N and we reference it to that, (when it should as a fundamental parameter describe weight not the opposite). But if we ...
- Can acceleration feel like constant gravity for indefinitely long? (2 answers, 66 views): So here's the setup: I'm in a spaceship, without windows as always, and the ship is accelerating upwards at a constant rate of $1\,\text{g}$. So inside the spaceship it feels like I'm being pulled ...
- The Klein–Gordon equation (1 answer, 85 views): As we know that the Schrödinger equation presents basis of Quantum Mechanics and analogy with Newton second law in Classical Mechanics, I thought that relativistic interpretation of Schrödinger ...
- Has anyone ever measured the one way speed of light perpendicular to the Earth at the Earth's surface? (2 answers, 158 views): 1 - Has anyone ever measured the one way speed of photons traveling perpendicular to the Earth at the Earth's surface? 2 - Given our current understanding of Physics is there any way both the upward ...
- Lorentz invariance of positive energy solutions to the Klein-Gordon equation (1 answer, 110 views): I am reading Arthur Jaffe's Introduction to Quantum Field Theory. (You can find it here.) There is an interesting question posed in Exercise 2.5.1: Solutions to the Klein-Gordon equation propagate ...
- Relativistic equivalent of a spring-force? (2 answers, 155 views): Usually what helps me understand a concept better in physics is to write a simulation of it. I've got to the point where I'm competent in the basics of special relativity, but, I can't figure out how ...
- Special Relativity Second Postulate (5 answers, 420 views): That the speed of light is constant for all inertial frames is the second postulate of special relativity but this does not means that nothing can travel faster than light. so is it possible the ...
- Special Relativity and $E = mc^2$ (3 answers, 505 views): I read somewhere that $E=mc^2$ shows that if something was to travel faster than the speed of light then they would have infinite mass and would have used infinite energy. How does the equation show ...
- How does the wavelength change in relativistic limit? (2 answers, 71 views): In the text, it reads that the momentum of a particle will change if it is moving at speed close to light speed. In the general case, the wavelength is given as $$\lambda = \frac{h}{p}$$ and p ...
- Properties of the Faraday tensor for constant fields (1 answer, 68 views): I'm doing a special relativity past exam paper and have got caught up with something that I hope someone can help me with! I have to show that for constant fields, the magnitude of A, the ...
- Lorentz boost matrix for an arbitrary direction in terms of rapidity (2 answers, 118 views): We have derived the Lorentz boost matrix for a boost in the x-direction in class, in terms of rapidity which from Wikipedia is: Assume boost is along a direction $\hat{n}=n_x \hat{i}+n_y \hat{j}+n_z\dots$ ...
http://physics.stackexchange.com/questions/28720/how-to-get-planck-length/28724
# How to get Planck length
I know what the Planck length is equal to.
1. The first question is, how do you get the formula $$\ell_P~=~\sqrt\frac{\hbar G}{c^3}$$ that describes the Planck length?
2. The second question is, will any length shorter than the Planck length be inaccessible? If so, what is the reason behind this?
Hi user2346! For future reference, we prefer that you ask each separate question in a separate post. – David Zaslavsky♦ May 21 '12 at 20:35
## 5 Answers
The expression $(\hbar G/c^3)^{1/2}$ is the unique product of powers of $\hbar, G,c$, three most universal dimensionful constants, that has the unit of length. Because the constants $\hbar, G,c$ describe the fundamental processes of quantum mechanics, gravity, and special relativity, respectively, the length scale obtained in this way expresses the typical length scale of processes that depend on relativistic quantum gravity.
The formula and the value were already known to Max Planck more than 100 years ago, that's why they're called Planck units.
Unless there are very large or strangely warped extra dimensions in our spacetime, the Planck length is the minimum length scale that may be assigned the usual physical and geometric interpretation. (And even if there are subtleties coming from large or warped extra dimensions, the minimum length scale that makes sense – which could be different from $10^{-35}$ meters, however – may still be called a higher-dimensional Planck length and is calculated by analogous formulae which must, however, use the relevant Newton's constant that applies to a higher-dimensional world.) The Planck length's special role may be expressed by many related definitions, for example:
• The Planck length is the radius of the smallest black hole that (marginally) obeys the laws of general relativity. Note that if the black hole radius is $R=(\hbar G/c^3)^{1/2}$, the black hole mass is obtained from $R=2GM/c^2$ i.e. $M=c^2/G\cdot (\hbar G/c^3)^{1/2} = (\hbar c/G)^{1/2}$ which is the same thing as the Compton wavelength $\lambda = h/Mc = hG/c^3 (\hbar G/c^3)^{-1/2}$ of the same object, up to numerical factors such as $2$ and $\pi$. The time it takes for such a black hole to evaporate by the Hawking radiation is also equal to the Planck time i.e. Planck length divided by the speed of light. Smaller (lighter) black holes don't behave as black holes at all; they are elementary particles (and the lifetime shorter than the Planck time is a sign that you can't trust general relativity for such supertiny objects). Larger black holes than the Planck length increasingly behave as long-lived black holes that we know from astrophysics.
• The Planck length is the distance at which the quantum uncertainty of the distance becomes of order 100 percent, up to a coefficient of order one. This may be calculated by various approximate calculations rooted in quantum field theory – expectation values of $(\delta x)^2$ coming from quantum fluctuations of the metric tensor; higher-derivative corrections to the Einstein-Hilbert action; nonlocal phenomena, and so on.
The unusual corrections to the geometry, including nonlocal phenomena, become so strong at distances that are formally shorter than the Planck length that it doesn't make sense to consider any shorter distances. The usual rules of geometry would break down over there. The Planck length or so is also the shortest distance scale that can be probed by accelerators, even in principle. If one were increasing the energy of protons at the LHC and picked a collider of the radius comparable to the Universe, the wavelength of the protons would be getting shorter inversely proportionally to the protons' energy. However, once the protons' center-of-mass energy reaches the Planck scale, one starts to produce the "minimal black holes" mentioned above. A subsequent increase of the energy will end up with larger black holes that have a worse resolution, not better. So the Planck length is the minimum distance one may probe.
It's important to mention that we're talking about the internal architecture of particles and objects. Many other quantities that have units of length may be much shorter than the Planck length. For example, the photon's wavelength may obviously be arbitrarily short: any photon may always be boosted, as special relativity guarantees, so that its wavelength gets even shorter.
Lots of things (insights from thousands of papers by some of the world's best physicists) are known about the Planck scale physics, especially some qualitative features of it, regardless of the experimental inaccessibility of that realm.
According to which established, experimentally verified theory can one assert that''once the protons' center-of-mass energy reaches the Planck scale, one starts to produce the "minimal black holes" mentioned above''? Where is a proof of the assertion that ''the usual rules of geometry would break down'' at distances shorter than the Planck scale? How,in the absence of a consistent theory of quantum gravity, can one prove that ''the Planck length is the radius of the smallest black hole that (marginally) obeys the laws of general relativity''? – Arnold Neumaier May 21 '12 at 16:18
Dear Arnold, according to which established, experimentally verified theory can one assert what I did? The theory you're looking for is known as general relativity. One may prove that with a sufficient concentration of energy in a small volume such as one I described, one inevitably forms black holes. This has been known since the singularity theorems due to Penrose and Hawking from the 1970s. Also, for radii larger than the Planck length, one may show that the corrections to GR are small so that the conclusion is unchanged. – Luboš Motl May 21 '12 at 16:33
One may prove that distances shorter than the Planck scale fail to obey the laws of geometry in many independent ways, from semiclassical GR to individual full-fledged consistent descriptions of string/M-theory, from AdS/CFT to Matrix theory. – Luboš Motl May 21 '12 at 16:34
Concerning your question "How in the absence of a consistent theory of QG one may claim...", there are two points to say. First, it is not true that we don't have a consistent theory of QG. We have known we have one for almost 40 years at this point, it's known as string theory. Second, it was the very point of my answer that one doesn't really need QG to address these points. It's the very point of the Planck scale that for black holes (much) larger than that, one may ignore quantum effects and classical GR becomes a good description. – Luboš Motl May 21 '12 at 16:36
So for generic elementary objects that are much (or at least visibly) heavier than the Planck mass, one may use classical GR without QM to describe what's going on at a great accuracy. On the contrary, for objects much (or at least substantially) lighter than the Planck mass, one may use QFT without gravity as an excellent approximation. So a full consistent theory of QG – i.e. string theory – is really needed for the relatively narrow transition regime just in the vicinity of the Planck scale. Approximations that neglect either gravity or QM are good on both sides. – Luboš Motl May 21 '12 at 16:39
1. Using fundamental physical constants, try to construct an expression whose unit is length.
So, using dimensional analysis, we have the units $[G] = m^3 \cdot kg^{-1} \cdot s^{-2}$, $[c] = m \cdot s^{-1}$ and $[\hbar] = J \cdot s = kg \cdot m^2 \cdot s^{-1}$.
Then we construct a length $l = m$ in the following way: $$l = G^a c^b \hbar^d = m^{3a + b + 2d} \cdot kg^{-a+d} \cdot s^{-2a-b-d} \equiv m$$ This is equivalent to the following system of equations: $$\begin{cases} 3a+b+2d & = 1 \\-a+d & = 0 \\-2a-b-d & = 0 \end{cases}$$ And the only solution, $a = d = \tfrac{1}{2}$, $b = -\tfrac{3}{2}$, is just what we now call the Planck length.
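A quick symbolic check of that linear system (an illustrative sketch with sympy, not part of the original answer):

```python
from sympy import symbols, linsolve

a, b, d = symbols('a b d')

# Exponent balance for m, kg, s in l = G**a * c**b * hbar**d.
system = [3*a + b + 2*d - 1,   # meters
          -a + d,              # kilograms
          -2*a - b - d]        # seconds
print(linsolve(system, a, b, d))  # {(1/2, -3/2, 1/2)} -> l = sqrt(G*hbar/c**3)
```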
The formula is obtained by dimensional analysis. Up to a constant dimensionless factor, the given expression is the only one of dimension length that one can make of the fundamental constants $\hbar$, $c$, and $G$.
Discussions about the physical significance of the Planck length have no experimental (and too little theoretical) support, so that your second question cannot be answered (except speculatively).
Comment to the answer (v1): The sentence [...]the given expression is the only dimensionless one[...] seems to be a typo, since $\ell_P$ is not dimensionless, but has dimension of length. – Qmechanic♦ May 22 '12 at 12:20
@Qmechanic: Thanks, I corrected it. – Arnold Neumaier May 22 '12 at 14:05
I must agree with Lubos (except for the exception he makes regarding photons, since SR is the wrong tool to use there, and GR doesn't let photons stand out either) that it is theoretically very well established that the Planck scale sets a point beyond which new physics should happen; string theory gives one possible form this new physics might take.
Forgetting about strings: other than black-hole arguments, one can appeal to the modern RG framework to argue that any renormalizable but not asymptotically-free field theory at low energies (like the Standard Model) signals the existence of a UV scale beyond which it must get replaced by a new field theory. The Planck scale is the only relevant scale we know that might possibly be the candidate for a gravitational QFT. Look at Delamotte's "A hint of renormalization" for a clear description of this point.
It's clear to me that if there are only a finite number of (length) scales in the theory, that one can expect something to happen in field theoretic considerations w.r.t. to that unit (say Planck scale here $\ell_P$). However, what makes $\ell_P$ more fundamental than $2\ell_P$, how can one conclude that a particular numeric value has any significance without a good theory one that level? – Nick Kidman May 22 '12 at 14:23
Just based on general considerations, the transition doesn't even have to be sharp I guess (it will be sharp only if some symmetry is spontaneously broken and I'm ignorant of whether or not that must be the case for a renormalizable low-energy theory to appear). So I agree with you, nothing is so special about Planck's length before considering a specific consistent QG. – Arash May 22 '12 at 15:48
This is an answer to the part of the question about why smaller scales are inaccessible.
Particle physicists are in the business of measuring things at very small distances. To do this, they have to use particles with wavelengths comparable to the distance scale they're trying to probe, and they have to collide those particles with the thing they're trying to probe.
However, something goes wrong if you keep trying to make the wavelength $\lambda$ shorter and shorter. Although accelerating a particle to ultrarelativistic speed doesn't make it into a black hole (after all, in its own frame it's at rest), the collision with the object being probed can create a black hole, and it will do so, roughly speaking, once the energy $E$ is equivalent to an $mc^2$ for which the Schwarzschild radius $2Gm/c^2$ is comparable to the wavelength $\lambda\sim hc/E$. (This is not rigorous, since it's really the stress-energy tensor that matters, not the energy, but it's good enough for an order-of-magnitude estimate.) Solving for $\lambda$, we get something on the order of the Planck length.
If you make the wavelength shorter than the Planck length, you're making the energy higher. The collision then produces a larger black hole, which means you're not probing smaller scales, you're probing larger ones.
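Numerically, setting $2Gm/c^2 \sim h/(mc)$ and solving reproduces the Planck scale; a quick order-of-magnitude check (my own sketch, with rounded values of the constants):

```python
import math

G    = 6.674e-11   # m^3 kg^-1 s^-2
c    = 2.998e8     # m s^-1
hbar = 1.055e-34   # J s

l_p = math.sqrt(hbar * G / c**3)   # Planck length
m_p = math.sqrt(hbar * c / G)      # Planck mass
t_p = l_p / c                      # Planck time

print(f"l_p = {l_p:.2e} m")   # ~1.6e-35 m
print(f"m_p = {m_p:.2e} kg")  # ~2.2e-8 kg
print(f"t_p = {t_p:.2e} s")   # ~5.4e-44 s
```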
http://mathhelpforum.com/advanced-algebra/158100-matrix-question.html
1. ## Matrix Question
Having some problems with this question.
The transformation T in the plane consists of an anticlockwise rotation through $90^{\circ}$ about the origin followed by a translation in which the point (x,y) is transformed to the point (x+1,y+2).
(a) Show that the matrix representing T is
$$\begin{bmatrix} 0 & -1 & 1 \\ 1 & 0 & 2 \\ 0 & 0 & 1 \end{bmatrix}$$
It doesn't say which plane the rotation is in, or in which direction the 90 degrees goes.
I assume it's the $xy$-plane, but I tried that and my matrix was nothing like T.
Thanks
2. Is that the whole question, unedited? Your coordinates have 2 dimensions but your matrix has 3, which doesn't make sense to me
(of course, I could be wrong!)
3. I think z is just supposed to be invariant.. not sure. But that is the entire question
4. I assume you mean the proposition is that (x,y,z) maps to (x+1,y+2,z) under the transformation T. You can check it is true by applying the transformation to the vector (x,y,z) and checking you get the required answer.
Edit: I don't see how the above could hold in general (it would be a shift, not a rotation), so the question must have meant something else. So, back to your original question. The question says "in the plane"; that is not something a question would normally say without defining the plane first.
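For what it's worth, one consistent reading of the thread is that the 3x3 matrix acts on homogeneous coordinates $(x, y, 1)$, so the third coordinate is the constant $1$ rather than an invariant $z$. A minimal numerical check of that reading (a sketch; numpy assumed available):

```python
import numpy as np

# Rotation-plus-translation written as a single matrix acting on
# homogeneous coordinates (x, y, 1).
T = np.array([[0, -1, 1],
              [1,  0, 2],
              [0,  0, 1]])

x, y = 3.0, 5.0
image = T @ np.array([x, y, 1.0])
# A 90-degree anticlockwise rotation sends (x, y) to (-y, x);
# the translation then adds (1, 2).
expected = np.array([-y + 1, x + 2, 1.0])
print(image, np.allclose(image, expected))  # [-4.  5.  1.] True
```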
http://cs.stackexchange.com/questions/tagged/lower-bounds
# Tagged Questions
### Time complexity in Big O notation for Harmonic series with first k terms missing
Firstly, let's suppose there exists an algorithm where $i$ iterates from $1$ to $n$, spending $\frac{n^2}{i}$ time in each iteration. Thanks to the well known $O(\log n)$ upper bound for the Harmonic ...
### A Problem on Time Complexity of Algorithms
For every integer $t$, is there a problem whose solutions can be verified in $O(n^{s})$ time but cannot be found in $O(n^{st})$ time? By verifying, I mean that given a candidate solution $y$, we can ...
### Lower bound for sorting n arrays of size k each
Given $n$ arrays of size $k$ each, we want to show that at least $\Omega(nk \log k)$ comparisons are needed to sort all arrays (independent of each other). My proof is a simple modification of the ...
### Is detecting “doubly” arithmetic progressions 3SUM-hard?
This is inspired by an interview question. We are given an array of integers $a_1, \dots, a_n$ and have to determine if there are distinct $i \lt j \lt k$ such that $a_k - a_j = a_j - a_i$, $k - j = ...
### Complexity of transposing matrices represented as list of row or column vectors
Given [[1,4,7],[2,5,8],[3,6,9]], which is a list of the column vectors of the matrix
|1, 2, 3|
|4, 5, 6|
|7, 8, 9|
is $\Omega(n^2)$ a lower bound for transposing? ...
### Constraint violation and efficiency in search
It seems that (in a broad sense) two approaches can be utilized to produce an algorithm for solving various optimization problems: Start with a feasible solution and expand search until constraints ...
### Methods for Finding Asymptotic Lower Bounds
I've found in many exercises where I'm asked to show that $f(n)=\Theta(g(n))$ where the two functions are of the same order of magnitude I have difficulty finding a constant $c$ and a value $n_0$ for ...
### Is there a data-structure which is more efficient than both arrays and linked lists? [duplicate]
Background: In this question we care only about worst-case running-time. Array and (doubly) linked lists can be used to keep a list of items and implement the vector abstract data type. Consider the ...
### Simple lower bounds against AC0
It is known that $Parity \notin AC^0$ (nonuniform), but the proof is rather involved and combinatorial. Are there simpler, but weaker lower bounds, say for $NP \not\subseteq AC^0$ or $NEXP \not\subseteq$ ...
### Search spaces and computation time
This question follows on previous questions (1), (2), where we define an initial space of possibilities and reason about how a solution is chosen from that. Consider a problem P where we are given: ...
### Input that causes an operation on a binomial heap to run in $\Omega(\log n)$ time?
I was studying binomial heaps and their time analysis. Are there any inputs that cause DELETE-MIN, DECREASE-KEY, and DELETE to run in $\Omega(\log n)$ time for a binomial heap rather than $O(\log n)$?
### Progress of algorithms in problem spaces
Continuing in the vein of two prior questions (1) and (2), we started with sorting, where we had a set of $n!$ input possibilities and a goal space of only one element consisting of the one correct ...
### Lower bound on size of proof that a list of integers is sorted
Suppose we have a list of unbounded integers, written in binary, and we want to write a (formal) proof that the list is sorted in ascending order. Such a proof might look (informally) like: "2 < ...
### Space complexity below $\log\log$
Show that for $l(n) = \log \log n$, it holds that $\text{DSPACE}(o(l)) = \text{DSPACE}(O(1))$. It's a well-known fact in space complexity, but how does one show it explicitly?
### Proofs based on narrowing down sets of possibilities
Consider the argument made in this question based on the comparison sorting lower-bounds proof, which runs as follows. First, the comparison sorting lower-bounds proof was recited: For $n$ ...
### Generalizing the Comparison Sorting Lower Bound Proof
Let's start with the comparison sorting lower bound proof, which I'll summarize as follows: For $n$ distinct numbers, there are $n!$ possible orderings. There is only one correct sorted sequence of ...
### Lower bound for Convex hull
By making use of the fact that sorting $n$ numbers requires $\Omega(n \log n)$ steps for any optimal algorithm (which uses 'comparison' for sorting), how can I prove that finding the convex-hull of ...
### Lower bounds: queues that return their min elements in $O(1)$ time
First, consider this simple problem --- design a data structure of comparable elements that behaves just like a stack (in particular, push(), pop() and top() take constant time), but can also return ...
### Is every linear-time algorithm a streaming algorithm?
Over at this question about inversion counting, I found a paper that proves a lower bound on space complexity for all (exact) streaming algorithms. I have claimed that this bound extends to all linear ...
### Bound on space for selection algorithm?
There is a well known worst case $O(n)$ selection algorithm to find the $k$'th largest element in an array of integers. It uses a median-of-medians approach to find a good enough pivot, partitions ...
### How to use adversary arguments for selection and insertion sort?
I was asked to find the adversary arguments necessary for finding the lower bounds for selection and insertion sort. I could not find a reference to it anywhere. I have some doubts regarding this. I ...
http://mathoverflow.net/revisions/70866/list
# Question on Linear Operators
Let $V$ be a normed infinite-dimensional vector space. Let $L: V \longrightarrow V$ be a bounded linear operator. Moreover assume that $L$ is 'locally nilpotent', that is: $$\forall v \in V \quad \exists n \in \mathbf{N}: L^n (v) = 0.$$ Now my question is whether the linear operator $$\exp (L) = \sum_{n=0}^{\infty} \frac{L^n}{n!}$$ is bounded or not.
http://mathoverflow.net/questions/85441/velocity-field-of-fluid-and-maurer-cartan-form
## Velocity field of fluid and Maurer-Cartan form?
Chatting with an engineer, he suggested that I have a look at a certain book in order to understand what fluid mechanics is about (I know nothing about the subject). But this question is not about fluid mechanics in general; it's a bit more specific (and much, much more basic).
Starting to read the aforementioned book, I couldn't help stopping by the first sentence because I felt the need to 'translate' things into mathematiquese in order to understand. What is a fluid motion? Well, just like a rigid motion on $M=\mathbb{R}^3$ is a smooth map $\varphi:\mathbb{R}\to\mathrm{Euc}^{+}(3)$, we can take a fluid motion to be a smooth map $\varphi:\mathbb{R}\times M\to M$ such that the induced map is a diffeomorphism at every time, $\varphi:\mathbb{R}\to\mathrm{Diff}(M)$, $(t,x)\mapsto\varphi_t(x)$. (We also assume $\varphi_0=\mathrm{id}_M$). $\varphi$ is not assumed to be a flow in general, i.e. need not verify $\varphi_t\circ\varphi_s=\varphi_{t+s}$ and $\varphi_t^{-1}=\varphi_{-t}$. Then, the notion of velocity field $v(t,x)$ of the fluid was mentioned, which is assumed to be, at a time $t$ and point $x\in M=\mathbb{R}^3$, the velocity of a material particle passing through position $x$ at time $t$. The immediate thought was that $v(t,x)$ must just be the velocity $d\varphi_t(x)/dt$ of the curve $t\mapsto \varphi_t(x)$. But it does not live in $T_xM$. So, let's take it back to $x$, i.e. define $v(t,x)$ as $(\varphi_t^{-1})_{*}(d\varphi_t(x)/dt)$, so that now it lies in $T_xM$. This is still 'physically' wrong, as one can see by considering a waterfall with a horizontal part (with almost constant velocity) and a vertical part in which water accelerates. So, I came up with the following definition:
$v(t,x):=\frac{d}{ds}\Big|_{s=t}(\varphi_s(\varphi_t^{-1}(x)))=\dot{\varphi}_t(\varphi_t^{-1}(x))$.
First question:
Is my definition the one usually (implicitly or explicitly) taken in fluid dynamics?
If we assume the motion is affine, i.e. $\varphi_t(x)=A(t)\cdot x + \beta(t)$ with $A(t)\in\mathrm{GL}(3)$, we get:
$v(t,x)=\dot{A}\cdot A^{-1}\cdot x - \dot{A}\cdot A^{-1}\cdot\beta+\dot{\beta}$,
in which the linear term reminds me of the Maurer-Cartan form $\omega_{MC}=g^{-1}\cdot\mathrm{d}g$ on a matrix Lie group $G$.
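A quick symbolic check of this computation, with a made-up affine motion (a sketch; SymPy assumed available, and the motion $A(t)$, $\beta(t)$ chosen arbitrarily):

```python
import sympy as sp

t, s, x1, x2 = sp.symbols('t s x1 x2')
x = sp.Matrix([x1, x2])

# A toy affine motion phi_t(x) = A(t) x + beta(t) in the plane:
A = sp.exp(t) * sp.Matrix([[sp.cos(t), -sp.sin(t)],
                           [sp.sin(t),  sp.cos(t)]])
beta = sp.Matrix([t, t**2])

# v(t, x) := d/ds|_{s=t} phi_s(phi_t^{-1}(x))
phi = A.subs(t, s) * A.inv() * (x - beta) + beta.subs(t, s)
v = sp.diff(phi, s).subs(s, t)

# Compare with Adot A^{-1} x - Adot A^{-1} beta + betadot:
Adot = sp.diff(A, t)
v_formula = Adot * A.inv() * (x - beta) + sp.diff(beta, t)
print((v - v_formula).applyfunc(sp.simplify))  # Matrix([[0], [0]])
```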
Second question:
Does $v(t,x)$ actually have anything to do with a Maurer-Cartan form?
-
This point of view has been discussed in the literature. You might, for example, look at Arnol'd's remarks about treating fluid flow as dynamics in the group of (volume preserving) diffeomorphisms in his "Mathematical Methods of Classical Mechanics" and the references that he cites there. – Robert Bryant Jan 11 2012 at 20:09
Thanks for the suggestion. I found in Arnold & Khesin, Topological Methods in Hydrodynamics, that 'my' definition was indeed the correct one; so question 1 is answered. – Qfwfq Jan 11 2012 at 21:01
Also, looking at page 15 of the same book, it seems that my $\dot{A}\cdot A^{-1}$ is, in the case of a flow $\varphi:\mathbb{R}\to G=\mathrm{SO}(3)$, what they call "spatial angular velocity", which lives in the Lie algebra of $G$. In the case of a flow of (volume preserving) diffeomorphisms it might be linked to the velocity field $v(t,x)$... – Qfwfq Jan 11 2012 at 21:40
http://physics.stackexchange.com/questions/39197/most-suitable-metric-for-the-solar-system
# Most suitable metric for the Solar system?
1. If I wanted to solve the Einstein equations for the solar system, which choice of $g_{\mu\nu}$ and $T_{\mu\nu}$ is more suitable?
2. I thought about using a Schwarzschild metric near each planet, but how to connect them?
-
## 2 Answers
Solving the Einstein equation for a system as complex as the Solar System could only be done numerically, and in any case it's not terribly useful. Nothing in the Solar System is relativistic enough to need more than a linearised treatment (this is how Einstein calculated the precession of Mercury).
Actually, even solving Newton's equations for a system as complex as the Solar System can only be done numerically. Things like detecting the presence of Neptune are actually done using perturbation theory. If you were interested in relativistic effects, you would start with a symmetric solution, apply classical perturbations using Newton's law, then finally apply corrections using linearised GR.
To take the example of Mercury that I mentioned above: if the Solar System consisted only of the Sun and Mercury (and both were perfect spheres) the orbit of Mercury wouldn't precess. However it's observed to precess by 574 arc-seconds per century. 531 arc-seconds of this are down to classical perturbations by other planets, and this was known before GR was formulated. Only 43 arc-seconds was left for Einstein to explain, which he did using a linearised approximation.
-
+1 great answer... – Killercam Oct 6 '12 at 9:43
Ok, I didn't read this carefully enough, you mention linearized approximation, but it's still wrong--- the question asks how to superpose Schwarzschild metrics, you do it by adding up the metric perturbations (for stationary objects) and using boosts (for moving objects). – Ron Maimon Oct 7 '12 at 4:54
You just add the metric in isotropic form. You need the isotropic Schwarzschild solution, which is found by transforming the r coordinate so that the metric looks like:
$$- g_{00}(u) dt^2 + g_{uu}(u) (du^2 + u^2 d\Omega^2)$$
This requires choosing $u(r)$ appropriately. Then you note that the result can be transformed again:
$$- g_{00}(u) dt^2 + g_{uu} (dx^2 + dy^2 + dz^2)$$
Where $u = \sqrt{x^2 + y^2 + z^2}$. Then you write the linearized approximation, which turns out to be:
$$- dt^2 + dx^2 + dy^2 + dz^2 - h(u) dt^2 - g(u)( dx^2 + dy^2 + dz^2)$$
And then you superpose the solutions for the individual masses, since the linearized small-perturbation metric is additive among separate sources (like any other linear field theory). The result can be simply expressed in terms of the Newtonian potential:
$$h_{tt} = 2 \phi(x,y,z)$$
$$h_{ii} = 2 \phi(x,y,z)$$
Where $\phi(u)$ is the ordinary Newtonian potential. The superposition is valid in the weak field limit, which for the Solar system is essentially exact. The result can also be derived without going through isotropic coordinates, because the difference in r and u is higher order in the gravitational field, but I prefer to do it this way.
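As an illustration of the superposition, here is a small numerical sketch (numpy assumed; the two-body Sun-plus-Jupiter source and SI units are my choices, so the dimensionless perturbation computed is $2\phi/c^2$, i.e. the $h_{tt}$ above in geometric units):

```python
import numpy as np

G = 6.674e-11   # m^3 kg^-1 s^-2
c = 2.998e8     # m/s

# Toy source: Sun plus Jupiter (masses in kg, positions in m).
masses = [1.989e30, 1.898e27]
positions = [np.array([0.0, 0.0, 0.0]), np.array([7.785e11, 0.0, 0.0])]

def h_tt(point):
    """Superpose the individual Newtonian potentials and return 2*phi/c^2."""
    phi = sum(-G * m / np.linalg.norm(point - p)
              for m, p in zip(masses, positions))
    return 2.0 * phi / c**2

# At Earth's orbital radius the perturbation is ~ -2e-8, which is why
# the weak-field superposition is essentially exact in the solar system.
print(h_tt(np.array([1.496e11, 0.0, 0.0])))
```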
For moving objects, you have to tensorially boost the Schwarzschild metric to a moving frame. This introduces the Gravitomagnetic forces, and it is important in the solar system. The boosting is by the tensor law using the Lorentz boost. This gives the off diagonal h's for a moving source, and it is fine as long as the retardation effects are negligible, which is true throughout the solar system.
This approximation was introduced by Einstein in 1916 in the original papers on GR, and he used it to calculate the effects of GR in the solar system, a few months before the exact Schwarzschild solution appeared. Einstein thought the field equations were too complicated for an exact solution until Schwarzschild proved him wrong.
-
http://unapologetic.wordpress.com/2007/05/26/natural-transformations-and-functor-categories/?like=1&source=post_flair&_wpnonce=8cb43e0c56
The Unapologetic Mathematician
Natural Transformations and Functor Categories
So we know about categories and functors describing transformations between categories. Now we come to transformations between functors — natural transformations.
This is really what category theory was originally invented for, and the terminology predates the theory. Certain homomorphisms were called “natural”, but there really wasn’t a good notion of what “natural” meant. In the process of trying to flesh that out it became clear that we were really talking about transformations between the values of two different constructions from the same source, and those constructions became functors. Then in order to rigorously define what a functor was, categories were introduced.
Given two functors $F:\mathcal{C}\rightarrow\mathcal{D}$ and $G:\mathcal{C}\rightarrow\mathcal{D}$, a natural transformation $\eta:F\rightarrow G$ is a collection of arrows $\eta_C:F(C)\rightarrow G(C)$ in $\mathcal{D}$ indexed by the objects of $\mathcal{C}$. The condition of naturality is that the following square commutes for every arrow $f:A\rightarrow B$ in $\mathcal{C}$:
$\begin{matrix}F(A)&\rightarrow^{\eta_A}&G(A)\\\downarrow^{F(f)}&&\downarrow^{G(f)}\\F(B)&\rightarrow^{\eta_B}&G(B)\end{matrix}$
The vertical arrows come from applying the two functors to the arrow $f$, and the horizontal arrows are the components of the natural transformation.
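To make the naturality square concrete, here is a toy check in the category of Python types and functions (my illustration, not part of the categorical development): take $F$ to be the list functor, $G$ the "optional value" functor, and $\eta$ the head-of-list map.

```python
def eta(xs):
    """Candidate natural transformation List -> Optional: head of a list."""
    return xs[0] if xs else None

def fmap_list(f, xs):   # the List functor on arrows
    return [f(x) for x in xs]

def fmap_opt(f, x):     # the Optional functor on arrows
    return None if x is None else f(x)

f = lambda n: n + 1
# Naturality square: G(f) after eta equals eta after F(f).
for xs in ([], [10, 20, 30]):
    assert fmap_opt(f, eta(xs)) == eta(fmap_list(f, xs))
```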
If we have three functors $F$, $G$, and $H$ from $\mathcal{C}$ to $\mathcal{D}$ and natural transformations $\eta:F\rightarrow G$ and $\xi:G\rightarrow H$ we get a natural transformation $\xi\circ\eta:F\rightarrow H$ with components $(\xi\circ\eta)_C=\xi_C\circ\eta_C$. Indeed, we can just stack the naturality squares beside each other:
$\begin{matrix}F(A)&\rightarrow^{\eta_A}&G(A)&\rightarrow^{\xi_A}&H(A)\\\downarrow^{F(f)}&&\downarrow^{G(f)}&&\downarrow^{H(f)}\\F(B)&\rightarrow^{\eta_B}&G(B)&\rightarrow^{\xi_B}&H(B)\end{matrix}$
and the outer square commutes because both the inner ones do.
Every functor comes with the identity natural transformation $1_F$, whose components are all identity morphisms. Clearly it acts as the identity for the above composition of natural transformations.
A natural transformation is invertible for the above composition if and only if each component is invertible as an arrow in $\mathcal{D}$. In this case we call it a “natural isomorphism”. We say two functors are “naturally isomorphic” if there is a natural isomorphism between them.
All of this certainly looks like we’re talking about a category, but again the set theoretic constraints often work against us. There are, however, times where we really do have a category. If one of $\mathcal{C}$ or $\mathcal{D}$ (or both) are small, then all the set theory works out and we get an honest category of functors from $\mathcal{C}$ to $\mathcal{D}$. We will usually denote this category as $\mathcal{D}^\mathcal{C}$. Its objects are functors from $\mathcal{C}$ to $\mathcal{D}$, and its morphisms are natural transformations between such functors.
And now we can explain the notation $\mathcal{C}^\mathbf{2}$ for the category of arrows. This is the category of functors from $\mathbf{2}$ to $\mathcal{C}$! What is a functor from $\mathbf{2}$ to $\mathcal{C}$? Remember that $\mathbf{2}$ is the category with two objects $A$ and $B$ and one non-trivial arrow $f:A\rightarrow B$. Thus a functor $F:\mathbf{2}\rightarrow\mathcal{C}$ is defined by an arrow $F(f):F(A)\rightarrow F(B)$, and there’s exactly one functor for every arrow in $\mathcal{C}$.
Now let’s say we have two such functors $F(f):F(A)\rightarrow F(B)$ and $G(f):G(A)\rightarrow G(B)$. A natural transformation $\eta:F\rightarrow G$ consists of morphisms $\eta_A:F(A)\rightarrow G(A)$ and $\eta_B:F(B)\rightarrow G(B)$ so that the naturality square commutes. But this is the same thing we used to define morphisms in the arrow category, just with some different notation!
Natural transformations and functor categories show up absolutely everywhere once you know to look for them. We’ll be seeing a lot more examples as we go on.
Posted by John Armstrong | Category theory
http://asmeurersympy.wordpress.com/2010/05/
# Aaron Meurer's SymPy Blog
My blog on my work on SymPy and other fun stuff.
## More information on my Google Summer of Code project this year
May 26, 2010
So, as I noted here, I have been accepted into the Google Summer of Code program again this year. I mentioned that my project involved improving the integrator, but I didn’t say much other than that. So here I plan on saying a bit more. If you want more details, you can read my application on the SymPy wiki.
My goal is to improve the integrator in SymPy, in other words, the back end to the `integrate()` function. This is no easy task. Currently, SymPy has a pretty decent integration engine. It is even able to solve some integrals that no other system is known to be able to solve (the second integral here). But, as I discovered many times throughout my work on ODEs last year, the integrator can often leave something to be desired. There are two problems that I hope to address.
First, the integrator often fails on elementary integrals. This is because all of the integration in SymPy is based on a heuristic called the Risch-Norman algorithm. Symbolic integration has been completely solved in the form of the Risch algorithm, meaning that there exists an algorithm to determine whether an elementary function has an elementary antiderivative or not, and to find it if it does. The Risch algorithm is extremely complicated, to the extent that no computer algebra system has ever completely implemented all the parts of it. My plan is to begin implementing the full algorithm in SymPy. I don't expect to finish the whole thing; as I said, no one ever has. Rather, I hope to make good headway into what is known as the transcendental part. The Risch algorithm is broken up into four parts: the rational part, the transcendental part, the algebraic part, and the mixed part.
The rational part involves integrating rational functions (functions of the form $\frac{a_nx^n + a_{n-1}x^{n-1} + \cdots + a_1x + a_0}{b_mx^m + b_{m-1}x^{m-1} + \cdots + b_1x + b_0}$). The rational part is the easiest part, in the sense that the algorithm is the simplest, and also in that all rational function integrals are elementary (a term that I will define later). Rational function integration is already implemented in SymPy in full, though I may give a brief outline of how it works in a later post.
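For instance, here are two rational integrals that the existing machinery already handles (a small sketch; the printed outputs in the comments may be formatted slightly differently depending on the SymPy version):

```python
from sympy import symbols, integrate

x = symbols('x')
# Every rational function has an elementary antiderivative:
print(integrate(1/(x**2 + 1), x))                # atan(x)
print(integrate((2*x + 3)/(x**2 + 3*x + 5), x))  # log(x**2 + 3*x + 5)
```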
The transcendental part is the part that I will be implementing this summer. My guide will be Symbolic Integration I: Transcendental Functions by Manuel Bronstein, which describes and proves the transcendental part of the algorithm in some 300+ pages. I will try to explain a little of how the algorithm works in some blog posts, but understand that it is very complex. Therefore, I will probably explain it without proving things. If you are interested in buying the book and learning the algorithm rigorously, the only prerequisites, as far as I can tell, are calculus (so you know what an integral and a derivative are) and a semester of abstract algebra (you need to know about rings, fields, ideals, homomorphisms, etc., as well as the various theorems relating them).
In the book, I am still in the part that develops the theory, called differential algebra, necessary to prove the integration algorithm correct. So to begin the GSoC program, I am working on learning the polys module in SymPy. My method of doing this is to write doctests for all the functions in the module. It's a daunting task, but it's probably been the best way of learning how a computer module works that I have ever tried. You really have to understand all aspects of a function to write a doctest for it: the types of the parameters and return value, as well as what the algorithm is actually doing. It's especially helpful that the code for the functions is right below the docstring for each function, so I can see how it really works on the inside, removing the mystery of the module. Furthermore, it will serve as a reference for me for the remainder of the summer, as well as for anyone else who wants to learn the polys module, or just needs to debug it. I've also run into several bugs and inefficiencies in the module that I have taken the liberty of fixing.
Well that’s it for this post. If you want to follow my progress on the doctests, my branch is http://github.com/asmeurer/sympy/tree/polydocs-polys9. Note that the branch will be very unstable until I finish at some point at the end of this week or the beginning of the next.
http://mathhelpforum.com/advanced-math-topics/57562-l-1-space-integrable-functions.html
Thread:
1. L-1 space of integrable functions
If $f$ is in $L^1$ and $\delta > 0$, then the $L^1$ norm of $f(\delta x)-f(x)$ goes to zero as $\delta$ goes to $1$. I have been studying my text for two hours and still really don't understand it. Any thoughts?
By the way, I have some ideas--
If I can use the fact that the continuous functions of compact support are dense in $L^1$, that will guarantee that there exists a continuous function $g$ of compact support such that the $L^1$ norm of $f$ minus $g$ is less than any epsilon (epsilon greater than zero).
rogerpodger
2. Originally Posted by rogerpodger
If $f$ is in $L^1$ and $\delta > 0$, then the $L^1$ norm of $f(\delta x)-f(x)$ goes to zero as $\delta$ goes to $1$. I have been studying my text for two hours and still really don't understand it. Any thoughts?
By the way, I have some ideas--
If I can use the fact that the continuous functions of compact support are dense in $L^1$, that will guarantee that there exists a continuous function $g$ of compact support such that the $L^1$ norm of $f$ minus $g$ is less than any epsilon (epsilon greater than zero).
For $f\in L^1(\mathbb{R})$ and $\delta>0$, write $M_\delta f(x) = f(\delta x)$. You want to prove that $\|M_\delta f - f\|\to0$ as $\delta\to1$. I think that the idea of proving this by doing it first for the case when f is continuous with compact support is the best way to go about it.
The first step is to show that if $f,\,g\in L^1(\mathbb{R})$ then $\|M_\delta f - M_\delta g\| = \delta^{-1}\|f-g\|$. This is easily seen by making the substitution $y=\delta x$ in the integral $\int_{\mathbb{R}}|f(\delta x) - g(\delta x)|\,dx\ (= \|M_\delta f - M_\delta g\|)$. That shows that if $\delta$ is close to 1 and $f$ is close to $g$, then $M_\delta f$ is close to $M_\delta g$.
The next step is to show that if $g$ is continuous with compact support then $\|M_\delta g - g\|$ is small if $\delta$ is close to 1. This is an easy consequence of the fact that $g$ is uniformly continuous.
Now you just have to put those two steps together and use the triangle inequality: $\|f-M_\delta f\|\leqslant \|f-g\| + \|g-M_\delta g\| + \|M_\delta g - M_\delta f\|$. If $g$ is close enough to $f$ and $\delta$ is sufficiently close to 1, then all three terms on the right side of the inequality can be made arbitrarily small.
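For intuition, a small numerical sketch of the statement being proved (numpy assumed; the Gaussian and the integration grid are arbitrary choices of mine):

```python
import numpy as np

# ||f(delta*x) - f(x)||_1 should shrink as delta -> 1.
f = lambda x: np.exp(-x**2)
x = np.linspace(-50.0, 50.0, 200001)

for delta in (2.0, 1.5, 1.1, 1.01, 1.001):
    print(delta, np.trapz(np.abs(f(delta * x) - f(x)), x))
```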
3. Thanks!
I was able to solve the problem! Thanks for your help! You gave me somewhere to go without doing the problem for me! I really appreciate it!
rogerpodger
http://math.stackexchange.com/questions/142133/does-this-polynomial-evaluate-to-prime-number-whenever-x-is-a-natural-number/142140
Does this polynomial evaluate to prime number whenever $x$ is a natural number?
I am trying to prove or disprove the following statement:
$x^2-31x+257$ evaluates to a prime number whenever $x$ is a natural number.
First of all, I realized that we can't factorize this polynomial into real linear factors like $$ax^2+bx+c=a(x-x_1)(x-x_2)$$ because the discriminant is negative. Also, I used Excel to check the numbers from 1 to 1530 (just for checking), and yes, it gives me prime numbers. Unfortunately I don't know a formula which runs over every natural number input; maybe we may write $x_{k+1}=k+1$ for $k=0,\ldots,\infty$, but how can I use this recursive statement? I have tried putting $k+1$ in place of $x$ in this polynomial, and so I got
$$(k+1)^2-31k+257=k^2+2k+1-31k+257=k^2-29k+258$$ but now this last polynomial for $k=1$ evaluates to $259-29=230$, which is not prime, but are my steps correct? Please help me.
-
$(k+1)^2 - 31 k + 257$ is not the same polynomial. – Robert Israel May 7 '12 at 8:02
– pedja May 7 '12 at 8:03
Your question contains an incorrect statement. Every $ax^2 + bx + c$ polynomial has two complex roots $z_1$ and $z_2$ (not necessarily distinct) and can be written $a(x - z_1)(x - z_2)$. This follows directly from the Fundamental Theorem of Algebra which tells us that every degree $n$ polynomial has $n$ complex roots (if roots with multiplicity are multiply counted). It's true, of course, that these roots are not necessarily real. – Kaz May 7 '12 at 14:10
4 Answers
For $x=32$, we have $$x^2-31x+257=289=17^2.$$
-
nice counterexample – dato May 7 '12 at 8:05
If $x$ is divisible by $257$ then so is $x^2 - 31 x + 257$. More generally, if $f$ is any polynomial, then $f(x)$ is divisible by $f(y)$ whenever $x-y$ is divisible by $f(y)$. So there are no non-constant polynomials that produce primes for all positive integers.
-
i see,thanks very much – dato May 7 '12 at 8:16
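A quick script confirming both answers (a sketch assuming SymPy's `isprime` is available):

```python
from sympy import isprime

f = lambda x: x**2 - 31*x + 257
# First few composite values; f(32) = 289 = 17**2 is the counterexample above.
print([x for x in range(1, 300) if not isprime(f(x))][:3])  # [32, 33, 36]
# And the divisibility argument: f(257) is divisible by 257.
print(f(257) % 257)  # 0
```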
As Robert mentioned, there are easy counterexamples. However, you probably wonder why the first $32$ values are all prime. To explain that, notice that if you shift $x$ by $16$ then the polynomial becomes $f(x) = x^2 + x + 17$. Now one may apply the following theorem on prime-producing polynomials (generalizing the famous example $x^2+x+41$ noticed by Euler), noting that the ring of integers of $\mathbb Q(\sqrt{-67})$ is a UFD.
Theorem. The polynomial $f(x) = (x-\alpha)(x-\alpha') = x^2 + x + k$ assumes only prime values for $0 \le x \le k-2 \iff \mathbb Z[\alpha]$ is a PID.
Hint $(\Rightarrow)$: Show that all primes $p \le \sqrt{n},\; n = 1-4k$, satisfy $(n/p) = -1$, so no primes split/ramify.
For proofs, see e.g. Cohn, Advanced Number Theory, pp. 155-156, or Ribenboim, My numbers, my friends, 5.7 p. 108. Note: both proofs employ the bound $p < \sqrt{n}$ without explicitly mentioning that this is a consequence of the Minkowski bound, presumably assuming that is obvious to the reader based upon earlier results. Thus you'll need to read the prior sections on the Minkowski bound. Compare Stewart and Tall, Algebraic Number Theory and FLT, 3rd ed., Theorem 10.4, p. 176, where the use of the Minkowski bound is mentioned explicitly.
Alternatively see the self-contained paper [1] which proceeds a bit more simply, employing Dirichlet approximation to obtain a generalization of the Euclidean algorithm (the Dedekind-Rabinowitsch-Hasse criterion for a PID). If memory serves correct, this is close to the approach originally employed by Rabinowitsch when he first published this theorem in 1913.
[1] Daniel Fendel, Prime-Producing Polynomials and Principal Ideal Domains, Mathematics Magazine, Vol. 58, No. 4, 1985, pp. 204-210.
-
An easy way with polynomials in a single variable is to set the variable $x$ equal to the constant term - in this case $x=257$. If this turns out negative*, choose a suitably high multiple of the constant term (this assumes the leading coefficient is positive). The factorisation is obvious and you don't need to multiply large numbers.
*or equal to the constant when the constant is prime
-
http://mathoverflow.net/questions/58804?sort=newest
## Ostensibly different products on Ext-groups
The following is presumably not the greatest generality in which this question makes sense.
1. Given a ring $k$, graded-commutative if it helps, and a Hopf algebra $A$ over $k$, there is a Yoneda product making $\textrm{Ext}_A^*(k, k)$ into a ring (since $k$ is graded, this actually is a bigraded object, but I suspect the grading on $k$ is immaterial so I have suppressed it). From this construction we get an algebra structure on $\textrm{Ext}_A^*(k,k)$.
2. There is also a product defined the following way `$\newcommand{\E}{\textrm{Ext}}\E_A^*(k,k) \otimes_A \E^*_A(k,k) \to \E_{A \otimes_A A}^*(k\otimes_A k, k\otimes_A k) \to \E_A^*(k,k)$` using the external product on $\textrm{Ext}$.
3. One has a development of this idea: `$\newcommand{\E}{\textrm{Ext}}\E_A^*(k,k) \otimes_k \E^*_A(k,k) \to \E_{A \otimes_k A}^*(k\otimes_k k, k\otimes_k k) \to \E_A^*(k,k)$` using the coproduct structure $A \to A \otimes_k A$ to change the base-ring. This appears to give a $k$-algebra rather than an $A$-algebra structure on $\textrm{Ext}$.
Should we expect these products to coincide? If they don't always coincide, are there conditions that ensure they do?
-
They do coincide, in the examples I know. I am not sure how general this is, I think it is always true, but I don't know a reference or the key word to help you get started on a proof. – Sean Tilson Mar 18 2011 at 2:52
In 2. the tensor products should be taken over $k$ instead of $A$. That's for two reasons: (i) $Ext_A$ usually doesn't have an $A$-module structure, but is only a module over the center of $A$ and therefore in particular over $k$. (ii) In the definition of a Hopf-algebra the tensor products occuring in the product and coproduct are taken over $k$. – Ralph Mar 18 2011 at 8:07
## 1 Answer
Show that each of those products distributes over the others, and use Hilton-Eckmann (over a field, or for $A$ projective; in general, I don't know...). This then ends up being an exercise in using naturality.
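For reference, here is a minimal statement of the Eckmann-Hilton (a.k.a. Hilton-Eckmann) argument being invoked; this paraphrase is mine, not the answerer's:

```latex
% Eckmann-Hilton: two unital operations satisfying interchange coincide.
\textbf{Lemma (Eckmann--Hilton).}
Let $\cdot$ and $\ast$ be binary operations on a set $M$, with units
$1_{\cdot}$ and $1_{\ast}$ respectively, satisfying the interchange law
\[
  (a \cdot b) \ast (c \cdot d) = (a \ast c) \cdot (b \ast d).
\]
Then $1_{\cdot} = 1_{\ast}$, the two operations coincide, and the common
operation is commutative and associative.
```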
It can be done more concretely, too. For example, to show (1) and (2) are the same, one can show that if $E$ and $E'$ are $n$- and $m$-extensions of $k$ by $k$, then $E\circ E'$, the Yoneda composition, is equivalent as an $(n+m)$-extension to $E\otimes E'$; this also shows (1) is (3), because what I wrote as $E\otimes E'$, an iterated extension of $A$-modules, is really the result of restricting scalars along the coproduct of $A$ from the iterated extension $E\otimes E'$ of $A\otimes A$-modules. On the other hand, to show directly that (2) and (3) are the same, show that the computation of both in terms of the bar resolution is actually the same.
Later. Let $E$ and $E'$ be $n$- and $m$-extensions of $A$-modules of $k$ by $k$. For example, $E$ is $$0\to k\to E_{n-1}\to E_{n-2}\to\cdots\to E_0\to k$$ I'm going to grade this complex so that $E_0$ is in degree $0$, and write $\bar E$ for the result of simply taking away the rightmost $k$, so that $\bar E$ is a complex with homology $k$, concentrated in degree $0$. Then $\bar E\otimes\bar E'$ is a complex of $A\otimes A$-modules, which, by the Künneth formula, has homology $k\otimes k$ in degree $0$. Since in degree $n+m$ the complex $\bar E\otimes\bar E'$ has $k\otimes k$, we can add the $k\otimes k$ to it in degree $-1$ and obtain an $(n+m)$-extension of $k\otimes k$ by $k\otimes k$ in the category of $A\otimes A$-modules, which is what I denote above $E\otimes E'$. If we restrict scalars along the diagonal map, we get a complex $\Delta^*(E\otimes E')$, which is the product of $E$ and $E'$ according to your product (3).
The other important observation is the following: what I wrote $\bar E\otimes\bar E'$ is a double complex with the shape of a rectangle. By looking at it mildly hard, you can see that if you go from the module in the largest degree (which is $k\otimes k$) towards the one in degree zero by walking along one side and then along another, you get a simple complex which is more or less trivially isomorphic to what I would now write $\overline{E\circ E'}$, the truncation of the Yoneda composition (if you walk along the other two sides of the rectangle, you get $\pm\overline{E'\circ E}$, and pursuing this you get a very concrete proof of graded-commutativity). Finally, you have to notice that there are maps of extensions $E\circ E'\to E\otimes E'$ (and $\pm E'\circ E\to E\otimes E'$, of course).
Sorry if this came out rather messy: it is the kind of things that maximally adapts to an explanation face-to-face in front of a blackboard!
That (1) and (2) coincide is also shown here in a general nonsense manner.
-
that is always the way isn't it. – Sean Tilson Mar 18 2011 at 2:58
Thanks for the answer, which is both helpful and promising. I am afraid I don't understand it, being inexperienced with this sort of thing. What is $E \otimes E'$? I think if I understood this I would understand everything. Thanks again. – Ben Williams Mar 18 2011 at 3:53
The argument in the first paragraph also shows that the product is (graded) commutative, right? It's just like the proof that the fundamental group of a topological group is abelian. – John Palmieri Mar 18 2011 at 4:08
Thanks especially for the preprint, which I think meets my needs. I have qualms about the concrete construction though, because neither the functor $\otimes_A$ or $\otimes_k$ is exact (I assume $\otimes_k$ is what is meant by $\otimes$ in the explicit calculation). – Ben Williams Mar 18 2011 at 6:47
I suppose I should really try to understand the exterior product construction as a construction using the derived tensor product in the derived category of $A$-modules. – Ben Williams Mar 18 2011 at 6:52
http://math.stackexchange.com/questions/49918/integrate-frac1-sqrt1-x2
# Integrate $\frac{1}{\sqrt{1 - x^2}}$
I have to calculate $\int \frac{1}{\sqrt{1 - x^2}} \operatorname{d}x$ forwards, using known rules like partial integration or substitution.
What I'm not allowed to do is simply show that $\frac{\operatorname{d}}{\operatorname{d} x} \arcsin x = \frac{1}{\sqrt{1 - x^2}}$, but I don't see how I can use the proof backwards for integration either…
Any pointers?
-
– t.b. Jul 6 '11 at 19:05
## 4 Answers
There is a standard substitution in this sort of situation, namely $x=\sin\theta$, where we assume $-\pi/2 \le \theta \le \pi/2$. Then $dx=\cos\theta\, d\theta$, and $\sqrt{1-x^2}=\sqrt{1-\sin^2\theta}=\sqrt{\cos^2\theta}=\cos\theta$ since in our interval $\cos$ is non-negative.
Thus $$\int \frac{dx}{\sqrt{1-x^2}}=\int \frac{\cos\theta}{\cos\theta}d\theta=\int d\theta=\theta+C.$$
But $\theta=\arcsin x$. Now it's over.
Comment 1: Regrettably, it is commonplace in solutions not to mention $-\pi/2 \le \theta \le \pi/2$, and it is commonplace to not justify $\sqrt{\cos^2\theta}=\cos\theta$. So in an integration question, in most calculus courses, the solution would be even shorter.
Comment 2: Note how close this truly standard approach is, in this case, to the suggestion by David Speyer. The difference is that the calculus teacher would not notice. The same substitution is used in many other integrals that involve $\sqrt{1-x^2}$, and close relatives can be used for integrals that involve $\sqrt{a-bx^2}$ where $a$ and $b$ are positive.
-
OK. But, as I understand it the rule we have to use is: $\int f(x) \operatorname d x = \int g(h(x)) \cdot h'(x) \operatorname d x = \int g(t) \operatorname d t$ with $t := h(x)$. Here $g(x) := \frac{1}{\sqrt{1 - x^2}}$ and $h(x):= \sin x$ but: $g(h(x)) \cdot h'(x) = -\frac{\cos x}{\sqrt{1 - \sin^2 x}} \not= \frac{1}{\sqrt{1 - x^2}}$ – pascal Jul 6 '11 at 19:31
@pascal: If you are going to use this version of substitution, then you need to put $h(x)=\arcsin x$, not $\sin x$, and you have essentially broken the directive. But the substitution in the style I did it is essentially universal in calculus. Your text will have the method, maybe under the heading $u$-substitution if you are using an American text. It is early on in methods of integration. – André Nicolas Jul 6 '11 at 19:49
ah yes, as usual the homework required knowledge from a future lesson. Thanks for the help (well basically directly giving the answer :D) – pascal Jul 6 '11 at 19:54
@pascal: Giving the answer was hard to avoid, it is a one-step calculation. Conveniently it cleared a misunderstanding about the range of substitution techniques. – André Nicolas Jul 6 '11 at 20:00
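A quick SymPy confirmation of the result, checking it by differentiation (a sketch; output formatting may vary between versions):

```python
from sympy import symbols, integrate, sqrt

x = symbols('x')
antiderivative = integrate(1/sqrt(1 - x**2), x)
print(antiderivative)          # asin(x)
print(antiderivative.diff(x))  # 1/sqrt(1 - x**2)
```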
I suspect I am going to annoy your calculus teacher by writing this but:
Suppose that you are given the problem of computing $\int f(x) dx$. A little fairy comes and whispers in your ear that the answer is $g(x)$. Then you can compute this integral in a "forward" way by making the substitution $x = g^{-1}(u)$. When you do this, $f(x) dx$ should turn into $du$, which can be integrated without difficulty.
-
Try substituting $x = \cos(\theta)$.
-
Not $x = \sin \theta$? ;-) – Aryabhata Jul 6 '11 at 19:16
NO. Never!! ;-) – Pratik Deoghare Jul 6 '11 at 19:17
Using the substitution $x=\sin t$, $dx = \cos t ~dt$, we get:
$$\int \frac{dx}{\sqrt{1-x^2}} = \int \frac{\cos t ~dt}{\sqrt{1-\sin^2 t}} = \int \frac{\cos t ~dt}{\cos t} = \int dt = t.$$
By our substitution $x=\sin t$ we have $t=\arcsin x$.
Therefore
$$\int \frac{dx}{\sqrt{1-x^2}} = \arcsin x + C.$$
-
Which appears as (part of) another (already accepted) answer posted a whole year before. What is your game here? – Did Jul 15 '12 at 6:14
http://unapologetic.wordpress.com/2010/10/19/characters-of-permutation-representations/?like=1&source=post_flair&_wpnonce=55e0e6033e
# The Unapologetic Mathematician
## Characters of Permutation Representations
Let’s take $(\mathbb{C}S,\rho)$ to be a permutation representation coming from a group action on a finite set $S$ that we’ll also call $\rho$. It’s straightforward to calculate the character of this representation.
Indeed, the standard basis that comes from the elements of $S$ gives us a nice matrix representation:
$\displaystyle\rho(g)_s^t=\delta_{\rho(g)s,t}$
On the left $\rho(g)$ is the matrix of the action on $\mathbb{C}S$, while on the right it’s the group action on the set $S$. Hopefully this won’t be too confusing. The matrix entry in row $s$ and column $t$ is $1$ if $\rho(g)$ sends $s$ to $t$, and it’s $0$ otherwise.
So what’s the character $\chi_\rho(g)$? It’s the trace of the matrix $\rho(g)$, which is the sum of all the diagonal elements:
$\displaystyle\mathrm{Tr}\left(\rho(g)\right)=\sum\limits_{s\in S}\rho(g)_s^s=\sum\limits_{s\in S}\delta_{\rho(g)s,s}$
This sum counts up $1$ for each point $s$ that $\rho(g)$ sends back to itself, and $0$ otherwise. That is, it counts the number of fixed points of the permutation $\rho(g)$.
As a special case, we can consider the defining representation $V^\mathrm{def}$ of the symmetric group $S_n$. The character $\chi^\mathrm{def}$ counts the number of fixed points of any given permutation. For instance, in the case $n=3$ we calculate:
$\displaystyle\begin{aligned}\chi^\mathrm{def}\left((1)(2)(3)\right)&=3\\\chi^\mathrm{def}\left((1\,2)(3)\right)&=1\\\chi^\mathrm{def}\left((1\,3)(2)\right)&=1\\\chi^\mathrm{def}\left((2\,3)(1)\right)&=1\\\chi^\mathrm{def}\left((1\,2\,3)\right)&=0\\\chi^\mathrm{def}\left((1\,3\,2)\right)&=0\end{aligned}$
In particular, the character takes the value $3$ on the identity element $e\in G$, and the degree of the representation is $3$ as well. This is no coincidence; $\chi(e)$ will always be the degree of the representation in question, since any matrix representation of degree $n$ must send $e$ to the $n\times n$ identity matrix, whose trace is $n$. This holds both for permutation representations and for any other representation.
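The $S_3$ table above is easy to reproduce by brute force (a sketch in plain Python, representing a permutation as a tuple of images):

```python
from itertools import permutations

# The trace of a permutation matrix equals its number of fixed points.
for perm in permutations(range(3)):
    fixed = sum(1 for i, j in enumerate(perm) if i == j)
    print(perm, fixed)
# identity -> 3, the three transpositions -> 1, the two 3-cycles -> 0
```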
http://mathoverflow.net/questions/39802?sort=newest
## Homotopic quotients of simplicial sets as infinity-groupoids
Suppose $f:X \to Y$ is a function of sets. Then we can take the quotient $X/{\sim}$ by identifying $x \sim y$ if and only if $f(x)=f(y)$. Now suppose instead that $f:X \to Y$ is a map of simplicial sets. I want to emulate this homotopically, by adding a 1-simplex between $x$ and $y$ if there is a 1-simplex from $f(x)$ to $f(y)$ (and similarly on higher simplices). This is probably most clear if you think of $X$ and $Y$ as infinity groupoids (as indeed I have in mind). I want a way of "making guys equivalent if they're equivalent after applying $f$". So, if $X$ and $Y$ are Kan, adding a 1-simplex between two 0-simplices makes them "weakly isomorphic" (which is the correct thing to do, not just glue them together outright). Is there a standard construction for maps of (maybe Kan?) simplicial sets that does this?
-
Maybe I'm being dumb... would just taking the image of f as a subobject of $Y$ accomplish what I want? – David Carchedi Sep 23 2010 at 23:36
Maybe you should take the union of the path components of $Y$ which touch the image of $f$? Not that I really understand what you want. – Charles Rezk Sep 24 2010 at 0:12
Maybe the mapping cylinder of $f:X\to im(f)$? – Kevin Walker Sep 24 2010 at 0:32
## 4 Answers
You might like to look at the 1978 thesis of Nick Ashley on "Simplicial $T$-complexes and crossed complexes: a nonabelian version of a theorem of Dold and Kan." available from Esquisses Math. 1978 at http://ehres.pagesperso-orange.fr/Cahiers/Ctgdc.htm
He considers a filtered Kan complex $K_*$ and a natural homotopy relation to give a Kan fibration $p: K_* \to \rho(K_*)$, where $\rho(K_*)$ is a simplicial $T$-complex, i.e. a strong form of Kan complex with unique "thin" fillers. Modifications of this should give you strict constructions of the kind you want.
I'd think the construction in question is the homotopy coimage of $f$ (it's unfortunately called "coimage" even though it behaves like the image).
First one forms the homotopy Cech nerve
$$C(f) = \left( \cdots X \times_Y X \times_Y X \stackrel{\to}{\stackrel{\to}{\to}} X \times_Y X \stackrel{\to}{\to} X \right)$$
This is the internal groupoid object that encodes the $\infty$-equivalence relation "is equivalent in $Y$" on the elements of $X$.
Forming its homotopy colimit
$$coim(f) := \lim_{\to} C(f)$$
produces the homotopy quotient of $X$ by this equivalence relation.
As an example, take $\mathbf{B}G$ the one-object groupoid of a group $G$ and $* \to \mathbf{B}G$ the point inclusion. One wants to see that the homotopy image of the point inclusion $* \to \mathbf{B}G$ is not just the point, but is $*//G$, i.e. $\mathbf{B}G$ itself, because there is $G$ worth of ways for the point to be equivalent to itself after inclusion into $\mathbf{B}G$.
So one computes the homotopy Cech nerve and finds the familiar
$$C(* \to \mathbf{B}G) = \left( \cdots G \times G \times G \stackrel{\to}{\stackrel{\to}{\to}} G \times G \stackrel{\to}{\to} G \right)$$
but now regarded as a simplicial object in $\infty Grpd$. This is the diagram that encodes the action of $G$ on the point. Its homotopy colimit is indeed again $\mathbf{B}G$, so that we find
$$coim(* \to \mathbf{B}G) = \mathbf{B}G$$
That this comes out this way is an example of Giraud's axioms at work, since $\infty Grpd$ happens to be an $\infty$-topos: this implies that every $\infty$-groupoid object (simplicial diagram as above) in $\infty Grpd$ is effective.
Urs. Did you mean lim with a left pointing arrow? – Tim Porter Sep 24 2010 at 16:41
Tim, that was a typo. Thanks. Have fixed it now. – Urs Schreiber Sep 25 2010 at 10:07
This answer is perhaps a gloss on David's one. It is often useful to replace taking a quotient by forming the equivalence relation as a groupoid. Thus the initial situation you describe has the classical equivalence-relation-from-a-function form. This will work in any category with pullbacks as it is the pullback of f along itself. In a homotopy situation, such as you need, the analogue will be the homotopy pullback of $f$ along itself.
This does not form the quotient as such, but is, I maintain, better (especially in the presence of differential structures, for instance). It corresponds to the idea that was sketched in the question, but is natural, functorial, and so less hassle (<- technical categorical term meaning 'less hassle'!). It is also going to give results that do not depend on the homotopy class of $f$, and that is often important, especially if you are thinking of the simplicial sets as being weak infinity groupoids or similar. I believe there are extensions to quasicomplexes, but I do not have sources with me to check at the moment or to give chapter and verse.
This construction not only says two simplices in $Y$ are to be thought of as being the same but records WHY, and that is important.
(Edit: Thanks Tom. I should have said 'It is also going to give results that only depend on the homotopy class of $f$ ..')
Did you perhaps mean to say "results that depend only the homotopy class of f"? – Tom Goodwillie Sep 25 2010 at 20:13
You are looking at the coequaliser of the kernel pair, so my guess would be to take the homotopy pullback of $f$ along itself, then look at the nerve of the groupoid $X\times_Y X \rightrightarrows X$ in $sSet$ that this gives rise to, then form the diagonal (= hocolim) of this bisimplicial set. I guess this comes with a map to $Y$, but I haven't checked.
If you take the (homotopy) Cech nerve of X --> Y then apply hocolim I believe you just get Rezk's suggestion above! I guess this is the homotopy version of the fact that X/~ identifies with im(f). – Dustin Clausen Sep 24 2010 at 1:22
http://physics.stackexchange.com/questions/1230/whats-keeping-us-from-simply-redefining-avogadros-number-the-mole-as-a-defin
# What's keeping us from simply redefining Avogadro's Number / the Mole as a definite integer?
This might be a question to ask on a Chemistry site, but because there is a lot of talk about redefining many units of measurement in terms of Avogadro's Number / the Mole, I was wondering why we don't just redefine the Mole to infinite precision, since it is basically inherently an integer.
This might be the only unit/physical constant that can be defined to infinite precision. The only unit that is an "integer".
I guess it's not really a physical constant, per se, in that it is not a property of nature. But then we can easily create a definition of the Kilogram that doesn't change over time as exactly something-something moles worth of Carbon-12.
Firstly, you need an experiment to count the number of atoms precisely to 23 significant figures to know that definite integer... – KennyTM Nov 23 '10 at 6:46
I'm asking why we can't just go the other way around... specify an exact integer to infinite precision and make the definition of 12 grams equal to the mass of that amount of Carbon-12 atoms. – Justin L. Nov 23 '10 at 6:58
@Justin: Then the values in the past would all become wrong, if that exact number turns out not to fall within the error bound of the previous definition. – KennyTM Nov 23 '10 at 7:53
@KennyTM: No, this proposal has been seriously envisaged by metrologists. Your argument would also have applied to previous redefinitions of the second and metre. – Frédéric Grosshans Nov 23 '10 at 10:16
## 6 Answers
There was a proposal in 2006 trying to define $N_A$ as an exact number[1,2]: $$N_A^* = 84\;446\;888^3 = 6.022\;141\;410\;704\;090\;840\;990\;72 \times 10^{23}$$ The problem? This value is incorrect, as the currently most accurate result is[3] $$N_A = 6.022\;140\;84(18) \times 10^{23}$$ i.e. $N_A^*$ is now 3 s.d. away from $N_A$. As I have commented, if we randomly pick a number within the current error bound and call it $N_A$, we risk the problem that a better experiment for the old definition will invalidate the proposed value. To be safe about the validity of that number, you need to produce an equally accurate experiment to show that it is actually valid (like the 299792458 m/s in the definition of the metre, and 9192631770 Hz in the definition of the second).
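As a quick check of the arithmetic, a few lines of Python (not part of the original answer) reproduce both the cube and the size of the discrepancy:

```python
# The 2006 Fox-Hill proposal: N_A* = 84446888**3, an exact integer.
proposed = 84446888**3
print(proposed)  # 602214141070409084099072, i.e. ~6.0221414e23

# Most accurate measurement quoted above: 6.02214084(18) x 10^23.
measured, sigma = 6.02214084e23, 0.00000018e23
print((proposed - measured) / sigma)  # ~3.2 standard deviations away
```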
Also, the rationale for redefinition of SI base unit always involve that the current one isn't accurate enough or hard to realize:
• second (1967):
• the definition ... is inadequate for the present needs of metrology
• meter (1960):
• the international Prototype does not define the metre with an accuracy adequate for the present needs of metrology,
• it is moreover desirable to adopt a natural and indestructible standard,
• meter (1983):
• the present definition does not allow a sufficiently precise realization of the metre for all requirements
• candela (1979):
• the time has come to give the candela a definition that will allow an improvement in both the ease of realization and the precision of photometric standards, ...
Is the current definition, in which $N_A$ reduces to the number of atoms in 12 grams of carbon-12, not accurate enough or hard to realize? I don't think so; 9 significant figures are already very accurate. However, the redefinition of the mole will be put on the agenda in 2011 (24th CGPM). One proposal is to define[4] $$N_A \overset{\underset{\mathrm{def}}{}}{=} 6.022\;141\;5 \times 10^{23} \mathrm{mol}^{-1},$$ to decouple the kilogram from the definition of the mole. So if this path is taken, the only thing that keeps us from defining it as a definite number to 10 significant figures is that "the conference hasn't started yet".
But infinite precision? That is a long way beyond what we can reach, or need, for now.
Ref:
1. Ronald Fox and Theodore Hill, A Proposed Exact Integer Value for Avogadro's Number. http://arxiv.org/abs/physics/0612087
2. Ronald Fox and Theodore Hill, An Exact Value for Avogadro's Number. http://www.americanscientist.org/issues/pub/2007/2/an-exact-value-for-avogadros-number/3
3. B. Andreas, Y. Azuma, G. Bartl, et al., An accurate determination of the Avogadro constant by counting the atoms in a 28Si crystal. http://arxiv.org/abs/1010.2317
4. Ian M Mills, Peter J Mohr, Terry J Quinn, et al., Redefinition of the kilogram, ampere, kelvin and mole: a proposed approach to implementing CIPM recommendation 1 (CI-2005). http://iopscience.iop.org/0026-1394/43/3/006
-1: This is ridiculous--- the definition is a definition. If you redefine the kilogram, it becomes correct. – Ron Maimon Oct 22 '11 at 6:15
@RonMaimon If you adopt a "wrong" value as the definition you introduce a new error in every measurement that went before. Certainly few of those earlier measurements had enough precision for this to matter, but there were some. For continuity's sake you should (indeed, must) ensure that your new definition agrees with the old one to the highest available precision. – dmckee♦ Jul 16 '12 at 14:04
The problem is that you want your unit definitions to be realizable - so specifying "1 mol is (some definite long number of) molecules; 1 gram is 1/12 of the mass of one mol of $^{12}C$" is nice for your thought process, but as long as there is no practical way to count molecules at such scales to a precision of better than $10^{-9}$ (which I think is the precision of the kilogram standard), there is no operational advantage to it.
There is the advantage that you don't have to rely on Paris for your system of units. – Ron Maimon Oct 22 '11 at 6:16
Basically, you are proposing to redefine the kilogram, and your approach has been proposed and recently (in October 2010) abandoned ( http://en.wikipedia.org/wiki/Kilogram#Carbon-12 ). I think the reason why the Watt-balance approach has been preferred for the future definition of the kilogramme was mainly technological: it is more precise and would allow a more practical realization of the kilogramme.
Basically, there's no reason why we couldn't redefine the mole as a simple integer number of atoms or molecules. In fact, as other users have mentioned, there are a lot of people who'd like to do that.
On the other hand, that's not how chemists actually use the mole in practice. You simply can't count 6×10^23 atoms or molecules, nor do you need to. What is important for chemists is to know (for example) that there are the same number of atoms in 56 grams of iron as in 12 grams of carbon, and so on for all the other elements. It's not important to know exactly what that number is, just that it's the same number, and, for much of the history of chemistry, we had absolutely no idea what the number was.
I should point out as well that the mole is not the only unit which is an "integer". If you take the coulomb as a unit of electric charge, that should be equal to an integer number of elementary charges, shouldn't it? Actually, it's not, for historical reasons, and there doesn't seem to be much enthusiasm for making it an integer number of elementary charges either.
You can find my paper discussing this in more technical terms at http://precedings.nature.com/documents/5138/version/1
No one can prove with 100% certainty that there will be no benefits. In every discipline, acquiring a higher degree of precision is advancement. Unless there is an uncertainty principle forbidding a more precise definition, the idea of striving for higher precision should be encouraged. The fact that no one at the moment knows how to achieve a higher precision makes the task a perfect PhD project!
In 2007, I submitted a letter-to-the-editors (AmSci, May/Jun 2007, Vol. 95 Issue 3, p195). I suggested that the "half-life" decay of a mole of a radioactive element should end with a whole atom; thus $2^0 = 1$. Going in reverse, the current value of Avogadro's Number is approximated by $2^{79}$ = 604 462 909 807 314 587 353 088.
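A quick sanity check of that power of two (added here, not part of the letter) shows it is indeed within about 0.4% of the measured Avogadro constant:

```python
# 79 halvings take ~6e23 atoms down to a single atom, since 2**79 ~ N_A.
print(2**79)                   # 604462909807314587353088
print(2**79 / 6.02214084e23)   # ~1.0037, i.e. within ~0.4% of N_A
```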
In a 1996 copyrighted pamphlet, "Mole, Bits, and Cubes" (TXu000593728), I submitted that this binary definition should be the value of Avogadro's Number. It is invariant, free of dimensionality, and free of all the physical measurements currently being used to "hone in" on a value that seems far too prejudiced, considering it is all tangled up with a chunk of metal that needs periodic cleaning that removes a few bits of matter in the process and is subject to experimental errors in measurement (80 parts in a billion? and at what accuracy).
With $N_0 = 2^{79}$, the kilogram is as good as measurements can determine the purity, number, and atomic mass of the units of whatever material (e.g., the silicon-28 sphere that is being proffered) is being measured and the standard on which the standards folks make the reference point. Thus, a binary mole of absolutely pure carbon-12 would currently weigh precisely 12.0 grams and always be so until they changed the reference point from C-12. At least Avogadro's "Number" would finally be a "constant" - one and the same.
What I see is a great effort (by those who will be making a decision) not to alter the massive weights-and-measures structure that is in place throughout the world of commerce. In that realm, Avogadro's Number/constant is not a factor - only the size of the "king's foot"; in this case "Le Grand K" or an article representing it. Science takes second place in this realm; but then, it is scientists who are doing the "defining"!
See http://pages.swcp.com/~jmw-mcw/binary_mole.htm for definition of the binary mole.
http://mathhelpforum.com/trigonometry/105485-ok-im-stumpted-tan-cos-2-a.html
# Thread:
1. ## Ok, I'm Stumped: Tan - Cos^2
Hey, I have a problem with an equation and got stuck when I reached a trig identity. Hope you can help:
$4.6\tan(x)-0.46\cos^2(x)=50$
2. Originally Posted by Jonnyw2k
Hey, I have a problem with an equation and got stuck when I reached a trig identity. Hope you can help:
$4.6\tan(x)-0.46\cos^2(x)=50$
What was the original problem? Were you expected to arrive at a solution algebraically?
... as it is, it looks as though you'll need to use technology to solve it.
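For instance, here is a minimal pure-Python sketch (one possible use of "technology"; bracket choice is mine) that bisects for the first positive root:

```python
import math

# f(x) = 4.6*tan(x) - 0.46*cos(x)**2 - 50; on (0, pi/2) tan grows from 0
# to +infinity, so a sign change (and hence a root) must occur there.
def f(x):
    return 4.6 * math.tan(x) - 0.46 * math.cos(x) ** 2 - 50

lo, hi = 1.4, 1.5          # f(1.4) < 0 < f(1.5), so a root lies between
for _ in range(60):        # plain bisection down to ~machine precision
    mid = (lo + hi) / 2
    if f(mid) < 0:
        lo = mid
    else:
        hi = mid
print(lo)                  # ~1.479, close to arctan(50/4.6) as expected
```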
http://mathoverflow.net/questions/40240/elementary-reference-for-algebraic-groups/40415
## Elementary reference for algebraic groups
I'm looking for a reference on algebraic groups which requires only knowledge of basic material on the theory of varieties which you could find in, for example, Basic Algebraic Geometry 1 by Shafarevich.
If possible, it would be nice to find such a book which also discusses representation theory, but that's not necessary.
Milne has lectures notes which are probably excellent (I haven't looked at them). Whatever you read, the beef is str. theory of reductive alg. gps. In absence of schemes, some things in char. $> 0$ are a bit clunky in comparison with char. 0 because kernels can be non-smooth; e.g., ${\rm{SL}}_p \rightarrow {\rm{PGL}}_p$ in char. $p > 0$ with "kernel" $\mu_p$, akin to purely insep. isogeny of ell. curves. When you learn schemes and sheaves, some awkward things with quotients and non-smooth subgroups (and centralizers, and center, and so on) in char. $> 0$ will become more straightforward. – BCnrd Sep 28 2010 at 2:04
I have to recuse myself from this question, having written an exposition of the Borel/Bass lecture notes in my carefree (and tenure-free) youth. All books mentioned here are useful, but for varied purposes and using geometry at different levels. One concrete early motivation for the algebraic group mixture of group theory and algebraic geometry is the Kolchin-Borel-Chevalley work showing the intrinsic nature of the multiplicative Jordan decomposition. This is elementary (albeit technical) but not conceptually obvious. Quotients and such get far more sophisticated. – Jim Humphreys Sep 28 2010 at 17:43
@Jim: Could you please provide a reference? – David Corwin Sep 29 2010 at 5:27
For discussion of Jordan decomposition in linear algebraic groups, look at the earlier MO post #30042 from June 30. – Jim Humphreys Sep 29 2010 at 16:35
## 7 Answers
If you're interested in the theory of linear algebraic groups, Linear Algebraic Groups by Humphreys is a great book. The other two standard references are the books (with the same name) by Springer and Borel. All of the algebraic geometry you need to know is built from scratch in any of those books.
Humphreys really fits the bill here. Especially since you are hoping it discusses rep theory. – B. Bischof Sep 28 2010 at 4:50
My favourite: Waterhouse's Introduction to Affine Group Schemes! It is very friendly and clearly written and gives you the complete basic package on Affine Group Schemes in just 150 pages. With the firm grounding you get from this book you can gather whatever else you need to know (if anything) by skimming through other literature like the books by Humphreys, Springer, Demazure/Gabriel or articles.
For a very down to earth approach with lots of matrix examples, which give you a sense of reality and material for hands-on practice (e.g. while reading Waterhouse), check out Ulf Rehmann's five or six lectures here. Don't mind the unusual introduction - it's from a K-theory school...
Waterhouse somehow managed to write an entire book on algebraic groups without discussing reductive or semisimple groups. I suppose that he was trying to make some kind of philosophical point, but the result is not helpful for most people who use algebraic groups. – Nikita Sep 28 2010 at 19:50
The first book I read on algebraic groups was An Introduction to Algebraic Geometry and Algebraic Groups by Meinolf Geck. As I recall, the book includes a lot of examples about the classical matrix groups, and gives elementary accounts of such things as computing the tangent space at the identity to get the Lie algebra. Whatever algebraic geometry Geck needs (which isn't much) he develops from scratch in Chapter 1. This book is really aimed at advanced undergraduates, so is much more elementary than the previously suggested titles.
There's a little bit towards the end of Geck's book about (virtual) characters for finite groups of Lie type.
My favorite references are Springer's book and Milne's notes, both of which have already been mentioned. However, if you're encountering these things for the first time, I recommend reading "Lectures On Lie Groups And Lie Algebras" by Carter, Segal, and Macdonald. It is based on short lecture courses by the authors -- Carter discusses the basic of semisimple Lie algebras, Segal covers compact Lie groups, and Macdonald covers the basics of (linear) algebraic groups. It omits many proofs and most technicalities, but it gives a very nice, clutter-free overview of the subject.
Thanks. The main reason I'm wondering is that I'm taking both algebraic geometry and representation theory of Lie groups/algebras (i.e. two courses) this semester, and in my algebraic geometry class, I might do a project which combines the two, and looks either at Lie groups, or at groups over more general fields, from a specifically algebro-geometric perspective, as opposed to the primarily analytic perspective one encounters in the general theory of Lie groups. – David Corwin Sep 28 2010 at 6:12
If you have gone through Shafarevich, you don't really need it to be "elementary". Go for Jantzen's Representations of Algebraic Groups. Part 1 is groups and no representation theory and Part 2 has as much representation theory as you may ever need...
Although RAGS is an amazing, thorough book, I think it's only worth diving into if you're interested in the positive-characteristic story. If you're interested in the characteristic 0 story, RAGS is overwhelming (indeed, I first tried to learn the characteristic 0 story out of RAGS, and had to give up and come back again later when I had a better feeling for the topic). – Chuck Hague Sep 28 2010 at 12:50
Maybe, but most weights-roots business should not even be studied in the context of algebraic groups. Any book on Lie algebras is preferable. Then the first true "Algebraic Group Representation Theory" fact is BWB and RAGS has a decent expo of it (if I remember correctly - I won't go into library to check)... – Bugs Bunny Sep 28 2010 at 13:49
Just note that by Shafarevich I said "Basic Algebraic Geometry 1," which has no schemes. – David Corwin Sep 29 2010 at 21:27
RAGS will teach you schemes then... – Bugs Bunny Sep 30 2010 at 10:14
If you are interested in algebraic groups over complex and real numbers only, try Onishchik and Vinberg, Lie Groups and Algebraic Groups, Springer-Verlag 1990. This book contains also representation theory. (Then later you will have to learn the characteristic p case and algebraic groups over non-closed fields, say from Springer's book and Milne's lecture notes.)
The following is an emended excerpt from my answer to a related question¹ about books about Lie groups for someone with algebraic geometry background. I might add that Procesi's book ideally fits your goals, since you are also interested in representation theory.
For someone with algebraic geometry background, I would heartily recommend Procesi's Lie groups: An approach through invariants and representations. It is masterfully written, with a lot of explicit results, and covers a lot more ground than Fulton and Harris. If you like "theory through exercises" approach then Vinberg and Onishchik, Lie groups and algebraic groups is very good (the Russian title included the word "seminar" that disappeared in translation).
If you aren't put off by a bit archaic notation and language, vol 2 of Chevalley's Lie groups is still good.
¹That question is exactly one year old and, according to Anton's MO birthday post on meta, was the second "real" question asked on MathOverflow.
http://medlibrary.org/medwiki/Cohomology
# Cohomology
In mathematics, specifically in homology theory and algebraic topology, cohomology is a general term for a sequence of abelian groups defined from a co-chain complex. That is, cohomology is defined as the abstract study of cochains, cocycles, and coboundaries. Cohomology can be viewed as a method of assigning algebraic invariants to a topological space that has a more refined algebraic structure than does homology. Cohomology arises from the algebraic dualization of the construction of homology. In less abstract language, cochains in the fundamental sense should assign 'quantities' to the chains of homology theory.
From its beginning in topology, this idea became a dominant method in the mathematics of the second half of the twentieth century; from the initial idea of homology as a topologically invariant relation on chains, the range of applications of homology and cohomology theories has spread out over geometry and abstract algebra. The terminology tends to mask the fact that in many applications cohomology, a contravariant theory, is more natural than homology. At a basic level this has to do with functions and pullbacks in geometric situations: given spaces X and Y, and some kind of function F on Y, for any mapping f : X → Y composition with f gives rise to a function F ∘ f on X. Cohomology groups often also have a natural product, the cup product, which gives them a ring structure. Because of this feature, cohomology is a stronger invariant than homology, as it can differentiate between certain algebraic objects that homology cannot.
## Definition
In algebraic topology, the cohomology groups for spaces can be defined as follows (see Hatcher). Given a topological space X, consider the chain complex
$\cdots \rightarrow C_n \stackrel{ \partial_n}{\rightarrow}\ C_{n-1} \rightarrow \cdots$
as in the definition of singular homology (or simplicial homology). Here, the $C_n$ are the free abelian groups generated by formal linear combinations of the singular $n$-simplices in $X$ and $\partial_n$ is the $n$th boundary operator.
Now replace each $C_n$ by its dual space $C_n^* = \text{Hom}(C_n, G)$, and $\partial_n$ by its transpose
$\delta^n: C_{n-1}^* \rightarrow C_{n}^*$
to obtain the cochain complex
$\cdots \leftarrow C_{n}^* \stackrel{ \delta^n}{\leftarrow}\ C_{n-1}^* \leftarrow \cdots$
Then the $n$th cohomology group with coefficients in $G$ is defined to be $\text{Ker}(\delta^{n+1})/\text{Im}(\delta^n)$ and denoted by $H^n(C; G)$. The elements of $C_n^*$ are called singular $n$-cochains with coefficients in $G$, and the $\delta^n$ are referred to as the coboundary operators. Elements of $\text{Ker}(\delta^{n+1})$ and $\text{Im}(\delta^n)$ are called cocycles and coboundaries, respectively.
Note that the above definition can be adapted for general chain complexes, and not just the complexes used in singular homology. The study of general cohomology groups was a major motivation for the development of homological algebra, and has since found applications in a wide variety of settings (see below).
Given an element $\varphi$ of $C_n^*$, it follows from the properties of the transpose that $\delta^{n+1}(\varphi) = \varphi \circ \partial_{n+1}$ as elements of $C_{n+1}^*$. We can use this fact to relate the cohomology and homology groups as follows. Every element $\varphi$ of $\text{Ker}(\delta^{n+1})$ has a kernel containing the image of $\partial_{n+1}$. So we can restrict $\varphi$ to $\text{Ker}(\partial_n)$ and take the quotient by the image of $\partial_{n+1}$ to obtain an element $h(\varphi)$ in $\text{Hom}(H_n, G)$. If $\varphi$ is also contained in the image of $\delta^n$, then $h(\varphi)$ is zero. So we can take the quotient by $\text{Im}(\delta^n)$, and obtain a homomorphism
$h: H^n (C; G) \rightarrow \text{Hom}(H_n(C),G)$
It can be shown that this map h is surjective, and that we have a short split exact sequence
$0 \rightarrow \ker h \rightarrow H^n(C; G) \stackrel{h}{\rightarrow} \text{Hom}(H_n(C),G) \rightarrow 0$
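As a quick illustration of this sequence (an added example using standard facts), take $X = S^1$: both $H_0(S^1)$ and $H_1(S^1)$ are the free abelian group $\mathbb{Z}$, so the kernel of $h$ vanishes on free homology (this is the universal coefficient theorem) and the sequence collapses to an isomorphism:
$$H^n(S^1; G) \cong \text{Hom}(H_n(S^1), G) \cong G \quad \text{for } n = 0, 1.$$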
## History
Although cohomology is fundamental to modern algebraic topology, its importance was not seen for some 40 years after the development of homology. The concept of dual cell structure, which Henri Poincaré used in his proof of his Poincaré duality theorem, contained the germ of the idea of cohomology, but this was not seen until later.
There were various precursors to cohomology. In the mid-1920s, J.W. Alexander and Solomon Lefschetz founded the intersection theory of cycles on manifolds. On an n-dimensional manifold M, a p-cycle and a q-cycle with nonempty intersection will, if in general position, have intersection a (p + q − n)-cycle. This enables us to define a multiplication of homology classes
$H_p(M) \times H_q(M) \to H_{p+q-n}(M).$
Alexander had by 1930 defined a first cochain notion, based on a $p$-cochain on a space $X$ having relevance to the small neighborhoods of the diagonal in $X^{p+1}$.
In 1931, Georges de Rham related homology and exterior differential forms, proving De Rham's theorem. This result is now understood to be more naturally interpreted in terms of cohomology.
In 1934, Lev Pontryagin proved the Pontryagin duality theorem; a result on topological groups. This (in rather special cases) provided an interpretation of Poincaré duality and Alexander duality in terms of group characters.
At a 1935 conference in Moscow, Andrey Kolmogorov and Alexander both introduced cohomology and tried to construct a cohomology product structure.
In 1936 Norman Steenrod published a paper constructing Čech cohomology by dualizing Čech homology.
From 1936 to 1938, Hassler Whitney and Eduard Čech developed the cup product (making cohomology into a graded ring) and cap product, and realized that Poincaré duality can be stated in terms of the cap product. Their theory was still limited to finite cell complexes.
In 1944, Samuel Eilenberg overcame the technical limitations, and gave the modern definition of singular homology and cohomology.
In 1945, Eilenberg and Steenrod stated the axioms defining a homology or cohomology theory. In their 1952 book, Foundations of Algebraic Topology, they proved that the existing homology and cohomology theories did indeed satisfy their axioms.[1]
In 1948 Edwin Spanier, building on work of Alexander and Kolmogorov, developed Alexander–Spanier cohomology.
## Cohomology theories
### Eilenberg–Steenrod theories
A cohomology theory is a family of contravariant functors from the category of pairs of topological spaces and continuous functions (or some subcategory thereof such as the category of CW complexes) to the category of Abelian groups and group homomorphisms that satisfies the Eilenberg–Steenrod axioms.
Some cohomology theories in this sense are singular cohomology, Čech cohomology, and Alexander–Spanier cohomology.
### Generalized cohomology theories
See also: List of cohomology theories
When one axiom (the dimension axiom) is relaxed, one obtains the idea of generalized cohomology theory or extraordinary cohomology theory; this allows theories based on K-theory and cobordism theory. There are others, coming from stable homotopy theory. In this context, singular homology is referred to as ordinary homology.
The cohomology of a point is called the coefficients of the theory. The coefficients are very important, and are used to compute the cohomology of other spaces using the Atiyah–Hirzebruch spectral sequence. This implies uniqueness, in the sense that if there is a natural transformation between two generalized cohomology theories, which is an isomorphism for a one point space, then it is an isomorphism for all CW complexes. Nevertheless, unlike the case of ordinary cohomology theories, the coefficients alone do not determine the theory in the sense that there might be more than one theory with given coefficients.
One reason that generalized cohomology theories are interesting is that they are representable functors if one works in a larger category than CW complexes; namely, the category of spectra.
### Other cohomology theories
Theories in a broader sense of cohomology include:[2]
• André–Quillen cohomology
• BRST cohomology
• Bonar–Claven cohomology
• Bounded cohomology
• Coherent cohomology
• Crystalline cohomology
• Cyclic cohomology
• Deligne cohomology
• Dirac cohomology
• Étale cohomology
• Flat cohomology
• Galois cohomology
• Gel'fand–Fuks cohomology
• Group cohomology
• Harrison cohomology
• Hochschild cohomology
• Intersection cohomology
• Khovanov Homology
• Lie algebra cohomology
• Local cohomology
• Motivic cohomology
• Non-abelian cohomology
• Perverse cohomology
• Quantum cohomology
• Schur cohomology
• Spencer cohomology
• Topological André–Quillen cohomology
• Topological cyclic cohomology
• Topological Hochschild cohomology
• Γ cohomology
## Notes
1. Spanier, E. H. (2000) "Book reviews: Foundations of Algebraic Topology" Bulletin of the American Mathematical Society 37(1): pp. 114–115
## References
• Hatcher, A. (2001) Algebraic Topology, Cambridge University Press, Cambridge, England, p. 198, ISBN 0-521-79160-X and ISBN 0-521-79540-0
• Hazewinkel, M. (ed.) (1988) Encyclopaedia of Mathematics: An Updated and Annotated Translation of the Soviet "Mathematical Encyclopaedia", Reidel, Dordrecht, Netherlands, p. 68, ISBN 1-55608-010-7
• E. Cline, B. Parshall, L. Scott and W. van der Kallen, (1977) "Rational and generic cohomology" Inventiones Mathematicae 39(2): pp. 143–163
• Asadollahi, Javad and Salarian, Shokrollah (2007) "Cohomology theories for complexes" Journal of Pure & Applied Algebra 210(3): pp. 771–787
http://math.stackexchange.com/questions/1099/mandelbrot-like-sets-for-functions-other-than-fz-z2c/39534
# Mandelbrot-like sets for functions other than $f(z)=z^2+c$?
Are there any well-studied analogs to the Mandelbrot set using functions other than $f(z)= z^2+c$ in $\mathbb{C}$?
## 11 Answers
A variant on the M-set can be defined in straightforward fashion for any iterated function in the complex plane parameterized by a single initial value. For instance, slight modifications produce the tricorn and burning ship fractals. However, most such variations tend to be either boring, incoherent, or obviously derived from the Mandelbrot set--nothing particularly novel.
Some obvious patterns also emerge quickly from many variations: Real exponents alter symmetry, imaginary exponents cause asymmetric twisting, disreputable functions produce misshapen lumps like the burning ship, and so on.
On the other hand, it's more difficult than you might expect to avoid the M-set in the first place: For a well known example, consider the Newton-Raphson method for approximating roots, which can be generalized to the complex plane in straightforward fashion. For some polynomial, a point in the complex plane may or may not converge to a particular root after some number of Newton-Raphson iterations. In most cases, plotting the regions of divergence and convergence per root produces a fractal. Modifying the polynomial and the iteration formula produces effects analogous to modifying the constant and iteration function, respectively, of a standard Julia set, and in fact it turns out that Julia fractals can be considered a special case of Newton-Raphson fractals.
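To make the Newton-Raphson construction concrete, here is a minimal Python sketch (an assumed setup, using $p(z) = z^3 - 1$; the answer names no specific polynomial) that classifies starting points by the root they converge to:

```python
import cmath

# Basins of attraction for Newton's method on p(z) = z^3 - 1, whose
# roots are the three cube roots of unity.
ROOTS = [cmath.exp(2j * cmath.pi * k / 3) for k in range(3)]

def newton_basin(z, max_iter=50, tol=1e-9):
    """Index of the root z converges to under Newton iteration, or -1."""
    for _ in range(max_iter):
        dp = 3 * z * z                  # p'(z)
        if dp == 0:
            return -1                   # critical point: no Newton step
        z -= (z**3 - 1) / dp            # one Newton-Raphson step
        for i, r in enumerate(ROOTS):
            if abs(z - r) < tol:
                return i
    return -1

# Colouring a grid of starting points by newton_basin(...) draws the
# fractal boundaries between the three basins of attraction.
print(newton_basin(1 + 1j), newton_basin(-1 + 0.5j))
```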
Analogous structure can also be found elsewhere than the complex plane. The period-doubling behavior of the logistic map relates to the behavior of points on the real line of the M-set, and islands of stability for the logistic map correspond to the positions of mini-Mandelbrots. Elaborate Julia-like structures can also be found in the quaternions, although unfortunately visualization of 4-dimensional fractals is somewhat challenging.
My suspicion is that Julia-like structure will arise for any fractal defined by similar means, e.g., classifying points into sets based on their behavior under repeated iteration of a position-sensitive transformation, but I'm not sure how to define "similar means" precisely enough to formalize that.
your hunch about Julia-like structures showing up for (almost) all examples is spot-on; that this happens is essentially another aspect of the notion of universality, the same phenomenon that explains the warped miniature copies of the Mandelbrot set that show up everywhere and that's arguably the most important insight to come out of fractals. (Essentially: near critical points, almost all functions look 'approximately' quadratic). – Steven Stadnicki May 1 '11 at 21:08
I have no idea about well-studied, but here's what I've gotten from a bit of playing around (here's the relevant Mathematica code).
$\boldsymbol{f_c(z)=\cos(z)+c}$ where $\cos(z)$ is defined however Mathematica defines it, using the escape time algorithm with escape radius $10\pi$ and 100 iterations maximum:
$\boldsymbol{f_c(z)=\sin(z)+c}$ appears to be the same as cosine, but horizontally translated.
$\boldsymbol{f_c(z)=e^z+c}$, using the escape time algorithm with escape radius 50 and 100 iterations maximum (it's vertically periodic with period $2\pi$):
edit (Mathematica code edited, too): $\boldsymbol{f_c(z)=cz(1-z)}$ (since camccann's answer mentions logistic maps) using the escape time algorithm with escape radius 50 and 200 iterations maximum:
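For readers without the linked Mathematica notebook, here is a minimal Python sketch of the escape-time loop used above, for $f_c(z)=\cos(z)+c$ (seeding the orbit at $z_0 = c$ is my assumption; the linked code may differ):

```python
import cmath, math

def escape_time(c, radius=10 * math.pi, max_iter=100):
    """Iterations until |z| exceeds the escape radius, else max_iter."""
    z = c                        # assumed seed; the original may use another
    for n in range(max_iter):
        if abs(z) > radius:
            return n
        z = cmath.cos(z) + c     # the iteration f_c(z) = cos(z) + c
    return max_iter

# Colour each point c of the parameter plane by escape_time(c); swapping
# in cmath.sin (or cmath.exp with radius 50) gives the other pictures.
print(escape_time(0.5 + 0.5j), escape_time(2 + 2j))
```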
+1 for giving me the beautiful experience of seeing fractals in the morning :) – Chau Jul 29 '10 at 6:53
The discontinuous boundaries where the blue meets the red without going through the white and yellow are quite surprising. Are you sure they're not an artifact of your escape time algorithm? Certainly for $\sin$ and $\cos$, you can't assume that if $|z|$ is large, the iterations will diverge... – Rahul Narain Sep 29 '10 at 22:44
@Rahul: Well, $\sin$ and $\cos$ get large if $\Im z$ is large... – J. M. Sep 30 '10 at 12:27
@J.M.: Yes, but that doesn't guarantee divergence either. If, say, $z = \pi+1000i$, then $\cos(z) = -\cosh(1000)$ is very large, but $\cos(\cos(z))$ is within $|z| \le 1$ (and further iterations will stay there). – Rahul Narain Sep 30 '10 at 17:16
Isaac, the auxiliary question that we should ask ourselves before answering your question is: what is the "right" (i. e., most mathematically relevant/interesting) generalization of the Mandelbrot set to other parametrized families of functions? Of course, you can take some arbitrary holomorphic family of functions $f_c(z)$ that depends holomorphically on the parameter $c$, and for each $c$, pick some starting value $z_0$ (which depends on $c$ in an essentially arbitrary way) and color the plane somehow based on the long-term behavior of the iterates. However, as some of the other posts show, you typically end up with objects that don't really resemble the Mandelbrot set, so the status of this construction as the "right" generalization of the M-set is very dubious.
Here's what I think is the right answer to my auxiliary question: the boundary of the standard Mandelbrot set is the bifurcation locus for the holomorphic family $f_c(z) = z^2 + c$: it's the set of $c$ at which the long-term dynamical behavior of $f_c$ is not continuous. (For example, if you cross the boundary and move from the inside of the M-set to the outside, you go from having a finite-valued attracting orbit to not having one (with only the fixed point at infinity attracting)). This concept applies equally well to other holomorphic families, and this is what I think is the right generalization of the M-set: the bifurcation locus of a family $f_c(z)$ of holomorphic functions that depends holomorphically on $c$. More generally, we can think of not just the bifurcation locus, but a more complete "catalog" of the dynamics of the family: a partition of the complex plane into the bifurcation set plus the union of a bunch of open sets in which the long-term dynamical behavior varies continuously (which for the M-set, would be a partition of the plane into the outside, the boundary, and all of the cardiods and bulbs (including those in smaller copies of the M-set found near the boundary)).
Now, to answer your question: yes, these objects have been studied in some generality, but I'm not an expert in the field and thus can't tell you everything about them. However, for example, it is known that every nonempty bifurcation locus contains copies of the boundary of the standard M-set (or the bifurcation locus of the family $z\mapsto z^d + c$ for some $d$, the "degree-$d$ M-set"), and moreover, that these copies are dense in every bifurcation locus.
Here's the Mandelbrot set on the Poincaré Disk. I made it by replacing all the usual operations in the iteration
$$z_{n+1} = z_n^2+c$$
by "hyperbolic" equivalents. Adding a constant was interpreted as translating in the plane, and the hyperbolic equivalent is then
$$z \mapsto \frac{z+c}{\bar{c}z+1}$$
For the squaring operation, that meant I used angle doubling plus rescaling of the distance by a factor two based on the distance formula for the Poincaré Disk:
$$d(z_1,z_2)=\tanh^{-1}\left|\frac{z_1-z_2}{1-z_1\bar{z_2}}\right|$$
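In code, the iteration sketched above might look as follows; this is a reconstruction under stated assumptions (the orbit starts at $0$, and $|c|<1$ so everything stays inside the disk), not the author's program:

```python
import cmath, math

def hyp_square(z):
    """Double the angle of z and double its hyperbolic distance from 0."""
    if z == 0:
        return 0j
    r = math.tanh(2 * math.atanh(abs(z)))      # doubled hyperbolic radius
    return r * cmath.exp(2j * cmath.phase(z))  # doubled angle

def hyp_translate(z, c):
    """Hyperbolic translation by c: the Moebius map z -> (z+c)/(conj(c)z+1)."""
    return (z + c) / (c.conjugate() * z + 1)

def orbit_point(c, n=25):
    z = 0j
    for _ in range(n):
        z = hyp_translate(hyp_square(z), c)
    return z  # orbits never leave the disk; colour by atanh(|z|), say

print(abs(orbit_point(0.3 + 0.2j)))  # stays < 1
```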
+1. I'm not sure how surprised I should be that (at least superficially) it doesn't look all that different. – Isaac May 1 '11 at 20:48
I guess not. I just made it while I was toying around with hyperbolic geometry since another question prompted me to. It's just like the original but deformed. – Raskolnikov May 1 '11 at 20:50
Quite nice! – J. M. May 1 '11 at 20:51
Regardless of how similar or different it looks to the original, it's still a really cool thing to have tried. – Isaac May 1 '11 at 20:53
Starting with the very well known iteration formula that creates the Mandelbrot set
$z_{n+1}=z_n^2+c$,
where $c=z_0=x_0+iy_0=\text{Re}(c)+i\text{Im}(c)$ is a complex constant which is the starting point of the trajectory generated in the complex plane by the iteration. In Doppelpot, from the German blog Fraktale Welten by Nachtwaechter, it is explained that when the recursive formula contains two complex exponentiations, the fractals it generates are normally beautiful, particularly those of Julia sets.
I suggested to the author to try the following formulae:
$z_{n+2}=z_{n+1}^{2}+z_{n}+c$
$z_{n+1}=z_{n}^{z_{n}c}$
$z_{n+1}=z_{n}^{z_{n}+c}$
$z_{n+2}=z_{n+1}^{3}+c^{z_{n}}$
$z_{n+1}=z_{n}^{c}$
In Fünf Formeln the fractals are shown.
I reproduced some with the author's permission in this entry of my blog.
This one is generated by $z_{n+2}=z_{n+1}^{2}+z_{n}+c$:
and this by $z_{n+2}=z_{n+1}^{3}+c^{z_{n}}$
There is the Julia set, which can be defined for any complex rational map $f(z) = P(z)/Q(z)$ where $P(z)$ and $Q(z)$ are polynomials.
The Julia set for the map $f_c(z) = z^2 + c$ is related to the Mandelbrot set in that a point $c$ is in the Mandelbrot set if and only if the Julia set of $f_c$ is connected.
I gathered the first part from the Wikipedia article on the Julia set and I knew the second part, but is it possible then to construct a set like the Mandelbrot set for some other family of functions f_c? – Isaac Jul 29 '10 at 3:21
Robert L. Devaney, An Introduction to Chaotic Dynamical Systems ... part III (the last 50 pages)
I couldn't say anything about "well-studied". What I do know is this:
• You can grab any random function you like and iterate it. Generally, the results aren't particularly interesting.
• You seem to get the most "natural" looking images (i.e., the ones most like the usual Mandelbrot and Julia sets) if you stick to nice, well-behaved complex-valued functions.
• If you want to be picky about it, the Mandelbrot set really ought to be the set of parameter values with connected Julia sets. There's some theorem about every basin of attraction containing one critical point, or something like that, but I don't really understand the details. The important point is, the parameter-space image tends to "look nicest" when you use critical points of the iteration.
• It's quite possible for the function to have more than one parameter. This doesn't change the Julia sets much (except that there are more of them), but it makes the Mandelbrot set have more than 2 dimensions.
My personal favourite is the cubic function $f(z) = z^3 - 3a^2z + b$. This has two critical points, $+a$ and $-a$. The Julia sets have 3-fold symmetry, and aren't especially stunning. The Mandelbrot set, however, is strictly speaking the intersection of two sets, one iterating with $z_0 = +a$ and the other $z_0 = -a$. If you combine the colourings of these two iterations, you get a strange "shadowy" effect. On top of that, the Mandelbrot set is 4D, and plotting various 3D slices of it looks interesting.
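A rough Python sketch of the two-critical-orbit test described here (the bailout radius and iteration count are guesses, not the answerer's values):

```python
def bounded(z0, a, b, max_iter=100, radius=1e6):
    """Does the orbit of z0 under f(z) = z^3 - 3 a^2 z + b stay bounded?"""
    z = z0
    for _ in range(max_iter):
        if abs(z) > radius:
            return False
        z = z**3 - 3 * a * a * z + b
    return True

def in_cubic_mset(a, b):
    # Keep (a, b) only if BOTH critical orbits, from +a and -a, stay
    # bounded; colouring the two tests separately gives the "shadowy"
    # overlay described above.
    return bounded(a, a, b) and bounded(-a, a, b)

print(in_cubic_mset(0j, 0j))      # True: 0 is a fixed point of z -> z^3
print(in_cubic_mset(0j, 2 + 0j))  # False: the orbit of 0 escapes
```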
Barnsley's book FRACTALS EVERYWHERE has many examples of this. I believe some edition of that book (long ago...) came with software that lets you make your own examples, too.
Ultrafractal has a particularly impressive collection of thousands of fractal formulas, most of which have configurable parameters as well as Julia variations. Some of these parameters are shared among many, such as starting position and bailout, whereas others are specific to individual fractals, most notably I would say:
Mandelbrot
Phoenix
Newton
Magnet & Magnet 2
Nova
Burning Ship
Lambda
Barnsley 1, Barnsley 2, Barnsley 3
Tricorn
Manowar
Druid
There are other things to explore as well. As just one example, take colouring: the inside of a set is usually coloured solid black, but Ultrafractal alone contains hundreds of other possibilities, such as triangle inequality average.
http://www.incubism.com/dan/novaFractal/img/novaInter_cloudyInteriour_mandelBollock.png
A whole family of simply connected multipower M-like sets has now been found, like $f(z) \mapsto (((zc-1)^2)-1)^2-1$ (with $z_0=-1$), offered by Jeffrey Barthelmes (as "fracmonk") at fractalforums in a thread in the "New Theories and Research" section there. Many combinations of powers can coexist on the complex parameter plane at once, contrary to long-held popular belief, each sharing the connectedness properties of M.
http://mathematica.stackexchange.com/questions/tagged/polynomials?sort=votes&pagesize=30
# Tagged Questions
Questions on the functionality operating on polynomials
6answers
2k views
### Finding real roots of negative numbers (for example, $\sqrt[3]{-8}$)
Say I want to quickly calculate $\sqrt[3]{-8}$, to which the most obvious solution is $-2$. When I input $\sqrt[3]{-8}$ or Power[-8, 3^-1], Mathematica gives the ...
5answers
367 views
### How do I reassign canonical ordering of symbols?
I have a big polynomial that evaluates to: $$A^2 e^2 \phi ^- \phi ^++A e \phi ^- \phi ^+ c_{2 w} g_Z+\frac{1}{2} A e g h W^- \phi ^+ +\ll13\gg,$$ which is supposed to represent some terms in the ...
3answers
257 views
### How to keep Collect[] result in order?
For example, Collect[(1 + x + Cos[s] x^2)^3, x] gives the result ...
2answers
370 views
### Surprises simplifying simple polynomials
I came across some somewhat surprising behavior of Simplify today, on something very simple. Let's take two cubic polynomials that we know have the same value: ...
4answers
407 views
### Is there a way to Collect[] for more than one symbol?
Oftentimes you find yourself looking for polynomials in multiple variables. Consider the following expression: a(x - y)^3 + b(x - y) + c(x - y) + d as you can ...
1answer
289 views
### Rearranging a Polynomial
In Mathematica 8.04 on Windows, I want to display a formula in standard textbook format. The formula is the variance of an $N$-security portfolio. For two securities it is: ...
3answers
655 views
### Computing the genus of an algebraic curve
How-to compute the genus of an algebraic curve in Mathematica ? In my case the algebraic curve is explicitly defined by a polynomial.
2answers
370 views
### Factorizing polynomials over fields other than $\mathbb{C}$
I'd like to take a polynomial in $\mathbb{Z}_5[x]$ of the form $ax^2+bx+c$ and factor it into irreducible polynomials. For example: Input... x^2+4 Output... ...
1answer
278 views
### Funny behaviour when plotting a polynomial of high degree and large coefficients
I am trying to plot a polynomial of degree 29 on the domain [0,1], with fairly large coefficients: ...
2answers
188 views
### How can I compute the representation matrices of a point group under given basis functions?
Take the $C_{3v}$ point group for example: ...
1answer
342 views
### Find roots of polynomial in field extension $GF(2^n)$?
How can I find roots of polynomial in extension field $GF(2^n)$?
4answers
315 views
### How do I find the degree of a multivariable polynomial automatically?
I have a very simple question which appears not to have already been answered on this forum. Is there built-in functionality that returns the degree of a multivariable polynomial? For example if the ...
3answers
197 views
### Any efficient way to make complete homogeneous symmetric functions in Mathematica?
We do have elementary symmetric functions, SymmetricPolynomial[k, {x_1, ..., x_n}] . But I didn't find complete homogeneous symmetric functions. The induction ...
1answer
284 views
### Gröbner basis on a particular set of equations
This question is very similar in gist to equation solving with GroebnerBasis, but hopefully when I say that I make the system a little larger I mean little. I have ...
2answers
88 views
### What are Root objects with multiple polynomials?
In Mathematica 9 a new flavor of Root object with multiple polynomials was introduced. For example, ...
4answers
299 views
### “Evaluating” polynomials of functions (Symbols)
I want to implement the following type evaluation symbolically $$(f^2g + fg + g)(x) \to f(x)^2 g(x) + f(x) g(x) + g(x)$$ In general, on left hand side there is a polynomial in an arbitrary number of ...
6answers
761 views
### How do I replace a variable in a polynomial?
How do I substitue z^2->x in the following polynomial z^4+z^2+4? z^4+z^2+4 /. z^2->x ...
3answers
437 views
### Checking if the roots of a function are real
I'm trying to determine if the roots of a function are real. How would you do that? (In particular I'm interested in verifying that the roots of LegendreP[6, x] ...
2answers
265 views
### How to deduce a generator formula for a polynomial sequence?
Consider a polynomial sequence $\{p_n\}$ generated by some (simple) rule: \begin{array}{l} p_1(x)=x \\ p_2(x)=2 x-x^2 \\ p_3(x)= x^3-3 x^2+3 x \\ p_4(x)=-x^4+4 x^3-6 x^2+4 x \\ p_5(x)= x^5-5 ...
1answer
146 views
### GroebnerBasis without specifying variables
All the examples in the Mathematica documentation specify that the syntax for the GroebnerBasis command is ...
2answers
394 views
### Why does Expand not work within a function?
I'm writing this fairly simple function: ...
2answers
242 views
### 3D Plot: Number of Roots in x of a polynomial in x, a, b and c
I have a polynomial in four variables x,a,b and c. The number of roots of the polynomial in x depends of the choice of a, b and c. I would like to have a 3D-Plot with a, b and c on the axes, while the ...
4answers
133 views
### How to collect terms with positive powers in polynomial
I am trying to collect all terms with non-negative powers of $x$ in polynomials like $\frac{1}{x^2}\left(a x^2+x^{\pi }+x+z\right)^2$ First expand the polynomial ...
4answers
2k views
### Factoring polynomials to factors involving complex coefficients
I've run into some problems using Factor on polynomials with complex coefficient factors. Reading the documentation it looks like it only factors over the ...
1answer
341 views
### Finding the characteristic polynomial of a matrix modulus n
Given a square matrix, is it possible to calculate its characteristic polynomial modulo n? Unfortunately, this function ...
0answers
57 views
### Apart may use Padé method: what's that?
How does Apart work? The page tutorial/SomeNotesOnInternalImplementation#7441 says, "Apart ...
3answers
460 views
### What function can I use to evaluate $(x+y)^2$ to $x^2 + 2xy + y^2$?
What function can I use to evaluate $(x+y)^2$ to $x^2 + 2xy + y^2$? I want to evaluate it and I've tried the most obvious way: simply typing and evaluating $(x+y)^2$, but it gives me only ...
### Series expansion in terms of Hermite polynomials
I am trying to expand a polynomial in terms of orthogonal polynomials (in my case, Hermite). Maple has a nice built-in function for this, ChangeBasis. Is there a ...
### How to get exact roots of this polynomial?
The equation $$64x^7 -112x^5 -8x^4 +56x^3 +8x^2 -7x - 1 = 0$$ has seven solutions $x = 1$, $x = -\dfrac{1}{2}$ and $x = \cos \dfrac{2n\pi}{11}$, where $n$ runs from $1$ to $5$. With ...
### How to express the original ideal elements in the Groebner basis?
Suppose I call GroebnerBasis[{f1, f2, ...}, {x1,x2, ...}] The output is a list {g1,g2,...} For each $g_j$, there should be ...
### First positive root
Simple question, but I have a problem with NSolve: I need help extracting the first positive root. For example eq=-70.5 + 450.33 x^2 - 25 x^4; NSolve[eq== 0, x] If I ...
### expanding a polynomial and collecting coefficients
I'm trying to expand the following polynomial ...
### How can I make the output from Solve look nice?
I have a problem with presenting solutions. Roots of 4th order polynomials are big expressions. Is there a way to present the roots, s2 and s3, in normal form with some substitutions? Maybe a way to ...
### Polynomial Approximation from Chebyshev coefficients
I would like to expand a function $f(r)$ in the domain $[0,R]$, around the points $r =0$, and $r = R$ in the following manner $f(r = 0) = \Sigma_{i=0,i = even}^{imax} f_i (r/R)^i$ and \$f(r = R) = ...
### How can I prevent a polynomial from being simplified?
I'm having a problem with polynomials. Let's say I have a polynomial "2x^2 - 5x + 6 - 3x^2". How can I check that this expression is not simplified? Additionally, I would like to locate the ...
### Computing Ehrhart's polynomial for a convex polytope
Is there a Mathematica implementation for computing the Ehrhart polynomial of a convex polytope which is specified either by its vertices or by a set of inequalities? I am interested in knowing this ...
### How do I get a two-term polynomial with a leading negative sign to display in the correct (i.e. textbook) order?
The first three expressions evaluate as expected and the polynomial is displayed in what I would call "textbook" form. The last expression, however, switches the order of terms. Mathematica employs ...
### Defining a function that completes the square given a quadratic polynomial expression
How can I write a function that would complete the square in a quadratic polynomial expression such that, for example, CompleteTheSquare[5 x^2 + 27 x - 5, x] ...
### Is it possible to use Composition for polynomial composition?
I want to do this: $P = (x^3+x)$ $Q = (x^2+1)$ $P \circ Q = P \circ (x^2+1) = (x^2+1)^3+(x^2+1) = x^6+3x^4+4x^2+2$ I used Composition for testing if that could ...
### How to define a polynomial/function from an array of coefficients?
I have the coefficients of my desired polynomial in an array CoefArr (I'm new to mathematica, so I think of everything as arrays, it is actually a list I believe) starting with the constant at index ...
### Small Issue with Chebyshev Derivative Appoximation
I am trying to get approximate the derivative of a function from its Chebyshev expansion. I start out with the following random function ...
### Better use of Mathematica's PolynomialReduce[]?
I've been using scPhiDecomp[expr_]:= PolynomialReduce[expr, {x^2-y^2,2 x y}, {x,y}] which works great on ...
### Get polynomial interpolation formula
I'm attempting to get a polynomial interpolation formula out of Mathematica but I am absolutely lost. I started out using ...
### How to do the polynomial stuff over finite fields extensions fast?
This question is raised from the problem of package FiniteFields being very slow (please, see the corresponding question): I have had an evidence that Mathematica ...
### Evaluating Polynomials at Grid Points
I am continuing my quest on B-splines. The code below builds a 5x5 matrix out of B-splines, using the BSplineBasis[] routine. I now want to evaluate the polynomials that are stored in each matrix ...
### Symbolic manipulation of functional form
I have a functional polynomial expression of the form: ...
### Calculating Taylor polynomial of an implicit function given by an equation
I'd like to write a procedure that will take an equation: F(x,y,z) = 0 chosen variable: x a point: ...
### Implementation of Decompose
I'm curious as to how Decompose works so I decided to use Trace with the option ...
### How to find solutions that yield of root of unity?
I have a polynomial with coefficients that are integer polynomials in another (complex) variable. For example: 1 + (1 - v^2) #1 + (-3 - v^2) #1^2 + #1^3 & I ...
### Is there any way to force Mathematica to collect a symbol in a polynomial?
Let's say that I have a polynomial like this: a + b + c Is there any way that I can get Mathematica to transform it to: ...
http://physics.stackexchange.com/questions/53893/nmr-rotating-frame
# NMR rotating frame
I'm reading about a linearly polarized field (in the context of NMR). The field is given by
$${\bf H_{lin}}=2H_1\left({\bf i}\cos(\omega_z t)\right).$$
This can be created by having a pulse field plus its mirror image; i.e.
$${\bf H_1}=H_1\left({\bf i}\cos(\omega_z t)+{\bf j}\sin(\omega_z t)\right).$$
The mirror-image field in the lab frame is obviously the same as ${\bf H_1}$ except with $\omega_z \rightarrow -\omega_z.$
Now, my question is: what is the expression for the counter-rotating "mirror image" field in the ROTATING frame? I'm having some trouble working through this...
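As a sanity check of the decomposition stated above, here is a small sympy sketch (my own, not from the question) verifying that the linear field is the sum of the rotating field and its $\omega_z \rightarrow -\omega_z$ mirror image:

```python
import sympy as sp

H1, w, t = sp.symbols('H_1 omega_z t', real=True)

# Co-rotating field (i, j components) and its mirror image (omega_z -> -omega_z)
H_plus  = (H1 * sp.cos(w * t), H1 * sp.sin(w * t))
H_minus = (H1 * sp.cos(-w * t), H1 * sp.sin(-w * t))

# The sum should be the linearly polarized field 2*H1*cos(omega_z*t) along i
print(tuple(sp.simplify(a + b) for a, b in zip(H_plus, H_minus)))
# (2*H_1*cos(omega_z*t), 0)
```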
http://mathhelpforum.com/advanced-algebra/192097-question-about-multidimentional-gaussian-integration.html
1. ## Question about multidimensional Gaussian integration
Dear all,
I know how to derive a multidimensional Gaussian integral like this:
$\int_{-\infty}^{\infty} \mathrm{d} \mathbf{x} ~ \mathrm{exp} \left\{\mathbf{x}^T \mathbf{G} \mathbf{x} + \mathbf{H}^T \mathbf{x} \right\}$
where $\mathbf{x}=\left\{x_1, x_2, \cdots, x_n\right\}$ and $\mathrm{G}$ is a full rank matrix, and $\mathbf{H}=\left\{H_1, H_2, \cdots, H_n\right\}$
Just diagonalize $\mathrm{G}$ and make a linear transformation:
$\mathbf{x} = \mathbf{V} \mathbf{y}$
where $\mathbf{V}$ is the eigenvector matrix of $\mathrm{G}$.
My question is how to calculate
$\int_{-\infty}^{\infty} \mathrm{d} \mathbf{x} ~ \left( \mathbf{x}^T \mathbf{K} \mathbf{x} \right) \mathrm{exp} \left\{\mathbf{x}^T \mathbf{G} \mathbf{x} + \mathbf{H}^T \mathbf{x}\right\}$
where $\mathrm{K}$ is a full rank matrix?
Does anyone have a suggestion?
Thanks for your help!
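Not a closed-form derivation, but a numerical cross-check may help. Writing the exponent as $-\frac{1}{2}\mathbf{x}^T A \mathbf{x} + \mathbf{H}^T \mathbf{x}$ with $A = -2\mathbf{G}$ (so $\mathbf{G}$ must be negative definite for convergence), the integrand is proportional to a Gaussian density with covariance $\Sigma = A^{-1}$ and mean $m = \Sigma \mathbf{H}$, and the ratio of the second integral to the first is the expectation $\operatorname{tr}(\mathbf{K}\Sigma) + m^T\mathbf{K}m$. A Monte Carlo sketch with arbitrarily chosen test matrices:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3
# Arbitrary test data: A symmetric positive definite (so G = -A/2 is negative definite)
B = rng.standard_normal((n, n))
A = B @ B.T + n * np.eye(n)
K = rng.standard_normal((n, n)); K = (K + K.T) / 2
H = rng.standard_normal(n)

Sigma = np.linalg.inv(A)        # covariance of the Gaussian weight
m = Sigma @ H                   # its mean

closed_form = np.trace(K @ Sigma) + m @ K @ m

x = rng.multivariate_normal(m, Sigma, size=200_000)
monte_carlo = np.einsum('ij,jk,ik->i', x, K, x).mean()   # sample mean of x^T K x

print(closed_form, monte_carlo)  # should agree to a few decimal places
```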
http://mathhelpforum.com/calculus/155136-riemann-sums-print.html
# Riemann Sums
• September 3rd 2010, 05:08 PM
stobs2000
Riemann Sums
hi this is a Riemann sum q:
Find U, the Riemann Upper Sum for f(x) = x^2 on [0,2], using 4 equal sub-intervals
i know that n=0.5 so i thought the upper limit would be 0.5^2+ 1^2+1.5^2+2^2 which gives 7.5 but the answer is 3.75 so could someone please explain why this is the answer?
cheers
• September 3rd 2010, 05:16 PM
stobs2000
i realised i forgot to multiply the whole thing by n! so i figured out that question but can someone help me with this one:
When calculating the Riemann Upper and Lower Sums (U and L) for the function f(x) = x^2 on the interval [0,2], what is the smallest number of (equal) sub-intervals needed to make U - L ≤ 0.1 ?
• September 3rd 2010, 06:26 PM
skeeter
Quote:
Originally Posted by stobs2000
i realised i forgot to multiply the whole thing by n! so i figured out that question but can someone help me with this one:
When calculating the Riemann Upper and Lower Sums (U and L) for the function f(x) = x^2 on the interval [0,2], what is the smallest number of (equal) sub-intervals needed to make U - L ≤ 0.1 ?
I assume $f(x) = x^2$
(fyi, use the caret to signify exponents ... x^2)
$\displaystyle \frac{2}{n}\left[f(x_1) + f(x_2) + ... + f(x_{n})\right] - \frac{2}{n}\left[f(x_0) + f(x_1) + ... + f(x_{n-1})\right] \le 0.1$
finish it.
• September 3rd 2010, 06:42 PM
stobs2000
thanks - according to that i get 80, which is the answer, but i don't understand where you got 2/n from
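For what it's worth, here is a brute-force check in Python that $n=80$ is indeed the smallest such number (here $2/n$ is the width of each of the $n$ equal subintervals of $[0,2]$):

```python
# f(x) = x^2 is increasing on [0, 2], so the upper sum uses right endpoints
# and the lower sum uses left endpoints.
def gap(n, a=0.0, b=2.0):
    w = (b - a) / n
    xs = [a + i * w for i in range(n + 1)]
    upper = w * sum(x**2 for x in xs[1:])
    lower = w * sum(x**2 for x in xs[:-1])
    return upper - lower

n = 1
while gap(n) > 0.1 + 1e-12:   # tiny tolerance guards against float round-off
    n += 1
print(n)   # 80, since U - L = (2/n)(f(2) - f(0)) = 8/n
```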
• September 3rd 2010, 07:26 PM
stobs2000
Here is another question:
Given that $\cos\sqrt{x}$ decreases on the interval [0,9], estimate the value of $\int_{0}^{9}\cos\sqrt{x}\, dx$ using the Riemann Lower Sum L on this interval with three unequal sub-intervals [0,1], [1,4], [4,9]. Enter your answer correct to two decimal places.
I know that n for the first 3 terms is 1/3, for the next 3 it is 1, and for the next 3 it is 5/3, so to find the lower Riemann sum I did:
$(\cos\sqrt{0}+\cos\sqrt{1/3}+\cos\sqrt{2/3})\times\tfrac{1}{3} + (\cos\sqrt{1}+\cos\sqrt{2}+\cos\sqrt{3})\times 1 + (\cos\sqrt{14/3}+\cos\sqrt{19/3}+\cos\sqrt{8})\times\tfrac{5}{3}$
and if i use radians mode on the calculator that gives me -2.49
ps - sorry about the above - i'm just learning how to use the latex editor
• September 4th 2010, 04:41 AM
skeeter
Quote:
Originally Posted by stobs2000
thanks-according to that i get 80 which is the answer but i don't understand where you got 2/n from
how would you define the width of each subinterval?
• September 4th 2010, 05:09 AM
skeeter
Quote:
Originally Posted by stobs2000
Here is another question:
Given that $\cos\sqrt{x}$ decreases on the interval [0,9], estimate the value of $\int_{0}^{9}\cos\sqrt{x}\, dx$ using the Riemann Lower Sum L on this interval with three unequal sub-intervals [0,1], [1,4], [4,9]. Enter your answer correct to two decimal places.
I have no idea what you are doing in this calculation. There are three subintervals given, so you do not need to calculate n. The first subinterval has a width of 1, the second 3, and the third 5.
Right Riemann sum using the given subintervals ...
$\cos\sqrt{1} \cdot 1 + \cos\sqrt{4} \cdot 3 + \cos\sqrt{9} \cdot 5 \approx -5.66$
btw ... next time, start a new problem w/ a new thread.
• September 4th 2010, 03:14 PM
stobs2000
thanks X1000
http://math.stackexchange.com/questions/75989/when-a-permutation-is-not-a-cycle
# When a Permutation is not a cycle
$\sigma = \pmatrix{1&2&3&4&5&6&7&8\\3 &8 &6 &7 &4 & 1 & 5 & 2} = (136)(28)(475)$
I just have a question about terminology. Would it be correct to say that $\sigma$ is not a cycle but $\sigma$ is a product of 3 disjoint cycles?
I believe your last cycle should be (475). Also, yes. – JSchlather Oct 26 '11 at 5:06
@JacobSchlather: Fixed and thanks for the quick response! – Student Oct 26 '11 at 5:07
@Jacob: Would you like to post an answer so the question can be listed as resolved? – Zev Chonoles♦ Dec 25 '11 at 15:36
@ZevChonoles Done. – JSchlather Dec 26 '11 at 8:12
## 1 Answer
Yes, it would be correct to say that $\sigma$ is a product of $3$ disjoint cycles.
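A short plain-Python check of the cycle decomposition quoted in the question:

```python
# sigma as a dictionary: the image of each of 1..8
sigma = {1: 3, 2: 8, 3: 6, 4: 7, 5: 4, 6: 1, 7: 5, 8: 2}

def cycles(perm):
    seen, out = set(), []
    for start in perm:
        if start in seen:
            continue
        cyc, x = [], start
        while x not in seen:
            seen.add(x)
            cyc.append(x)
            x = perm[x]
        if len(cyc) > 1:      # omit fixed points
            out.append(tuple(cyc))
    return out

print(cycles(sigma))   # [(1, 3, 6), (2, 8), (4, 7, 5)]
```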
http://mathhelpforum.com/discrete-math/200126-simplify-formula-rate-change-volume.html
1. ## Simplify formula for rate of change of Volume
I need to simplify further the formula for the rate of change of the volume of a cylinder given in the attached PDF. I also need to interpret it.
Can you give me some hints? thanks
Attached Files
• cylinder.pdf (195.0 KB, 9 views)
2. ## Re: Simplify formula for rate of change of Volume
Your PDF does not make much sense. I suggest you post the WHOLE question with ALL relevant information, not just the bits you think are appropriate.
3. ## Re: Simplify formula for rate of change of Volume
You are given that the volume of a cylinder of radius r and height h is $\pi r^2h$, that $h= \frac{10}{\pi r^2}$, and that $r= 3+ 2\sin(t)$.
ONE way to do this is to replace h in the volume formula with that formula in terms of r:
$V= \pi r^2h= \pi r^2\frac{10}{\pi r^2}= 10$ and then replace "r" in that by its formula in terms of t.
Oh, wait: there is no "r" in that formula; the " $\pi r^2$" terms canceled out and we got V = 10, a constant! Well, what does that tell you about its derivative?
4. ## Re: Simplify formula for rate of change of Volume
Originally Posted by HallsofIvy
You are given that the volume of a cylinder of radius r and height h is $\pi r^2h$, that $h= \frac{10}{\pi r^2}$, and that $r= 3+ 2\sin(t)$.
ONE way to do this is to replace h in the volume formula with that formula in terms of r:
$V= \pi r^2h= \pi r^2\frac{10}{\pi r^2}= 10$ and then replace "r" in that by its formula in terms of t.
Oh, wait: there is no "r" in that formula; the " $\pi r^2$" terms canceled out and we got V = 10, a constant! Well, what does that tell you about its derivative?
This is exactly why I have a feeling that the OP did not post all the information. E.g. looking at the given $h = \frac{10}{\pi r^2}$, that implies straight away that the volume is always $10$, where in the given context it might only start out being $10$...
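For completeness, a short sympy check of the observation above (using the constraints $h = 10/(\pi r^2)$ and $r = 3 + 2\sin t$ exactly as given):

```python
import sympy as sp

t = sp.symbols('t', real=True)
r = 3 + 2 * sp.sin(t)
h = 10 / (sp.pi * r**2)
V = sp.pi * r**2 * h      # the r-dependence cancels immediately

print(sp.simplify(V))     # 10
print(sp.diff(V, t))      # 0 -- the volume never changes
```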
http://mathhelpforum.com/algebra/204031-factoring-question.html
1. ## Factoring Question
Hello,
I can't figure out how I should factor this expression.
$x^4 - 18x^2 + 1$
I know the answer is $(x^2 - 4x - 1)(x^2 + 4x - 1)$
But I don't know how it gets to this point. I've tried substituting x squared with y to get this...
$y^2 - 18y + 1$
but this can't really be factored much more, so I don't know what I should be doing..
2. ## Re: Factoring Question
Originally Posted by astuart
Hello,
I can't figure out how I should factor this expression.
$x^4 - 18x^2 + 1$
I know the answer is $(x^2 - 4x - 1)(x^2 + 4x - 1)$
But I don't know how it gets to this point. I've tried substituting x squared with y to get this...
$y^2 - 18y + 1$
but this can't really be factored much more, so I don't know what I should be doing..
You could complete the square...
3. ## Re: Factoring Question
Hello, astuart!
This requires complete-the-square,
. . but in a different manner.
$\text{Factor: }\:x^4 - 18x^2 + 1$
We have: . $x^4 + 1 - 18x^2$
Subtract and add $2x^2\!:\;x^4\; {\color{red}-\; 2x^2} + 1 - 18x^2 \;{\color{red}+\; 2x^2}$
. . . . . . . . . . . . . . $=\;(x^4 - 2x^2 + 1) - 16x^2$
. . . . . . . . . . . . . . $=\;(x^2-1)^2 - (4x)^2$
. . . . . . . . . . . . . . $=\;\left([x^2-1]-4x\right)\left([x^2-1]+4x\right)$
. . . . . . . . . . . . . . $=\;(x^2-4x-1)(x^2+4x-1)$
4. ## Re: Factoring Question
Originally Posted by Soroban
Hello, astuart!
This requires complete-the-square,
. . but in a different manner.
We have: . $x^4 + 1 - 18x^2$
Subtract and add $2x^2\!:\;x^4\; {\color{red}-\; 2x^2} + 1 - 18x^2 \;{\color{red}+\; 2x^2}$
. . . . . . . . . . . . . . $=\;(x^4 - 2x^2 + 1) - 16x^2$
. . . . . . . . . . . . . . $=\;(x^2-1)^2 - (4x)^2$
. . . . . . . . . . . . . . $=\;\left([x^2-1]-4x\right)\left([x^2-1]+4x\right)$
. . . . . . . . . . . . . . $=\;(x^2-4x-1)(x^2+4x-1)$
Ah, so by subtracting and then adding 2x^2, the expression becomes (x^4 - 2x^2 + 1) - 16x^2, where 16x^2 is a perfect square.
I was confused by the subtraction of the 2x^2; I didn't know what the reason for it was.
Thanks
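A quick sympy confirmation of both the factorization and the completing-the-square identity used above:

```python
import sympy as sp

x = sp.symbols('x')
p = x**4 - 18*x**2 + 1

print(sp.factor(p))                                  # (x**2 - 4*x - 1)*(x**2 + 4*x - 1)
print(sp.expand((x**2 - 4*x - 1)*(x**2 + 4*x - 1)))  # x**4 - 18*x**2 + 1
print(sp.expand((x**2 - 1)**2 - (4*x)**2 - p))       # 0, i.e. p is the difference of squares
```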
http://mathoverflow.net/questions/75131?sort=votes
## How to detect frequency?
Let $J$ be an arc in $\mathbb{S}^{1}\subset\mathbb{C}$ (no matter open or closed) and $\alpha\in(0,2\pi)$ be an angle such that $\alpha/\pi$ is irrational. Consider in $\mathbb{S}^{1}$ the sequence $z_{n}=e^{in\alpha}$. Then this sequence is dense in $\mathbb{S}^{1}$ by Kronecker's Theorem or by ergodicity. Let's associate with the arc $J$ its "indicator sequence" $s(J)=\{s_{n}\}$ of zeroes and ones defined as follows:
$s_{n}=1$ if $z_{n}\in J$ and $s_{n}=0$ if $z_{n}\notin J$.
So, we get something like 0 0 1 1 1 0 0 0 1 1 0 0 1 ... Suppose that we are given such a sequence $s(J)$ for some $J$ and some $\alpha$. By the Ergodic Theorem one gets the measure of the arc $J$ as the limit
$\mathtt{meas}(J)=2\pi\underset{n\rightarrow\infty}{\lim}\frac{\sigma_{n}}{n}$ where $\sigma_{n}$ is the number of 1's in $\{s_{1},s_{2},\ldots,s_{n}\}$.
OK, but is it possible to detect the "frequency" $\alpha$ only by the 0-1 data contained in the sequence $s_{n}$? More precisely, my question is:
Let $\{s_{n}\}$ be a sequence of 0's and 1's and we know that it is an "indicator sequence" for some arc $J\subset\mathbb{S}^{1}$ and some angle $\alpha$. Is it then possible to get $\alpha$ by some formula similar to the above one for the measure of $J$? This would be something like a "rotation number" of the sequence $\{s_{n}\}$.
Similar question may be posed for the torus $\mathbb{T}^{n}$ and an open set $J\subset\mathbb{T}^{n}$. Then we should detect not only the frequencies $\alpha_{1},\alpha_{2},\ldots$ but also the "dimension" $n$ of the sequence. Here $\alpha_{1},\ldots,\alpha_{n},\pi$ have to be independent over $\mathbb{Z}$.
[I know that the "indicator sequence" is a standard construction in symbolic dynamics, but I am not very involved in the topic, so references are welcome.]
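To make the setup concrete, here is a small numerical illustration (my own sketch; $\alpha$ and $J$ are chosen arbitrarily) of the indicator sequence and the ergodic estimate of $\mathtt{meas}(J)$:

```python
import math

alpha = math.sqrt(2)            # alpha/pi is irrational
a, b = 0.7, 2.0                 # J = {e^{i*theta} : a < theta < b}, so meas(J) = 1.3
N = 1_000_000

# s_n = 1 iff the angle of z_n = e^{i*n*alpha} falls in (a, b)
ones = sum(a < (n * alpha) % (2 * math.pi) < b for n in range(1, N + 1))
print(2 * math.pi * ones / N)   # ~1.3, as the ergodic limit predicts
```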
P.S. Curly brackets {} are not displayed in math mode. How to fix the problem?
Curly brackets can be obtained via \lbrace and \rbrace. – Andrew Sep 11 2011 at 10:20
@Andrew @Symbo'leon Or via \{ and \}. – Quinn Culver Sep 11 2011 at 15:27
$S((0,\alpha))$ forms what is called a Sturmian sequence, a special case of a subshift of finite type. If $J$ is a general interval, then one gets a more general subshift. I still believe one always has uniqueness. – Helge Sep 11 2011 at 17:23
@Helge: Sturmian sequences - or, strictly speaking, their orbit closures - are subshifts of infinite type. Subshifts of finite type are those subshifts which can be obtained from the full shift by "forbidding" finitely many words, and have special properties not enjoyed by Sturmian shifts. (These days it is usually assumed that the forbidden words are all of length two, which can be achieved by re-coding the alphabet.) – Ian Morris Sep 11 2011 at 21:00
## 2 Answers
It seems likely to me that $\alpha$ can be computed by calculating the frequencies of subwords of the coding sequence, but in a manner which depends on certain parameters. For example, if $\alpha<\min\{|J|,2\pi-|J|\}$ then the interval $J \setminus (J +\alpha)$ has length precisely $\alpha$, and it follows easily that $\alpha$ equals the frequency of the subword 01. On the other hand if $|J|$ is very small and $\alpha, 2\pi-\alpha$ are both larger than $|J|$, then the frequencies of the subwords 01 and 10 are both $|J|$, while the subword 00 has frequency $1-2|J|$, and we cannot gain anything by considering words of length 1 or 2. So the frequencies of words of arbitrary length probably need to be considered.
The articles "Coding rotations on intervals" by Berstel and Vuillon, and "Three-distance theorems and combinatorics on words" by Alessandri and Berthé appear to be relevant (especially Lemma 1 in the latter) but do not seem to yield a complete answer.
Unless I am missing something you can just compute $\lim_{N\rightarrow \infty} \frac{1}{N}\sum a_k \exp(2 \pi i k s)$ to find the Fourier transform of the characteristic function of the interval J, and then do an inverse transform to find J. This is more-or-less what you suggest about computing a rotation number.
What I said above was not quite what you asked - let me try again. The above should give you the Fourier transform with argument $\frac{s}{\alpha}$. From this and knowledge of the Fourier transform of an interval you can recover $\alpha$. – Jared Bronski Sep 11 2011 at 23:43
http://mathoverflow.net/questions/47181/can-we-uniquely-define-a-graph-to-have-the-topology-of-a-polytope-via-proper-edge/47467
## Can we uniquely define a graph to have the topology of a polytope via proper edge length selection?
I'll ask you to consider a situation wherein one has a series of edges for a graph, $(e_1, e_2, ..., e_N) \in E$, each with a specifiable length $(l_1, l_2, ..., l_N) \in L$, and the goal is to ensure that the connected graph has a unique topology in 3-space. More specifically, I'm interested in ensuring that some graph with the connectivity of a polytope can only be drawn as the skeleton of that particular polytope - that there should be no crossed edges or knots possible for the specified edge lengths.
To provide a physical example:
I use a group of rods to represent the edges of the desired graph (with pencils or the like) and color/symbol-encode their ends to represent vertex-assignments. I want to choose rod lengths in such a way that if I hand them to a naive-constructor (i.e. a 3-year old or a computer-controlled robot), and tell him/her/it to connect the ends of the rods together that have the same color or symbol, after waiting an arbitrarily long time there will only be a unique geometry satisfying the connectivity constraints of the graph I originally had in mind.
Is there a known computational complexity for this problem? Is there even a solution in the general case, or in the case where we apply the restriction that the specified polytope is convex?
I appreciate any feedback!
EDIT 1: The edges of the graph must be straight lines in 3-space, they cannot be bent to accommodate a particular edge length.
EDIT 2: Does the problem become easier if one assumes some physical diameter for the edges?
I am curious as to whether or not the author is talking about only convex polytopes, in which case the answer seems like the situation should be pretty clear. – Andrew D. King Nov 24 2010 at 5:53
Dear Andrew, no, I'm referring to arbitrary polytopes. However, for the convex case, it's not clear to me that it's always true that you can't find an alternate topology for a graph provided some set of edge lengths. – ShallowBlue Nov 24 2010 at 6:03
I apologize... but I don't see how the convex case is clear-cut. It seems to me like it could potentially be possible to fix the edge lengths and connectivity constraints for a skeleton graph for some convex polytope, then reconfigure the graph (we don't care how this happens) into a knotted configuration without violating those constraints. – ShallowBlue Nov 24 2010 at 6:10
Ok I understand the problem now. I was considering the case in which each face is a triangle. In that case I am convinced that the topology is always unique. Another interesting problem, aside from deciding whether a given edge length function is uniquely realizable, is deciding when a 3-connected planar graph is uniquely realizable. For example, it is clear that if you take K4, the tetrahedron, then if an edge length function is realizable, it is uniquely realizable. – Andrew D. King Nov 26 2010 at 2:53
## 3 Answers
This seems like a question in rigidity theory. In particular it seems like part of what you want is conditions for global rigidity in 3 dimensions.
Let me write down some definitions and basic facts from the introduction of this nice set of slides by Dylan Thurston and then post some references that might be helpful.
A framework is a graph and a map from its vertices into $d$-dimensional Euclidean space $\mathbb{E}^d$. A framework is locally rigid if every other framework in a small neighborhood with the same edge lengths is related to it by an isometry of $\mathbb{E}^d$. A framework is globally rigid if every other framework in $\mathbb{E}^d$ with the same edge lengths is related to it by an isometry of $\mathbb{E}^d$.
It turns out that checking global rigidity is NP hard, even in 1 dimension (Saxe 1979). However, if you're just interested in "generic" frameworks, i.e. those for which the edge lengths do not satisfy any polynomial relation, then work of Connelly and S. Gortler, A. Healy and D. Thurston characterizes these frameworks in any dimension with an efficient randomized algorithm. See the paper of GHT or the slides above. I must admit that I have not yet studied their work in any detail.
Since you are requiring that your frameworks are skeleta of polytopes, there may be extra structure which you can exploit. Let me just point you to Cauchy's rigidity theorem which states that convex polyhedra are rigid if you force the faces to be rigid in addition to the edges. If you don't have this restriction on the faces, then there are nonrigid examples, e.g. the 1-skeleton of a cube can be sheared, also pointed out in sleepless in beantown's answer. If you do have the restriction on the faces, but you allow nonconvex polyhedra, then there are flexible polyhedra.
In addition to the links above, there are several surveys on the webpage of Robert Connelly on various topics in rigidity theory.
If instead of global rigidity, we ask that the naive-constructor must create a complete graph with some desired topology, is the problem still NP-hard? – ShallowBlue Nov 27 2010 at 0:50
By "complete graph" do you mean the "complete graph on n-vertices" K_n ? And I'm afraid I don't know what you mean by some "desired topology". From your question it seems like the graph structure is already fixed. Here's how I currently understand the setup in your problem. Every rod has two ends each of which is assigned a different color. All the ends with the same color end up stuck together, so every color will correspond to a distinct vertex in the resulting graph. Thus the set V of vertices of the abstract graph structure is indexed by the number of colors. – jc Nov 27 2010 at 1:22
Since each rod has two ends, every rod corresponds to an element of the set V x V, and thus the set E of edges is given by looking at the set of rods. The abstract graph structure (V,E) is thus fixed by the coloring. You are interested in the set of possible embeddings of these rods into 3-space, since all the rods are line segments, we just have solve for the positions of the vertices, and ask whether all the solutions are related by translations or rotations. – jc Nov 27 2010 at 1:26
I guess first you need to ask whether any solutions exist - this sort of question is answered in the field of "distance geometry", and the key tool is "Cayley-Menger determinants". Once you know that some solution exists you can probably start mapping your questions onto those of rigidity theory (actually rigidity theory usually starts out without any length assignments, but I believe that the situation where you choose some generic working length assignments maps into the theory of generic rigidity). – jc Nov 27 2010 at 1:30
"...we just have solve for the positions of the vertices, and ask whether all the solutions are related by translations or rotations." Right, I'm interested in picking rod lengths so that there's a solution for positioning vertices such that it can be drawn in 3-space. I'd also like to show that such a solution (and perhaps its mirror image) has a unique topology, that all such solutions are interconvertible by translations or rotations. – ShallowBlue Nov 27 2010 at 1:36
You are asking about the embedding of a graph structure into 3-space $\mathbb{R}^3$. A graph structure by itself does not specify its embedding into $n$-space. In chemistry, these two different chiral instances of (tetrahedral) molecules below would be called stereo-isomers or enantiomers of each other.
In mechanical engineering, you'd be talking about building trusses and support structures, and a lot is known about the fact that quadrilaterals do not define a rigid structure. Quadrilaterals are easily sheared within a plane, and are not restricted to being coplanar, whereas triangular faces are at least limited to being coplanar.
Also, the presence of these constraints (on edge length and vertex-edge connectivity) does not mean that it would be impossible to build partial structures that meet the specified partial constraints but which cannot be built upon to complete the structure. In other words, a "naive constructor" could generate a partial assembly which is a configuration that is impossible to continue onto a final desired construction. There could be dead-end partial constructions which could not be completed. This type of problem could partially be avoided by also imposing a temporal constraint, or a sequence constraint, e.g. first add this, then add that.
However, there are chirality issues in play which cannot be avoided.
If the "vertices" do not impose restrictions on relative angles, then there are no additional contraints beyond edge-length, and the graph-structure and edge lengths will not usually define a single embedding in 3-space, relative to transformations such as translation and rotation.
If by topology, you do not also mean chirality, you may be correct. If you allow chirality differences to mean something, then there is a simple counterexample in the tetrahedron.
Let this tetrahedron $T_1$ in $\mathbb{R}^3$ be defined with a base triangle $ABC$ with the points $A=(0,0,0), B=(0,1,0), C=(1,0,0)$ and the top of the tetrahedron at $D=(0,0,1)$. Let the edge lengths of the skeleton of this polytope be defined based on this baseline instantiation in 3-space, $|AB|=1, |AC|=1, |BC|=\sqrt{2}, |AD|=1, |BD|=\sqrt{2}, |CD|=\sqrt{2}$.
Now note that if $D$ is instead placed at $D_2=(0,0,-1)$, then this alternate tetrahedron (let's call it $T_2=ABCD_2$) has the same edge lengths as $T_1$, but has the mirror chirality. If we label the vertices with $A,B,C,D$, it is not possible to rotate and translate $T_1$ into $T_2$, whereas it is possible to turn $T_1$ inside-out and transform it into $T_2$.
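This is easy to confirm numerically; in the numpy sketch below the two labeled tetrahedra have identical corresponding edge lengths but signed volumes of opposite sign, so no rotation-plus-translation maps one onto the other:

```python
import numpy as np
from itertools import combinations

A, B, C = np.array([0., 0, 0]), np.array([0., 1, 0]), np.array([1., 0, 0])
D1, D2 = np.array([0., 0, 1]), np.array([0., 0, -1])

def edge_lengths(*pts):
    # corresponding (labeled) edges, taken in a fixed order
    return [np.linalg.norm(p - q) for p, q in combinations(pts, 2)]

def signed_volume(p, q, r, s):
    return np.linalg.det(np.column_stack([q - p, r - p, s - p])) / 6

print(np.allclose(edge_lengths(A, B, C, D1), edge_lengths(A, B, C, D2)))  # True
print(signed_volume(A, B, C, D1), signed_volume(A, B, C, D2))             # opposite signs
```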
If you don't have all triangular faces, e.g. you use the edge lengths of a cube as the only constraints on a skeleton of a cube, you'll quickly see the problem that engineers found in constructing trusses with square faces: parallelograms are not necessarily "rigid" and can be sheared easily and still maintain the correct edge-lengths between vertices. Thus it's not possible to build a rigid skeleton with only square faces.
Thus, it depends on the axiomatic construction of your objects:
- If you disallow disassembly and reconstruction, then the tetrahedra $T_1$ and $T_2$ are separate chiral mirror-images of each other.
- If you allow for disassembly and reconstruction, then $T_1$ and $T_2$ have the same topology.
- If you also define "topologically equivalent" to allow for elastic stretching (at least for transforming from one 3-d realization to another, then back to being solid and rigid while in a specific 3-d realization), then $T_1$ can be transformed into $T_2$ by pushing the vertex $D$ through the center of the face $ABC$ and onto the other side.
- If the faces actually have a physical planar object defining that face (like a kite has its tissue paper), then this sort of transform is disallowed and the mirror-image tetrahedra $T_1$ and $T_2$ are different.
You can also visualize this by allowing the edges to be made of elastic springy rods with spring constants $k_i$. If the $k$'s are very large, then the springs are very stiff and the inversion will be impossible; if the $k$'s are small, the springs have a lot of give and it's easily possible to change between the two mirror-image configurations.
Not just parallelograms, this can be generalized to quadrilaterals not being rigid structures. – sleepless in beantown Nov 26 2010 at 21:50
Also, if any of the faces are quadrilaterals, there is no rigid construction unless there are sufficient other constraints. The presence of other constraints also does not mean that it would be impossible to build partial structures that meet the specified partial constraints but which cannot be built upon to complete the structure. In other words, a "naive constructor" could generate a partial assembly which is a configuration which is impossible to continue onto a final desired construction. – sleepless in beantown Nov 26 2010 at 22:00
Also, you are talking about the embedding of a graph structure into 3-space $\mathbb{R}^3$, and in chemistry these two different chiral instances of (tetrahedral) molecules would be called stereo-isomers or enantiomers of each other. – sleepless in beantown Nov 26 2010 at 22:05
A matrix realizing Colin de Verdière's $\mu$-invariant yields an answer if you accept that you get lengths at the end, not at the beginning.
A planar graph $G$ arising as the $1$-skeleton of a polytope always has $\mu$-invariant equal to $3$. There exists thus a combinatorial Schrödinger operator on $G$ whose second largest eigenvalue has multiplicity $3$ (and the eigenspace satisfies a stability condition). Choose a basis of $3$ eigenvectors for such an operator. Interpret these eigenvectors as $x,y$ and $z$ coordinates of points in $\mathbb R^3$, indexed by vertices of $G$. This yields a set of points in $\mathbb R^3$ which are extremal vertices of a polytope realizing $G$ (and the realization is of course the obvious one, vertices are already labeled by vertices of $G$).
Moreover, all convex polytopes realizing $G$ can be constructed in this way. The "moduli space" of such polytopes is thus (up to the choice of a basis) in bijection with Schrödinger operators realizing the $\mu$-invariant.
http://physics.stackexchange.com/questions/8176/are-newtons-gravity-waves-detectable-by-a-laser-interferometer
# Are Newton's gravity waves detectable by a laser interferometer?
Newton's theory of gravity supports "gravity waves" in that moving objects cause changing gravitational fields. For example, two bodies rotating around their center of mass will have a stronger gravitational field when they are longitudinally oriented than when they are transverse oriented. Given two masses of mass $M$ orbiting on a circle of radius $r$, at a distance $d$ from an observer, the strength of the attraction is: $$\begin{matrix} F_{min} &=& \frac{2GM}{d^2},\\ F_{max} &=& \frac{2GM}{d^2}\frac{1+r^2/d^2}{(1-r^2/d^2)^2} \end{matrix}$$
This should be detectable at long distance.
My question is this: with that sort of gravity, would laser-interferometer-based gravity wave detectors be able to detect the wave? An example is LIGO, the Laser Interferometer Gravitational-Wave Observatory.
You're asking purely about a Newtonian treatment, not the full GR analysis, right? – David Zaslavsky♦ Apr 7 '11 at 2:55
Dear Carl, there are no "waves" in Newtonian gravity. A "wave" is something that has shape - minima and maxima - that depends on space. There are no oscillations like that, as a function of space, in Newtonian gravity. The gravitational force is uniformly decreasing away from the source. This totally changes the situation. You apparently want to measure the time depedendence of the quadrupole moment - but it's very tiny, a hopeless power law. You can't measure "tides" from distant galaxies or binary stars. Gravity waves, which only exist in GR, have effects that drop slower with the distance. – Luboš Motl Apr 7 '11 at 5:32
@Lubos; Instead of "waves" call it what you like. My question is whether it's detectable at LIGO. – Carl Brannen Apr 8 '11 at 20:46
4 Answers
This is exactly the approach taken in Bernard Schutz's note "Gravitational waves on the back of an envelope" (Am. J. Phys. 50, no. 5, p. 412). The abstract reads:
Using only Newtonian gravity and a little special relativity we calculate most of the important effects of gravitational radiation, with results very close to the predictions of full general relativity theory. Used with care, this approach gives helpful back‐of‐the‐envelope derivations of important equations and estimates, and it can help to teach gravitational wave phenomena to undergraduates and others not expert in general relativity. We use it to derive the following: the quadrupole approximation for the amplitude h of gravitational waves; a simple upper bound on h in terms of the Newtonian gravitational field of the source; the energy flux in the waves, the luminosity of the source (called the ‘‘quadrupole formula’’), and the radiation reaction in the source; order‐of‐magnitude estimates for radiation from supernovae and binary star systems; and the rate of change of the orbital period of the binary pulsar system. Where our simple results differ from those of general relativity we quote the relativistic ones as well. We finish with a derivation of the principles of detecting gravitational waves, and we discuss the principal types of detectors under construction and the major limitations on their sensitivity.
(If you don't have access to Am J Phys, this talk seems to recapitulate the details.)
A major difference in this Newtonian scalar theory from the real GR theory of gravitational waves is in the effect of the waves on an inertial test particle. This Newtonian theory predicts that the waves would appear as an oscillating force along the direction between the source and the test particle. By contrast, GR predicts an oscillating differential tidal effect in the plane perpendicular to the line connecting the source and the test mass.
As a result, while I think LIGO would still detect the "Newtonian" form of gravitational waves, the antenna pattern of the detector would be different. The L-shaped LIGO detector has optimal sensitivity to a source located directly overhead in the GR case (allowing the gravitational wave to stretch one arm while it is compressing the other). There would be no sensitivity to a "Newtonian" source directly overhead. However, you could detect it if the "Newtonian" source were aligned with either arm.
By the way, "Newtonian noise" (the near-field action of Newtonian gravity arising from density waves in the material near the detector) is a real concern for terrestrial gravitational wave detectors!
P.S. To be pedantic, it is best to avoid the term "gravity wave" (as opposed to "gravitational wave"), since a "gravity wave" ("Newtonian gravity wave" even!) is something completely different.
Thanks for posting something referenced! – Sklivvz♦ Apr 7 '11 at 18:10
I don't think so. To first order, the maximum variation in your Newtonian gravity force is $$F_{max}-F_{min}=\frac{6GMmr^2}{d^4}$$
If we assume that the effects of gravity propagate at light speed, then in order for one mirror to experience $F_{max}$ while the other experiences $F_{min}$, the objects must rotate 90 degrees in the time $l/c$ where $l$ is the path length of the interferometer (4 kilometers for LIGO).
If we choose the speed of light as an upper bound for the speed of the rotating objects, this means that they can be separated by no more than about 5 kilometers ($r=2.5$ km).
A lower bound on $d$ can be 4 lightyears, the distance to the nearest star.
Let's suppose the mass of a mirror is $m = 100\ \mathrm{kg}$, and the mass of the rotating objects is 20 billion solar masses, the largest known mass in the universe.
Punching this into google calculator gives a force difference of about $10^{-28}$N, which is 4 orders of magnitude smaller than any force yet measured - and of course most of these assumptions are unphysically generous.
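For reference, here is the arithmetic spelled out (my own back-of-envelope script; constants are rounded, and depending on the exact values used the result lands in the $10^{-28}$–$10^{-26}$ N range, which does not change the conclusion):

```python
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M = 20e9 * 1.989e30    # 20 billion solar masses, kg
m = 100.0              # mirror mass, kg
r = 2.5e3              # orbit radius, m (separation ~5 km)
d = 4 * 9.461e15       # 4 light-years, m

dF = 6 * G * M * m * r**2 / d**4
print(dF)              # ~5e-27 N: hopelessly below anything measurable
```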
It's newtonian gravity, so there is no propagation speed. The effect is instantaneous. That and the falling off as $1/d^4$ like you mention, makes this not very "gravitational radiation" like in my opinion. – John Apr 7 '11 at 3:06
yeah, I wasn't sure about that.. I was thinking you could maybe treat it in a semi-relativistic manner by assuming a propagation speed of c, kind of like basic relativistic E&M and retarded potentials. There's probably lots of problems with this approach, but I think it's less wrong than assuming an instantaneous propagation which clearly violates special relativity. – user2963 Apr 7 '11 at 3:24
The main problem is that there's no wave equation for the waves to satisfy. You can do something ad-hoc like applying retarded potentials, but that makes sense in Maxwell theory because retarded and advanced potentials solve Maxwell's equations. What is the "Newtonian" wave equation? – Jerry Schirmer Apr 7 '11 at 4:59
The operation of LIGO is actually more easily understood in the long wavelength approximation, in which the gravitational wave strain changes very slowly compared to the time needed for light to circulate in the interferometer arms. It is not necessary to arrange for one mirror (of the two in an arm) to see a maximum of the wave while the other sees a minimum, since we are looking for a tidal effect. – nibot Apr 7 '11 at 16:17
This does not imply gravity waves. This can be seen by the following calculation. The mass configuration is that of two masses in a circular orbit around each other. This is a dipole term, which defines ${\vec p}~=~m{\vec d}$, for $\vec d$ the separation vector between the two masses. So this dipole moment projected along the field determines a dipole potential $$\phi_d(r)~=~{\vec p}\cdot\nabla\frac{GMm}{r}$$ If this generates power in waves that escape the system, then a time derivative is dependent on $m\,d{\vec d}/dt$. However, momentum conservation tells us this must be zero in the center-of-mass frame of the two masses.
So in order for gravity waves to exist one must have quadrupole moments and higher. However, these contribute to gravity waves only at second order in the post-Newtonian expansion.
A gravity wave is a perturbation on a background metric $\eta_{ab}$ with the total metric $$g_{ab}~=~\eta_{ab}~+~h_{ab}.$$ The flat background metric has zero Ricci curvature, so that to first order in the perturbation expansion $R_{ab}~=~\delta R_{ab}$, which enters into the Einstein field equation $R_{ab}~-~\frac{1}{2}Rg_{ab}~=~\kappa T_{ab}$, where $\kappa~=~8\pi G/c^4$ is the very small coupling constant between the momentum-energy source and the spacetime configuration or field. The Ricci curvature to first order is then $$R_{ab}~=~\frac{1}{2}\Big(\partial_c\partial_a{h^c}_b~+~\partial_c\partial_b{h^c}_a~-~\partial_a\partial_b h~-~\partial_c\partial^c h_{ab}\Big).$$ In the harmonic gauge $g^{bc}\Gamma^a_{bc}~=~0$, which to first order reads $\partial_c{h^c}_a~=~\frac{1}{2}\partial_a h$, the Einstein field equation gives $$\partial^c\partial_ch_{ab}~-~\frac{1}{2}\eta_{ab}\partial^c\partial_ch~=~{{16\pi G}\over {c^4}}T_{ab},$$ which takes its simplest form for the trace-reversed perturbation ${\bar h}_{ab}~=~h_{ab}~-~\frac{1}{2}\eta_{ab}h$, namely the wave equation $$\partial^c\partial_c{\bar h}_{ab}~=~{{16\pi G}\over {c^4}}T_{ab}.$$ This is then a basic wave equation for a tensor field with two independent polarization components, $h_{+}$ and $h_{\times}$.
What is of interest is the source of the gravity waves $T_{ab}$. The metric perturbation far from the source, $r~=~|{\bf x}|~\gg~|{\bf y}|$, is approximated by $${\bar h}_{ab}~=~{{4G}\over{rc^4}}\int_V d^3y~T_{ab}(t~-~r/c,~{\bf y}).$$ The covariant constancy of the momentum-energy tensor, $\nabla\cdot {\bf T}~=~0$, results in a number of relationships. Applied to the energy component $T_{tt}~=~\rho U_0U_0$, these relationships give that for the quadrupole moment $I_{ij}~=~\int_V x_ix_j\rho(t,~{\bf x})\,d^3x$ the metric wave is $${\bar h}_{ij}~=~{{2G}\over {rc^4}}{{d^2}\over{dt^2}}I_{ij}(t,~{\bf x}).$$ This term is different from the naïve idea of gravity waves suggested here. The coupling constant is $\sim~G/c^4$, which comes from the Einstein field equation and is extremely small, and the post-Newtonian terms to order $c^{-4}$ are second order; this is a real departure from Newtonian gravity.
Isn't that one has to content with quadrupole moment, because mass dipoles don't exist? (Whether antimatter/matter might make up such a mass dipole I dont know.) Black holes circling each other are surmised to emit gravity waves. What is the difference to Carls masses? – Georg Apr 7 '11 at 13:56
I will try to amplify the effect based on angles to an ensemble of distant stars. I assume that the observer has a large mass or a sensitive device (the whole planet he inhabits, or a nearby large star), and that all the other parameters are known. In the figure, any change in the position of the observer or the 'device star' in a periodic movement will be theoretically amplified in the measure of the angles a and b. I am measuring the gravitational influence, but I do not think you are measuring 'gravity waves'. The problem with LIGO is that the device is subject to dynamic changes in length, and the laser light undergoes the same changes, making the detection of any variation impossible. After all, LIGO is measuring 0.
http://mathoverflow.net/revisions/39255/list
# Determinant of a sum of a diagonal matrix, a dyadic product matrix, and a Hermitian Toeplitz matrix
Hi
From a physics problem, I am trying to evaluate exactly the following kind of determinant:
$$G = A + M + N,$$
where
- $A$ is diagonal,
- $M$ is the (dyadic) product of a column of 1s and a row vector,
- $N$ is a Hermitian Toeplitz matrix.
It would be of great help to me if anyone could point out known techniques. I've attempted various decompositions and had no luck. Further, I am more interested in the continuum limit of this determinant (i.e. when the matrix size tends to infinity and the matrix indices are suitably taken to a continuous variable).
For completeness, here's the full expression.
$A(m,n) = (m+i\alpha)\delta(m,n)$, $M(m,n) = \beta f(n+\alpha)$, $N(m,n) = -\beta f(m-n)$
$\alpha$ and $\beta$ are real constants. $i$ is $\sqrt{-1}$. $f(x) = (e^{i x t}-1)/x$, and $t > 0$.
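I don't know a closed form, but for numerical experiments the matrix is straightforward to build. A numpy sketch implementing the entries literally as given (I'm assuming $1$-based indices $m,n = 1,\dots$ and taking $f(0) = \lim_{x\to 0}(e^{ixt}-1)/x = it$ on the diagonal of the Toeplitz part; both are assumptions on my side):

```python
import numpy as np

def f(x, t):
    x = np.asarray(x, dtype=complex)
    safe = np.where(x == 0, 1, x)                      # avoid 0/0; fixed by the outer where
    return np.where(x == 0, 1j * t, (np.exp(1j * x * t) - 1) / safe)

def G_matrix(size, alpha, beta, t):
    m = np.arange(1, size + 1)
    A = np.diag(m + 1j * alpha)                        # A(m,n) = (m + i*alpha) delta(m,n)
    M = beta * np.tile(f(m + alpha, t), (size, 1))     # M(m,n) = beta f(n + alpha)
    N = -beta * f(m[:, None] - m[None, :], t)          # N(m,n) = -beta f(m - n)
    return A + M + N

print(np.linalg.det(G_matrix(6, alpha=0.3, beta=0.5, t=1.0)))
```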
http://math.stackexchange.com/questions/210300/poisson-process-arrival-probability
# Poisson Process Arrival Probability
Just a quick question regarding two Poisson Processes:
Let $X_t$ and $Y_t$ be two independent Poisson Processes with rate parameters $\lambda_1$ and $\lambda_2$, respectively, measuring the number of customers arriving in stores 1 and 2, respectively. What is the probability that a customer arrives in store 1 before any customers arrive in store 2?
My approach to this problem thus far has been to consider all possible times where store 1 could have a customer arrive, but that gets into dealing with infinity and I'm not so sure that's correct. Mathematically, I'm thinking I should calculate
$$P(X_1 = 1)P(Y_1 = 0) + P(X_2 = 1|X_1 = 0)P(Y_2 = 0) + P(X_3 = 1|X_2 = 0)P(Y_3 = 0) + ...+ P(X_n = 1|X_{n-1} = 0)P(Y_n = 0).$$
Is there an easier approach than the one I am taking? Is the approach I'm taking even correct?
Do you know the relationship between Poisson distribution and exponential distribution? – André Nicolas Oct 10 '12 at 6:05
I know that the expected value of the waiting time is $1/{\lambda}$. Is that what you're getting at? – Jack Radcliffe Oct 10 '12 at 6:06
More. That the waiting time is exponentially distributed. – André Nicolas Oct 10 '12 at 6:07
Sorry, but I'm not really seeing where this is going. Care to elaborate a bit? – Jack Radcliffe Oct 10 '12 at 6:13
Should I give an answer? – André Nicolas Oct 10 '12 at 6:17
## 1 Answer
Let the random variables $X$ and $Y$ denote the respective waiting times until the first customer. These two random variables have exponential distribution with parameters say $\lambda$ and $\mu$. We want the probability that $X\lt Y$.
By independence, the joint density is the product of the individual densities, so $$\Pr(X\lt Y)=\int_{y=0}^\infty \mu e^{-\mu y}\left(\int_{x=0}^y \lambda e^{-\lambda x}\,dx\right)\,dy.$$ The integrations are not difficult.
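Carrying out the integrations (the step left to the reader) gives a memorable closed form: the inner integral equals $1-e^{-\lambda y}$, so $$\Pr(X\lt Y)=\int_{0}^\infty \mu e^{-\mu y}\left(1-e^{-\lambda y}\right)dy=1-\frac{\mu}{\lambda+\mu}=\frac{\lambda}{\lambda+\mu},$$ which in the notation of the question is $\lambda_1/(\lambda_1+\lambda_2)$.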
-
It's no question why you have so much reputation. Thank you. – Jack Radcliffe Oct 10 '12 at 6:23
http://marcofrasca.wordpress.com/2012/06/04/turing-machine-and-landauer-limit/
# The Gauge Connection
The curious ways to see the World of a theoretical physicist
## Turing machine and Landauer limit
My period of inactivity was due to a lack of real news around the World. But I was not inactive at all. My friend Alfonso Farina presented me with another question that occupied my mind for the last weeks: What is the energy cost of computation? The first name that comes to mind in such a case is Rolf Landauer who, in 1961, wrote a fundamental paper on this question. The main conclusion drawn by Landauer was that each operation on a bit has an entropy cost of $K\ln 2$, $K$ being the Boltzmann constant. This means that, if you are operating at a temperature $T$, there will be a heat emission of $KT\ln 2$, and this is the Landauer limit. This idea stems from the fact that information is not some abstract entity living in the hyperuranium: to stay in the real world it needs a physical support. And wherever there is a physical support, thermodynamics and its second principle are at work. Otherwise, we could use information to evade the second principle and build our preferred perpetual motion machine. As Charles Bennett proved, Maxwell's demon cannot work due to the Landauer limit (for a review see here).
Recently, a group of researchers was able to show, with a smart experimental setup, that Landauer's principle is indeed true (see here). This makes it mandatory to show theoretically that Landauer's principle is indeed a theorem and not just a conjecture.
To accomplish this task, we need a conceptual tool that can map computation theory to physics. This tool has existed for a long time and was devised by Alan Turing: the Turing machine. A Turing machine is an idealized computational device, conceived to show that there exist mathematical functions whose computation cannot be completed in finite time, a question asked by Hilbert in 1928 (see here). A Turing machine can compute whatever a real machine can (this is the content of the Church-Turing thesis). There exist different kinds of Turing machines, but all are able to perform the same computations. The main difference lies in the complexity of the computation rather than in its realization. This conceptual tool is now an everyday tool in computation theory for carrying out proofs of fundamental results. So, if we are able to map a Turing machine onto a physical system and determine its entropy, we can move Landauer's principle from the status of a conjecture to that of a theorem.
In my paper that appeared today on arXiv (see here) I was able to show that such a map exists. But how can we visualize it? Consider a Turing machine with two symbols and a probabilistic rule to move it. The probabilistic rule is just coded on another tape that can be consulted to take the next move. This represents a two-symbol probabilistic Turing machine. In physics we have such a system, and it is very well known: the Ising model. As stated above, a probabilistic Turing machine can perform any kind of computation a deterministic Turing machine can; what changes is the complexity of the computation itself (see here). Indeed, a sequence of symbols on the tape of the Turing machine is exactly a configuration of a one-dimensional Ising model. This model has no critical temperature, and any configuration is a plausible outcome of a computation of a Turing machine, or its input. What we need is a proper time evolution that settles into the equilibrium state representing the end of the computation.
The time evolution of the one-dimensional Ising model was formulated by Roy Glauber in 1963. The Glauber model has a master equation that converges to equilibrium, with a Boltzmann distribution, as time evolves. The entropy of the model at the end of its evolution is well known and takes the limiting value $K\ln 2$, as it should, when a single particle is considered, but this is just a lower limit. So we can conclude that the operations of our Turing machine will involve a quantity of emitted heat in agreement with Landauer's principle, and this is now a theorem. What is interesting to note is that the emitted heat at room temperature for a petabit of data is just about a millionth of a joule, a very small amount. This still makes managing information convenient, and cybercrime still easy to perform.
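The closing figure is easy to check. A minimal Python sanity check (assuming $T = 300$ K for "room temperature" and the CODATA value of the Boltzmann constant):

```python
import math

k_B = 1.380649e-23               # Boltzmann constant, J/K
T = 300.0                        # room temperature, K

per_bit = k_B * T * math.log(2)  # Landauer limit per bit operation
print(per_bit)                   # ~2.87e-21 J
print(1e15 * per_bit)            # ~2.87e-6 J for a petabit: "a millionth of a joule"
```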
Landauer, R. (1961). Irreversibility and Heat Generation in the Computing Process IBM Journal of Research and Development, 5 (3), 183-191 DOI: 10.1147/rd.53.0183
Bennett, C. (2003). Notes on Landauer’s principle, reversible computation, and Maxwell’s Demon Studies In History and Philosophy of Science Part B: Studies In History and Philosophy of Modern Physics, 34 (3), 501-510 DOI: 10.1016/S1355-2198(03)00039-X
Bérut, A., Arakelyan, A., Petrosyan, A., Ciliberto, S., Dillenschneider, R., & Lutz, E. (2012). Experimental verification of Landauer’s principle linking information and thermodynamics Nature, 483 (7388), 187-189 DOI: 10.1038/nature10872
Marco Frasca (2012). Probabilistic Turing Machine and Landauer Limit arXiv arXiv: 1206.0207v1
http://mathoverflow.net/revisions/113885/list
4 deleted 9 characters in body
As a matter of fact, P. L. Chebyshev knew already that for any $\epsilon > \frac{1}{5}$, there exists an $n(\epsilon) \in \mathbb{N}$ such that for all $n\geq n(\epsilon),$
$\pi((1+\epsilon)n)-\pi(n)>0.$
In [2], one can find a short report on the problem of determining explicit values for the smallest $n(\epsilon)$ explicitly once that $\epsilon$ has been fixed.
References
[1] P. L. Chebyshev. Mémoire sur les nombres premiers. Mémoires de l'Acad. Imp. Sci. de St. Pétersbourg, VII, 1850.
[2] H. Harborth & A. Kemnitz. Calculations for Bertrand's Postulate. Mathematics Magazine, 54 (1), pp. 33-34.
3 added 20 characters in body; deleted 12 characters in body
As a matter of fact, P. L. Chebyshev knew already that for any $\epsilon > \frac{1}{5}$, there exists an $n(\epsilon) \in \mathbb{N}$ such that for all $n\geq n(\epsilon),$
$\pi((1+\epsilon)n)-\pi(n)>0.$
In [2], one can find a short report on the problem of determining explicit values for the smallest $n(\epsilon)$ once that $\epsilon$ has been fixed.
References
[1] P. L. Chebyshev. Mémoire sur les nombres premiers. Mémoires de l'Acad. Imp. Sci. de St. Pétersbourg, VII, 1850.
[2] H. Harborth & A. Kemnitz. Calculations for Bertrand's Postulate. Mathematics Magazine, 54 (1), pp. 33-34.
2 added 92 characters in body
As a matter of fact, P. L. Chebyshev knew already that for any $\epsilon > \frac{1}{5}$, there exists an $n(\epsilon) \in \mathbb{N}$ such that for all $n\geq n(\epsilon),$
$\pi((1+\epsilon)n)-\pi(n)>0.$
In [2], one can find a short account of the problem of determining explicit values for the smallest $n(\epsilon)$ for fixed values of $\epsilon$.
References
[1] P. L. Chebyshev. Mémoire sur les nombres premiers. Mémoires de l'Acad. Imp. Sci. de St. Pétersbourg, VII, 1850.
[2] H. Harborth & A. Kemnitz. Calculations for Bertrand's Postulate. Mathematics Magazine, 54 (1), pp. 33-34.
1
As a matter of fact, P. L. Chebyshev knew already that for any $\epsilon > \frac{1}{5}$, there exists an $n(\epsilon) \in \mathbb{N}$ such that for all $n\geq n(\epsilon),$
$\pi((1+\epsilon)n)-\pi(n)>0.$
In [2], one can find a short account of the problem of determining explicit values for the smallest $n(\epsilon)$.
References
[1] P. L. Chebyshev. Mémoire sur les nombres premiers.
[2] H. Harborth & A. Kemnitz. Calculations for Bertrand's Postulate. Mathematics Magazine, 54 (1), pp. 33-34.
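For readers who want to experiment, the claim is easy to probe numerically with sympy. A minimal sketch, taking $\epsilon = 1/5$ and starting the scan at $n=25$ (the threshold usually quoted for this $\epsilon$; treat the exact starting point as an assumption here):

```python
from sympy import primepi

# Check pi((1 + 1/5) n) - pi(n) > 0, i.e. a prime in (n, 6n/5], over a modest range
assert all(primepi(6 * n // 5) - primepi(n) > 0 for n in range(25, 3000))
print("a prime was found in (n, 6n/5] for every tested n")
```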
http://mathoverflow.net/questions/98466/how-many-points-are-there-on-an-elliptic-curve-reduced-at-a-bad-prime/98469
## How many points are there on an elliptic curve reduced at a bad prime?
Given an elliptic curve $E$ defined over $\mathbb{Z}$, and a prime $p$, I know that Hasse's theorem gives, when $p$ is a good prime, a relation between the number of solutions over $\mathbb{F}_{p^n}$ and the number of solutions over $\mathbb{F}_p$ (for this, the coefficients of the equation are reduced mod $p$).
Is there such a relation also at the bad primes?
-
For bad primes it is much easier as there are essentially three cases to consider: additive reduction, split multiplicative reduction and non-split multiplicative reduction. It is a good exercise to try to count the number of points in each case yourself. – Daniel Loughran May 31 at 7:38
## 1 Answer
If the reduction is additive, there are $p+1$ points including one singular point. If it is split multiplicative it is $p$ and if non-split multiplicative, then it is $p+2$. See Washington "Elliptic curves, Number Theory and Cryptography ", section 2.10 on page 59.
... and see François' comment below for $n>1$.
-
The way to remember it is that when you remove the singularity, the rest has a group structure, and it is isomorphic to the additive group (order $p$) in the case of additive reduction, to the multiplicative group of the base field (order $p-1$) or to the kernel of the norm map from the quadratic extension of the base (order $p+1$) in the two cases of multiplicative reduction. – Chandan Singh Dalawat May 31 at 8:05
So when one considers the reduction of $E$ as a curve defined over $\mathbf{F}_{p^n}$, then the number of points is $p^n+1$, $p^n$ in the case of additive resp. split multiplicative reduction. In the non-split multiplicative case it will depend whether $n$ is odd or even. – François Brunault May 31 at 8:14
The question was specifically when $n>1$, so Francois answered that. As a consequence of all this, the relation $|E(\mathbb{F}_p)| = p+1-a_p$ remains true for primes of bad reduction, where $a_p$ are coefficients of the Hasse-Weil L-function. It fails however when $p$ is replaced by $q=p^n$, except for primes where the reduction is additive. – Anonymous May 31 at 8:55
Sorry I did not see the $n$ in the unedited version. – Chris Wuthrich May 31 at 9:45
Careful. This answer is correct when the elliptic curve is given in Weierstrass form but not in general. For instance, $xy(x-y)=p$ defines an elliptic curve whose reduction modulo $p$ has $3p+1$ points. – Felipe Voloch May 31 at 17:25
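The three counts are easy to confirm by brute force, at least for Weierstrass models (Felipe Voloch's caveat above). A minimal Python sketch, using the toy curves $y^2=x^3$ (additive: a cusp), $y^2=x^3+x^2$ (split multiplicative: a node with tangent slopes $\pm 1$) and $y^2=x^3+ax^2$ with $a$ a quadratic non-residue (non-split multiplicative); the count includes the singular point and the point at infinity.

```python
def points(p, a2):
    # affine points of y^2 = x^3 + a2*x^2 over F_p (singular point included),
    # plus 1 for the point at infinity
    return 1 + sum(1 for x in range(p) for y in range(p)
                   if (y * y - (x ** 3 + a2 * x * x)) % p == 0)

p = 13
nonres = next(a for a in range(2, p) if pow(a, (p - 1) // 2, p) == p - 1)

print(points(p, 0), "expected", p + 1)        # additive reduction
print(points(p, 1), "expected", p)            # split multiplicative
print(points(p, nonres), "expected", p + 2)   # non-split multiplicative
```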
http://cs.stackexchange.com/questions/10014/how-can-i-argue-that-3-mathsfsat-leq-p-mathsfindset-is-polynomial-in-time
# How can I argue that $3\mathsf{SAT}\leq_p \mathsf{IndSet}$ is polynomial in time?
Given the reduction $3\mathsf{SAT}\leq_p \mathsf{IndSet}$ as follows:
How can I argue that it's in polynomial time? I understand how the reduction works, but even though it appears rather trivial, I can't explain why it's efficient.
To place $\mathsf{IndSet}$ in $\mathsf{NP}$-Hard, we will show $3\mathsf{SAT}\leq_p \mathsf{IndSet}$:
Given $$\phi=\bigwedge_{j=1}^{m}(x_j\vee y_j\vee z_j)$$ with $m$ clauses, produce the graph $G_\phi$ that contains a triangle for each clause, with the vertices of the triangle labeled by the literals of the clause. Add an edge between any two complementary literals from different triangles. Finally, set $k=m$. In our example, we have triangles on $x,y,\overline{z}$ and on $\overline{x},w,z$, plus the edges $(x,\overline{x})$ and $(\overline{z},z)$.
We need to prove two directions. First, if $\phi$ is satisfiable, then $G_\phi$ has an independent set of size at least $k$. Secondly, if $G_\phi$ has an independent set of size at least $k$, then $\phi$ is satisfiable. (Note that the latter is the contrapositive of the implication "if $\phi$ is not satisfiable, then $G_\phi$ does not have an independent set of size at least k".)
For the first direction, consider a satisfying assignment for $\phi$. Take one true literal from every clause, and put the corresponding graph vertex into a set $S$. Observe that $S$ is an independent set of size $k$ (where $k$ is the number of clauses in $\phi$).
For the other direction, take an independent set $S$ of size $k$ in $G_\phi$. Observe that $S$ contains exactly one vertex from each triangle (clause), and that $S$ does not contain any conflicting pair of literals (such as $x$ and $\overline{x}$), since any such pair of conflicting literals is connected by an edge in $G_\phi$. Hence, we can assign the value True to all the literals corresponding with the vertices in the set $S$, and thereby satisfy the formula $\phi$.
This reduction is polynomial in time because $\Huge\dots?$
I've looked at many different examples of how this is done, and everything I find online includes everything in the proof except the argument of why this is polynomial. I presume it's being left out because it's trivial, but that doesn't help me when I'm trying to learn how to explain such things.
-
You have to be more specific. If you can't explain precisely how the reduction works, you cannot argue or convince yourself that it is polynomial time. What is happening in the figure? – Juho Feb 21 at 17:45
@Juho I've added my description so far... – agent154 Feb 21 at 17:47
## 1 Answer
You are given a formula, and you will construct a graph. You can argue the reduction is polynomial time by analyzing the time you need to construct the graph. The transformation is an algorithm: write each step down, and analyze them. If every step takes polynomial time, you have shown the reduction runs in polynomial time.
Scan the clauses of the formula in time that is linear in the number of clauses. Scan the 3 literals in every clause. Build 3 vertices, with labels corresponding to the literals in a clause. This you can do in constant time, clearly. Once you have built a triangle, add it to your graph. With reasonable assumptions, you can do this in linear time, but details naturally depend on the way you represent your graph. Nevertheless, the time taken will be polynomial.
Finally, you only need to add edges between any two complementary literals from different triangles. Scan your triangles. For every literal in the triangle, scan through every other triangle, and see if it contains a complementary literal. If so, add an edge. This process takes polynomial time; for every triangle you are scanning every other triangle, so think of something quadratic. After every triangle has been scanned, $G_\phi$ has been built.
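To make the steps concrete, here is a minimal Python sketch of the construction just described. Clauses are given DIMACS-style as triples of nonzero integers ($-3$ denoting $\overline{x_3}$); the triangle pass is linear in the number of clauses, and the conflict pass is the quadratic scan from the last paragraph.

```python
from itertools import combinations

def indset_graph(clauses):
    # clauses: list of 3-tuples of nonzero ints; -v denotes the negation of variable v
    vertices, edges = [], set()
    for i, clause in enumerate(clauses):           # linear scan over clauses
        tri = [(i, lit) for lit in clause]
        vertices.extend(tri)
        edges.update(combinations(tri, 2))         # one triangle per clause
    for u in vertices:                             # quadratic conflict scan
        for v in vertices:
            if u[0] != v[0] and u[1] == -v[1]:
                edges.add(tuple(sorted((u, v))))   # edge between complementary literals
    return vertices, edges

# phi = (x1 or x2 or not x3) and (not x1 or x4 or x3), so k = 2
V, E = indset_graph([(1, 2, -3), (-1, 4, 3)])
print(len(V), "vertices,", len(E), "edges")        # 6 vertices, 8 edges
```

Every step is a bounded scan over the clauses or over pairs of vertices, which is the polynomial-time argument in executable form.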
-
That helps quite a bit, thanks. – agent154 Feb 21 at 18:26
http://motls.blogspot.com.au/2012/05/psychology-of-dark-matter-denial.html
# The Reference Frame
## Thursday, May 24, 2012
... /////
### Psychology of dark matter denial
Sean Carroll mentioned some developments concerning dark matter that have been discussed on this blog, too. His short text is another example of the fact that his comments are sometimes right, however rare these events may be.
A month ago, the media overhyped a paper by Chilean astronomers who claimed that their measurements showed that there was no dark matter in the vicinity of the Solar System. However, Bovy and Tremaine showed that with a more realistic model for the velocities of the stars, the corrected method seems to yield a dark matter density that is fully compatible with the value obtained by more common methods.
Among the media, only Universe Today, Phys Org, and Nude Socialist mentioned the new article which arguably is – unlike the previous, overhyped one – correct. The ordinary non-scientific media remained silent. Theories that work are not too interesting for the journalists; they prefer to write about things that don't work, especially if these "don't work" claims are untrue.
These days, many people – or at least a sufficiently large number of loud people – are literally obsessed by attacks against some key theories contained in the very foundations of modern science. String theory may be just too mathematically abstract for a number of amazingly aggressive "critics" if I have to avoid the term "imbeciles". Quantum mechanics brought the most profound conceptual revolution in the history of physics.
One could perhaps understand the existence of these people – because these theories really dramatically differ from our everyday understanding of the reality or from the expected amount of mathematical depth that is needed to describe the observations properly.
However, the anti-scientific movement reaches corners of science that seem totally conventional. The existence of dark matter is an example. Many people just get unbelievably emotional when you say something about dark matter. They're as certain as any other religious bigot that dark matter shouldn't exist: it's so dark and blasphemous! The adjective "dark" surely means that it's just a collection of fudge factors that the cosmologists and physicists have to apply in order to hide the truth. The cosmologists and physicists must have signed a contract with the Devil if they promote dark matter.
The actual fact is that dark matter is just pretty much ordinary matter from a physics viewpoint; it's just composed of yet another type of an elementary particle. There are many types of elementary particles. We encounter some of them frequently, some of them less frequently, but all of them are just examples of particles that also behave as waves, quanta of some particular quantum fields which are intrinsically vibrating strings if weakly coupled string theory applies to the real world.
Where are those conspiracy theories about dark matter that surely has to be a fraud coming from?
A part of it may boil down to the name and the hype in the popular science media that dark matter is something totally shocking. People just don't like "dark" things. They're connected with the devil, they feel. And many popular science writers love to oversell the topics they're discussing so they stress that dark matter is something totally unusual, completely otherworldly, crazy. But it's not. It's as conservative as muons or Higgs bosons. It's just a different material but the difference doesn't really require any dramatic paradigm shift away from quantum field theory, our "theory of nearly everything" (TONE), or string theory, our "theory of everything" (TOE).
Many of the anti-scientific movements, whether we talk about quantum mechanics, string theory, or dark matter, boil down to an unbelievable degree of naivety of the critics. They just want all the objects in physics to look like a tree – or something else you encounter in your everyday life. It shouldn't be too small, it should reflect an amount of light that is neither too low nor too high, it should sit in the space without fluctuations, it should only have three dimensions, it shouldn't contract or get heavier when it's moving, and so on.
Except that the fundamental objects and concepts in physics simply aren't any trees! They are different. They get shorter and heavier when they're moving. They may be constructed out of new particles, the particles' internal structure may expose a one-dimensional string or higher-dimensional branes, all these objects live in extra dimensions and follow the probabilistic logic that must be described with the maths of quantum mechanics. The fundamental objects in physics have many properties that differ from the properties of a tree. Get used to it.
Those people can't get used to it. But even smart kids in the kindergarten must be able to understand why these people's criticism is utterly irrational. There is no reason why fundamental things should look like trees – just like there is no reason why the entity that has created the Universe has to look like a grandfather sitting on the cloud. Other possibilities are mathematically consistent so they may be realized in Nature. And indeed, the right ones are realized in Nature. We are just some composite objects, animals that evolved in a particular way and have been trained to perceive and evaluate a certain kind of empirical information. But it's surely not all the information, information in all the forms, that may exist in the real world.
In particular, a big part of the elementary particles (counted as a fraction of the total mass of particles) may be invisible through light. And in fact, we know that a majority of the localized mass/energy in the Universe is invisible via light; that's why we call it dark matter. It just doesn't interact with the electromagnetic field – or the interaction is so impressively weak that the practical outcome is the same. We don't see it.
In the Universe, 73% of the energy density seems to be composed of dark energy which is not localized and has no internal structure. It seems that it's just Einstein's cosmological constant adding some curvature – just a number – to the vacuum (spacetime) at each point. The remaining 27% are composed out of dark matter, 23%, and baryonic i.e. visible matter, 4%, most of which (counted as mass) is composed of protons and neutrons. Among those 27% assigned to localized matter, 85% of it is dark matter.
Is it a large percentage? Is it small? Well, I don't know. There is no a priori, philosophical calculation that would tell you what the right percentage should be. The Universe as we see it is consistent and allows life at this moment. One can show that if we changed some percentages but not others, certain things wouldn't work anymore. There wouldn't be any life at this moment, for example. Some other correlated changes could still be compatible with life at this moment but these alternative possibilities are simply not realized in the world around us even though they could seem acceptable. Our world has unique answers to most of such questions.
Spectrum of rainbow
But the fact that there is matter we don't directly see – by our eyes – just shouldn't be shocking. Our eyes only see light whose wavelength is between 0.4 and 0.8 microns. At the log scale, it's just some interval corresponding to one doubling. Inside the huge interval of interesting wavelengths of electromagnetic waves, going from $10^{-25}$ to $10^3$ meters, the visible interval is located at a seemingly random place in the middle – well, it's not too random: the frequencies we see are close to those that are mostly emitted by the Sun and that make it through the atmosphere. We are directly sensitive to the type of light that is widespread on Earth (although our devices are able to detect the remaining frequencies in the wide interval, too).
It's not an accident that our eyes became able to see the frequencies that are dominant on Earth. It's easier to evolve an observing optical apparatus – an eye – if it doesn't have to deal with one problem, the shortage of light of the detectable frequencies. And at the end, the eyes use similar chemical and electrical processes to detect the light that are employed during the emission of the light by atoms. So you shouldn't be shocked that the frequencies had to be close.
Evolution of the not-only-human eye. Sorry if clicking will produce a creationist page; they were just successful enough to be high-scorers with Google and the picture (in which they present a serious theory) just passed my tests.
However, one simple conceptual point that you should appreciate – and the "critics" of dark matter arguably don't appreciate it – is that the eyes got evolved to adapt to the environment and the composition of the electromagnetic waves in this environment. What I want to say is that the causal relationship wasn't going in the other way around. Some people seem to think that they decide what is the preferred way how they should be seeing the world and things in Nature are obliged to adapt so that they may be seen in this "user-friendly" way.
But Nature isn't obliged to obey such conditions. In particular, it doesn't have to be "user-friendly". Nature isn't obliged to obey any ad hoc man-made conditions whatsoever. By definition (of "natural" and "man-made"), there aren't any fundamental physical processes in Nature that would be man-made, a trivial fact that could drive the climate alarmist crackpots up the wall but that is true, anyway. If we want to understand Nature, our beliefs and our science has to adapt to what has existed in Nature for billions of years, not the other way around. Nature just won't adapt to your beliefs. Feel free to complain against Nature's totalitarian attitude and team up with a few billion of other crackpots who will demand Nature to surrender. She won't.
The electromagnetic spectrum: click. Note how narrow the visible band is on the log scale. And it still allows us to have so much fun with colors, to distinguish millions of them, etc.
Just like there are objects and processes that emit or absorb light at frequencies that are very different from the frequencies of the visible light, there are particles that aren't able to emit or absorb light at all – at least not at rates that would be significant and detectable. That's not shocking.
Light itself (or a photon) is just one elementary particle. Others aren't obliged to interact with it. In particular, electrically neutral particles don't interact with electromagnetic waves and there are many electrically neutral particles. Neutrons and atoms are examples of electrically neutral particles that still interact with light (emission spectra, magnetic moments etc.) because they're made out of electrically charged pieces – pieces that are sufficiently distant from each other. However, elementary particles such as neutrinos or neutralinos are both neutral and tiny so any substructure involving charged sub-particles is unobservable at achievable frequencies. So they just don't interact with light.
There isn't any a priori calculation of the fraction on the Universe's particulate mass that these invisible particles should constitute. It seems we know a lot to conclude that it is a majority of the matter (not counting dark energy). This conclusion is deeply incorporated into our modern picture of cosmology – I mean to the publicly available version of the CV of our cosmos.
The visible matter – protons, atoms etc. – is just a cherry on a pie, some parasitic exceptional stuff that decided to live, get ignited, and burn on the peaks of dark matter halos, the primordial environment where the galaxies were born. You could say that the visible matter is a "higher species" than the dark matter; after all, life is composed of visible matter. But the mass stored in the dark matter, the lower species, may be naturally larger. The number of insects is higher than the number of humans, too.
(I apologize to insect American readers and their environmentalist advocates for my suggestion that the insect Americans are a lower race than homo sapiens.)
We may observe the motion of stars in our galaxy and compare it with the theoretical predictions – either from Newton's theory or from Einstein's theory, i.e. the general theory of relativity. Without dark matter, we find a disagreement. The Milky Way is rotating almost like an LP, with a velocity that doesn't depend on the distance from the center. According to Newton's theory, the motion should be much closer to that of the Solar System, in which the innermost planets are much faster because their centrifugal acceleration has to compensate the much stronger gravity of the Sun that they feel.
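To see how stark the discrepancy is, here is a toy numerical comparison. It is only a sketch: the point mass standing in for the visible matter and the 220 km/s figure are illustrative values, not a fit.

```python
import numpy as np

G = 4.30e-6          # gravitational constant in kpc (km/s)^2 / Msun
M_visible = 9e10     # toy point mass standing in for the visible matter, in Msun

for r in [2.0, 5.0, 10.0, 20.0, 30.0]:          # radii in kpc
    v_kepler = np.sqrt(G * M_visible / r)       # Newton + visible mass only
    print(f"r = {r:4.1f} kpc: Keplerian {v_kepler:5.1f} km/s vs observed ~220 km/s")
```

The Keplerian prediction falls off as $r^{-1/2}$ while the measured curve stays roughly flat, which is exactly the gap that dark matter (or, alternatively, modified gravity) is invoked to fill.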
There are two classes of solutions to this discrepancy: the theory – Newton's or Einstein's theory – is fundamentally wrong; or we have just overlooked some sources of the gravitational field. Of course, the latter option – one leading to the concept of dark matter – is much more conservative from a physics viewpoint. It is favored by detailed observations, too.
But even if we're open-minded about both possibilities, we could try to reconstruct the metric tensor in our galaxy. Just imagine that you try to find a configuration of the metric tensor – the geometry – whose geodesics coincide with the observed world lines of the celestial objects. Assume that this task has a solution and our (imperfect but already nontrivial) observations suggest that it does. So we have something like $g_{\mu\nu}(x,y,z,t)$. Out of this metric tensor, we may calculate the Ricci tensor $R_{\mu\nu}(x,y,z,t)$.
We may use this Ricci tensor to calculate the stress-energy tensor from Einstein's equations, $$R_{\mu\nu} - \frac{1}{2} R\, g_{\mu\nu} = -8\pi G\, T_{\mu\nu}^\text{includes c.c.}$$ In our treatment, you may view Einstein's equations above as a definition of the stress-energy tensor $T$, which I have defined to include the dark energy term $\rho g_{\mu\nu}$, too. (More often, it would be written as an extra term and moved to the left-hand side.) You see that in this approach, we may always obey Einstein's equations. We just define the stress-energy tensor to be the usual multiple of the Einstein tensor constructed out of the curvature components.
In this setup and so far, the question whether the dark matter explanation is the right one remains vacuous. We may always calculate the stress-energy tensor and the difference between this gravitationally calculated stress-energy tensor and the stress-energy tensor from the observed matter may be called the stress-energy tensor of "dark matter". A pure fudge factor.
However, if we assume that the stress-energy tensor of the dark matter is really calculated out of some particular form of matter – such as a cloud of new particles, WIMPs, behaving as a particular kind of dust – which have some particular relations between the pressure and energy density and which evolve according to the same laws as the visible matter – we may already make nontrivial predictions about the stress-energy tensor contributed by the dark matter. And what we observe is already a nontrivial consistency check: the observed cosmology is compatible with the idea that the dark matter whose distribution was decoded from its gravitational influence – i.e. from the motion of the stars etc. – does seem to obey the otherwise known laws of physics, too.
This check is a huge argument in favor of the dark matter paradigm. Analogous consistency checks trying to verify the MOND theories – theories that want to avoid dark matter and blame the anomalous motion on Nature's hypothetical refusal to obey Newton's or Einstein's laws at cosmological length scales – don't work this well. One may continue with other, more detailed checks and the dark matter paradigm just seems to work fine. In fact, it allowed us to decide that most of the dark matter should really be a cloud of a new particle, either WIMP or axions or their mixture or something similar. These theories seem to make sense. Their implications for the early cosmic history seem to make sense, too.
The only "new thing" about this dark matter is that it is dark: we can't observe its presence via light. But we may still observe its presence by other tools, especially by its gravitational influence on other celestial bodies – and if we're lucky, also from its impact on the direct search experiments such as LUX discussed in the previous blog entry.
So where does the emotional opposition to the dark matter come from? There is no observation that would really contradict it; the theory has passed many nontrivial consistency checks; the overall cosmological picture including dark matter makes sense; and the very assumption that some particles don't interact with light is no heresy because it obviously follows from pretty ordinary theories in particle physics. After all, we know that neutrinos have the same property although they're too light to account for the relatively compact and "slowly changing" dark matter halos. But the neutrinos may have heavier cousins. There's clearly absolutely no simple enough way to show that the dark matter paradigm is insane or impossible. All the people who try to convince themselves that they have such a proof are just deluding themselves and others.
In these dark matter discussions, I still think that most of the people's irrational attitudes boil down to their dogmas, to their inability to impartially and rationally compare competing hypotheses that are assigned comparable prior probabilities. All the "critics" just start with the assumption that the dark matter paradigm has to be super insanely unlikely for some emotional reasons – some completely unjustifiable would-be argument that there is something contrived about dark matter – and no amount of evidence is capable of convincing them that the answer differs from their dogmas.
It's very important that qualitatively different theories must be given non-negligible, mutually comparable prior probabilities. You may only falsify theories by showing that they disagree with the evidence; you may only falsify them a posteriori. Many critics – and this is true for the dark matter denial just like it is true for the staggeringly shitty anti-stringy imbeciles or for the anti-quantum, anti-Copenhagen zealots - just don't want to obey this basic rule of science that falsification has to boil down to the evidence and not some a priori emotions.
If one has a theory that tells you something about every element of a class of phenomena, the only way to weaken this theory is to find a disagreement between the theory and some observations; or to show that it is totally vacuous and doesn't have any implications whatsoever. This is clearly not the case of the Copenhagen quantum mechanics; it's not the case of the dark matter paradigm; it's not the case of string theory. All the people who say that they have some evidence against those things are just lying to themselves.
And that's the memo.
Posted by Luboš Motl
Other texts on similar topics: astronomy, science and society
#### snail feedback (9)
reader PN said...
I would warmly recommend reading through both papers in question. I'm not particularly impressed by Bovy & Tremaine's paper. They focus on the approximation Moni Bidin et al. make in section VIII but seem to entirely ignore the discussion MB has on this topic in section 4.2. Even though I'm an astronomer (high energy astrophysics) I'll readily admit I'm no expert on stellar kinematics and DM halo models and would need to read a number of papers to check the validity of what MB claims in 4.2. But the fact is that they DO explore the parameter space Bovy & Tremaine suggest. Some highlights:
- Observational knowledge of azimuthal velocities for stars high above the galactic plane is scarce. Hence it is a bit disingenuous to claim that the values Bovy & Tremaine show to be required for standard DM halo models are "more realistic".
- High azimuthal velocities on the one hand agree with current DM halo models but on the other hand (apparently) require that the total amount of DM is much higher than the standard models suggest, so you end up with a conflict if you assume high velocities. As far as I know, Bovy & Tremaine do not explore this problem at all.
- Other researchers also approximate the term in question to zero.
Any which way, Moni Bidin is by no means proof of no DM at all; that's just media spin. We'll still have the (pretty old) problem of galaxy rotation curves to explain. What they do show, if proven correct, is that the current model of DM halos might be incorrect and might require other forms of DM than are currently considered as the most likely particles.
Bovy and Tremaine would have done a better job if they had explored section 4.2 of the original paper in much more detail. As it stands, it looks like they really didn't add much at all to the discussion, as MB already covered the issues B&T bring up.
reader Luboš Motl said...
Dear pn,
I know that comments formulated such as yours sound convincing to many ears but mine aren't among them.
I just find your attitude lacking in impartiality and ultimately illogical.
First, the big claims by Moni Bidin et al. that are needed for someone to say that the paper is "important" (whether or not it is a "disproof" of the dark matter paradigm) do depend on the section 4.1 so if there's a significant bug in it, it would invalidate the Moni Bidin paper. To write a valid derivation of something, one needs all the steps in the chain to be right.
But we don't really have to solve this partial question because Bovy and Tremaine aren't just correcting an isolated error. They offer their own calculation using the same but corrected methodology as Moni Bidin et al. They end up with a density of dark matter that seems totally compatible with other methods. So the more logical approach now is to take the calculation that looks most comprehensive and takes all the previous observations and corrections into account - the Bovy's and Tremaine's calculation - and try to find problems with it (or confirm it).
You don't seem to be doing this at all. Your comment sounds to me as if you don't want papers of this kind to be read and studied at all. It surely sounds as you want such papers to be viewed as heretical ones. You haven't found any particular problem with Bovy's and Tremaine's paper but your description of it still sounds negative. This looks like a disconnect between the evidence and the conclusions to me. Your judgement sounds prejudiced to me. You want some people to study a section 4.2 of a paper that seems wrong as a whole. Why should the people be doing so? Why are you still trying to sell the Moni Bidin et al. paper to be the holy scripture that should be the benchmark for all these discussions forever? As far as I can say, it is already in the trash bin.
The disagreement about the behavior of the velocities is very clear - and clearly stated in the Bovy-Tremaine paper, even in the abstract. Near the mid plane, the Chilean folks need a constant mean azimuthal velocity; Bovy and Tremaine say that a constant circular speed is more realistic. That's what I've been hearing about the observed speeds since the first moment I've heard about dark matter. So I find it bizarre for you to attack the assumption of constant circular speeds as a fringe speculation. It's what's been said about the dark matter models since the beginning and it's pretty much true for any distribution of the dark matter halo - and even largely true in models without dark matter.
"Other researchers also approximate the term in question to zero."
Well, you surely don't think that this sociological fact proves that this is a legitimate approximation for all purposes, do you?
You may find people and papers thinking it's essentially zero and you may find a justified paper explaining that it's nonzero and the nonzero value is justified by arguments. The result is compatible with other methods to determine the dark matter distribution and you can't find any mistake except for insisting on unjustified approximations written in some papers. So the appraisal of this situation is pretty clear, isn't it?
You and the Chilean have *no* genuine evidence against the model including the calculation by Bovy and Tremaine at all. Am I wrong? If I am not wrong, why are you implicitly suggesting that you have such evidence?
Best wishes
LM
reader PN said...
I was going to write a more thorough reply, but it is sadly clear that you haven't bothered to read the papers much at all beyond the abstracts.
Suffice it to say that MB does not make simple errors on the level of a physicist not knowing his Newton. No one, me included, is making claims contrary to the observations from Oort in the 1930's onward (I did mention the problem of galactic rotational curves, didn't I?)
All I can do is, once again, encourage a thorough reading of both papers. MB doesn't succumb to beginner's errors. A more thorough discussion of their paper is clearly in order, but the assumption that they mixed up their velocities is childish.
reader Luboš Motl said...
Dear PN, the Chilean folks screwed up the velocity dependence on the radial coordinate.
Not only was this fact clearly demonstrated in Bovy's and Tremaine's paper, but I have also described everything that one needs to verify this claim in my previous comment. It's the absolute circular speed, and not the angular velocity, that is approximately constant in the Milky Way (and other galaxies). It's very clear which answer is right, which answer is wrong, and the Chilean answer is the wrong one while the Bovy-Tremaine answer is the right one.
You must be illiterate if you haven't been able to detect this self-evident error of the Chilean paper.
Since the effects of dark matter on the curves have been calculated, and it was many, many decades ago, there hasn't been any outstanding "problem of galactic rotational curves". By your repetition that there is this non-existent problem, you are just re-emphasizing your utter incompetence.
Cheers
LM
reader boguta said...
what is the evidence to suggest that dark matter in the Milky Way is made of particles?
reader Luboš Motl said...
Dear Boguta, the stability of the required dark matter distribution implies that most of the dark matter has non-relativistic velocities, it's "cold dark matter".
The only type of cold dark matter that isn't composed of a gas of particles are MACHOs (Massive Compact Halo Objects) and they're pretty much excluded because if they existed, they would produce lots of gravitational lensing for galaxies behind them which is not observed.
So the dispersed particle-like dark matter is left. It may be composed out of WIMPs or axions, among less plausible choices I can't even enumerate.
Cheers
LM
reader rn said...
ultra-relativistic hot dark matter cannot clump fast enough to produce the small scale structure observed earlier in the universe and cannot have a big effect on galaxy rotation curves.
primordial fluctuations at the end of inflation grew and attracted dark matter and gas. this is supported by galaxies at higher redshifts, which are more numerous than at present, smaller, and whose emission lines indicate star birth. this bottom-up growing of structures could happen only if the dark matter is cold, with non-relativistic velocities.
i don't know much about dark matter but i think the neutralino is one of the most popular candidates. i don't know much about it but i think the axion was more popular 15-20 years ago but i don't hear it as much being a candidate now or much research done about it but this is something i am not sure about.
reader Rosy Mota said...
the invisible matter and energy doesn't exist. all is based in deep mathematical divergences. believe that scalar and vectorial, tensorial fields. the metric tensor as placed "distances" doesn't appear, but yes, sets of vectorial or complex numbers must be fundamental entities in the universe. so as quaternions and octonions that are discrete and continuous fields. that unify quantum theory and STR and GTR
reader Shannon said...
...and PMS.
http://mathhelpforum.com/pre-calculus/6624-stuck-one-first-post.html
# Thread:
1. ## stuck on this one - first post
just joined so don't let me down
Can't sort this one:
$y = 3x$
$2y^2 - xy = 15$
Tried substituting y = 3x and x = y/3. I only get half of the right answer using x = y/3 as y = -3 and x= -1. Can't seem to get the full right answer.
Any ideas
Becca.
2. Originally Posted by becca999
just joined so don't let me down
Can't sort this one:
$y = 3x$
$2y^2 - xy = 15$
Tried substituting y = 3x and x = y/3. I only get half of the right answer using x = y/3 as y = -3 and x= -1. Can't seem to get the full right answer.
Any ideas
Becca.
$\begin{array}{l} y = 3x \\ 2y^2 - xy = 15 \end{array}$
$\begin{array}{l} 2(3x)^2 - x(3x) = 15 \\ 2(9x^2) - 3x^2 = 15 \\ 18x^2 - 3x^2 = 15 \\ 15x^2 = 15 \\ x^2 = 1 \\ x = 1 \\ y = 3 \end{array}$
3. Originally Posted by OReilly
$\begin{array}{l} y = 3x \\ 2y^2 - xy = 15 \end{array}$
$\begin{array}{l} 2(3x)^2 - x(3x) = 15 \\ 2(9x^2) - 3x^2 = 15 \\ 18x^2 - 3x^2 = 15 \\ 15x^2 = 15 \\ x^2 = 1 \\ x = 1 \\ y = 3 \end{array}$
Of course, $x^2 = 1$ also has the solution x = -1, but substitution shows that this x value does not provide a new intersection point.
-Dan
4. Originally Posted by topsquark
Of course, $x^2 = 1$ also has the solution x = -1, but substitution shows that this x value does not provide a new intersection point.
-Dan
Both solutions, (1,3) and (-1,-3), are correct and they provide intersection points.
The graph shows that.
5. Originally Posted by topsquark
Of course, $x^2 = 1$ also has the solution x = -1, but substitution shows that this x value does not provide a new intersection point.
-Dan
Hi,
actually you calculate the intersection between a hyperbola and a straight line. And of course there are two intersection points.
I've attached a diagram to show you where these points are:
6. Pffl on me. The second equation is $2y^2 - xy = 15$. When I did the problem in my head for x = -1 I was using "+xy" Ah well. You win some, you lose some!
-Dan
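For what it's worth, the whole exchange can be settled mechanically; a quick sympy check returns both intersection points:

```python
from sympy import symbols, Eq, solve

x, y = symbols('x y')
print(solve([Eq(y, 3*x), Eq(2*y**2 - x*y, 15)], [x, y]))
# [(-1, -3), (1, 3)]
```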
http://mathoverflow.net/questions/30243/computing-places-over-x-in-f-kx/30348
## Computing places over x in F/K(x)
Let $F$ be a function field of transcendence degree one over its full constant field $K$. Let $x \in F \backslash K$. We know the divisor of $x$ in $K(x)$: it is $(x)_0 - (x)_\infty$, the zero place of $x$ minus its pole place. Could you please give me an algorithm to compute the places of $F$ lying over these two places, and the ramification degrees.
If this setting is too abstract: what if $F$ is the field of fractions of $K(x)[y]/(f(x,y))$ where $K$ is a finite field; could you show me any algorithm to find the places over the zero place and the infinite place of $x$?
As KConrad suggested, I'm telling you a little about how I got involved with this problem.
Once upon a time, when I was a bit younger (and a bit more stupid, but not much less than now), I dared to ask Noam Elkies how I could represent a curve by an equation of a different degree than the one I'm given, for example an elliptic curve of degree 5 (you see, it's not only your time that I waste, so don't take it personally). He wrote me something that I didn't quite understand at the time, but today I went back to the email and fortunately I understood almost all of it:
start from your sample curve y^2 + xy + x^3 + 1 = 0 over Z/2Z
and choose any function of degree 5, say z = x*y. Then eliminate y from the equations by computing the resultant with respect to y of y^2 + xy + x^3 + 1 with the equation satisfied by x,y,z, which is here z - x*y. This gives z^2 + x*z = x^2 + x^5 with x,z functions of degree 2 and 5 on the curve.
Sincerely, --Noam D. Elkies
The only point which wasn't clear for me was "function of degree 5, say z = x*y". So I assumed it means that the degree of the zero divisor or the pole divisor should be 5. Although I checked it with Magma and it was the case, I felt the need to compute the divisor of the function $z$ in $K(x,y)$ myself. So I tried to compute the divisor of $x$ as the first step. Using the "Extensions = Ramified covers" rule of thumb, and looking at $x$ (the coordinate function) as the covering map to $\mathbb{P}^1$, I said that $x$ (the function) corresponds to the point $x = 0$ of the scheme $\mathbb{P}^1$, so I put zero instead of $x$ in my equation and I get my two ramified points $y^2 = 1$. But for the places over the place at infinity downstairs ($1/x$), I couldn't go that far. I changed the variable $1/\theta = x$ and put zero in $\theta$; I'll get $1=0$, unless I replace $y$ with something like $\omega/\theta^2$ as well (which I don't see why), to see my ramification at infinity.
Now my question, unfolded, is: 1. Do you think what I'm doing makes sense, and why doesn't it work for the infinite place? 2. Is there an algebraic/arithmetic way to do what I did, instead of the geometric covering-space approach that I used? I suppose that would be more algorithm-friendly.
Sorry, I think I gave too much background.
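As an aside, the elimination step in Elkies' recipe is easy to reproduce mechanically. A minimal sympy sketch (the exact plane model it prints may differ from the quoted one by harmless choices made during the elimination):

```python
from sympy import symbols, resultant, Poly

x, y, z = symbols('x y z')
f = y**2 + x*y + x**3 + 1   # the sample curve, to be read over Z/2Z
g = z - x*y                 # the chosen degree-5 function
print(Poly(resultant(f, g, y), x, z, modulus=2))
# a plane model in which x has degree 2 and z has degree 5
```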
-
Is this something you know how to do for an extension of number fields already (to "find" the primes over a given prime)? What is the nature of your constant field? Is it perfect (e.g., finite or char. 0)? Some context behind why you're asking the question may be useful, in case someone has a good suggestion about the goal behind the question. – KConrad Jul 1 2010 at 23:30
Thank you very much for your comment. I'll re-arrange it right away. – Syed Jul 2 2010 at 2:26
## 2 Answers
For $x\in F\setminus K$, the degree of $x$ is the degree of the field extension $F/K(x)$. For example, in the $F$ corresponding to your curve, the degree of $x$ is $2$, since the extension $F/K(x)$ is the simple extension corresponding to $y^2 + xy + x^3 + 1 = 0$. Similarly, the degree of $y$ is $3$, since $F/K(y)$ is the simple extension corresponding to $x^3 + yx + y^2 + 1 = 0$.
The degree of $x$ is also the degree of the positive (or negative) part of the divisor of $x$. Thus if $x$ and $y$ are such that the divisor of zeros of each is disjoint from the divisor of poles of the other, then the degree of $xy$ is the sum of the degrees of $x$ and of $y$. These last conditions can be checked as follows: if $y$ is integral over $k[x]$ (i.e. if it satisfies a monic polynomial with coefficients in $k[x]$), then $y$ is finite wherever $x$ is finite. In particular, $y$ never has a pole where $x$ vanishes. Since for your $x$ and $y$, you have $y$ integral over $k[x]$ and $x$ integral over $k[y]$, you know without further calculation that the degree of $xy$ is $5$. Perhaps Elkies had something like this in mind when he wrote you.
Going back to the general case, if you want to compute the actual places where an $x\in F\setminus K$ vanishes (and the vanishing multiplicities), let me recommend http://www.cse.chalmers.se/~coquand/place.pdf, which gives an algorithm. The essential step of the calculation is this: take $K$ to be algebraically closed, and suppose your field $F$ is presented as the fraction field of $K[x,y]/f(x,y)$. Suppose furthermore that you have a solution $(a,b)$ of $f(x,y) = 0$. You must then find the places of $F$ centered at $(a,b)$. If $(a,b)$ is a non-singular point of the affine model, then there is a single place, but generally one needs to resolve the singularity.
For your example, here is how to do the calculations by hand: let's compute the places where $1/x$ vanishes. Let $u = 1/x$ so that $F/K(u)$ is the extension corresponding to $u^3y^2 + u^2y+1 +u^3 = 0$. As you point out, setting $u=0$ gives no solutions. That should not worry us, since $y$ is not integral over $K[u]$, and so we have no reason to expect that $y$ should be finite where $u$ is finite.
Let's try again with $v = 1/y$. We then have $(1+u^3)v^2 + u^2v + u^3 = 0$, which is not monic in $v$, but for which the leading coefficient is invertible when $u$ vanishes. Thus $v$ will be finite when $u$ vanishes. Setting $u=0$, we find $v=0$. We still have a problem here, since $$K[u,v]/((1+u^3)v^2 + u^2v + u^3)$$ is not non-singular at the prime ideal $(u,v)$, as there is no linear term in the defining polynomial, and so it is not clear how many places of $F$ are centered at this prime.
Finally, let's try $w = v/u = x/y$. We then have $(1+u^3)w^2 + uw + u = 0$. As before, $w$ will be finite when $u$ vanishes. We set $u=0$ and find $w=0$, but now we get a place of $F$, since $$K[u,w]/((1+u^3)w^2 + uw + u)$$ is non-singular at the prime ideal $(u,w)$. Furthermore, $w$ is a local uniformizer, since $u = w((1+u^3)w + u)$ implies that $u$ is divisible by $w$. Pulling one more factor of $w$ out on the right, we see $u$ is divisible by $w^2$. Finally, since $u = w^2(\mathrm{unit} + \mathrm{multiple of }w)$, we find that $u$ vanishes to order exactly $2$ at this place. Since $u = 1/x$, we find that $x$ has a pole only at the place of $F$ corresponding to $(1/x,x/y)$ and that the pole order is $2$.
It seems that you should take a look at Stichtenoth's book "Algebraic Function Fields and Codes", where you'll find many general results and examples on computing the number of places, ramification degrees, and so on for finite and separable extensions of a rational function field. Also, there is software called KASH (it's free and works under Linux and Windows) which gives you all of that for concrete examples over finite fields.
http://crypto.stackexchange.com/questions/5966/is-it-possible-to-break-a-hash-based-block-cipher/5968
Is it possible to break a hash-based block cipher?
Let's define the following block cipher:
$C_n = M_n \oplus H(k + n)$ where $C_n$ is the nth block of ciphertext, $M_n$ is the nth block of plaintext, $H$ is a cryptographic hash function, and $k$ is the secret key.
Decryption can be performed trivially, by computing $M_n = C_n \oplus H(k + n)$.
Assuming a proper mode of operation, such as CBC, how would we go about attacking such a scheme? Is this scheme provably as secure as $H$ is?
What you describe is not a block cipher, as a block cipher by definition has no notion of position (i.e. $n$). What you're describing is a stream cipher made out of a hash in counter mode (see Salsa20 for a similar construction). – Samuel Neves Jan 10 at 12:20
You'll need an IV somewhere, otherwise you have key reuse issues. – Thomas Jan 10 at 12:21
@Thomas Hence my choice of CBC. – Polynomial Jan 10 at 12:25
@SamuelNeves Perhaps my nomenclature is off a little. However, wouldn't introduction of a CBC-like mode of operation preclude it being a stream cipher? Granted it's not strictly in either category. Regardless, the question stands as-is. – Polynomial Jan 10 at 12:25
A block cipher is a keyed permutation of a single block. So this is no block-cipher, and using it in CBC is a nonsensical operation. It looks like you're using a hash function in CTR mode, to construct a stream cipher. – CodesInChaos Jan 10 at 13:21
2 Answers
This is not a "block cipher" because a block cipher is a key-dependent permutation of the space of blocks of a given size. Here, you handle data by blocks, but the "encryption" part is done by XORing with a value $H(k+n)$ which depends on the key $k$ and on the "block number" $n$. So you do not have one permutation (for a given key), but a lot of them.
Correspondingly, it is unclear what "CBC" would look like with such a beast. If you imagine it as this:
$$C_0 = M_0 \oplus IV \oplus H(k+0)$$ $$C_n = M_n \oplus C_{n-1} \oplus H(k+n)$$
then this is unfortunately hopelessly weak if you encrypt two distinct messages with the same key, even if you use distinct IVs -- because the XORs with $IV$ and $C_{n-1}$ can be trivially cancelled by anybody ($x\oplus y \oplus y = x$ for all $x$ and $y$) and you end up with the infamous "two times pad".
As @PaŭloEbermann says, what you propose is a stream cipher: it XORs the plaintext with a key-dependent pseudo-random stream, generated from the key $k$ but independent of the plaintext. Here, the pseudo-random number generator is built from a hash function. There are several ways to do that, not all of them good, because some well-known hash functions (MD5 and the whole SHA family) suffer from something known as the length extension attack. The "safe" way to build a PRNG out of a hash function is to use HMAC, and that is called HMAC_DRBG. It is a NIST standard. The same standard defines Hash_DRBG, which is faster (and similar to your proposal) but potentially weaker because it relies on some ill-defined properties of the underlying hash function.
Of course, an IV should be added, and not with a kind-of-CBC, as explained above. Rather, replace your key $k$ with $IV||k$ (concatenation of the IV and the key). There again, subtle weaknesses may lurk.
Either way, building a (stream) cipher out of a hash function is possible, but it requires care, and often offers rather poor performance. Indeed, hash functions are fast at processing lots of input data. Here, you want them to produce lots of output data -- and for that, hash functions are not very fast. Even with SHA-1 and Hash_DRBG, the resulting cipher would be slower than a common AES.
As said in the comments, your construction is not what usually is called a block cipher.
A block cipher is a pair of (deterministic) functions with just two inputs: key and plaintext or ciphertext.
Your function has an additional input, the block number.
One could name this a tweakable block cipher (i.e. $n$ is the "tweak"):
$$Enc_k^n(P) = P \oplus H(k, n)$$ $$Dec_k^n(P) = P \oplus H(k, n)$$
(This construction is a quite bad tweakable block cipher, since it is linear in $P$ and as such easily distinguishable from a pseudorandom permutation, as mentioned in the comment from Ilmari Karonen.)
There are modes of operation for tweakable block ciphers, too, and if I understand correctly, you just invented your own mode as a combination of counter and CBC mode, like this:
$$C_n = Enc_k^n(P_n \oplus C_{n-1})$$ $$P_n = Dec_k^n(C_n) \oplus C_{n-1}$$
But with your construction of a tweakable block cipher this reduces to
$$C_n = C_{n-1} \oplus P_n \oplus H(k || n)$$ $$P_n = C_{n-1} \oplus C_n \oplus H(k || n)$$
The effect is that the actual initialization vector can be canceled out simply by XORing it, so it doesn't fulfill the function of making the encryption unique.
Another way of viewing your description is as using a hash function in counter mode. As mentioned, you should include an initialization vector, and it should come as an input into the hash function. One way would be this:
$$C_n = H(k || IV || n) \oplus P_n$$
Provided your hash function behaves like a pseudorandom function (when used with the key), this will be of similar security as using a block cipher in counter mode.
(The more standard way of running counter mode would be to start the counter input $n$ (but not the block number) at the initialization vector instead of 0. I don't think the two ways differ in security.)
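To make this concrete, here is a minimal Python sketch of the counter-mode construction (my own illustration, not a vetted design), using HMAC-SHA256 as the keyed hash to sidestep the length-extension issue mentioned elsewhere in the thread:

```python
import hmac, hashlib, os

def hash_ctr_encrypt(key: bytes, iv: bytes, plaintext: bytes) -> bytes:
    """XOR the plaintext with the keystream HMAC(key, IV || counter),
    one hash-output block at a time."""
    size = hashlib.sha256().digest_size              # 32-byte keystream blocks
    out = bytearray()
    for block_no, start in enumerate(range(0, len(plaintext), size)):
        msg = iv + block_no.to_bytes(8, "big")
        keystream = hmac.new(key, msg, hashlib.sha256).digest()
        out.extend(p ^ s for p, s in zip(plaintext[start:start + size], keystream))
    return bytes(out)

key, iv = os.urandom(32), os.urandom(16)             # IV must be unique per message
ct = hash_ctr_encrypt(key, iv, b"attack at dawn")
assert hash_ctr_encrypt(key, iv, ct) == b"attack at dawn"  # XOR is its own inverse
```

Decryption is the same function, since XORing with the keystream twice restores the plaintext.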
It might be worth noting that SHA-2, for example, does not become a pseudorandom function when keyed in this way; length-extension attacks may not be applicable in this particular context, but you'd still be making non-standard assumptions about $H$. – Seth Jan 10 at 17:45
Paŭlo, I know you're perfectly aware of this, but for others reading this answer, it may be worth pointing out that the "tweakable block cipher" $Enc_k^n(P)=P\oplus H(k,n)$ described above is also trivially distinguishable from a pseudorandom permutation, being linear in $P$. As indistinguishability from a PRP is the usual security property expected of a block cipher, I'd be hesitant to call this a block cipher of any kind at all (except maybe a "hopelessly broken" one). – Ilmari Karonen Jan 11 at 2:38
http://crypto.stackexchange.com/tags/differential-analysis/hot?filter=year
# Tag Info
## Hot answers tagged differential-analysis
### S-box with differential uniformity = 2
There are 256! possible 8x8 S-boxes (i.e., bijective functions from $\{0,1\}^8$ to $\{0,1\}^8$). This is an absolutely enormous number. You couldn't possibly enumerate all of them within the lifetime of the universe. So, yes, this is one reason why it is not straightforward to determine whether there exists such an S-box with differential uniformity 2. ...
http://mathhelpforum.com/advanced-algebra/176392-orthogonal-matrices.html
# Thread:
1. ## orthogonal matrices
Show that if A and B are orthogonal matrices, then AB is an orthogonal matrix.
I think I need to show that the columns of A are orthonormal to each other, but I don't know how.
2. Originally Posted by Jskid
Show that if A and B are orthogonal matrices, then AB is an orthogonal matrix.
I think I need to show that the columns of A are orthonormal to each other, but I don't know how.
You often pick the hardest characterization to use. Try picking another one--solution below--use at your own risk
Spoiler:
You sure you want to look so soon?
Spoiler:
Fine, do it
Spoiler:
Isn't it true that $A$ is real orthogonal if and only if $A^{-1}=A^\top$? So that $(AB)^{-1}=B^{-1}A^{-1}=B^\top A^\top=(AB)^\top$?
3. A "tiny" alternative:
$M\in\mathbb{R}^{n\times n}$ is orthogonal iff $MM^{t}=I$
So, $(AB)(AB)^t=\ldots =I$
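For anyone who wants a numerical sanity check of this (no substitute for the two-line proof, of course), here is a short NumPy snippet; random orthogonal matrices are obtained as the Q factor of a QR decomposition:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_orthogonal(n):
    # The Q factor of a QR decomposition of a random matrix is orthogonal
    q, _ = np.linalg.qr(rng.standard_normal((n, n)))
    return q

A, B = random_orthogonal(4), random_orthogonal(4)
# (AB)(AB)^t = A B B^t A^t = A A^t = I
assert np.allclose((A @ B) @ (A @ B).T, np.eye(4))
```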
http://mathhelpforum.com/calculus/19009-normal-have-two-differing-gradients-one-point.html
# Thread:
1. ## Is it normal to have two differing gradients for one point?
I have an exam tomorrow and part of it will consist of derivatives...
I was given a revision sheet and this is one of the questions:
Find the average rate of change of f(x) = x^3 - 4x^2 + x +2 from x=1 to x=4
So, I found the derivative: f'(x) = 3x^2 + 8x + 1
Subbed in 1: f'(1) = -4
Subbed in 4: f'(4) = 17
So, average rate of change is delta y over delta x
(17 - (-4))/(4 - 1)
= 21/3
= 7
Therefore, average rate of change is 7...
Looked at the revision sheet answer and the answer was 2.
I realise how they've gotten it... Just subbed in the values into the original equation.
What I don't get is how there are two different averages. Am I doing something wrong? It's a test on derivatives, so it defeats the purpose if you don't work the derivative out... Hmmm.
Which method should I use??
Help would be appreciated.
2. Originally Posted by Lucille
Find the average rate of change of f(x) = x^3 - 4x^2 + x +2 from x=1 to x=4
So, I found the derivative: f'(x) = 3x^2 + 8x + 1
You messed up here.
3. Can you please tell me how I messed it up?
4. Do you mean with the derivative?
f'(x) = 3x^2 + 8x + 1
--- I'm sorry, it was a typing error. My working out says it's:
f'(x) = 3x^2 - 8x + 1
5. Originally Posted by Lucille
Can you please tell me how I messed it up?
The derivative gives the instantaneous rate of change; we do not need f'(x) here.
for a function $f(x)$, the average rate of change between $x = a$ and $x = b$ is given by:
$\mbox {Average Rate of Change} = \frac {f(b) - f(a)}{b - a}$ ......that is, the slope of the secant line connecting the two points
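A quick numerical check (illustrative Python, not part of the original thread) confirms the distinction: the secant slope gives the revision sheet's answer of 2, while differencing the derivative, as in the first post, gives 7:

```python
def f(x):
    return x**3 - 4*x**2 + x + 2

def fprime(x):
    return 3*x**2 - 8*x + 1                   # the corrected derivative

a, b = 1, 4
print((f(b) - f(a)) / (b - a))                # 2.0, the average rate of change
print((fprime(b) - fprime(a)) / (b - a))      # 7.0, what the original attempt computed
```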
6. Thank you so much. I was wondering why it wasn't working. That makes sense.
I just wondered why they would put something like that in a derivative test.
7. Originally Posted by Lucille
Thank you so much. I was wondering why it wasn't working. That makes sense.
I just wondered why they would put something like that in a derivative test.
Well, it is the basic structure from which the derivative evolved. Remember, the derivative is actually the limit of the slope of the secant line as $x$ gets close to $a$.
that is, $f'(a) = \lim_{x \to a} \frac {f(x) - f(a)}{x - a}$
or equivalently, $f'(x) = \lim_{h \to 0} \frac {f(x + h) - f(x)}{h}$
this may help
http://stats.stackexchange.com/questions/4445/model-for-population-density-estimation
Model for population density estimation
A database of (population, area, shape) can be used to map population density by assigning a constant value of population/area to each shape (which is a polygon such as a Census block, tract, county, state, whatever). Populations are usually not uniformly distributed within their polygons, however. Dasymetric mapping is the process of refining these density estimates by means of auxiliary data. It is an important problem in the social sciences as this recent review indicates.
Suppose, then, that we have available an auxiliary map of land cover (or any other discrete factor). In the simplest case we can use obviously uninhabitable areas like waterbodies to delineate where the population isn't and, accordingly, assign all the population to the remaining areas. More generally, each Census unit $j$ is carved into $k$ portions having surface areas $x_{ji}$, $i = 1, 2, \ldots, k$. Our dataset is thereby augmented to a list of tuples
$$(y_{j}, x_{j1}, x_{j2}, \ldots, x_{jk})$$
where $y_{j}$ is the population (assumed measured without error) in unit $j$ and--although this is not strictly the case--we may assume every $x_{ji}$ is also exactly measured. In these terms, the objective is to partition each $y_{j}$ into a sum
$$y_j = z_{j1} + z_{j2} + \cdots + z_{jk}$$
where each $z_{ji} \ge 0$ and $z_{ji}$ estimates the population within unit $j$ residing in land cover class $i$. The estimates need to be unbiased. This partition refines the population density map by assigning the density $z_{ji}/x_{ji}$ to the intersection of the $j^{\text{th}}$ Census polygon and the $i^{\text{th}}$ land cover class.
This problem differs from standard regression settings in salient ways:
1. The partitioning of each $y_{j}$ must be exact.
2. The components of every partition must be non-negative.
3. There is (by assumption) no error in any of the data: all population counts $y_{j}$ and all areas $x_{ji}$ are correct.
There are many approaches to a solution, such as the "intelligent dasymetric mapping" method, but all those I have read about have ad hoc elements and an obvious potential for bias. I am seeking answers that suggest creative, computationally tractable statistical methods. The immediate application concerns a collection of c. $10^{5}$ - $10^{6}$ Census units averaging 40 people apiece (although a sizable fraction have 0 people) and about a dozen land cover classes.
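For reference, the simplest weighted allocation scheme, assigning each land-cover class a fixed relative density, already satisfies the exactness and non-negativity constraints by construction, although it has exactly the ad hoc character this question hopes to move beyond. A minimal Python sketch, with hypothetical weights:

```python
import numpy as np

# Hypothetical relative densities per land-cover class: water, farmland, urban
w = np.array([0.0, 1.0, 5.0])

def allocate(y_j, x_j):
    """Split a unit's population y_j across its land-cover areas x_j
    in proportion to w * x_j; exact and non-negative by construction."""
    mass = w * x_j
    if mass.sum() == 0:                # degenerate unit: fall back to raw areas
        mass = x_j.astype(float)
    return y_j * mass / mass.sum()

z = allocate(40, np.array([2.0, 5.0, 1.0]))
print(z, z.sum())                      # [ 0. 20. 20.] 40.0 -- sums exactly to y_j
```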
Formatting issue now fixed. It was a bug. – Rob Hyndman Nov 12 '10 at 3:27
@Rob Thank you, and thanks to all the people who looked at this: I saw your comments before they were deleted and am grateful for your efforts. – whuber♦ Nov 12 '10 at 16:25
Whuber, would you be willing to provide an answer to your own question here? It seems that you have gained some insight into this problems since initially posing it. Sorry to enter this as a question instead of a comment, but I don't have enough reputation to comment. – fgregg Jan 3 '11 at 18:23
I'm still working on it. My initial model was similar to the one @Srikant Vadali proposed, but with an equality constraint on total population imposed. I implemented it as a Poisson GLM with linear link. It worked rather poorly on realistic test data. My suspicion is that the population density in an area may depend as much, or even more, on the characteristics of the surrounding area as it does on the characteristics of that area itself. I'm still open to creative suggestions or other references! – whuber♦ Jan 3 '11 at 19:08
2 Answers
You might want to check the work of Mitchel Langford on dasymetric mapping.
He builds rasters representing the population distribution of Wales, and some of his methodological approaches might be useful here.
Update: You might also have a look at the work of Jeremy Mennis (especially these two articles).
Thank you. That work provides a pointer into a web of recent research on dasymetric mapping. – whuber♦ Dec 9 '10 at 21:55
Interesting question. Here is a tentative stab at approaching this from a statistical angle. Suppose that we come up with a way to assign a population count to each area $x_{ji}$. Denote this relationship as below:
$z_{ji} = f(x_{ji},\beta)$
Clearly, whatever functional form we impose on $f(.)$ will be at best an approximation to the real relationship and thus the need to incorporate error into the above equation. Thus, the above becomes:
$z_{ji} = f(x_{ji},\beta) + \epsilon_{ji}$
where,
$\epsilon_{ji} \sim N(0,\sigma^2)$
The distributional error assumption on the error term is for illustrative purposes. If necessary we can change it as appropriate.
However, we need an exact decomposition of $y_{j}$. Thus, we need to impose a constraint on the error terms and the function $f(.)$ as below:
$\sum_i{\epsilon_{ji}} = 0$
$\sum_i{f(x_{ji},\beta)} = y_j$
Denote the stacked vector of ${z_{ji}}$ by $z_j$ and the stacked deterministic terms of ${f(x_{ji},\beta)}$ by $f_j$. Thus, we have:
$z_j \sim N(f_j,\sigma^2 I) I({f_j}' e = y_j) I((z_j-f_j)' e = 0)$
where,
$e$ is a vector of ones of appropriate dimension.
The first indicator constraint captures the idea that the sum of the deterministic terms should sum to $y_j$ and the second one captures the idea that the error residuals should sum to 0.
Model selection is trickier as we are decomposing the observed $y_j$ exactly. Perhaps, a way to approach model selection is to choose the model that yields the lowest error variance i.e., the one that yields the lowest estimate of $\sigma^2$.
Edit 1
Thinking about it some more, the above formulation can be simplified, as it has more constraints than needed.
$z_{ji} = f(x_{ji},\beta) + \epsilon_{ji}$
where,
$\epsilon_{ji} \sim N(0,\sigma^2)$
Denote the stacked vector of ${z_{ji}}$ by $z_j$ and the stacked deterministic terms of ${f(x_{ji},\beta)}$ by $f_j$. Thus, we have:
$z_j \sim N(f_j,\sigma^2 I) I({z_j}' e = y_j)$
where,
$e$ is a vector of ones of appropriate dimension.
The constraint on $z_j$ ensures an exact decomposition.
@Srikant Thank you. I was thinking along similar lines when I posed the question and have since tested out a GLM (Poisson distribution with linear link) as well as some other models. Unfortunately, it now looks like any model based solely on land cover type and proportion will not work well: a sample of these data suggests that population patterns depend on a larger spatial context. At a minimum, then, we would need to include spatially lagged covariates in a linear model. – whuber♦ Dec 9 '10 at 21:03
http://physics.stackexchange.com/questions/4600/a-basic-question-on-the-derivation-of-the-wave-equation
# A basic question on the derivation of the wave equation
Today I saw the derivation of the wave equation in class, and I did not understand the following step.
We are modeling a uniform-density string as being made up of tiny masses spaced a small amount $h$ apart connected by springs obeying Hooke's law. Let $y_i(t)$ be the vertical position of the $i$'th such particle. The mass of this particle is $(h/L) M$ where $L$ is the total length, $M$ is the total mass. On the other hand, the displacement of the string between the $i$'th and the $i+1$st particle can be shown to be approximately proportional to $y_{i+1}(t) - y_i(t)$.
It seems to me that this leads to the equation
$$\frac{hM}{L} y_i''(t) = k \left( [y_{i+1}(t) - y_i(t)] + [y_{i-1}(t) - y_i(t)] \right)$$
but that is, in fact, wrong; the instructor's derivation sticks a $1/h$ on the right-hand side.
Can someone explain to me (i) why the right-hand side needs a $1/h$ and (ii) intuitively, what is happening?
My first guess is that the spring constant $k$ should be proportional to the length of the spring, keeping everything else constant? But that seems weird to me; it seems to me like if you have a spring with spring constant $k$, then cutting the same spring into two halves should result in two springs each with half the length but the same spring constant. Can someone unconfuse me?
Thanks.
## Just addressing the question of the spring constant of the cut down spring pieces:
For a typical material spring you will find that cutting it in half results in two springs with twice the spring constant.
To understand why, imagine that you tie the two halves back together again. At their natural length the system looks like:
````O~~~x~~~O
````
where `O` is an end, `x` the place you tied them together and `~` a piece of spring. Now you pull them apart by a distance $2s$:
````O~~~~x~~~~O
````
because the two halves are identical they each stretch by $s$. You also know that each spring exerts the same force $F$ on `x` (because `x` is not accelerating and neither is either segment of spring).
So for the combined spring
$$F = -k_c (2s)$$
and for each half-spring
$$F = -k_h s$$
Thus $k_h = 2k_c$, and you can generalize this to a fragment of length $h$ taken from a total length of $L$ to find that you need a factor of $L/h$.
Thanks, that unconfused me. – alex Feb 5 '11 at 3:48
Probably you got into trouble because of the detailed quantity $\frac{hM}{L}$, which is simply the mass of the small weights. However, how many of them are there? Let's suppose your string is made up of $N$ small weights, each separated by a small space of length $h$. Therefore, we can see that the total length is $L=N h$. Likewise, the total mass should be $M=Nm$, right? Having this in mind, we find that the expression $\frac{hM}{L}$ is actually the mass $m$.
Notice also that your $k$ is actually the stiffness of each spring. However, the total stiffness is given by $K = \frac{k}{N}$. Substituting this value in your equation, we have
$$\frac{hM}{L} y_i''(t) = KN \left( [y_{i+1}(t) - y_i(t)] + [y_{i-1}(t) - y_i(t)] \right)$$
However, we can now put $N$ in terms of $h$, that is, $N = \frac{L}{h}$:
$$y_i''(t) = \frac{K L^{2}}{M} \left( \frac{y_{i+1}(t) - y_i(t) + y_{i-1}(t) - y_i(t)}{h^{2}} \right)$$
There you have the $h$ you were looking for.
By the way, take a look at the difference inside the parentheses. They are very interesting because those terms provide a nice way to understand the Laplacian.
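To see that discrete Laplacian in action, here is an illustrative Python simulation of the mass-and-spring chain (the parameter values are mine, chosen only for numerical stability); an initial Gaussian bump splits into two pulses travelling at speed $\sqrt{KL^2/M}$, as the continuum wave equation predicts:

```python
import numpy as np

N, L, M, K = 200, 1.0, 1.0, 400.0
h = L / N
c2 = K * L**2 / M                         # coefficient of the Laplacian term
x = np.linspace(0.0, L, N)
y = np.exp(-((x - 0.5) / 0.05) ** 2)      # initial Gaussian displacement
v = np.zeros(N)                           # initial velocities
dt = 0.5 * h / np.sqrt(c2)                # CFL-stable time step

for _ in range(1000):
    lap = np.zeros(N)
    lap[1:-1] = (y[2:] - 2.0 * y[1:-1] + y[:-2]) / h**2   # discrete Laplacian
    v += dt * c2 * lap                    # y'' = (K L^2 / M) * Laplacian
    y += dt * v                           # endpoints stay fixed at zero
```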
UPDATE: I realized I didn't answer your specific question "Why does the right-hand side need an (extra) $h$?". Well, you're equating two forces; therefore, the right-hand side needs to involve a second-order derivative, and you need that $h$ to obtain it (that is, the whole thing inside the parentheses) as the separation between weights tends to $0$. However, in my opinion, it's easier to understand the derivation from first principles. Also, you could make this clearer if you write explicitly the dependence of $y$ on $h$.
By the way, I didn't finish the derivation because I think you already know it. – Robert Smith Feb 5 '11 at 2:25
http://mathhelpforum.com/number-theory/173518-affine-cipher-encryption.html
# Thread:
1. ## Affine Cipher Encryption
Hi!
Anyone any good with codebreaking?....
Let plaintext be written in the alphabet a, b, . . . , y, z and let ciphertext be
written in the alphabet A, B, . . . , Y, Z. Both alphabets have N = 26 letters.
Consider the affine cipher on digraphs with encryption
E($\pi$) = 9$\pi$ + 43 mod 676.
Encipher the plaintext TRUTHS.
In this question I want to use standard numbering, so A=0, B=1..... Z=25
2. I would have helped you if the encryption were of the form (ax + b) mod c,
but here the inclusion of 9$\pi$ leaves me helpless.
3. I can maybe help you if you tell me what the use of 9$\pi$ is here.
By the way, see this for some help: http://www.math.cornell.edu/~kozdron...uts/affine.pdf
4. I think that link is only useful for the simple version of the affine cipher... Thanks anyway.
Any other ideas, anyone?
5. Is $\pi$ supposed to be a number or a variable?
6. $\pi$ represents the letter that is to be ciphered or deciphered
7. This is somewhat unusual notation. I've seen people use a letter to denote this, say, $P$.
In any case, you need to represent your digraphs numerically. To do this, you first convert each letter to a number. For the digraph TR, we have $T\mapsto 19$ and $R\mapsto 17$. Then, we write this as a number base 26: $TR\mapsto 19\times 26+17=511$. Then pass it through $E$: $E(511)=586$. Then, $586=22\times 26+14\mapsto WO$.
In summary, the plaintext TR is encrypted into WO. Try the rest for yourself.
8. I tried the others for myself and got UT~GG and HS~SX... Sound right?
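For the record, a short Python check (my own illustration) reproduces the whole computation and confirms all three digraphs, TR~WO, UT~GG, HS~SX:

```python
def encipher_digraph(pair: str) -> str:
    a, b = (ord(c) - ord('A') for c in pair.upper())
    p = 26 * a + b                         # digraph as a base-26 number
    c = (9 * p + 43) % 676                 # E(p) = 9p + 43 mod 676
    return chr(ord('A') + c // 26) + chr(ord('A') + c % 26)

plaintext = "TRUTHS"
print("".join(encipher_digraph(plaintext[i:i + 2])
              for i in range(0, len(plaintext), 2)))   # WOGGSX
```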
http://mathoverflow.net/questions/53549/is-there-an-infinite-increasing-sequence-of-primes-with-bounded-second-or-larger
Is there an Infinite increasing sequence of primes with bounded second or larger differences?
The opposite question is whether, for every infinite increasing sequence of primes and every $k$, the sequence of $k$-th order differences of the elements of the sequence is unbounded.
If instead the answer to the title question is yes, then there are integers $k$ and $B$ and a sequence of primes $q_1 < q_2 < \dots$ such that $$\lvert \Delta^k q_n \rvert \le B \quad \quad \forall n=1,2,\dots$$ and it is natural to ask for the least value of $k$ for which such a sequence exists: can we have $k=2$?
Does somebody know of any result or heuristic in one or the other sense?
EDIT: I now expect the question to have a negative answer, for the following heuristic reason. Start with the first $k$ primes and a given bound $B$, and follow every possible chain of primes with $k$-th order differences bounded by $B$. From an element $q$ of one of these chains we get at most $2B+1$ possible successors, and since the "probability" of any one of them being prime is about $1/\log q$, the expected number of prime successors is about $(2B+1)/\log q$, so the expected number of "active chains" decreases to zero as $q$ increases.
By the way, I have computed the size of the longest chain of primes with bounded second differences starting at 2, 3; for the small values $B=2,4,6,8,10$ it gives $57, 421, 1860, 24661, 380028$. It seems to increase very roughly as $e^{2B}$; is that a reasonable estimate?
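A brute-force search along those lines can be sketched as follows (my own Python, using SymPy for primality testing; it slows down rapidly as $B$ grows):

```python
from sympy import isprime

def longest_chain(B, limit=10**6):
    """Length of the longest chain of primes 2, 3, q_3, ... whose second
    differences are bounded by B in absolute value (depth-first search)."""
    best = 0
    stack = [(2, 3, 2)]                    # (q_{n-1}, q_n, chain length so far)
    while stack:
        prev, cur, length = stack.pop()
        best = max(best, length)
        d = cur - prev                     # next difference lies in [d - B, d + B]
        for nd in range(max(1, d - B), d + B + 1):
            if cur + nd <= limit and isprime(cur + nd):
                stack.append((cur, cur + nd, length + 1))
    return best

print(longest_chain(2))                    # the text above reports 57 for B = 2
```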
If Ben Green or Terence Tao could tell us exactly where are all these arbitrarily long arithmetic progressions in the primes, that might help with k=2. Gerhard "Ask Me About System Design" Paseman, 2011.01.27 – Gerhard Paseman Jan 28 2011 at 4:00
@Gerhard Paseman How would that help? Arbitrarily long is not the same as infinite. It is pretty clear that there can be no infinite arithmetic progression. Where would these progressions have to be to help with $k=2$? – Alex Bartel Jan 28 2011 at 7:23
It was a somewhat tongue-in-cheek reference to existence proofs. But for example, if there were a sequence of such AP's, with the last term of each "close enough" to the first of the next, that would get a sequence of primes with second differences mostly 0, and possibly bounded. Gerhard "Trying Square Jaw This Time" Paseman, 2011.01.27 – Gerhard Paseman Jan 28 2011 at 7:38
1 Answer
If this were known for $k=2$, it would mean that for sufficiently large $n$ and some $A$, there is always a prime between $n^2$ and $(n+A)^2$. I believe that this is still open; see Legendre's conjecture. For comparison, the Riemann Hypothesis would imply there is a prime between $n^2$ and $(n+A \log n)^2$.
It is known that for large enough $n,$ there are primes between $n^3$ and $(n+1)^3$ (and you can lower that exponent from $3$). Perhaps you can use this to construct sequences with some differences bounded, although this is not immediate to me.
So your second paragraph implies that the minimal $k$ for which the $k$-th differences are bounded is at most $k=3$. Do you have a reference? – Alex Bartel Jan 28 2011 at 7:26
@Alex, I don't understand your comment, how does Ingham's result imply bounded 3-differences? As a reference, check out Ingham, A. E. "On the difference between consecutive primes", Quarterly Journal of Mathematics (Oxford Series), 8, pages 255–266, (1937) – Gjergji Zaimi Jan 28 2011 at 7:43
As the last sentence indicates, I don't have a construction which uses it, and in fact, my inclination would be to try to use the $(n+1)^3$ result on values of $k$ greater than $3$. – Douglas Zare Jan 28 2011 at 7:50
Am I missing something? If you can construct a sequence $p_n$ of primes that grows roughly as $n^3$, then their differences grow roughly as $n^2$ and the differences of those are roughly linear. Or am I oversimplifying things? – Alex Bartel Jan 28 2011 at 8:09
The deviations of the locations of the primes from the cubes is not bounded. A priori, the errors may be quadratic. So, the second differences are like $c_1 n \pm c_2 n^2$, and the third differences are like $c_1 \pm c_2 n^2$, if you don't use a careful construction. Thanks for fixing the link. – Douglas Zare Jan 28 2011 at 8:45
http://physics.stackexchange.com/questions/53384/in-noethers-theorem-what-is-a-classical-solution-of-the-equations-of-motion/53388
# In Noether's theorem, what is a “classical solution of the equations of motion”?
I'm reading a book which states that:
for each generator of a global symmetry transformation, there is a current $j^{\mu}_{a}$ which, when evaluated on a classical solution of the equations of motion $\phi_{cl}$, is conserved. I.e. $\partial_{\mu}j^{\mu}_{a}\vert_{\phi=\phi_{cl}} =0$
I get the general principle, but I'm uncertain about the bit I have made italics. Can anyone shed some light on this?
It means $\phi_{cl}$ satisfies Euler-Lagrange equation. – Jia Yiyang Feb 8 at 14:23
## 2 Answers
1) The word classical in this context means $\hbar=0$.
2) In the context of an action principle, the Euler-Lagrange equations
$$\frac{\delta S}{\delta\phi^{\alpha}}~\approx~0$$
are often referred to as the (classical) equations of motion (eom), cf. comment by Jia Yiyang. Here the $\approx$ symbol means equality modulo eom.
Let on-shell (off-shell) refer to whether eom are satisfied (not necessarily satisfied), respectively.
3) In the context of a global continuous (off-shell) symmetry of an action, Noether's (first) theorem implies an off-shell Noether identity
$$d_{\mu} J^{\mu} ~\equiv~ - \frac{\delta S}{\delta\phi^{\alpha}} Y_0^{\alpha},$$
where $J^{\mu}$ is the full Noether current, and $Y_0^{\alpha}$ is a (vertical) symmetry generator.
This leads to an on-shell conservation law
$$d_{\mu} J^{\mu}~\approx~0.$$
The classical equations of motion can be found by solving the Euler-Lagrange equations for the Lagrangian of the system $L=L(q,\dot{q})$; the EL equations state $$\frac{\partial L}{\partial q_i}-\frac{\text{d}}{\text{d}t}\frac{\partial L}{\partial\dot{q}_i}=0,\text{ }i=1,\dots,n.$$ The conserved current $j_a^\mu$ comes from the fact that if a system has a symmetry, then something linked with the symmetry is preserved by the system. If a system shows a time symmetry then the energy must be conserved, if a system shows a translational symmetry then linear momentum must be conserved, and if a system has a rotational symmetry then angular momentum must be conserved.
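As a concrete illustration of "conserved on a classical solution" (my own example, not from either answer): for a harmonic oscillator the energy is constant only once the Euler-Lagrange equation is imposed. A SymPy sketch:

```python
import sympy as sp
from sympy.calculus.euler import euler_equations

t = sp.symbols('t')
m, k = sp.symbols('m k', positive=True)
q = sp.Function('q')

L = m * q(t).diff(t)**2 / 2 - k * q(t)**2 / 2       # harmonic oscillator
print(euler_equations(L, q(t), t))                  # Euler-Lagrange: m q'' + k q = 0

E = m * q(t).diff(t)**2 / 2 + k * q(t)**2 / 2       # the Noether charge (energy)
dE = E.diff(t)                                      # off-shell: not identically zero
on_shell = dE.subs(q(t).diff(t, 2), -k / m * q(t))  # impose the equation of motion
print(sp.simplify(on_shell))                        # 0: conserved on-shell
```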
http://skullsinthestars.com/2009/10/02/the-first-paper-on-invisibility-1902/
The intersection of physics, optics, history and pulp fiction
## The first paper on invisibility? (1902)
Posted on October 2, 2009
When discussing the history of invisibility physics, I typically cite Ehrenfest’s 1910 paper on radiationless motions as the first publication dedicated to the subject. Ehrenfest’s paper, which attempts to explain how electrons could oscillate in a classical atom without radiating, is a direct precursor to the long history of nonradiating sources and nonscattering scatterers that I’ve been chronicling on this blog.
However, it turns out that Ehrenfest was not the first author to discuss some form of invisibility! I recently stumbled across an article in an early issue of the Physical Review: “The invisibility of transparent objects,” by R.W. Wood, 1902. It is not an earth-shattering paper, but it presents some intriguing ideas and suggests that visions of invisibility may go even further back in the sciences…
In 1902, I think it would be fair to say that the Physical Review was still in its infancy. First published out of Cornell University in 1893, it would not be taken over by the American Physical Society until 1913; the APS itself was not founded until 1899. Reading the papers from that early period, my impression is that the journal was still untested, and most important results were still gravitating to Philosophical Magazine and the Philosophical Transactions of the Royal Society. Physical Review was a place for scientists to publish more technical research, brief musings on specialized topics, and pedagogical strategies.
Nevertheless, some very interesting papers were published in that era, and a number of important researchers contributed to the journal in its early days. One of the most notable is Robert W. Wood (1868-1955), a physicist and optical scientist who made important contributions to the study of infrared and ultraviolet radiation and diffraction theory. Wood contributed regularly to the journal in its early years, mostly with suggestions for novel classroom demonstrations, for instance: “Lecture-room demonstration of orbits of bodies under the action of a central attraction” (1897), “Demonstration of the Doppler effect” (1897), “Apparatus for illustrating potential gradient” (1898).
In 1902, though, Wood published, “The invisibility of transparent objects.” This article is best summarized as the musings of Wood on invisibility, and it culminates in a rather intriguing hypothesis, with an experiment to back it up!
Wood starts with a familiar and well-known observation:
A transparent body, no matter what its shape, disappears when immersed in a medium of the same refractive index and dispersion. Could a transparent solid substance be found, whose refractive index and dispersion were the same as those of air, it would be absolutely invisible.
By "transparent body", Wood means an object which does not significantly absorb light in the visible spectrum, such as glass. Though such an object does not absorb light, it is still visible because its index of refraction is different than the surrounding air, and light therefore is partially reflected and partially refracted as it meets the object: some of the light is "bounced back" (reflection), while the light that enters the object has its direction of propagation altered (refraction). At a planar surface, for instance, an incident and refracted light ray satisfy Snell's law,
$n_1\sin\theta_1 = n_2\sin\theta_2$,
where $n_1$ and $n_2$ are the refractive indices of air and the medium, respectively, and $\theta_1$ and $\theta_2$ are the angles which the rays make in each region with respect to the normal. This is illustrated below:
If a transparent object is put into a medium with exactly the same refractive index, i.e. $n_1=n_2$, the direction of the ray is unchanged. There is also no reflection, though one must use the more complicated Fresnel equations to demonstrate this. An imperfect example of this can be done by putting a glass rod (index $n\approx 1.5$) into a glass of water (index $n\approx 1.33$):
(Note: vegetable oil works even better!)
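How much better? At normal incidence the Fresnel reflectance is $R = \left(\frac{n_1-n_2}{n_1+n_2}\right)^2$, and a few lines of Python (with rough textbook refractive indices; treat the numbers as illustrative) show why a closer index match hides the glass so much better:

```python
def reflectance(n1, n2):
    """Fresnel reflectance at normal incidence."""
    return ((n1 - n2) / (n1 + n2)) ** 2

for medium, n in [("air", 1.00), ("water", 1.33), ("vegetable oil", 1.47)]:
    print(f"glass (n = 1.5) in {medium}: R = {reflectance(1.5, n):.4%}")
# roughly 4% in air, 0.4% in water, 0.01% in oil: the closer the index
# match, the less light is reflected and the harder the glass is to see
```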
It is interesting to note that the type of invisibility described here may have been the inspiration for what is considered to be the very first “scientific” story of invisibility, What Was It? by Fitz-James O’Brien (1828-1862), written in 1859. In the story, an invisible monster unexpectedly invades a home one evening. Later, one of the characters attempts to explain the idea rationally:
“Here is a solid body which we touch, but which we cannot see. The fact is so unusual that it strikes us with terror. Is there no parallel, though, for such a phenomenon? … It is not theoretically impossible, mind you, to make a glass which shall not reflect a single ray of light — a glass so pure and homogeneous in its atoms that the rays from the sun will pass through it as they do through the air, refracted but not reflected…”
“That’s all very well, Hammond, but these are inanimate substances. Glass does not breathe, air does not breathe. This thing has a heart that palpitates — a will that moves it — lungs that play, and inspire and respire.”
The physics here is garbled — it is not possible, even theoretically, to make an object which refracts but never reflects — but the material O'Brien writes about sounds very much like the "transparent solid substance" that Wood mentions.
Liquids exist which can hide a glass rod much better than water, but the effect is still not necessarily perfect, as Wood notes:
The disappearance of a transparent substance when immersed in a medium of identical optical properties is usually illustrated by dipping a glass rod into Canada balsam, but the disappearance is not complete, for the dispersion of the glass and the liquid are not the same. A better fluid is a solution of chloral hydrate in glycerine which is quite colorless… A glass rod disappears completely when dipped into it and when withdrawn presents a curious aspect, for the end appears to melt and run freely in drops.
Dispersion refers to the fact that the refractive index of a material depends upon the frequency $\omega$ of the illuminating light, i.e. $n = n(\omega)$. Red light, then, will refract at a different angle than green light. A prism separates white light into multiple colors due to the dispersive properties of the glass, as illustrated in a classic Pink Floyd album cover:
To make an object invisible to all colors, then, requires that the object’s refractive index match that of air for all frequencies of visible light. Wood was unknowingly quite prescient in pointing out the challenge in making such objects; modern invisibility cloaks are subject to a similar limitation, in that they are typically only invisible for a single frequency.
In 1902, it was simply impossible to design and fabricate a material with tailored optical properties; even today, such capabilities fall under the heading of, “plausible, but not yet possible.” Wood, however, knew of another idea for producing an invisibility of sorts:
Lord Rayleigh in his article on optics in the Encyclopædia Britanica [sic] points out that perfectly transparent objects are only visible in virtue of non-uniform illumination, and that in uniform illumination they would become absolutely invisible. A condition approaching uniform illumination might, he says, be attained on a top of a monument in a dense fog.
The hypothesis of Lord Rayleigh is a rather surprising one which I had never heard of before. “Uniform illumination” refers to a situation in which an object is illuminated by natural light of equal intensity from all directions, i.e.
“Non-uniform illumination” refers to a situation where light falls on an object more in some directions than others:
Even if an object is perfectly transparent, it still reflects light and refracts light; if you look in the direction of such an object, you will typically see it because less light is transmitted through it due to reflection and the light which is transmitted is deflected by refraction.
Lord Rayleigh, however, suggests what may be considered a conservation law of sorts: if an object is uniformly illuminated, the sum total of all reflections and refractions from the object is uniformly radiated. This would in principle look no different than a situation in which no object was present at all, making the object invisible!
To see how this works, we consider a crude model of uniform illumination which involves only four directions: up, down, left, right. In the absence of an object, we would see the same amount of light if we observe the system from any direction:
An observer on the left will see the same amount of light coming from the right that the observer on the right sees coming from the left. Now let us suppose we put a transparent object in the middle of this system. Rayleigh’s hypothesis suggests that any right-going light which is deflected from reaching the right observer is perfectly balanced by light which is deflected from another direction. An example of such an effect is shown below, where it is assumed that the object makes each ray of light make a right turn:
As far as an observer is concerned, the object is completely invisible! From each of the four directions, the same amount of light is measured as was measured in the case of no object. There is no way for the observer to distinguish between a ray which came straight across the domain and one which was bent onto that path:
Is Rayleigh’s hypothesis true? I’m not sure, though I have my doubts that it is true in general. However, as long as geometrical optics (light travels along well-defined rays) is valid, and the illuminating light is natural (i.e. incoherent), it seems like the hypothesis works very well. I’m actually going to try and prove it as a little research project, though the proof is certainly rather difficult.
Wood actually made a simple experimental test of this invisibility effect:
I have recently devised a method by which uniform illumination can be very easily obtained and the disappearance of transparent objects when illuminated by it illustrated. The method in brief is to place the object within a hollow globe, the interior surface of which is painted with Balmain’s luminous paint and view the interior through a small hole.
…
If the inner surfaces be exposed to bright daylight, sun or electric light, and the apparatus taken into a dark room, a crystal ball or the cut glass stopper of a decanter placed inside, it will be found to be quite invisible when viewed through the small aperture. A uniform blue glow fills the interior of the ball and only the most careful scrutiny reveals the presence of a solid object within it. One or two of the side facets of the stopper may appear if they happen to reflect or show by refraction any portion of the line of junction of the two hemispheres.
In short, Wood applied luminous paint to the interior of a hollow sphere, which provides uniform illumination inside the sphere. With a small hole for viewing, he found that glass objects became quite invisible while inside the device:
This is a quite fascinating experiment, and one that could be done by pretty much anyone, thanks to the ease of acquiring luminous paint these days! I’m sorely tempted to try the experiment myself; if I do, and am successful, I’ll blog about the results. (Let me know if any of you readers try it, too!)
There’s one problem with Rayleigh’s hypothesis: I can’t actually find the original reference! Wood does not give a specific citation, though he is clearly referring to Lord Rayleigh’s articles in the 9th edition of the Encyclopædia Britannica, the so-called Scholar’s Edition. I’ve found Rayleigh’s articles in that edition, but can find no mention of invisibility in them. The lack of citation suggests that Wood was citing Lord Rayleigh from memory, and he may have misremembered the actual location of the discussion. (It is also possible that Lord Rayleigh’s comments appeared in the revised 10th edition of the Encyclopædia, but I find that unlikely.)
Lord Rayleigh’s articles in the Britannica are also fascinating from a historical point of view, and I’ll come back to them in a future post. Wood’s comments, however, suggest that the history of invisibility physics may go back significantly further than I originally imagined.
### 8 Responses to The first paper on invisibility? (1902)
1. Markk says:
Woods is the guy that debunked N-rays right? Quite the interesting academic. He creates neat classroom demonstrations, is a noted skeptic (I think he was asked to go to see the n-ray folks) and publishes neat articles like this.
• skullsinthestars says:
Markk: Yep, I believe he’s the “N-ray debunker guy.” I forgot to mention in my post that he also co-authored a couple of science fiction novels, such as The Man Who Rocked the Earth. Being an aspiring physicist/pulp fiction writer myself, I’m going to have to read his book in the near future…
The mention of “N-rays” reminds me of another post I need to write soon…
2. The Ridger says:
Very cool…
3. IronMonkey says:
Fascinating dust off of old papers on invisibility. The Rayleigh-proposed experiment is certainly worth a try, and if successful could make a neat classroom demonstration. I guess a modernized version of it would be interesting also… like replacing the luminous paint with walls of LEDs controlled by the user which could control the “amount” of uniformity of illumination. Maybe the pinholes could be replaced with miniature cameras…
And if everything works, let’s make it spin!
• skullsinthestars says:
IM: I’m pretty sure I’m going to try this experiment at some point. The first step, however, is finding some relatively large set of hemispheres that I can work with. Now that I think about it, I’m not sure where to find them…
4. Markk says:
Re – large hemispheres – How large? I remember seeing someone put plaster, no I think it was paper mache and shellac or something like that over a beach ball (inflated) and then just let the air out when all was dry.
They then just cut the thing in half. I don’t know if the surface would be good enough for you though. That was for a display.
• skullsinthestars says:
Markk: I haven’t quite decided how large they should be. They need to be large enough to put a nontrivial piece of glass inside; maybe 6” in diameter? I’m currently leaning towards these, but I’m also still interested in other options.
5. rangarao says:
My grandson, who just finished school, is trying to put a pattern of LEDs on acrylic sheets, 8 of them arranged in layers and dipped in glycerin contained in a glass aquarium. Only the glow of the LEDs appears, along with the fine connecting wires. According to him, this may be an initial step toward a real 3D TV.
His proposal for the time being is to demonstrate the 3D effect.
Any useful hints to improve the model will be highly appreciated.
http://mathoverflow.net/questions/52023?sort=newest
## Is there a Poincare-Hopf Index theorem for non compact manifolds?
Does the Poincaré-Hopf index theorem generalize in any way to non-compact manifolds? In particular, I am interested in the case of a smooth vector field on a cylinder $\mathbb{T}_1\times\mathbb{R}$. If so, are there some additional assumptions that one has to impose on the vector field considered (maybe it should vanish outside some compact set or decay very fast "at infinity"?). Sorry if the question is silly - I know the Poincaré-Hopf index theorem only from very elementary sources (Arnold's book on ODEs and Wikipedia).
Motivation (physical digression): A friend of mine is trying to model the action of cardiac tissue in a heart and its surroundings. One of his aims is to understand a phenomenon called "spiral waves" (they are believed to be partially responsible for heart attacks). I don't know the details, but those "spiral waves" can be described by some ODE defined on a domain which is closely related to the real geometry of the tissue considered. From information about the indices of singular points of a corresponding vector field it is possible to deduce some qualitative information about the occurrence of this phenomenon.
## 3 Answers
Every noncompact manifold admits nonvanishing vector fields, or more generally, vector fields with any specified set of isolated zeros along with specified behavior near each zero.
However, if you have information about the behavior of a vector field near infinity, or just in a neighborhood of the boundary of a compact set, there is an index theorem. Perhaps this is the case with your $\mathbb{T}_1 \times \mathbb{R}$.
In the particular case of a cylinder, there is a simple way to calculate the index. Take any compact subcylinder delimited by two circles. Map the cylinder to the plane minus the origin. Around each of the curves, the vector field has a turning number: as you go around the curve counterclockwise, the vector field turns by some number of rotations (counting counterclockwise as positive). The index of the vector field in the compact subcylinder is the difference: the number of turns on the outer boundary minus the number of turns on the inner boundary.
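As a concrete illustration of this turning-number recipe, here is a small numerical sketch; the vector field and curve are invented for the example (in complex notation the field is $V(z)=z^2$, which has index 2 at the origin):

```python
import numpy as np

def turning_number(vx, vy):
    # Net number of counterclockwise rotations of (vx, vy) sampled
    # along one full traversal of a closed curve.
    angles = np.arctan2(vy, vx)
    steps = np.diff(angles)
    # Map each angular step into (-pi, pi] to undo branch-cut jumps.
    steps = np.mod(steps + np.pi, 2.0 * np.pi) - np.pi
    return int(np.round(steps.sum() / (2.0 * np.pi)))

# Sample the field V(x, y) = (x^2 - y^2, 2xy) on the unit circle.
t = np.linspace(0.0, 2.0 * np.pi, 1001)  # closed loop: endpoint included
x, y = np.cos(t), np.sin(t)
print(turning_number(x**2 - y**2, 2.0 * x * y))  # prints 2
```

The index of the field in a subcylinder between two such curves is then the difference of the two computed turning numbers.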
One way to describe a general formula is this: let $N^n$ be a manifold, and let $M \subset N$ be a compact submanifold with boundary $\partial M$. Let $X$ be a vector field that is nonvanishing in a neighborhood of $\partial M$.
Choose an outward normal vector field $U$ along $\partial M$; now arrange $X$ so that its direction coincides with $U$ only at isolated points, so that if we project $X$ to $\partial M$ along $U$, it is a vector field with isolated singularities. Let $i_+(X)$ be the sum of the Poincaré-Hopf indices over all singularities where $X$ is oriented outward. Then the Poincaré-Hopf index $i(X)$ of $X$ in $M$ equals the Euler characteristic of $M$ minus $i_+(X)$.
Here's one proof: triangulate a neighborhood of $M$ in $N$ so that $\partial M$ and $M$ are subcomplexes, and so that $X$ is transverse to the triangulation except near the singularities, in the sense that in any simplex, the foliation defined by $X$ is topologically equivalent to the kernel of a linear map in general position from the simplex to $\mathbb R^{n-1}$. Put a $+1$ at the barycenter of each simplex of even dimension, and a $-1$ at the barycenter of each simplex of odd dimension. Think of $X$ as a wind that blows these numbers along, so that after an instant, all numbers (except for exceptions near the zeros of $X$) are inside an $n$-simplex. In any typical simplex, all the signs cancel out. However, along the boundary, some of the numbers are blown away and lost. To regularize the situation, modify $X$ by pushing in the negative normal direction. Now $X$ points inward everywhere except in a neighborhood of points where it coincides with the outward normal. Thus everything cancels out except for local contributions given by $i(X)$ and $i_+(X)$.
Suppose $M$ has empty boundary. Let $U\subset M$ be an open set with compact closure whose topological boundary contains no zero of the continuous vector field $X$ on $M$. Suppose $X$ is smooth and hence generates a local semiflow $f_t$, $t \geq 0$.
For sufficiently small $t>0$ the map $f_t\colon U \to M$ is defined and has a "fixed point index" $I(f_t, U)$ (see A. Dold, "Lectures on Algebraic Topology," Die Grundlehren der mathematischen Wissenschaften Bd. 52, Springer, New York (1972)). It can be shown that the integer $i(X,U):=I(f_t, U)$ is independent of $t$ and $U$, and is stable under perturbation of $X$.
If $X$ is not smooth, approximate it by a sequence of smooth fields $X_j$ and define $I(X,U):= \lim_{j\to \infty} I(X_j,U)$.
If $X$ has only finitely many zeros in $U$ and none on $\partial U$, then $I(X, U)$ is the sum of their Poincaré-Hopf indices.
If $M$ has nonempty boundary, this works if at every boundary point $p$ there is an integral curve $u\colon [0,\epsilon)\to M$ with initial point $u(0)=p$.
Welcome (belatedly) to MO, Dr. Hirsch! – Ryan Budney Dec 5 2011 at 21:27
As soon as you can construct a vector field with finitely many isolated singularities on a non-compact manifold, you can slide them all the way to infinity and get a vector field with no singular points. If the Poincaré-Hopf theorem held as stated, the Euler characteristic of every non-compact manifold would therefore vanish.
On the other hand, you can say more useful things. For example, if you can find in your non-compact manifold $M$ a submanifold $M'\subseteq M$ of the same dimension which is compact and with a boundary, such that the inclusion is, say, a homotopy equivalence, then the Poincaré-Hopf theorem works for vector fields with finitely many singular points, all inside the interior of $M'$ and pointing outward on $\partial M'$.
Question: Can one always find such an $M'$ if we start with a non-compact $M$ that has finitely many ends and a vector field with finitely many regular singular points?
http://physics.stackexchange.com/questions/9239/how-to-find-the-effective-mass-of-a-hole/9241
# How to find the effective mass of a hole
How can we find the effective mass of a hole, since a hole in the valence band is just the absence of an electron?
Conceptually, a hole is a virtual object. That virtual object behaves like a real particle, occupying energy levels, "moving" in the lattice, etc. Mathematically, then, since mass affects these properties, we can assign an effective mass that explains the observed numerical quantities associated with those properties. – Mitchell Apr 28 '11 at 16:29
Are you interested in how we measure the mass experimentally or calculate it theoretically? Experimentally, you can measure the effective mass by, e.g., seeing how the energy and momentum of a system containing a hole are related to each other, and applying $E=p^2/(2m)$. – Ted Bunn Apr 28 '11 at 18:54
## 4 Answers
Actually, the concept of a hole is an idealization: it is easier to describe how electrons behave in a band by describing the behavior of the holes rather than that of all the electrons that compose it. So the hole is our own definition; it is defined as the absence of an electron, and all its properties are obtained according to this definition. In particular we find that
$$\left( \frac{1}{m}\right)_{ij}^{(h)}=-\left( \frac{1}{m}\right)_{ij}^{(e)}$$
where $m_{ij}$ is the effective mass tensor.
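To make the sign relation concrete, here is a small finite-difference sketch; the one-dimensional dispersion $E(k) = -2t\cos(ka)$ and all the numbers are purely illustrative choices, not taken from the answer above:

```python
import numpy as np

hbar = 1.054571817e-34        # J s
t_hop = 1.602e-19             # illustrative band parameter: 1 eV in joules
a = 5.0e-10                   # illustrative lattice constant, m

def E(k):
    # Toy cosine band; any computed band E(k) could be used instead.
    return -2.0 * t_hop * np.cos(k * a)

def electron_mass(k, dk=1.0e6):
    # 1/m = (1/hbar^2) d^2E/dk^2, via a central second difference.
    d2E = (E(k + dk) - 2.0 * E(k) + E(k - dk)) / dk**2
    return hbar**2 / d2E

m_bottom = electron_mass(0.0)        # band bottom: m > 0
m_top = electron_mass(np.pi / a)     # band top: m < 0
print(m_bottom, m_top, -m_top)       # the hole mass -m_top is positive
```

Near the top of the band the curvature, and hence the electron mass, is negative; flipping the sign as in the tensor relation above is exactly what yields a positive hole mass.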
This is correct, but it should be emphasized that for a filled band the electronic effective mass is negative, so that it is the hole mass which is positive. – Ron Maimon Oct 14 '11 at 0:55
To understand hole dynamics, consider an idealized model of a band gap, where the electronic energy at wavenumber $k$ has two branches: one with energy rising quadratically from a minimum value,
$$E_1(k) = A + B k^2$$
and one with energy falling quadratically from a maximum value
$$E_2(k) = - A - Bk^2$$
The energy gap for the insulator is $2A$, while the effective mass of a small number of added electrons in the upper band is given by the usual Schrödinger formula:
$$B = {1\over 2m}$$
The Schrödinger equation describes any excitation with a quadratically rising energy/wavenumber dependence.
The electrons are fermions, and you assume that the lower band is completely filled. When you add a few electrons (doping with donor atoms), the new electrons make a free Schrödinger Fermi gas, which has a usual Fermi surface and conductivity. The effective mass is as above.
When you remove electrons, however, the electron excitations near the Fermi surface have a negative value of $B$, so their dynamics runs counter to intuition. If you apply a force to them, they will accelerate away from the force. This property is best treated by reversing the notions of creation and annihilation operators in the Schrödinger field and calling the positively charged annihilation operators creation operators for holes. This reverses the sign of the kinetic term and gives the holes a positive mass. The description of the semiconductor in the acceptor-atom regime is of a Fermi sea of holes, with the usual Fermi-sea conductivity, except with positively charged, positive-mass carriers.
You can equivalently think of this as a reversed Fermi surface for negative-mass electrons. The physics is identical (it differs only in which operator you call annihilation and which you call creation). The negative carriers have a Hall voltage which is opposite to the usual case because they have a negative mass.
The aversion to the negative-mass Schrödinger equation means that the entire literature works exclusively in terms of positive-mass effective holes. I think this is a bit unfortunate, because there are situations where a negative-mass hole description is the best one available, such as the description of Moseley's law for heavy atoms.
In terms of the energy spectrum, holes are electron states. They are just unfilled. So the description is basically the same for holes and electrons. The (negative) curvature of the electron dispersion branch right below the band gap gives you the hole mass.
You may also take a look at the paper "Motion of Electrons and Holes in Perturbed Periodic Fields" by Luttinger and Kohn in Phys. Rev. 97, 869 (1955). It gives some details.
I only know the basics, and I won't try to explain, but look up the tight-binding model. Kittel explains it in his book. That should give you the basic explanation.
... really now? – wsc Apr 30 '11 at 2:17
http://physics.stackexchange.com/questions/822/is-angular-momentum-truly-fundamental/828
# Is Angular Momentum truly fundamental?
This may seem like a slightly trite question, but it is one that has long intrigued me.
Since I formally learned classical (Newtonian) mechanics, it has often struck me that angular momentum (and rotational dynamics generally) can be fully derived from ordinary (linear) momentum and dynamics. Simply by considering circular motion of a point mass and introducing new quantities, it seems one can describe and explain angular momentum fully without any new postulates. In this sense, I am led to believe only ordinary momentum and dynamics are fundamental to mechanics, with the rotational machinery effectively being a corollary.
Then at a later point I learned quantum mechanics. Alright, so orbital angular momentum does not really disturb my picture of the origin/fundamentality, but when we consider the concept of spin, this introduces a problem in this proposed (philosophical) understanding. Spin is apparently intrinsic angular momentum; that is, it applies to a point particle. Something can possess angular momentum that is not actually moving/rotating - a concept that does not exist in classical mechanics! Does this imply that angular momentum is in fact a fundamental quantity, intrinsic to the universe in some sense?
It somewhat bothers me that fundamental particles such as electrons and quarks can possess their own angular momentum (spin), when otherwise angular momentum and rotational dynamics would fall out quite naturally from ordinary (linear) mechanics. There are of course some fringe theories that propose that even these so-called fundamental particles are composite, but at the moment physicists widely accept the concept of intrinsic angular momentum. In any case, can this dilemma be resolved, or do we simply have to extend our framework of fundamental quantities?
## 9 Answers
Note: As David pointed out, it's better to distinguish between generic angular momentum and orbital angular momentum. The first concept is more general and includes spin, while the second one is (as the name suggests) just about orbiting. There is also the concept of total angular momentum, which is the quantity that is really conserved in systems with rotational symmetry. But in the absence of spin it coincides with orbital angular momentum. This is the situation I analyze in the first paragraph.
Angular momentum is fundamental. Why? Noether's theorem tells us that a symmetry of the system (in this case space-time) leads to the conservation of some quantity (momentum for translations, orbital angular momentum for rotations). Now, as it happens, Euclidean space is both translation and rotation invariant in a compatible manner, so these concepts are related and it can appear that you can derive one from the other. But there might exist a space-time that is translation but not rotation invariant, and vice versa. In such a space-time you wouldn't get a relation between orbital angular momentum and momentum.
Now, to address the spin. Again, it is a result of some symmetry. But in this case the symmetry arises because of Wigner's correspondence between particles and irreducible representations of the Poincaré group, which is the symmetry group of Minkowski space-time. This correspondence tells us that massive particles are classified by their mass and spin. But spin is not orbital angular momentum! Spin corresponds to the group $Spin(3) \cong SU(2)$, which is a double cover of $SO(3)$ (the rotational symmetry of three-dimensional Euclidean space). So this is a different concept that is only superficially similar and can't really be directly compared with orbital angular momentum. One way to see this is that spin can be a half-integer, but orbital angular momentum must always be an integer.
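For readers who want to see the $SU(2)$ algebra concretely, here is a minimal numerical check (nothing here is from the answer itself; the spin-1/2 operators are the standard $S_i = \sigma_i/2$ in units where $\hbar = 1$):

```python
import numpy as np

Sx = 0.5 * np.array([[0, 1], [1, 0]], dtype=complex)
Sy = 0.5 * np.array([[0, -1j], [1j, 0]], dtype=complex)
Sz = 0.5 * np.array([[1, 0], [0, -1]], dtype=complex)

commutator = lambda A, B: A @ B - B @ A
print(np.allclose(commutator(Sx, Sy), 1j * Sz))  # [Sx, Sy] = i Sz
print(np.linalg.eigvalsh(Sz))                    # [-0.5, 0.5]: half-integer
```

The half-integer eigenvalues are exactly what distinguishes a genuine $SU(2)$ representation from an $SO(3)$ (orbital) one.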
So to summarize:
• orbital angular momentum is a classical concept that arises in any space-time with rotational symmetry.
• spin is a concept that comes from quantum field theory built on the Minkowski space-time. The same concept also works for classical field theory, but there we don't have a clear correspondence with particles, so I omitted this case.
Addition for the curious
As Eric has pointed out, there is more than just a superficial similarity between orbital angular momentum and spin. To illustrate the connection, it's useful to consider the question of how a particle's properties transform under a change of coordinates (recall that conservation of total angular momentum arises because of the invariance under the change of coordinates that corresponds to rotation). Let us proceed in a little bit more generality and consider any transformation $\Lambda$ from the Lorentz group. Let us have a field $V^a(x^{\mu})$ that transforms in the matrix representation ${S^a}_b (\Lambda)$ of the Lorentz group. Thanks to Wigner we know this corresponds to some particle; e.g. it could be a scalar (like the Higgs), a bispinor (like the electron) or a vector (like the Z boson). Its transformation properties under the element ${\Lambda^{\mu}}_{\nu}$ are then determined by (using the Einstein summation convention)
$$V'^a ({\Lambda^{\mu}}_{\nu} x^{\nu}) = {S^a}_b(\Lambda) V^b (x^{\mu})$$
From this one can at least intuitively see the relation between the properties of the space-time ($\Lambda$) and the particle ($S$). To return to the original question: $\Lambda$ contains information about the orbital angular momentum and $S$ contains information about the spin. So the two are connected but not in a trivial way. In particular, I don't think it's very useful to imagine spin as the actual spinning of the particle (contrary to the terminology). But of course anyone is free to imagine whatever they feel helps them grasp the theory better.
The second statement is not really correct. Spin is a natural concept for nonrelativistic QM. Moreover, spin variables are not a good way to classify the representations of Poincaré; the correct way is to use helicity and total angular momentum. – Grisha Kirilin Nov 15 '10 at 11:48
@Grisha: spin is not natural in QM. It is inserted by hand. If you want to understand its origin, you have to study QFT (or at least Dirac equation). As for the latter part: I am talking about massive particles. There is really no need to talk about helicity there. You only need it for massless particles. – Marek Nov 15 '10 at 12:43
@Marek. There are Poisson brackets for the angular momentum in classical mechanics. If you use Heisenberg's canonical quantization, you obtain the algebra of the angular momentum operators. Only from this algebra you can easily show that 2j+1 should be an integer number, where "j" is the maximal projection of the momentum. Therefore, "j" can be either integer or half-integer. Of course, it is because the Lie algebra accounts for local properties of a group, which are the same for SO(3) and SU(2). It is simple QM, where there is no Poincaré. – Grisha Kirilin Nov 16 '10 at 10:06
Moreover, if you use the method of induced representations to construct the representations of Poincare, then, of course, the little group (aka stabilizer subgroup) would be SU(2) for a massive state. But this little group classifies the total angular momentum of the state in the rest frame. In QFT you can not construct operator only for the spin for any state - it exists only for the total angular momentum. Consider a Dirac particle in the Coulomb field, there aren't states with a definite electron spin - just because it is not conserved. Spin is an essentially nonrelativistic concept. – Grisha Kirilin Nov 16 '10 at 10:14
@Marek. It seems you skipped my argument ending with the phrase "Therefore, j can be either integer or half-integer", where I said nothing about orbital momentum. I was talking about the operator, because you mentioned the representations of Poincaré. The physical way to classify representations is to choose operators which separately commute with the Hamiltonian. There is no such operator as "spin". You can check the explicit form of the spherical bispinor (Landau & Lifshitz Vol. 4, Eq. (24.13)): they mix different projections of spin and orbital momentum; only the total angular momentum is definite. – Grisha Kirilin Nov 16 '10 at 13:21
In classical mechanics angular momentum is almost always derived from linear momentum. This actually might be the problem, because it is also possible to do it the other way around: linear momentum is a limiting case of angular momentum where the radius of rotation becomes infinite. In this view the split between rotational and linear vanishes; the new concept that is introduced is infinity.
This is not a new idea of mine, it has been established since the 19th century. By using projective geometry one can integrate linear and angular kinematics and dynamics in one framework (i.e. a translation is a rotation around an infinite axis; a pure moment is a force along an infinite line of action). Keywords: Felix Klein, linear complexes.
Another issue is intrinsic angular momentum. I could say: study the fundamentals, the principles, and the maths, and eventually you'll get a holistic picture, but that's not what I believe. I think we are in need of some kind of geometrical electron model that permits us to depict intrinsic angular momentum.
Interesting thoughts there, I agree we're in need of a more geometrical model. May have a look at that framework you mention. – Noldorin Nov 15 '10 at 12:12
Do you have some references for seeing linear momentum as a limiting case of angular momentum via projective geometry? – student May 28 '12 at 17:48
Basically a translation can be seen as a rotation around a line at infinity at 0 degrees. Likewise a pure moment (i.e. zero net force) can be seen as force of 0 along a line at infinity. – Gerard Sep 6 '12 at 20:55
Whether you call such a concept "fundamental" is a matter of taste, and the proposition is just a meaningless emotional slogan. The angular momentum is surely an important quantity that is, in a very well-defined sense, as important as ordinary momentum. Incidentally, both of them are conserved if the physical laws are symmetric with respect to translations and rotations, respectively.
So the real question is why the spin in quantum mechanics can't be reduced to the orbital motion - i.e. to the "linear motion" and ordinary "momentum". It's because the objects in quantum mechanics are described not just by their shape in space but by wave functions, and wave functions may be said to transform nontrivially (into something else) under rotations.
In particular, if the wave function (or a field) is a vector or a tensor or, most typically, a spinor, then it means that in a different coordinate system, the values of the components of the wave function will be different. This is possible even in the case when the wave function (or field) is fully localized at one point, i.e. nothing is rotating "orbitally".
The angular momentum is defined by the change of the phase of the wave function under rotations, which may come from the dependence of the wave function on space, but also from the transformations of the components of the wave function among each other, which is possible even if everything is localized at a point. So even point-like objects may carry an angular momentum in quantum mechanics, the spin.
Note that the spin is a multiple of $\hbar/2$ and $\hbar$ is sent to zero in the classical limit, so in the classical limit, the spin as the internal angular momentum becomes zero and disappears, anyway.
Another new feature of the spin is that, unlike the orbital angular momentum, it may be half-integer, not just a multiple of $\hbar$: also $\hbar/2$ is possible. That's because the wave functions (and fields) may transform as spinors that change their sign if they're rotated by 360 degrees. Only a rotation by 720 degrees is topologically indistinguishable from "no rotation", so wave functions are obliged to return to their original values under a 720 degree rotation. But fermions change their signs under rotations by 360 degrees, which corresponds to their half-integral spin.
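The 360°/720° statement can be checked in two lines; this sketch uses the closed form of the spin-1/2 rotation about $z$, $R_z(\theta) = \mathrm{diag}(e^{-i\theta/2}, e^{i\theta/2})$:

```python
import numpy as np

def Rz(theta):
    # Spin-1/2 rotation about z: exp(-i * theta * sigma_z / 2).
    return np.diag([np.exp(-0.5j * theta), np.exp(0.5j * theta)])

print(np.allclose(Rz(2.0 * np.pi), -np.eye(2)))  # 360 degrees: sign flip
print(np.allclose(Rz(4.0 * np.pi), np.eye(2)))   # 720 degrees: identity
```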
If the word "fundamental" means that it can't be reduced to some other things such as a classical intuition about motion and rotation, then be sure that the spin is damn fundamental, much like the rest of quantum mechanics.
Best wishes Lubos
Thanks for your reply. I think your reasoning is right. Physicists like to use the term "fundamental" a lot, but it probably isn't very well-defined. – Noldorin Jan 14 '11 at 17:48
Dear Noldorin, I actually use it often, too - but just not for random quantities such as the angular momentum. I use it for important principles and universal laws - anything that is not just an approximation; anything that is unique and doesn't have lots of "sibling concepts"; anything that matters in the whole Universe. In particular, the fundamental scale is probably the Planck scale - more generally, it's the place where the most accurate, not approximate laws of the Universe show their physical consequences directly. – Luboš Motl Jan 17 '11 at 10:49
In classical mechanics, the fundamental entities change according to the framework you opt for. If you do classical Newtonian mechanics, I'd say the fundamental entities are positions and velocities. All the others can be derived from them, and the dynamics of particles are described in terms of functions of these (forces are functions of time, positions and velocities).
But if you go to Hamiltonian mechanics, then positions and momenta become fundamental. And the Hamiltonian can be expressed as a function of these and possibly time.
Clearly, in classical mechanics, angular momentum is always a derived quantity, because it is always an orbital angular momentum, never an intrinsic angular momentum. Even when you have an object rotating on its own axis, this can be understood as the particles constituting the object executing an orbital motion. Of course, you can write Hamiltonians which depend on the angular momentum of a top, but these are higher-level descriptions; the angular momentum of the top could still be decomposed in principle into the orbital angular momenta of its constituents. This would not be a very practical approach to problem-solving, of course.
Therefore, as you say, a fundamental intrinsic angular momentum is a novelty in quantum mechanics. The way it enters the equations is usually through the multiple-valuedness of the wave function. Say a spin 1/2 particle has to be described by two independent component wave functions (there could be more components, but these would not be independent). I don't know of any way around this. This is a basic fact of how nature works and it is related to the representations of the symmetry group of space-time.
Since the symmetry group of space-time is basically the same in quantum and in classical physics, I don't see, however, why it should not be possible to describe particles with intrinsic angular momenta in classical mechanics. I think it is certainly possible in principle. The question is, is it useful? Since all our elementary particles have to be described at the quantum level, what use is a classical theory of particles with intrinsic angular momenta? Except in the sense of tackling problems like the top by simplification or something?
EDIT: As a matter of fact, classical field theories do have spin. Think of the Maxwell equations for instance.
Thanks for your reply. It confirms a few of my views for sure. I wasn't aware classical field theories predict spin. Ordinary quantum mechanics is not a field theory and predicts spin, however? – Noldorin Nov 15 '10 at 12:13
@Noldorin: it doesn't predict it. You can work in QM without spin as well. Also, in QM you can have spin-1/2 bosons, which isn't really consistent with reality. That is why the Dirac equation was such a huge success: it indeed did predict spin! But it wasn't until later that people understood where spin really comes from. For that you need to consider fields. – Marek Nov 15 '10 at 15:02
@Raskolnikov: classical field theory and quantum particles are deeply related. The bridge is through quantum field theory. This is obtained by quantization of classical field theory. Once you have it quantized, you can notice that there exists something called "particle approximation" (this is about Feynman diagrams). So in the end you'll arrive at particles. So it's morally correct to say their spin comes from classical field theory. – Marek Nov 15 '10 at 15:08
Really? I've never heard it used by anyone in Britain, least of all in public. Physicists are known for corrupting language, however! I can concede it's used in some areas though, so fair enough. :) Just a warning: the chance you are understood outside of the physics/science community is about zero. – Noldorin Nov 16 '10 at 0:09
Lubosh wrote: "The angular momentum is defined by the change of the phase of the wave function under rotations, which may come from the dependence of the wave function on space, but also from the transformations of the components of the wave function among each other, which is possible even if everything is localized at a point. So even point-like objects may carry an angular momentum in quantum mechanics, the spin."
In QM it is impossible and is not necessary to impose R = 0 (see my blog) to have a system at rest. On the contrary, one has to put P = 0. It does not mean point-likeness but ubiquity instead.
There is an article by R. Ohanian on spin. But I am afraid it is finally a tautology or so.
I think the angular momentum is fundamental. I think that even in classical mechanics a description of anything with the help of solely three coordinates R(t) is too primitive. Generally everything is not point-like and rotates, roughly speaking. So the intrinsic angular momentum J is as fundamental as the linear momentum P (as well as color, charge, and flavor ;-).
Vlad, you are in a catch-22 situation. In most cases, you don't want to answer, but only to comment on an answer. So it is not an answer and you get negative scores. But you will not be able to comment until you accumulate 50 points of reputation. Break the loop: look for some questions you can answer in a useful way, and/or ask questions of general interest. – arivero Jan 18 '11 at 0:12
@Vladimir: I'm not sure I agree with your answer, but I'm not sure why you got the down-votes either. (People should indeed leave reasons!) – Noldorin Jan 18 '11 at 23:16
@Noldorin: many think of elementary particles in QM as stable point-like objects, whereas there is no stable solution localized at a point all the time. Wide wave packets may be more or less "stable", but they are not point-like objects. The latter case is much more realistic due to the necessity of stability while preparing and measuring the spin projections. – Vladimir Kalitvianski Jan 18 '11 at 23:50
Interesting. I'm not very familiar with QFT, but are you saying that all particles (field wave packets) are unstable to some degree? Are there any solitons in QFT? – Noldorin Jan 19 '11 at 0:19
@Noldorin: Yes, they (the wave packets) are unstable, and the extent of their instability is determined by the preparation device (source, diaphragms, etc.). In addition, if we speak of charge scattering, in the final state you always have many (soft) photons. You cannot scatter without radiation (elastically). It means the initial system is always "broken apart" in some way (inelastic scattering). It is a strict QED result. The system is "large and soft", easy to deform inelastically. It is incompatible with a soliton-like construction. – Vladimir Kalitvianski Jan 19 '11 at 10:21
A hint of the special role of angular momentum appears when you look for its conjugate variable. It is angular position, which is dimensionless. And then you have that any product of a variable times its conjugate has units of action, which are the same units as angular momentum. So classical mechanics already tells us that something is going on. (Caveat: you can get the same units with a scalar product and with a cross product, and the physical meaning is different. If you have checked the pamphlets from German car and engine makers, you could have noticed the unit "Nm", newton times meter, and the unit "Joule", used differently.)
As for spin and extended particles, I'd say the contrary: it is not contrary to intuition that point particles have some intrinsic angular momentum, because a point looks as if it had some rotation invariance built in. The surprising thing is that extended objects have this angular momentum, without a point for the rotational symmetry to pivot on.
The existence of a spin of a particle is of course an indication that the particle is in fact composed of space-separated parts. This does not mean, though, that the particle is composed of other particles.
For example, at least a part of the electron's spin is currently known to be in fact the orbital momentum of the quantum vacuum fluctuations which the electron's core sets into rotation. This part is known as the anomalous angular momentum of the electron.
Another example is the photon, where the spin can be explained as the order in which the energy contained in the electric and magnetic fields rotates around the axis laid along the direction of the photon's propagation.
-1: this answer is wrong. There is no anomalous angular momentum of the electron. There is an anomalous magnetic moment, but this is not angular momentum, it is current. – Ron Maimon Sep 19 '11 at 0:36
There is more to it than spin being intrinsic angular momentum. An electron has an "internal degree of freedom", namely to be left-handed or right-handed, and it can leave point A with RH spin and arrive at B with LH spin. Thus Pauli needs two complex components in his equation (unlike a photon, which arrives with the same spin, although it too has LH and RH, so there is no internal degree of freedom). This is distinct from the spin vector, which defines a direction in space. The two-valuedness comes from rotation being about a bivector, which can point up or down along the axis of rotation. One can do spatial rotations either way, and electrons seem to make the distinction, as if there were two kinds; but everything else, the mass and charge, is the same, so we say it is the same particle, with opposite spins. So it seems that there is no necessary connection to either relativity (except for fixing up the Thomas factor in the Pauli equation) or QFT. Hamilton had the algebra to make the classical distinction between left and right (it is built into quaternion algebra), but he did not see it as a mechanical property of particles; then again, he did not see the Maxwell equations either.
http://quant.stackexchange.com/tags/volatility/new
# Tag Info
## New answers tagged volatility
### Is my VaR calculation correct?
You need to use the forecast for both the mean and sigma. It should look something like this:
```
forecast = ugarchforecast(modelfit, n.ahead = 1, data = mydata)
sigma(forecast)
fitted(forecast)
```
Then plug these values into the equation:
$$\hat{VaR}_{0.99,T|T-1}=\hat{\mu}_{T|T-1} + \hat{\sigma}_{T|T-1} \cdot q_{0.99}$$
where $T$ is ...
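For readers working outside R, the final plug-in step looks like this in Python; `mu_hat` and `sigma_hat` stand in for whatever one-step-ahead forecasts your volatility model produced, and the numbers are purely illustrative:

```python
from scipy.stats import norm

mu_hat, sigma_hat = 0.0002, 0.013    # illustrative one-step-ahead forecasts
q_99 = norm.ppf(0.99)                # standard normal 99% quantile, ~2.326
VaR_99 = mu_hat + sigma_hat * q_99
print(VaR_99)
```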
### Relationship between European, American options volatility
To expand a bit further on what Brian mentioned: you're going to get a different vol surface for American vs. European options. One very simple and practical way that you can prove this to yourself is just to think about how the implied forwards are generated. In the European case we use the entire strip ...
### Relationship between European, American options volatility
For a standard American-exercise option expiring at $T>0$, the price is still monotonically increasing in volatility under the Black-Scholes model (though obviously it is not strictly monotonic, due to early exercise rendering the price insensitive to volatility in some regions of parameter space). To see this, you can use one of three techniques: Investigate ...
### Time Varying Volatility
The first and the second moments are independent, so even if returns are not autocorrelated, the size of returns can be, of course. Example: generate a path of a variable with a binomial distribution that takes the value 1 or -1, that is $\sum_{i=1}^n x_i$, $x_i\in\{1,-1\}$; now you can generate another path $\sum_{i=1}^n x'_i$, $x'_i\in\{1,-1\}$, choosing values of ...
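To see this point concretely, here is a generic toy simulation (my own stand-in construction, not necessarily the one this answer goes on to describe): give each return an independent ±1 sign but a magnitude that persists in regimes, and the returns show no autocorrelation while their absolute values do.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20000
signs = rng.choice([-1.0, 1.0], size=n)               # i.i.d. signs

# Magnitude switches between calm and turbulent regimes of length 100.
sigma = np.repeat(rng.choice([0.5, 2.0], size=n // 100), 100)
r = signs * sigma

def lag1_autocorr(x):
    x = x - x.mean()
    return np.dot(x[:-1], x[1:]) / np.dot(x, x)

print(lag1_autocorr(r))          # ~ 0: returns are uncorrelated
print(lag1_autocorr(np.abs(r)))  # clearly positive: sizes are correlated
```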
### Time Varying Volatility
To your first question: the first and the second moments are independent, so even if returns are not autocorrelated, the size of returns can be. To your second question: autocorrelated volatility implies volatility not being constant but varying, which is heteroscedasticity, or volatility clustering.
### Transformation to reduce standard deviation without changing median
As Quartz says, it is possible to make non-linear transformations taking into account skew and kurtosis, but this is mostly limited to univariate processes (one approach for a t distribution is to match moments). For multivariate processes, it is considerably more difficult. A more general solution is to rely on Entropy Pooling. You could take views on ...
### Transformation to reduce standard deviation without changing median
It's not possible with a simple linear transformation like the one you mentioned: since the scale, and thus the distance between mean and median, is required to change, either the mean or the median will not be preserved. Therefore you must use nonlinear transformations, which will complicate maintaining skew and kurtosis quite a bit and imho will not be ...
### Fitting distributions to financial data using volatility model to estimate VaR
The standard answer to your question would be to do maximum likelihood estimation. When you say "plug in $\sigma$", note that you can show that the sample estimate of $\sigma$ is actually the maximum likelihood estimate of $\sigma$ for the normal distribution. If I can assume that your data are IID, then what you do is use your distribution with parameters ...
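A quick numerical sanity check of that claim, as a sketch with simulated data (all numbers are arbitrary): minimizing the negative normal log-likelihood recovers the same $\hat\sigma$ as the plain sample formula with $1/n$ normalization.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(0)
data = rng.normal(0.0, 0.02, size=5000)

def nll(params):
    mu, sigma = params
    return -norm.logpdf(data, loc=mu, scale=abs(sigma)).sum()

fit = minimize(nll, x0=[0.0, 0.01], method="Nelder-Mead")
# MLE sigma vs sample std with ddof=0: agree to optimizer tolerance.
print(abs(fit.x[1]), data.std())
```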
http://mathoverflow.net/questions/75023/is-this-pleasing-polynomial-irreducible/75028
## Is this pleasing polynomial irreducible?
Let:
$f(x)=x^n+2x^{n-1}+3x^{n-2}+4x^{n-3}+\ldots + (n-1)x^2+nx+(n+1)$.
Is $f(x)$ irreducible?
In light of the answers to this question, I now know that this is true when $n+1$ is prime. What about when $n+1$ is composite? I have checked a lot of cases and it seems to be true.
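One can reproduce this kind of empirical check quickly with sympy; the range of $n$ below is arbitrary, and the check is of course no substitute for a proof:

```python
from sympy import Poly, symbols

x = symbols('x')
for n in range(1, 25):
    # f(x) = x^n + 2x^(n-1) + ... + n*x + (n+1)
    f = Poly(sum((k + 1) * x**(n - k) for k in range(n + 1)), x)
    assert f.is_irreducible, f"reducible for n = {n}"
print("irreducible over Q for all n tested")
```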
(Unnecessary background information: this is linked to this recent question of mine. At the point where I say:
"$g_n(x,−n)=x^2h(x)$,"
the $h(x)$ in question has the property that substituting $x-1$ for $x$ puts it in the form of $f(x)$ above. If I can show that $h(x)$ is irreducible, then I will have shown that the galois group of the original polynomial is doubly transitive. Not that I am trying to draw attention back to my original question!)
Not an answer, but: it was observed a few years ago that for $n=6$ this polynomial $x^6 + 2x^5 + 3x^4 + 4x^3 + 5x^2 + 6x + 7$, though irreducible, has non-generic Galois group: it's the transitive copy of $S_5$ in $S_6$! Anybody remember who first noted this? – Noam D. Elkies Sep 9 2011 at 18:35
## 1 Answer
It is a conjecture that for any $k < n$ the polynomial $$\frac{d^k}{dx^k}(1+x+\cdots+x^n)$$ is irreducible in $\mathbb Z[x]$. Your question is the case $k=1$. It is known that the set of integers $n\in [0,t]$ for which the polynomial $nx^{n-1}+(n-1)x^{n-2}+\cdots+1$ is reducible has size at most $O(t^{1/3+\epsilon})$; see the paper
A. Borisov, M. Filaseta, T. Y. Lam, O. Trifonov, "Classes of polynomials having only one non-cyclotomic irreducible factor", Acta Arith. 90 (1999) 121-153
http://mathoverflow.net/questions/78910?sort=oldest
Computer algebra system for calculation of characteristic polynomial of sparse matrix
I have an $n \times n$ matrix for which I need to calculate the characteristic polynomial. The matrix is over $GF(2)$, and $n \approx 10^4$. However, the matrix is very sparse, with around $n$ nonzero entries.
Is there a computer algebra system with sparse matrix structures capable of handling the calculation of the characteristic polynomial? What would be the most promising system to try?
What do you need the characteristic polynomial for, if I may ask? Maybe you can avoid computing it explicitly. – Federico Poloni Oct 23 2011 at 19:41
@Federico Poloni: The coefficients of the characteristic polynomial will give the next term of a recursion given the n previous terms. – George Arg Oct 23 2011 at 20:45
I don't think sage or gp/pari can do $10^4$ with less than 8G RAM. sage does $10^3$ in about 9 seconds. Probably you don't want to use the fact that in $GF(2)$ $x^n=x$? – joro Oct 24 2011 at 6:05
@unknown: In other words, you're iterating matrix multiplications? I think that this linear recurrence boils down to it... – Federico Poloni Oct 26 2011 at 8:48
5 Answers
It seems that Fermat (see http://home.bway.net/lewis/) specializes in these sorts of computations.
I have routinely used Sage (google "sage math") to compute the characteristic polynomial of adjacency matrices of graphs on around 800 vertices. I believe Sage calls LinBox for the actual computation. Note that you can read off the characteristic polynomial from the rational normal form, and this might be cheaper to compute.
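For concreteness, a minimal Sage sketch of the computation asked about (Python syntax, run inside Sage; the size and density are taken from the question, and whether it fits in memory at $n = 10^4$ will depend on the machine, per the comments above):

```python
# Run inside Sage. Build a random sparse matrix over GF(2) with roughly
# n nonzero entries and compute its characteristic polynomial.
n = 10**4
A = random_matrix(GF(2), n, n, density=1.0/n, sparse=True)
f = A.charpoly()
print(f.degree())
```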
I have been experimenting with Fermat to see how this goes. For a 5000x5000 test case I made up, it computed the minimal polynomial (Wiedemann algorithm) in about 30 seconds, then, using that, the characteristic polynomial in about 22 minutes. It used about 45 meg of RAM. This is on a three-year-old iMac.
I am the author of Fermat. You can find my email online, and the Fermat web site.
To the Maple person, James:
I tried your code in Maple and the kernel crashed; memory allocation failed. (My iMac has 4 gig RAM.)
Does your example really have only 1 entry per row? The test case I quote above with Fermat has about two per row. With one per row, I get the answer in 0.92 seconds.
@rhlewis As shown, I used a randomly generated matrix with density about 1/N, so there were, in fact, some zero rows, as well as some rows with 1 or 2 ones, and one row with 3 ones (based on a handful of random samples I looked at). If I change the density to 2/N, the runtime goes up to about 22 seconds. I just now tried a random permutation matrix as well, and got a runtime of about 5 minutes. That seems to make sense, as there are no zero rows to cut down the work. I used a 64-bit Linux machine with 8 Gb of memory. – James Oct 25 2011 at 5:02
In Maple 15 I'm getting times of about 12 seconds on decent, fairly recent hardware. I used
```
with( LinearAlgebra ):
N := 10^4:
A := RandomMatrix( N, N, 'generator' = 0 .. 1, 'density' = evalf( 1 / N ) ):
time( Modular:-CharacteristicPolynomial( 2, A, t ) );
```
for testing. The fact that it is sparse is important. Runtimes for dense examples were much, much longer.
EDIT: It seems that if you use a sparse matrix data structure, and smaller (one byte) integers, this can be done much more efficiently. If you have access to Maple 14 or 15, try this (and note that, here, $N = 10^5$):
```
with( LinearAlgebra ):
N := 10^5:
A := RandomMatrix( N, N, 'generator' = 0 .. 1, 'density' = evalf( 1 / N ), 'storage' = 'sparse', 'datatype' = 'integer'[1] ):
time( CharacteristicPolynomial( A, t ) mod 2 );
```
On my machine, I get the answer in less than 0.5 seconds. (It took longer, about 0.75 seconds, to construct the matrix!) Memory used was about 45.3 MB, but going back to size $10^4\times 10^4$ reduces the memory to 2.7 MB. I did examples with $N = 10^6$ in about 5 seconds and 220 MB of memory.
Anyway, it's clear from the various answers that there are computer algebra systems out there that can handle the computations you are interested in (at least Maple and Fermat, and likely others), so you should be in a position to choose whichever system is most convenient for you.
Update.
The versions of Fermat available for download were not able to use the best determinant algorithms for matrices with > 5000 rows. I've changed that now. Google "Lewis Fermat"
I experimented some more with examples like this. I created a random 10000-row square matrix with density 1.86. Using the best algorithm over GF(2), which turns out to be just Gaussian elimination for this example, the determinant (characteristic polynomial) is computed in 18 seconds using 18 megabytes, on a three-year-old iMac. For larger densities, at some point the algorithm I mentioned above will be better.
Correction: apologies for a mistake in the above: Gaussian elimination is effective only for densities < 1.01. However, the other method works well as far as I tried it, up to density 2.0, taking about 100 minutes and only about 68 meg of RAM.
http://mathoverflow.net/questions/39885?sort=newest
## What spaces have well known horofunctions?
Following Gromov, take a metric space $(X,d)$ and consider $C(X)/\mathbb{R}$, the set of continuous functions to $\mathbb{R}$ with the topology of uniform convergence on compact sets, after taking the quotient by constant functions (i.e. two functions are equivalent if their difference is a constant). Embed $X$ into this space by means of the map:
$x \mapsto f(x) = [d(x,\cdot)]$
A horofunction is an element of the closure of $f(X)$ that is not in $f(X)$.
In $\mathbb{R}^d$ all horofunctions are given by inner product with a unit vector.
In the upper-half plane model of the hyperbolic disc the function $h(z) = \log(\text{Im}(z))$ is a horofunction and all others can be obtained by composing with hyperbolic isometries.
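To see where this example comes from, one can run the definition on the base points $p_t = ie^t$ escaping to the boundary point $\infty$. Using the standard upper-half-plane distance formula, as $t \to \infty$,
$$\cosh d(z, ie^t) = 1 + \frac{|z-ie^t|^2}{2\,\mathrm{Im}(z)\,e^t} \sim \frac{e^t}{2\,\mathrm{Im}(z)},$$
so $d(z, ie^t) = t - \log \mathrm{Im}(z) + o(1)$, and since $d(i, ie^t) = t$,
$$d(z, ie^t) - d(i, ie^t) \longrightarrow -\log \mathrm{Im}(z).$$
Up to the sign convention chosen for the embedding, this is the horofunction $\pm\log\mathrm{Im}(z)$ attached to the boundary point $\infty$.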
What other spaces have well known horofunctions?
For example, are horofunctions of the model geometries $\text{Nil}$, $\text{Sol}$, and $\widetilde{\text{Sl}(2,R)}$ in dimension $3$ known relatively explicitly?
I know there is a general relationship between horofunctions and geodesics (Busemann functions) in non-positively curved spaces. However, I'm interested in spaces where one can compute relatively explicitly (e.g. spaces where one knows what the distance function looks like).
Just to supplement your posting, the horofunction definition (first?) appears in Gromov's 1980 paper, "Hyperbolic manifolds, groups and actions." ihes.fr/~gromov/PDF/hyperbolic_manifold.pdf – Joseph O'Rourke Sep 24 2010 at 18:49
Hi Pablo! Just to see if I've got something.. If X is compact there are no horofunctions, are there? – Michele Triestino Sep 24 2010 at 18:54
@Michele: You're correct, no horofunctions if the space is compact (the idea is that adding the horofunction one obtains a nice compactification of $X$ by "directions to infinity"). – Pablo Lessa Sep 24 2010 at 19:16
and yet another silly question: does the same conclusion of my previous comment hold for discrete spaces (I'm thinking of graphs, trees actually)? – Michele Triestino Sep 24 2010 at 19:17
Understanding Busemann functions involves understanding rays, which I think is a difficult task even for surfaces of revolution in $\mathbb R^3$. – Igor Belegradek Sep 24 2010 at 23:25
## 3 Answers
Of course, the first example should be non-compact Riemannian symmetric spaces, where the Busemann functions (horofunctions in your terminology) are known in pretty explicit form. I don't think there are other explicit examples.
More comments.
1) Geodesic rays always converge in this Busemann-Gromov compactification, and it has nothing to do with curvature.
2) I never understood why the definition of this compactification is formulated in terms of uniform convergence on compact sets instead of plain pointwise convergence (anyway, for Lipschitz functions the result is the same). Actually, this is a particular case of the general Constantinescu-Cornea compactification (see the book of Brelot, "On Topologies and Boundaries in Potential Theory"), other examples of which are the Martin compactification in potential theory or the Thurston compactification in Teichmüller theory.
ADDED REFERENCES
In what concerns Busemann or horofunctions (I prefer to call them Busemann cocycles, because in invariant language they are cocycles, not functions) for symmetric spaces, there are several sources of various degree of explicitness.
(1) For the group $SL(d,\mathbb R)$ (and the associated symmetric space) the Busemann cocycle essentially appears in Furstenberg's formula for the rate of growth of random matrix products, see
MR0163345 (29 #648) Furstenberg, Harry Noncommuting random products. Trans. Amer. Math. Soc. 108 1963 377--428.
From the geometrical point of view the main idea there is that if you want to find the linear rate of growth of $d(o,g_1 g_2\dots g_n o)$, where $o$ is a reference point, and $g_i$ is a stationary sequence of random isometries, then you can look at the increment $d(o,g_1 g_2\dots g_n o)-d(o,g_2 g_3\dots g_n o)=d(g_1^{-1}o,g_2g_3\dots g_n o)-d(o,g_2 g_3\dots g_n o)$, which converges to the Busemann cocycle $\beta_\gamma(o,g^{-1}o)$ provided $g_2g_3\dots g_n o$ converges to a boundary point $\gamma$ in the Busemann compactification. The cocycle itself looks, roughly speaking, like $\log \|gv\|/\|v\|$, where $g$ is the matrix (or its exterior power) representing a point in the symmetric space, and $v$ is a vector representing the boundary point. There is also a lot about it in later papers by Guivarc'h.
(2) Busemann cocycles naturally appear in various works related to compactifications of symmetric spaces. Historically the first source is the monograph of Karpelevich
MR0231321 (37 #6876) Karpelevic, F. I. The geometry of geodesics and the eigenfunctions of the Beltrami-Laplace operator on symmetric spaces. Trudy Moskov. Mat. Obšc. 14 48--185 (Russian); translated as Trans. Moscow Math. Soc. 1965 1967 pp. 51--199. Amer. Math. Soc., Providence, R.I., 1967.
where he explicitly discusses pencils of convergent geodesics and introduces the associated horospheric coordinates. Later expositions are in two books on compactifications of symmetric spaces:
MR1633171 (2000c:31006) Guivarc'h, Yves(F-RENNB-IM); Ji, Lizhen(1-MI); Taylor, J. C.(3-MGL) Compactifications of symmetric spaces. (English summary) Progress in Mathematics, 156. Birkhäuser Boston, Inc., Boston, MA, 1998. xiv+284 pp. ISBN: 0-8176-3899-7
and
MR2189882 (2007d:22030) Borel, Armand(1-IASP); Ji, Lizhen(1-MI) Compactifications of symmetric and locally symmetric spaces. Mathematics: Theory \& Applications. Birkhäuser Boston, Inc., Boston, MA, 2006. xvi+479 pp. ISBN: 978-0-8176-3247-2; 0-8176-3247-6
Great! Thank you. Do you have some reference for Busemann functions of non-compact Riemannian symmetric spaces? – Pablo Lessa Sep 25 2010 at 9:53
One possible place to look (other than symmetric spaces) is the warped product of hyperbolic metrics, a metric of the form $dr^2+e^{C_1r} dx^2 + e^{C_2r}dy^2$, $C_1, C_2 >0$. Then $r$ is a horofunction. If one could compute the horofunction for another point in the Gromov boundary, then, since the isometry group acts transitively on these other points of the Gromov boundary, one would have a computation for all points. However, I imagine computing the other horofunctions could be tricky, since I don't know how to compute the geodesics explicitly.
Some remarks, not an answer. It seems that from [J. Cheeger and D. Gromoll, The splitting theorem for manifolds of nonnegative Ricci curvature, J. Diff. Geom. 6 (1971) 119-128] it follows that horofunctions in Nil, Sol and $\widetilde{\mathrm{Sl}(2,\mathbb{R})}$ are super-harmonic? (Ricci curvature is non-positive?) I know that the space of horofunctions (Busemann functions) coincides with the Gromov ideal boundary (the space of rays) when the sectional curvature is non-positive. (For non-negative curvature these ideal boundaries might be different. For the Heisenberg group $\mathrm{Heis}^{2n+1}$ with a left-invariant metric, the Gromov ideal boundary is the sphere $S^{2n-1}$ with the Carnot-Carathéodory metric, and the geodesic equations are known - but not horofunctions :( )
Thanks! I'll look into this. Any reference for the facts about the Heisenberg group? – Pablo Lessa Sep 24 2010 at 20:04
valeri probably meant to say "Ricci curvature is nonnegative" else the splitting theorem does not apply, but more to the point the homogeneous metrics on Sol, Nil, $SL(2,\mathbb R)$ do not have nonnegative Ricci curvature. Indeed, compact manifolds of nonnegative Ricci curvature have virtually abelian fundamental group, which is not true for compact manifolds modeled on Sol, Nil, $SL(2,\mathbb R)$. – Igor Belegradek Sep 24 2010 at 21:05
to Pablo: there are some papers by Kaplan (if I remember correctly) and by Marenich in Geom Dedicata 66:2 with calculations. To Igor: of course, Cheeger Gromoll for non-negative, but what I mean is that for non-positive their arguments will give super-harmonic, is this wrong? – valeri Sep 24 2010 at 21:28
@valeri, I haven't thought enough about nonpositive Ricci curvature, but I do not see why what you assert is true. The proof of subharmonicity (of Busemann functions on manifolds of nonnegative Ricci curvature) that I know hinges on Laplacian comparison. To prove the latter one uses the Cauchy-Schwarz inequality to get a lower bound on the Hessian in terms of the Laplacian. Then the Weitzenböck formula and the lower bound on Ricci finish the proof. If you have nonpositive Ricci, the Cauchy-Schwarz goes the wrong way and this argument does not work (I think). How do you make it work? – Igor Belegradek Sep 24 2010 at 23:20
@Igor - "haven't thought enough about nonpositive Ricci curvature" - me too :). My guess was that in Hadamard manifolds horofunctions are smooth, so Laplacian comparison with the flat tangent space (where horofunctions are linear) via the exponential map will do. Probably not ... I cannot state something right now. – valeri Sep 25 2010 at 8:21
http://physics.stackexchange.com/questions/tagged/coulombs-law+dimensions
# Tagged Questions
### Why are so many forces explainable using inverse squares when space is three dimensional?
It seems paradoxical that the strength of so many phenomena (Newtonian gravity, Coulomb force) is calculable by the inverse square of distance. However, since volume is determined by three ...
### Electric potential due to a point charge in Gaussian/CGS units
I learned electrostatics in SI units. In SI, the electrostatic potential due to a point charge $q$ located at $\textbf{r}$ is given by $\Phi(\textbf{r}) = \frac{q}{4 \pi \epsilon_0 |\textbf{r}|}$. ...
### Does Coulomb's Law, with Gauss's Law, imply the existence of only three spatial dimensions?
Coulomb's Law states that the fall-off of the strength of the electrostatic force is inversely proportional to the distance squared of the charges. Gauss's law implies that the total flux through a ...
http://physics.stackexchange.com/questions/27335/matrix-geometry-for-f-strings?answertab=votes
# Matrix geometry for F-strings
A stack of N D-branes has the strange property that the transverse D-brane coordinates are matrix-valued. When the matrices commute, they can be interpreted as ordinary coordinates for N indistinguishable objects. But in general they correspond to something classical geometry doesn't describe. In particular this leads to a non-perturbative description of string theory on asymptotically Minkowski spacetime: Matrix Theory.
S-Duality exchanges F-strings and D1-strings. This means this odd "matrix geometry" should happen for F-strings as well. The question is, how can we see it directly, without invoking S-duality?
I apologize for posting additional questions before carefully reading and replying to the discussion around the answers to my previous questions. This is not out of disrespect to the effort put into writing these answers, for which I am very grateful. This is merely because I suspect the site will be closed in 2 days for which reason I'm shooting all the questions I got. – Squark Dec 11 '11 at 21:44
## 1 Answer
Matrix string theory
http://arxiv.org/abs/hep-th/9701025
http://arxiv.org/abs/hep-th/9702187
http://arxiv.org/abs/hep-th/9703030
is indeed an exact description of fundamental type IIA strings (and similarly $E_8\times E_8$ heterotic strings) at any (e.g. weak) coupling where you can explicitly see the off-diagonal degrees of freedom. You could say that this description was obtained by dualities from the low-energy dynamics of D1-branes and you would be right. However, when properly interpreted etc., it's a description of fundamental strings, too.
The reason why we normally (outside matrix string theory) don't see the off-diagonal degrees of freedom is that these off-diagonal degrees of freedom sit in their ground state for generic quantum states. For D1-branes, which are heavy, you may imagine a stack of several D1-branes which are located at the same point (along the same curve, to be more precise), which subsequently guarantees that the open strings connecting 2 different D1-branes – the off-diagonal modes – are light.
However, if the objects we want to connect are fundamental strings, which are light, the uncertainty principle guarantees that they will not be sitting in a fixed position determined with accuracy better than $L_{\rm string}$, which is why the description of the perturbations in terms of off-diagonal open strings is impossible.
The asymmetry is particularly obvious in type IIB string theory. Two different D1-branes may be connected by light F1-strings. By S-duality, F1-strings may also be connected by D1-branes. However, D1-branes are heavy and F1-strings' separation is at least a string length. So the mass of the D1-branes connecting two different F1-strings, or two different points of an F1-string, will be much greater than the string mass. So there's no systematic description of physics that would consistently incorporate such massive degrees of freedom: there are many more additional degrees of freedom that are lighter and that should be incorporated before the D1-branes connecting the F1-strings.
http://mathoverflow.net/revisions/84557/list
# Is there any literature about inner-replacement rule?
Hello all,
If you have a theorem $\vdash \alpha \rightarrow \beta$ and a theorem $\vdash \gamma$, and $\alpha$ is a sub-expression of $\gamma$ occurring under an even number of negations within $\gamma$ (and not within a xor or a boolean equality), then a new theorem can be obtained by replacing $\alpha$ with $\beta$ in $\vdash \gamma$.
Similarly, if $\beta$ is a sub-expression of $\gamma$ occurring under an odd number of negations, then a new theorem can be obtained by replacing $\beta$ with $\alpha$.
It is not so difficult to see that this is true, but does this rule have a name? And is there any literature about it? If you introduce this rule, you can probably skip some axioms. Also, it is a more general form of modus ponens.
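To make the rule concrete, here is a minimal sketch of it in Python. Everything here is illustrative: the tuple encoding of formulas and the function name are my own, not taken from any existing prover, and I additionally count the antecedent of an implication as one extra negation, which matches the usual notion of polarity.

```python
# Sketch of the inner-replacement rule: replace alpha by beta at every
# position of gamma that occurs under an even number of negations.
def replace_positive(gamma, alpha, beta, positive=True):
    if gamma == alpha and positive:
        return beta
    if isinstance(gamma, tuple):
        op = gamma[0]
        if op == 'not':
            return ('not', replace_positive(gamma[1], alpha, beta, not positive))
        if op == 'implies':
            # the antecedent counts as one more negation, the consequent does not
            return ('implies',
                    replace_positive(gamma[1], alpha, beta, not positive),
                    replace_positive(gamma[2], alpha, beta, positive))
        if op in ('and', 'or'):
            return (op,
                    replace_positive(gamma[1], alpha, beta, positive),
                    replace_positive(gamma[2], alpha, beta, positive))
        return gamma  # xor / boolean equality: leave occurrences untouched
    return gamma

# From |- a -> b and |- (not not a) or c, infer |- (not not b) or c:
gamma = ('or', ('not', ('not', 'a')), 'c')
print(replace_positive(gamma, 'a', 'b'))  # ('or', ('not', ('not', 'b')), 'c')
```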
The background of this question is that one of my interests is how to do real mathematics in a formal logic. My opinion is that both sides should take steps to come closer to each other: logicians should make logics that are more practical to use, and mathematicians should write proofs that are easier to formalize. I think the above rule is quite natural for mathematicians, and it can be verified by computer. At least when I am doing mathematics, I am well aware of whether a certain sub-expression is in a position where it can be weakened or strengthened, and I think that holds for all mathematicians.
With this rule you don't need the low-level proving with shifting assumptions in front of the $\vdash$ sign in order to weaken or strengthen a sub-expression. This low-level shifting is a practice that in general cannot be found in books of mathematics that do not deal with logic.
Regards,
Lucas
http://mathhelpforum.com/advanced-algebra/158772-find-basis-subspace-s-r3-spanned.html
# Thread:
1. ## Find a basis for the subspace S of R3 spanned by....
Hi, I am struggling with the ideas of basis and dimension.
Find a basis for the subspace S of R3 spanned by
{ v1 = (1,2,2), v2 = (3,2,1), v3 = (11,10,7), v4 = (7,6,4) }.
What is dim S?
2. A basis is a spanning set that is also independent. You are given that this subspace is spanned by these four vectors so a basis would be a subset of that. Start with any one of them, say, v1.
v2 is not a multiple of v1 so {v1, v2} are independent.
Now look at v3. Is v3 independent of v1 and v2, or can it be written as a linear combination of them? That is the same as asking "can we have $a v_1 + b v_2 + c v_3 = 0$ where a, b, c are not all 0?".
Since $R^3$ itself has dimension 3, a basis cannot contain more than 3 vectors.
3. OK, so I got
v3 = 2(v1) + 3(v2)
v4 = v1 + 2(v2)
so the basis is {v1, v2} and the dimension is 2?
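If you want to check this kind of computation by machine, here is a quick sketch using sympy (assuming it is installed; the vectors are placed as rows, so the row space is S):

```python
from sympy import Matrix

# rows are v1, v2, v3, v4; their row space is the subspace S
A = Matrix([[1, 2, 2], [3, 2, 1], [11, 10, 7], [7, 6, 4]])

print(A.rank())  # 2, so dim S = 2
print(A.rref())  # the reduced form has two nonzero rows, which span S
```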
http://physics.stackexchange.com/questions/46316/harmonic-oscillator-and-lorentz-symmetry/46317
# Harmonic oscillator and Lorentz symmetry
There is an analogy between the harmonic oscillator $x=\frac{1}{\sqrt{2\omega}}(a+a^\dagger)$ and the quantum field $\phi=\int \frac{d^3p}{(2\pi)^3}\frac{1}{\sqrt{2\omega_p}}(a_p e^{ipx}+a_p^\dagger e^{-ipx})$, which is used to quantize the field operator.
However, one thing confuses me: the coefficient $\frac{1}{\sqrt{2\omega}}$. For the field operator this comes from Lorentz invariance, since $\frac{d^3p}{2\omega_p}$ is what remains of the invariant measure after integrating out the time component of the momentum. However, for the harmonic oscillator there seems to be no apparent Lorentz symmetry to give me this. Is there any hidden symmetry behind the harmonic oscillator?
## 2 Answers
The normalization in front of $a$ and $a^\dagger$ is totally arbitrary until you specify what their commutation relation is. Fixing which multiple of the identity $[a,a^\dagger]$ equals fixes the proper normalization to be used (i.e. the one that agrees with the canonical commutation relation $[q,p]=i$ among conjugate variables). This is true in QM as much as in QFT (which is in fact a QM theory). The factor $1/\sqrt{2\omega(p)}$ is not coming from Lorentz invariance (Lorentz says only that $\omega(p)=\sqrt{p^2+m^2}$) but rather from your choice of normalization of states such as $a^\dagger(p)|0\rangle$. Some people like, for instance, to impose a Lorentz-preserving normalization; some don't. It's a free choice with no physical content. You could have an arbitrary function in front of $a$ and $a^\dagger$ and still end up with the same Hamiltonian just by picking the proper $[a,a^\dagger]$ normalization.
For the standard harmonic oscillator ($m \equiv 1$) with
$$\tag{1} H = \frac{1}{2}\left(p^2 + \omega^2 x^2 \right)$$
you need this factor to get it into the ‘nicer’ form using the ladder operators:
$$\tag{2} H = \omega \left( a^\dagger a + \frac{1}{2} \right) \quad.$$
That is, if you substitute
$$x = \frac{1}{\sqrt{2\omega}} \left( a + a^\dagger \right) \quad ; \quad p = i \sqrt{\frac{\omega}{2}} \left( a - a^\dagger \right)$$
into the original equation (1), you get
$$H = \frac{\omega}{2} \left( a^\dagger a + a a^\dagger \right)$$
which, after normal ordering by making use of
$$[ a , a^\dagger ] = 1 \Leftrightarrow [ x , p ] = i \hbar$$
is equivalent to the second equation (2).
To answer your question: the factor of $\frac{1}{\sqrt{2\omega}}$ arises not from Lorentz invariance but from the frequency $\omega$ of the harmonic oscillator, which sets the energy spacing between adjacent levels.
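As a sanity check, the substitution can be carried out symbolically. A small sketch with sympy's noncommutative symbols (units with $\hbar = 1$; I am fairly sure `subs` handles this simple noncommutative pattern, but treat it as illustrative):

```python
from sympy import symbols, sqrt, I, expand, Rational

w = symbols('omega', positive=True)
a, ad = symbols('a adag', commutative=False)  # ladder operators a, a^dagger

x = (a + ad) / sqrt(2 * w)
p = I * sqrt(w / 2) * (a - ad)

H = expand(Rational(1, 2) * (p**2 + w**2 * x**2))
print(H)  # omega/2 * (a*adag + adag*a): the a^2 and adag^2 terms cancel

# impose [a, adag] = 1, i.e. rewrite a*adag as adag*a + 1 (normal ordering)
print(expand(H.subs(a * ad, ad * a + 1)))  # omega*adag*a + omega/2
```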
I am truly confused what the question is actually asking and in what sense this answer is trying to answer it. The right dependence of the frequency of the QFT oscillators surely is dictated by the Lorentz invariance because it must be the frequency associated with the energy of one excitation in a given mode and the energy for a given mode is calculated via $E=\sqrt{p^2+m_0^2}$ from Lorentz inv. A simple single harmonic oscillator has no Lorentz symmetry but $\omega$ is its free parameter and this simple "building" block appears in QFT and elsewhere, with one value of $\omega$ or another. – Luboš Motl Dec 9 '12 at 6:34
What's the problem or question here? It's hard to see. The Solar System has an elliptical orbit of the Earth with periodicity 1 year which is related to seasons, calendars, and other things. Kepler's problem predicts elliptical orbits with some parameters but it knows nothing about the Mayan or Jewish calendar. Is that a problem? Why should it? In some contexts, parameters may be determined by some input/assumptions, in others by other inputs, in more general ones, they can't be determined at all. Different situations have different assumptions which lead to different detailed consequences. – Luboš Motl Dec 9 '12 at 6:35
@LubošMotl I was trying to show that the factor of $\frac{1}{\sqrt{2\omega}}$ in $x$ is not a result of implied Lorentz invariance but a result of us wanting to diagonalise the original HO Hamiltonian. You are right that it is, in this sense, a free parameter of the original Hamiltonian. – Claudius Dec 9 '12 at 11:04
Dear Claudius, the physically invariant, correct statement you made is that $\hbar\omega$ is the spacing between adjacent levels. But what you call $x,p,H$ depends on conventions. For example, you could rescale $x,p$ by factors $K,L$ and change the coefficients of $x^2$ and $p^2$ in the Hamiltonian $H$ correspondingly so that the physics (spectrum of $H$ etc.) doesn't change at all. – Luboš Motl Dec 10 '12 at 8:21
http://www.reference.com/browse/be+advised
# Axiom of choice
In mathematics, the axiom of choice, or AC, is an axiom of set theory. Informally put, the axiom of choice says that given any collection of bins, each containing at least one object, it is possible to make a selection of exactly one object from each bin, even if there are infinitely many bins and there is no "rule" for which object to pick from each. The axiom of choice is not required if the number of bins is finite or if such a selection "rule" is available.
The axiom of choice was formulated in 1904 by Ernst Zermelo. Although originally controversial, it is now used without reservation by most mathematicians. One motivation for this use is that a number of important mathematical results, such as Tychonoff's theorem, require the axiom of choice for their proofs. Contemporary set theorists also study axioms that are not compatible with the axiom of choice, such as the axiom of determinacy. Unlike the axiom of choice, these alternatives are not ordinarily proposed as axioms for mathematics, but only as principles in set theory with interesting consequences.
## Statement
A choice function is a function f, defined on a collection X of nonempty sets, such that for every set s in X, f(s) is an element of s. With this concept, the axiom can be stated:
For any set of non-empty sets, X, there exists a choice function f defined on X.
Thus the negation of the axiom of choice states that there exists a set of nonempty sets which has no choice function.
Each choice function on a collection X of nonempty sets can be viewed as (or identified with) an element of the Cartesian product of the sets in X. This leads to an equivalent statement of the axiom of choice:
An arbitrary Cartesian product of non-empty sets is non-empty.
### Variants
There are many other equivalent statements of the axiom of choice. These are equivalent in the sense that, in the presence of other basic axioms of set theory, they imply the axiom of choice and are implied by it.
One variation avoids the use of choice functions by, in effect, replacing each choice function with its range.
Given any set X of pairwise disjoint non-empty sets, there exists at least one set C that contains exactly one element in common with each of the sets in X.
Another equivalent axiom only considers collections X that are essentially powersets of other sets:
For any set A, the power set of A (minus the empty set) has a choice function.
Authors who use this formulation often speak of the choice function on A, but be advised that this is a slightly different notion of choice function. Its domain is the powerset of A (minus the empty set), and so makes sense for any set A, whereas with the definition used elsewhere in this article, the domain of a choice function on a collection of sets is that collection, and so only makes sense for sets of sets. With this alternate notion of choice function, the axiom of choice can be compactly stated as
Every set has a choice function.
which is equivalent to
For any set A there is a function f such that for any non-empty subset B of A, f(B) lies in B.
The negation of the axiom can thus be expressed as:
There is a set A such that for all functions f (on the set of non-empty subsets of A), there is a B such that f(B) does not lie in B.
## Usage
Until the late 19th century, the axiom of choice was often used implicitly, although it had not yet been formally stated. For example, after having established that the set X contains only non-empty sets, a mathematician might have said "let F(s) be one of the members of s for all s in X." In general, it is impossible to prove that F exists without the axiom of choice, but this seems to have gone unnoticed until Zermelo.
Not every situation requires the axiom of choice. For finite sets X, the axiom of choice follows from the other axioms of set theory. In that case it is equivalent to saying that if we have several (a finite number of) boxes, each containing at least one item, then we can choose exactly one item from each box. Clearly we can do this: We start at the first box, choose an item; go to the second box, choose an item; and so on. The number of boxes is finite, so eventually our choice procedure comes to an end. The result is an explicit choice function: a function that takes the first box to the first element we chose, the second box to the second element we chose, and so on. (A formal proof for all finite sets would use the principle of mathematical induction.)
For certain infinite sets X, it is also possible to avoid the axiom of choice. For example, suppose that the elements of X are sets of natural numbers. Every nonempty set of natural numbers has a smallest element, so to specify our choice function we can simply say that it maps each set to the least element of that set. This gives us a definite choice of an element from each set and we can write down an explicit expression that tells us what value our choice function takes. Any time it is possible to specify such an explicit choice, the axiom of choice is unnecessary.
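To make the contrast concrete, the least-element rule is exactly the kind of choice function one can write down explicitly. A small illustrative sketch (the family X is made up, and of course a finite family needs no axiom at all):

```python
# An explicit choice function for nonempty sets of natural numbers:
# map each set to its least element. No appeal to choice is needed,
# because the selection rule itself is given explicitly.
X = [{3, 5, 8}, {1, 4}, {7}]

def f(s):
    return min(s)  # "choose the least element of s"

print([f(s) for s in X])  # [3, 1, 7]
```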
The difficulty appears when there is no natural choice of elements from each set. If we cannot make explicit choices, how do we know that our set exists? For example, suppose that X is the set of all non-empty subsets of the real numbers. First we might try to proceed as if X were finite. If we try to choose an element from each set, then, because X is infinite, our choice procedure will never come to an end, and consequently, we will never be able to produce a choice function for all of X. So that won't work. Next we might try the trick of specifying the least element from each set. But some subsets of the real numbers don't have least elements. For example, the open interval (0,1) does not have a least element: If x is in (0,1), then so is x/2, and x/2 is always strictly smaller than x. So taking least elements doesn't work, either.
The reason that we are able to choose least elements from subsets of the natural numbers is the fact that the natural numbers come pre-equipped with a well-ordering: Every subset of the natural numbers has a unique least element under the natural ordering. Perhaps if we were clever we might say, "Even though the usual ordering of the real numbers does not work, it may be possible to find a different ordering of the real numbers which is a well-ordering. Then our choice function can choose the least element of every set under our unusual ordering." The problem then becomes that of constructing a well-ordering, which turns out to require the axiom of choice for its existence; every set can be well-ordered if and only if the axiom of choice is true.
## Nonconstructive aspects
A proof requiring the axiom of choice is, in one meaning of the word, nonconstructive: even though the proof establishes the existence of an object, it may be impossible to define the object in the language of set theory. For example, while the axiom of choice implies that there is a well-ordering of the real numbers, there are models of set theory with the axiom of choice in which no well-ordering of the reals is definable. As another example, a subset of the real numbers that is not Lebesgue measurable can be proven to exist using the axiom of choice, but it is consistent that no such set is definable.
Some mathematicians dislike the axiom of choice because it produces these intangibles. Because there is no canonical well-ordering of any set, a construction that relies on a well-ordering may not produce a canonical result, even if a canonical result is desired (as is often the case in category theory). The small community of mathematical constructivists posit that all existence proofs should be totally explicit; it should be possible to construct, in an explicit and canonical manner, anything that is proven to exist. They reject the full axiom of choice because it asserts the existence of an object without uniquely determining its structure. In fact the Diaconescu–Goodman–Myhill theorem shows how to derive the constructively unacceptable law of the excluded middle, or a restricted form of it, in constructive set theory from the assumption of the axiom of choice.
Another argument against the axiom of choice is that it implies the existence of counterintuitive objects. One example of this is the Banach–Tarski paradox which says that it is possible to decompose ("carve up") the 3-dimensional solid unit ball into finitely many pieces and, using only rotations and translations, reassemble the pieces into two solid balls each with the same volume as the original. The pieces in this decomposition, constructed using the axiom of choice, are extremely complicated.
Despite these arguments, the majority of mathematicians accept the axiom of choice as a valid principle for proving new results in mathematics.
It is possible to prove many theorems using neither the axiom of choice nor its negation; this is common in constructive mathematics. Such statements will be true in any model of Zermelo–Fraenkel set theory (ZF), regardless of the truth or falsity of the axiom of choice in that particular model. The restriction to ZF renders any claim that relies on either the axiom of choice or its negation unprovable. For example, the Banach–Tarski paradox is neither provable nor disprovable from ZF alone: it is impossible to construct the required decomposition of the unit ball in ZF, but also impossible to prove there is no such decomposition. Similarly, all the statements listed below which require choice or some weaker version thereof for their proof are unprovable in ZF, but since each is provable in ZF plus the axiom of choice, there are models of ZF in which each statement is true. Statements such as the Banach–Tarski paradox can be rephrased as conditional statements, for example, "If AC holds, the decomposition in the Banach–Tarski paradox exists." Such conditional statements are provable in ZF when the original statements are provable from ZF and the axiom of choice.
## Independence
By work of Kurt Gödel and Paul Cohen, the axiom of choice is logically independent of the other axioms of Zermelo–Fraenkel set theory (ZF). This means that neither it nor its negation can be proven to be true in ZF, if ZF is consistent. Consequently, if ZF is consistent, then ZFC is consistent and ZF¬C is also consistent. So the decision whether or not it is appropriate to make use of the axiom of choice in a proof cannot be made by appeal to other axioms of set theory. The decision must be made on other grounds.
One argument given in favor of using the axiom of choice is that it is convenient: using it cannot hurt (cannot result in contradiction) and makes it possible to prove some propositions that otherwise could not be proved. Many theorems which are provable using choice are of an elegant general character: every ideal in a ring is contained in a maximal ideal, every vector space has a basis, and every product of compact spaces is compact. Without the axiom of choice, these theorems may not hold for mathematical objects of large cardinality.
The proof of the independence result also shows that a wide class of mathematical statements, including all statements that can be phrased in the language of Peano arithmetic, are provable in ZF if and only if they are provable in ZFC. Statements in this class include the statement that P = NP, the Riemann hypothesis, and many other unsolved mathematical problems. When one attempts to solve problems in this class, it makes no difference whether ZF or ZFC is employed if the only question is the existence of a proof. It is possible, however, that there is a shorter proof of a theorem from ZFC than from ZF.
The axiom of choice is not the only significant statement which is independent of ZF. For example, the generalized continuum hypothesis (GCH) is not only independent of ZF, but also independent of ZF plus the axiom of choice (ZFC). However, ZF plus GCH implies AC, making GCH a strictly stronger claim than AC, even though they are both independent of ZF.
## Stronger axioms
The axiom of constructibility and the generalized continuum hypothesis both imply the axiom of choice, but are strictly stronger than it.
In class theories such as Von Neumann–Bernays–Gödel set theory and Morse–Kelley set theory, there is a possible axiom called the axiom of global choice which is stronger than the axiom of choice for sets because it also applies to proper classes. And the axiom of global choice follows from the axiom of limitation of size.
## Equivalents
There are a remarkable number of important statements that, assuming the axioms of ZF but neither AC nor ¬AC, are equivalent to the axiom of choice. The most important among them are Zorn's lemma and the well-ordering theorem. In fact, Zermelo initially introduced the axiom of choice in order to formalize his proof of the well-ordering principle.
• Set theory
• Well-ordering theorem: Every set can be well-ordered. Consequently, every cardinal has an initial ordinal.
• If the set A is infinite, then A and A×A have the same cardinality.
• Trichotomy: If two sets are given, then either they have the same cardinality, or one has a smaller cardinality than the other.
• The Cartesian product of any nonempty family of nonempty sets is nonempty.
• König's theorem: Colloquially, the sum of a sequence of cardinals is strictly less than the product of a sequence of larger cardinals. (The reason for the term "colloquially" is that the sum or product of a "sequence" of cardinals cannot be defined without some aspect of the axiom of choice.)
• Every surjective function has a right inverse.
• Order theory
• Zorn's lemma: Every non-empty partially ordered set in which every chain (i.e. totally ordered subset) has an upper bound contains at least one maximal element.
• Hausdorff maximal principle: In any partially ordered set, every totally ordered subset is contained in a maximal totally ordered subset.
• Restricted Hausdorff maximal principle: In any partially ordered set there exists a maximal totally ordered subset.
• Algebra
• Every vector space has a basis.
• Every unital ring other than the trivial ring contains a maximal ideal.
• General topology
• Tychonoff's theorem stating that every product of compact topological spaces is compact.
• In the product topology, the closure of a product of subsets is equal to the product of the closures.
• Any product of complete uniform spaces is complete.
### Category theory
There are several results in category theory which invoke the axiom of choice for their proof. These results might be weaker than, equivalent to, or stronger than the axiom of choice, depending on the strength of the technical foundations. For example, if one defines categories in terms of sets, that is, as sets of objects and morphisms (usually called a small category), or even locally small categories, whose hom-objects are sets, then there is no category of all sets, and so it is difficult for a category-theoretic formulation to apply to all sets. On the other hand, other foundational descriptions of category theory are considerably stronger, and an identical category-theoretic statement of choice may be stronger than the standard formulation, à la class theory, mentioned above.
Examples of category-theoretic statements which require choice include:
• Every small category has a skeleton.
• If two small categories are weakly equivalent, then they are equivalent.
• Every continuous functor on a small-complete category which satisfies the appropriate solution set condition has a left-adjoint (the Freyd adjoint functor theorem).
## Weaker forms
There are several weaker statements that are not equivalent to the axiom of choice, but are closely related. One example is the axiom of countable choice (ACω or CC), which states that a choice function exists for any countable collection X of nonempty sets; another is the stronger axiom of dependent choice (DC). These axioms are sufficient for many proofs in elementary mathematical analysis, and are consistent with some principles, such as the Lebesgue measurability of all sets of reals, that are disprovable from the axiom of choice.
Other choice axioms weaker than axiom of choice include the Boolean prime ideal theorem and the axiom of uniformization.
### Results requiring AC (or weaker forms) but weaker than it
One of the most interesting aspects of the axiom of choice is the large number of places in mathematics that it shows up. Here are some statements that require the axiom of choice in the sense that they are not provable from ZF but are provable from ZFC (ZF plus AC). Equivalently, these statements are true in all models of ZFC but false in some models of ZF.
• Set theory
• Any union of countably many countable sets is itself countable.
• If the set A is infinite, then there exists an injection from the natural numbers N to A (see Dedekind infinite).
• Every infinite game $G_S$ in which $S$ is a Borel subset of Baire space is determined.
• Measure theory
• The Vitali theorem on the existence of non-measurable sets which states that there is a subset of the real numbers that is not Lebesgue measurable.
• The Hausdorff paradox.
• The Banach–Tarski paradox.
• The Lebesgue measure of a countable disjoint union of measurable sets is equal to the sum of the measures of the individual sets.
• Algebra
• Every field has an algebraic closure.
• Every field extension has a transcendence basis.
• Stone's representation theorem for Boolean algebras needs the Boolean prime ideal theorem.
• The Nielsen-Schreier theorem, that every subgroup of a free group is free.
• The additive groups of R and C are isomorphic.
• Functional analysis
• The Hahn-Banach theorem in functional analysis, allowing the extension of linear functionals
• The theorem that every Hilbert space has an orthonormal basis.
• The Banach-Alaoglu theorem about compactness of sets of functionals.
• The Baire category theorem about complete metric spaces, and its consequences, such as the open mapping theorem and the closed graph theorem.
• On every infinite-dimensional topological vector space there is a discontinuous linear map.
• General topology
• A uniform space is compact if and only if it is complete and totally bounded.
• Every Tychonoff space has a Stone–Čech compactification.
## Stronger forms of ¬AC
Now, consider stronger forms of the negation of AC. For example, if we abbreviate by BP the claim that every set of real numbers has the property of Baire, then BP is stronger than ¬AC, which asserts the nonexistence of any choice function on perhaps only a single set of nonempty sets. Note that strengthened negations may be compatible with weakened forms of AC. For example, ZF + DC + BP is consistent, if ZF is.
It is also consistent with ZF + DC that every set of reals is Lebesgue measurable; however, this consistency result, due to Robert M. Solovay, cannot be proved in ZFC itself, but requires a mild large cardinal assumption (the existence of an inaccessible cardinal). The much stronger axiom of determinacy, or AD, implies that every set of reals is Lebesgue measurable, has the property of Baire, and has the perfect set property (all three of these results are refuted by AC itself). ZF + DC + AD is consistent provided that a sufficiently strong large cardinal axiom is consistent (the existence of infinitely many Woodin cardinals).
## Results requiring ¬AC
There are models of Zermelo-Fraenkel set theory in which the axiom of choice is false. We will abbreviate "Zermelo-Fraenkel set theory plus the negation of the axiom of choice" by ZF¬C. For certain models of ZF¬C, it is possible to prove the negation of some standard facts. Note that any model of ZF¬C is also a model of ZF, so for each of the following statements, there exists a model of ZF in which that statement is true.
• There exists a model of ZF¬C in which there is a function f from the real numbers to the real numbers such that f is not continuous at a, but for any sequence $\{x_n\}$ converging to a, $\lim_{n\to\infty} f(x_n)=f(a)$.
• There exists a model of ZF¬C in which the real numbers are a countable union of countable sets.
• There exists a model of ZF¬C in which there is a field with no algebraic closure.
• In all models of ZF¬C there is a vector space with no basis.
• There exists a model of ZF¬C in which there is a vector space with two bases of different cardinalities.
For proofs, see Thomas Jech, The Axiom of Choice, American Elsevier Pub. Co., New York, 1973.
• There exists a model of ZF¬C in which every set in Rn is measurable. Thus it is possible to exclude counterintuitive results like the Banach–Tarski paradox which are provable in ZFC. Furthermore, this is possible whilst assuming the Axiom of dependent choice, which is weaker than AC but sufficient to develop most of real analysis.
• In all models of ZF¬C, the generalized continuum hypothesis does not hold.
## Quotes
"The Axiom of Choice is obviously true, the well-ordering principle obviously false, and who can tell about Zorn's lemma?" — Jerry Bona
This is a joke: although the three are all mathematically equivalent, most mathematicians find the axiom of choice to be intuitive, the well-ordering principle to be counterintuitive, and Zorn's lemma to be too complex for any intuition.
"The Axiom of Choice is necessary to select a set from an infinite number of socks, but not an infinite number of shoes." — Bertrand Russell
The observation here is that one can define a function to select from an infinite number of pairs of shoes by stating for example, to choose the left shoe. Without the axiom of choice, one cannot assert that such a function exists for pairs of socks, because left and right socks are (presumably) indistinguishable from each other.
"The axiom gets its name not because mathematicians prefer it to other axioms." — A. K. Dewdney
This quote comes from the famous April Fools' Day article in the Computer Recreations column of Scientific American, April 1989.
## References
• Herrlich, Horst, Axiom of Choice, Springer Lecture Notes in Mathematics 1876, Springer Verlag Berlin Heidelberg (2006). ISBN 3-540-30989-6.
• Paul Howard and Jean Rubin, "Consequences of the Axiom of Choice". Mathematical Surveys and Monographs 59; American Mathematical Society; 1998.
• Thomas Jech, "About the Axiom of Choice." Handbook of Mathematical Logic, John Barwise, ed., 1977.
• Gregory H Moore, "Zermelo's axiom of choice, Its origins, development and influence", Springer; 1982. ISBN 0-387-90670-3
• Ernst Zermelo, "Untersuchungen über die Grundlagen der Mengenlehre I," Mathematische Annalen 65: (1908) pp. 261-81. PDF download via digizeitschriften.de
Translated in: Jean van Heijenoort, 2002. From Frege to Godel: A Source Book in Mathematical Logic, 1879-1931. New edition. Harvard Univ. Press. ISBN 0-674-32449-8
• 1904. "Proof that every set can be well-ordered," 139-41.
• 1908. "Investigations in the foundations of set theory I," 199-215.
## External links
• Axiom of Choice and Its Equivalents at ProvenMath includes formal statement of the Axiom of Choice, Hausdorff's Maximal Principle, Zorn's Lemma and formal proofs of their equivalence down to the finest detail.
• Consequences of the Axiom of Choice, based on the book by Paul Howard and Jean Rubin.
http://en.wikipedia.org/wiki/Kullback-Leibler_divergence
# Kullback–Leibler divergence
Not to be confused with divergence in calculus.
In probability theory and information theory, the Kullback–Leibler divergence[1][2][3] (also information divergence, information gain, relative entropy, or KLIC) is a non-symmetric measure of the difference between two probability distributions P and Q. Specifically, the Kullback–Leibler divergence of Q from P, denoted DKL(P||Q), is a measure of the information lost when Q is used to approximate P:[4] KL measures the expected number of extra bits required to code samples from P when using a code based on Q, rather than using a code based on P. Typically P represents the "true" distribution of data, observations, or a precisely calculated theoretical distribution. The measure Q typically represents a theory, model, description, or approximation of P.
Although it is often intuited as a metric or distance, the KL divergence is not a true metric — for example, it is not symmetric: the KL from P to Q is generally not the same as the KL from Q to P. However, its infinitesimal form, specifically its Hessian, is a metric tensor: it is the Fisher information metric.
KL divergence is a special case of a broader class of divergences called f-divergences. It was originally introduced by Solomon Kullback and Richard Leibler in 1951 as the directed divergence between two distributions. It can be derived from a Bregman divergence.
## Definition
For discrete probability distributions P and Q, the K–L divergence of Q from P is defined to be
$D_{\mathrm{KL}}(P\|Q) = \sum_i \ln\left(\frac{P(i)}{Q(i)}\right) P(i).\!$
In words, it is the expectation of the logarithmic difference between the probabilities P and Q, where the expectation is taken using the probabilities P. The K-L divergence is only defined if P and Q both sum to 1 and if $Q(i)=0$ implies $P(i)=0$ for all i (absolute continuity). If the quantity $0 \ln 0$ appears in the formula, it is interpreted as zero because $\lim_{x \to 0} x \ln(x) = 0$.
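For concreteness, the discrete definition transcribes directly into code. A short numpy sketch (the distributions are made up; the $0 \ln 0 = 0$ convention is handled by masking out the terms with $P(i)=0$):

```python
import numpy as np

def kl_divergence(p, q):
    """D_KL(P || Q) in nats; requires q[i] > 0 wherever p[i] > 0."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    mask = p > 0  # terms with p(i) = 0 contribute zero by convention
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

p = [0.5, 0.3, 0.2]
q = [0.4, 0.4, 0.2]
print(kl_divergence(p, q))  # about 0.0253 nats
print(kl_divergence(q, p))  # about 0.0258 nats: note the asymmetry
```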
For distributions P and Q of a continuous random variable, KL-divergence is defined to be the integral:[5]
$D_{\mathrm{KL}}(P\|Q) = \int_{-\infty}^\infty \ln\left(\frac{p(x)}{q(x)}\right) p(x) \, {\rm d}x, \!$
where p and q denote the densities of P and Q.
More generally, if P and Q are probability measures over a set X, and P is absolutely continuous with respect to Q, then the Kullback–Leibler divergence from P to Q is defined as
$D_{\mathrm{KL}}(P\|Q) = \int_X \ln\left(\frac{{\rm d}P}{{\rm d}Q}\right) \,{\rm d}P, \!$
where $\frac{{\rm d}P}{{\rm d}Q}$ is the Radon–Nikodym derivative of P with respect to Q, and provided the expression on the right-hand side exists. Equivalently, this can be written as
$D_{\mathrm{KL}}(P\|Q) = \int_X \ln\left(\frac{{\rm d}P}{{\rm d}Q}\right) \frac{{\rm d}P}{{\rm d}Q} \,{\rm d}Q,$
which we recognize as the entropy of P relative to Q. Continuing in this case, if $\mu$ is any measure on X for which $p = \frac{{\rm d}P}{{\rm d}\mu}$ and $q = \frac{{\rm d}Q}{{\rm d}\mu}$ exist, then the Kullback–Leibler divergence from P to Q is given as
$D_{\mathrm{KL}}(P\|Q) = \int_X p \ln \frac{p}{q} \,{\rm d}\mu. \!$
The logarithms in these formulae are taken to base 2 if information is measured in units of bits, or to base e if information is measured in nats. Most formulas involving the KL divergence hold irrespective of log base.
Various conventions exist for referring to DKL(P||Q) in words. Often it is referred to as the divergence between P and Q; however, this fails to convey the fundamental asymmetry of the relation. Sometimes it is described as the divergence of P from, or with respect to, Q (often in the context of relative entropy, or information gain). In the present article, however, the language used will be the divergence of Q from P, as this best matches the idea that P is the underlying "true" or "best guess" distribution, with respect to which expectations are calculated, while Q is some divergent, less good, approximate distribution.
## Motivation
Figure: Illustration of the Kullback–Leibler (KL) divergence for two normal distributions; the typical asymmetry of the KL divergence is clearly visible.
In information theory, the Kraft–McMillan theorem establishes that any directly decodable coding scheme for coding a message to identify one value xi out of a set of possibilities X can be seen as representing an implicit probability distribution q(xi) = 2−li over X, where li is the length of the code for xi in bits. Therefore, KL divergence can be interpreted as the expected extra message-length per datum that must be communicated if a code that is optimal for a given (wrong) distribution Q is used, compared to using a code based on the true distribution P.
$\begin{matrix} D_{\mathrm{KL}}(P\|Q) & = & -\sum_x p(x) \log q(x)& + & \sum_x p(x) \log p(x) \\ & = & H(P,Q) & - & H(P)\, \! \end{matrix}$
where H(P,Q) is called the cross entropy of P and Q, and H(P) is the entropy of P.
Note also that there is a relation between the Kullback–Leibler divergence and the "rate function" in the theory of large deviations.[6][7]
Kullback brings together all notions of information in his historic text, Information Theory and Statistics. For instance, he shows that the mean discriminating information between two hypotheses is the basis for all of the various measures of information, from Shannon to Fisher. Shannon's rate is the mean information between the hypotheses of dependence and independence of processes. Fisher's information is the second-order term, and the dominant one, in the Taylor approximation of the discriminating information between two models of the same parametric family.[8]
## Computing the closed form
For many common families of distributions, the KL-divergence between two distributions in the family can be derived in closed form. This can often be done most easily using the form of the KL-divergence in terms of expected values or in terms of information entropy:
$\begin{align} D_{\mathrm{KL}}(P\|Q) & = - \operatorname{E}(\ln q(x)) + \operatorname{E}(\ln p(x)) \\ & = H(P,Q) - H(P) \end{align}$
where $H(P) = -\operatorname{E}(\ln p(x))$ is the information entropy of $P$, and $H(P,Q)$ is the cross entropy of $P$ and $Q$.
## Properties
The Kullback–Leibler divergence is always non-negative,
$D_{\mathrm{KL}}(P\|Q) \geq 0, \,$
a result known as Gibbs' inequality, with DKL(P||Q) zero if and only if P = Q almost everywhere. The entropy H(P) thus sets a minimum value for the cross-entropy H(P,Q), the expected number of bits required when using a code based on Q rather than P; and the KL divergence therefore represents the expected number of extra bits that must be transmitted to identify a value x drawn from X, if a code is used corresponding to the probability distribution Q, rather than the "true" distribution P.
The Kullback–Leibler divergence remains well-defined for continuous distributions, and furthermore is invariant under parameter transformations. For example, if a transformation is made from variable x to variable y(x), then, since P(x)dx=P(y)dy and Q(x)dx=Q(y)dy the Kullback–Leibler divergence may be rewritten:
$D_{\mathrm{KL}}(P\|Q)= \int_{x_a}^{x_b}P(x)\log\left(\frac{P(x)}{Q(x)}\right)\,dx= \int_{y_a}^{y_b}P(y)\log\left(\frac{P(y)dy/dx}{Q(y)dy/dx}\right)\,dy= \int_{y_a}^{y_b}P(y)\log\left(\frac{P(y)}{Q(y)}\right)\,dy$
where $y_a=y(x_a)$ and $y_b=y(x_b)$. Although it was assumed that the transformation was continuous, this need not be the case. This also shows that the Kullback–Leibler divergence produces a dimensionally consistent quantity, since if x is a dimensioned variable, P(x) and Q(x) are also dimensioned, since e.g. P(x)dx is dimensionless. The argument of the logarithmic term is and remains dimensionless, as it must. It can therefore be seen as in some ways a more fundamental quantity than some other properties in information theory[9] (such as self-information or Shannon entropy), which can become undefined or negative for non-discrete probabilities.
The Kullback–Leibler divergence is additive for independent distributions in much the same way as Shannon entropy. If $P_1, P_2$ are independent distributions, with the joint distribution $P(x,y) = P_1(x)P_2(y)$, and $Q, Q_1, Q_2$ likewise, then
$D_{\mathrm{KL}}(P \| Q) = D_{\mathrm{KL}}(P_1 \| Q_1) + D_{\mathrm{KL}}(P_2 \| Q_2)$.
## KL divergence for Normal Distributions
The Kullback–Leibler divergence between two multivariate normal distributions of the dimension $k$ with the means $\mu_0, \mu_1$ and their corresponding nonsingular covariance matrices $\Sigma_0, \Sigma_1$ is:
$D_\text{KL}(\mathcal{N}_0 \| \mathcal{N}_1) = { 1 \over 2 } \left( \mathrm{tr} \left( \Sigma_1^{-1} \Sigma_0 \right) + \left( \mu_1 - \mu_0\right)^\top \Sigma_1^{-1} ( \mu_1 - \mu_0 ) - k - \ln \left( { \det \Sigma_0 \over \det \Sigma_1 } \right) \right).$[10]
The logarithm in the last term must be taken to base e since all terms apart from the last are base-e logarithms of expressions that are either factors of the density function or otherwise arise naturally. The equation therefore gives a result measured in nats. Dividing the entire expression above by loge 2 yields the divergence in bits.
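A direct transcription of this closed form (an illustrative numpy sketch; for ill-conditioned covariances one would prefer `slogdet` and a linear solve instead of an explicit inverse):

```python
import numpy as np

def kl_mvn(mu0, S0, mu1, S1):
    """KL(N(mu0,S0) || N(mu1,S1)) in nats, for nonsingular covariances."""
    k = len(mu0)
    S1_inv = np.linalg.inv(S1)
    d = mu1 - mu0
    tr_term = np.trace(S1_inv @ S0)
    quad_term = d @ S1_inv @ d
    log_det_ratio = np.log(np.linalg.det(S0) / np.linalg.det(S1))
    return 0.5 * (tr_term + quad_term - k - log_det_ratio)

mu0, S0 = np.zeros(2), np.eye(2)
mu1, S1 = np.array([1.0, 0.0]), 2 * np.eye(2)
print(kl_mvn(mu0, S0, mu1, S1))  # 0.5*(1 + 0.5 - 2 + ln 4), about 0.443
```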
## Relation to metrics
One might be tempted to call it a "distance metric" on the space of probability distributions, but this would not be correct as the Kullback–Leibler divergence is not symmetric – that is, $D_{\mathrm{KL}}(P\|Q) \neq D_{\mathrm{KL}}(Q\|P)$, – nor does it satisfy the triangle inequality. Still, being a premetric, it generates a topology on the space of generalized probability distributions, of which probability distributions proper are a special case. More concretely, if $\{P_1,P_2,\cdots\}$ is a sequence of distributions such that
$\lim_{n \rightarrow \infty} D_{\mathrm{KL}}(P_n\|Q) = 0$
then it is said that $P_n \xrightarrow{D} Q$. Pinsker's inequality entails that $P_n \xrightarrow{\mathrm{D}} P \Rightarrow P_n \xrightarrow{\mathrm{TV}} P$, where the latter stands for the usual convergence in total variation.
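Pinsker's inequality bounds the total variation distance by $\sqrt{D_{\mathrm{KL}}(P\|Q)/2}$ (in nats), which is easy to spot-check numerically. A small sketch with made-up distributions:

```python
import numpy as np

p = np.array([0.5, 0.3, 0.2])
q = np.array([0.4, 0.4, 0.2])

tv = 0.5 * np.abs(p - q).sum()  # total variation distance, here 0.1
kl = np.sum(p * np.log(p / q))  # D_KL(P || Q) in nats

print(tv, np.sqrt(kl / 2))      # 0.1 <= 0.112..., as Pinsker requires
```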
Following Rényi (1970, 1961)[11][12] the term is sometimes also called the information gain about X achieved if P can be used instead of Q. It is also called the relative entropy, for using Q instead of P.
### Fisher information metric
However, the Kullback–Leibler divergence is rather directly related to a metric, specifically, the Fisher information metric. This can be made explicit as follows. Assume that the probability distributions P and Q are both parameterized by some (possibly multi-dimensional) parameter $\theta$. Consider then two nearby values $P = P(\theta)$ and $Q = P(\theta_0)$, so that the parameter $\theta$ differs by only a small amount from the parameter value $\theta_0$. Specifically, up to first order one has (using the Einstein summation convention)
$P(\theta) = P(\theta_0) + \Delta\theta^jP_j(\theta_0) + ...$
with $\Delta\theta^j = (\theta - \theta_0)^j$ a small change of $\theta$ in the j direction, and $P_{j}(\theta_0) = \frac{\partial P}{\partial \theta^j}(\theta_0)$ the corresponding rate of change in the probability distribution. Since the KL divergence has an absolute minimum 0 for P = Q, i.e. $\theta = \theta_0$, it changes only to second order in the small parameters $\Delta\theta^j$. More formally, as for any minimum, the first derivatives of the divergence vanish
$\left.\frac{\partial}{\partial \theta^j}\right|_{\theta = \theta_0} D_{KL}(P(\theta) || P(\theta_0)) = 0,$
and by the Taylor expansion one has up to second order
$D_{\mathrm{KL}}(P(\theta)||P(\theta_0)) = \frac{1}{2}\Delta\theta^j\Delta\theta^k g_{jk}(\theta_0) + ...$
where the Hessian matrix of the divergence
$g_{jk}(\theta_0) = \left.\frac{\partial^2}{\partial \theta^j\partial \theta^k}\right|_{\theta = \theta_0} D_{KL}(P(\theta)||P(\theta_0))$
must be positive semidefinite. Letting $\theta_0$ vary (and dropping the subindex 0) the Hessian $g_{jk}(\theta)$ defines a (possibly degenerate) Riemannian metric on the $\theta$ parameter space, called the Fisher information metric.
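For a one-parameter illustration, take the Bernoulli family, whose Fisher information is $1/(\theta(1-\theta))$. The following sketch estimates the Hessian of the divergence at $\theta_0$ by a second difference and compares it with this formula:

```python
import numpy as np

def kl_bernoulli(t, t0):
    """D_KL(Bernoulli(t) || Bernoulli(t0)) in nats."""
    return t * np.log(t / t0) + (1 - t) * np.log((1 - t) / (1 - t0))

t0, h = 0.3, 1e-4
# D_KL vanishes at theta0 along with its first derivative, so the second
# difference (D(+h) + D(-h)) / h^2 approximates the Hessian g(theta0).
hessian = (kl_bernoulli(t0 + h, t0) + kl_bernoulli(t0 - h, t0)) / h**2
print(hessian, 1 / (t0 * (1 - t0)))  # both approximately 4.7619
```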
## Relation to other quantities of information theory
Many of the other quantities of information theory can be interpreted as applications of the KL divergence to specific cases.
The self-information,
$I(m) = D_{\mathrm{KL}}(\delta_{im} \| \{ p_i \}),$
is the KL divergence of the probability distribution P(i) from a Kronecker delta representing certainty that i=m — i.e. the number of extra bits that must be transmitted to identify i if only the probability distribution P(i) is available to the receiver, not the fact that i=m.
The mutual information,
$\begin{align}I(X;Y) & = D_{\mathrm{KL}}(P(X,Y) \| P(X)P(Y) ) \\ & = \mathbb{E}_X \{D_{\mathrm{KL}}(P(Y|X) \| P(Y) ) \} \\ & = \mathbb{E}_Y \{D_{\mathrm{KL}}(P(X|Y) \| P(X) ) \}\end{align}$
is the KL divergence of the product P(X)P(Y) of the two marginal probability distributions from the joint probability distribution P(X,Y) — i.e. the expected number of extra bits that must be transmitted to identify X and Y if they are coded using only their marginal distributions instead of the joint distribution. Equivalently, if the joint probability P(X,Y) is known, it is the expected number of extra bits that must on average be sent to identify Y if the value of X is not already known to the receiver.
The Shannon entropy,
$\begin{align}H(X) & = \mathrm{(i)} \, \mathbb{E}_x \{I(x)\} \\ & = \mathrm{(ii)} \log N - D_{\mathrm{KL}}(P(X) \| P_U(X) )\end{align}$
is the number of bits which would have to be transmitted to identify X from N equally likely possibilities, less the KL divergence of the uniform distribution PU(X) from the true distribution P(X) — i.e. less the expected number of bits saved, which would have had to be sent if the value of X were coded according to the uniform distribution PU(X) rather than the true distribution P(X).
The conditional entropy,
$\begin{align}H(X|Y) & = \log N - D_{\mathrm{KL}}(P(X,Y) \| P_U(X) P(Y) ) \\ & = \mathrm{(i)} \,\, \log N - D_{\mathrm{KL}}(P(X,Y) \| P(X) P(Y) ) - D_{\mathrm{KL}}(P(X) \| P_U(X)) \\ & = H(X) - I(X;Y) \\ & = \mathrm{(ii)} \, \log N - \mathbb{E}_Y \{ D_{\mathrm{KL}}(P(X|Y) \| P_U(X)) \}\end{align}$
is the number of bits which would have to be transmitted to identify X from N equally likely possibilities, less the KL divergence of the product distribution PU(X) P(Y) from the true joint distribution P(X,Y) — i.e. less the expected number of bits saved which would have had to be sent if the value of X were coded according to the uniform distribution PU(X) rather than the conditional distribution P(X|Y) of X given Y.
The cross entropy between two probability distributions measures the average number of bits needed to identify an event from a set of possibilities, if a coding scheme is used based on a given probability distribution $q$, rather than the "true" distribution $p$. The cross entropy for two distributions $p$ and $q$ over the same probability space is thus defined as follows:
$\mathrm{H}(p, q) = \mathrm{E}_p[-\log q] = \mathrm{H}(p) + D_{\mathrm{KL}}(p \| q).\!$
## KL divergence and Bayesian updating
In Bayesian statistics the KL divergence can be used as a measure of the information gain in moving from a prior distribution to a posterior distribution. If some new fact Y = y is discovered, it can be used to update the probability distribution for X from p(x|I) to a new posterior probability distribution p(x|y,I) using Bayes' theorem:
$p(x|y,I) = \frac{p(y|x,I) p(x|I)}{p(y|I)}$
This distribution has a new entropy
$H\big( p(\cdot|y,I) \big) = -\sum_x p(x|y,I) \log p(x|y,I),$
which may be less than or greater than the original entropy H(p(·|I)). However, from the standpoint of the new probability distribution one can estimate that to have used the original code based on p(x|I) instead of a new code based on p(x|y,I) would have added an expected number of bits
$D_{\mathrm{KL}}\big(p(\cdot|y,I) \big\|p(\cdot|I) \big) = \sum_x p(x|y,I) \log \frac{p(x|y,I)}{p(x|I)}$
to the message length. This therefore represents the amount of useful information, or information gain, about X, that we can estimate has been learned by discovering Y = y.
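A toy discrete example of this information gain (the prior and likelihood are made up for illustration): two hypotheses about a coin, fair versus biased with P(heads) = 0.8, updated on a single observed head.

```python
import numpy as np

prior = np.array([0.5, 0.5])    # P(fair), P(biased)
p_heads = np.array([0.5, 0.8])  # likelihood of 'heads' under each hypothesis

# Bayes' theorem after observing heads
posterior = p_heads * prior
posterior /= posterior.sum()    # about [0.385, 0.615]

gain = np.sum(posterior * np.log(posterior / prior))
print(posterior, gain)          # information gain of about 0.027 nats
```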
If a further piece of data, Y2 = y2, subsequently comes in, the probability distribution for x can be updated further, to give a new best guess p(x|y1,y2,I). If one reinvestigates the information gain for using p(x|y1,I) rather than p(x|I), it turns out that it may be either greater or less than previously estimated:
$\sum_x p(x|y_1,y_2,I) \log \frac{p(x|y_1,y_2,I)}{p(x|I)}$ may be less than or greater than $\sum_x p(x|y_1,I) \log \frac{p(x|y_1,I)}{p(x|I)}$
and so the combined information gain does not obey the triangle inequality:
$D_{\mathrm{KL}} \big( p(\cdot|y_1,y_2,I) \big\| p(\cdot|I) \big)$ may be less than, equal to, or greater than $D_{\mathrm{KL}} \big( p(\cdot|y_1,y_2,I)\big\| p(\cdot|y_1,I) \big) + D_{\mathrm{KL}} \big( p(\cdot |y_1,I) \big\| p(\cdot|I) \big)$
All one can say is that on average, averaging using p(y2|y1,x,I), the two sides will average out.
### Bayesian experimental design
A common goal in Bayesian experimental design is to maximise the expected KL divergence between the prior and the posterior.[13] When posteriors are approximated to be Gaussian distributions, a design maximising the expected KL divergence is called Bayes d-optimal.
## Discrimination information
The Kullback–Leibler divergence $D_{\mathrm{KL}}( p(x|H_1) \| p(x|H_0) )$ can also be interpreted as the expected discrimination information for $H_1$ over $H_0$: the mean information per sample for discriminating in favor of a hypothesis $H_1$ against a hypothesis $H_0$, when hypothesis $H_1$ is true.[14] Another name for this quantity, given to it by I.J. Good, is the expected weight of evidence for $H_1$ over $H_0$ to be expected from each sample.
The expected weight of evidence for $H_1$ over $H_0$ is not the same as the information gain expected per sample about the probability distribution $p(H)$ of the hypotheses,
$D_{\mathrm{KL}}( p(x|H_1) \| p(x|H_0) ) \neq IG = D_{\mathrm{KL}}( p(H|x) \| p(H|I) ).$
Either of the two quantities can be used as a utility function in Bayesian experimental design, to choose an optimal next question to investigate: but they will in general lead to rather different experimental strategies.
On the entropy scale of information gain there is very little difference between near certainty and absolute certainty—coding according to a near certainty requires hardly any more bits than coding according to an absolute certainty. On the other hand, on the logit scale implied by weight of evidence, the difference between the two is enormous – infinite perhaps; this might reflect the difference between being almost sure (on a probabilistic level) that, say, the Riemann hypothesis is correct, compared to being certain that it is correct because one has a mathematical proof. These two different scales of loss function for uncertainty are both useful, according to how well each reflects the particular circumstances of the problem in question.
### Principle of minimum discrimination information
The idea of Kullback–Leibler divergence as discrimination information led Kullback to propose the Principle of Minimum Discrimination Information (MDI): given new facts, a new distribution $f$ should be chosen which is as hard to discriminate from the original distribution $f_0$ as possible, so that the new data produces as small an information gain $D_{\mathrm{KL}}( f \| f_0 )$ as possible.
For example, if one had a prior distribution p(x,a) over x and a, and subsequently learnt the true distribution of a was u(a), the Kullback–Leibler divergence between the new joint distribution for x and a, q(x|a) u(a), and the earlier prior distribution would be:
$D_\mathrm{KL}(q(x|a)u(a)||p(x,a)) = \mathbb{E}_{u(a)}\{D_\mathrm{KL}(q(x|a)||p(x|a))\} + D_\mathrm{KL}(u(a)||p(a)),$
i.e. the sum of the KL divergence of $p(a)$, the prior distribution for $a$, from the updated distribution $u(a)$, plus the expected value (using the probability distribution $u(a)$) of the KL divergence of the prior conditional distribution $p(x|a)$ from the new conditional distribution $q(x|a)$. (Note that the latter expected value is often called the conditional KL divergence (or conditional relative entropy) and denoted by $D_{\mathrm{KL}}(q(x|a)\|p(x|a))$.[15]) This is minimised if $q(x|a) = p(x|a)$ over the whole support of $u(a)$; and we note that this result incorporates Bayes' theorem, if the new distribution $u(a)$ is in fact a δ function representing certainty that $a$ has one particular value.
MDI can be seen as an extension of Laplace's Principle of Insufficient Reason, and the Principle of Maximum Entropy of E.T. Jaynes. In particular, it is the natural extension of the principle of maximum entropy from discrete to continuous distributions, for which Shannon entropy ceases to be so useful (see differential entropy), but the KL divergence continues to be just as relevant.
In the engineering literature, MDI is sometimes called the Principle of Minimum Cross-Entropy (MCE) or Minxent for short. Minimising the KL divergence of m from p with respect to m is equivalent to minimizing the cross-entropy of p and m, since
$H(p,m) = H(p) + D_{\mathrm{KL}}(p\|m),$
which is appropriate if one is trying to choose an adequate approximation to p. However, this is just as often not the task one is trying to achieve. Instead, just as often it is m that is some fixed prior reference measure, and p that one is attempting to optimise by minimising DKL(p||m) subject to some constraint. This has led to some ambiguity in the literature, with some authors attempting to resolve the inconsistency by redefining cross-entropy to be DKL(p||m), rather than H(p,m).
## Relationship to available work
*Figure: pressure versus volume plot of available work from a mole of argon gas relative to ambient, calculated as $T_0$ times the KL divergence.*
Surprisals[16] add where probabilities multiply. The surprisal for an event of probability $p$ is defined as $s \equiv k \ln[1/p]$. If $k$ is $\{1,\ 1/\ln 2,\ 1.38\times 10^{-23}\}$ then surprisal is in {nats, bits, or J/K}, so that, for instance, there are $N$ bits of surprisal for landing all "heads" on a toss of $N$ coins.
Best-guess states (e.g. for atoms in a gas) are inferred by maximizing the average surprisal $S$ (entropy) for a given set of control parameters (like pressure $P$ or volume $V$). This constrained entropy maximization, both classically[17] and quantum mechanically,[18] minimizes Gibbs availability in entropy units[19] $A \equiv -k \ln Z$ where $Z$ is a constrained multiplicity or partition function.
When temperature $T$ is fixed, free energy ($T \times A$) is also minimized. Thus if $T$, $V$ and number of molecules $N$ are constant, the Helmholtz free energy $F \equiv U - TS$ (where $U$ is energy) is minimized as a system "equilibrates." If $T$ and $P$ are held constant (say during processes in your body), the Gibbs free energy $G \equiv U + PV - TS$ is minimized instead. The change in free energy under these conditions is a measure of available work that might be done in the process. Thus available work for an ideal gas at constant temperature $T_0$ and pressure $P_0$ is $W = \Delta G = NkT_0\,\Theta[V/V_0]$ where $V_0 = NkT_0/P_0$ and $\Theta[x] \equiv x - 1 - \ln x \ge 0$ (see also Gibbs inequality).
More generally[20] the work available relative to some ambient is obtained by multiplying ambient temperature $T_0$ by the KL divergence or net surprisal $\Delta I \ge 0$, defined as the average value of $k\ln[p/p_0]$ where $p_0$ is the probability of a given state under ambient conditions. For instance, the work available in equilibrating a monatomic ideal gas to ambient values of $V_0$ and $T_0$ is $W = T_0 \Delta I$, where the KL divergence is $\Delta I = Nk(\Theta[V/V_0] + \tfrac{3}{2}\Theta[T/T_0])$. The resulting contours of constant KL divergence (plotted in the figure above for a mole of argon at standard temperature and pressure) for example put limits on the conversion of hot to cold, as in flame-powered air-conditioning or in the unpowered device to convert boiling water to ice water discussed here.[21] Thus KL divergence measures thermodynamic availability in bits.
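A small Python sketch of this net-surprisal formula (the ambient temperature and the ratios below are example values of my own choosing):

```python
import numpy as np

Theta = lambda x: x - 1 - np.log(x)      # Theta[x] >= 0, with equality at x = 1
k_B, N_A, T0 = 1.380649e-23, 6.022e23, 298.0   # J/K, 1/mol, assumed ambient K

# net surprisal Delta I = N k (Theta[V/V0] + 3/2 Theta[T/T0]) for one mole,
# here for a gas at twice the ambient temperature and the ambient volume
dI = N_A * k_B * (Theta(1.0) + 1.5 * Theta(2.0))   # in J/K
W  = T0 * dI                                       # available work, in joules
print(W, dI / (k_B * np.log(2)), "bits")           # ~1.1 kJ, ~4e23 bits
```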
## Quantum information theory
For density matrices P and Q on a Hilbert space the K–L divergence (or relative entropy as it is often called in this case) from P to Q is defined to be
$D_{\mathrm{KL}}(P\|Q) = \operatorname{Tr}(P( \log(P) - \log(Q))). \!$
In quantum information science the minimum of $D_{\mathrm{KL}}(P\|Q)$ over all separable states Q can also be used as a measure of entanglement in the state P.
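As a numerical sketch of this trace formula (the two density matrices below are arbitrary full-rank single-qubit examples, so the matrix logarithms are well defined):

```python
import numpy as np
from scipy.linalg import logm

P = np.array([[0.7, 0.2],
              [0.2, 0.3]])        # a valid density matrix: trace 1, positive definite
Q = np.eye(2) / 2                 # the maximally mixed state

D = np.trace(P @ (logm(P) - logm(Q))).real   # Tr(P(log P - log Q)), in nats
print(D)
```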
## Relationship between models and reality
Just as KL-divergence of "ambient from actual" measures thermodynamic availability, KL-divergence of "model from reality" is also useful even if the only clues we have about reality are some experimental measurements. In the former case KL-divergence describes distance to equilibrium or (when multiplied by ambient temperature) the amount of available work, while in the latter case it tells you about surprises that reality has up its sleeve or, in other words, how much the model has yet to learn.
Although this tool for evaluating models against systems that are accessible experimentally may be applied in any field, its application to models in ecology via the Akaike information criterion is particularly well described in papers[22] and a book[23] by Burnham and Anderson. In a nutshell the KL divergence of a model from reality may be estimated, to within a constant additive term, by a function (such as a sum of squared deviations) of the deviations observed between the data and the model's predictions. Estimates of such divergence for models that share the same additive term can in turn be used to choose between models.
When trying to fit parametrized models to data there are various estimators which attempt to minimize Kullback–Leibler divergence, such as maximum likelihood and maximum spacing estimators.
## Symmetrised divergence
Kullback and Leibler themselves actually defined the divergence as:
$D_{\mathrm{KL}}(P\|Q) + D_{\mathrm{KL}}(Q\|P)\, \!$
which is symmetric and nonnegative. This quantity has sometimes been used for feature selection in classification problems, where P and Q are the conditional pdfs of a feature under two different classes.
An alternative is given via the λ divergence,
$D_{\lambda}(P\|Q) = \lambda D_{\mathrm{KL}}(P\|\lambda P + (1-\lambda)Q) + (1-\lambda) D_{\mathrm{KL}}(Q\|\lambda P + (1-\lambda)Q),\, \!$
which can be interpreted as the expected information gain about X from discovering which probability distribution X is drawn from, P or Q, if they currently have probabilities λ and (1 − λ) respectively.
The value λ = 0.5 gives the Jensen–Shannon divergence, defined by
$D_{\mathrm{JS}} = \tfrac{1}{2} D_{\mathrm{KL}} \left (P \| M \right ) + \tfrac{1}{2} D_{\mathrm{KL}}\left (Q \| M \right )\, \!$
where M is the average of the two distributions,
$M = \tfrac{1}{2}(P+Q). \,$
DJS can also be interpreted as the capacity of a noisy information channel with two inputs giving the output distributions $p$ and $q$. The Jensen–Shannon divergence, like all f-divergences, is locally proportional to the Fisher information metric. It is similar to the Hellinger metric (in the sense that it induces the same affine connection on a statistical manifold), and equal to one-half the so-called Jeffreys divergence (Rubner et al., 2000; Jeffreys 1946).
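A short Python sketch of the λ = 0.5 case (the two distributions are arbitrary examples):

```python
import numpy as np

def kl(p, q):
    return np.sum(p * np.log2(p / q))        # bits; assumes q > 0 wherever p > 0

def js(p, q):
    m = 0.5 * (p + q)                        # the average distribution M
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

p = np.array([0.8, 0.1, 0.1])
q = np.array([0.1, 0.1, 0.8])
print(js(p, q))     # symmetric in p and q, and always between 0 and 1 bit
```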
## Relationship to Hellinger distance
If P and Q are two probability measures, then the squared Hellinger distance is the quantity given by
$H^2(P,Q) = \frac{1}{2}\displaystyle \int \left(\sqrt{\frac{{\rm d}P}{{\rm d}\lambda}} - \sqrt{\frac{{\rm d}Q}{{\rm d}\lambda}}\right)^2 {\rm d}\lambda.$
The Kullback–Leibler divergence can be lower bounded in terms of the Hellinger distance[24]
$D_{\mathrm{KL}}(Q\|P) \geq 2 H^2(P,Q).$
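This bound is easy to probe numerically; a sketch with random strictly positive distributions:

```python
import numpy as np

rng = np.random.default_rng(1)
for _ in range(1000):                 # random strictly-positive pmfs on 4 points
    p = rng.dirichlet(np.ones(4))
    q = rng.dirichlet(np.ones(4))
    H2 = 0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2)   # squared Hellinger
    D  = np.sum(q * np.log(q / p))                      # D_KL(Q || P), in nats
    assert D >= 2 * H2
print("D_KL(Q||P) >= 2 H^2(P,Q) held in every trial")
```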
## Other probability-distance measures
Other measures of probability distance are the histogram intersection, Chi-squared statistic, quadratic form distance, match distance, Kolmogorov–Smirnov distance, and earth mover's distance.[25]
## Data differencing
Main article: Data differencing
Just as absolute entropy serves as theoretical background for data compression, relative entropy serves as theoretical background for data differencing – the absolute entropy of a set of data in this sense being the data required to reconstruct it (minimum compressed size), while the relative entropy of a target set of data, given a source set of data, is the data required to reconstruct the target given the source (minimum size of a patch).
## References
1. Kullback, S.; Leibler, R.A. (1951). "On Information and Sufficiency". Annals of Mathematical Statistics 22 (1): 79–86. doi:10.1214/aoms/1177729694. MR 39968.
2. S. Kullback (1959) Information theory and statistics (John Wiley and Sons, NY).
3. Kullback, S. (1987). "Letter to the Editor: The Kullback–Leibler distance". The American Statistician 41 (4): 340–341. JSTOR 2684769.
4. Kenneth P. Burnham, David R. Anderson (2002), Model Selection and Multi-Model Inference: A Practical Information-Theoretic Approach. Springer. (2nd ed), p.51
5. C. Bishop (2006). Pattern Recognition and Machine Learning. p. 55.
6. Sanov, I.N. (1957). "On the probability of large deviations of random magnitudes". Matem. Sbornik 42 (84): 11–44.
7. See the section "differential entropy - 4" in Relative Entropy video lecture by Sergio Verdú NIPS 2009
8. A. Rényi (1970). Probability Theory. New York: Elsevier. Appendix, Sec.4. ISBN 0-486-45867-9.
9. A. Rényi (1961). "On measures of information and entropy". Proceedings of the 4th Berkeley Symposium on Mathematics, Statistics and Probability 1960. pp. 547–561.
10. Chaloner, K.; Verdinelli, I. (1995). "Bayesian Experimental Design: A Review". Statistical Science 10 (3): 273–304. doi:10.1214/ss/1177009939
11. Press, WH; Teukolsky, SA; Vetterling, WT; Flannery, BP (2007). "Section 14.7.2. Kullback-Leibler Distance". Numerical Recipes: The Art of Scientific Computing (3rd ed.). New York: Cambridge University Press. ISBN 978-0-521-88068-8
12. Thomas M. Cover, Joy A. Thomas (1991) Elements of Information Theory (John Wiley and Sons, New York, NY), p.22
13. Myron Tribus (1961) Thermodynamics and thermostatics (D. Van Nostrand, New York)
14. E. T. Jaynes (1957) Information theory and statistical mechanics, Physical Review 106:620
15. E. T. Jaynes (1957) Information theory and statistical mechanics II, Physical Review 108:171
16. J.W. Gibbs (1873) A method of geometrical representation of thermodynamic properties of substances by means of surfaces, reprinted in The Collected Works of J. W. Gibbs, Volume I Thermodynamics, ed. W. R. Longley and R. G. Van Name (New York: Longmans, Green, 1931) footnote page 52.
17. M. Tribus and E. C. McIrvine (1971) Energy and information, Scientific American 224:179–186.
18. P. Fraundorf (2007) Thermal roots of correlation-based complexity, Complexity 13:3, 18–26
19. Kenneth P. Burnham and David R. Anderson (2001) Kullback–Leibler information as a basis for strong inference in ecological studies, Wildlife Research 28:111–119.
20. Burnham, K. P. and Anderson D. R. (2002) Model Selection and Multimodel Inference: A Practical Information-Theoretic Approach, Second Edition (Springer Science, New York) ISBN 978-0-387-95364-9.
21. Rubner, Y., Tomasi, C., and Guibas, L. J., 2000. The Earth Mover's distance as a metric for image retrieval. International Journal of Computer Vision, 40(2): 99–121.
http://mathoverflow.net/questions/58573?sort=votes
## About a Delzant polytope (in particular, the dodecahedron)
Hi. I have a question.
Definition. A Delzant polytope $P$ is a rational convex simple polytope satisfying the smoothness condition. Here, "smooth" means that for each vertex $v$, the primitive integral vectors along the $n$ edges containing $v$ form an element of $SL(n,\mathbb{Z})$, where $n$ is the dimension of $P$.
(If you wonder why this condition is called smooth, see Fulton, Introduction to Toric Varieties, Chapter I.)
My question is as follows.
Can the dodecahedron be a Delzant polytope? I mean, is there a symplectic toric manifold whose moment map image is combinatorially equivalent to a dodecahedron?
Delzant's classification theorem for compact symplectic toric manifolds is certainly very strong. But I think it is very hard to check whether a given polytope (having many faces) is of Delzant type or not. If you know any reference or can give me any comment, I would really appreciate your help.
Thank you.
As far as the classification of toric symplectic manifolds is concerned, the property "having a Delzant polytope of the combinatorial type of the regular dodecahedron" seems unmotivated. Nevertheless, it's a nice question. – André Henriques Mar 15 2011 at 21:46
The obvious first thing to try is a rational pyritohedron, but that fails: at one of the eight vertices with threefold rotational symmetry, the adjacent edges lie along directions which are the even permutations of $(p^2, -pq, q^2)$ for positive coprime integers $p \gt q$, giving $(p^3+q^3)^2$ for a determinant, rather than $1$. – Tracy Hall Mar 15 2011 at 22:50
## 1 Answer
This is just a partial answer giving a solution to a dual problem. I will use the language of toric varieties (http://www3.amherst.edu/~dacox/), but in essence this answer is purely combinatorial. I will construct a smooth 3-dimensional toric variety whose fan has the combinatorial structure of the icosahedron. Unfortunately, I am not sure that this toric variety is projective (Update: David proves in the comment below that this variety is not projective!). If it were projective this would of course give a solution to your question, but I have doubts that it is projective (this should not be hard to check). But even if we cannot get the dodecahedron this way, we will get a collection of Delzant polytopes quite "close" to the dodecahedron in terms of their combinatorial structure.
In terms of combinatorics, the first thing we will do is the following: we will show how to decompose $\mathbb R^3$ into $20$ rational simplicial cones, where each simplicial cone can be sent to the positive octant by a matrix from $SL(3,\mathbb Z)$. There will be $12$ rays in total, and each ray is on the boundary of $5$ simplicial cones (just like for the icosahedron).
Construction. First we need to choose an integral lattice $N$ in $\mathbb R^3$; it will be an index-two sublattice of $\mathbb Z^3$: $(a,b,c)\in N$ if $a,b,c\in \mathbb Z$ and $a+b+c\in 2\mathbb Z$. Next we specify the $12$ points in $N$ that lie on the $12$ rays of our fan. These are: $(\pm 1, \pm 1,0)$, $(\pm 1, 0, \pm 1)$, $(0,\pm 1, \pm 1)$. In fact these points are the vertices of the cuboctahedron (http://en.wikipedia.org/wiki/Cuboctahedron). The cuboctahedron has $8$ triangular faces and $6$ square faces, and to finish the construction we cut each square face by a diagonal into two triangles. This is done as follows: the faces $z=\pm 1$ are cut into two by the plane $x=0$, the faces $y=\pm 1$ by $z=0$, and the faces $x=\pm 1$ by $y=0$. Now it is not hard to see that the obtained triangulation of the cuboctahedron gives us a decomposition of $\mathbb R^3$ into $20$ standard simplicial cones (just notice that the triples of vectors $((1,1,0), (0,1,1), (1,0,1))$ and $((0,\pm 1,1), (1,0,1))$ form integral bases of $N$).
Now we can ask the question: have we constructed the fan of a projective variety? One can answer this question (but I don't do it here; update: David proved in the comment that this example is not projective). First of all, if we don't cut the square faces of the cuboctahedron by diagonals, we have the fan of a singular projective variety with $6$ ordinary double singularities (i.e. singularities given locally by $x^2+y^2+z^2+t^2=0$). A moment polytope of this variety is dual to the cuboctahedron and is called the rhombic dodecahedron (http://en.wikipedia.org/wiki/Rhombic_dodecahedron). This is not a Delzant polytope because it has $8$ bad vertices. Now, this polytope has $12$ faces, and in order to make it Delzant we should just generically perturb the faces (by replacing them by nearby parallel planes). Any such generic perturbation will give us a Delzant polytope. One just needs to check whether among these polytopes there will be the dodecahedron... In other words, each perturbation corresponds to a symplectic structure on a small resolution of the singular variety, but it is not clear we can get the resolution corresponding to the choices of diagonals in the squares that we made.
Sadly, you lose. Recall that a fan is the fan of a projective toric variety if and only if it supports a convex piecewise linear function, which is linear on each cone and strictly convex whenever two cones meet. Suppose that f is such a function. Then the convexity condition gives $f(0,1,1)+f(0,−1,1)<f(1,0,1)+f(-1,0,1)$, and five similar conditions coming from the other squares. Add all six inequalities together, and you get that a sum of twelve terms is less than the same sum of twelve terms, a contradiction. – David Speyer Apr 13 2011 at 20:32
Nice approach though! I definitely hope that some variant of this would work. – David Speyer Apr 13 2011 at 20:32
David, thanks a lot for this comment!! (I was suspecting that something like this will happen, since the picture is too symmetric...) – Dmitri Apr 13 2011 at 20:37
http://unapologetic.wordpress.com/2007/10/10/monoidal-structures-on-span-2-categories/
# The Unapologetic Mathematician
## Monoidal Structures on Span 2-Categories
Now we want to take our 2-categories of spans and add some 2-categorical analogue of a monoidal structure on it.
Here’s what we need:
• An object $\mathbf{1}$ called the unit object.
• For objects $A_1$ and $A_2$, an object $A_1\otimes A_2$.
• For an object $A$ and a 1-morphism $f:B\rightarrow C$, 1-morphisms $A\otimes f:A\otimes B\rightarrow A\otimes C$ and $f\otimes A:B\otimes A\rightarrow C\otimes A$.
• For an object $A$ and a 2-morphism $\alpha:f\Rightarrow g$, 2-morphisms $A\otimes\alpha:A\otimes f\Rightarrow A\otimes g$ and $\alpha\otimes A:f\otimes A\Rightarrow g\otimes A$.
• For 1-morphisms $f:A\rightarrow A'$ and $g:B\rightarrow B'$, a 2-morphism $\bigotimes_{f,g}:(f\otimes B')\circ(A\otimes g)\Rightarrow(A'\otimes g)\circ(f\otimes B)$ called the “tensorator”.
Notice that instead of defining the tensor product as a functor, we define its action on a single object and a single 1-morphism (in either order). Then if we have two 1-morphisms we have two ways of doing first one on one side of the tensor product, then the other on the other side. To say that $\underline{\hphantom{X}}\otimes\underline{\hphantom{X}}$ is a functor would say that these two are equal, but we want to weaken this to say that there is some 2-morphism from one to the other.
Now let’s assume that we’ve got a regular monoidal structure on our category $\mathcal{C}$, and further that this monoidal structure preserves the pullbacks we’re assuming exist in $\mathcal{C}$. That is, if $D_1$ is a pullback of the diagram $A_1\rightarrow C_1\leftarrow B_1$ and $D_2$ is a pullback of the diagram $A_2\rightarrow C_2\leftarrow B_2$, then $D_1\otimes D_2$ will be a pullback of the diagram $A_1\otimes A_2\rightarrow C_1\otimes C_2\leftarrow B_1\otimes B_2$.
So what does this mean for $\mathbf{Span}(\mathcal{C})$? Well, the monoidal structure on $\mathcal{C}$ gives us a unit object $\mathbf{1}$ and monoidal product objects $A\otimes B$. If we have a span $B\stackrel{f_1}{\leftarrow}X\stackrel{f_2}{\rightarrow}C$ and an object $A$, we can form the spans $A\otimes B\stackrel{1_A\otimes f_1}{\leftarrow}A\otimes X\stackrel{1_A\otimes f_2}{\rightarrow}A\otimes C$ and $B\otimes A\stackrel{f_1\otimes 1_A}{\leftarrow}X\otimes A\stackrel{f_2\otimes 1_A}{\rightarrow}C\otimes A$. If we have spans $B\stackrel{f_1}{\leftarrow}X\stackrel{f_2}{\rightarrow}C$ and $B\stackrel{g_1}{\leftarrow}Y\stackrel{g_2}{\rightarrow}C$ and an arrow $\alpha:X\rightarrow Y$ with $f_1=\alpha\circ g_1$ and $f_2=\alpha\circ g_2$, then the arrow $1_A\otimes\alpha:A\otimes X\rightarrow A\otimes Y$ satisfies $1_A\otimes f_1=(1_A\otimes\alpha)\circ(1_A\otimes g_1)$ and $1_A\otimes f_2=(1_A\otimes\alpha)\circ(1_A\otimes g_2)$, and similarly the arrow $\alpha\otimes1_A:X\otimes A\rightarrow Y\otimes A$ satisfies $f_1\otimes1_A=(\alpha\otimes1_A)\circ(g_1\otimes1_A)$ and $f_2\otimes1_A=(\alpha\otimes1_A)\circ(g_2\otimes1_A)$. And so we have our monoidal products of objects with 1- and 2-morphisms.
When we take spans $f=A\stackrel{f_1}{\leftarrow}A''\stackrel{f_2}{\rightarrow}A'$ and $g=B\stackrel{g_1}{\leftarrow}B''\stackrel{g_2}{\rightarrow}B'$, we can form the following two composite spans:
where we use the assumption that the monoidal product preserves pullbacks to show that the squares in these diagrams are indeed pullback squares.
As we’ve drawn them, these two spans are the same. However, remember that the pullback in $\mathcal{C}$ is only defined up to isomorphism. That is, when we define the pullback as a functor, we choose some isomorphism class of cones, and these diagrams say that the pullbacks we’ve drawn are isomorphic to those defined by the pullback functor. But that means that whatever the “real” pullbacks $C_1$ and $C_2$ are, they’re both isomorphic to $A''\otimes B''$, and that those isomorphisms play nicely with the other arrows we’ve drawn. And so there will be some isomorphism $\bigotimes_{f,g}:C_1\rightarrow C_2$ between the “real” pullbacks that make the required triangles commute, giving us our tensorator.
Therefore what we have shown is this: Given a monoidal category $\mathcal{C}$ with pullbacks such that the monoidal structure preserves those pullbacks, we get the data for the structure of a (weak) monoidal 2-category on $\mathbf{Span}(\mathcal{C})$. Dually, we can show that given a monoidal category $\mathcal{C}$ with pushouts, such that the monoidal structure preserves them, we get the data for a monoidal 2-category $\mathbf{CoSpan}(\mathcal{C})$.
[UPDATE]: In my hurry to get to my second class, I overstated myself. I should have said that we have the data of the monoidal structure. The next post contains the conditions the data must satisfy.
Posted by John Armstrong | Category theory
http://physics.stackexchange.com/questions/45648/euler-equation-of-fluid-dynamics
# Euler equation of fluid dynamics
I'm trying to obtain the Euler equation for a perfect fluid in laminar or stationary flow. A fluid particle is subjected to volume forces and surface forces. The first, in my case, is given only by gravity and the second by pressure. By Newton's second law I obtain:
$$\vec{F}_V + \vec{F}_s = m\frac{d\vec{v}}{dt}.$$
An element of volume force is given by $$d\vec{F}_V = dm\vec{g}=\rho d\omega\vec{g}$$ and an element of surface force is given by $$d\vec{F}_S = -pd\vec{S}.$$
Integrating I obtain
$$\int_V \rho\vec{g} \,d\omega - \int_S p\,d\vec{S} = \frac{d\vec{v}}{dt}\int_V \rho\, d\omega.$$
Now Euler equation is written in local form as $$\rho\vec{g} - \nabla p = \rho \frac{d\vec{v}}{dt}.$$
My question is this: where does the gradient of $p$ come from? I must have the following identity $$-\int_S p\,d\vec{S} = -\int_V \nabla p\,d\omega.$$
Why is the transformation from a surface integral to a volume integral given by the gradient and not by the divergence? Am I doing something wrong in the previous calculations?
-
## 2 Answers
This confusion is caused by vector calculus. You should treat each component separately, and then it is obvious. For example, for the x component:
$$\int_V \partial_x p dx dy dz = \int_{\partial V} p(x) dy dz = \int_{\partial V} p dS_x$$
by the fundamental theorem of calculus (do the x integral first). Likewise for the other components. You can make up a proof for this from the divergence theorem by introducing the fictitious vector field
$$Q = (p,0,0)$$
And then the divergence of Q is the left hand side, while the right hand side is $Q\cdot dS$. But it's really just the fundamental theorem of calculus.
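If it helps, here is a quick numerical check of the x-component identity on the unit cube, with a toy pressure field of my own choosing:

```python
from scipy.integrate import dblquad, tplquad

p = lambda x, y, z: x**2 + y*z          # an arbitrary example pressure field

# volume integral of dp/dx = 2x over the unit cube (tplquad wants f(z, y, x))
vol, _ = tplquad(lambda z, y, x: 2*x, 0, 1, 0, 1, 0, 1)

# boundary term: p on the face x=1 minus p on the face x=0 (outward normals)
surf, _ = dblquad(lambda z, y: p(1, y, z) - p(0, y, z), 0, 1, 0, 1)

print(vol, surf)                        # both come out to 1.0
```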
It is simpler and more general considering it as a special case of the Gauss-Ostrogradsky theorem. – Anuar Dec 2 '12 at 5:58
@Anuar: Except it's so simple I don't think it needs Gauss or Ostrogradsky's names attached to it. – Ron Maimon Dec 2 '12 at 6:04
Well, it's just matter of likes. – Anuar Dec 2 '12 at 6:09
Because the Gauss-Ostrogradsky theorem says that $$\iiint_{V}\nabla\cdot\mathbf{C}\,dv=\iint_{\partial V}\mathbf{C}\cdot\mathbf{n}\,da$$ where $\mathbf{C}$ is a vector field. Here you don't have a vector field inside the integral. So why do you expect the G-O theorem to apply in this case? By the way, the last equality that you wrote is correct. I don't know how to prove it, but I'm pretty sure that I saw that kind of theorem in Jackson's book on Electrodynamics.
Right. p is not a vector field... By the way I still don't understand the last equality... – R. M. Dec 1 '12 at 23:27
– Anuar Dec 1 '12 at 23:30
http://mathhelpforum.com/advanced-algebra/142520-solved-homogeneous-system.html
# Thread:
1. ## [SOLVED] Homogeneous system
If $A$ is a $3\times 3$ matrix whose columns $a_1, a_2, a_3$ satisfy $a_1+2a_2-a_3=0$, then $A$ must be singular.
True, since $\begin{bmatrix} 0\\ 0\\ 0 \end{bmatrix}$ and $\begin{bmatrix} 1\\ 2\\ -1 \end{bmatrix}$ are both solutions of the homogeneous equation $Ax=0$, so $Ax=0$ has a nontrivial solution.
2. True, since the homogeneous equation has a nontrivial solution.
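A quick numerical illustration (the particular columns below are made up; only the relation $a_3 = a_1 + 2a_2$ matters):

```python
import numpy as np

a1 = np.array([1.0, 0.0, 2.0])      # arbitrary example columns
a2 = np.array([0.0, 1.0, 1.0])
a3 = a1 + 2*a2                      # enforce a1 + 2 a2 - a3 = 0
A  = np.column_stack([a1, a2, a3])

print(np.linalg.det(A))             # 0.0, so A is singular
print(A @ np.array([1, 2, -1]))     # [0 0 0]: a nontrivial solution of Ax = 0
```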
http://mathoverflow.net/revisions/85846/list
# Finding lattice with short basis-vectors containing given lattice

Hello
While working on understanding the space spanned by certain integer relations of real numbers I have come across the following problem. Given $v_1,\dots, v_n \in \mathbb{Z}^m$ I would like to find $w_1, \ldots, w_n \in \mathbb{Z}^m$ such that
$$1.) \ \ \ \ \ \ \ \mathbb{Z}v_1+\dots +\mathbb{Z}v_n \ \subseteq \mathbb{Z}w_1 + \dots + \mathbb{Z}w_n$$
$$2.) \ \ \ \ \ \ \ \ |w_i|^2 \ \text{is small / as small as possible / a lot smaller than } |v_j|^2$$
$$3.) \ \ \ \ \ \ \text{The vector spaces spanned by } \{v_i\} \text{ and } \{w_i\} \text{ are identical}$$
In other words I want that if each $v_i$ satisfies $v_{i,1}\alpha_1 + \dots + v_{i,m}\alpha_m=0$ for a fixed collection of $\alpha_j \in \mathbb{R}$, the same remains true for the $w_i$ (this is of course the real condition I want).
If I wanted the lattices spanned by the $v_i$ and the $w_i$ to be identical, LLL would clearly be the natural approach, but since I do not require this, it seems not to make the vectors nearly as small as I can achieve under these weaker conditions. Does there exist an algorithm, approach, or idea which could come up with such a basis $w_i$ given the $v_i$?
EDIT : I missed the last condition and had some trouble updating correctly. Hope it makes sense and is not completely trivial now.
http://physics.stackexchange.com/questions/33222/how-do-you-start-self-learning-physics/46347
# How do you start self-learning physics [closed]
I think this question has its place here because I am sure some of you are "self-taught experts" and can guide me a little through this process.
Considering that :
• I don't have any physics scholar background at all.
• I have a little math background but nothing too complicated like calculus
• I am a fast learner and am willing to put many efforts into learning physics
• I am a computer programmer and analyst with a passion for physics laws, theories, studies and everything that helps me understand how things work or making me change the way I see things around me.
Where do I start ? I mean.. Is there even a starting point ? Do I absolutly have to choose a type of physic ? Is it possible to learn physics on your own ?
I know it's a general question but i'm sure you understand that i'm a little bit in the dark here.
-
start with newtonian mechanics – Neo Feb 4 at 16:56
## 11 Answers
1.) Find something that interests you. The secret to learning is to do something you can be passionate about. For one person it may be building metal detectors (cicuits etc.) and another may be more interested in string theory or crystal physics. Explore your local library's physics section.
2.) Become competent in the area that interests you. Thomas Edison was home-schooled and taught himself everything, but he also was given the freedom to learn whatever interested him. You will have to use relevant books. The internet alone will not suffice.
3.) Learn the Math. Focus on the theory and the equations should come naturally. Albert Einstein had to learn a lot of math before he could express his ideas in equations. Calculus 1, 2, and 3 are commonly used in much of physics. Khan academy may be useful here.
4.) Find another topic that interests you within physics. That should be easy since you will doubtlessly have stumbled across many fascinating concepts while investigating the first.
5.) Repeat steps 2-4.
There is nothing we can't do if we work hard, never sleep, and shirk all other responsibility.
Wow, this is a good approach. From what I read, QM and string theory interest me a lot, but still the same problem: I don't think I can start there – PhaDaPhunk Aug 1 '12 at 2:45
The issue of foundation is a tricky one. Start with a general physics book (1st year undergrad) and find the advanced topic(s) of interest in the index. If you understand the pages referenced in the index, then find a more advanced book and use the index again. Repeat until you encounter concepts you are unfamiliar with and then use the index procedure on them! If you find the index is needed much too often, take a course in general physics from MIT OCW (free online). Another way to approach an advanced topic is to read its history. Relevant physicist biographies may also help. – JoeHobbit Aug 1 '12 at 4:05
@PhaDaPhunk: you seem to be aware of very few high-level terms in physics, so you are thinking only of those... but actually you need to work on the basics, the lowest level. If you can imagine it happening around you in daily life then you are fit to go ahead... join some correspondence course and start from books that are more verbal description than big equations. And remember, "patience is learning"... be ready to spend days on learning even the simplest of concepts. ASK YOURSELF IF YOU CAN ANSWER ANY QUESTION RELATING TO IT! – rafiki Aug 1 '12 at 5:49
@JoeHobbit Albert Einstein flunked math? Yes, and Michael Jordan was cut from his high school basketball team, right? I hate these myths; they're purely manufactured by incompetent people, for incompetent people. Those people worked hard; stop making excuses. – OmnipresentAbsence Mar 10 at 18:44
@OmnipresentAbsence Thank you for pointing that out. – JoeHobbit Mar 15 at 20:59
You start by starting the process. It really doesn't matter all that much how you start, only that you start. Go to the library, look through some books, etc.
At first, you'll find much of what you read opaque. But, in a short time, you'll start connecting some dots and then more and then more still. You'll revisit material that was initially incomprehensible to you and find that bits and pieces now make some sense. This non-linear process will continue with great strides followed by slow spells.
At some point, you'll determine that you must make the effort to become familiar with the language of calculus if you're to make any more meaningful progress.
Once you've done that, it's like you've opened your eyes to an entirely new world.
Ok clear, encouraging and concrete. So it is possible. I just bought a few books today can't wait to receive them and see where I can get from there. – PhaDaPhunk Jul 31 '12 at 18:09
@PhaDaPhunk: reference request please – Argus Jul 31 '12 at 21:21
@Argus of what I bought ? – PhaDaPhunk Jul 31 '12 at 22:38
Quantum Physics: A Beginner's Guide , QED: The Strange Theory of Light and Matter, The Quantum World: Quantum Physics for Everyone – PhaDaPhunk Jul 31 '12 at 22:40
@PhaDaPhunk: I'd start with mechanics and optics before looking at QM/QED. – Ignacio Vazquez-Abrams Jul 31 '12 at 22:42
The learning-time problem affects everybody; physics is intimidating because to learn it, you have to recapitulate the history, and there's no underlying axiomatic system to deduce from. On 't Hooft's website, you will find a self-study guide put together for this purpose. It should get you started, and I don't think I can improve on 't Hooft.
But if you know mathematics and programming, there is a simple way to study the field: just go over the famous problems yourself. You can simulate the Ising model in a very short time:
Make a 2d grid of bits with value 0 or 1. Choose a bit at random, and calculate the energy it feels with its neighbors, by counting the number of neighbors that have a different bit value. The energy is this number.
Now flip the bit, and if the energy goes down, keep the flipped value. If the energy goes up, keep it with a probability $e^{-\beta\Delta E}$ where you are free to change $\beta$. Do this a very large number of times, and you get an equilibrium configuration. Then you can draw a picture of what it looks like.
As you change $\beta$, you will see the phase transition appear. The Ising model will settle to be either mostly 0's or mostly 1's. Studying this transition and the related problems is a good introduction to most modern physics (past 1940). It contains the seed for everything except string theory.
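For concreteness, here is a minimal Python sketch of exactly this recipe (grid size, β values and the number of sweeps are arbitrary choices of mine):

```python
import numpy as np

def ising_metropolis(L=32, beta=0.6, sweeps=100, seed=0):
    """2D Ising bits via Metropolis: energy of a site = # disagreeing neighbours."""
    rng = np.random.default_rng(seed)
    grid = rng.integers(0, 2, size=(L, L))            # random bits 0/1
    for _ in range(sweeps * L * L):
        i, j = rng.integers(0, L, size=2)             # pick a bit at random
        s = (grid[(i+1) % L, j] + grid[(i-1) % L, j] +
             grid[i, (j+1) % L] + grid[i, (j-1) % L]) # neighbour bits, 0..4
        v = grid[i, j]
        e_old = 4 - s if v else s                     # disagreeing neighbours now
        e_new = s if v else 4 - s                     # ... after flipping the bit
        dE = e_new - e_old
        if dE <= 0 or rng.random() < np.exp(-beta * dE):
            grid[i, j] = 1 - v                        # keep the flip
    return grid

# small beta: salt-and-pepper noise; large beta: mostly 0s or mostly 1s
for beta in (0.2, 0.8):
    print(beta, ising_metropolis(beta=beta).mean())
```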
Like you, I want to self-teach myself physics, yet here I am still at around the same stage I was a few years back. Why?
Because to learn physics effectively, I need to be immersed in it for days, weeks, months even years at a time. I also need to be coached by good teachers and peers that can steer me in the right direction, and prevent me from being lead astray by wrong ideas and bad problems. It's therefore essential people are in the right social environment.
So my answer would be to find the right social environment to make things easier for yourself, such as studying part time at night school, or joining a physics club or forum, as you've done here.
I really do think people underestimate the importance of the correct social environment when studying physics because of the support it provides and the warding off of depression from social isolation.
I like the idea although I have a hard time finding a single post I understand in here. But I guess that by reading and connecting some dots ill start to understand a few things. – PhaDaPhunk Jul 31 '12 at 22:41
@PhaDaPhunk I see it all the time--people wanting to write a novel, set up a business, learn a foreign language etc. They never do in the end because the obligations of other parts of their life are too distracting. If you want to massage your ego by answering the odd question here, then make it your hobby over the next few years to do all the problems in the book Introduction to Electrodynamics by Griffiths. Or you could give up and answer questions in your discipline on another Stack Exhchange site. There is no getting away from sacrificing time and effort in mastering physics. – Larry Harson Jul 31 '12 at 23:18
I was actually saying that because of what you said about social environment. I don't believe I should limit myself to one discipline. But if that's how you see it, that's fine too. I don't come here for my ego but because it's one of the best sources of information I know online. My discipline is know-it-all. I just love to learn – PhaDaPhunk Aug 1 '12 at 2:20
And I've never said anything about mastering anything. I just want some information about how to start on my own because I love this science. – PhaDaPhunk Aug 1 '12 at 2:22
The problem of social isolation can be circumvented rather easily by reading literature, and explaining it in a quick monologue to people who don't understand what you are talking about (and don't care). The social parts of the brain can be easily fooled into mistaking a one-way rant for a social interaction, so you can get by without going nuts. The social environment in academia is a mixed blessing. Seminars are wonderful, because detailed work is presented, but the day-to-day blah-blah-blah stuff just tends to produce conformity. – Ron Maimon Aug 1 '12 at 6:40
I would recommend starting with calculus and maybe linear algebra. A basic understanding of the properties of functions, derivatives, integrals and especially differential equations is vital. Vectors, matrices, tensors, different coordinate systems and their metrics, vector calculus are equally important. I would also take up a book on general (classical) physics, a good reference for that is Physics for scientists and engineers by Randall D. Knight. It's a big book, but it starts from the very basics and it goes all the way up to Special Relativity and QM (although you'll probably want to study those topics more in-depth in other, more specialized books).
From then on, analytical mechanics would probably be a good choice of topic: learning about the Lagrangian, the Hamiltonian formalism, and so on. That should give you a nice idea of the beauty in theoretical physics as well. And from then on I think you can go pretty much any way you want. For QM, a good book to start with is Introduction to Quantum Mechanics by David J. Griffiths. But you're not there yet: the maths comes before the physics. Good luck!
Just a note in addition to the advice being given here is this:
ACTUALLY DO THE PROBLEMS. Like on pen and paper. Do not under any circumstances look at a solution and go "Oh yeah, I get this. Next!" That is absolute bull and what many, many people who attempt to self-study physics end up trying and why a lot of them fail. It is very easy to skip on the grinding, difficult work associated with trying to actually solve problems and just read examples and theories but at that point you may as well pick up a popular science book and save yourself some heartbreak.
I am not kidding about this. If you take one piece of advice from this thread, let it be this. I am sure other people who have been formally educated as physicists will echo my sentiment.
First and most important: study calculus. I recommend the two textbooks by Apostol. Then study classical mechanics (Taylor is good) and electrodynamics (Griffiths); they are basically applied calculus. If you understand linear algebra, read Griffiths' book on quantum mechanics (you don't need anything outside calculus and linear algebra to read this book). Classical mechanics, electrodynamics and quantum mechanics constitute the foundation of physics. After that, you can learn whatever you want in physics or math. Be sure to solve all the problems contained in the textbooks you read.
When I read your question I saw myself in you! I did what you want to do about 5 years ago. To be honest, I am 18 now and I'm not a physicist or engineer yet; however, self-tutoring has brought me a long way, and I think my relative ignorance compared to the others who answer your question makes it easier for you to relate to me.
As has been previously stated, there really is no exact place to start; however, if you ask me, following the chronology of the advancements in physics sets out a pretty good self-tutoring schedule.
First of all, the maths:
Learn maths to the extent that you can solve differential equations with ease. After a point you won't be able to learn much more without knowing calculus.
• Buy used textbooks or follow video courses to learn the basics of integral and differential calculus after you nail down every concept in pre-calc.
If you're overly keen about it like I was, learn multivariable calculus.
• I recommend Stewart's multivariable calculus textbook.
After you're done with the maths you need general physics knowledge, like the major branches, current research, etc.
After you have a feel for what physics is, start by learning classical mechanics. There are countless textbooks and online sources, and in this case you can't go wrong with any of them since it's pretty basic stuff. But I recommend Schaum's College Physics: it's cheap and easy to understand, not to mention the ridiculous number of problems it offers.
• For classical mechanics start with kinematics: Newton's laws, uniformly accelerated motion, harmonic motion, projectile motion and so on. Then move on to thermal physics. Make sure you understand this concept very well because an understanding of thermodynamics will be very helpful for further subjects.
• After kinematics and thermodynamics my opinion is that you should learn electricity and magnetism. Learn about circuitry and magnetism and don't be afraid of Maxwell and his equations.
• Then start trying to understand relativity, special at least, because general relativity is far more complex; then learn relativistic kinematics. (This is not classical physics but you should still know it before starting quantum mechanics.)
• You should also have a pretty good understanding of electromagnetic waves and optics.
After classical mechanics move on to quantum mechanics: the physics of the small and strange. Quantum mechanics is a relatively hard concept to grasp. As for reading material, you should definitely read Dirac's Principles of Quantum Mechanics cover to cover. After that get a textbook and solve problems.
After you have a good understanding of quantum mechanics explore particle and accelerator physics.
Beyond this point it is up to you; now you know the basics of physics and you can understand any further concepts. In my case, for example, I looked into gravity and string theory after teaching myself physics for a considerable portion of my relatively short lifetime.
After you learn this much physics your understanding of everyday concepts changes. For example you will know why street lamps are yellow or why a singer can shatter a wine glass with her voice. Besides everyday concepts, you will know how to theoretically break the law of conservation of energy and you will know why two like charges repel each other and so on.
Good luck! If you like mathematical subjects, physics won't disappoint you!
It strongly depends on what you want to do with your physics knowledge.
If you want to make something practical or experiment then don't agonize over the math or you'll end up learning math alone and you won't understand the whole point of doing physics. Learn math on an as-needed basis and stick to the basic undergraduate physics literature.
If you want to do some theoretical/simulation modeling or just curious about the universe, like for example string theory/astrophysics/field theory etc, you need to start from basic calculus and work your way up to multi-variable calculus, differential geometry, abstract algebra and group theory. Then study just a little about introductory classical mechanics, E&M and go straight into QM/QFT or whatever. Don't be scared. Solving problems as opposed to just reading stuff will greatly quicken the learning process.
Hope this helps.
First and the most important thing to do is: don't get Apostol's Calculus. Unless you want to go into math as well, I don't see the purpose of getting that book when you want to self-study physics. Second, I am shocked that the previous commenter said that you only need calculus and linear algebra for Griffiths' QM when he said in his book's preface that you also need to be familiar with complex variables and Fourier series. You need more math than just calculus and linear algebra.
So what do you do? I say that you can do this free. Here's my list with websites.
1. precalculus (http://www.opentextbookstore.com/precalc/)
2. Strang's calculus (http://ocw.mit.edu/resources/res-18-001-calculus-online-textbook-spring-2005/)
3. Nearing's undergraduate math methods (http://www.physics.miami.edu/~nearing/mathmethods/)
4. Stone and Goldbart's graduate math methods (http://webusers.physics.illinois.edu/~goldbart/PG_MS_MfP.htm)
And for the physics? It's just one website: (http://farside.ph.utexas.edu/teaching.html)
I think it depends a bit on what you want to do in physics. For mechanics you need the least math knowledge (basic calculus). At least my opinion and experience. – mick Oct 23 '12 at 16:17
I've begun a self-study schedule on my own. I began with a review of algebra and geometry and have now begun calculus. I'm basically trying to follow a typical college physics curriculum. There are a lot of resources on the internet: MIT online, cheap used books on Amazon and Barnes & Noble, etc. I think most of the above comments are on target. I want to get the math down first, then on to classical physics... like college.
http://mathhelpforum.com/math-topics/164424-classical-mechanics-problem-print.html
# classical mechanics problem
• November 25th 2010, 10:23 PM
magus
classical mechanics problem
A ball is thrown with initial speed $v_0$ up an inclined plane. The plane is at an angle $\phi$ and the ball's initial velocity is at angle $\theta$, measured from the slope. Choose axes with x measured up the slope, y normal to it, and z across it. Write down Newton's second law using these axes and find the ball's position as a function of time. Show that the ball lands a distance
$R=\dfrac{2v_0^2\sin{\theta}\cos(\theta+\phi)}{\cos ^2(\phi)}$
from its launch point.
Show that, for a given $v_0$ and $\phi$, the maximum range is $R_{max}=\dfrac{v_0^2}{g(1+\sin\theta)}$
• November 26th 2010, 04:21 AM
Unknown008
Um... have you tried anything?
• November 26th 2010, 10:56 PM
magus
I've tried many things but none of them get me even close to that result. I know that the force in the x direction is $mg\sin\theta$, so the displacement formula has this substituted for $g$ for the displacement in the x direction. The displacement in the x direction is $v_0\cos\theta\, t$, and y is normal, so there's no displacement in the y direction.
• January 12th 2011, 07:18 AM
magus
Ok I'm coming back to this now.
I took a look at the derivation for this problem in two dimensions for a projectile and tried to model my approach after it. This is how far I got.
$0=x(t)= v_0 \sin(\phi)t-\frac{1}{2}g \cos(\theta)t^2$
then from this we can get
$t=\dfrac{2v_0\sin(\phi)}{g\cos(\theta)}$
Then, as in the simple 2D case, I multiplied $t$ by $v_0$ to obtain $R$,
but this doesn't give the result I was looking for.
Can anyone help me with this?
• January 12th 2011, 09:18 AM
skeeter
Quote:
Originally Posted by magus
A ball is thrown with initial speed $v_0$ up an inclined plane. the plane is at an angle $\phi$ and the ball's initial velocity is at angle $\theta$. Choose an axis with x measured up the slope, y normal, and z across it. Write down Newton's second law using these axes and find the ball's position as a function of time. Show that the ball lands a distance
$R=\dfrac{2v_0^2\sin{\theta}\cos(\theta+\phi)}{g\cos^2(\phi)}$
note the correction in your formula for R from your original post.
$\displaystyle \Delta x = v_0\cos{\theta} \cdot t - \frac{1}{2}g\sin{\phi} \cdot t^2$
$\displaystyle \Delta y = v_0\sin{\theta} \cdot t - \frac{1}{2}g\cos{\phi} \cdot t^2$
since $\Delta y = 0$ ...
$\displaystyle t = \frac{2v_0\sin{\theta}}{g\cos{\phi}}$
substituting for $t$ in the $\Delta x$ equation ...
$\displaystyle \Delta x = \frac{2v_0^2 \sin{\theta} \cos{\theta}}{g\cos{\phi}} - \frac{2v_0^2 \sin^2{\theta} \sin{\phi}}{g\cos^2{\phi}}$
common denominator ...
$\displaystyle \Delta x = \frac{2v_0^2 \sin{\theta} \cos{\theta}\cos{\phi}}{g\cos^2{\phi}} - \frac{2v_0^2 \sin^2{\theta} \sin{\phi}}{g\cos^2{\phi}}$
combine and factor ...
$\displaystyle \Delta x = \frac{2v_0^2 \sin{\theta}(\cos{\theta}\cos{\phi} - \sin{\theta}\sin{\phi})}{g\cos^2{\phi}}$
finally, note that ...
$\cos{\theta}\cos{\phi} - \sin{\theta}\sin{\phi} = \cos(\theta+\phi)$
... and you're there.
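A quick numeric sanity check of this algebra (my own sketch, not part of the original post, with arbitrary test values for $v_0$, $\theta$, $\phi$):

```python
# Compare the closed-form range with the kinematic calculation above.
import math

g, v0 = 9.81, 20.0                                  # SI units, arbitrary test values
theta, phi = math.radians(30), math.radians(10)

# Closed form: R = 2 v0^2 sin(theta) cos(theta + phi) / (g cos^2(phi))
R_formula = 2 * v0**2 * math.sin(theta) * math.cos(theta + phi) / (g * math.cos(phi)**2)

# Kinematics: flight time from Delta-y = 0, then Delta-x at that time
t_land = 2 * v0 * math.sin(theta) / (g * math.cos(phi))
R_kinematic = v0 * math.cos(theta) * t_land - 0.5 * g * math.sin(phi) * t_land**2

print(R_formula, R_kinematic)                       # the two values agree
```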
• January 12th 2011, 09:27 AM
Unknown008
I think there is a 'g' missing in your first part.
Make a sketch.
(I'll be using v instead of vo to make it easier to type)
The velocity of the ball up the plane is $v \cos\theta$
That perpendicular to the plane is $v \sin\theta$
The acceleration is down the plane and is given by $g\sin\phi$
And that perpendicular to the plane is given by $g \cos\phi$
From this, the distance perpendicular to the plane where the particle lands is 0 and is given by:
$s = ut + \dfrac12 at^2$
This:
$0 = v\sin\theta t - \dfrac12 g\cos\phi t^2$
For the distance along the plane, we have:
$s = v\cos\theta t - \dfrac12 g\sin\phi t^2$
Can you complete the first part now?
EDIT: Didn't see you replied Skeeter (Itwasntme)
• January 12th 2011, 09:55 AM
Unknown008
For the second part now. (there is also a mistake, see below)
$R = \dfrac{2v^2\sin\theta\cos(\theta+\phi)}{g\cos^2\phi}$
Use the identity: $2\sin A\cos B = \sin(A+B) + \sin(A-B)$
This gives:
$R = \dfrac{v^2(\sin(2\theta + \phi) + \sin(-\phi))}{g\cos^2(\phi)}$
Simplify:
$R = \dfrac{v^2(\sin(2\theta + \phi) - \sin(\phi))}{g(1+\sin\phi)(1 - \sin\phi)}$
At the maximum range, $\sin(2\theta + \phi) = 1$ since only theta can vary.
Hence we get:
$R = \dfrac{v^2(1 - \sin(\phi))}{g(1+\sin\phi)(1 - \sin\phi)}$
Something cancels out, giving:
$R_{max} = \dfrac{v^2}{g(1+\sin\phi)}$
From this, you can even find the relation between theta and phi for this value of range
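This can also be checked numerically (a sketch of my own, not from the original post): sweep $\theta$ over $[0,\pi/2]$ and compare the largest $R$ with the closed form.

```python
# Numeric check that max over theta of R(theta) equals v^2 / (g (1 + sin(phi))).
import math

g, v, phi = 9.81, 20.0, math.radians(10)

def R(theta):
    return 2 * v**2 * math.sin(theta) * math.cos(theta + phi) / (g * math.cos(phi)**2)

R_max_numeric = max(R(k * (math.pi / 2) / 10000) for k in range(10001))
R_max_formula = v**2 / (g * (1 + math.sin(phi)))
print(R_max_numeric, R_max_formula)   # agree to about four significant figures
```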
• January 13th 2011, 04:24 AM
magus
Thanks. The identity $2\sin A\cos B = \sin(A+B) + \sin(A-B)$ is the other equation I really needed, I guess.
http://math.stackexchange.com/questions/48290/finding-a-simple-basis-having-the-minimal-and-characteristic-polynomial?answertab=oldest
# finding a simple basis, having the minimal and characteristic polynomial
If I have a minimal polynomial ($p$ irreducible) $p(x)^n$, for example $(x^2+x+1)^3$, and a characteristic polynomial $p(x)^2$, how can I get a simple basis using the rational canonical form theorem? I don't understand it.
-
5
It is difficult to understand what you are asking, really. – Mariano Suárez-Alvarez♦ Jun 28 '11 at 20:12
@Mariano: And it's impossible to have an operator with minimal polynomial equal to $(x^2+x+1)^3$ and characteristic polynomial equal to $(x^2+x+1)^2$. Presumably, he is asking about how to go about finding the rational canonical basis (or form), but it's pretty garbled. – Arturo Magidin Jun 28 '11 at 20:24
A simple basis for...what? – Gerry Myerson Jun 29 '11 at 1:35
## 1 Answer
Well, for one thing this is impossible: you cannot have the minimal polynomial be $(x^2+x+1)^3$ and the characteristic polynomial be $(x^2+x+1)^2$. By the Cayley-Hamilton Theorem, the minimal polynomial must divide the characteristic polynomial.
Now, if you had it reversed, the minimal polynomial $(x^2+x+1)^2$ and the characteristic polynomial $(x^2+x+1)^3$, then you can argue the same way one argues about the Jordan canonical form:
The highest power of $x^2+x+1$ in the minimal polynomial tells you the size of the largest companion block in the Rational Canonical form; so the Rational canonical form will have blocks associated to $(x^2+x+1)^2$, and possibly to $(x^2+x+1)$, but none to any higher degree.
The characteristic polynomial tells you that the dimension is $6$; a block associated to $(x^2+x+1)^2$ accounts for $4$ dimensions, a block associated to $x^2+x+1$ accounts for $2$. So the Rational Canonical Form will necessarily consist of a block that is the companion matrix of $(x^2+x+1)^2$, and a block that is the companion matrix of $x^2+x+1$.
(The information in the characteristic and minimal polynomials is not always enough to uniquely determine the rational canonical form, just like it doesn't always uniquely determine the Jordan canonical form; we got lucky here).
So: you want something which is in $\mathrm{Ker}((T^2+T+I)^2)$ but not in $\mathrm{Ker}(T^2+T+I)$ in order to obtain the block corresponding to $(x^2+x+1)^2$. Everything is in $\mathrm{Ker}((T^2+T+I)^2)$ (because $(x^2+x+1)^2$ is the minimal polynomial); so first find $\mathrm{Ker}(T^2+T+I)$. Then take a vector $\mathbf{v}$ that is not in $\mathrm{Ker}(T^2+T+I)$, and take its $T$-cyclic basis, $\mathbf{v}, T(\mathbf{v}), T^2(\mathbf{v}), T^3(\mathbf{v})$. This gives you the first cycle. Then take some $\mathbf{w}\in \mathrm{Ker}(T^2+T+I)$ that is not in the span of the $T$-cyclic basis generated by $\mathbf{v}$, and take its $T$-cyclic basis, $\mathbf{w}, T(\mathbf{w})$. The union of the two bases is a rational canonical basis for $V$.
The process is completely analogous to how we find a Jordan canonical basis, we just use $p(T)$ instead of $T-\lambda I$, and apply $T$ to get the "next" vector instead of applying $T-\lambda I$ to get the "previous" vector (if your Jordan canonical forms have $1$s over the diagonal; if they have $1$s under the diagonal, the second part is exactly the same, only using $T$ instead of $T-\lambda I$).
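To make the block structure concrete, here is a small SymPy sketch (my own illustration, not from the original answer) that builds the direct sum of the two companion blocks and verifies the characteristic and minimal polynomials:

```python
# C2: companion matrix of (x^2+x+1)^2 = x^4 + 2x^3 + 3x^2 + 2x + 1
# C1: companion matrix of x^2 + x + 1;  T is their 6x6 direct sum.
from sympy import Matrix, symbols, eye, zeros, factor

x = symbols('x')
C2 = Matrix([[0, 0, 0, -1],
             [1, 0, 0, -2],
             [0, 1, 0, -3],
             [0, 0, 1, -2]])
C1 = Matrix([[0, -1],
             [1, -1]])
T = zeros(6, 6)
T[:4, :4] = C2
T[4:, 4:] = C1

print(factor(T.charpoly(x).as_expr()))         # (x**2 + x + 1)**3

p = T**2 + T + eye(6)                          # p(T) for p = x^2 + x + 1
print(p != zeros(6, 6), p**2 == zeros(6, 6))   # True True, so min poly is p^2
```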
-
http://mathhelpforum.com/calculus/189265-differentiate-find-maximum-minimum-turning-point.html
# Thread:
1. ## differentiate and find maximum and minimum turning point
$f(x) = 2x^{2/3} + x^{4/5} + 11$
I differentiated this to get
$f'(x) = \frac{4}{3}x^{-1/3} + \frac{4}{5}x^{-1/5}$
I know then that I have to set the derivative equal to 0. Please help with what I have to do next.
Thanks
2. ## Re: differentiate and find maximum and minimum turning point
The first derivative is indeed:
$f'(x)=\frac{4}{3}x^{\frac{-1}{3}}+\frac{4}{5}x^{\frac{-1}{5}}$
To find the minimum/maximum you then have to solve $f'(x)=0$. What do you get? Afterwards make a sign table to determine where the function increases/decreases.
3. ## Re: differentiate and find maximum and minimum turning point
I can't remember how to solve it. i know im missing something simple. so can anyone help me solve this please. thank you
4. ## Re: differentiate and find maximum and minimum turning point
Originally Posted by salma2011
I can't remember how to solve it. i know im missing something simple. so can anyone help me solve this please. thank you
I suggest finding the lowest common denominator of 3 and 5 and changing your exponents into that form:
$f'(x) = \dfrac{4}{3}x^{-5/15} + \dfrac{4}{5}x^{-3/15} = 0$
You can then factor out $4x^{-5/15}$ as a common factor
$4x^{-5/15}\left(\dfrac{1}{3} + \dfrac{1}{5}x^{2/15}\right) = 0$
Can you solve now?
To test whether you have a maximum or minimum check the sign of $f''(x)$.
• If $f''(x) < 0$ you have a maximum
• If $f''(x) > 0$ you have a minimum
• If $f''(x) = 0$ you have a point of inflection
5. ## Re: differentiate and find maximum and minimum turning point
There're probably different ways to solve the equation, but if you have to solve:
$0=\frac{4}{3}x^{\frac{-1}{3}}+\frac{4}{5}x^{\frac{-1}{5}}$
$\Leftrightarrow 4x^{\frac{-1}{3}}\left(\frac{1}{3}+\frac{1}{5}x^{\frac{2}{15}}\right)=0$
$\Leftrightarrow x^{\frac{-1}{3}}=0 \ \mbox{or} \ x^{\frac{2}{15}}=\frac{-5}{3}$
This means there're no solutions, because there's no $x$ value wherefore:
$x^{\frac{-1}{3}}=0$ or $x^{\frac{2}{15}}=\frac{-5}{3}$.
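The same conclusion can be reached with the substitution $u = x^{1/15}$, which turns the equation into a polynomial one; a minimal SymPy sketch (my own, not from the thread):

```python
# With u = x**(1/15):  x**(-1/3) = u**(-5) and x**(-1/5) = u**(-3), so
# f'(x) = u**(-5) * (4/3 + (4/5) u**2); a zero needs 4/3 + (4/5) u**2 = 0.
from sympy import symbols, solve, Rational

u = symbols('u', real=True)
print(solve(Rational(4, 3) + Rational(4, 5) * u**2, u))   # [] -> no real solution
```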
6. ## Re: differentiate and find maximum and minimum turning point
I solved for x and got x = + or - the square root of 3/5
(sorry don't know how to put mathematical functions onto the computer)
and then from the derivative i work out whether it's a maximum or a minimum.
thank you for all your help
http://mathhelpforum.com/pre-calculus/38464-finding-center-conic-sections.html
# Thread:
1. ## Finding the center of conic sections
Hey guys, just ran across this question on a homework:
"Find the center of the conic section whose equation is: A x ²+B xy +C y ²+D x +E y +F=0"
How do I find them without any numbers in each term? Is there a general formula to solve for the center like -b/2a?
Any help is appreciated
Derg
2. Originally Posted by Dergyll
Hey guys, just ran across this question on a homework:
"Find the center of the conic section whose equation is: A x ²+B xy +C y ²+D x +E y +F=0"
How do I find them without any numbers in each term? Is there a general formula to solve for the center like -b/2a?
Any help is appreciated
Derg
This is messy but you can complete the square
3. Originally Posted by Dergyll
Hey guys, just ran across this question on a homework:
"Find the center of the conic section whose equation is: A x ²+B xy +C y ²+D x +E y +F=0"
How do I find them without any numbers in each term? Is there a general formula to solve for the center like -b/2a?
Any help is appreciated
Derg
Define what you mean by "center." For example I have never heard of the term used for a parabola. (Though there is a point called the focus that might serve a similar function.)
If B = 0 then it's fairly easy. As Mathstud28 suggested you can complete the square on the x and/or y terms (as applicable) and simplify from there. If B is not 0 then you need to rotate the coordinate system such that the new form has the x'y' coefficient equal to 0.
The question is odd. There is no way to approach this until at least some of the coefficients are known.
-Dan
4. That is exactly, word for word, what the question says. The center (given that the graph is NOT a parabola, since both x and y are squared) should be that of an ellipse. But I still don't know how to solve for it!!! Does the problem want me to plug numbers in?
Thanks
Derg
5. Originally Posted by Dergyll
[snip]
The center (given that the graph is NOT a parabola, since both x and y are squared) should be that of an ellipse. But I still don't know how to solve for it!!!
[snip]
It makes sense to talk about the centre of a hyperbola too, you know - it's the intersection point of its asymptotes .......
Originally Posted by Dergyll
[snip]
Does the problem want me to plug numbers in?
Thanks
Derg
Not being able to consult with the author of the question, who knows? Although, if that's what the question wanted you to do, shouldn't it say so ....?
6. Originally Posted by Dergyll
That is exactly, word for word, what the question says. The center (given that the graph is NOT a parabola, since both x and y are squared) should be that of an ellipse. But I still don't know how to solve for it!!! Does the problem want me to plug numbers in?
Thanks
Derg
What graph? And the equation you posted was in terms of unknown constants A, B, ... You didn't state any conditions on them so some of them may be zero and may be positive or negative. The equation you gave was the general form for a conic section: it could be anything.
-Dan
7. Hello, Derg!
If you haven't been taught about Rotations,
. . this problem is inappropriate.
Find the center of the conic section whose equation is:
. . . $Ax^2 + Bxy + Cy^2 + Dx + Ey + F \;=\;0$
There is a "discriminant": . $\Delta \;=\;B^2-4AC\qquad \begin{Bmatrix}\Delta = 0 & \text{Parabola} \\ \Delta > 0 & \text{Hyperbola} \\ \Delta < 0 & \text{Ellipse} \end{Bmatrix}$
If there is no $xy$-term $(B = 0)$, it is a standard conic.
. . Its axes are parallel to the coordinate axes.
If $B \neq 0$, the conic has been rotated through angle $\theta$
. . where: . $\tan2\theta \:=\:\frac{B}{A-C}$
We can "un-rotate" the graph with these conversions:
. . $\begin{array}{ccc} X &=& x\cos\theta - y\sin\theta \\ Y &=& x\sin\theta + y\cos\theta \end{array}$
We also have:
. . $A' \;=\;A\cos^2\!\theta + B\sin\theta\cos\theta + C\sin^2\!\theta$
. . $B' \;=\;0$
. . $C' \;=\;A\sin^2\!\theta - B\sin\theta\cos\theta + C\cos^2\!\theta$
. . $D' \;=\;D\cos\theta + E\sin\theta$
. . $E' \;=\;\text{-}D\sin\theta + E\cos\theta$
. . $F' \;=\;F$
We get a new equation: . $A'X^2 + C'Y^2 + D'X + E'Y + F' \;=\;0$
. . and now you can complete the square, etc.
~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~
Note: They always use the term "center".
. . . . .In a parabola, it refers to the vertex.
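One further remark: when $B^2 - 4AC \neq 0$ (an ellipse or hyperbola), the center can be found without any rotation, since it is the point where both partial derivatives of $Q(x,y) = Ax^2 + Bxy + Cy^2 + Dx + Ey + F$ vanish, that is, the solution of $2Ax + By + D = 0$ and $Bx + 2Cy + E = 0$. A minimal sketch of my own (not from the thread):

```python
# Center of a central conic from the vanishing-gradient condition.
def conic_center(A, B, C, D, E):
    det = 4 * A * C - B * B        # zero exactly in the parabolic case
    if det == 0:
        raise ValueError("parabola: no center")
    x0 = (B * E - 2 * C * D) / det
    y0 = (B * D - 2 * A * E) / det
    return x0, y0

# Example: x^2 + y^2 - 2x - 4y + 1 = 0 is a circle centered at (1, 2)
print(conic_center(1, 0, 1, -2, -4))   # (1.0, 2.0)
```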
http://mathoverflow.net/questions/11558/taylor-series-of-a-complex-function-that-is-not-holomorphic/11565
## Taylor series of a complex function that is not holomorphic
I want to create a Taylor series of a complex function that has the complex conjugate in it. Obviously I cannot take a total (complex) derivative, but derivatives with respect to the real and imaginary parts exist.
Bonus question: Can I produce a Taylor series using only derivations over real part?
-
Maybe there is something about Taylor series using directional derivatives instead of partial derivatives. – Domagoj Peharda Jan 13 2010 at 10:39
The reason why I wanted to find a Taylor series was to produce a Newton method. Finally I found that I needed to look into CR Calculus citeseerx.ist.psu.edu/viewdoc/… – Domagoj Peharda Feb 16 2010 at 21:26
## 2 Answers
Another option is $\sum c_{mn}z^m \bar z^n$, which still keeps track of the complex structure. For instance, harmonic functions will have $c_{mn}=0$ unless $mn=0$.
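For reference, the coefficients of such a double series can be written with the Wirtinger derivatives of "CR calculus" (standard facts, added here for completeness):
$$\frac{\partial}{\partial z}=\frac{1}{2}\left(\frac{\partial}{\partial x}-i\frac{\partial}{\partial y}\right),\qquad \frac{\partial}{\partial \bar z}=\frac{1}{2}\left(\frac{\partial}{\partial x}+i\frac{\partial}{\partial y}\right),$$
$$f(z,\bar z)=\sum_{m,n\ge 0}\frac{1}{m!\,n!}\,\frac{\partial^{m+n}f}{\partial z^m\,\partial\bar z^n}\bigg|_{0}\;z^m\bar z^n .$$
A function is holomorphic exactly when $\partial f/\partial\bar z=0$, in which case only the $n=0$ terms survive.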
-
So I can always transform my complex function to f(z,z*). But how do I do a derivative of that? – Domagoj Peharda Jan 13 2010 at 10:51
Remember that the complex plane is $\mathbb{R}^2$ and use normal old multivariable Taylor series.
-
My function is thus $\mathbb{R}^2$ to $\mathbb{R}^2$, and it seems that your link is only about scalar-valued multivariable functions. – Domagoj Peharda Jan 13 2010 at 10:29
http://quant.stackexchange.com/questions/4456/how-do-you-synthesize-a-probability-density-function-pdf-from-equally-weighted
# How do you synthesize a probability density function (pdf) from equally weighted price data?
What I'm working with: I have a collection of prices that has very few to no repeating values (depending on the look back period) ie each price value is unique, some prices are clustered and some can be spread apart by great distances.
Because there is only one count for each price, each price therefore has an equal probability weight. This type of data produces a flat pdf. I'm looking for a curvilinear pdf, so I can find levels of interest.
Question: How do I construct a curvilinear pdf from data whose values all have the same frequency/count, each occurring exactly once?
Potential Solutions:
1) Some of the values are clustered, and they look like they could be grouped to generate an aggregate-frequency/count.
I like this idea, but what technique do you use?
2) I could use volume or ticks to weight the notional price value.
For my work, I'm not interested in the influence that volume or tick weighted distributions would have.
Recommendations of papers or other resources is greatly appreciated.
@vanguard2k
First, I assume that your price data are all from the same asset but spread over a certain time range. Correct: all prices are from one symbol, the S&P 500 futures intraday price. As a first step you could make a histogram of your data. It's because of the 'lack' of shape of my histogram (it's flat, like a rug plot) that I'm looking for a technique to tease out a curvilinear pdf. Due to the infrequency of repeated price values in my data set, every price carries the same probability weight, $P(\text{price}) = 1/n$ for a sample of size $n$.
My histogram looks similar to this wiki picture: http://upload.wikimedia.org/wikipedia/commons/thumb/4/4c/Fair_dice_probability_distribution.svg/200px-Fair_dice_probability_distribution.svg.png
You could look into the topic of density estimation here.
I've spent the day reviewing your links, and the method of kernel density estimation (kde) looks promising. But I don't completely comprehend how to construct a (kde).
I've started a list of how to plot a KDE. What steps have to be taken to implement a kernel density estimation with real-world price examples?
Procedure?:
1 Determine what type of partitioning/clustering method to apply to a financial time series (5 categories/methods: partitioning, hierarchical, density, grid-based, and model-based).
2 Apply the clustering technique to partition observations into groups.
3 Calculate a practical estimate of the kernel bandwidth, $h = 1.06\,\hat{\sigma}\,n^{-1/5}$ (Silverman's rule of thumb, with $\hat{\sigma}$ the sample standard deviation and $n$ the sample size), or minimize the MISE.
4 Determine what kernel function to use (Epanechnikov, Quartic, Triangular, Gaussian, Cosine, etc)
5 Calculate a kernel density estimation for each price.
6 Sum the kernels to make the kernel density estimate.
Question: Does the KDE assign a probability value to prices that were not in the price data set? The first KDE example image on Wikipedia suggests that it does. http://en.wikipedia.org/wiki/File:Comparison_of_1D_histogram_and_KDE.png
If you don't have time series data but only price data and you want to cluster it (you are speaking of "price level clusters") you should look into the topic of unsupervised learning. I don't understand the difference between 'time series data' and 'price data'.
-
## 2 Answers
First, I assume that your price data are all from the same asset but spread over a certain time range.
If you are looking for the distribution of the price of this asset on the real axis, you have plenty of methods (several fields in mathematics and statistics deal with this topic).
As a first step you could make a histogram of your data. There you can see about the clusters you were talking about. It gives you a good impression of the distribution of the data.
Answer to question:There are lots of ways how to get a density out of your discrete dataset. You could look into the topic of density estimation here. The free software R (www.r-project.org) has lots of packages that helps you achieve this.
Generally, in the case of time-dependent data (financial time series) you will soon notice other effects (see time series). One notices for example that the density changes over time (due to seasonality, say). That still not being enough, a lot of (financial) time series appear to be dependent on the past (see for example the topic autocorrelation). The approach of estimating a single density from the data is often not advisable, as it changes over time! One tries to model the dependence of the data over time. It is therefore often necessary to speak of the "conditional density at time $t$".
As you see, there is a lot you can do here and this is just a small sample of the possible methods.
If you dont have time series data but only price data and you want to cluster it (you are speaking of "price level clusters") you should look into the topic of unsupervised learning. But please be aware of possible changes of your results over time!
In general all mentioned topics are widely used and interrelated. I hope this answers your question (and I got the meaning of your question right) at least to some extent.
EDIT: Just some remarks to the comments you posted in your question. I hope I found all of them:
• As far as the histograms are concerned: the "art" of nice histograms partly depends on how you choose your intervals. If you take an interval length of between 2 and 5 points of your futures contract (for example) you will get a different picture, and you should be able to spot something that more resembles a density. You divide your price data into 5-point intervals and count how many of your price data fall in each interval. Then you can say $5\%$ of the data was between $1408$ and $1410$. I have to stress again here that it would be more than brave to say that there is a $5\%$ probability of future S&P-future values lying in this interval!
• I am not sure how you should link the topics of clustering and density estimation here. For both topics you could definitely look into this resource: Elements of Statistical Learning. It is a free book and is widely used for teaching and learning of (but not only) these topics.
• Answer to new question: The density estimation in your picture (or Fig. 6.13 of the book I mentioned) assigns a probability to every value, including those not in the dataset. Note that this is not a property of kernel density estimation in general but rather of the kernel used (here it is Gaussian).
• Difference between time series data and price data: In mathematics a random sample consists of independent random variables with identical distributions. There is overwhelming evidence that the distribution of financial returns varies over time and that they are not independent. Financial time series should not be viewed as a random sample because they are neither independent nor identically distributed. That was what I wanted to say here.
-
I have a comment that is longer than what is allowed. How do I convey a constructive reply? FYI: your reply is about 3X long than what I'm being allowed to enter. – montyhall Nov 3 '12 at 6:42
– montyhall Nov 3 '12 at 7:37
1
@montyhall This is Q&A site, not a discussion forum. In general you shouldn't need to reply to answers. – Alexey Kalmykov Nov 3 '12 at 9:34
@AlexeyKalmykov I don't have enough credits to use the chat room to directly ask for help. FAQ does not address how a new member should respond. Some one voted that there is an answer to my question, there is only a good suggestion (one that I'm working on tks!). How do I change to not ans? I'm looking for a method of how to create a (pdf) utilizing density estimation with real world price data. Since the question has been answered, should I resubmit my problem, but with a different title and I'll included the knowledge gained from vanguard2k with a slightly different angle and questions? – montyhall Nov 3 '12 at 15:03
One simple approach is
1. Construct the cumulative probability function (CDF), which will be a step-function.
2. Smooth the CDF; for example, by using splines or a kernel smoothing function.
3. Calculate the slope of the smoothed CDF, giving a curvy linear PDF.
In R, this could be done using the ecdf function and one of the kernel smoothers.
Again, as vanguard2k warned, this procedure assumes your distribution is stationary over time.
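For concreteness, here is a minimal Python sketch of the kernel-density route (toy data of my own, not from either answer); it also settles the earlier question: a Gaussian KDE assigns a positive density to every price, including values that never occurred in the sample.

```python
# Kernel density estimate of price data with Silverman's bandwidth rule.
import numpy as np
from scipy.stats import gaussian_kde

prices = np.array([1401.25, 1401.50, 1402.00, 1405.75, 1406.00,
                   1406.25, 1406.50, 1410.25, 1410.50, 1411.00])

kde = gaussian_kde(prices, bw_method="silverman")

grid = np.linspace(prices.min() - 2.0, prices.max() + 2.0, 200)
density = kde(grid)                  # smooth density, defined on the whole grid
print(grid[np.argmax(density)])      # one "level of interest": the densest price
```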
-
http://catalog.flatworldknowledge.com/bookhub/reader/128?e=fwk-redden-ch01_s04
# Elementary Algebra, v. 1.0
by John Redden
## 1.4 Fractions
### Learning Objectives
1. Reduce a fraction to lowest terms.
2. Multiply and divide fractions.
3. Add and subtract fractions.
## Reducing
A fraction (a rational number written as a quotient of two integers $\frac{a}{b}$, where $b$ is nonzero) is a real number written as a quotient, or ratio, of two integers $a$ and $b$, where $b\neq 0$.
The integer above the fraction bar is called the numerator and the integer below is called the denominator. The numerator is often called the "part" and the denominator is often called the "whole." Equivalent fractions are two equal ratios expressed using different numerators and denominators. For example,
$\frac{50}{100}=\frac{1}{2}$
Fifty parts out of 100 is the same ratio as 1 part out of 2 and represents the same real number. Consider the following factorizations of 50 and 100:
$50=2\cdot 25 \qquad 100=4\cdot 25$
The numbers 50 and 100 share the factor 25. A shared factor is called a common factor. We can rewrite the ratio $\frac{50}{100}$ as follows:
$\frac{50}{100}=\frac{2\cdot 25}{4\cdot 25}$
Making use of the multiplicative identity property and the fact that $\frac{25}{25}=1$, we have
$\frac{50}{100}=\frac{2}{4}\cdot \frac{25}{25}=\frac{2}{4}\cdot 1=\frac{2}{4}$
Dividing $\frac{25}{25}$ and replacing this factor with a 1 is called canceling. Together, these basic steps for finding equivalent fractions define the process of reducing. Since factors divide their product evenly, we achieve the same result by dividing both the numerator and denominator by 25 as follows:
$\frac{50}{100}=\frac{50\div 25}{100\div 25}=\frac{2}{4}$
Finding equivalent fractions where the numerator and denominator have no common factor other than 1 is called reducing to lowest terms. When learning how to reduce to lowest terms, it is helpful to first rewrite the numerator and denominator as a product of primes and then cancel. For example,
$\frac{50}{100}=\frac{2\cdot 5\cdot 5}{2\cdot 2\cdot 5\cdot 5}=\frac{1}{2}$
We achieve the same result by dividing the numerator and denominator by the greatest common factor (GCF), the largest number that divides both the numerator and denominator evenly. One way to find the GCF of 50 and 100 is to list all the factors of each and identify the largest number that appears in both lists. Remember, each number is also a factor of itself.
factors of 50: {**1**, **2**, **5**, **10**, **25**, **50**}
factors of 100: {**1**, **2**, 4, **5**, **10**, 20, **25**, **50**, 100}
Common factors are listed in bold, and we see that the greatest common factor is 50. We use the following notation to indicate the GCF of two numbers: GCF(50, 100) = 50. After determining the GCF, reduce by dividing both the numerator and the denominator as follows:
$\frac{50}{100}=\frac{50\div 50}{100\div 50}=\frac{1}{2}$
Example 1: Reduce to lowest terms: $\frac{105}{300}$.
Solution: Rewrite the numerator and denominator as a product of primes and then cancel.
$\frac{105}{300}=\frac{3\cdot 5\cdot 7}{2\cdot 2\cdot 3\cdot 5\cdot 5}=\frac{7}{2\cdot 2\cdot 5}=\frac{7}{20}$
Alternatively, we achieve the same result if we divide both the numerator and denominator by the GCF(105, 300). A quick way to find the GCF of the two numbers requires us to first write each as a product of primes. The GCF is the product of all the common prime factors.
In this case, the common prime factors are 3 and 5 and the greatest common factor of 105 and 300 is 15.
Answer: $\frac{7}{20}$
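As an aside (not part of the original text), both routes are one-liners in Python's standard library:

```python
# Reducing 105/300 with the GCF, using only the standard library.
from math import gcd
from fractions import Fraction

print(gcd(105, 300))        # 15, the greatest common factor
print(Fraction(105, 300))   # 7/20; Fraction reduces to lowest terms automatically
```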
Try this! Reduce to lowest terms: $\frac{32}{96}$.
Answer: $\frac{1}{3}$
An improper fraction is one where the numerator is larger than the denominator. A mixed number is a number that represents the sum of a whole number and a fraction. For example, $5\frac{1}{2}$ is a mixed number that represents the sum $5+\frac{1}{2}$. Use long division to convert an improper fraction to a mixed number; the remainder is the numerator of the fractional part.
Example 2: Write $\frac{23}{5}$ as a mixed number.
Solution: Notice that 5 divides into 23 four times with a remainder of 3.
We then can write
$\frac{23}{5}=4+\frac{3}{5}=4\frac{3}{5}$
Note that the denominator of the fractional part of the mixed number remains the same as the denominator of the original fraction.
Answer: $4\frac{3}{5}$
To convert mixed numbers to improper fractions, multiply the whole number by the denominator and then add the numerator; write this result over the original denominator.
Example 3: Write $3\frac{5}{7}$ as an improper fraction.
Solution: Obtain the numerator by multiplying 7 times 3 and then adding 5.
$3\frac{5}{7}=\frac{7\cdot 3+5}{7}=\frac{26}{7}$
Answer: $\frac{26}{7}$
It is important to note that converting to a mixed number is not part of the reducing process. We consider improper fractions, such as $\frac{26}{7}$, to be reduced to lowest terms. In algebra it is often preferable to work with improper fractions, although in some applications, mixed numbers are more appropriate.
Try this! Convert $10\frac{1}{2}$ to an improper fraction.
Answer: $\frac{21}{2}$
## Multiplying and Dividing Fractions
In this section, assume that $a$, $b$, $c$, and $d$ are all nonzero integers. The product of two fractions is the fraction formed by the product of the numerators and the product of the denominators. In other words, to multiply fractions, multiply the numerators and multiply the denominators:
$\frac{a}{b}\cdot\frac{c}{d}=\frac{ac}{bd}$
Example 4: Multiply: $\frac{2}{3}\cdot\frac{5}{7}$.
Solution: Multiply the numerators and multiply the denominators.
$\frac{2}{3}\cdot\frac{5}{7}=\frac{2\cdot 5}{3\cdot 7}=\frac{10}{21}$
Answer: $\frac{10}{21}$
Example 5: Multiply: $\frac{5}{9}\left(-\frac{1}{4}\right)$.
Solution: Recall that the product of a positive number and a negative number is negative.
$\frac{5}{9}\left(-\frac{1}{4}\right)=-\frac{5\cdot 1}{9\cdot 4}=-\frac{5}{36}$
Answer: $-\frac{5}{36}$
Example 6: Multiply: $\frac{2}{3}\cdot 5\frac{3}{4}$.
Solution: Begin by converting $5\frac{3}{4}$ to an improper fraction.
$\frac{2}{3}\cdot 5\frac{3}{4}=\frac{2}{3}\cdot\frac{23}{4}=\frac{1\cdot 23}{3\cdot 2}=\frac{23}{6}=3\frac{5}{6}$
In this example, we noticed that we could reduce before we multiplied the numerators and the denominators. Reducing in this way is called cross canceling (canceling common factors in the numerator and the denominator of fractions before multiplying), and can save time when multiplying fractions.
Answer: $3\frac{5}{6}$
Two real numbers whose product is 1 are called reciprocals. Therefore, $\frac{a}{b}$ and $\frac{b}{a}$ are reciprocals because $\frac{a}{b}\cdot\frac{b}{a}=\frac{ab}{ab}=1$. For example,
$\frac{2}{3}\cdot\frac{3}{2}=\frac{6}{6}=1$
Because their product is 1, $\frac{2}{3}$ and $\frac{3}{2}$ are reciprocals. Some other examples of reciprocals: $7$ and $\frac{1}{7}$; $-\frac{4}{5}$ and $-\frac{5}{4}$.
This definition is important because dividing fractions requires that you multiply the dividend by the reciprocal of the divisor:
$\frac{a}{b}\div\frac{c}{d}=\frac{a}{b}\cdot\frac{d}{c}=\frac{ad}{bc}$
Example 7: Divide: $\frac{2}{3}\div\frac{5}{7}$.
Solution: Multiply $\frac{2}{3}$ by the reciprocal of $\frac{5}{7}$.
$\frac{2}{3}\div\frac{5}{7}=\frac{2}{3}\cdot\frac{7}{5}=\frac{14}{15}$
Answer: $\frac{14}{15}$
You also need to be aware of other forms of notation that indicate division: / and —. For example,
Or
The latter is an example of a complex fraction, which is a fraction whose numerator, denominator, or both are fractions.
### Note
Students often ask why dividing is equivalent to multiplying by the reciprocal of the divisor. A mathematical explanation comes from the fact that the product of reciprocals is 1. If we apply the multiplicative identity property and multiply numerator and denominator by the reciprocal of the denominator, then we obtain the following:
$\dfrac{\frac{a}{b}}{\frac{c}{d}}=\dfrac{\frac{a}{b}\cdot\frac{d}{c}}{\frac{c}{d}\cdot\frac{d}{c}}=\dfrac{\frac{a}{b}\cdot\frac{d}{c}}{1}=\frac{a}{b}\cdot\frac{d}{c}$
Before multiplying, look for common factors to cancel; this eliminates the need to reduce the end result.
Example 8: Divide: $\dfrac{5/2}{7/4}$.
Solution:
$\frac{5}{2}\div\frac{7}{4}=\frac{5}{2}\cdot\frac{4}{7}=\frac{5\cdot 2}{1\cdot 7}=\frac{10}{7}$
Answer: $\frac{10}{7}$
When dividing by an integer, it is helpful to rewrite it as a fraction over 1.
Example 9: Divide: $\frac{2}{3}\div 6$.
Solution: Rewrite 6 as $\frac{6}{1}$ and multiply by its reciprocal.
$\frac{2}{3}\div 6=\frac{2}{3}\cdot\frac{1}{6}=\frac{1}{3\cdot 3}=\frac{1}{9}$
Answer: $\frac{1}{9}$
Also, note that we only cancel when working with multiplication. Rewrite any division problem as a product before canceling.
Try this! Divide: $5\div 2\frac{3}{5}$.
Answer: $1\frac{12}{13}$
## Adding and Subtracting Fractions
Negative fractions are indicated with the negative sign in front of the fraction bar, in the numerator, or in the denominator. All such forms are equivalent and interchangeable.
Adding or subtracting fractions requires a common denominator. In this section, assume the common denominator $c$ is a nonzero integer:
$\frac{a}{c}+\frac{b}{c}=\frac{a+b}{c} \qquad \frac{a}{c}-\frac{b}{c}=\frac{a-b}{c}$
It is good practice to use positive common denominators by expressing negative fractions with negative numerators. In short, avoid negative denominators.
Example 10: Subtract: $\frac{12}{15}-\frac{3}{15}$.
Solution: The two fractions have a common denominator 15. Therefore, subtract the numerators and write the result over the common denominator:
$\frac{12}{15}-\frac{3}{15}=\frac{12-3}{15}=\frac{9}{15}=\frac{3}{5}$
Answer: $\frac{3}{5}$
Most problems that you are likely to encounter will have unlike denominators. In this case, first find equivalent fractions with a common denominator before adding or subtracting the numerators. One way to obtain equivalent fractions is to divide the numerator and the denominator by the same number. We now review a technique for finding equivalent fractions by multiplying the numerator and the denominator by the same number. It should be clear that $\frac{5}{5}$ is equal to 1 and that 1 multiplied times any number is that number:
$\frac{1}{2}=\frac{1}{2}\cdot 1=\frac{1}{2}\cdot\frac{5}{5}=\frac{5}{10}$
We have equivalent fractions $\frac{1}{2}=\frac{5}{10}$. Use this idea to find equivalent fractions with a common denominator to add or subtract fractions. The steps are outlined in the following example.
Example 11: Subtract: $\frac{7}{15}-\frac{3}{10}$.
Solution:
Step 1: Determine a common denominator. To do this, use the least common multiple (LCM) of the given denominators. The LCM of 15 and 10 is indicated by LCM(15, 10). Try to think of the smallest number that both denominators divide into evenly. List the multiples of each number:
multiples of 15: 15, **30**, 45, **60**, 75, **90**, …
multiples of 10: 10, 20, **30**, 40, 50, **60**, 70, 80, **90**, …
Common multiples are listed in bold, and the least common multiple is 30.
Step 2: Multiply the numerator and the denominator of each fraction by values that result in equivalent fractions with the determined common denominator:
$\frac{7}{15}=\frac{7\cdot 2}{15\cdot 2}=\frac{14}{30} \qquad \frac{3}{10}=\frac{3\cdot 3}{10\cdot 3}=\frac{9}{30}$
Step 3: Add or subtract the numerators, write the result over the common denominator and then reduce if possible:
$\frac{7}{15}-\frac{3}{10}=\frac{14}{30}-\frac{9}{30}=\frac{14-9}{30}=\frac{5}{30}=\frac{1}{6}$
Answer: $\frac{1}{6}$
The least common multiple of the denominators is called the least common denominator (LCD). Finding the LCD is often the difficult step. It is worth finding because if any common multiple other than the least is used, then there will be more steps involved when reducing.
Example 12: Add: $\frac{5}{10}+\frac{1}{18}$.
Solution: First, determine that the LCM(10, 18) is 90 and then find equivalent fractions with 90 as the denominator:
$\frac{5}{10}+\frac{1}{18}=\frac{5\cdot 9}{10\cdot 9}+\frac{1\cdot 5}{18\cdot 5}=\frac{45}{90}+\frac{5}{90}=\frac{50}{90}=\frac{5}{9}$
Answer: $\frac{5}{9}$
Try this! Add: $\frac{2}{30}+\frac{5}{21}$.
Answer: $\frac{32}{105}$
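As a quick cross-check (again not part of the original text), the standard library confirms both a common denominator and the sum:

```python
# LCD of 30 and 21, and the sum 2/30 + 5/21.
from math import lcm          # Python 3.9+
from fractions import Fraction

print(lcm(30, 21))                         # 210, a usable common denominator
print(Fraction(2, 30) + Fraction(5, 21))   # 32/105, already in lowest terms
```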
Example 13: Simplify: $2\frac{1}{3}+\frac{3}{5}-\frac{1}{2}$.
Solution: Begin by converting $2\frac{1}{3}$ to an improper fraction:
$2\frac{1}{3}+\frac{3}{5}-\frac{1}{2}=\frac{7}{3}+\frac{3}{5}-\frac{1}{2}=\frac{70}{30}+\frac{18}{30}-\frac{15}{30}=\frac{73}{30}=2\frac{13}{30}$
Answer: $2\frac{13}{30}$
In general, it is preferable to work with improper fractions. However, when the original problem involves mixed numbers, if appropriate, present your answers as mixed numbers. Also, mixed numbers are often preferred when working with numbers on a number line and with real-world applications.
Try this! Subtract: $\frac{5}{7}-2\frac{1}{7}$.
Answer: $-1\frac{3}{7}$
Example 14: How many $\frac{1}{2}$-inch-thick paperback books can be stacked to fit on a shelf that is $1\frac{1}{2}$ feet in height?
Solution: First, determine the height of the shelf in inches. To do this, use the fact that there are 12 inches in 1 foot and multiply as follows:
$1\frac{1}{2}\text{ ft}=\frac{3}{2}\cdot 12\text{ in}=18\text{ in}$
Next, determine how many notebooks will fit by dividing the height of the shelf by the thickness of each book:
$18\div\frac{1}{2}=18\cdot 2=36$
Answer: 36 books can be stacked on the shelf.
### Key Takeaways
• Fractions are not unique; there are many ways to express the same ratio. Find equivalent fractions by multiplying or dividing the numerator and the denominator by the same real number.
• Equivalent fractions in lowest terms are generally preferred. It is a good practice to always reduce.
• In algebra, improper fractions are generally preferred. However, in real-life applications, mixed number equivalents are often preferred. We may present answers as improper fractions unless the original question contains mixed numbers, or it is an answer to a real-world or geometric application.
• Multiplying fractions does not require a common denominator; multiply the numerators and multiply the denominators to obtain the product. It is a best practice to cancel any common factors in the numerator and the denominator before multiplying.
• Reciprocals are rational numbers whose product is equal to 1. Given a fraction $\frac{a}{b}$, its reciprocal is $\frac{b}{a}$.
• Divide fractions by multiplying the dividend by the reciprocal of the divisor. In other words, multiply the numerator by the reciprocal of the denominator.
• Rewrite any division problem as a product before canceling.
• Adding or subtracting fractions requires a common denominator. When the denominators of any number of fractions are the same, simply add or subtract the numerators and write the result over the common denominator.
• Before adding or subtracting fractions, ensure that the denominators are the same by finding equivalent fractions with a common denominator. Multiply the numerator and the denominator of each fraction by the appropriate value to find the equivalent fractions.
• Typically, it is best to convert all mixed numbers to improper fractions before beginning the process of adding, subtracting, multiplying, or dividing.
### Topic Exercises
Part A: Working with Fractions
Reduce each fraction to lowest terms.
1. $\frac{5}{30}$
2. $\frac{6}{24}$
3. $\frac{30}{70}$
4. $\frac{18}{27}$
5. $\frac{44}{84}$
6. $\frac{54}{90}$
7. $\frac{135}{30}$
8. $\frac{105}{300}$
9. $\frac{18}{6}$
10. $\frac{256}{16}$
11. $\frac{126}{45}$
12. $\frac{52}{234}$
13. $\frac{54}{162}$
14. $\frac{2000}{3000}$
15. $\frac{270}{360}$
Rewrite as an improper fraction.
16. $4\frac{3}{4}$
17. $2\frac{1}{2}$
18. $5\frac{7}{15}$
19. $1\frac{1}{2}$
20. $3\frac{5}{8}$
21. $1\frac{3}{4}$
22. $-2\frac{1}{2}$
23. $-1\frac{3}{4}$
Rewrite as a mixed number.
24. $\frac{15}{2}$
25. $\frac{9}{2}$
26. $\frac{40}{13}$
27. $\frac{103}{25}$
28. $\frac{73}{10}$
29. $-\frac{52}{7}$
30. $-\frac{59}{6}$
Part B: Multiplying and Dividing
Multiply and reduce to lowest terms.
31. $\frac{2}{3}\cdot\frac{5}{7}$
32. $\frac{1}{5}\cdot\frac{4}{8}$
33. $\frac{1}{2}\cdot\frac{1}{3}$
34. $\frac{3}{4}\cdot\frac{20}{9}$
35. $\frac{5}{7}\cdot\frac{49}{10}$
36. $\frac{2}{3}\cdot\frac{9}{12}$
37. $\frac{6}{14}\cdot\frac{21}{12}$
38. $\frac{44}{15}\cdot\frac{15}{11}$
39. $3\frac{3}{4}\cdot 2\frac{1}{3}$
40. $2\frac{7}{10}\cdot 5\frac{5}{6}$
41. $\frac{3}{11}\left(-\frac{5}{2}\right)$
42. $-\frac{4}{5}\left(\frac{9}{5}\right)$
43. $\left(-\frac{9}{5}\right)\left(-\frac{3}{10}\right)$
44. $\frac{6}{7}\left(-\frac{14}{3}\right)$
45. $\left(-\frac{9}{12}\right)\left(-\frac{4}{8}\right)$
46. $-\frac{3}{8}\left(-\frac{4}{15}\right)$
47. $\frac{1}{7}\cdot\frac{1}{2}\cdot\frac{1}{3}$
48. $\frac{3}{5}\cdot\frac{15}{21}\cdot\frac{7}{27}$
49. $\frac{2}{5}\cdot 3\frac{1}{8}\cdot\frac{4}{5}$
50. $2\frac{4}{9}\cdot\frac{2}{5}\cdot\frac{25}{11}$
Determine the reciprocal of the following numbers.
51. $\frac{1}{2}$
52. $\frac{8}{5}$
53. $-\frac{2}{3}$
54. $-\frac{4}{3}$
55. $10$
56. $-4$
57. $2\frac{1}{3}$
58. $\frac{15}{8}$
Divide and reduce to lowest terms.
59. $\frac{1}{2}\div\frac{2}{3}$
60. $\frac{5}{9}\div\frac{1}{3}$
61. $\frac{5}{8}\div\left(-\frac{4}{5}\right)$
62. $\left(-\frac{2}{5}\right)\div\frac{15}{3}$
63. $\dfrac{-6/7}{-6/7}$
64. $\dfrac{-1/2}{1/4}$
65. $\dfrac{-10/3}{-5/20}$
66. $\dfrac{2/3}{9/2}$
67. $\dfrac{30/50}{5/3}$
68. $\dfrac{1/2}{2}$
69. $\dfrac{5}{2/5}$
70. $\dfrac{-6}{5/4}$
71. $2\frac{1}{2}\div\frac{5}{3}$
72. $4\frac{2}{3}\div 3\frac{1}{2}$
73. $5\div 2\frac{3}{5}$
74. $4\frac{3}{5}\div\frac{2}{3}$
Part C: Adding and Subtracting Fractions
Add or subtract and reduce to lowest terms.
75. $\frac{17}{20}-\frac{5}{20}$
76. $\frac{4}{9}-\frac{13}{9}$
77. $\frac{3}{5}+\frac{1}{5}$
78. $\frac{11}{15}+\frac{9}{15}$
79. $\frac{5}{7}-2\frac{1}{7}$
80. $\frac{5}{18}-\frac{1}{18}$
81. $\frac{1}{2}+\frac{1}{3}$
82. $\frac{1}{5}-\frac{1}{4}$
83. $\frac{3}{4}-\frac{5}{2}$
84. $\frac{3}{8}+\frac{7}{16}$
85. $\frac{7}{15}-\frac{3}{10}$
86. $\frac{3}{10}+2\frac{1}{4}$
87. $\frac{2}{30}+\frac{5}{21}$
88. $\frac{3}{18}-\frac{1}{24}$
89. $5\frac{1}{2}+2\frac{1}{3}$
90. $1\frac{3}{4}+2\frac{1}{10}$
91. $\frac{1}{2}+\frac{1}{3}+\frac{1}{6}$
92. $\frac{2}{3}+\frac{3}{5}-\frac{2}{9}$
93. $\frac{7}{3}-\frac{3}{2}+\frac{2}{15}$
94. $\frac{9}{4}-\frac{3}{2}+\frac{3}{8}$
95. $1\frac{1}{3}+2\frac{2}{5}-1\frac{1}{15}$
96. $\frac{2}{3}-4\frac{1}{2}+3\frac{1}{6}$
97. $1-\frac{6}{16}+\frac{3}{18}$
98. $3-\frac{1}{21}-\frac{1}{15}$
Part D: Mixed Exercises
Perform the operations. Reduce answers to lowest terms.
99. $\frac{3}{14}\cdot\frac{7}{3}\div\frac{1}{8}$
100. $\frac{1}{2}\cdot\left(-\frac{4}{5}\right)\div\frac{14}{15}$
101. $\frac{1}{2}\div\frac{3}{4}\cdot\frac{1}{5}$
102. $-\frac{5}{9}\div\frac{5}{3}\cdot\frac{5}{2}$
103. $\frac{5}{12}-\frac{9}{21}+\frac{3}{9}$
104. $-\frac{3}{10}-\frac{5}{12}+\frac{1}{20}$
105. $\frac{4}{5}\div 4\cdot\frac{1}{2}$
106. $\frac{5}{3}\div\frac{1}{5}\cdot\frac{2}{3}$
107. What is the product of $\frac{3}{16}$ and $\frac{4}{9}$?
108. What is the product of $-2\frac{4}{5}$ and $2\frac{5}{8}$?
109. What is the quotient of $\frac{5}{9}$ and $\frac{25}{3}$?
110. What is the quotient of $-\frac{16}{5}$ and 32?
111. Subtract $\frac{1}{6}$ from the sum of $\frac{9}{2}$ and $\frac{2}{3}$.
112. Subtract $\frac{1}{4}$ from the sum of $\frac{3}{4}$ and $\frac{6}{5}$.
113. What is the total width when 3 boards, each with a width of $2\frac{5}{8}$ inches, are glued together?
114. The precipitation in inches for a particular 3-day weekend was published as $\frac{3}{10}$ inches on Friday, $1\frac{1}{2}$ inches on Saturday, and $\frac{3}{4}$ inches on Sunday. Calculate the total precipitation over this period.
115. A board that is $5\frac{1}{4}$ feet long is to be cut into 7 pieces of equal length. What is the length of each piece?
116. How many $\frac{3}{4}$ inch thick notebooks can be stacked into a box that is 2 feet high?
117. In a mathematics class of 44 students, one-quarter of the students signed up for a special Saturday study session. How many students signed up?
118. Determine the length of fencing needed to enclose a rectangular pen with dimensions $35\frac{1}{2}$ feet by $20\frac{2}{3}$ feet.
119. Each lap around the track measures $\frac{1}{4}$ mile. How many laps are required to complete a $2\frac{1}{2}$ mile run?
120. A retiree earned a pension that consists of three-fourths of his regular monthly salary. If his regular monthly salary was \$5,200, then what monthly payment can the retiree expect from the pension plan?
Part E: Discussion Board Topics
121. Does 0 have a reciprocal? Explain.
122. Explain the difference between the LCM and the GCF. Give an example.
123. Explain the difference between the LCM and LCD.
124. Why is it necessary to find an LCD in order to add or subtract fractions?
125. Explain how to determine which fraction is larger, $\frac{7}{16}$ or $\frac{1}{2}$.
### Answers
1: 1/6
3: 3/7
5: 11/21
7: 9/2
9: 3
11: 14/5
13: 1/3
15: 3/4
17: 5/2
19: 3/2
21: 7/4
23: −7/4
25: $4\frac{1}{2}$
27: $4\frac{3}{25}$
29: $-7\frac{3}{7}$
31: 10/21
33: 1/6
35: 7/2
37: 3/4
39: $8\frac{3}{4}$
41: −15/22
43: 27/50
45: 3/8
47: 1/42
49: 1
51: 2
53: −3/2
55: 1/10
57: 3/7
59: 3/4
61: −25/32
63: 1
65: 40/3
67: 9/25
69: 25/2
71: $1\frac{1}{2}$
73: $1\frac{12}{13}$
75: 3/5
77: 4/5
79: $-1\frac{3}{7}$
81: 5/6
83: −7/4
85: 1/6
87: 32/105
89: $7\frac{5}{6}$
91: 1
93: 29/30
95: $2\frac{2}{3}$
97: 19/24
99: 4
101: 2/15
103: 9/28
105: 1/10
107: 1/12
109: 1/15
111: 5
113: $7\frac{7}{8}$ inches
115: $\frac{3}{4}$ feet
117: 11 students
119: 10 laps
http://physics.stackexchange.com/questions/31192/lorentz-transformations-of-the-polarization-vector
# Lorentz transformations of the polarization vector
Let $\mathbf{n}'$ be a unit vector in the direction of the wavevector in the plasma rest frame and $\hat{B}'$ be a unit vector along the magnetic field in the plasma rest frame. The electric field of a linearly polarized electromagnetic wave is directed along the unit vector $\hat{e}'=\mathbf{n}' \times \hat{B}'$, and the magnetic field of the wave is along the unit vector $\hat{b}'=\mathbf{n}'\times \hat{e}'$, such that the Poynting flux $\hat{e}'\times \hat{b}'$ is directed along $\mathbf{n}'$. We give a Lorentz boost to the explosion frame to find the electric field $\mathbf{e}$ there, normalize it to unity, and project $\mathbf{e}$ on some given direction (e.g., along the projection of the flow axis on the plane of the sky).
The fields in the wave, expressed in terms of the direction of the photon in the explosion frame $\mathbf{n}$, are
\begin{equation} \hat{e}' ~=~\frac{\mathbf{n} \times \hat{B}'}{\Gamma (1-\mathbf{n}\cdot\mathbf{v})}+\frac{1+\Gamma (1-\mathbf{n}\cdot \mathbf{v})}{1+\Gamma(1-\mathbf{n}\cdot \mathbf{v})} \end{equation}
where $\Gamma=\frac{1}{\sqrt{1-v^2/c^2}}$ is the bulk Lorentz factor. Would some one help to deduce the above equation?
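For reference, the two standard ingredients for such a deduction (with $c=1$, boost velocity $\mathbf{v}$, $\hat{\mathbf{v}}=\mathbf{v}/v$; signs depend on the boost-direction convention) are the aberration formula and the field transformation:
$$\mathbf{n}=\frac{\mathbf{n}'+\left[(\Gamma-1)(\mathbf{n}'\cdot\hat{\mathbf{v}})+\Gamma v\right]\hat{\mathbf{v}}}{\Gamma(1+\mathbf{n}'\cdot\mathbf{v})},\qquad \mathbf{e}=\Gamma\left(\mathbf{e}'-\mathbf{v}\times\mathbf{b}'\right)-(\Gamma-1)(\mathbf{e}'\cdot\hat{\mathbf{v}})\hat{\mathbf{v}},$$
after which one normalizes, $\hat{e}=\mathbf{e}/|\mathbf{e}|$.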
-
Can you give some more context to the terms plasma, explosion, and the like? – Emilio Pisanty Jul 3 '12 at 14:45
I really don't know the answer to this, but I suspect it might help to start thinking about how a quantity like $\vec{A}\times \vec{B}$ where $\vec{A}$ and $\vec{B}$ are standard 3 -vectors, transforms under regular rotations, and thus build up the intuition to Lorentz transformations. – DJBunk Jul 3 '12 at 15:03
@Emilio Pisanty: I was trying to derive this from M. Lyutikov, V. I. Pariev, and R. D. Blandford, "Polarization of prompt gamma-ray burst emission: evidence for electromagnetically dominated outflow", The Astrophysical Journal, 597:998–1009, 2003 November 10. – kallo Jul 3 '12 at 23:10
http://physics.stackexchange.com/questions/24409/looking-for-references-about-operator-mixing-and-anomalous-dimensions?answertab=oldest
# Looking for references about “operator mixing” (and anomalous dimensions)
I am looking for some pedagogic references about the idea of "operator mixing" and the associated idea that the anomalous dimension sometimes has to be thought of as a matrix.
For example this idea is slightly touched upon in this article though the link to anomalous dimension doesn't lead anywhere. Here they just introduce this notation of $\gamma_{kl}$ and leave it unexplained and undefined.
For some of the aspects I want to learn about, let me refer to this article. I would like to understand the meaning and derivation of equation 12 (the quantity called $\gamma_{\phi ^2 I}$) at the beginning of the section "Perturbative Examples" (bottom of page 5), and the argument at the top of page 7 and equation 18.
(Also, I would like to know if this is known by some other name, since I was a bit surprised not to find these two concepts in various standard QFT books, even in Weinberg's!)
-
Eq. (18) is a general statement about scalars in CFTs. It expresses the fact that the OPE of two scalars consists of symmetric tensors (and no other operators), but otherwise that equation itself doesn't mean much. The (operator) functions $C_{\Delta}^{(\ell)}(x,\partial)$ are universal and can be calculated by looking at three-point functions, see for example Osborn's own paper hep-th/0011040 (but it's a useless exercise). – Vibert Apr 21 at 16:31
## 1 Answer
Have you tried Peskin and Schroeder? It has two entries for operator mixing.
-
1
Yeah, I have seen that, but as usual I find Peskin and Schroeder's exposition kind of disparate and can't use it for anything more than an occasional reference. I am looking for something more substantial and pedagogic. – user6818 Apr 26 '12 at 20:22
How about Zinn-Justin? From memory there is a whole chapter devoted to it, or at least something more substantial than P&S. – Michael Brown Mar 22 at 11:51
http://www.physicsforums.com/showpost.php?p=3795929&postcount=5
Quote by kai_sikorski:

If you know that the realization is in A, then all you don't know is whether step 5 was −1 or 1; you know everything else. Yes, you can add this set to $\mathcal{F}_4$, but that doesn't mean that the stochastic process will go right 5 times. It means that you're now allowed to ask, at time 4, whether it did. However, adding sets like this to $\mathcal{F}_4$, while allowed (X would still be adapted), is not useful; this is not the natural filtration. The natural filtration is generated by only the information you need at an individual time step to determine the value of the stochastic process.

In fact you could make the 5 successive σ-fields in the filtration all equal to $\mathcal{F}$, where $\mathcal{F}$ is the σ-field for the whole probability space. Again X would be adapted to this filtration, but this would not be useful.

To understand this it really helps to understand the formal measure-theoretic interpretation of conditional expectation. See if you can read the Wikipedia article on this and understand why $\operatorname{E}(X_i|\mathcal{F}_i):\Omega \to \mathbb{R}$ is not a random variable but $\operatorname{E}(X_i|\mathcal{F}_{i-1}):\Omega \to \mathbb{R}$ is a random variable, although it has much less uncertainty than $X_i$.
http://physics.stackexchange.com/questions/9400/induced-emf-of-a-spinning-metal-rod
|
# Induced EMF of a spinning metal rod [closed]
Hey everyone, I'm currently taking an Electricity and Magnetism course and I'm having difficulty answering this problem. I've gone to my tutor, but sadly, he hasn't taken E+M for a while now and couldn't recall off the top of his head how to go about solving this. We went through my book together, but couldn't find any examples pertaining to this type of problem. (We did find examples of induced EMFs, obviously, but making the leap to a spinning rod seems to be too great a task.)
Anyways, here is the problem I'm working on:
A metal rod is spinning on an axis that is located in the middle of the rod, and perpendicular to the rod's length. If the rod is in an external magnetic field that is parallel to the axis the rod is spinning on, how would I go about finding the induced emf?
The rod's thickness is negligible but it has length L. It is spinning at 1500 rev/s and the magnetic field is 7.5 T.
Any pointers would be appreciated!
-
– Mark Eichenlaub May 3 '11 at 6:00
@mark Thanks for your reply. As I stated above, I'm having trouble solving for the case of a rod spinning. I've worked through examples where the focus is on a point charge, but not on a rod that is rotating. – Jacob Patel May 3 '11 at 6:18
Did you read the link I gave you? In general, you should try to ask a question that clarifies some physics point, or some technique of problem solving, etc. It's good to ask about a spinning rod if you have some reason in mind that the situation might be different and require some new physical insight compared to other questions about induction. I don't see how that's the case here. As such, I've tried to give an answer about induction in general. – Mark Eichenlaub May 3 '11 at 6:56
Wait, the axis of rotation is perpendicular to the rod's length? Or parallel? – Andrew May 3 '11 at 11:40
## closed as too localized by David Zaslavsky♦ Oct 23 '12 at 4:04
## 4 Answers
It is zero.
Faraday's law states that the induced EMF around the boundary of a surface is proportional to the time derivative of the flux through that surface.
Therefore, when the magnetic field is constant in both space and time there is no electromagnetic induction.
There would be a Hall effect. Its magnitude would depend on the material, especially the density of charge carriers.
-
How does a homopolar generator work, where the flux through it remains constant in space? – John McVirgo May 3 '11 at 13:09
@John There is a Lorentz force on the charge carriers. – Mark Eichenlaub May 3 '11 at 16:46
– Andrew May 3 '11 at 17:52
@Andrew I was considering that EMF comes from the term in Maxwell's equations relating the curl of E to the time derivative of B. – Mark Eichenlaub May 3 '11 at 17:55
Ok, so we both agree that an electric field is generated. Since the definition is not universal, it's possible that the OP isn't clear on the distinction between voltage and EMF. Heck, I'm not sure I'm clear. – Andrew May 3 '11 at 17:57
For a wire moving in a magnetic field,
emf = rate of cutting of magnetic flux
This is one form of Faraday's law. It is straightforward to use when all parts of the wire are moving at the same velocity. If not, you can divide the wire into small elements, imagine each element having its own velocity with an emf across it, and sum over the elements to get the total emf across the ends.
One half of the rod sweeps out an area $\pi r^2$ in a time $1/f$ giving a rate of cutting of magnetic flux and hence emf between the center and r as
$emf = Bf\pi r^2$
A more sophisticated approach is to use the Lorentz force law which needs a deeper understanding of what's going on inside the rod.
$F = e( E + v \times B)$
As the rod moves through the magnetic field, it carries with it electrons and metal ions, the former being of a much smaller mass so their motion is affected much more by electric and magnetic fields.
In your example, let $B$ point into the screen, the rod rotating clockwise about $r = 0$ in the plane of the screen. From the definition of the cross product $\times$, the magnetic force has a magnitude $evB$, and points towards r = 0 along the rod for an electron. The electrons being forced in this direction disrupts the neutrality of the rod, so creating an internal E which prevents further electron movement. So we can say that since $F = 0$ then $E = -vB$ and we've now got an internal electric field $E$ which can do work, unlike the static magnetic field B. To calculate the emf, we need the work done in moving a charge of 1 Coulomb between two points
$emf = -\int_{0}^{r}Edr = \int_{0}^{r}vBdr$
Since the speed of an electron at $r$ is $2 \pi fr$, finally
$emf = Bf\pi r^2$
This gives the same result as before. As expected, there is an emf between the center and each end, but not between the two ends.
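To put numbers on this, here is a quick check of $emf = Bf\pi r^2$ with the values from the question (a sketch; the question leaves $L$ symbolic, so $L = 1\,\mathrm{m}$ below is an assumption):

```python
import math

B = 7.5      # tesla
f = 1500.0   # rev/s
L = 1.0      # m (assumed, since the question leaves L symbolic)
r = L / 2    # the rod pivots at its centre

emf = B * f * math.pi * r**2        # emf between the centre and one end
print(f"emf = {emf:.1f} V")         # about 8.8 kV for these values

# Equivalent form via omega = 2*pi*f: emf = B*omega*r^2/2
omega = 2 * math.pi * f
print(f"check = {B * omega * r**2 / 2:.1f} V")
```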
-
You are asking about a homopolar generator. The excellent UMD physics demo page shows an experimental demonstration and includes references. I'll try to find a free version of that article.
EDIT: Fitzpatrick discusses homopolar generators in his online plasma physics textbook.
-
The Fitzpatrick link says "In other words, if the rotating disk is a perfect conductor then dynamo action is impossible". Surprising if true. – John McVirgo May 3 '11 at 21:41
He's talking about dynamo, where the current from the homopolar generator creates the magnetic field used in the homopolar generator. Does his reasoning apply to the case of an externally applied magnetic field? – Andrew May 3 '11 at 22:16
Use `e = B*omega*L^2/2`, where `e` is the induced EMF, `omega` is the angular velocity and `L` is the length of the rod. (Strictly, that form is for a rod rotating about one end; for the centre-pivoted rod here each half has length L/2, giving `e = B*omega*(L/2)^2/2` between the centre and either end.) The EMF is induced due to the change in swept area. Between the two ends, however, it is 0.
-
http://physics.stackexchange.com/questions/21336/what-determines-color-wavelength-or-frequency
# What determines color — wavelength or frequency?
What determines the color of light -- is it the wavelength of the light or the frequency?
(i.e. If you put light through a medium other than air, in order to keep its color the same, which one would you need to keep constant: the wavelength or the frequency?)
-
## 6 Answers
For almost all detectors, it is actually the energy of the photon that is detected, and the energy is not changed by a refractive medium. So the "color" is unchanged by the medium...
-
+1 makes sense, thanks! – Mehrdad Feb 23 '12 at 0:20
Colour is defined by the eye, and only indirectly by physical properties like wavelength and frequency. Since this interaction happens in a medium of fixed index of refraction (the vitreous humour of your eye), the frequency/wavelength relation inside your eye is fixed.
Outside your eye, the frequency stays constant and the wavelength changes according to the medium, so I would say the frequency is what counts more. This explains why objects' colour doesn't change when we look at them under (transparent) water (n = 1.33) or in air (n = 1).
-
As FrankH said, it's actually energy that determines color. The reason, in summary, is that color is a psychological phenomenon that the brain constructs based on the signals it receives from cone cells on the eye's retina. Those signals, in turn, are generated when photons interact with proteins called photopsins. The proteins have different energy levels corresponding to different configurations, and when a photon interacts with a photopsin, it is the photon's energy that determines what transition between energy levels takes place, and thus the strength of the electrical signal that gets sent to the brain.
Side note: I posted a pretty detailed but underappreciated (at least, I thought so) answer to a very similar question on reddit a few days ago. I could edit it in here if you find it useful.
-
Wouldn't energy $\implies$ frequency? $E=h\nu$, and $\nu$ is invariant on refraction. Which brings me to an interesting side-question: Materials exhibit colors due to their tendencies to absorb/reflect various wavelengths. What happens when the object is put in a medium? – Manishearth♦ Feb 23 '12 at 3:14
Yeah, frequency determines color too. (There is a function mapping frequencies in the visible spectrum to an $\mathbb{R}^1$ subspace of the $\mathbb{R}^3$ RGB color space.) But I emphasized energy because the physical reason that colors are able to be distinguished is really based on the energy. AFAIK the origin of materials' colors is mostly the same mechanism, energy level transitions, so again the colors are unaffected when you put the material in a refractive medium. – David Zaslavsky♦ Feb 23 '12 at 5:03
Refraction experiments show it is the frequency that determines color. When a beam of light crosses the boundary between two media whose refractive indices are $n_1$ and $n_2$, its speed changes ($v_1 = c/n_1$, $v_2 = c/n_2$), but its frequency does not change because it is fixed by the emitter, so its wavelength changes: $\lambda_1 = v_1/f$, $\lambda_2 = v_2/f$. Now, it is an experimental fact that refraction does not affect color, so one can conclude that color is frequency dependent.
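A small numerical sketch of this argument (the wavelength and media below are illustrative choices, not from the question):

```python
c = 3.0e8            # m/s, vacuum speed of light (approximate)
lambda_vac = 700e-9  # m, a red wavelength in vacuum

f = c / lambda_vac   # frequency: fixed by the emitter, unchanged on refraction
for name, n in [("vacuum", 1.0), ("water", 1.33), ("glass", 1.5)]:
    v = c / n        # phase speed in the medium
    lam = v / f      # wavelength rescales; frequency does not
    print(f"{name:6s}: lambda = {lam * 1e9:6.1f} nm, f = {f:.3e} Hz")
```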
-
Actually, there is something important all these answers are missing. Color is determined by the response of the human eye, not by energy or frequency. In order to get the full range ('gamut') of colors, I need a mix of red, green and blue light (hence the RGB displays) and the primaries can themselves all be different frequencies. That is, one RGB system can have one frequency for the red, while another has a somewhat different frequency for red, the only hard and fast requirement being that both of them choose that frequency from somewhere in the red range. But the choice affects the gamut.
Now I said "human eye", but of course, other animals see colors, too. Bees see colors into the ultraviolet. But of course, we have no idea what the ultraviolet colors look like to them, only that they do see them, and can distinguish shades of them.
Wikipedia has a lot of good further info on this, but it is scattered among several articles. Probably http://en.wikipedia.org/wiki/Color_theory#Color_abstractions is the best starting point. For something much more thorough and technical, see Poynton's excellent Color FAQ at http://www.poynton.com/ColorFAQ.html
-
True and informative, but it remains the energy (i.e. frequency) that determines which photo-receptors are activated. – dmckee♦ Jul 31 '12 at 13:39
In my opinion both, because frequency determines the main category of EM radiation (radio waves, microwaves, infrared, etc.), and inside each category you can access a precise range of wavelengths. So colors are all the combinations of frequency in the range 428 THz – 749 THz and wavelength in the range 700 nm – 400 nm.
-
http://math.stackexchange.com/questions/234115/find-the-norm-of-an-operator-on-ell-2?answertab=oldest
# Find the norm of an operator on $\ell_2$
Let $(x_n) \in \ell_2$ and let the operator $L:\ell_2\to \mathbb R$ be defined by:
$\displaystyle L((x_n)) := \sum_{n=1}^{\infty} \frac{x_n}{\sqrt{n(n+1)}}$.
Find the norm of $L$.
-
Are you assigning homework to us? Please don't post in the imperative and without showing any effort of your own. Some people find that kind of post rather rude. – kahen Nov 10 '12 at 11:19
Sorry, i did try to work on it but got a bit stuck. It's not a homework, i'm just exercising before the test and am running out of time. Next time I'll try to show my work when asking a question :) – hncas Nov 10 '12 at 11:40
## 2 Answers
Let $a\in\ell^2$ be the sequence defined by $a(n):=\frac 1{\sqrt{n(n+1)}}$. Then $T(x)=\langle a,x\rangle_{\ell^2}$.
The Cauchy–Schwarz inequality gives $\lVert T\rVert=\lVert a\rVert_{\ell^2}$, and $$\lVert a\rVert_{\ell^2}^2=\sum_{n=1}^{+\infty}\frac{n+1-n}{n(n+1)}=\sum_{n=1}^{+\infty}\left(\frac 1n-\frac 1{n+1}\right)=1,$$ so $\lVert T\rVert=1$.
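A quick numerical sanity check of the telescoping sum (a Python sketch):

```python
# Partial sums of 1/(n(n+1)) equal 1 - 1/(N+1), approaching 1.
def partial_sum(N):
    return sum(1.0 / (n * (n + 1)) for n in range(1, N + 1))

for N in (10, 100, 10_000):
    print(N, partial_sum(N))  # 0.9090..., 0.9900..., 0.99990...
```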
-
Ok, I see :). Should've seen that :D. Thanks a lot :) – hncas Nov 10 '12 at 11:41
Quick question: is writing $T(x) = \langle a,x\rangle_{\ell^2}$ using the Riesz representation? And is looking at the norm of $a$ always the way of finding the operator norm? – Lost1 Nov 21 '12 at 22:04
It's easier in the case of a linear functional on a Hilbert space. Here, we just use the definition of $T$ to find the vector $a$ such that $T(x)=\langle a,x\rangle$. – Davide Giraudo Nov 21 '12 at 22:09
$$L(x) = \left\langle x,\Bigl((n(n+1))^{-1/2}\Bigr)\right\rangle = \langle x,\xi\rangle.$$
So by the Riesz representation theorem for Hilbert spaces we have
$$\lVert L\rVert^2 = \lVert \xi\rVert_2^2 = \sum_{n=1}^\infty \left|\frac1{\sqrt{n(n+1)}}\right|^2 = \sum_{n=1}^\infty \frac1{n(n+1)}.$$
You should be able to sum the series yourself.
-
Yes, I can :). Thanks a lot :) – hncas Nov 10 '12 at 11:41
http://mathhelpforum.com/algebra/44428-reciprocal-opposites.html
# Thread:
1. ## reciprocal opposites
If two nonzero numbers are opposites of each other, are their reciprocals opposites of each other? Why or why not?
2. Hello,
Originally Posted by renmail2000
If two nonzero numbers are opposites of each other, are their reciprocals opposites of each other? Why or why not?
Let n be a number. Its opposite is -n. Let m=-n.
Is the reciprocal of n the opposite of the reciprocal of m? That is to say, is 1/n = -1/m?
Substitute m = -n and check.
3. Originally Posted by renmail2000
If two nonzero numbers are opposites of each other, are their reciprocals opposites of each other? Why or why not?
Two numbers that have the same absolute value but have opposite signs are called opposite numbers.
If n is one number, then -n is its opposite.
The reciprocal of a fraction is obtained by interchanging the numerator and the denominator, i.e. by inverting the fraction. The reciprocal of a whole number is 1 over that number.
The reciprocal of $n$ is $\frac{1}{n}$, and the reciprocal of its opposite $-n$ is $\frac{1}{-n} = -\frac{1}{n}$, which is the opposite of $\frac{1}{n}$.
Likewise, the reciprocal of $\frac{m}{n}$ is $\frac{n}{m}$, and the reciprocal of $-\frac{m}{n}$ is $-\frac{n}{m}$, its opposite.
So, the answer is yes. Opposite numbers are the same distance from zero on the number line - one in the negative direction and one in the positive direction.
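A quick numerical spot-check of the claim (the sample values are arbitrary):

```python
# The reciprocal of -n equals the opposite of the reciprocal of n.
for n in (2, -3, 0.5, 7):
    assert 1 / (-n) == -(1 / n)
print("reciprocals of opposites are opposites")
```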
http://mathoverflow.net/questions/65491/voronoi-cell-of-lattices-with-the-same-profile
## Voronoi cell of lattices with the same profile.
Definition 1. Given a body $V$ in $\mathbb R^n$, the function $$p_V(r)=\mathop{\rm vol} [V\cap B_r(0)]$$ will be called profile of $V$.
Definition 2. Define Voronoi cell of lattice $L$ in $\mathbb R^n$ as $$V_L=\{\,x\in \mathbb R^n;\,|x|\le |x+\ell| \ \text{for any}\ \ell\in L\}.$$
Question. Can it happen that the Voronoi cells of a pair of lattices have the same profile but are not isometric?
Comment
• The question is inspired by this one.
-
Presumably $B_r(0)$ is the ball of radius $r$ centered at $0$? If so, it seems, at least, profile $\Rightarrow$ isometric for $n=1$! – Joseph O'Rourke May 20 2011 at 1:19
The $II_{16,0}$ and $E_8 + E_8$ lattices in dimension 16 have different Voronoi domains, since the reflection planes for the roots yield different Dynkin diagrams. I think the domains have the same profile, because the theta constants of the lattices are equal. – S. Carnahan♦ May 20 2011 at 2:21
Equality of theta-functions is equivalent to the fact that both lattices have the same number of points in any ball centered at the origin (?). I do not see why this property is related to the one I want --- they sound similar, but I do not see a bridge between them. – Anton Petrunin May 20 2011 at 3:15
@Will Jagy: There are quite a few polytopes with the same profile, even centrally symmetric ones. You can stellate two pairs of opposite sides of an icosahedron (or octagon), getting the same profile regardless of the pairs of opposite sides you choose. The symmetries do not act transitively on the possibilities. However, it's much easier to motivate considering the profile of the Voronoi domain of a lattice since it tells you the distribution of distances to the lattice in $\mathbb{R}^n / L$. – Douglas Zare May 20 2011 at 17:24
@Scott Carnahan: Given a lattice, one can consider the norm of the "most distant point to the lattice" that is closest to the origin. Is that invariant the same for E_8⊕E_8 and for II_{16,0}? Let me rephrase because I might have said things in a confusing way: I'm asking for the diameters (or maybe "radius") of the Voronoi cells: are they the same for those two lattices? – André Henriques Jul 20 2011 at 23:41
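For experimentation, here is a Monte Carlo sketch of the profile for the simplest case $L=\mathbb{Z}^2$ (an illustrative choice only; it sheds no light on the question itself):

```python
import numpy as np

rng = np.random.default_rng(0)

def in_voronoi_cell_Z2(x):
    # For Z^2 the Voronoi cell of 0 is the square [-1/2, 1/2]^2.
    return np.all(np.abs(x) <= 0.5, axis=-1)

def profile(r, n_samples=200_000):
    # Estimate vol(V ∩ B_r(0)) by uniform sampling of the square [-r, r]^2.
    pts = rng.uniform(-r, r, size=(n_samples, 2))
    in_disk = np.sum(pts**2, axis=1) <= r**2
    hit = in_disk & in_voronoi_cell_Z2(pts)
    return (2 * r)**2 * hit.mean()

for r in (0.3, 0.5, 0.8):
    print(r, profile(r))  # rises to vol(V) = 1 once B_r covers the cell
```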
http://math.stackexchange.com/questions/156226/show-that-operatornameinta-cap-b-operatornameinta-cap-operatornam
# Show that $\operatorname{int}(A \cap B)= \operatorname{int}(A) \cap \operatorname{int}(B)$
It's kind of a simple proof (I think) but I'm stuck!
I have to show that $\operatorname{int} (A \cap B)=\operatorname{int} (A) \cap \operatorname{int}(B)$.
(The interior of the intersection is the intersection of the interiors.)
I thought like this:
Intersection: there's a point that is in both $A$ and $B$, so there is a point $x$ for which $\exists ε>0$ such that $(x-ε,x+ε) \subset A \cap B$. I don't know if this is right.
Now for $\operatorname{int} (A) \cap \operatorname{int}(B)$: again by the definition, there is an interior point that is in both sets, an $x$ such that $(x-ε,x+ε)\subset A \cap B$. There we have the equality.
I think it may be wrong. Please, I'm confused!
-
It's better now, @Peter Tamaroff! – Charlie Jun 9 '12 at 21:08
## 2 Answers
If $x\in\mathrm{int}(A\cap B)$, then there exists $\epsilon\gt 0$ such that $(x-\epsilon,x+\epsilon)\subseteq A\cap B$. And since $A\cap B\subseteq A$ and $A\cap B\subseteq B$, then...
If $x\in\mathrm{int}(A)\cap\mathrm{int}(B)$, then there exists $\epsilon_1\gt 0$ such that $(x-\epsilon_1,x+\epsilon_1)\subseteq A$, and there exists $\epsilon_2\gt 0$ such that $(x-\epsilon_2,x+\epsilon_2)\subseteq B$. Can you find a single $\epsilon$ that works for both sets? Then what can you say about $(x-\epsilon,x+\epsilon)$?
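For completeness, a sketch of the step the hint leaves open, using the usual choice of $\epsilon$: take $\epsilon=\min(\epsilon_1,\epsilon_2)>0$, so that
$$(x-\epsilon,x+\epsilon)\subseteq(x-\epsilon_1,x+\epsilon_1)\cap(x-\epsilon_2,x+\epsilon_2)\subseteq A\cap B,$$
hence $x\in\operatorname{int}(A\cap B)$.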
-
THANK YOU SO MUCH!!!! – Charlie Jun 9 '12 at 20:50
Always remember the trivial inclusion using the property $A\subset B \implies \operatorname{int}A\subset \operatorname{int}B$. Then:
$$A\cap B\subset A,\ A\cap B\subset B \implies \operatorname{int}(A\cap B)\subset \operatorname{int}A,\ \operatorname{int}(A\cap B)\subset \operatorname{int}B$$
therefore $\operatorname{int}(A\cap B)\subset \operatorname{int}A\cap\operatorname{int}B$. The other inclusion is in Arturo Magidin's answer.
In the same way, using only the fact that $A\subset A\cup B$ and $B\subset A\cup B$, we can prove the trivial inclusion $\operatorname{int}A\cup\operatorname{int}B\subset\operatorname{int}(A\cup B)$.
If you know what the closure of a set is, you can prove that if $A\subset B$ then $\overline{A}\subset \overline{B}$. Then the following facts are immediate:
$$\overline{A\cap B}\subset\overline{A}\cap\overline{B},$$ $$\overline{A}\cup\overline{B}\subset \overline{A\cup B}.$$
Please don't forget it; this observation is crucial and is used constantly. The other inclusion is sometimes false and sometimes true; in general you must use the definition instead of these basic properties.
-
So, I could have used only the properties, with no definitions? – Charlie Jun 9 '12 at 21:43
Only in the trivial inclusion. – Gastón Burrull Jun 9 '12 at 21:48
Try to prove the statments that I wrote but I didn't prove. Using set properties :) – Gastón Burrull Jun 9 '12 at 21:51
Ok, thanks! I think I understood. Sometimes I have problems in proofs using epsilons... it's a very silly mistake, I know; I just can't get things right, and it stays a little confusing... – Charlie Jun 9 '12 at 21:56
With this property you will probably make some of your proofs faster. – Gastón Burrull Jun 9 '12 at 22:32
show 2 more comments
http://mathhelpforum.com/algebra/81653-staggering-algebra-problem.html
# Thread:
1. ## A staggering algebra problem.
x+(300-2x)+(170-x)+(150-x)=500
Need help with this problem. I know the solution is 200 but do not know how to achieve it.
2. Originally Posted by GeoBlockage
x+(300-2x)+(170-x)+(150-x)=500
Need help with this problem. I know the solution is 200 but do not know how to achieve it.
The solution in the book is wrong from what I am getting. Show me your work.
3. x+(300-x)+(170-x)+(150-x)=500
620x-4x=500
616x=500
x=116
4. Originally Posted by GeoBlockage
x+(300-x)+(170-x)+(150-x)=500
620x-4x=500
616x=500
x=116
You must use the distributive law outside the brackets. When there is no number present in front of a bracket, you must add an implicit 1.
$x+(300-x)+(170-x)+(150-x)=500$
$x+1(300-x)+1(170-x)+1(150-x)=500$
$x+300-x+170-x+150-x=500$
$x-x-x-x=500-300-170-150$
$-2x=-120$
$x=60$
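For reference, a sketch that checks both readings of the equation with sympy (assumed available); note that neither reading gives the book's answer of 200:

```python
from sympy import Eq, solve, symbols

x = symbols('x')
# As printed in the first post, with the 2x:
print(solve(Eq(x + (300 - 2*x) + (170 - x) + (150 - x), 500), x))  # [40]
# As re-typed later, without the 2:
print(solve(Eq(x + (300 - x) + (170 - x) + (150 - x), 500), x))    # [60]
```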
http://en.wikipedia.org/wiki/Symbolic_logic
# Mathematical logic
Mathematical logic (also symbolic logic, formal logic, or, less frequently, modern logic) is a subfield of mathematics with close connections to the foundations of mathematics, theoretical computer science and philosophical logic.[1] The field includes both the mathematical study of logic and the applications of formal logic to other areas of mathematics. The unifying themes in mathematical logic include the study of the expressive power of formal systems and the deductive power of formal proof systems.
Mathematical logic is often divided into the fields of set theory, model theory, recursion theory, and proof theory. These areas share basic results on logic, particularly first-order logic, and definability. In computer science (particularly in the ACM Classification) mathematical logic encompasses additional topics not detailed in this article; see logic in computer science for those.
Since its inception, mathematical logic has both contributed to, and has been motivated by, the study of foundations of mathematics. This study began in the late 19th century with the development of axiomatic frameworks for geometry, arithmetic, and analysis. In the early 20th century it was shaped by David Hilbert's program to prove the consistency of foundational theories. Results of Kurt Gödel, Gerhard Gentzen, and others provided partial resolution to the program, and clarified the issues involved in proving consistency. Work in set theory showed that almost all ordinary mathematics can be formalized in terms of sets, although there are some theorems that cannot be proven in common axiom systems for set theory. Contemporary work in the foundations of mathematics often focuses on establishing which parts of mathematics can be formalized in particular formal systems (as in reverse mathematics) rather than trying to find theories in which all of mathematics can be developed.
## Subfields and scope
The Handbook of Mathematical Logic makes a rough division of contemporary mathematical logic into four areas:
1. set theory
2. model theory
3. recursion theory, and
4. proof theory and constructive mathematics (considered as parts of a single area).
Each area has a distinct focus, although many techniques and results are shared among multiple areas. The borderlines amongst these fields, and the lines separating mathematical logic and other fields of mathematics, are not always sharp. Gödel's incompleteness theorem marks not only a milestone in recursion theory and proof theory, but has also led to Löb's theorem in modal logic. The method of forcing is employed in set theory, model theory, and recursion theory, as well as in the study of intuitionistic mathematics.
The mathematical field of category theory uses many formal axiomatic methods, and includes the study of categorical logic, but category theory is not ordinarily considered a subfield of mathematical logic. Because of its applicability in diverse fields of mathematics, mathematicians including Saunders Mac Lane have proposed category theory as a foundational system for mathematics, independent of set theory. These foundations use toposes, which resemble generalized models of set theory that may employ classical or nonclassical logic.
## History
Mathematical logic emerged in the mid-19th century as a subfield of mathematics independent of the traditional study of logic (Ferreirós 2001, p. 443). Before this emergence, logic was studied with rhetoric, through the syllogism, and with philosophy. The first half of the 20th century saw an explosion of fundamental results, accompanied by vigorous debate over the foundations of mathematics.
### Early history
Further information: History of logic
Theories of logic were developed in many cultures in history, including China, India, Greece and the Islamic world. In 18th century Europe, attempts to treat the operations of formal logic in a symbolic or algebraic way had been made by philosophical mathematicians including Leibniz and Lambert, but their labors remained isolated and little known.
### 19th century
In the middle of the nineteenth century, George Boole and then Augustus De Morgan presented systematic mathematical treatments of logic. Their work, building on work by algebraists such as George Peacock, extended the traditional Aristotelian doctrine of logic into a sufficient framework for the study of foundations of mathematics (Katz 1998, p. 686).
Charles Sanders Peirce built upon the work of Boole to develop a logical system for relations and quantifiers, which he published in several papers from 1870 to 1885. Gottlob Frege presented an independent development of logic with quantifiers in his Begriffsschrift, published in 1879, a work generally considered as marking a turning point in the history of logic. Frege's work remained obscure, however, until Bertrand Russell began to promote it near the turn of the century. The two-dimensional notation Frege developed was never widely adopted and is unused in contemporary texts.
From 1890 to 1905, Ernst Schröder published Vorlesungen über die Algebra der Logik in three volumes. This work summarized and extended the work of Boole, De Morgan, and Peirce, and was a comprehensive reference to symbolic logic as it was understood at the end of the 19th century.
#### Foundational theories
Concerns that mathematics had not been built on a proper foundation led to the development of axiomatic systems for fundamental areas of mathematics such as arithmetic, analysis, and geometry.
In logic, the term arithmetic refers to the theory of the natural numbers. Giuseppe Peano (1888) published a set of axioms for arithmetic that came to bear his name (Peano axioms), using a variation of the logical system of Boole and Schröder but adding quantifiers. Peano was unaware of Frege's work at the time. Around the same time Richard Dedekind showed that the natural numbers are uniquely characterized by their induction properties. Dedekind (1888) proposed a different characterization, which lacked the formal logical character of Peano's axioms. Dedekind's work, however, proved theorems inaccessible in Peano's system, including the uniqueness of the set of natural numbers (up to isomorphism) and the recursive definitions of addition and multiplication from the successor function and mathematical induction.
In the mid-19th century, flaws in Euclid's axioms for geometry became known (Katz 1998, p. 774). In addition to the independence of the parallel postulate, established by Nikolai Lobachevsky in 1826 (Lobachevsky 1840), mathematicians discovered that certain theorems taken for granted by Euclid were not in fact provable from his axioms. Among these is the theorem that a line contains at least two points, or that circles of the same radius whose centers are separated by that radius must intersect. Hilbert (1899) developed a complete set of axioms for geometry, building on previous work by Pasch (1882). The success in axiomatizing geometry motivated Hilbert to seek complete axiomatizations of other areas of mathematics, such as the natural numbers and the real line. This would prove to be a major area of research in the first half of the 20th century.
The 19th century saw great advances in the theory of real analysis, including theories of convergence of functions and Fourier series. Mathematicians such as Karl Weierstrass began to construct functions that stretched intuition, such as nowhere-differentiable continuous functions. Previous conceptions of a function as a rule for computation, or a smooth graph, were no longer adequate. Weierstrass began to advocate the arithmetization of analysis, which sought to axiomatize analysis using properties of the natural numbers. The modern (ε, δ)-definition of limit and continuous functions was already developed by Bolzano in 1817 (Felscher 2000), but remained relatively unknown. Cauchy in 1821 defined continuity in terms of infinitesimals (see Cours d'Analyse, page 34). In 1858, Dedekind proposed a definition of the real numbers in terms of Dedekind cuts of rational numbers (Dedekind 1872), a definition still employed in contemporary texts.
Georg Cantor developed the fundamental concepts of infinite set theory. His early results developed the theory of cardinality and proved that the reals and the natural numbers have different cardinalities (Cantor 1874). Over the next twenty years, Cantor developed a theory of transfinite numbers in a series of publications. In 1891, he published a new proof of the uncountability of the real numbers that introduced the diagonal argument, and used this method to prove Cantor's theorem that no set can have the same cardinality as its powerset. Cantor believed that every set could be well-ordered, but was unable to produce a proof for this result, leaving it as an open problem in 1895 (Katz 1998, p. 807).
### 20th century
In the early decades of the 20th century, the main areas of study were set theory and formal logic. The discovery of paradoxes in informal set theory caused some to wonder whether mathematics itself is inconsistent, and to look for proofs of consistency.
In 1900, Hilbert posed a famous list of 23 problems for the next century. The first two of these were to resolve the continuum hypothesis and prove the consistency of elementary arithmetic, respectively; the tenth was to produce a method that could decide whether a multivariate polynomial equation over the integers has a solution. Subsequent work to resolve these problems shaped the direction of mathematical logic, as did the effort to resolve Hilbert's Entscheidungsproblem, posed in 1928. This problem asked for a procedure that would decide, given a formalized mathematical statement, whether the statement is true or false.
#### Set theory and paradoxes
Ernst Zermelo (1904) gave a proof that every set could be well-ordered, a result Georg Cantor had been unable to obtain. To achieve the proof, Zermelo introduced the axiom of choice, which drew heated debate and research among mathematicians and the pioneers of set theory. The immediate criticism of the method led Zermelo to publish a second exposition of his result, directly addressing criticisms of his proof (Zermelo 1908a). This paper led to the general acceptance of the axiom of choice in the mathematics community.
Skepticism about the axiom of choice was reinforced by recently discovered paradoxes in naive set theory. Cesare Burali-Forti (1897) was the first to state a paradox: the Burali-Forti paradox shows that the collection of all ordinal numbers cannot form a set. Very soon thereafter, Bertrand Russell discovered Russell's paradox in 1901, and Jules Richard (1905) discovered Richard's paradox.
Zermelo (1908b) provided the first set of axioms for set theory. These axioms, together with the additional axiom of replacement proposed by Abraham Fraenkel, are now called Zermelo–Fraenkel set theory (ZF). Zermelo's axioms incorporated the principle of limitation of size to avoid Russell's paradox.
In 1910, the first volume of Principia Mathematica by Russell and Alfred North Whitehead was published. This seminal work developed the theory of functions and cardinality in a completely formal framework of type theory, which Russell and Whitehead developed in an effort to avoid the paradoxes. Principia Mathematica is considered one of the most influential works of the 20th century, although the framework of type theory did not prove popular as a foundational theory for mathematics (Ferreirós 2001, p. 445).
Fraenkel (1922) proved that the axiom of choice cannot be proved from the remaining axioms of Zermelo's set theory with urelements. Later work by Paul Cohen (1966) showed that the addition of urelements is not needed, and the axiom of choice is unprovable in ZF. Cohen's proof developed the method of forcing, which is now an important tool for establishing independence results in set theory.
#### Symbolic logic
Leopold Löwenheim (1915) and Thoralf Skolem (1920) obtained the Löwenheim–Skolem theorem, which says that first-order logic cannot control the cardinalities of infinite structures. Skolem realized that this theorem would apply to first-order formalizations of set theory, and that it implies any such formalization has a countable model. This counterintuitive fact became known as Skolem's paradox.
In his doctoral thesis, Kurt Gödel (1929) proved the completeness theorem, which establishes a correspondence between syntax and semantics in first-order logic. Gödel used the completeness theorem to prove the compactness theorem, demonstrating the finitary nature of first-order logical consequence. These results helped establish first-order logic as the dominant logic used by mathematicians.
In 1931, Gödel published On Formally Undecidable Propositions of Principia Mathematica and Related Systems, which proved the incompleteness (in a different meaning of the word) of all sufficiently strong, effective first-order theories. This result, known as Gödel's incompleteness theorem, establishes severe limitations on axiomatic foundations for mathematics, striking a strong blow to Hilbert's program. It showed the impossibility of providing a consistency proof of arithmetic within any formal theory of arithmetic. Hilbert, however, did not acknowledge the importance of the incompleteness theorem for some time.
Gödel's theorem shows that a consistency proof of any sufficiently strong, effective axiom system cannot be obtained in the system itself, if the system is consistent, nor in any weaker system. This leaves open the possibility of consistency proofs that cannot be formalized within the system they consider. Gentzen (1936) proved the consistency of arithmetic using a finitistic system together with a principle of transfinite induction. Gentzen's result introduced the ideas of cut elimination and proof-theoretic ordinals, which became key tools in proof theory. Gödel (1958) gave a different consistency proof, which reduces the consistency of classical arithmetic to that of intuitionistic arithmetic in higher types.
#### Beginnings of the other branches
Alfred Tarski developed the basics of model theory.
Beginning in 1935, a group of prominent mathematicians collaborated under the pseudonym Nicolas Bourbaki to publish a series of encyclopedic mathematics texts. These texts, written in an austere and axiomatic style, emphasized rigorous presentation and set-theoretic foundations. Terminology coined by these texts, such as the words bijection, injection, and surjection, and the set-theoretic foundations the texts employed, were widely adopted throughout mathematics.
The study of computability came to be known as recursion theory, because early formalizations by Gödel and Kleene relied on recursive definitions of functions.[2] When these definitions were shown equivalent to Turing's formalization involving Turing machines, it became clear that a new concept – the computable function – had been discovered, and that this definition was robust enough to admit numerous independent characterizations. In his work on the incompleteness theorems in 1931, Gödel lacked a rigorous concept of an effective formal system; he immediately realized that the new definitions of computability could be used for this purpose, allowing him to state the incompleteness theorems in generality that could only be implied in the original paper.
Numerous results in recursion theory were obtained in the 1940s by Stephen Cole Kleene and Emil Leon Post. Kleene (1943) introduced the concepts of relative computability, foreshadowed by Turing (1939), and the arithmetical hierarchy. Kleene later generalized recursion theory to higher-order functionals. Kleene and Kreisel studied formal versions of intuitionistic mathematics, particularly in the context of proof theory.
## Formal logical systems
At its core, mathematical logic deals with mathematical concepts expressed using formal logical systems. These systems, though they differ in many details, share the common property of considering only expressions in a fixed formal language, or signature. The systems of propositional logic and first-order logic are the most widely studied today, because of their applicability to foundations of mathematics and because of their desirable proof-theoretic properties.[3] Stronger classical logics such as second-order logic or infinitary logic are also studied, along with nonclassical logics such as intuitionistic logic.
### First-order logic
Main article: First-order logic
First-order logic is a particular formal system of logic. Its syntax involves only finite expressions as well-formed formulas, while its semantics are characterized by the limitation of all quantifiers to a fixed domain of discourse.
Early results from formal logic established limitations of first-order logic. The Löwenheim–Skolem theorem (1919) showed that if a set of sentences in a countable first-order language has an infinite model then it has at least one model of each infinite cardinality. This shows that it is impossible for a set of first-order axioms to characterize the natural numbers, the real numbers, or any other infinite structure up to isomorphism. As the goal of early foundational studies was to produce axiomatic theories for all parts of mathematics, this limitation was particularly stark.
Gödel's completeness theorem (Gödel 1929) established the equivalence between semantic and syntactic definitions of logical consequence in first-order logic. It shows that if a particular sentence is true in every model that satisfies a particular set of axioms, then there must be a finite deduction of the sentence from the axioms. The compactness theorem first appeared as a lemma in Gödel's proof of the completeness theorem, and it took many years before logicians grasped its significance and began to apply it routinely. It says that a set of sentences has a model if and only if every finite subset has a model, or in other words that an inconsistent set of formulas must have a finite inconsistent subset. The completeness and compactness theorems allow for sophisticated analysis of logical consequence in first-order logic and the development of model theory, and they are a key reason for the prominence of first-order logic in mathematics.
Gödel's incompleteness theorems (Gödel 1931) establish additional limits on first-order axiomatizations. The first incompleteness theorem states that for any sufficiently strong, effectively given logical system there exists a statement which is true but not provable within that system. Here a logical system is effectively given if it is possible to decide, given any formula in the language of the system, whether the formula is an axiom. A logical system is sufficiently strong if it can express the Peano axioms. When applied to first-order logic, the first incompleteness theorem implies that any sufficiently strong, consistent, effective first-order theory has models that are not elementarily equivalent, a stronger limitation than the one established by the Löwenheim–Skolem theorem. The second incompleteness theorem states that no sufficiently strong, consistent, effective axiom system for arithmetic can prove its own consistency, which has been interpreted to show that Hilbert's program cannot be completed.
### Other classical logics
Many logics besides first-order logic are studied. These include infinitary logics, which allow for formulas to provide an infinite amount of information, and higher-order logics, which include a portion of set theory directly in their semantics.
The most well studied infinitary logic is $L_{\omega_1,\omega}$. In this logic, quantifiers may only be nested to finite depths, as in first-order logic, but formulas may have finite or countably infinite conjunctions and disjunctions within them. Thus, for example, it is possible to say that an object is a whole number using a formula of $L_{\omega_1,\omega}$ such as
$(x = 0) \lor (x = 1) \lor (x = 2) \lor \cdots.$
Higher-order logics allow for quantification not only of elements of the domain of discourse, but subsets of the domain of discourse, sets of such subsets, and other objects of higher type. The semantics are defined so that, rather than having a separate domain for each higher-type quantifier to range over, the quantifiers instead range over all objects of the appropriate type. The logics studied before the development of first-order logic, for example Frege's logic, had similar set-theoretic aspects. Although higher-order logics are more expressive, allowing complete axiomatizations of structures such as the natural numbers, they do not satisfy analogues of the completeness and compactness theorems from first-order logic, and are thus less amenable to proof-theoretic analysis.
Another type of logic is fixed-point logic, which allows inductive definitions of the kind one writes for primitive recursive functions.
One can formally define an extension of first-order logic — a notion which encompasses all logics in this section because they behave like first-order logic in certain fundamental ways, but does not encompass all logics in general, e.g. it does not encompass intuitionistic, modal or fuzzy logic. Lindström's theorem implies that the only extension of first-order logic satisfying both the compactness theorem and the Downward Löwenheim–Skolem theorem is first-order logic.
### Nonclassical and modal logic
Modal logics include additional modal operators, such as an operator which states that a particular formula is not only true, but necessarily true. Although modal logic is not often used to axiomatize mathematics, it has been used to study the properties of first-order provability (Solovay 1976) and set-theoretic forcing (Hamkins and Löwe 2007).
Intuitionistic logic was developed by Heyting to study Brouwer's program of intuitionism, in which Brouwer himself avoided formalization. Intuitionistic logic specifically does not include the law of the excluded middle, which states that each sentence is either true or its negation is true. Kleene's work with the proof theory of intuitionistic logic showed that constructive information can be recovered from intuitionistic proofs. For example, any provably total function in intuitionistic arithmetic is computable; this is not true in classical theories of arithmetic such as Peano arithmetic.
### Algebraic logic
Algebraic logic uses the methods of abstract algebra to study the semantics of formal logics. A fundamental example is the use of Boolean algebras to represent truth values in classical propositional logic, and the use of Heyting algebras to represent truth values in intuitionistic propositional logic. Stronger logics, such as first-order logic and higher-order logic, are studied using more complicated algebraic structures such as cylindric algebras.
## Set theory
Main article: Set theory
Set theory is the study of sets, which are abstract collections of objects. Many of the basic notions, such as ordinal and cardinal numbers, were developed informally by Cantor before formal axiomatizations of set theory were developed. The first such axiomatization, due to Zermelo (1908b), was extended slightly to become Zermelo–Fraenkel set theory (ZF), which is now the most widely used foundational theory for mathematics.
Other formalizations of set theory have been proposed, including von Neumann–Bernays–Gödel set theory (NBG), Morse–Kelley set theory (MK), and New Foundations (NF). Of these, ZF, NBG, and MK are similar in describing a cumulative hierarchy of sets. New Foundations takes a different approach; it allows objects such as the set of all sets at the cost of restrictions on its set-existence axioms. The system of Kripke–Platek set theory is closely related to generalized recursion theory.
Two famous statements in set theory are the axiom of choice and the continuum hypothesis. The axiom of choice, first stated by Zermelo (1904), was proved independent of ZF by Fraenkel (1922), but has come to be widely accepted by mathematicians. It states that given a collection of nonempty sets there is a single set C that contains exactly one element from each set in the collection. The set C is said to "choose" one element from each set in the collection. While the ability to make such a choice is considered obvious by some, since each set in the collection is nonempty, the lack of a general, concrete rule by which the choice can be made renders the axiom nonconstructive. Stefan Banach and Alfred Tarski (1924) showed that the axiom of choice can be used to decompose a solid ball into a finite number of pieces which can then be rearranged, with no scaling, to make two solid balls of the original size. This theorem, known as the Banach–Tarski paradox, is one of many counterintuitive results of the axiom of choice.
The continuum hypothesis, first proposed as a conjecture by Cantor, was listed by David Hilbert as one of his 23 problems in 1900. Gödel showed that the continuum hypothesis cannot be disproven from the axioms of Zermelo–Fraenkel set theory (with or without the axiom of choice), by developing the constructible universe of set theory in which the continuum hypothesis must hold. In 1963, Paul Cohen showed that the continuum hypothesis cannot be proven from the axioms of Zermelo–Fraenkel set theory (Cohen 1966). This independence result did not completely settle Hilbert's question, however, as it is possible that new axioms for set theory could resolve the hypothesis. Recent work along these lines has been conducted by W. Hugh Woodin, although its importance is not yet clear (Woodin 2001).
Contemporary research in set theory includes the study of large cardinals and determinacy. Large cardinals are cardinal numbers with particular properties so strong that the existence of such cardinals cannot be proved in ZFC. The existence of the smallest large cardinal typically studied, an inaccessible cardinal, already implies the consistency of ZFC. Despite the fact that large cardinals have extremely high cardinality, their existence has many ramifications for the structure of the real line. Determinacy refers to the possible existence of winning strategies for certain two-player games (the games are said to be determined). The existence of these strategies implies structural properties of the real line and other Polish spaces.
## Model theory
Main article: Model theory
Model theory studies the models of various formal theories. Here a theory is a set of formulas in a particular formal logic and signature, while a model is a structure that gives a concrete interpretation of the theory. Model theory is closely related to universal algebra and algebraic geometry, although the methods of model theory focus more on logical considerations than those fields.
The set of all models of a particular theory is called an elementary class; classical model theory seeks to determine the properties of models in a particular elementary class, or determine whether certain classes of structures form elementary classes.
The method of quantifier elimination can be used to show that definable sets in particular theories cannot be too complicated. Tarski (1948) established quantifier elimination for real-closed fields, a result which also shows the theory of the field of real numbers is decidable. (He also noted that his methods were equally applicable to algebraically closed fields of arbitrary characteristic.) A modern subfield developing from this is concerned with o-minimal structures.
Morley's categoricity theorem, proved by Michael D. Morley (1965), states that if a first-order theory in a countable language is categorical in some uncountable cardinality, i.e. all models of this cardinality are isomorphic, then it is categorical in all uncountable cardinalities.
A trivial consequence of the continuum hypothesis is that a complete theory with less than continuum many nonisomorphic countable models can have only countably many. Vaught's conjecture, named after Robert Lawson Vaught, says that this is true even independently of the continuum hypothesis. Many special cases of this conjecture have been established.
## Recursion theory
Main article: Recursion theory
Recursion theory, also called computability theory, studies the properties of computable functions and the Turing degrees, which divide the uncomputable functions into sets which have the same level of uncomputability. Recursion theory also includes the study of generalized computability and definability. Recursion theory grew from the work of Alonzo Church and Alan Turing in the 1930s, which was greatly extended by Kleene and Post in the 1940s.
Classical recursion theory focuses on the computability of functions from the natural numbers to the natural numbers. The fundamental results establish a robust, canonical class of computable functions with numerous independent, equivalent characterizations using Turing machines, λ calculus, and other systems. More advanced results concern the structure of the Turing degrees and the lattice of recursively enumerable sets.
Generalized recursion theory extends the ideas of recursion theory to computations that are no longer necessarily finite. It includes the study of computability in higher types as well as areas such as hyperarithmetical theory and α-recursion theory.
Contemporary research in recursion theory includes the study of applications such as algorithmic randomness, computable model theory, and reverse mathematics, as well as new results in pure recursion theory.
### Algorithmically unsolvable problems
An important subfield of recursion theory studies algorithmic unsolvability; a decision problem or function problem is algorithmically unsolvable if there is no possible computable algorithm which returns the correct answer for all legal inputs to the problem. The first results about unsolvability, obtained independently by Church and Turing in 1936, showed that the Entscheidungsproblem is algorithmically unsolvable. Turing proved this by establishing the unsolvability of the halting problem, a result with far-ranging implications in both recursion theory and computer science.
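The core of the diagonal argument can be sketched in a few lines of Python (a conceptual sketch only; `halts` stands for the hypothetical decider whose existence the argument refutes):

```python
def halts(program, input_value):
    """Hypothetical total decider: True iff program(input_value) halts."""
    raise NotImplementedError("no such computable function can exist")

def diagonal(program):
    # Do the opposite of whatever halts() predicts for program run on itself.
    if halts(program, program):
        while True:      # halts() said it halts, so loop forever
            pass
    return               # halts() said it loops, so halt

# diagonal(diagonal) is contradictory either way: if it halts, halts()
# said it doesn't; if it loops forever, halts() said it halts.
```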
There are many known examples of undecidable problems from ordinary mathematics. The word problem for groups was proved algorithmically unsolvable by Pyotr Novikov in 1955 and independently by W. Boone in 1959. The busy beaver problem, developed by Tibor Radó in 1962, is another well-known example.
Hilbert's tenth problem asked for an algorithm to determine whether a multivariate polynomial equation with integer coefficients has a solution in the integers. Partial progress was made by Julia Robinson, Martin Davis and Hilary Putnam. The algorithmic unsolvability of the problem was proved by Yuri Matiyasevich in 1970 (Davis 1973).
## Proof theory and constructive mathematics
Main article: Proof theory
Proof theory is the study of formal proofs in various logical deduction systems. These proofs are represented as formal mathematical objects, facilitating their analysis by mathematical techniques. Several deduction systems are commonly considered, including Hilbert-style deduction systems, systems of natural deduction, and the sequent calculus developed by Gentzen.
The study of constructive mathematics, in the context of mathematical logic, includes the study of systems in non-classical logic such as intuitionistic logic, as well as the study of predicative systems. An early proponent of predicativism was Hermann Weyl, who showed it is possible to develop a large part of real analysis using only predicative methods (Weyl 1918).
Because proofs are entirely finitary, whereas truth in a structure is not, it is common for work in constructive mathematics to emphasize provability. The relationship between provability in classical (or nonconstructive) systems and provability in intuitionistic (or constructive, respectively) systems is of particular interest. Results such as the Gödel–Gentzen negative translation show that it is possible to embed (or translate) classical logic into intuitionistic logic, allowing some properties about intuitionistic proofs to be transferred back to classical proofs.
Recent developments in proof theory include the study of proof mining by Ulrich Kohlenbach and the study of proof-theoretic ordinals by Michael Rathjen.
## Connections with computer science
Main article: Logic in computer science
The study of computability theory in computer science is closely related to the study of computability in mathematical logic. There is a difference of emphasis, however. Computer scientists often focus on concrete programming languages and feasible computability, while researchers in mathematical logic often focus on computability as a theoretical concept and on noncomputability.
The theory of semantics of programming languages is related to model theory, as is program verification (in particular, model checking). The Curry–Howard isomorphism between proofs and programs relates to proof theory, especially intuitionistic logic. Formal calculi such as the lambda calculus and combinatory logic are now studied as idealized programming languages.
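As a small illustration of the Curry–Howard correspondence (an added sketch, not from the original text): a typed program inhabiting the type $A \to (B \to A)$ doubles as a proof of the propositional tautology $A \Rightarrow (B \Rightarrow A)$. In Python's type-hint notation:

```python
from typing import Callable, TypeVar

A = TypeVar("A")
B = TypeVar("B")

def k(a: A) -> Callable[[B], A]:
    """The K combinator: a term of type A -> (B -> A).

    Under Curry-Howard, constructing this term is a proof of the
    intuitionistic tautology A => (B => A).
    """
    return lambda b: a
```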
Computer science also contributes to mathematics by developing techniques for the automatic checking or even finding of proofs, such as automated theorem proving and logic programming.
Descriptive complexity theory relates logics to computational complexity. The first significant result in this area, Fagin's theorem (1974), established that NP is precisely the set of languages expressible by sentences of existential second-order logic.
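For a concrete instance of Fagin's theorem (a standard example, added here): graph 3-colourability is an NP property, and it is expressed by the existential second-order sentence

$$\exists R\,\exists G\,\exists B\;\Big[\forall x\,\big(R(x)\lor G(x)\lor B(x)\big)\;\land\;\forall x\,\forall y\,\big(E(x,y)\rightarrow \neg(R(x)\land R(y))\land\neg(G(x)\land G(y))\land\neg(B(x)\land B(y))\big)\Big],$$

which asserts the existence of three unary relations (colour classes) covering every vertex such that no edge joins two vertices of the same colour.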
## Foundations of mathematics
Main article: Foundations of mathematics
In the 19th century, mathematicians became aware of logical gaps and inconsistencies in their field. It was shown that Euclid's axioms for geometry, which had been taught for centuries as an example of the axiomatic method, were incomplete. The use of infinitesimals, and the very definition of function, came into question in analysis, as pathological examples such as Weierstrass' nowhere-differentiable continuous function were discovered.
Cantor's study of arbitrary infinite sets also drew criticism. Leopold Kronecker famously stated "God made the integers; all else is the work of man," endorsing a return to the study of finite, concrete objects in mathematics. Although Kronecker's argument was carried forward by constructivists in the 20th century, the mathematical community as a whole rejected them. David Hilbert argued in favor of the study of the infinite, saying "No one shall expel us from the Paradise that Cantor has created."
Mathematicians began to search for axiom systems that could be used to formalize large parts of mathematics. In addition to removing ambiguity from previously-naive terms such as function, it was hoped that this axiomatization would allow for consistency proofs. In the 19th century, the main method of proving the consistency of a set of axioms was to provide a model for it. Thus, for example, non-Euclidean geometry can be proved consistent by defining point to mean a point on a fixed sphere and line to mean a great circle on the sphere. The resulting structure, a model of elliptic geometry, satisfies the axioms of plane geometry except the parallel postulate.
With the development of formal logic, Hilbert asked whether it would be possible to prove that an axiom system is consistent by analyzing the structure of possible proofs in the system, and showing through this analysis that it is impossible to prove a contradiction. This idea led to the study of proof theory. Moreover, Hilbert proposed that the analysis should be entirely concrete, using the term finitary to refer to the methods he would allow but not precisely defining them. This project, known as Hilbert's program, was seriously affected by Gödel's incompleteness theorems, which show that the consistency of formal theories of arithmetic cannot be established using methods formalizable in those theories. Gentzen showed that it is possible to produce a proof of the consistency of arithmetic in a finitary system augmented with axioms of transfinite induction, and the techniques he developed to do so were seminal in proof theory.
A second thread in the history of foundations of mathematics involves nonclassical logics and constructive mathematics. The study of constructive mathematics includes many different programs with various definitions of constructive. At the most accommodating end, proofs in ZF set theory that do not use the axiom of choice are called constructive by many mathematicians. More limited versions of constructivism limit themselves to natural numbers, number-theoretic functions, and sets of natural numbers (which can be used to represent real numbers, facilitating the study of mathematical analysis). A common idea is that a concrete means of computing the values of the function must be known before the function itself can be said to exist.
In the early 20th century, Luitzen Egbertus Jan Brouwer founded intuitionism as a philosophy of mathematics. This philosophy, poorly understood at first, stated that in order for a mathematical statement to be true to a mathematician, that person must be able to intuit the statement, to not only believe its truth but understand the reason for its truth. A consequence of this definition of truth was the rejection of the law of the excluded middle, for there are statements that, according to Brouwer, could not be claimed to be true while their negations also could not be claimed true. Brouwer's philosophy was influential, and the cause of bitter disputes among prominent mathematicians. Later, Kleene and Kreisel would study formalized versions of intuitionistic logic (Brouwer rejected formalization, and presented his work in unformalized natural language). With the advent of the BHK interpretation and Kripke models, intuitionism became easier to reconcile with classical mathematics.
## Notes
1. Undergraduate texts include Boolos, Burgess, and Jeffrey (2002), Enderton (2001), and Mendelson (1997). A classic graduate text by Shoenfield (2001) first appeared in 1967.
## References
### Undergraduate texts
• Walicki, Michał (2011), Introduction to Mathematical Logic, Singapore: World Scientific Publishing, ISBN 978-981-4343-87-9 (pb.).
• Boolos, George; Burgess, John; Jeffrey, Richard (2002), Computability and Logic (4th ed.), Cambridge: Cambridge University Press, ISBN 978-0-521-00758-0 (pb.).
• Enderton, Herbert (2001), A mathematical introduction to logic (2nd ed.), Boston, MA: Academic Press, ISBN 978-0-12-238452-3.
• Hamilton, A.G. (1988), Logic for Mathematicians (2nd ed.), Cambridge: Cambridge University Press, ISBN 978-0-521-36865-0.
• Ebbinghaus, H.-D.; Flum, J.; Thomas, W. (1994), Mathematical Logic (2nd ed.), New York: Springer, ISBN 0-387-94258-0.
• Katz, Robert (1964), Axiomatic Analysis, Boston, MA: D. C. Heath and Company.
• Mendelson, Elliott (1997), Introduction to Mathematical Logic (4th ed.), London: Chapman & Hall, ISBN 978-0-412-80830-2.
• Hedman, Shawn (2004), A first course in logic: an introduction to model theory, proof theory, computability, and complexity, Oxford: Oxford University Press, ISBN 0-19-852981-3. Covers logics in close relation with computability theory and complexity theory.
### Graduate texts
• Andrews, Peter B. (2002), An Introduction to Mathematical Logic and Type Theory: To Truth Through Proof (2nd ed.), Boston: Kluwer Academic Publishers, ISBN 978-1-4020-0763-7.
• Barwise, Jon, ed. (1989), Handbook of Mathematical Logic, Studies in Logic and the Foundations of Mathematics, North Holland, ISBN 978-0-444-86388-1.
• Hodges, Wilfrid (1997), A shorter model theory, Cambridge: Cambridge University Press, ISBN 978-0-521-58713-6.
• Jech, Thomas (2003), Set Theory: Millennium Edition, Springer Monographs in Mathematics, Berlin, New York: Springer-Verlag, ISBN 978-3-540-44085-7.
• Shoenfield, Joseph R. (2001) [1967], Mathematical Logic (2nd ed.), A K Peters, ISBN 978-1-56881-135-2.
• Troelstra, Anne Sjerp; Schwichtenberg, Helmut (2000), Basic Proof Theory, Cambridge Tracts in Theoretical Computer Science (2nd ed.), Cambridge: Cambridge University Press, ISBN 978-0-521-77911-1.
### Research papers, monographs, texts, and surveys
• Cohen, P. J. (1966), Set Theory and the Continuum Hypothesis, Menlo Park, CA: W. A. Benjamin.
• Davis, Martin (1973), "Hilbert's tenth problem is unsolvable", The American Mathematical Monthly 80 (3): 233–269, doi:10.2307/2318447, JSTOR 2318447; reprinted as an appendix in Martin Davis, Computability and Unsolvability, Dover reprint 1982.
• Felscher, Walter (2000), "Bolzano, Cauchy, Epsilon, Delta", The American Mathematical Monthly 107 (9): 844–862, doi:10.2307/2695743, JSTOR 2695743.
• Ferreirós, José (2001), "The Road to Modern Logic - An Interpretation", Bulletin of Symbolic Logic 7 (4): 441–484, doi:10.2307/2687794, JSTOR 2687794.
• Hamkins, Joel David; Löwe, Benedikt, "The modal logic of forcing", Transactions of the American Mathematical Society, to appear.
• Katz, Victor J. (1998), A History of Mathematics, Addison–Wesley, ISBN 0-321-01618-1.
• Morley, Michael (1965), "Categoricity in Power", Transactions of the American Mathematical Society 114 (2): 514–538, doi:10.2307/1994188, JSTOR 1994188.
• Soare, Robert I. (1996), "Computability and recursion", Bulletin of Symbolic Logic 2 (3): 284–321, doi:10.2307/420992, JSTOR 420992.
• Solovay, Robert M. (1976), "Provability Interpretations of Modal Logic", Israel Journal of Mathematics 25 (3–4): 287–304, doi:10.1007/BF02757006.
• Woodin, W. Hugh (2001), "The Continuum Hypothesis, Part I", Notices of the American Mathematical Society 48 (6).
### Classical papers, texts, and collections
• Burali-Forti, Cesare (1897), A question on transfinite numbers, reprinted in van Heijenoort 1976, pp. 104–111.
• Dedekind, Richard (1872), Stetigkeit und irrationale Zahlen. English translation of title: "Continuity and irrational numbers".
• Dedekind, Richard (1888), Was sind und was sollen die Zahlen? Two English translations:
• 1963 (1901). Essays on the Theory of Numbers. Beman, W. W., ed. and trans. Dover.
• 1996. In From Kant to Hilbert: A Source Book in the Foundations of Mathematics, 2 vols, Ewald, William B., ed., Oxford University Press: 787–832.
• Fraenkel, Abraham A. (1922), "Der Begriff 'definit' und die Unabhängigkeit des Auswahlsaxioms", Sitzungsberichte der Preussischen Akademie der Wissenschaften, Physikalisch-mathematische Klasse, pp. 253–257 (German), reprinted in English translation as "The notion of 'definite' and the independence of the axiom of choice", van Heijenoort 1976, pp. 284–289.
• Frege, Gottlob (1879), Begriffsschrift, eine der arithmetischen nachgebildete Formelsprache des reinen Denkens. Halle a. S.: Louis Nebert. Translation: Concept Script, a formal language of pure thought modelled upon that of arithmetic, by S. Bauer-Mengelberg in Jean van Heijenoort, ed., 1967. From Frege to Gödel: A Source Book in Mathematical Logic, 1879–1931. Harvard University Press.
• Frege, Gottlob (1884), Die Grundlagen der Arithmetik: eine logisch-mathematische Untersuchung über den Begriff der Zahl. Breslau: W. Koebner. Translation: J. L. Austin, 1974. The Foundations of Arithmetic: A logico-mathematical enquiry into the concept of number, 2nd ed. Blackwell.
• Gentzen, Gerhard (1936), "Die Widerspruchsfreiheit der reinen Zahlentheorie", Mathematische Annalen 112: 132–213, doi:10.1007/BF01565428, reprinted in English translation in Gentzen's Collected Works, M. E. Szabo, ed., North-Holland, Amsterdam, 1969.
• Gödel, Kurt (1929), Über die Vollständigkeit des Logikkalküls, doctoral dissertation, University of Vienna. English translation of title: "Completeness of the logical calculus".
• Gödel, Kurt (1930), "Die Vollständigkeit der Axiome des logischen Funktionen-kalküls", Monatshefte für Mathematik und Physik 37: 349–360, doi:10.1007/BF01696781. English translation of title: "The completeness of the axioms of the calculus of logical functions".
• Gödel, Kurt (1931), "Über formal unentscheidbare Sätze der Principia Mathematica und verwandter Systeme I", Monatshefte für Mathematik und Physik 38 (1): 173–198, doi:10.1007/BF01700692; see On Formally Undecidable Propositions of Principia Mathematica and Related Systems for details on English translations.
• Gödel, Kurt (1958), "Über eine bisher noch nicht benützte Erweiterung des finiten Standpunktes", Dialectica. International Journal of Philosophy 12 (3–4): 280–287, doi:10.1111/j.1746-8361.1958.tb01464.x, reprinted in English translation in Gödel's Collected Works, vol. II, Solomon Feferman et al., eds., Oxford University Press, 1990.
• van Heijenoort, Jean, ed. (1967; 3rd printing with corrections, 1976), From Frege to Gödel: A Source Book in Mathematical Logic, 1879–1931 (3rd ed.), Cambridge, Mass.: Harvard University Press, ISBN 0-674-32449-8 (pbk.).
• Hilbert, David (1899), Grundlagen der Geometrie, Leipzig: Teubner , English 1902 edition (The Foundations of Geometry) republished 1980, Open Court, Chicago.
• Hilbert, David (1929), "Probleme der Grundlegung der Mathematik", Mathematische Annalen 102: 1–9, doi:10.1007/BF01782335. Lecture given at the International Congress of Mathematicians, 3 September 1928. Published in English translation as "The Grounding of Elementary Number Theory", in Mancosu 1998, pp. 266–273.
• Kleene, Stephen Cole (1943), "Recursive Predicates and Quantifiers", Transactions of the American Mathematical Society 53 (1): 41–73, doi:10.2307/1990131, JSTOR 1990131.
• Lobachevsky, Nikolai (1840), Geometrische Untersuchungen zur Theorie der Parallellinien (German). Reprinted in English translation as "Geometric Investigations on the Theory of Parallel Lines" in Non-Euclidean Geometry, Robert Bonola (ed.), Dover, 1955. ISBN 0-486-60027-0.
• Löwenheim, Leopold (1915), "Über Möglichkeiten im Relativkalkül", Mathematische Annalen 76 (4): 447–470, doi:10.1007/BF01458217, ISSN 0025-5831 (German). Translated as "On possibilities in the calculus of relatives" in Jean van Heijenoort, 1967. A Source Book in Mathematical Logic, 1879–1931. Harvard Univ. Press: 228–251.
• Mancosu, Paolo, ed. (1998), From Brouwer to Hilbert. The Debate on the Foundations of Mathematics in the 1920s, Oxford: Oxford University Press.
• Pasch, Moritz (1882), Vorlesungen über neuere Geometrie.
• Peano, Giuseppe (1889), Arithmetices principia, nova methodo exposita (Latin), excerpt reprinted in English translation as "The principles of arithmetic, presented by a new method", van Heijenoort 1976, pp. 83–97.
• Richard, Jules (1905), "Les principes des mathématiques et le problème des ensembles", Revue générale des sciences pures et appliquées 16: 541 (French), reprinted in English translation as "The principles of mathematics and the problems of sets", van Heijenoort 1976, pp. 142–144.
• Skolem, Thoralf (1920), "Logisch-kombinatorische Untersuchungen über die Erfüllbarkeit oder Beweisbarkeit mathematischer Sätze nebst einem Theoreme über dichte Mengen", Videnskapsselskapet Skrifter, I. Matematisk-naturvidenskabelig Klasse 6: 1–36.
• Tarski, Alfred (1948), A decision method for elementary algebra and geometry, Santa Monica, California: RAND Corporation.
• Turing, Alan M. (1939), "Systems of Logic Based on Ordinals", Proceedings of the London Mathematical Society 45 (2): 161–228, doi:10.1112/plms/s2-45.1.161.
• Zermelo, Ernst (1904), "Beweis, daß jede Menge wohlgeordnet werden kann", Mathematische Annalen 59 (4): 514–516, doi:10.1007/BF01445300 (German), reprinted in English translation as "Proof that every set can be well-ordered", van Heijenoort 1976, pp. 139–141.
• Zermelo, Ernst (1908a), "Neuer Beweis für die Möglichkeit einer Wohlordnung", Mathematische Annalen 65: 107–128, doi:10.1007/BF01450054, ISSN 0025-5831 (German), reprinted in English translation as "A new proof of the possibility of a well-ordering", van Heijenoort 1976, pp. 183–198.
• Zermelo, Ernst (1908b), "Untersuchungen über die Grundlagen der Mengenlehre", Mathematische Annalen 65 (2): 261–281, doi:10.1007/BF01449999.
http://math.stackexchange.com/questions/172688/calculating-a-number-when-its-remainder-is-given?answertab=oldest
# Calculating a number when its remainder is given
I am having difficulty solving the following problem:
Marge has $n$ candies, where $n$ is an integer between 20 and 50. If Marge divides the candies equally among 5 children she will have 2 candies remaining. If she divides the candies among 6 children she will have 1 candy remaining. How many candies remain if she divides the candies among 7 children?
The equation I got was $$5q-6k+1=0$$
I tried using this method but I don't think it will work here. Any suggestions on how I could solve this? I would appreciate it if the method involves equations instead of plugging in numbers and testing.
Most of your questions just beg for congruences. Why are you not using congruences? – Arturo Magidin Jul 19 '12 at 3:27
Let me try using congruences and will post back – Rajeshwar Jul 19 '12 at 3:28
## 3 Answers
This is a classic Chinese Remainder Theorem. We know that $$\begin{align*} n &\equiv 2\pmod{5}\\ n &\equiv 1\pmod{6}. \end{align*}$$ By the Chinese Remainder Theorem, there is a unique number $n$ modulo $30=5\times 6$ that satisfies both equations.
We can compute it directly: $n=5q+2$ since it leaves a remainder of $2$ when divided by $5$. That means that we must have $5q+2\equiv1\pmod{6}$, which means $5q\equiv -1\equiv 5\pmod{6}$, hence $q\equiv 1\pmod{6}$. So $n=5q+2 = 5(6k+1)+2 = 30k+5+2 = 30k+7$. That is, the solution is $n\equiv 7\pmod{30}$.
Since the number must be between $20$ and $50$, the number is $37$. When divided among $7$ children, she will have $2$ candies left over.
If you must avoid congruences (really, you are only avoiding them explicitly, shoving them "under the carpet"), $n$ must be of the form $6k+1$. We can rewrite this as $n=6k+1 = 5k + (k+1)$. Since, when $n$ is divided by $5$ the remainder is $2$, that means that when we divide $k+1$ by $5$ the remainder is $2$. Therefore, the remainder when dividing $k$ by $5$ must be $1$ (so that the remainder of $k+1$ will be $1+1=2$), so $k=5r+1$. So $n=6k+1 = 6(5r+1)+1 = 30r+7$, and we are back in what we had above.
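For readers who want to sanity-check the arithmetic, here is a short brute-force verification in Python (an added aside, not part of the original answer):

```python
# Find every n in [20, 50] leaving remainder 2 mod 5 and remainder 1 mod 6.
solutions = [n for n in range(20, 51) if n % 5 == 2 and n % 6 == 1]
print(solutions)         # [37] -- unique, as the CRT guarantees modulo 30
print(solutions[0] % 7)  # 2 -- candies left when divided among 7 children
```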
Thanks for the answer. I tried using inequalities, however I don't think I am doing them right. Could you let me know if that method could be used to calculate $n$? – Rajeshwar Jul 19 '12 at 3:54
If $\frac{5q+2}{6}$ gives remainder 1, how did you get $q \equiv 1 \pmod 6$? Aren't you supposed to get $5q+2 = 6k+1$? – Rajeshwar Jul 19 '12 at 5:04
@Rajeshwar $\rm\,\ 6q\!-\!q + 2\, =\, 6k\!+\!1\:\Rightarrow\:q\, =\, 1 + 6\,(q\!-\!k)\equiv 1\pmod 6\ \$ – Gone Jul 19 '12 at 5:47
@Rajeshwar: What do you mean, "inequalities"? This is a problem for using congruences. If $n$ leaves a remainder of $1$ when divided by $6$, then it must be of the form $n=6k+1$ for some integer $k$. When you divide it by $5$, since $6k+1 =5k+k+1$, then $5$ goes evenly into $5k$, and you are left with $k+1$, so the remainder for $n$ is the same as the remainder for $k+1$. As to the equation $5q+2=6k+1$, it is useless unless you then proceed to solve it as a diophantine equation, which usually requires you to do.. you guessed it! Congruences. – Arturo Magidin Jul 19 '12 at 15:12
@Rajeshwar: Because when you divide $6k+1$ by $5$, the $5k$ part is already a multiple of $5$. Adding or subtracting multiples of five from a number does not change the remainder when you divide by $5$. E.g., $2$, $7=2+5$, $22=2+5\times 4$, $177 = 2 + 5\times35$, etc., all have the same remainder when divided by $5$. So the remainder you get when dividing $n=6k+1$ by $5$ is the same as the remainder you get when dividing $n-5k = k+1$ by $5$. – Arturo Magidin Jul 20 '12 at 2:31
Find all the numbers between $20$ and $50$ that give a remainder of $2$ when divided by $5$ (there are $6$). From these, choose the numbers that give a residue of $1$ when divided by $6$ (there's only $1$ such number out of the above six). Now calculate the remainder of this one number when divided by $7$ (solution: $2$).
Hint $\rm\,\ n = 6j\!+\!1,\,\ 5\:|\:n\!-\!2 = 6j\!-\!1\:\Rightarrow\:5\:|\:j\!-\!1\:\Rightarrow\:j = 5k\!+\!1\:\Rightarrow\: n = 6(5k\!+\!1)\!+\!1 = 30k\!+\!7$
I don't get how you got $5\mid n-2=6j-1$? – Rajeshwar Jul 19 '12 at 4:40
If Marge divides the $\rm\:n\:$ candies equally among $5$ children she will have $2$ candies remaining. If each child got $\rm\:m\:$ candies then $\rm\:n = 5m\!+\!2,\:$ so $\rm\:5m = n\!-\!2,\:$ i.e. $\rm\:5\:|\:n\!-\!2.\:$ Similarly $\rm\:n = 6j\!+\!1,\:$ hence $\rm\:n\!-\!2 = (6j\!+\!1)-2 = 6j\!-\!1 = 5j + j\!-\!1.\:$ $5$ divides that iff $\rm\,5\:|\:j\!-\!1.\ \$ – Gone Jul 19 '12 at 5:00
So far I got two equations $5m=n-2$ and $n-2=6j-1$ could you explain how you got $n=6(5k+1)+1$ using those two equations ? – Rajeshwar Jul 19 '12 at 5:21
@Bill. Thanks for clarifying. I got the rest of the answer, however I am still having difficulty understanding how you got $j-1=5k$. So far on one side I have $n-2=5m$ and the other equation $n-2=5j+(j-1)$. – Rajeshwar Jul 19 '12 at 5:47
http://mathoverflow.net/questions/tagged/fuchsian-groups
## Tagged Questions

### finite index subgroup of a fuchsian group
Given G, a Fuchsian group, and a finite subset A of G: does there exist a finite-index subgroup H in G such that the intersection of A with H is empty?

### Fundamental domain for subgroup of fuchsian Schottky group.
Let G be a Fuchsian Schottky group defined by a possibly infinite set of disjoint halfplanes {C_i}_i. Let F be the fundamental domain obtained by intersecting the complements of th …

### Fuchsian groups and automorphisms of Riemann surfaces
Let $\Gamma \subseteq PSL_2(\mathbb{R})$ be a Fuchsian group, possibly containing elliptic elements. Is it true that $N(\Gamma) / \Gamma$, where $N(\Gamma)$ is the normalizer of $\Gamma$ …

### Fuchsian groups and their normalizers
Let $\Gamma \leq PSL_2(\mathbb{R})$ be a Fuchsian group. What is the relation between $N(\Gamma) = \{ \alpha \in PSL_2(\mathbb{R}) \mid \alpha \Gamma \alpha^{-1} = \Gamma \}$ and …

### Asymptotics of arithmetic Fuchsian groups and Shimura curves.
I'm interested in what is known/expected about some families of arithmetic Fuchsian groups. Here is the simplest family that I'm interested in: Let $E = Z[\omega]$, where $\omega$ …

### How do you find the genus of a Fuchsian group derived from a quaternion algebra?
Let $G$ be a Fuchsian group with normalizer $N(G)$ inside $PSL(2,13)$. Due to the Hurwitz formula, it suffices to find a presentation of $G$ of the form $\langle x_1,\ldots,x_r,a$ …

### Are any two Dirichlet domains for a Fuchsian group “comparable”?
Let $\Gamma$ be a [EDIT: finitely generated] Fuchsian group of the first kind (i.e. a discrete subgroup of $PSL_2(\mathbf{R})$ acting on the upper half-plane admitting a fundamenta …

### Growth of smallest closed geodesic in congruence subgroups?
Let $\Gamma$ be one of the classical congruence subgroups $\Gamma_0(N)$, $\Gamma_1(N)$ and $\Gamma(N)$ of $SL(2, \mathbb{Z})$. How does the lower bound for the length of primitive …

### Non congruence subgroups containing congruence subgroups.
Do there exist Fuchsian groups which are not conjugate in $SL(2, \mathbb{R})$ to a subgroup of $SL(2, \mathbb{Z})$, but still contain a congruence subgroup?

### How nice are representation varieties of Fuchsian groups?
Background: Let $S_{g,n}$ be an oriented surface of genus $g$, with $n$ punctures. We explicitly prohibit the non-hyperbolic cases: $g=0$, $n=0,1,2$; $g=1$, $n=0$. Let $\Gamma$ …

### The smallest positive eigenvalue and the length of the shortest geodesic
I'm confused about some things concerning lengths of geodesics on Riemann surfaces and positive eigenvalues of the Laplacian. Moreover, I'm also interested in the relation between …

### Genus of arithmetic surface groups
It is known that for each genus, only finitely many points in the moduli space of hyperbolic genus g surfaces are arithmetic. I'm wondering if an existence result is known: for wh …

### Arithmetic Fuchsian group
Dear all, I have the following questions: Are all Fuchsian groups of signature $(0;2,2,2,\infty)$ arithmetic? What is known about the trace fields of these groups? Best, K.

### Cusp width for an arbitrary Fuchsian group
In Shimura's Intro to Arithmetic Theory of Automorphic Forms, he defines a cusp of a Fuchsian group $\Gamma$ as a point $s \in \mathbb{R} \cup \{ \infty \}$ that is fixed by a para …

### Is the absolute value of the j-invariant bounded from below on an annulus
Let $j:\mathbf{H}\to \mathbf{C}$ be the $j$-invariant. It's a modular function for $\Gamma(1) = \textrm{PSL}_2(\mathbf{Z})$. For $\epsilon>0$ small, let $B(\epsilon)$ be the image …
http://mathoverflow.net/questions/68885/mathematica-package-for-obtaining-hypergeometric-function
Mathematica package for obtaining hypergeometric function
In my current research in electromagnetics I am encountering integrals of the form $$\int_0^\infty dt J_0( r t) \frac{\exp(-h \sqrt{t^2 - a^2})}{\sqrt{t^2 - b^2}} t .$$ $a$ and $b$ are complex numbers, never purely real. If $a=b$, the integral can be done analytically (see, e. g. Watson, A Treatise on the Theory of Bessel Functions, pg. 416, Eqn 4). One just recognizes that the ratio $\exp(.)/\sqrt{.}$ is proportional to the Bessel function $$K_{\frac{1}{2}} (.),$$ which appears in a doable integral of the correct form.
However, I haven’t been able to find any information on such integrals for $a \ne b$. My current thought is to rewrite this as $$\text{const}\,\int_0^\infty dt\, J_0(t r)\, K_{\mu}\!\left( h \sqrt{t^2 - a^2}\right) t \,\frac{\sqrt{t^2 - a^2}}{\sqrt{t^2 - b^2}},$$ and to write the ratio of square roots as a hypergeometric function (to have proper analyticity to allow it to be integrated over the desired range) that I can expand and integrate term-by-term, or use Mellin transforms on, etc.
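For reference, the kind of hypergeometric rewriting being contemplated rests on the binomial series, which is itself a ${}_1F_0$ (this remark is added for context and is not from the original question):

$$(1-z)^{-\alpha} \;=\; {}_1F_0(\alpha;;z) \;=\; \sum_{n=0}^{\infty}\frac{(\alpha)_n}{n!}\,z^n,\qquad |z|<1,$$

where $(\alpha)_n$ is the Pochhammer symbol; taking $\alpha=\pm\tfrac12$ with $z=a^2/t^2$ or $b^2/t^2$ gives series for each square-root factor valid for large $|t|$.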
This gets me to my question – I know there are Mathematica packages (I don’t have Maple) for recognizing hypergeometric functions and using the relations between them, but I don’t really understand enough about them to know which package I could use. Can someone point me to the right package and some instructions on how to use it, or to some basic literature? I’m not at all educated on hypergeometric series and how to construct them for a given function. I looked at "Special Functions" by Andrews, Askey and Roy, but I need a different introduction showing how to construct the hypergeometric series for a specified function.
Many thanks, Tom
This isn't what you are looking for, but it doesn't keep it from being way entertaining, and on topic. math.upenn.edu/~wilf/AeqB.html You can get Mathematica code to go with it. – Charlie Frohman Jun 26 2011 at 23:04
Charlie, thanks for the link. I had downloaded that book a while back, but had forgotten about it. It doesn't exactly solve my problem, but it is overall quite helpful! Tom – Tom Dickens Jun 30 2011 at 0:20
http://mathhelpforum.com/number-theory/119938-how-do-you-find-last-digit-number-using-fermats-little-theorem-print.html
# How do you find the last digit of a number using Fermat's Little Theorem
• December 11th 2009, 01:45 PM
steph3824
How do you find the last digit of a number using Fermat's Little Theorem
For example, what is the last digit of 17^(3241)? It would help a ton if someone could show how to solve this using Fermat's Little Theorem. I have finals coming up and this was something I never quite understood
• December 12th 2009, 03:18 AM
CaptainBlack
Quote:
Originally Posted by steph3824
For example, what is the last digit of 17^(3241)? It would help a ton if someone could show how to solve this using Fermat's Little Theorem. I have finals coming up and this was something I never quite understood
I'm not sure why one would need Fermat's little theorem for this since:
$17^3 \equiv 1 \text{ mod } 10$
and:
$17^{3241}=(17^3)^{1080}\times 17$
which immediately should tell you that:
$17^{3241}\equiv 7 \text{ mod } 10$
CB
• December 12th 2009, 04:40 AM
alexmahone
Quote:
Originally Posted by CaptainBlack
I'm not sure why one would need Fermat's little theorem for this since:
$17^3 \equiv 1 \text{ mod } 10$
and:
$17^{3241}=(17^3)^{1080}\times 17$
which immediately should tell you that:
$17^{3241}\equiv 7 \text{ mod } 10$
CB
Incorrect!
$17^4 \equiv 1 (mod 10)$
$17^{3241}=(17^4)^{810}\times 17$
So $17^{3241}\equiv 7(mod 10)$
• December 12th 2009, 11:24 AM
CaptainBlack
Quote:
Originally Posted by alexmahone
Incorrect!
$17^4 \equiv 1 (mod 10)$
$17^{3241}=(17^4)^{810}\times 17$
So $17^{3241}\equiv 7(mod 10)$
Oops... arithmetic error (actually a transcription error, with the argument completed using the erroneous value).
CB
• December 13th 2009, 05:52 AM
awkward
Quote:
Originally Posted by steph3824
For example, what is the last digit of 17^(3241)? It would help a ton if someone could show how to solve this using Fermat's Little Theorem. I have finals coming up and this was something I never quite understood
Here is a solution via Fermat's Little Theorem (although it's not necessarily the simplest):
By Fermat's Little Theorem, $17^4 \equiv 1 \mod 5$,
so since
$17^{3241} = (17^4)^{810} \cdot 17$,
$17^{3241} \equiv 1 \cdot 17 \equiv 2 \mod 5$.
We also have $17 \equiv 1 \mod 2$, so $17 ^ {3241} \equiv 1 \mod 2$.
Since $17^{3241} \equiv 2 \mod 5$ and $17 ^ {3241} \equiv 1 \mod 2$,
$17^{3241} \equiv 7 \mod 10$.
We could use the Chinese Remainder Theorem at the last step, but it seems like overkill here.
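Whichever theorem one uses, the result is easy to check by machine; Python's three-argument pow does modular exponentiation directly (an added note, not part of the thread):

```python
# Last digit of 17**3241 via modular exponentiation
print(pow(17, 3241, 10))  # 7
```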
http://mathoverflow.net/questions/91174/on-unitary-fractions
## On unitary fractions
My apologies if the question has already been discussed somewhere else; I did not find anything related to unitary fractions with the search tool...

It is a nice exercise for high-school students to prove that any positive rational number less than 1 can be written as a sum of unitary fractions with distinct denominators. One possible proof is to consider the least integer $n$ such that $p/q - 1/n$ is positive; we obtain a fraction whose numerator is $np - q$, which can be at most $p-1$.
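The proof sketched above is exactly the greedy (Fibonacci–Sylvester) algorithm, and it is short to implement; the following Python sketch is added for illustration:

```python
from fractions import Fraction
from math import ceil

def greedy_unitary(p, q):
    """Expand p/q (with 0 < p/q < 1) as a sum of distinct unitary fractions.

    At each step, take the least n with 1/n <= remainder; the new
    numerator n*p - q is at most p - 1, so the loop terminates.
    """
    r, denominators = Fraction(p, q), []
    while r > 0:
        n = ceil(1 / r)          # least n such that 1/n <= r
        denominators.append(n)
        r -= Fraction(1, n)
    return denominators

print(greedy_unitary(4, 5))  # [2, 4, 20]: 4/5 = 1/2 + 1/4 + 1/20
```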
So I was considering the following questions:
• in some cases, since $np-q$ can be equal to $p-1$, it seems possible that one cannot write $p/q$ as a sum of fewer than $p$ unitary fractions. Is it true that one can find such a fraction for any integer $p$?
• finding a sum of unitary fractions which is equal to a fraction $p/q$ is not difficult, but is it easy to find the sum with a minimum number of fractions? And the sum which minimizes the sum of denominators?
Thanks in advance for any hint or reference concerning these questions! (Also, I would be very interested in questions related to this topic that could be solved at high-school level.)
Do you mean $np-q$? Because $p/q-1/n = (np-q)/nq$. – Sean Eberhard Mar 14 2012 at 13:33
en.wikipedia.org/wiki/Category:Egyptian_fractions – Charles Matthews Mar 14 2012 at 13:49
Thanks Sean, that was a mistake, it is np-q as you noticed it. Thanks Charles for the references which solve the first question; however I haven't found anything concerning the minimal sum of denominators. – Nekochan Mar 14 2012 at 14:20
## 1 Answer
I doubt that there are nice algorithms for the second question. It could be described as having two parts: the minimum denominator sum (mds) question and the minimum number of summands (mns) question. For the first, we know one sum for a given rational, and then there are an enormous but finite number of sums with a smaller sum of denominators. So the mds question can be answered in finite time for any rational $0 \lt r \lt 1$, but perhaps not in any reasonable manner (although obviously not all these sums need to be examined).

I'm not sure that there is an algorithm which is sure to provide an answer for the mns question in finite time. It is conjectured that every fraction $\frac{4}{n}$ can be written as the sum of at most three unit fractions. The first counter-example, if there is any, is over $10^{14}$, has a prime denominator, and that denominator belongs to one of 6 congruence classes $\bmod{840}$, but it is an open problem. This means that there is not (known to be) an easy way to find the minimum number of fractions.
http://quant.stackexchange.com/questions/7147/derivation-of-itos-lemma?answertab=active
# Derivation of Ito's Lemma
My question is more intuitive than formal and circles around the derivation of Ito's Lemma. I have seen in a variety of textbooks that by applying Ito's Lemma, one can derive the exact solution of a geometric Brownian Motion. So, if one has
$dS = a(S,t)dt + b(S,t)dZ$
the first part on the right side of the equation is the (deterministic) drift rate and the second term is the stochastic component. Now I have read that ordinary calculus cannot handle the stochastic integral because for example
$$\frac{dZ}{dt} = \phi\,\frac{1}{\sqrt{dt}} \rightarrow \infty \quad \text{as } dt \rightarrow 0$$
But Ito came along and proposed something which culminated in his famous Ito Lemma, which helped calculate the stochastic integral. What I don't understand is the step taken from realizing that ordinary calculus does not work in this context to proposing a function $G = G(S,t)$ of $S(t)$, Taylor-expanding it, etc.
Or more precisely: What did Ito realize and propose that helped solve the stochastic integral?
## 1 Answer
Baxter and Rennie say it better than me, so I will summarize them.
Suppose that $N_t$ is not stochastic and $f(.)$ is a smooth function; then the Taylor expansion is $$df(N_t) = f'(N_t)dN_t + \frac{1}{2}f''(N_t)(dN_t)^2 + \frac{1}{3!} f'''(N_t)(dN_t)^3 + \ldots$$ and the term $(dN_t)^2$ and higher terms are zero. Ito showed that this is not the case in the stochastic case. Suppose $W_t$ follows a Brownian motion and let $$Z_{n, i} = \frac{W\left(\frac{ti}{n}\right) - W\left(\frac{t(i-1)}{n}\right)}{\sqrt{t/n}};$$ now consider $$\int_0^t (dW_s)^2 \approx t \sum_{i=1}^n \frac{Z^2_{n,i}}{n}.$$ If $n \to \infty$ then $\sum_{i=1}^n \frac{Z^2_{n,i}}{n} \to 1$ and thus $\int_0^t (dW_s)^2 = t \neq 0$. Thus the second order term does not cancel (but higher order terms do) and we have an extra term in our derivative.
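A quick simulation makes the non-vanishing second-order term visible; this numpy sketch (my addition, not Baxter and Rennie's) sums powers of Brownian increments over $[0, t]$:

```python
import numpy as np

rng = np.random.default_rng(seed=0)
t, n = 1.0, 1_000_000
dW = rng.normal(loc=0.0, scale=np.sqrt(t / n), size=n)  # Brownian increments

print(np.sum(dW))      # ~ N(0, t): the ordinary first-order term
print(np.sum(dW**2))   # ~ 1.0 = t: the quadratic variation does NOT vanish
print(np.sum(dW**3))   # ~ 0: third- and higher-order terms do vanish
```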
good explanation, upvoted – Freddy Jan 29 at 19:17
http://physics.stackexchange.com/questions/43523/does-a-magnetic-field-have-gravity?answertab=active
# Does a magnetic field have gravity?
Re-reading http://physics.stackexchange.com/a/33156/5265, I find myself thinking: if light, being EM radiation in the humanly visible spectrum, possesses gravity, does a magnetic field also possess gravity?
## 1 Answer
Per wikipedia, the electromagnetic tensor $F^{\mu \nu}$ contributes to the stress energy tensor $T^{\mu \nu}$ by
$$T^{\mu \nu} = \frac{1}{\mu_0} \left(F^{\mu \alpha} g_{\alpha \beta} F^{\nu \beta} - \frac{1}{4} g^{\mu \nu} F^{\gamma \delta} F_{\gamma \delta} \right)$$
The Einstein equations govern how the stress-energy tensor is coupled to spacetime curvature. Since the magnetic field is entirely captured by the electromagnetic tensor, the answer is yes, magnetic fields contribute to gravitation.
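In particular, in SI units and a local inertial frame, the $T^{00}$ component of this tensor is the familiar electromagnetic energy density (a standard fact, added here for concreteness):

$$T^{00} \;=\; \frac{\varepsilon_0 E^2}{2} + \frac{B^2}{2\mu_0},$$

so even a pure static magnetic field carries an energy density $B^2/2\mu_0$ that acts as a source of spacetime curvature.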
http://www.sagemath.org/doc/reference/plane_curves/sage/schemes/elliptic_curves/ell_local_data.html
# Local data for elliptic curves over number fields
Let $$E$$ be an elliptic curve over a number field $$K$$ (including $$\QQ$$). There are several local invariants at a finite place $$v$$ that can be computed via Tate’s algorithm (see [Sil2] IV.9.4 or [Ta]).
These include the type of reduction (good, additive, multiplicative), a minimal equation of $$E$$ over $$K_v$$, the Tamagawa number $$c_v$$, defined to be the index $$[E(K_v):E^0(K_v)]$$ of the points with good reduction among the local points, and the exponent of the conductor $$f_v$$.
The functions in this file will typically be called by using local_data.
EXAMPLES:
```sage: K.<i> = NumberField(x^2+1)
sage: E = EllipticCurve([(2+i)^2,(2+i)^7])
sage: pp = K.fractional_ideal(2+i)
sage: da = E.local_data(pp)
sage: da.has_bad_reduction()
True
sage: da.has_multiplicative_reduction()
False
sage: da.kodaira_symbol()
I0*
sage: da.tamagawa_number()
4
sage: da.minimal_model()
Elliptic Curve defined by y^2 = x^3 + (4*i+3)*x + (-29*i-278) over Number Field in i with defining polynomial x^2 + 1
```
An example to show how the Neron model can change as one extends the field:
```sage: E = EllipticCurve([0,-1])
sage: E.local_data(2)
Local data at Principal ideal (2) of Integer Ring:
Reduction type: bad additive
Local minimal model: Elliptic Curve defined by y^2 = x^3 - 1 over Rational Field
Minimal discriminant valuation: 4
Conductor exponent: 4
Kodaira Symbol: II
Tamagawa Number: 1
sage: EK = E.base_extend(K)
sage: EK.local_data(1+i)
Local data at Fractional ideal (i + 1):
Reduction type: bad additive
Local minimal model: Elliptic Curve defined by y^2 = x^3 + (-1) over Number Field in i with defining polynomial x^2 + 1
Minimal discriminant valuation: 8
Conductor exponent: 2
Kodaira Symbol: IV*
Tamagawa Number: 3
```
Or how the minimal equation changes:
```sage: E = EllipticCurve([0,8])
sage: E.is_minimal()
True
sage: EK = E.base_extend(K)
sage: da = EK.local_data(1+i)
sage: da.minimal_model()
Elliptic Curve defined by y^2 = x^3 + i over Number Field in i with defining polynomial x^2 + 1
```
REFERENCES:
• [Sil2] Silverman, Joseph H., Advanced topics in the arithmetic of elliptic curves. Graduate Texts in Mathematics, 151. Springer-Verlag, New York, 1994.
• [Ta] Tate, John, Algorithm for determining the type of a singular fiber in an elliptic pencil. Modular functions of one variable, IV, pp. 33–52. Lecture Notes in Math., Vol. 476, Springer, Berlin, 1975.
AUTHORS:
• John Cremona: First version 2008-09-21 (refactoring code from ell_number_field.py and ell_rational_field.py)
• Chris Wuthrich: more documentation 2010-01
class sage.schemes.elliptic_curves.ell_local_data.EllipticCurveLocalData(E, P, proof=None, algorithm='pari')
Bases: sage.structure.sage_object.SageObject
The class for the local reduction data of an elliptic curve.
Currently supported are elliptic curves defined over $$\QQ$$, and elliptic curves defined over a number field, at an arbitrary prime or prime ideal.
INPUT:
• E – an elliptic curve defined over a number field, or $$\QQ$$.
• P – a prime ideal of the field, or a prime integer if the field is $$\QQ$$.
• proof (bool)– if True, only use provably correct methods (default controlled by global proof module). Note that the proof module is number_field, not elliptic_curves, since the functions that actually need the flag are in number fields.
• algorithm (string, default: “pari”) – Ignored unless the base field is $$\QQ$$. If “pari”, use the PARI C-library ellglobalred implementation of Tate’s algorithm over $$\QQ$$. If “generic”, use the general number field implementation.
Note
This function is not normally called directly by users, who may access the data via methods of the EllipticCurve classes.
EXAMPLES:
```sage: from sage.schemes.elliptic_curves.ell_local_data import EllipticCurveLocalData
sage: E = EllipticCurve('14a1')
sage: EllipticCurveLocalData(E,2)
Local data at Principal ideal (2) of Integer Ring:
Reduction type: bad non-split multiplicative
Local minimal model: Elliptic Curve defined by y^2 + x*y + y = x^3 + 4*x - 6 over Rational Field
Minimal discriminant valuation: 6
Conductor exponent: 1
Kodaira Symbol: I6
Tamagawa Number: 2
```
bad_reduction_type()
Return the type of bad reduction of this reduction data.
OUTPUT:
(int or None):
• +1 for split multiplicative reduction
• -1 for non-split multiplicative reduction
• 0 for additive reduction
• None for good reduction
EXAMPLES:
```sage: E=EllipticCurve('14a1')
sage: [(p,E.local_data(p).bad_reduction_type()) for p in prime_range(15)]
[(2, -1), (3, None), (5, None), (7, 1), (11, None), (13, None)]
sage: K.<a>=NumberField(x^3-2)
sage: P17a, P17b = [P for P,e in K.factor(17)]
sage: E = EllipticCurve([0,0,0,0,2*a+1])
sage: [(p,E.local_data(p).bad_reduction_type()) for p in [P17a,P17b]]
[(Fractional ideal (4*a^2 - 2*a + 1), None), (Fractional ideal (2*a + 1), 0)]
```
conductor_valuation()
Return the valuation of the conductor from this local reduction data.
EXAMPLES:
```sage: from sage.schemes.elliptic_curves.ell_local_data import EllipticCurveLocalData
sage: E = EllipticCurve([0,0,0,0,64]); E
Elliptic Curve defined by y^2 = x^3 + 64 over Rational Field
sage: data = EllipticCurveLocalData(E,2)
sage: data.conductor_valuation()
2
```
has_additive_reduction()
Return True if there is additive reduction.
EXAMPLES:
```sage: E = EllipticCurve('27a1')
sage: [(p,E.local_data(p).has_additive_reduction()) for p in prime_range(15)]
[(2, False), (3, True), (5, False), (7, False), (11, False), (13, False)]
```
```sage: K.<a> = NumberField(x^3-2)
sage: P17a, P17b = [P for P,e in K.factor(17)]
sage: E = EllipticCurve([0,0,0,0,2*a+1])
sage: [(p,E.local_data(p).has_additive_reduction()) for p in [P17a,P17b]]
[(Fractional ideal (4*a^2 - 2*a + 1), False),
(Fractional ideal (2*a + 1), True)]
```
has_bad_reduction()
Return True if there is bad reduction.
EXAMPLES:
```sage: E = EllipticCurve('14a1')
sage: [(p,E.local_data(p).has_bad_reduction()) for p in prime_range(15)]
[(2, True), (3, False), (5, False), (7, True), (11, False), (13, False)]
```
```sage: K.<a> = NumberField(x^3-2)
sage: P17a, P17b = [P for P,e in K.factor(17)]
sage: E = EllipticCurve([0,0,0,0,2*a+1])
sage: [(p,E.local_data(p).has_bad_reduction()) for p in [P17a,P17b]]
[(Fractional ideal (4*a^2 - 2*a + 1), False),
(Fractional ideal (2*a + 1), True)]
```
has_good_reduction()
Return True if there is good reduction.
EXAMPLES:
```sage: E = EllipticCurve('14a1')
sage: [(p,E.local_data(p).has_good_reduction()) for p in prime_range(15)]
[(2, False), (3, True), (5, True), (7, False), (11, True), (13, True)]
sage: K.<a> = NumberField(x^3-2)
sage: P17a, P17b = [P for P,e in K.factor(17)]
sage: E = EllipticCurve([0,0,0,0,2*a+1])
sage: [(p,E.local_data(p).has_good_reduction()) for p in [P17a,P17b]]
[(Fractional ideal (4*a^2 - 2*a + 1), True),
(Fractional ideal (2*a + 1), False)]
```
has_multiplicative_reduction()
Return True if there is multiplicative reduction.
Note
See also has_split_multiplicative_reduction() and has_nonsplit_multiplicative_reduction().
EXAMPLES:
```sage: E = EllipticCurve('14a1')
sage: [(p,E.local_data(p).has_multiplicative_reduction()) for p in prime_range(15)]
[(2, True), (3, False), (5, False), (7, True), (11, False), (13, False)]
```
```sage: K.<a> = NumberField(x^3-2)
sage: P17a, P17b = [P for P,e in K.factor(17)]
sage: E = EllipticCurve([0,0,0,0,2*a+1])
sage: [(p,E.local_data(p).has_multiplicative_reduction()) for p in [P17a,P17b]]
[(Fractional ideal (4*a^2 - 2*a + 1), False), (Fractional ideal (2*a + 1), False)]
```
has_nonsplit_multiplicative_reduction()
Return True if there is non-split multiplicative reduction.
EXAMPLES:
```sage: E = EllipticCurve('14a1')
sage: [(p,E.local_data(p).has_nonsplit_multiplicative_reduction()) for p in prime_range(15)]
[(2, True), (3, False), (5, False), (7, False), (11, False), (13, False)]
```
```sage: K.<a> = NumberField(x^3-2)
sage: P17a, P17b = [P for P,e in K.factor(17)]
sage: E = EllipticCurve([0,0,0,0,2*a+1])
sage: [(p,E.local_data(p).has_nonsplit_multiplicative_reduction()) for p in [P17a,P17b]]
[(Fractional ideal (4*a^2 - 2*a + 1), False), (Fractional ideal (2*a + 1), False)]
```
has_split_multiplicative_reduction()
Return True if there is split multiplicative reduction.
EXAMPLES:
```sage: E = EllipticCurve('14a1')
sage: [(p,E.local_data(p).has_split_multiplicative_reduction()) for p in prime_range(15)]
[(2, False), (3, False), (5, False), (7, True), (11, False), (13, False)]
```
```sage: K.<a> = NumberField(x^3-2)
sage: P17a, P17b = [P for P,e in K.factor(17)]
sage: E = EllipticCurve([0,0,0,0,2*a+1])
sage: [(p,E.local_data(p).has_split_multiplicative_reduction()) for p in [P17a,P17b]]
[(Fractional ideal (4*a^2 - 2*a + 1), False),
(Fractional ideal (2*a + 1), False)]
```
kodaira_symbol()
Return the Kodaira symbol from this local reduction data.
EXAMPLES:
```sage: from sage.schemes.elliptic_curves.ell_local_data import EllipticCurveLocalData
sage: E = EllipticCurve([0,0,0,0,64]); E
Elliptic Curve defined by y^2 = x^3 + 64 over Rational Field
sage: data = EllipticCurveLocalData(E,2)
sage: data.kodaira_symbol()
IV
```
minimal_model(reduce=True)
Return the (local) minimal model from this local reduction data.
INPUT:
• reduce – (default: True) if set to True the EC returned by Tate’s algorithm will be “reduced” as specified in _reduce_model() for curves over number fields.
EXAMPLES:
```sage: from sage.schemes.elliptic_curves.ell_local_data import EllipticCurveLocalData
sage: E = EllipticCurve([0,0,0,0,64]); E
Elliptic Curve defined by y^2 = x^3 + 64 over Rational Field
sage: data = EllipticCurveLocalData(E,2)
sage: data.minimal_model()
Elliptic Curve defined by y^2 = x^3 + 1 over Rational Field
sage: data.minimal_model() == E.local_minimal_model(2)
True
```
To demonstrate the behaviour of the parameter reduce:
```sage: K.<a> = NumberField(x^3+x+1)
sage: E = EllipticCurve(K, [0, 0, a, 0, 1])
sage: E.local_data(K.ideal(a-1)).minimal_model()
Elliptic Curve defined by y^2 + a*y = x^3 + 1 over Number Field in a with defining polynomial x^3 + x + 1
sage: E.local_data(K.ideal(a-1)).minimal_model(reduce=False)
Elliptic Curve defined by y^2 + (a+2)*y = x^3 + 3*x^2 + 3*x + (-a+1) over Number Field in a with defining polynomial x^3 + x + 1
sage: E = EllipticCurve([2, 1, 0, -2, -1])
sage: E.local_data(ZZ.ideal(2), algorithm="generic").minimal_model(reduce=False)
Elliptic Curve defined by y^2 + 2*x*y + 2*y = x^3 + x^2 - 4*x - 2 over Rational Field
sage: E.local_data(ZZ.ideal(2), algorithm="pari").minimal_model(reduce=False)
Traceback (most recent call last):
...
ValueError: the argument reduce must not be False if algorithm=pari is used
sage: E.local_data(ZZ.ideal(2), algorithm="generic").minimal_model()
Elliptic Curve defined by y^2 = x^3 - x^2 - 3*x + 2 over Rational Field
sage: E.local_data(ZZ.ideal(2), algorithm="pari").minimal_model()
Elliptic Curve defined by y^2 = x^3 - x^2 - 3*x + 2 over Rational Field
```
prime()
Return the prime ideal associated with this local reduction data.
EXAMPLES:
```sage: from sage.schemes.elliptic_curves.ell_local_data import EllipticCurveLocalData
sage: E = EllipticCurve([0,0,0,0,64]); E
Elliptic Curve defined by y^2 = x^3 + 64 over Rational Field
sage: data = EllipticCurveLocalData(E,2)
sage: data.prime()
Principal ideal (2) of Integer Ring
```
tamagawa_exponent()
Return the Tamagawa exponent from this local reduction data.
This is the exponent of $$E(K_v)/E^0(K_v)$$; in most cases it is the same as the Tamagawa number.
EXAMPLES:
```sage: from sage.schemes.elliptic_curves.ell_local_data import EllipticCurveLocalData
sage: E = EllipticCurve('816a1')
sage: data = EllipticCurveLocalData(E,2)
sage: data.kodaira_symbol()
I2*
sage: data.tamagawa_number()
4
sage: data.tamagawa_exponent()
2
sage: E = EllipticCurve('200c4')
sage: data = EllipticCurveLocalData(E,5)
sage: data.kodaira_symbol()
I4*
sage: data.tamagawa_number()
4
sage: data.tamagawa_exponent()
2
```
tamagawa_number()
Return the Tamagawa number from this local reduction data.
This is the index $$[E(K_v):E^0(K_v)]$$.
EXAMPLES:
```sage: from sage.schemes.elliptic_curves.ell_local_data import EllipticCurveLocalData
sage: E = EllipticCurve([0,0,0,0,64]); E
Elliptic Curve defined by y^2 = x^3 + 64 over Rational Field
sage: data = EllipticCurveLocalData(E,2)
sage: data.tamagawa_number()
3
```
sage.schemes.elliptic_curves.ell_local_data.check_prime(K, P)
Function to check that $P$ determines a prime of $K$, and return that prime.
INPUT:
• K – a number field (including $\mathbb{Q}$).
• P – an element of K or a (fractional) ideal of K.
OUTPUT:
• If K is $\mathbb{Q}$: the prime integer that equals $P$ or that generates $P$.
• If K is not $\mathbb{Q}$: the prime ideal that equals $P$ or that is generated by $P$.
Note
If $P$ is not a prime and does not generate a prime, a TypeError is raised.
EXAMPLES:
```sage: from sage.schemes.elliptic_curves.ell_local_data import check_prime
sage: check_prime(QQ,3)
3
sage: check_prime(QQ,ZZ.ideal(31))
31
sage: K.<a>=NumberField(x^2-5)
sage: check_prime(K,a)
Fractional ideal (a)
sage: check_prime(K,a+1)
Fractional ideal (a + 1)
sage: [check_prime(K,P) for P in K.primes_above(31)]
[Fractional ideal (5/2*a + 1/2), Fractional ideal (5/2*a - 1/2)]
```
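The failure mode described in the Note above can be demonstrated as well. A minimal sketch (this particular call is my own; only the exception type, not its message, is documented, hence the ellipsis):
```sage: from sage.schemes.elliptic_curves.ell_local_data import check_prime
sage: check_prime(QQ, 6)   # 6 = 2*3 neither is nor generates a prime
Traceback (most recent call last):
...
TypeError: ...
```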
http://math.stackexchange.com/questions/148134/concrete-examples-of-2-categories/148138
# Concrete examples of 2-categories
I've been reading some of John Baez's work on 2-categories (eg here) and have been trying to visualize some of the constructions he gives.
I'm interested in coming up with 'concrete' examples of 2-categories. As an example of what I don't mean, I know that the category Cat forms a 2-category, where the objects are small categories, the morphisms are functors and the 2-morphisms are natural transformations. But this is too abstract for me - given that categorical constructs are what I'm having trouble understanding, it doesn't help me much to give an example from category theory!
One thought I had is that you might be able to view a group as a 2-category. Taking the perspective that a group is a category with one object where the morphisms are the symmetries of the object, you should then be able to construct a 2-category by saying that the 2-morphisms are the inner automorphisms of the group. An interesting question is then what the compositional structure of the 2-morphisms is.
To be really concrete, consider the group $D_3$. Here the object is an equilateral triangle, and there are six morphisms $e$, $r$, $r^2$, $m$, $mr$ and $mr^2$ where $e$ is the identity, $r$ is rotation by $2\pi/3$ and $m$ is reflection in one of the axes of symmetry, and the others are the obvious compositions of these.
Then the 2-morphisms are the functions $\phi_g$ given by $\phi_g(h)=ghg^{-1}$. For this example, the 2-morphisms have the structure of the underlying group $D_3$, but clearly this isn't always the case (e.g. for any abelian group the 2-morphisms have the structure of the trivial group). I haven't worked through many of the details, but it seems like there might be the grain of an interesting line of thought here.
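One can probe this proposal concretely by brute force: for $\phi_{g_1}\colon h_1 \to h_2$ and $\phi_{g_2}\colon k_1 \to k_2$ to compose horizontally, some single inner automorphism would have to carry $h_1k_1$ to $h_2k_2$, which in particular requires $h_1k_1$ and $h_2k_2$ to be conjugate. A rough Sage sketch of the search (my own code, using Sage's built-in `DihedralGroup(3)` as $D_3$):
```
sage: G = DihedralGroup(3)
sage: conj = lambda g, h: g*h*g^(-1)   # the inner automorphism phi_g applied to h
sage: obstructed = [(g1, g2, h1, k1) for g1 in G for g2 in G
....:               for h1 in G for k1 in G
....:               if not any(conj(g, h1*k1) == conj(g1, h1)*conj(g2, k1)
....:                          for g in G)]
sage: len(obstructed) > 0   # some pairs of 2-morphisms admit no horizontal composite
True
```
(For instance $h_1 = r$, $k_1 = r^2$ gives $h_1k_1 = e$, while suitable conjugates multiply to $r^2$, which is not conjugate to $e$.)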
So my questions are:
1. Is viewing groups as 2-categories an interesting thing to do, i.e. does it give you any new perspectives that make previously esoteric facts about groups 'obvious', or at least special cases of results in 2-categories?
2. What other 'concrete' examples of 2-categories are there?
Qiaochu Yuan has another example of a 2-category using groups, or a single group. Take a group $G$ as the object, its endomorphisms as morphisms, and as 2-morphisms from $f$ to $g$ the conjugations by an element $x$ of $G$ which send $f$ to $g$, i.e. $xf(y)x^{-1}=g(y)$ for all $y\in G$. You may take any class of groups and morphisms between them to form a category this way. This construction comes from seeing groups as categories and homomorphisms as functors; then "commuting" elements are natural transformations. This must make a group a 3-category in some sense. – plm May 22 '12 at 10:49
It occurred to me as I was writing the above that since the automorphisms of G form a group Aut(G), you should be able to form a 3-category by considering Aut(Aut(G)), and n-categories by forming Aut$^n$(G) etc. I don't know if there are 'interesting' cases of this though (i.e. cases where Aut$^n$(G) is not isomorphic to G for all n, and where Aut$^n$(G) is not the trivial group for all n>N for some N) or if it would be more (or less) interesting to consider Inn(G) instead of Aut(G). – Chris Taylor May 22 '12 at 10:52
I don't understand your example built on a group: are you saying that the 2-morphisms between two 1-morphisms (i.e., group elements) $h$ and $k$ are the inner automorphisms $\phi_g$ such that $\phi_g(h) = k$? If so, how do you plan to compose two of them? The ordinary composition does not satisfy that $\phi_g \circ \phi_{g'} (h) = k$. – Omar May 22 '12 at 14:41
Sorry, my comment was nonsense. The two kinds of composition you need are: vertical, composing $\phi_{g_1} : h_1 \to h_2$ with $\phi_{g_2} : h_2 \to h_3$ to get a 2-morphism $h_1 \to h_3$, and horizontal, composing a $\phi_{g_1} : h_1 \to h_2$ and a $\phi_{g_2} : k_1 \to k_2$ to get a 2-morphism $h_1 k_1 \to h_2 k_2$. Ordinary composition of functions would work for vertical composition since $\phi_{g_2} \circ \phi_{g_1} (h_1) = \phi_{g_2}(h_2) = h_3$, but not for horizontal composition: $\phi_{g_2}(\phi_{g_1}(h_1k_1)) = \phi_{g_2}(h_2) \phi_{g_2}(\phi_{g_1}(k_1))$ and then you're stuck. – Omar May 22 '12 at 15:10
## 5 Answers
In order to understand $2$-categories, you really have to understand the prototype $\mathsf{Cat}$ of small categories. Objects are categories, morphisms are functors, and $2$-morphisms are natural transformations. Another prototype, closely related to it, is the $2$-category $\mathsf{Top}$ (which is actually an $(\infty,1)$-category). Objects are topological spaces, morphisms are continuous maps, and $2$-morphisms are homotopies between continuous maps (as Omar remarks, one has to be careful here to get associativity of $2$-morphisms; there are various solutions). Much of the basic theory of $2$-categories (starting with notation such as "$2$-cells" for $2$-morphisms) is modelled on these prototypes.
There are many interesting subcategories of $\mathsf{Cat}$ or variations thereof. The category of monoids $\mathsf{Mon}$ is a full subcategory of $\mathsf{Cat}$, consisting of categories with just one object. An object is a monoid, a morphism is a homomorphism of monoids, and a $2$-morphism between homomorphisms $f,g : M \to N$ is some element $n \in N$ such that $f(m) n = n g(m)$ for all $m \in M$. If $M,N$ are groups, this means that $f,g$ are conjugate to each other. So this comes close to your example, but I don't think that a single group may be regarded as a $2$-category.
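As a quick sanity check (my own, not part of the original answer) that these $2$-morphisms compose: given $n : f \Rightarrow g$ and $n' : g \Rightarrow h$ between homomorphisms $f, g, h : M \to N$, the product $nn'$ is a $2$-morphism $f \Rightarrow h$, since for all $m \in M$
$$f(m)\,(nn') = (f(m)\,n)\,n' = n\,(g(m)\,n') = n\,(n'\,h(m)) = (nn')\,h(m).$$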
Something similar happens for the category $\mathsf{Ring}$ of rings: although usually considered as a $1$-category, it is actually a $2$-category when we regard it as a full subcategory of the category of linear categories (namely those with just one object). The description of $2$-morphisms is as above.
Rings categorify to cocomplete tensor categories, which also constitute a $2$-category (morphisms: cocontinuous tensor functors, $2$-morphisms: tensor natural transformations). The $2$-category of (algebraic) stacks is another important example. It is related because to every stack $\mathcal{X}$ one can associate a cocomplete tensor category $\mathrm{Qcoh}(\mathcal{X})$ of quasi-coherent sheaves, and it turns out that $\mathrm{Qcoh}(-)$ is fully faithful in many situations (see here).
As you can see, most examples are obtained by variations of $\mathsf{Cat}$. Apart from that:
Every $1$-category can be regarded as a $2$-category by introducing only identities as $2$-morphisms. And a $2$-category with just one object is just a monoidal category, and there are plenty of examples of them. So similar to the point of view "category = monoid with many objects" we have "$2$-category = monoidal category with many objects".
Finally, another very basic example of a $2$-category is the category of spans: Objects are sets (or objects from another nice category), a morphism from $A$ to $B$ is a set $C$ together with maps $A \leftarrow C \rightarrow B$. These are composed via pullbacks. And a $2$-morphism from a span $A \leftarrow C \rightarrow B$ to another span $A \leftarrow C' \rightarrow B$ is a morphism $C \to C'$ such that the obvious "diamond" diagram commutes. Actually you have to take isomorphism-classes of spans so that associativity is satisfied.
Thanks Martin. Do you mean that a single group can't be regarded as a 2-category, or that it can't usefully be regarded as a 2-category? If the former, I'd be interested to know why. For example, does one of the 2-category axioms fail? If the latter, then is it because Aut(G) or Inn(G) can be shown to always have some non-interesting relationship with G (so nothing new is gained by studying them - this case seems very unlikely) or is it something deeper? – Chris Taylor May 22 '12 at 11:02
Minor correction: in the 2-category Top, the 2-morphisms should be homotopy classes of homotopies to get associativity. Alternatively you could use Moore style homotopies (i.e., on intervals of arbitrary length, and the lengths add when you compose homotopies), but then you wouldn't get a (2,1)-category. – Omar May 22 '12 at 14:44
Another correction about Top: it's not an $(\infty,0)$-category, only an $(\infty,1)$-category (it's not true that an arbitrary continuous map has a homotopy inverse). – Omar May 22 '12 at 14:47
Omar's construction using central elements is a special case of something I describe in this post here. $\newcommand{\id}{\textrm{id}}$
In Cartan and Eilenberg there are several instances of squares which "anticommute", that is, we have $h \circ f = - k \circ g$ instead of $h \circ f = k \circ g$. I was wondering if we could make this into an instance of a square commuting "up to a specified 2-morphism" and it turned out the answer was yes.
Let $\mathbb{C}$ be a (small) category. We attach to every parallel pair of 1-morphisms $f, g : X \to Y$ the set of all natural transformations $\alpha : \id_\mathbb{C} \Rightarrow \id_\mathbb{C}$ such that $g = \alpha_Y \circ f$. The vertical composition is obvious, and if we have another parallel pair $h, k : Y \to Z$ and a 2-morphism $\beta : h \Rightarrow k$, the horizontal composition of $\alpha$ and $\beta$ is just $\beta \circ \alpha$, since $k \circ g = (\beta_Z \circ h) \circ (\alpha_Y \circ f) = (\beta_Z \circ \alpha_Z) \circ (h \circ f)$, by naturality of $\alpha$. This yields a (strict) 2-category structure on $\mathbb{C}$. Note that we have to remember which natural transformation is needed to make the triangle commute in order to have a well-defined horizontal composition.
In the specific case of $\mathbb{C} = R\text{-Mod}$, the set (class?) of natural transformations $\id_\mathbb{C} \Rightarrow \id_\mathbb{C}$ includes the scalar action of $R$, so in particular the anticommutative squares of Cartan and Eilenberg can be regarded as squares commuting up to a 2-morphism.
$\newcommand{\profto}{\nrightarrow}$ My current favourite example of a bicategory (i.e. a weak 2-category) is the bicategory $\mathfrak{Span}$ of spans of sets. The objects are sets, and the 1-morphisms $M : A \profto B$ are arbitrary pairs of maps $(s : M \to A, t : M \to B)$. Composition is given by fibre products: if $N : B \profto C$ is another span, then their composite $N \circ M : A \profto C$ is given by $M \times_B N$ and the obvious projections down to $A$ and $C$. A 2-morphism between spans is just an ordinary map of sets that commutes with the structural maps.
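Concretely (my notation for the structure maps), if $M : A \profto B$ has maps $(s_M, t_M)$ and $N : B \profto C$ has maps $(s_N, t_N)$, the composite is
$$N \circ M \;=\; M \times_B N \;=\; \{(m,n) \in M \times N \mid t_M(m) = s_N(n)\},$$
with structure maps $(m,n) \mapsto s_M(m)$ down to $A$ and $(m,n) \mapsto t_N(n)$ down to $C$.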
Why is $\mathfrak{Span}$ interesting? Because a monad in $\mathfrak{Span}$ is exactly the same thing as a (small) 1-category! (I think the "natural" notion of homomorphism that arises this way is that of a profunctor rather than a functor, but that should be considered a feature rather than a bug.)
One easy way to "strictify" $\mathfrak{Span}$ is to look at a certain more familiar subcategory: the 2-category $\mathfrak{Rel}$, whose objects are sets and whose 1-morphisms are relations. (The composition of relations is the usual one: if $R : A \profto B$ and $S : B \profto C$ are relations, then $c \mathrel{(S \circ R)} a$ if and only if there is some $b$ such that $c \mathrel{S} b$ and $b \mathrel{R} a$.) A 2-morphism between two relations is just the inclusion of the underlying graphs.
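Since a relation is just a set of pairs, this composition is a two-line program and a 2-morphism is a subset test. A minimal sketch (plain code that runs in Sage; all names are mine):
```
sage: def compose(S, R):
....:     return {(a, c) for (a, b) in R for (bb, c) in S if b == bb}
sage: R = {(1, 'x'), (2, 'y')}          # R : A -/-> B
sage: S = {('x', 10), ('y', 20)}        # S : B -/-> C
sage: sorted(compose(S, R))
[(1, 10), (2, 20)]
sage: R <= {(1, 'x'), (2, 'y'), (3, 'x')}   # 2-morphism R => R' iff R is contained in R'
True
```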
Morally, $\mathfrak{Rel}$ is the 0-dimensional analogue of the bicategory $\mathfrak{Prof}$ of categories and profunctors, and it is a way of "enlarging" the ordinary 1-category $\textbf{Set}$. We have the following remarkable fact: a relation $F : A \profto B$ has a right adjoint if and only if $F$ is a functional relation. So not only is $\textbf{Set}$ faithfully embedded in $\mathfrak{Rel}$, we also have a way of recognising when a morphism comes from $\textbf{Set}$!
Finally, some unsolicited advice: 2-category theory is impenetrable even if you are familiar with ordinary category theory. I firmly believe that one must have an excellent grasp of ordinary category theory before moving on to the higher-dimensional stuff. Just as ordinary category theory depends heavily on our intuitions about $\textbf{Set}$ as a 1-category, 2-category theory depends heavily on our intuitions about $\mathfrak{Cat}$ as a 2-category – and that intuition can only be built by studying 1-categories.
Chris's construction is not a special case of the construction you describe, as far as I can tell (and in fact, I don't think Chris's construction can be made to work). But the construction of a 2-category from a group (using the center of the group) that I described in a comment to the question is a special case of this construction: if you regard a group $G$ as a category with one object, a natural transformation between two functors (i.e. two group homomorphisms) $\phi$ and $\psi$ is a group element $g$ such that $\phi(x) = g^{-1} \psi(x) g$ for all $x \in G$. (to be cont'd). – Omar May 22 '12 at 19:59
@Omar: Ah, yes, that's true. I mistakenly thought the endomorphisms on the identity functor would be the group endomorphisms. – Zhen Lin May 22 '12 at 20:01
(cont'd) So in particular, natural transformations $\id_G \Rightarrow \id_G$ correspond to elements of the center of $G$. Then, your construction has as 2-morphisms $g \Rightarrow h$ the elements $a$ of the center such that $g = ah$. – Omar May 22 '12 at 20:02
For $R$-Mod, aren't the natural transformations $\alpha : \id \Rightarrow \id$ just the scalars? Let's figure out $\alpha_M(x)$ for some arbitrary $x \in M \in R$-Mod. Let $f : R \to M$ be the morphism $f(r) = rx$. By naturality $\alpha_M(x) = \alpha_M(f(1)) = f(\alpha_R(1)) = \alpha_R(1) x$, so that $\alpha$ is just multiplication by the scalar $\alpha_R(1)$. – Omar May 22 '12 at 20:17
Taking the perspective that a group is a category with one object where the morphisms are the symmetries of the object, you should then be able to construct a 2-category by saying that the 2-morphisms are the inner automorphisms of the group.
I don't think this works. More precisely, I don't see a natural candidate for horizontal composition.
I wrote this blog post partially as an introduction to 2-categories. I give a few examples, but not too many, so here are examples (some taken from the post and some not):
• Various subcategories of $\text{Cat}$. For example, $\text{Mon}$ (monoids) or $\text{Pos}$ (posets).
• For any topological space $X$, there is a 2-category $\Pi_2(X)$, the fundamental 2-groupoid of $X$, whose objects are the points of $X$, whose morphisms are the continuous paths in $X$, and whose 2-morphisms are the homotopy classes of homotopies between paths in $X$.
• Just as a category with one object is a monoid, a 2-category with one object is a (strict) monoidal category $(M, \otimes)$. Important examples include any category with products as well as the category of representations of a group, Lie algebra, bialgebra...
• For any monoidal category $V$, various subcategories of $V\text{-Cat}$. For example, if $V = \text{Ab}$, then one can take the 2-category of rings (closely analogous to the case of monoids).
• (An appropriate skeleton of) the bicategory of bimodules. This construction generalizes considerably.
I sometimes talk about "the" 2-category of logical propositions. The morphisms are proofs of one proposition from another, and the 2-morphisms are ways of turning one proof into another (I do not have a precise idea of what this ought to mean).
The homotopy type theorists are trying to work out something which, in a certain light, can be interpreted as an $(\infty, 1)$-category of propositions and proofs. – Zhen Lin May 22 '12 at 17:45
Do you mean that you don't see a natural candidate for horizontal composition? As Omar said in the comments above, it looks as though regular function composition works fine for vertical composition. – Chris Taylor May 22 '12 at 18:51
@Chris: I guess one of us is mixing up vertical and horizontal composition, but I don't think it's me. Do we agree that horizontal composition is composition along objects and that ordinary composition of inner automorphisms suffices for this? – Qiaochu Yuan May 22 '12 at 19:21
@Chris: aha! Okay. I am the one who is confused. My apologies. – Qiaochu Yuan May 22 '12 at 19:40
This is probably not concrete enough, but one of my favorite examples of a 2-category is the category of rings, bimodules, and bimodule homomorphisms.
The reason I find this interesting is partly because it collects the 'algebra' of modules and tensor products together into one structure, and partly because it turns out to be 2-equivalent to the 2-category of module categories, adjunctions, and natural transformations.
It's similar to why my favorite example of a 1-category is the (arrow-only) category of matrices -- i.e. matrix 'algebra'.
You mean "cocontinuous functors" instead of "localization functors". – Martin Brandenburg May 23 '12 at 6:23
Ah, you're right, the localizations are the adjunctions with exact left adjoints. I'll fix it. – Hurkyl May 23 '12 at 8:51
I have not had the time to read the details of your example, but the idea should be correct. Another example is a crossed module. This is a category object in the category of groups. There is a paper by Baez and Lauda that describes 2-groups, and you can re-interpret these as 2-categories. See the references therein.
My favorite example of a 2-category is (for obvious reasons) the category of 2-tangles. The objects are points, the morphisms are tangle diagrams and the 2-morphisms are surface diagrams that interpolate between the tangle diagrams. The relations among the 2-morphisms are the movie moves, but caution needs to be taken here.
I understand that there are algebraic versions of 2-braids given by Rouquier and others.
A good exercise is to determine that a braided monoidal category is a 2-category.
Thanks Scott. I've just read your answer to the question 'How do you do mathematics?' and your notion that the best thing to do is understand the simple things in as much detail as possible is exactly what I'm trying to do here. In particular, I find that spending time working out all the details of the simplest non-trivial example often pays off many times over. Thanks for the help! – Chris Taylor May 22 '12 at 10:14
http://math.stackexchange.com/questions/573/varying-definitions-of-cohomology
# Varying definitions of cohomology
So I know that given a chain complex we can define the $n$-th cohomology by taking $\ker d_n/\operatorname{im} d_{n+1}$. But I don't know how this corresponds to the idea of holes in topological spaces (maybe this is homology, I'm a tad confused).
One can compute (co)homology of different complexes. In particular, for any topological space one can define its singular complex (see Eric's answer for an idea how it's done) which in some sense indeed counts holes. But the idea of (co)homology is more general. – Grigory M Jul 24 '10 at 15:28
I couldn't really do better than Eric's answer, and like Grigory says, cohomology is more general. So instead I want to mention a case where cohomology doesn't do this: Sheaf Cohomology of Algebraic Groups classifies dominant Vector Bundles. This is called the Borel-Weil-Bott Theorem and has some nice ramifications for Representation Theory and Algebraic Geometry. – BBischof Jul 24 '10 at 19:48
## 1 Answer
Edited to clear some things up:
Simplicial and singular (co)homology were invented to detect holes in spaces. To get an intuitive idea of how this works, consider subspaces of the plane. Here the 2-chains are formal sums of things homeomorphic to the closed disk, and 1-chains are formal sums of things homeomorphic to a line segment. The operator $d$ takes the boundary of a chain. For example, the boundary of the closed disk is a circle. If we take $d$ of the circle we get $0$, since a circle has no boundary. In general $d^2 = 0$; that is, boundaries never have boundaries themselves. Now suppose we remove the origin from the plane and take a circle around the origin. This circle is in the kernel of $d$ since it has no boundary. However, it does not bound any 2-chain in the space (since the origin is removed), and so it is not in the image of the boundary operator on 2-chains. Thus the circle represents a non-trivial element of the quotient $\ker d / \operatorname{im}\, d$.
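This intuition can be checked by machine: the punctured plane deformation-retracts onto a circle, and Sage can compute the (reduced) simplicial homology of a triangulated circle directly. A small sketch (output as printed by recent Sage versions; `simplicial_complexes.Sphere(1)` is Sage's built-in triangulation of the circle):
```
sage: S1 = simplicial_complexes.Sphere(1)   # a triangulated circle
sage: S1.homology()                         # reduced homology by default
{0: 0, 1: Z}
```
The $\mathbb{Z}$ in degree 1 is generated by exactly the kind of non-bounding cycle described above.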
The way I have defined things makes the above a homology theory, simply because the operator $d$ decreases dimension. Cohomology is the same thing, only the operator increases dimension (for example, the exterior derivative on differential forms). Thus algebraically there is really no difference between cohomology and homology, since we can just change the grading from $i$ to $-i$.
From a homology theory we can get a corresponding cohomology theory by dualizing, that is, by looking at maps from the group of chains to the underlying group (e.g. $\mathbb{Z}$ or $\mathbb{R}$). Then $d$ on the cohomology theory becomes the adjoint of the previous boundary operator and thus increases degrees.
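In symbols (a standard unpacking, not part of the original answer): starting from a chain complex $(C_n, d_n)$ dualized into $\mathbb{Z}$, one sets
$$C^n := \operatorname{Hom}(C_n, \mathbb{Z}), \qquad (\delta^n \varphi)(c) := \varphi(d_{n+1}\, c) \quad \text{for } \varphi \in C^n,\; c \in C_{n+1},$$
so $\delta^n : C^n \to C^{n+1}$ raises degree, and $\delta^{n+1} \circ \delta^n = 0$ follows immediately from $d_{n+1} \circ d_{n+2} = 0$.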