http://mathoverflow.net/questions/75521/reference-request-parametrizing-covers-of-the-projective-line
## Reference request: parametrizing covers of the projective line
Hurwitz spaces (or Hurwitz schemes) parametrize covers of the projective line. One can do this in many ways.
For example, one could fix the number $r$ of branch points, the degree $n$ of the cover and look only at simple covers of $\mathbf{P}^1$. This is usually denoted by $H_{r,n}$. Fulton defined this space as a scheme over $\textrm{Spec} \mathbf{Z}$ and showed that $H_{r,n} \otimes \mathbf{F}_p$ is irreducible for $p$ big enough.
One could also fix a subset $B\subset \mathbf{P}^1$, the degree $n$ of the cover and look at covers of degree $n$ unramified outside $B\cup \{\lambda\}$, where $\lambda \in \mathbf{P}^1-B$ is allowed to vary. One can show that most curves arise as an irreducible component of such a space (Diaz, Donagi, Harbater).
One could also look at Galois covers with a fixed Galois group, etc.
In the end, there are many ways to parametrize covers of the projective line.
Are there any standard references that contain the basics of Hurwitz spaces?
At the moment I have at my disposal
Work of M. Romagny, J. Bertin and S. Wewers (available on Romagny's website). These are very stacky.
The article of Fulton Hurwitz Schemes and Irreducibility of Moduli of Algebraic Curves.
Notes by Brian Osserman available on his website (The representation theory, geometry, and combinatorics of branched covers.).
The article of Diaz, Donagi and Harbater: Every curve is a Hurwitz space.
Question. What are the standard references for the basics of Hurwitz spaces/schemes?
-
## 2 Answers
There is really no universal reference, unfortunately, and what you should look at depends on what you're interested in. Are you interested in arithmetic or are you working over the complex numbers? Are you interested in compactification? Do you care mostly about simple branching or about more general branching or even about more general Galois groups than S_n? Are you interested in these guys as subvarieties of M_g, or as covers of M_{0,n}, or both, or as abstract moduli spaces or....?
But I guess this is not an answer yet, so let me add to your already good list Harris and Morrison's book on algebraic curves, which has a nice discussion of admissible covers that should be very helpful for understanding how one kind of branched cover can degenerate to another. For the topologist's point of view, you might also look at any paper containing the words "braid group" and "Nielsen classes," or McReynolds's exposition of Thurston's proof of the congruence subgroup property for the braid group.
-
Thanks for your helpful answer. My goal is to (eventually) study the arithmetic side of Hurwitz schemes, but I think it wouldn't hurt to have a good understanding of what happens over the complex numbers. Moreover, I'm interested in just learning the general theory at the moment. So compactification, connectedness, irreducibility, abstract moduli spaces point of view, etc. Could you elaborate a bit on what you mean by viewing these guys as subvarieties of M_g or as covers of M_{0,n}? – Ariyan Javanpeykar Sep 15 2011 at 14:43
By the way, the only book I can find written by Harris and Morrison is "Moduli of Curves". I couldn't find anything about Hurwitz spaces in this book. Is there another book I'm unaware of? – Ariyan Javanpeykar Sep 15 2011 at 15:00
"Moduli of Curves" is the right book -- look up "admissible covers." A Hurwitz space parametrizes branched covers C -> P^1. If you forget the cover and just remember C, you get a point of M_g. If you forget C and just remember the locations of the r branch points, you get a point of M_{0,r}. – JSE Sep 15 2011 at 15:03
The degree of the map to M_{0,4} is called the Hurwitz number, and looking at (the beginning of) some of Ravi Vakil's papers about Hurwitz numbers would also be a good option. – JSE Sep 15 2011 at 15:04
Nice. This gives me something to do for a while. – Ariyan Javanpeykar Sep 15 2011 at 15:07
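JSE's remark about Hurwitz numbers can be checked by brute force in the smallest interesting case, degree-$3$ simple covers of $\mathbf{P}^1$ branched over $4$ points: count $4$-tuples of transpositions in $S_3$ with product the identity that generate a transitive subgroup, and divide by $|S_3|$. A small sketch (the permutation encoding and helper names are ad hoc):

```python
# Brute-force count of degree-3 simple covers of P^1 with 4 branch points:
# 4-tuples of transpositions in S_3 with product the identity, generating
# a transitive subgroup, counted modulo |S_3| = 6.
from functools import reduce
from itertools import product

IDENT = (0, 1, 2)
TRANSPOSITIONS = [(1, 0, 2), (2, 1, 0), (0, 2, 1)]  # (12), (13), (23)

def compose(p, q):
    """(p o q)(i) = p(q(i)) for permutations stored as tuples."""
    return tuple(p[q[i]] for i in range(3))

def transitive(ts):
    """Does the subgroup generated by ts act transitively on {0, 1, 2}?"""
    orbit = {0}
    while True:
        bigger = orbit | {t[x] for t in ts for x in orbit}
        if bigger == orbit:
            return len(orbit) == 3
        orbit = bigger

tuples = [ts for ts in product(TRANSPOSITIONS, repeat=4)
          if reduce(compose, ts) == IDENT and transitive(ts)]
print(len(tuples) // 6)   # the Hurwitz number: 4
```

The 27 product-identity tuples lose only the 3 constant tuples (a single transposition is not transitive), giving 24/6 = 4.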
One modern way to view admissible covers is via twisted stable maps to the stack $BG$ (where $G$ is the Galois group of the Galois closure of the cover). The objects of the stack $M_{0,n}(BS_d)$ can be regarded as degree $d$ covers of $\mathbb{P}^1$ ramified over the $n$ marked points. The ramification type is determined by the evaluation map $ev:M_{0,n}(BS_d)\to IBS_d$, where the components of the inertia stack are indexed by conjugacy classes of $G$. The stable map compactification $\overline{M}_{0,n}(BS_d)$ is essentially the same as the admissible covers compactification; in fact it is a bit better behaved (I think it is the normalization of the latter).
-
http://math.stackexchange.com/questions/19630/what-is-an-intermediate-definition-for-a-tangent-to-a-curve?answertab=votes
# What is an intermediate definition for a tangent to a curve?
Most students come to calculus with an intuitive sense of what a tangent line should be for a curve. It is easy enough to give a definition of a tangent to a circle that is both elementary and rigorous. (A line that intersects a circle exactly once.) Yet, when we talk about a curve, such as a polynomial, I think one must talk about infinity to give a rigorous definition. This is not a bad thing... It can help motivate a lesson on the formal definition of a tangent line at a point... But, what intermediate definition could I use to help those students who have no intuition about the matter get an idea of what we are going for before we talk about secants and such?
I have looked at some textbooks and online and I find saying a tangent line is one that "just touches" problematic-- it is the kind of phrase that is quite meaningless... Even in the setting of un-rigorous definitions. Yes, I will give examples, but I would like to do better ... I think saying that a tangent line is an arrow that points in the direction that the curve is going at that instant might make sense... Though it makes everything 'directional' and that might confuse them later.
What is the best intermediate definition you have seen?
-
## 4 Answers
One possible way to introduce a tangent to a curve at a point is as follows. Take the point on the curve where we want to define the tangent. Consider the set of all lines passing through that point. There will be a unique line such that, in a neighborhood of the point, the curve lies entirely on one side of the line. But of course this way of motivating will fail if we are near a point of inflexion.
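For the parabola this one-sided criterion can be checked directly, and the inflexion caveat shows up for the cubic; a small numerical sketch (grid and sample point chosen arbitrarily):

```python
# One-sided criterion for a tangent line, checked on sample points.
xs = [i / 100.0 - 1.0 for i in range(201)]   # grid on [-1, 1]

# Parabola y = x^2, tangent at x0: y = x0^2 + 2*x0*(x - x0).
x0 = 0.5
diffs = [x * x - (x0 * x0 + 2 * x0 * (x - x0)) for x in xs]
print(all(d >= 0 for d in diffs))   # curve stays on one side of the tangent

# Cubic y = x^3 at its inflexion point 0: the tangent is y = 0.
cubic = [x ** 3 for x in xs]
print(any(d > 0 for d in cubic) and any(d < 0 for d in cubic))  # both sides
```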
-
A tangent is a line that intersects the curve once, at least if it is made short enough, but there's a direction such that if you rotate the line about the intersection point in that direction, no matter how little you rotate, it'll hit the curve again. This definition apparently goes back to Euclid in some form, and it works at inflection points, but fails in dimensions higher than $2$. (I assume you're only talking about plane curves.) I think we need the derivative to be continuous for it to work, and we also need it not to be constant in a neighborhood of the intersection point.
I don't see what's wrong with giving a physical definition though: it's where a particle moving along the curve would go if there were suddenly no forces acting on it (Newton's first law). Or, even more physically, it's the line a ball would begin to travel in if you threw the ball in an arc corresponding to the curve and let it go at the point you're interested in. This definition has the advantage that it does not depend on the background mathematics: it is (to a first approximation) an empirical fact about the universe that this concept is consistent.
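For the circle, the rotation criterion can be verified in coordinates; a small sketch (the intersection count comes from the quadratic $(1+m^2)x^2 + 2mx = 0$ for the line $y = 1 + mx$ through $(0,1)$):

```python
# Rotate the tangent line y = 1 about the point of tangency (0, 1) on the
# unit circle x^2 + y^2 = 1. A line y = 1 + m*x meets the circle where
# (1 + m^2) x^2 + 2 m x = 0, i.e. at x = 0 and x = -2m/(1 + m^2).
def num_intersections(m):
    roots = {0.0}
    if m != 0:
        roots.add(-2.0 * m / (1.0 + m * m))
    return len(roots)

print(num_intersections(0.0))                              # tangent: 1 point
print([num_intersections(m) for m in (1e-6, -1e-6, 0.1)])  # rotated: 2 points
```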
-
I like the physical interpretation you give here. – Mike Spivey Jan 31 '11 at 3:49
(i) When you say "move it by a little bit", do you mean "rotate it by a little bit about the point of intersection"? Otherwise I can't see how it works. (ii) Even then, it doesn't work for points of inflexion. You can rotate a tangent in one direction about a point of inflexion without intersecting the curve more than once. – TonyK May 31 '11 at 16:33
@TonyK: yes. I guess I should be more precise: this should work at an inflection point if we rotate in both directions, right? – Qiaochu Yuan May 31 '11 at 16:39
Maybe, but I don't like to think what might happen if the second derivative is not continuous. And, as you say, if the first derivative is not continuous, then the definition fails completely. – TonyK May 31 '11 at 18:16
My favorite calculus-level definition (which is fairly rigorous, not hard to motivate, and does not rely on pictures) is that the tangent line to $y=f(x)$ at $x_0$ is the (unique) line that goes through $(x_0,f(x_0))$ and affords the best linear approximation to $y=f(x)$ near $x_0$. That is, if you let $g(x)$ be the point on the line with coordinate $x$, then $f(x)-g(x)$ goes to zero faster than $x-x_0$ goes to $0$; that is, $\frac{f(x)-g(x)}{x-x_0} \to 0$ as $x\to x_0$. This captures the idea that the tangent is the line that "best approaches" the graph when you are near the point $x_0$.
See about halfway through this previous answer (starting in paragraph 7, where it says "Now, does the line that join $A$ and $B$ really have a slope that approaches the slope of the tangent?").
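The defining limit can be tested numerically for $f(x)=x^2$ at $x_0=1$; a small sketch (slope $2$ is the true derivative, slope $3$ a deliberately wrong candidate):

```python
# Best-linear-approximation test: (f(x) - g(x))/(x - x0) should tend to 0
# for the tangent line, and stay bounded away from 0 for any other line.
f = lambda x: x * x
x0 = 1.0
tangent = lambda x: f(x0) + 2.0 * (x - x0)   # slope f'(1) = 2
other = lambda x: f(x0) + 3.0 * (x - x0)     # wrong slope

for h in (1e-1, 1e-2, 1e-3):
    r_tan = (f(x0 + h) - tangent(x0 + h)) / h   # equals h exactly here
    r_oth = (f(x0 + h) - other(x0 + h)) / h     # tends to -1, not 0
    print(h, r_tan, r_oth)
```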
-
According to me, a tangent is a line perpendicular to the radius of curvature of the curve at a given point.
-
You have exchanged the problem of finding the line closest to the curve with the problem of finding the circle closest to the curve. This is a step backwards. – TonyK May 31 '11 at 16:36
This seems like a silly definition. It is harder to explain what the "radius of curvature" (if this means what I think it means) is than to explain what a tangent line is. – Qiaochu Yuan May 31 '11 at 16:37
http://mathoverflow.net/questions/118156/asymptotics-of-arithmetic-fuchsian-groups-and-shimura-curves
|
## Asymptotics of arithmetic Fuchsian groups and Shimura curves.
I'm interested in what is known/expected about some families of arithmetic Fuchsian groups. Here is the simplest family that I'm interested in: Let $E = Z[\omega]$, where $\omega = e^{2 \pi i / 3}$. Consider the family of $Z$-valued binary Hermitian forms on $E$: $$H_n(x,y) = x \bar x - n y \bar y, \text{ for all } x,y \in E.$$ Here, we let $n$ range over all positive integers which are not norms from $E$, i.e., all positive integers for which $H_n$ does not nontrivially represent zero.
Let $\Gamma_n$ be the special unitary group of $H_n$, i.e., $$\Gamma_n = \{ g \in SL_2(E) : H_n( g(x,y)) = H_n(x,y) \text{ for all } x,y \in E \}.$$ Here $g(x,y)$ denotes the effect of matrix multiplication.
As $H_n$ is an indefinite Hermitian form, these groups $\Gamma_n$ are discrete subgroups of the Lie group $SU(1,1)$. There's almost certainly another way of seeing these groups as coming from orders in indefinite quaternion algebras over $Q$. So I guess that these groups $\Gamma_n$ yield a family of Shimura curves of increasing "complexity" measured in any natural way. Associated to the groups $\Gamma_n$ are compact orbifold Riemann surfaces $X_n$, each with invariants including genus $g$, and a series of $t$ marked orbifold points, with indices $m_1, \ldots, m_t$.
What do we know about the following: How do we expect the genus $g = g(X_n)$ to behave as $n \rightarrow \infty$? This I anticipate is known or well-studied.
But also, how does the family of indices $m_1, \ldots, m_t$ behave asymptotically? For example, how many orbifold points of index $3$ do we expect on $X_n$, as $n \rightarrow \infty$?
By some hyperbolic geometry, we can relate these indices to the volume of $X_n$. Can we use this to get some heuristics?
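For concreteness, the relation I have in mind is the Gauss–Bonnet formula for a compact hyperbolic $2$-orbifold: $$\textrm{area}(X_n) = 2\pi \left( 2g(X_n) - 2 + \sum_{i=1}^{t} \left( 1 - \frac{1}{m_i} \right) \right).$$ So if the covolume of $\Gamma_n$ grows at a known rate, the genus and the multiset of indices $m_1, \ldots, m_t$ cannot both remain small.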
Am I asking something silly? I know that one can eliminate torsion by passing to a finite-index subgroup, but I would anticipate that torsion doesn't disappear in a family of groups such as the $\Gamma_n$.
References and speculations are welcome!
-
I think this paper might be relevant to your question: ams.org/mathscinet-getitem?mr=1108918 – Agol Jan 6 at 0:23
http://mathhelpforum.com/differential-geometry/130353-definition-branch-point.html
|
# Thread:
1. ## Definition of a branch point
I can't seem to find a good definition for a branch point. I have several thoughts and have found various definitions but would like to hear MHFs opinions on this one!
I'm thinking something along the lines of...
A branch point is a point of an (analytic?) function that is undefined/discontinuous?
Or perhaps...
A branch point(s?) $z$ is a point(s?) of an analytic function $f$ such that $f$ is holomorphic everywhere except at $z$? (or perhaps near z?)
2. A point is called a branch-point if analytic continuation over a closed curve around it can produce a different value upon reaching the starting point. Take for example $f(z)=\sqrt{z}$ starting at the point $z=1$ and analytically continuing the function around the unit circle using the differential equation $\frac{df}{dt}=1/2 i f(t)$ (just differentiate the function and let $z=e^{it}$). Upon integrating from zero to $2\pi$, $f(0)\ne f(2\pi)$. Therefore, there is a branch point in the unit circle.
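This continuation can be reproduced numerically; a rough sketch using forward Euler steps for the differential equation (step count chosen ad hoc):

```python
import cmath

# Integrate df/dt = (i/2) f(t), f(0) = 1, from t = 0 to 2*pi with Euler steps.
# The exact solution is f(t) = exp(i t / 2), so f(2*pi) = -1 != f(0):
# continuing sqrt(z) once around the unit circle flips its sign.
N = 200_000
dt = 2 * cmath.pi / N
f = 1 + 0j
for _ in range(N):
    f += 0.5j * f * dt

print(f)   # close to -1
```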
3. A multivalued function is a complex function that for a given $z$ can assume several different values. Typical examples are the $n$th root, the logarithm, the inverse circular functions and so on. A branch point of a multivalued function is a point in the complex plane from which two or more branches of the function depart. Consider for example the multivalued function $f(z)= \sqrt{z}$. The point $z=0$ is a branch point for it because, setting $z= \rho\cdot e^{i\cdot (\theta + 2k\pi)}$, we have...
$\sqrt{z}= \sqrt{\rho}\cdot e^{i\cdot (\frac {\theta}{2} + k\pi)}= \pm \sqrt{\rho}\cdot e^{i\cdot \frac {\theta}{2}}$ (1)
In (1) the sign '+' occurs for $k$ even and the sign '-' for $k$ odd, and the two signs correspond to two different branches that have the point $z=0$ in common, which is why it is called a 'branch point'...
Kind regards
$\chi$ $\sigma$
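The two branches in (1), and the jump of the principal square root across the negative real axis where they meet, can be seen with Python's cmath (values of $\rho$ and $\theta$ chosen arbitrarily):

```python
import cmath

rho, theta = 2.0, 0.7
# The two branch values from (1): k even gives '+', k odd gives '-'.
w = [cmath.sqrt(rho) * cmath.exp(1j * (theta / 2 + k * cmath.pi)) for k in (0, 1)]
print(abs(w[0] + w[1]))   # the branches differ exactly by a sign

# The principal sqrt is discontinuous across the cut on the negative real axis:
a = cmath.sqrt(complex(-1.0, 1e-12))    # just above the axis, approx  i
b = cmath.sqrt(complex(-1.0, -1e-12))   # just below the axis, approx -i
print(a, b)
```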
http://mathhelpforum.com/calculus/4915-polar-coordinate-integration.html
|
# Thread:
1. ## Polar Coordinate Integration
Hello,
Having problems setting integration limits due to symmetry.
I have two questions:
One asks to find the volume of the solid formed by the interior of the circle
r = cos(theta) capped by the plane z = x.
Because x = rcos(theta) and r = cos(theta), we have z = (cos(theta))^2
= (1+ cos2(theta))/2, which is the function we integrate.
The volume we wish to find is the right hand side of the lemniscate which is enclosed inside the circle.
For limits for the dr expression we can integrate from b = 1 to a = 0.
For the d(theta) limits I get confused. I was told to integrate the smallest interval possible and multiply by any symmetry factors. With that said, I can take beta = pi/2 and alpha = 0 and multiply by two due to symmetry about the x-axis. However, I know this is not the smallest interval as there must be a ray that caps the lemniscate function. I know how to find this ray, essentially a value for theta, by setting the functions r equal to each other and finding that theta. However, I have an expression in terms of r and in terms of z.
My confusion in this next question is much the same as the above:
The question asks to find the volume of the solid based on the interior of the cardioid r = 1 + cos(theta), capped by the cone z = 2 - r.
We have z = 2 - (1 + cos(theta)) = 1 - cos(theta). Essentially another cardioid with the same size but with the cusp pointing in the opposite direction. The graph I get is basically an infinity sign along the y-axis, that is with rays pi/2 and 3pi/2, and with vertices (1,0) and (-1,0).
Again I get confused by symmetry due to the negative and positive contributions of the polar graph. For the d(theta) limits I believe I can integrate from alpha = 0 to beta = pi/2 and multiply by four due to symmetry. I get confused for the dr limits as the solid formed by the two cardioids expands over all four quadrants. Do I integrate from a = 0 to b = 2 and multiply by two due to symmetry? Two is the intersection point of the
1 + cos(theta) cardioid on the x-axis. I really get confused with this question as I imagine you can integrate from 0 to pi/2, pi/2 to pi, pi to 3pi/2 and 3pi/2 to 2pi, breaking up the four quadrants and adding up the volumes of the four regions of the solid.
I would love any hints and any generalizations about symmetry considerations with polar coordinates.
Thank you.
2. Originally Posted by jcarlos
Hello,
Having problems setting integration limits due to symmetry.
I have two questions:
One asks to find the volume of the solid formed by the interior of the circle
r = cos(theta) capped by the plane z = x.
The region of integration is demonstrated below.
To find the surface area you need to find,
$\int_D \int \sqrt{1+f_x^2+f_y^2} dA$
Given the surface,
$z=x$ we find that,
$f_x=1$ and $f_y=0$,
Thus,
$\sqrt{1+f_x^2+f_y^2}=\sqrt{1+1^2+0^2}=\sqrt{2}$
Thus,
$\int_D \int \sqrt{2} dA=\sqrt{2}\int_D\int dA$.
But,
$\int_D \int dA$ is the area of the region. Which is,
$\frac{\pi}{4}$
Thus,
$\frac{\pi\sqrt{2}}{4}$
Attached Thumbnails
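The area $\pi/4$ (and hence the answer $\pi\sqrt{2}/4$) can be confirmed with a quick midpoint-rule computation; a small numerical sketch:

```python
import math

# Area inside r = cos(theta): (1/2) * integral of cos(theta)^2 d(theta),
# theta from -pi/2 to pi/2, computed by the midpoint rule.
n = 100_000
h = math.pi / n
area = sum(0.5 * math.cos(-math.pi / 2 + (k + 0.5) * h) ** 2
           for k in range(n)) * h

print(area, math.pi / 4)     # the area of the disk of radius 1/2
print(math.sqrt(2) * area)   # the surface area sqrt(2)*pi/4
```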
3. Originally Posted by jcarlos
My confusion in this next question is much the same as the above:
The question asks to find the volume of the solid based on the interior of the cardioid r = 1 + cos(theta), capped by the cone z = 2 - r.
We have z = 2 - (1 + cos(theta)) = 1 - cos(theta). Essentially another cardioid with the same size but with the cusp pointing in the opposite direction. The graph I get is basically an infinity sign along the y-axis, that is with rays pi/2 and 3pi/2, and with vertices (1,0) and (-1,0).
The region of integration is demonstrated below.
To simplify the problem, divide it into two parts. Calculate the right region and then add the left region.
The cone is, $z=2-(x^2+y^2)$??? But that is a paraboloid. So I presume you meant to say $z=2-\sqrt{x^2+y^2}$. Thus, you need to find,
$\int_{A_1} \int 2-x^2-y^2 dA +\int_{A_2} \int 2-x^2-y^2 dA$ after the substitution $x^2+y^2=r^2$ you end with (and remember to multiply by $r$ again),
$\int_{A_1} \int (2-r)r dr d\theta +\int_{A_2}\int (2-r)r dr d\theta$
Now you need to set the limits. Which are,
$\int_{3\pi/2}^{\pi/2} \int_0^{1-\cos \theta} (2-r)r dr d\theta +\int_{\pi/2}^{3\pi/2} \int_0^{1+\cos \theta} (2-r)r dr d\theta$
Attached Thumbnails
4. ## Polar coordinate Integration
Perfect Hacker,
Thank you for your explanations; the question with the cardioids is now clear.
However, I do not understand your reply to my first question. I don't see what the surface area has to do with the solid region inside the circle
r = cos(theta) and the plane z = x.
The polar coordinate integration requires dA = rdrd(theta), which is not used in your explanation.
The graph I obtain has the right hand of the lemniscate inside of the circle.
I used z = x = rcos(theta)
Therefore, z = cos(theta)cos(theta) = (1 + cos2(theta))/2. This is the function we integrate, right?
So the limits can be from r = cos(theta) to r = 0 right?!
I am not sure about the other limits though, would it be from pi/2 to 0 and multiply by two because of symmetry?
Thanks again.
5. ## Polar Coordinate Integration
Perfect Hacker,
I finished doing the question with the two cardioids. Unless I have made some mistakes, working out the integrals for 2r - r^2 was very laborious. I got an answer of -28/9 for the volume of the solid formed by the two cardioids. Does a negative volume make any sense?
Thank you again.
6. Forgive me. In the first question I assumed you were speaking of surface area, not volume. I will respond back later.
7. Originally Posted by jcarlos
Perfect Hacker,
I finished doing the question with the two cardioids. Unless I have made some mistakes, working out the integrals for 2r - r^2 was very laborious. I got an answer of -28/9 for the volume of the solid formed by the two cardioids. Does a negative volume make any sense?
Thank you again.
No, it cannot be negative unless the surface goes below the xy-plane. If you visualize the surface $z=2-\sqrt{x^2+y^2}$, it is above. It is very possible you made a mistake in this problem because it is a very long computation. It is also possible that I made a mistake with the first integral by writing the angle bounds incorrectly; I shall check that. I believe it might be simpler to express this as 4 integrals. Divide that region into 4 parts.
---
I believe the integral should have been,
$\int_{-\pi/2}^{\pi/2} \int_0^{1-\cos \theta} (2-r)r dr d\theta +\int_{\pi/2}^{3\pi/2} \int_0^{1+\cos \theta} (2-r)r dr d\theta$
When we, use the first iteration we get,
$\int_{-\pi/2}^{\pi/2} (1-\cos \theta)^2-\frac{1}{3}(1-\cos \theta)^3 d\theta \approx .5388$ (I used software for this part).
On the second integral,
$\int_{\pi/2}^{3\pi/2}(1+\cos \theta)^2-\frac{1}{3}(1+\cos \theta)^3 d\theta\approx .5388$
Your answer is the sum of these two.
---
I believe the symmetry you mentioned was a good idea.
The cone, $f(x,y)=2-\sqrt{x^2+y^2}$, is symmetric because $f(-x,-y)=f(x,y)$.
Thus, you could have calculated the right part and then multiplied by two, namely,
$2\int_{\pi/2}^{3\pi/2}(1+\cos \theta)^2-\frac{1}{3}(1+\cos \theta)^3 d\theta$
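Both pieces of the corrected integral can be checked numerically after doing the inner $r$-integral by hand, $\int_0^R (2-r)r\,dr = R^2 - \frac{R^3}{3}$; a small sketch (step counts chosen ad hoc):

```python
import math

def inner(R):
    # integral of (2 - r) r dr from 0 to R
    return R * R - R ** 3 / 3.0

def midpoint(g, a, b, n=200_000):
    h = (b - a) / n
    return sum(g(a + (k + 0.5) * h) for k in range(n)) * h

I1 = midpoint(lambda t: inner(1 - math.cos(t)), -math.pi / 2, math.pi / 2)
I2 = midpoint(lambda t: inner(1 + math.cos(t)), math.pi / 2, 3 * math.pi / 2)

print(I1, I2)    # each about 0.5388, matching the quoted value
print(I1 + I2)   # the total volume, about 1.0777
```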
8. ## Polar coordinate integration
Perfect Hacker,
Thank you again for the question with the cardioids. The limits you proposed the first time were incorrect. With the new limits you suggested, I got the same volume as you did by either adding up the left and the right side, or by computing either and multiplying by two due to symmetry.
Like I said before, quite a laborious question with all the trigonometric substitutions.
I am confident that I did the other question correctly, the one you misunderstood as surface area instead of volume.
For d(theta) limits I integrated from 0 to pi/2 and multiplied by two due to symmetry, and for dr limits, I integrated from 0 to cos(theta). The function I integrated was (1+ cos2(theta))/2. Please correct me if I am wrong.
Thanks again.
9. Originally Posted by jcarlos
For d(theta) limits I integrated from 0 to pi/2 and multiplied by two due to symmetry, and for dr limits, I integrated from 0 to cos(theta). The function I integrated was (1+ cos2(theta))/2. Please correct me if I am wrong.
Seems right. Just let me tell you a story about symmetry. You should not rely on it so much, because it is based on the symmetry of the region (like here) AND the symmetry of the solid above the region. Once you have both, then you can proceed with symmetry.
http://physics.stackexchange.com/questions/tagged/hilbert-space+wavefunction
|
# Tagged Questions
2answers
114 views
### Vector representation of wavefunction in quantum mechanics?
I am new to quantum mechanics, and I just studied some parts of "wave mechanics" version of quantum mechanics. But I heard that wavefunction can be represented as vector in Hilbert space. In my eye, ...
1answer
172 views
### What does it mean for something to be a ket?
Ok so I will provide the following example, which I am choosing at random from Sabio et al(2010): \psi(r,\phi)~=~\left[ \begin{array}{c} A_1r\sin(\theta-\phi)\\ ...
1answer
261 views
### Wave function and Dirac bra-ket notation
Would anyone be able to explain the difference, technically, between wave function notation for quantum systems e.g. $\psi=\psi(x)$ and Dirac bra-ket vector notation? How do you get from one to the ...
1answer
128 views
### Once I have the eigenvalues and the eigenvectors, how do I find the eigenfunctions?
I am using Mathematica to construct a matrix for the Hamiltonian of some system. I have built this matrix already, and I have found the eigenvalues and the eigenvectors, I am uncertain if what I did ...
2answers
149 views
### In Dirac notation, what do the subscripts represent? (Solution for particle in a box in mind)
So the set of solutions for the particle in a box is given by $$\psi_n(x) = \sqrt{\frac{2}{L}}\sin\left(\frac{n\pi x}{L}\right).$$ In Dirac notation $\langle\psi_i|\psi_j\rangle=\delta_{ij}$ assuming $|\psi_i\rangle$ ...
3answers
215 views
### Normalisation factor $\psi_0$ for wave function $\psi = \psi_0 \sin(kx-\omega t)$
I know that if I integrate probabilitlity $|\psi|^2$ over a whole volume $V$ I am supposed to get 1. This equation describes this. $$\int \limits^{}_{V} \left|\psi \right|^2 \, \textrm{d} V = 1\\$$ ...
2answers
127 views
### What does the quantum state of a system tell us about itself?
In quantum mechanics, quantum state refers to the state of a quantum system. A quantum state is given as a vector in a vector space, called the state vector. The state vector theoretically ...
2answers
334 views
### What's the physical significance of the inner product of two wave functions in quantum region?
I am a reading a book for beginners of the quantum mechanics. In one section, the author shows the inner product of two wave functions $\langle\alpha\vert\beta\rangle$. I am wondering what's the ...
1answer
343 views
### Where does the wave function of the universe live? Please describe its home
Where does the wave function of the universe live? Please describe its home. I think this is the Hilbert space of the universe. (Greater or lesser, depending on which church you belong to.) Or maybe ...
http://mathhelpforum.com/differential-geometry/180725-integral.html
|
# Thread:
1. ## an integral
Hi!
I need the following to complete a proof.
$\int _0^\infty \frac{\sin^2(x)}{x^2}dx = \frac{\pi}{2}$
I've checked it in Maple and it's true. All I have left is to actually show it (without referring to Maple ofc).
Thanks!
2. I expect you would have to use the residue theorem...
3. Originally Posted by mgarson
Hi!
I need the following to complete a proof.
$\int _0^\infty \frac{\sin^2(x)}{x^2}dx = \frac{\pi}{2}$
I've checked it in Maple and it's true. All I have left is to actually show it (without referring to Maple ofc).
Thanks!
A solution of the integral based on the Laplace Transform is illustrated here...
http://www.mathhelpforum.com/math-he...rm-170211.html
Kind regards
$\chi$ $\sigma$
4. Here is a third method using the Laplace transform
Consider the function
$f(t)=\int_{0}^{\infty}\frac{\sin^2(tx)}{x^2}dx$
Note that
$f(1)=\int_{0}^{\infty}\frac{\sin^2(x)}{x^2}dx$
is the integral you are looking for.
Using a trig Identity we get
$f(t)=\int_{0}^{\infty}\frac{\sin^2(tx)}{x^2}dx= \int_{0}^{\infty} \frac{1-\cos(2tx)}{2x^2}\,dx$
Now if we take the Laplace transform with respect to t we get
$\mathcal{L}(f)=\frac{1}{2}\int_{0}^{\infty} \left( \frac{1}{x^2s}-\frac{s}{x^2(s^2+4x^2)} \right) dx$
Now by partial fractions on the 2nd term we get
$\frac{s}{x^2(s^2+4x^2)}=\frac{1}{sx^2}-\frac{4}{s(s^2+4x^2)}$
Plugging this back in gives
$\mathcal{L}(f)=\frac{2}{s}\int_{0}^{\infty} \frac{1}{(s^2+4x^2)} dx=\frac{1}{s^2} \cdot \tan^{-1}\left( \frac{2x}{s}\right) \bigg|_{0}^{\infty}$
$\mathcal{L}(f)=\frac{\pi }{2s^2}$
Now finally take the inverse transform to get
$f(t)=\frac{\pi}{2}t \implies f(1)=\frac{\pi}{2}$
5. Another way: using integration by parts with $u=\sin^2 x,\; dv=dx/x^2$:
$\displaystyle\int_0^{+\infty}\dfrac{\sin ^2 x}{x^2}\,dx=\left[-\dfrac{\sin ^2 x}{x}\right]_0^{+\infty}+\displaystyle\int_0^{+\infty}\dfrac{\sin 2 x}{x}\,dx=\displaystyle\int_0^{+\infty}\dfrac{\sin 2 x}{x}\,dx$
Using the substitution $t=2x$ we obtain Dirichlet's integral:
$\displaystyle\int_0^{+\infty}\dfrac{\sin 2 x}{x}dx=\displaystyle\int_0^{+\infty}\dfrac{\sin t}{t}dt=\dfrac{\pi}{2}$
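All of the above can be sanity-checked numerically without Maple. A rough pure-Python sketch; the cutoff $X$ and the $1/(2X)$ tail estimate (which uses the fact that $\sin^2$ averages to $1/2$) are ad-hoc choices:

```python
import math

def f(x):
    # sin(x)^2 / x^2, with the removable singularity at x = 0 filled in (limit is 1)
    return 1.0 if x == 0.0 else (math.sin(x) / x) ** 2

def simpson(g, a, b, n):
    # composite Simpson's rule; n must be even
    h = (b - a) / n
    s = g(a) + g(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * g(a + i * h)
    return s * h / 3

X = 1000.0
# tail: integral from X to infinity of sin^2(x)/x^2 is about 1/(2X),
# since sin^2 averages 1/2 and the oscillatory correction is O(1/X^2)
approx = simpson(f, 0.0, X, 200000) + 1.0 / (2.0 * X)
print(approx, math.pi / 2)
```

The two printed values agree to several decimal places, consistent with $\pi/2$.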
|
http://mathoverflow.net/questions/93438?sort=votes
|
## Euler class in the non-compact case
Does anyone have a reference for:
The Euler-class for an open non-compact manifold possibly with twisted coefficients (if the group action on the manifold does not preserve orientation) and/or for a compactification e.g. the one point compactification
jim
-
## 3 Answers
A version of the Euler class for oriented noncompact manifolds appears in the paper "Fixed-point theories on noncompact manifolds" by Shmuel Weinberger (you need a Riemannian metric of bounded geometry, I think): http://math.uchicago.edu/~shmuel/fpt.pdf The setup is similar to the one in compact case, but you need to have uniform bounds on the vector fields. The non-orientable case should be similar.
-
There is one version of the Euler class for oriented vector bundles on non-compact manifolds, the so-called relative Euler class. It requires that the vector bundle admit a section which does not vanish outside a compact set. The relative Euler class is then an element of the cohomology with compact supports, and as such, it depends on the choice of the section that is nontrivial outside that compact set.
Formally, if $E\to M$ is an oriented vector bundle with Thom class $\tau$ and $s:M\to E$ is a section that does not vanish outside a compact set, then the relative Euler class is
$$\boldsymbol{e}(E, s):=s^*\tau(E)\in H^r_c(M),$$
$r$ being the (real) rank of $E$. $\newcommand{\be}{\boldsymbol{e}}$ The class $\be(E,s)$ depends only on the homotopy class of $s$ in the space of sections nontrivial outside a compact set.
If $M$ happens to be oriented, then $\be(E,s)$ is the Poincaré dual of the cycle determined by the zero set of $s$.
Here is a good example to think about. Suppose that $L\to D$ is the trivial complex line bundle over the open unit disk in the plane. Suppose $s(z)=z^k$, $k\geq 0$. Then
$$\be(L,s)\in H^2_c(D)= H^2(D,\partial D)$$
and
$$\langle \be(L,s), [D,\partial D]\rangle =k.$$
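One can see the integer $k$ in this example concretely as a winding number: the section $s(z)=z^k$, restricted to a circle around the compact set where it vanishes, winds $k$ times around the origin. A quick numerical sketch (the sample count $n$ is an arbitrary choice):

```python
import cmath
import math

def winding_number(s, n=4096):
    # total change of arg(s(z)) as z traverses the unit circle once,
    # divided by 2*pi; phase jumps across the branch cut are unwrapped
    total = 0.0
    prev = cmath.phase(s(1.0 + 0.0j))
    for i in range(1, n + 1):
        cur = cmath.phase(s(cmath.exp(2j * math.pi * i / n)))
        d = cur - prev
        if d > math.pi:
            d -= 2 * math.pi
        elif d < -math.pi:
            d += 2 * math.pi
        total += d
        prev = cur
    return round(total / (2 * math.pi))

k = 5
w = winding_number(lambda z: z ** k)
print(w)  # 5
```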
-
For the untwisted case see Dold's "Lectures on algebraic topology", section VIII.11. If $N$ is an oriented topological submanifold of an oriented manifold $M$ of codimension $k$, then one looks at the Thom class in $H^k(M, M-N)$ and then restricts it to $H^k(N)$ to get the Euler class. Compactness of the submanifold $N$ is never needed.
-
|
http://planetmath.org/uniformspace
|
# uniform space
A uniform structure (or uniformity) on a set $X$ is a non empty set $\mathcal{U}$ of subsets of $X\times X$ which satisfies the following axioms:
1. Every subset of $X\times X$ which contains a set of $\mathcal{U}$ belongs to $\mathcal{U}$.
2. Every finite intersection of sets of $\mathcal{U}$ belongs to $\mathcal{U}$.
3. Every set of $\mathcal{U}$ is a reflexive relation on $X$ (i.e. contains the diagonal).
4. If $V$ belongs to $\mathcal{U}$, then $V^{{\prime}}=\{(y,x):(x,y)\in V\}$ belongs to $\mathcal{U}$.
5. If $V$ belongs to $\mathcal{U}$, then there exists $V^{{\prime}}$ in $\mathcal{U}$ such that, whenever $(x,y),(y,z)\in V^{{\prime}}$, then $(x,z)\in V$ (i.e. $V^{{\prime}}\circ V^{{\prime}}\subseteq V$).
The sets of $\mathcal{U}$ are called entourages or vicinities. The set $X$ together with the uniform structure $\mathcal{U}$ is called a uniform space.
If $V$ is an entourage, then for any $(x,y)\in V$ we say that $x$ and $y$ are $V$-close.
Every uniform space can be considered a topological space with a natural topology induced by the uniform structure. The uniformity, however, provides in general a richer structure, which formalizes the concept of relative closeness: in a uniform space we can say that $x$ is as close to $y$ as $z$ is to $w$, which makes no sense in a topological space. It follows that uniform spaces are the most natural environment for uniformly continuous functions and Cauchy sequences, in which these concepts are naturally involved.
Examples of uniform spaces are metric spaces, topological groups, and topological vector spaces.
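For instance, in the metric uniformity on $\mathbb{R}$ the entourages are generated by the sets $V_\varepsilon=\{(x,y):|x-y|<\varepsilon\}$, and axiom 5 is witnessed by $V'=V_{\varepsilon/2}$ via the triangle inequality. A brute-force sanity check on a random sample of points (the sample size and $\varepsilon$ are arbitrary choices):

```python
import itertools
import random

def V(eps):
    # entourage of the metric uniformity on R: pairs at distance < eps
    return lambda x, y: abs(x - y) < eps

random.seed(0)
pts = [random.uniform(-5, 5) for _ in range(60)]
eps = 1.0
inner, outer = V(eps / 2), V(eps)

# axiom 5, brute-forced on the sample: if (x,y) and (y,z) are in V_{eps/2},
# then (x,z) is in V_eps -- this is just the triangle inequality
ok = all(outer(x, z)
         for x, y, z in itertools.product(pts, repeat=3)
         if inner(x, y) and inner(y, z))
print(ok)  # True
```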
Type of Math Object:
Definition
Major Section:
Reference
## Mathematics Subject Classification
54E15 Uniform structures and generalizations
## Info
Owner: mps
Added: 2002-06-10 - 10:20
Author(s): mps
## Corrections
clarify by mps ✓
vicinity by CWoo ✓
## Versions
(v12) by mps 2013-03-22
|
http://en.wikipedia.org/wiki/Affine_function
|
# Affine transformation
(Redirected from Affine function)
An image of a fern-like fractal that exhibits affine self-similarity. Each of the leaves of the fern are related to one another by an affine transformation. For instance, the red leaf can be transformed into the blue leaf by a combination of reflection, rotation, expansion, and translation.
In geometry, an affine transformation or affine map[1] or an affinity (from the Latin, affinis, "connected with") is a transformation which preserves straight lines (i.e., all points lying on a line initially still lie on a line after transformation) and ratios of distances between points lying on a straight line (e.g., the midpoint of a line segment remains the midpoint after transformation). It does not necessarily preserve angles or lengths, but does have the property that sets of parallel lines will remain parallel to each other after an affine transformation.
Examples of affine transformations include translation, geometric contraction, expansion, homothety, reflection, rotation, shear mapping, similarity transformation, and spiral similarities and compositions of them.
An affine transformation is equivalent to a linear transformation followed by a translation.
## Mathematical Definition
An affine map[1] $f:\mathcal{A} \;\to\; \mathcal{B}$ between two affine spaces is a map on the points that acts linearly on the vectors (that is, the vectors between points of the space). In symbols, f determines a linear transformation φ such that, for any pair of points $P,\, Q \,\in\, \mathcal{A}$:
$\overrightarrow{f(P)~f(Q)} = \varphi(\overrightarrow{PQ})$
or
$f(Q)-f(P) = \varphi(Q-P) \,$.
We can interpret this definition in a few other ways, as follows.
If an origin $O \in \mathcal{A}$ is chosen, and $B\,$ denotes its image $f(O) \,\in\, \mathcal{B}$, then this means that for any vector $\vec{x}$:
$f: (O+\vec{x}) \mapsto (B+\varphi(\vec{x})).$
If an origin $O' \,\in\, \mathcal{B}$ is also chosen, this can be decomposed as an affine transformation $g : \mathcal{A} \,\to\, \mathcal{B}$ that sends $O \;\mapsto\; O'$, namely
$g: (O+\vec{x}) \mapsto (O'+\varphi(\vec{x})),$
followed by the translation by a vector $\vec{b} \;=\; \overrightarrow{O'B}$.
The conclusion is that, intuitively, $f$ consists of a translation and a linear map.
### Alternative definition
Given two affine spaces $\mathcal{A}$ and $\mathcal{B}$, over the same field, a function $f:\, \mathcal{A} \;\to\; \mathcal{B}$ is an affine map if and only if for every family $\{(a_i,\, \lambda_i)\}_{i\in I}\,$ of weighted points in $\mathcal{A}$ such that
$\sum_{i\in I}\lambda_i \;=\; 1\, ,$
we have[2]
$f\left(\sum_{i\in I}\lambda_i a_i\right)=\sum_{i\in I}\lambda_i f(a_i)\, .$
In other words, $f\,$ preserves barycenters.
## Representation
As shown above, an affine map is the composition of two functions: a translation and a linear map. Ordinary vector algebra uses matrix multiplication to represent linear maps, and vector addition to represent translations. Formally, in the finite-dimensional case, if the linear map is represented as a multiplication by a matrix A and the translation as the addition of a vector $\vec{b}$, an affine map $f$ acting on a vector $\vec{x}$ can be represented as
$\vec{y} = f(\vec{x}) = A \vec{x} + \vec{b}.$
### Augmented matrix
Using an augmented matrix and an augmented vector, it is possible to represent both the translation and the linear map using a single matrix multiplication. The technique requires that all vectors are augmented with a "1" at the end, and all matrices are augmented with an extra row of zeros at the bottom, an extra column—the translation vector—to the right, and a "1" in the lower right corner. If A is a matrix,
$\begin{bmatrix} \vec{y} \\ 1 \end{bmatrix} = \begin{bmatrix} A & \vec{b} \ \\ 0, \ldots, 0 & 1 \end{bmatrix} \begin{bmatrix} \vec{x} \\ 1 \end{bmatrix}$
is equivalent to the following
$\vec{y} = A \vec{x} + \vec{b}.$
The above mentioned augmented matrix is called affine transformation matrix, or projective transformation matrix (as it can also be used to perform Projective transformations).
This representation exhibits the set of all invertible affine transformations as the semidirect product of $K^n$ and GL(n, K). This is a group under the operation of composition of functions, called the affine group.
Ordinary matrix-vector multiplication always maps the origin to the origin, and could therefore never represent a translation, in which the origin must necessarily be mapped to some other point. By appending the additional coordinate "1" to every vector, one essentially considers the space to be mapped as a subset of a space with an additional dimension. In that space, the original space occupies the subset in which the additional coordinate is 1. Thus the origin of the original space can be found at (0,0, ... 0, 1). A translation within the original space by means of a linear transformation of the higher-dimensional space is then possible (specifically, a shear transformation). The coordinates in the higher-dimensional space are an example of homogeneous coordinates. If the original space is Euclidean, the higher dimensional space is a real projective space.
The advantage of using homogeneous coordinates is that one can combine any number of affine transformations into one by multiplying the respective matrices. This property is used extensively in computer graphics, computer vision and robotics.
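The equivalence of the two representations can be checked directly against $\vec{y} = A \vec{x} + \vec{b}$; a small pure-Python sketch (the matrix, translation vector, and test point are arbitrary example values):

```python
def affine(A, b, x):
    # y = A x + b for a 2x2 matrix A and 2-vectors b, x
    return [A[0][0] * x[0] + A[0][1] * x[1] + b[0],
            A[1][0] * x[0] + A[1][1] * x[1] + b[1]]

def augmented(A, b, x):
    # the same map as a single 3x3 matrix acting on the augmented vector (x, 1)
    M = [[A[0][0], A[0][1], b[0]],
         [A[1][0], A[1][1], b[1]],
         [0,       0,       1  ]]
    v = [x[0], x[1], 1]
    y = [sum(M[i][j] * v[j] for j in range(3)) for i in range(3)]
    return y[:2]  # the last coordinate stays 1

A = [[0, 1], [2, 1]]      # example matrix
b = [-100, -100]          # example translation
x = [5, 7]                # example point
print(affine(A, b, x), augmented(A, b, x))  # both [-93, -83]
```

Combining several affine maps is then just a product of their augmented matrices.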
## Properties
An affine transformation preserves:
1. The collinearity relation between points; i.e., points which lie on the same line (called collinear points) continue to be collinear after the transformation.
2. Ratios of vectors along a line; i.e., for distinct collinear points $p_1,\, p_2,\, p_3,$ the ratio of $\overrightarrow{p_1p_2}$ and $\overrightarrow{p_2p_3}$ is the same as that of $\overrightarrow{f(p_1)f(p_2)}$ and $\overrightarrow{f(p_2)f(p_3)}$.
3. More generally barycenters of weighted collections of points.
An affine transformation is invertible if and only if A is invertible. In the matrix representation, the inverse is:
$\begin{bmatrix} A^{-1} & -A^{-1}\vec{b} \ \\ 0,\ldots,0 & 1 \end{bmatrix}$
The invertible affine transformations (of an affine space onto itself) form the affine group, which has the general linear group of degree n as subgroup and is itself a subgroup of the general linear group of degree n + 1.
The similarity transformations form the subgroup where A is a scalar times an orthogonal matrix. For example, if the affine transformation acts on the plane and if the determinant of A is 1 or −1 then the transformation is an equi-areal mapping. Such transformations form a subgroup called the equi-affine group.[3] A transformation that is both equi-affine and a similarity is an isometry of the plane taken with Euclidean distance.
Each of these groups has a subgroup of transformations which preserve orientation: those where the determinant of A is positive. In the last case this is in 3D the group of rigid body motions (proper rotations and pure translations).
If there is a fixed point, we can take that as the origin, and the affine transformation reduces to a linear transformation. This may make it easier to classify and understand the transformation. For example, describing a transformation as a rotation by a certain angle with respect to a certain axis may give a clearer idea of the overall behavior of the transformation than describing it as a combination of a translation and a rotation. However, this depends on application and context.
## Affine transformation of the plane
A central dilation. The triangles A1B1Z, A1C1Z, and B1C1Z get mapped to A2B2Z, A2C2Z, and B2C2Z, respectively.
Affine transformations in two real dimensions include:
• pure translations,
• scaling in a given direction, with respect to a line in another direction (not necessarily perpendicular), combined with translation that is not purely in the direction of scaling; taking "scaling" in a generalized sense it includes the cases that the scale factor is zero (projection) and negative; the latter includes reflection, and combined with translation it includes glide reflection,
• rotation combined with a homothety and a translation,
• shear mapping combined with a homothety and a translation, or
• squeeze mapping combined with a homothety and a translation.
To visualise the general affine transformation of the Euclidean plane, take labelled parallelograms ABCD and A′B′C′D′. Whatever the choices of points, there is an affine transformation T of the plane taking A to A′, and each vertex similarly. Supposing we exclude the degenerate case where ABCD has zero area, there is a unique such affine transformation T. Drawing out a whole grid of parallelograms based on ABCD, the image T(P) of any point P is determined by noting that T(A) = A′, T applied to the line segment AB is A′B′, T applied to the line segment AC is A′C′, and T respects scalar multiples of vectors based at A. [If A, E, F are collinear then the ratio length(AF)/length(AE) is equal to length(A′F′)/length(A′E′).] Geometrically T transforms the grid based on ABCD to that based in A′B′C′D′.
Affine transformations don't respect lengths or angles; they multiply area by a constant factor
area of A′B′C′D′ / area of ABCD.
A given T may either be direct (respect orientation), or indirect (reverse orientation), and this may be determined by its effect on signed areas (as defined, for example, by the cross product of vectors).
## Examples of affine transformations
### Affine transformations over the real numbers
Functions f : R → R, f(x) = mx + c with m and c constant, are commonplace affine transformations.
### Affine transformation over a finite field
The following equation expresses an affine transformation in GF(28) (with "+" representing XOR):
$\{\,a'\,\} = M\{\,a\,\} + \{\,v\,\},$
where the matrix $M$ and the vector $\{v\}$ are $M= \begin{bmatrix} 1&0&0&0&1&1&1&1 \\ 1&1&0&0&0&1&1&1 \\ 1&1&1&0&0&0&1&1 \\ 1&1&1&1&0&0&0&1 \\ 1&1&1&1&1&0&0&0 \\ 0&1&1&1&1&1&0&0 \\ 0&0&1&1&1&1&1&0 \\ 0&0&0&1&1&1&1&1 \end{bmatrix}, \qquad \{\,v\,\}= \begin{bmatrix} 1 \\ 1 \\ 0 \\ 0 \\ 0 \\ 1 \\ 1 \\ 0 \end{bmatrix}.$
For instance, the affine transformation of the element {a} = y7 + y6 + y3 + y = {11001010} in big-endian binary notation = {CA} in big-endian hexadecimal notation, is calculated as follows:
$a_0' = a_0 \oplus a_4 \oplus a_5 \oplus a_6 \oplus a_7 \oplus 1 = 0 \oplus 0 \oplus 0 \oplus 1 \oplus 1 \oplus 1 = 1$
$a_1' = a_0 \oplus a_1 \oplus a_5 \oplus a_6 \oplus a_7 \oplus 1 = 0 \oplus 1 \oplus 0 \oplus 1 \oplus 1 \oplus 1 = 0$
$a_2' = a_0 \oplus a_1 \oplus a_2 \oplus a_6 \oplus a_7 \oplus 0 = 0 \oplus 1 \oplus 0 \oplus 1 \oplus 1 \oplus 0 = 1$
$a_3' = a_0 \oplus a_1 \oplus a_2 \oplus a_3 \oplus a_7 \oplus 0 = 0 \oplus 1 \oplus 0 \oplus 1 \oplus 1 \oplus 0 = 1$
$a_4' = a_0 \oplus a_1 \oplus a_2 \oplus a_3 \oplus a_4 \oplus 0 = 0 \oplus 1 \oplus 0 \oplus 1 \oplus 0 \oplus 0 = 0$
$a_5' = a_1 \oplus a_2 \oplus a_3 \oplus a_4 \oplus a_5 \oplus 1 = 1 \oplus 0 \oplus 1 \oplus 0 \oplus 0 \oplus 1 = 1$
$a_6' = a_2 \oplus a_3 \oplus a_4 \oplus a_5 \oplus a_6 \oplus 1 = 0 \oplus 1 \oplus 0 \oplus 0 \oplus 1 \oplus 1 = 1$
$a_7' = a_3 \oplus a_4 \oplus a_5 \oplus a_6 \oplus a_7 \oplus 0 = 1 \oplus 0 \oplus 0 \oplus 1 \oplus 1 \oplus 0 = 1.$
Thus, {a′} = y7 + y6 + y5 + y3 + y2 + 1 = {11101101} = {ED}.
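The bit-by-bit computation above can be done for all eight bits at once. The matrix $M$ is circulant: row $i$ has ones in columns $i,\ i+4,\ i+5,\ i+6,\ i+7 \pmod 8$, and the vector $\{v\}$ packs into the constant byte 0x63. A sketch reproducing the worked example:

```python
def gf2_affine(a, c=0x63):
    # a'_i = a_i ^ a_{(i+4)%8} ^ a_{(i+5)%8} ^ a_{(i+6)%8} ^ a_{(i+7)%8} ^ c_i
    # ("+" in GF(2^8) is XOR; the matrix above is circulant with this pattern)
    bit = lambda x, i: (x >> i) & 1
    out = 0
    for i in range(8):
        v = (bit(a, i) ^ bit(a, (i + 4) % 8) ^ bit(a, (i + 5) % 8)
             ^ bit(a, (i + 6) % 8) ^ bit(a, (i + 7) % 8) ^ bit(c, i))
        out |= v << i
    return out

print(hex(gf2_affine(0xCA)))  # 0xed, matching the worked example
```

Note also that the image of 0 is just the constant byte 0x63, as expected of an affine (not linear) map.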
### Affine transformation in plane geometry
A simple affine transformation on the real plane
In ℝ2, the transformation shown at right is accomplished using the map given by:
$\begin{bmatrix} x \\ y\end{bmatrix} \mapsto \begin{bmatrix} 0&1\\ 2&1 \end{bmatrix}\begin{bmatrix} x \\ y\end{bmatrix} + \begin{bmatrix} -100 \\ -100\end{bmatrix}$
Transforming the three corner points of the original triangle (in red) gives three new points which form the new triangle (in blue). This transformation skews and translates the original triangle.
In fact, all triangles are related to one another by affine transformations. This is also true for all parallelograms, but not for all quadrilaterals.
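The claim that any non-degenerate triangle maps to any other under an affine transformation can be made constructive: subtracting the first vertex correspondence eliminates the translation, leaving a 2×2 linear system for the matrix. A sketch in exact rational arithmetic (the two triangles are arbitrary examples):

```python
from fractions import Fraction as F

def triangle_map(src, dst):
    # Solve A*s_i + b = d_i for the three vertex pairs.
    # Subtracting the first equation eliminates b; then A is fixed by
    # A*u = p, A*v = q, where u, v (resp. p, q) are edge vectors of src (dst).
    (s0, s1, s2), (d0, d1, d2) = src, dst
    u = (s1[0] - s0[0], s1[1] - s0[1]); v = (s2[0] - s0[0], s2[1] - s0[1])
    p = (d1[0] - d0[0], d1[1] - d0[1]); q = (d2[0] - d0[0], d2[1] - d0[1])
    det = F(u[0] * v[1] - u[1] * v[0])   # nonzero iff src has nonzero area
    A = [[(p[0] * v[1] - q[0] * u[1]) / det, (q[0] * u[0] - p[0] * v[0]) / det],
         [(p[1] * v[1] - q[1] * u[1]) / det, (q[1] * u[0] - p[1] * v[0]) / det]]
    b = [d0[0] - (A[0][0] * s0[0] + A[0][1] * s0[1]),
         d0[1] - (A[1][0] * s0[0] + A[1][1] * s0[1])]
    return A, b

def apply(A, b, x):
    return (A[0][0] * x[0] + A[0][1] * x[1] + b[0],
            A[1][0] * x[0] + A[1][1] * x[1] + b[1])

src = ((0, 0), (2, 0), (0, 3))
dst = ((1, 1), (4, 2), (0, 5))
A, b = triangle_map(src, dst)
print([apply(A, b, s) for s in src])  # the three vertices of dst
```

The same construction fails for general quadrilaterals: four point correspondences give eight equations for only six unknowns.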
## Notes
1. ^ a b Berger, Marcel (1987), Geometry I, p. 38.
2. Schneider, Philip K. & Eberly, David H. (2003). Geometric Tools for Computer Graphics. Morgan Kaufmann. p. 98. ISBN 978-1-55860-594-7.
3. Oswald Veblen (1918) Projective Geometry, volume 2, page 105–7
## References
• Berger, Marcel (1987), Geometry I, Berlin: Springer, ISBN 3-540-11658-3
• Nomizu, Katsumi; Sasaki, S. (1994), Affine Differential Geometry (New ed.), Cambridge University Press, ISBN 978-0-521-44177-3
• Sharpe, R. W. (1997). Differential Geometry: Cartan's Generalization of Klein's Erlangen Program. New York: Springer. ISBN 0-387-94732-9.
|
http://www.abstractmath.org/Word%20Press/?p=4845
|
# Gyre&Gimbleposts about math, language and other things that may appear in the wabe
## A visualization of a computation in tree form
2012/06/28 — SixWingedSeraph
To manipulate the demo below, you must have Wolfram CDF Player installed on your computer. It is available free from the Wolfram website.
This demonstration shows the step-by-step computation of the value of the expression $3x^2+2(1+y)$ shown as a tree. By moving the first slider from right to left, you go through the six steps of the computation. You may select the values of $x$ and $y$ with the second and third sliders. If you click on the plus sign next to a slider, a menu opens up that allows you to make the slider move automatically, show the values, and other things.
Note that subtrees on the same level are evaluated left to right. Parallel processing would save two steps.
The code for this demo is in the file Live evaluation of expressions in TreeForm 3. The code is ad-hoc. It might be worthwhile for someone to design a package that produces this sort of tree for any expression.
A previous post related to this post is Making visible the abstraction in algebraic notation.
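For readers without the CDF Player, the reduction the demo animates can be sketched as a small evaluator over an expression tree. The tuple encoding and step bookkeeping below are ad-hoc choices and may not match the demo's step count exactly:

```python
# each node is a number, a variable name, or a tuple (op, left, right);
# this encodes 3*x^2 + 2*(1+y)
TREE = ('+', ('*', 3, ('^', 'x', 2)),
             ('*', 2, ('+', 1, 'y')))

def evaluate(node, env, steps):
    if isinstance(node, tuple):
        op, l, r = node
        a = evaluate(l, env, steps)   # subtrees evaluated left to right
        b = evaluate(r, env, steps)
        if op == '+':
            val = a + b
        elif op == '*':
            val = a * b
        else:
            val = a ** b
        steps.append(f'{a} {op} {b} = {val}')  # record one reduction step
        return val
    return env.get(node, node)  # variable lookup, or the literal itself

steps = []
value = evaluate(TREE, {'x': 2, 'y': 3}, steps)
print(value)  # 3*4 + 2*4 = 20
print(steps)
```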
|
http://mathhelpforum.com/calculus/122860-intro-integrals.html
|
# Thread:
1. ## Intro to Integrals
Hi,
I'm stressing out because I've been on this problem for hours and I'm still getting nowhere.
"""
Let R denote the region that lies below the graph of $f(x)=2+4x^2$ on the interval [-1, 2].
Estimate the area of R using six approximating rectangles of the same width and
(a) left endpoints,
(b) right endpoints
"""
my prof doesn't really have notes on what to do and I'm really confused since it wants an estimate, and not an exact answer...
For (a), since the area of one block is:
$(3/n)(2+4(i(3/n))^2)$
but I'm not even sure if that's right...
2. Originally Posted by alleysan
Hi,
I'm stressing out because I've been on this problem for hours and I'm still getting nowhere.
"""
Let R denote the region that lies below the graph of $f(x)=2+4x^2$ on the interval [-1, 2].
Estimate the area of R using six approximating rectangles of the same width and
(a) left endpoints,
(b) right endpoints
"""
my prof doesn't really have notes on what to do and I'm really confused since it wants an estimate, and not an exact answer...
For (a), since the area of one block is:
$(3/n)(2+4(i(3/n))^2)$
but I'm not even sure if that's right...
It helps if you draw the graph and the rectangles, to give you a picture of what is happening...
If the region is $[-1, 2]$, this is a distance of $3$ units.
Since you need $6$ subintervals, each will be $\frac{1}{2}$ unit across. This is going to be the length of each rectangle.
How do you know the width? You evaluate $y$ at the chosen $x$ co-ordinate; the $y$ co-ordinate is the width (the height of the rectangle).
$y = 2 + 4x^2$.
If you are using the left-hand endpoints, then the first $x$ co-ordinate will be $-1$. So your $y$ co-ordinate is $2 + 4(-1)^2 = 6$.
So $A_1 = L\times W$
$= \frac{1}{2}\times 6$
$= 3\,\textrm{units}^2$.
The second rectangle will be $\frac{1}{2}$ a unit more on the $x$ axis.
So $x = -\frac{1}{2}$ and $y = 2 + 4\left(-\frac{1}{2}\right)^2 = 3$.
So $A_2 = L \times W$
$= \frac{1}{2}\times 3$
$= \frac{3}{2}\,\textrm{units}^2$.
The next rectangle will be at $x = 0$ and $y = 2 + 0^2 = 2$.
So $A_3 = L \times W$
$= \frac{1}{2} \times 2$
$= 1\,\textrm{unit}^2$.
Once you have all the areas you need, you add them up.
For the right-hand estimate, you start at the right-hand endpoint and go to the left $\frac{1}{2}$ a unit each time.
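The whole procedure above can be condensed into a few lines. For this problem the left sum is 15.5 and the right sum is 21.5; the exact area is 18, so the two estimates bracket it:

```python
def riemann(f, a, b, n, endpoint='left'):
    # left- or right-endpoint rectangle approximation with n equal subintervals
    dx = (b - a) / n
    shift = 0 if endpoint == 'left' else 1
    return sum(f(a + (i + shift) * dx) for i in range(n)) * dx

f = lambda x: 2 + 4 * x**2
left  = riemann(f, -1, 2, 6, 'left')    # heights at x = -1, -0.5, 0, 0.5, 1, 1.5
right = riemann(f, -1, 2, 6, 'right')   # heights at x = -0.5, 0, 0.5, 1, 1.5, 2
print(left, right)  # 15.5 21.5
```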
3. Ah crap,
I overcomplicated it.
Thought I was supposed to use something the prof taught us.
Thank you for explaining everything very well!
4. Other methods that give approximations to the area under the curve (and in many cases, better approximations) are the midpoint rule and the trapezoidal rule.
Rectangle method - Wikipedia, the free encyclopedia
Trapezoidal rule - Wikipedia, the free encyclopedia
5. Oh wow, thanks for the links!
|
http://physics.stackexchange.com/questions/32878/hall-conductivity-and-edge-response
|
# Hall conductivity and edge response
The hall conductivity $\sigma_{xy}$ seems to reflect to some extent the response of a system in direction $\hat{y}$ to certain perturbation (electric field for example) restricted in $\hat{x}$ direction.
My question is, does a nonzero $\sigma_{xy}$ imply anything about the physics of edge response, i.e. given a half-infinite system with an edge at $x=0$, what would be the effect in the $y$ direction? Would there be a current in the $y$ direction along the edge?
Thanks!
-
## 2 Answers
Your word "half-infinite" is quite tricky. A QHS is defined on a closed manifold, a 2-torus. If I open x direction, there is a chiral current in y direction (a circle); if I open both directions, there will be a chiral current around the edge. Yes, for your question, there is a current in y direction, you may just stretch the other sides to infinity.
-
Classically it doesn't do any of this--- it just says that you have a current in a direction different from the applied voltage. If you apply a voltage dropping in the x direction, you get an electric field in the x-direction (at first), so the current will have a y-component (at first). If you wait a little, this current will build up charges on the edges which will change the direction of the electric field in the interior, and this will continue until you redirect the current all in the x direction. At this point, you have a y-component of the electric field, and integrating this E field gives the hall voltage.
There are a lot of interesting quantum edge effects in quantum hall effect materials, but these are not accessible just from the classical $\sigma_{xy}$. This quantity is the tensor component that describes how the E-field direction is tilted away from the current direction. Mathematically speaking, the tensor that you multiply $E_x,E_y$ by to get $J_x,J_y$ has the conductivity $\sigma$ on the diagonal and $\pm\sigma_{xy}$ off the diagonal; the off-diagonal Hall part is antisymmetric, $\sigma_{yx}=-\sigma_{xy}$.
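The classical picture sketched above can be illustrated numerically. Conventions assumed here: diagonal $\sigma_{xx}$, antisymmetric off-diagonal Hall part; the numerical values are arbitrary:

```python
def rho_from_sigma(sxx, sxy):
    # invert the 2x2 conductivity tensor [[sxx, sxy], [-sxy, sxx]]
    # to get the resistivity tensor rho = sigma^{-1}
    det = sxx * sxx + sxy * sxy
    return [[sxx / det, -sxy / det],
            [sxy / det,  sxx / det]]

sxx, sxy = 1.0, 2.0
rho = rho_from_sigma(sxx, sxy)

# steady state: current forced along x only, J = (J, 0); E = rho J
J = 1.0
Ex, Ey = rho[0][0] * J, rho[1][0] * J
print(Ey / Ex, sxy / sxx)  # tangent of the Hall angle, two ways
```

The transverse field $E_y$ is the Hall field that builds up once charge stops accumulating on the edges.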
-
|
http://mathoverflow.net/questions/81149?sort=votes
|
## On lattice points “far inside” convex lattice polygons
Let $\mathcal{P}$ be a convex lattice polygon with $n$ vertices and let $\mathcal{L}$ be the set of all lattice points inside $\mathcal{P}$. For every $n \geq 5$, does there exist a point in $\mathcal{L}$ that also lies in the convex polygon bounded by (all) the diagonals of $\mathcal{P}$? How many such points are there? (By diagonals I mean, of course, the lines different from the sidelines of the polygon which connect two vertices of $\mathcal{P}$.)
I proved a while ago that for $n=5$ there is such a point in $\mathcal{L}$. I also managed to show this now for $n \geq 6$ using a similar argument, yet it got more involved and I still need to check for potential bugs. Any ideas for the general case?
-
1
I suspect there are simple counterexamples for n=6, so I may be misunderstanding something. Can you say more about what interior region is supposed to have a lattice point? Gerhard "Ask Me About System Design" Paseman, 2011.11.17 – Gerhard Paseman Nov 17 2011 at 10:07
What do you mean by "the convex pentagon bounded by (all) the diagonals of P"? Usually they do not bound a pentagon. – Fedor Petrov Nov 17 2011 at 11:41
Yes he does, in the first line. – Igor Rivin Nov 17 2011 at 13:10
Thanks, Igor, deleted my comment. – Joseph O'Rourke Nov 17 2011 at 13:22
I am still having difficulty understanding the phrase, "the convex polygon bounded by all the diagonals of $P$." In general, there is no convex polygon bounded by all the diagonals, if by "bounded" you mean, "forming the boundary of." There are many convex polygons, each bounded by a subset of the diagonals... – Joseph O'Rourke Nov 17 2011 at 17:56
## 1 Answer
For $n=5$, this has been shown by Eppstein:
D. Eppstein, Happy endings for flip graphs, Journal of Computational Geometry 1 (2010), no. 1, 3--28.
For odd $n>5$, one could consider the polygon bounded by the longest diagonals.
It may be defined as the intersection of the half-planes containing $(n+1)/2$ vertices of $P$.
For $n=9$, this intersection may be empty (for example, if the nine vertices form three triples and each of the triples is placed very close to a vertex of a regular triangle).
For $n=7$, this intersection is non-empty. However, it may be free of lattice points: take, for example, the polygon $P$ with vertices $[0,1], [1,0], [2,0], [3,2], [3,3], [1,3], [0,2]$.
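For what it's worth, here is a small script (my addition, not part of the original answer) that double-checks that the seven listed vertices really form a convex lattice polygon, by verifying that every consecutive pair of edge vectors turns the same way:

```python
# Verify convexity of the 7-gon counterexample via cross products of
# consecutive edge vectors: all nonzero cross products must share a sign.

def is_convex(pts):
    n = len(pts)
    signs = set()
    for i in range(n):
        ox, oy = pts[i]
        ax, ay = pts[(i + 1) % n]
        bx, by = pts[(i + 2) % n]
        cross = (ax - ox) * (by - ay) - (ay - oy) * (bx - ax)
        if cross != 0:
            signs.add(cross > 0)
    return len(signs) == 1  # all turns go the same way

P7 = [(0, 1), (1, 0), (2, 0), (3, 2), (3, 3), (1, 3), (0, 2)]
print(is_convex(P7))  # True
```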
|
http://www.physicsforums.com/showpost.php?p=3780863&postcount=4
|
## Can the Speed of Light be derived?
Well, historically Maxwell used his newly-corrected Ampere law to get an electromagnetic wave equation. When solving this equation, he found that EM waves travel at a speed given by:
$v=\frac{1}{\sqrt{\mu_0 \epsilon_0}}$
where $\mu_0$ and $\epsilon_0$ are the magnetic and electric constants. When he calculated this value, he found that it was the same speed that light was known to travel at, suggesting that light consisted of EM waves.
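For reference, a quick numerical check of this formula (the constant values below are the modern SI ones, my addition, not something given in the post):

```python
# Compute 1/sqrt(mu_0 * epsilon_0) and compare with the speed of light.
import math

mu_0 = 4 * math.pi * 1e-7      # magnetic constant, N/A^2 (pre-2019 exact value)
epsilon_0 = 8.8541878128e-12   # electric constant, F/m (CODATA 2018)

v = 1 / math.sqrt(mu_0 * epsilon_0)
print(v)  # very close to 299792458 m/s
```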
|
http://en.m.wikipedia.org/wiki/Right_triangle
|
# Right triangle
A right triangle (American English) or right-angled triangle (British English) is a triangle in which one angle is a right angle (that is, a 90-degree angle). The relation between the sides and angles of a right triangle is the basis for trigonometry.
The side opposite the right angle is called the hypotenuse (side c in the figure above). The sides adjacent to the right angle are called legs (or catheti, singular: cathetus). Side a may be identified as the side adjacent to angle B and opposed to (or opposite) angle A, while side b is the side adjacent to angle A and opposed to angle B.
If the lengths of all three sides of a right triangle are integers, the triangle is said to be a Pythagorean triangle and its side lengths are collectively known as a Pythagorean triple.
## Principal properties
### Area
As with any triangle, the area is equal to one half the base multiplied by the corresponding height. In a right triangle, if one leg is taken as the base then the other is height, so the area of a right triangle is one half the product of the two legs. As a formula the area T is
$T=\tfrac{1}{2}ab$
where a and b are the legs of the triangle.
If the incircle is tangent to the hypotenuse AB at point P, then denoting the semi-perimeter (a + b + c) / 2 as s, we have PA = s − a and PB = s − b, and the area is given by
$T=\text{PA} \cdot \text{PB} = (s-a)(s-b).$
This formula only applies to right triangles.[1]
### Altitude
Altitude of a right triangle
If an altitude is drawn from the vertex with the right angle to the hypotenuse then the triangle is divided into two smaller triangles which are both similar to the original and therefore similar to each other. From this:
• The altitude is the geometric mean (mean proportional) of the two segments of the hypotenuse.
• Each leg of the triangle is the mean proportional of the hypotenuse and the segment of the hypotenuse that is adjacent to the leg.
In equations,
$\displaystyle f^2=de,$ (this is sometimes known as the right triangle altitude theorem)
$\displaystyle b^2=ce,$
$\displaystyle a^2=cd$
where a, b, c, d, e, f are as shown in the diagram.[2] Thus
$f=\frac{ab}{c}.$
Moreover, the altitude to the hypotenuse is related to the legs of the right triangle by[3][4]
$\frac{1}{a^2} + \frac{1}{b^2} = \frac{1}{f^2}.$
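As a sanity check (my addition; the 3-4-5 triangle is just a convenient example), the altitude relations above can be verified numerically:

```python
# Check f^2 = d*e, a^2 = c*d, b^2 = c*e, f = ab/c and the
# "upside-down Pythagorean theorem" for the 3-4-5 triangle.
a, b, c = 3.0, 4.0, 5.0

f = a * b / c        # altitude to the hypotenuse
d = a * a / c        # segment of the hypotenuse adjacent to leg a
e = b * b / c        # segment of the hypotenuse adjacent to leg b

assert abs(f * f - d * e) < 1e-12
assert abs(d + e - c) < 1e-12                     # segments fill the hypotenuse
assert abs(1/a**2 + 1/b**2 - 1/f**2) < 1e-12
print(f, d, e)  # 2.4 1.8 3.2
```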
### Pythagorean theorem
Main article: Pythagorean theorem
The Pythagorean theorem states that:
In any right triangle, the area of the square whose side is the hypotenuse (the side opposite the right angle) is equal to the sum of the areas of the squares whose sides are the two legs (the two sides that meet at a right angle).
This can be stated in equation form as
$\displaystyle a^2+b^2=c^2$
where c is the length of the hypotenuse, and a and b are the lengths of the remaining two sides.
### Inradius and circumradius
The radius of the incircle of a right triangle with legs a and b and hypotenuse c is
$r = \frac{a+b-c}{2} = \frac{ab}{a+b+c}.$
The radius of the circumcircle is half the length of the hypotenuse,
$R = \frac{c}{2}.$
One of the legs can be expressed in terms of the inradius and the other leg as
$\displaystyle a=\frac{2r(b-r)}{b-2r}.$
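These inradius and circumradius formulas are easy to confirm on an example (a 3-4-5 sketch, my addition):

```python
# Check r = (a+b-c)/2 = ab/(a+b+c), R = c/2, and the leg formula
# a = 2r(b-r)/(b-2r) for the 3-4-5 right triangle.
a, b, c = 3.0, 4.0, 5.0

r = (a + b - c) / 2
R = c / 2
assert abs(r - a * b / (a + b + c)) < 1e-12        # both inradius formulas agree
assert abs(a - 2 * r * (b - r) / (b - 2 * r)) < 1e-12
print(r, R)  # 1.0 2.5
```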
## Characterizations
A triangle ABC with sides $a \le b < c$, semiperimeter s, area T, altitude h opposite the longest side, circumradius R, inradius r, exradii $r_a$, $r_b$, $r_c$ (tangent to a, b, c respectively), and medians $m_a$, $m_b$, $m_c$ is a right triangle if and only if any one of the statements in the following six categories is true. All of them are of course also properties of a right triangle, since characterizations are equivalences.
### Sides and semiperimeter
• $\displaystyle a^2+b^2=c^2\quad (\text{Pythagoras})$
• $\displaystyle (s-a)(s-b)=s(s-c)$
• $\displaystyle s=2R+r.$[5]
• $\displaystyle a^2+b^2+c^2=8R^2.$[6]
### Angles
• A and B are complementary.[7]
• $\displaystyle \cos{A}\cos{B}\cos{C}=0.$[6][8]
• $\displaystyle \sin^2{A}+\sin^2{B}+\sin^2{C}=2.$[6][8]
• $\displaystyle \cos^2{A}+\cos^2{B}+\cos^2{C}=1.$[8]
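These angle identities can be checked numerically; the following sketch (my addition) computes the angles of a 3-4-5 triangle with the law of cosines:

```python
# Verify cos A cos B cos C = 0, sin^2 sums to 2, cos^2 sums to 1,
# and that A, B are complementary for the 3-4-5 right triangle.
import math

a, b, c = 3.0, 4.0, 5.0
A = math.acos((b*b + c*c - a*a) / (2*b*c))
B = math.acos((a*a + c*c - b*b) / (2*a*c))
C = math.acos((a*a + b*b - c*c) / (2*a*b))   # the right angle

assert abs(math.cos(A) * math.cos(B) * math.cos(C)) < 1e-12
assert abs(math.sin(A)**2 + math.sin(B)**2 + math.sin(C)**2 - 2) < 1e-12
assert abs(math.cos(A)**2 + math.cos(B)**2 + math.cos(C)**2 - 1) < 1e-12
assert abs(A + B - math.pi / 2) < 1e-12      # A and B are complementary
```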
### Area
• $\displaystyle T=\frac{ab}{2}$
• $\displaystyle T=r_ar_b=rr_c$
• $\displaystyle T=r(2R+r)$
• $T=PA\cdot PB,$ where P is the tangency point of the incircle at the longest side AB.[9]
### Inradius and exradii
• $\displaystyle r=s-c$
• $\displaystyle r_a=s-b$
• $\displaystyle r_b=s-a$
• $\displaystyle r_c=s$
• $\displaystyle r_a+r_b+r_c+r=a+b+c$
• $\displaystyle r_a^2+r_b^2+r_c^2+r^2=a^2+b^2+c^2$
• $\displaystyle r=\frac{r_ar_b}{r_c}$
### Altitude and medians
• $\displaystyle h=\frac{ab}{c}$
• $\displaystyle m_a^2+m_b^2+m_c^2=6R^2.$[11]
• The length of one median is equal to the circumradius.
• The shortest altitude (the one from the vertex with the biggest angle) is the geometric mean of the line segments it divides the opposite (longest) side into. This is the right triangle altitude theorem.
### Circumcircle and incircle
• The triangle can be inscribed in a semicircle, with one side coinciding with the entirety of the diameter (Thales' theorem).
• The circumcenter is the midpoint of the longest side.
• The longest side is a diameter of the circumcircle $\displaystyle (c=2R).$
• The circumcircle is tangent to the nine-point circle.[6]
• The orthocenter lies on the circumcircle.[11]
• The distance between the incenter and the orthocenter is equal to $\sqrt{2}r$.[11]
## Trigonometric ratios
The trigonometric functions for acute angles can be defined as ratios of the sides of a right triangle. For a given angle, a right triangle may be constructed with this angle, and the sides labeled opposite, adjacent and hypotenuse with reference to this angle according to the definitions above. These ratios of the sides do not depend on the particular right triangle chosen, but only on the given angle, since all triangles constructed this way are similar. If, for a given angle α, the opposite side, adjacent side and hypotenuse are labeled O, A and H respectively, then the trigonometric functions are
$\sin\alpha =\frac {O}{H},\,\cos\alpha =\frac {A}{H},\,\tan\alpha =\frac {O}{A},\,\sec\alpha =\frac {H}{A},\,\cot\alpha =\frac {A}{O},\,\csc\alpha =\frac {H}{O}.$
## Special right triangles
Main article: Special right triangles
The values of the trigonometric functions can be evaluated exactly for certain angles using right triangles with special angles. These include the 30-60-90 triangle which can be used to evaluate the trigonometric functions for any multiple of π/6, and the 45-45-90 triangle which can be used to evaluate the trigonometric functions for any multiple of π/4.
The hyperbolic triangle is a special right triangle used to define the hyperbolic functions.
## Thales' theorem
Main article: Thales' theorem
Median of a right angle of a triangle
Thales' theorem states that if A is any point of the circle with diameter BC (except B or C themselves), then ABC is a right triangle with the right angle at A. The converse states that if a right triangle is inscribed in a circle then the hypotenuse will be a diameter of the circle. A corollary is that the length of the hypotenuse is twice the distance from the right angle vertex to the midpoint of the hypotenuse. Also, the center of the circle that circumscribes a right triangle is the midpoint of the hypotenuse and its radius is one half the length of the hypotenuse.
## Medians
The following formulas hold for the medians of a right triangle:
$m_a^2 + m_b^2 = 5m_c^2 = \frac{5}{4}c^2.$
The median on the hypotenuse of a right triangle divides the triangle into two isosceles triangles, because the median equals one-half the hypotenuse.
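Both statements can be checked with the standard median-length formulas, as in this 3-4-5 sketch (my addition):

```python
# Verify m_a^2 + m_b^2 = 5*m_c^2 = (5/4)c^2 and that the median to the
# hypotenuse is half the hypotenuse, for the 3-4-5 right triangle.
import math

a, b, c = 3.0, 4.0, 5.0  # c is the hypotenuse

# standard median-length formulas (valid for any triangle)
m_a2 = (2*b*b + 2*c*c - a*a) / 4
m_b2 = (2*a*a + 2*c*c - b*b) / 4
m_c2 = (2*a*a + 2*b*b - c*c) / 4

assert abs(m_a2 + m_b2 - 5 * m_c2) < 1e-12
assert abs(5 * m_c2 - (5/4) * c * c) < 1e-12
assert abs(math.sqrt(m_c2) - c / 2) < 1e-12   # median to hypotenuse = c/2
```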
## Relation to various means and the golden ratio
Let H, G, and A be the harmonic mean, the geometric mean, and the arithmetic mean of two positive numbers a and b with a > b. If a right triangle has legs H and G and hypotenuse A, then[12]
$\frac{A}{H} = \frac{A^{2}}{G^{2}} = \frac{G^{2}}{H^{2}} = \phi \,$
and
$\frac{a}{b} = \phi^{3}, \,$
where $\phi$ is the golden ratio $\tfrac{1+ \sqrt{5}}{2}. \,$
## Other properties
If segments of lengths p and q emanating from vertex C trisect the hypotenuse into segments of length c/3, then[13]:pp. 216-217
$p^2 + q^2 = 5\left(\frac{c}{3}\right)^2.$
The right triangle is the only triangle having two, rather than three, distinct inscribed squares.[14]
Let h and k (h > k) be the sides of the two inscribed squares in a right triangle with hypotenuse c. Then
$\frac{1}{c^2} + \frac{1}{h^2} = \frac{1}{k^2}.$
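Assuming the usual side-length formulas for the two inscribed squares (they are not stated in the article, so treat them as my addition: the square touching both legs has side ab/(a+b), and the square with a side on the hypotenuse has side abc/(ab+c²)), the identity can be verified numerically:

```python
# Check 1/c^2 + 1/h^2 = 1/k^2 for the two inscribed squares of the
# 3-4-5 right triangle, using the standard side-length formulas.
a, b, c = 3.0, 4.0, 5.0

h = a * b / (a + b)           # square with a side on each leg
k = a * b * c / (a * b + c * c)  # square with a side on the hypotenuse

assert h > k
assert abs(1/c**2 + 1/h**2 - 1/k**2) < 1e-12
print(h, k)  # 12/7 and 60/37
```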
The perimeter of a right triangle equals the sum of the radii of the incircle and the three excircles.
## References
1. Di Domenico, Angelo S., "A property of triangles involving area", Mathematical Gazette 87, July 2003, pp. 323-324.
2. Wentworth p. 156
3. Voles, Roger, "Integer solutions of $a^{-2} + b^{-2} = d^{-2}$," Mathematical Gazette 83, July 1999, 269–271.
4. Richinick, Jennifer, "The upside-down Pythagorean Theorem," Mathematical Gazette 92, July 2008, 313–317.
5. Triangle right iff s = 2R + r, Art of problem solving, 2011, [1]
6. Andreescu, Titu and Andrica, Dorian, "Complex Numbers from A to...Z", Birkhäuser, 2006, pp. 109-110.
7. CTK Wiki Math, A Variant of the Pythagorean Theorem, 2011, [2].
8. Darvasi, Gyula (March 2005), "Converse of a Property of Right Triangles", The Mathematical Gazette 89 (514): 72–76 .
9. Bell, Amy (2006), "Hansen's Right Triangle Theorem, Its Converse and a Generalization", Forum Geometricorum 6: 335–342 .
10. Inequalities proposed in "Crux Mathematicorum", Problem 954, p. 26, [3].
11. Di Domenico, A., "The golden ratio — the right triangle — and the arithmetic, geometric, and harmonic means," Mathematical Gazette 89, July 2005, 261. Also Mitchell, Douglas W., "Feedback on 89.41", vol 90, March 2006, 153-154.
12. Posamentier, Alfred S., and Salkind, Charles T. Challenging Problems in Geometry, Dover, 1996.
13. Bailey, Herbert, and DeTemple, Duane, "Squares inscribed in angles and triangles", 71(4), 1998, 278-284.
|
http://mathoverflow.net/questions/44614/is-there-a-limit-of-cos-n
|
## Is there a limit of cos (n!)?
Hi, I encountered a problem today: prove that $\cos(n!)$ does not have a limit. I have no idea how to do it formally. Could someone help? The simpler the proof (by that I mean, the less complex the theorems used), the better. Thanks
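A numerical experiment (mine, not part of the question; it proves nothing, but it illustrates why no limit is expected). Since n! quickly exceeds float precision, the sketch reduces n! mod 2π exactly with integer arithmetic and a hardcoded high-precision value of π:

```python
# Compute cos(n!) for n = 1..25 by reducing the integer n! mod 2*pi
# with ~60-digit precision, then taking cos of the small remainder.
import math
from decimal import Decimal, getcontext

getcontext().prec = 80
PI = Decimal("3.141592653589793238462643383279502884197169399375105820974944")

def cos_factorial(n):
    f = math.factorial(n)         # exact integer
    r = Decimal(f) % (2 * PI)     # exact-enough reduction mod 2*pi
    return math.cos(float(r))

vals = [cos_factorial(n) for n in range(1, 26)]
print(max(vals) - min(vals))  # spread close to 2: no sign of convergence
```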
It has a limit if the argument of the function is expressed in degrees. – Justin Melvin Nov 2 2010 at 21:01
The question is now here: math.stackexchange.com/questions/8690/… – Douglas S. Stones Nov 2 2010 at 21:50
The question boils down to whether the sequence $cn!$ tends to a limit mod 1, where $c=1/(2\pi)$. There are transcendental numbers $c$ for which the sequence DOES tend to a limit mod 1, so we have to use something about $\pi$. I'm sorry to see the question closed – SJR Nov 2 2010 at 22:26
I've started a meta conversation over at meta.mathoverflow.net/discussion/741/… – David Speyer Nov 3 2010 at 1:08
IMO it would have made sense for people to participate in the meta thread rather than to have an close/open tug-of-war with no discussion. Oh well. – Ryan Budney Nov 3 2010 at 5:31
|
http://mathforum.org/mathimages/index.php?title=Harter-Heighway_Dragon&diff=29563&oldid=6426
|
# Harter-Heighway Dragon
### From Math Images
Harter-Heighway Dragon Curve
Fields: Dynamic Systems and Fractals
Image Created By: SolKoll
Website: Wikimedia Commons
This image is an artistic rendering of the Harter-Heighway Curve (also called the Dragon Curve), which is a fractal. It is often referred to as the Jurassic Park Curve because it garnered popularity after being drawn and alluded to in the novel Jurassic Park by Michael Crichton (1990).
# Basic Description
This fractal is described by a curve that undergoes a repetitive process (called an iterated process). To begin the process, the curve has a basic segment of a straight line.
Then at each iteration,
• Each line is replaced with two line segments at an angle of 90 degrees (other angles can be used to make fractals that look slightly different).
• Each line is rotated alternately to the left or to the right of the line it is replacing.
Base Segment and First 5 iterations of the Harter-Heighway Curve
15th iteration
The Harter-Heighway Dragon is created by iteration of the curve process described above, and is thus a type of fractal known as iterated function systems. This process can be repeated infinitely, and the perimeter or length of the dragon is in fact infinite. However, if you look to the image at the right, a 15th iteration of the Harter-Heighway Dragon is already enough to create an impressive fractal.
An interesting property of this curve is that although the corners of the fractal seem to touch at various points, the curve never actually crosses over itself. Also, the curve exhibits self-similarity when iterated infinitely because as you look closer and closer at the curve, the magnified parts of the curve continue to look like the larger curve.
# A More Mathematical Explanation
Note: understanding of this explanation requires: *Algebra
## Properties
The Harter-Heighway Dragon curve has several different properties we can derive.
### Perimeter
1st iteration of the Harter-Heighway Dragon
The perimeter of the Harter-Heighway curve increases by a factor of $\sqrt{2}$ for each iteration. For example, if you look at the picture to the right, the straight red line shows the fractal as its base segment and the black crooked line shows the fractal at its first iteration.
If the first iteration is split up into two triangles, the ratio of the perimeter of the first iteration over the base segment is:
$\frac{s\sqrt{2} + s\sqrt{2}}{s + s} = \frac{2s\sqrt{2}}{2s} = \frac{\sqrt{2}}{1}$
### Number of Sides
The number of sides ($N_k$) of the Harter-Heighway curve for any degree of iteration (k) is given by $N_k = 2^k\,$, where the "sides" of the curve refer to alternating slanted lines of the fractal.
For example, the third iteration of this curve should have a total number of sides $N_3 = 2^3 = 8\,$.
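The side count can be checked with a short script (my own implementation of the standard paper-folding construction, not code from this page). It also confirms, for small iterations, the earlier claim that the curve never retraces an edge:

```python
# Generate the dragon curve's lattice points from its turn sequence
# (fold rule: turns -> turns + [L] + negated-reversed turns), then
# verify N_k = 2^k sides and that no edge is traversed twice.

def dragon_points(k):
    turns = []                      # True = left turn, False = right turn
    for _ in range(k):
        turns = turns + [True] + [not t for t in reversed(turns)]
    pts = [(0, 0), (1, 0)]
    x, y, dx, dy = 1, 0, 1, 0
    for left in turns:
        dx, dy = (-dy, dx) if left else (dy, -dx)
        x, y = x + dx, y + dy
        pts.append((x, y))
    return pts

for k in range(1, 11):
    pts = dragon_points(k)
    segs = len(pts) - 1
    assert segs == 2**k                          # number of sides doubles
    edges = {frozenset(e) for e in zip(pts, pts[1:])}
    assert len(edges) == segs                    # no edge is ever retraced
print("ok")
```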
### Fractal Dimension
The Fractal Dimension of the Harter-Heighway Curve can also be calculated using the equation: $\frac{\log N}{\log e}$.
Let us use the second iteration of the curve as seen below to calculate the fractal dimension.
There are two new curves that arise during the iteration so that $N = 2\,$.
Also, the ratio of the lengths of each new curve to the old curve is: $\frac{4(s\sqrt{2})}{4(s)} = \sqrt{2}$, so that $e = \sqrt{2}$.
Thus, the fractal dimension is $\frac{\log N}{\log e} = \frac{\log 2}{\log\sqrt{2}} = 2$, and it is a space-filling curve.
## Changing the Angle
The Harter-Heighway curve iterates with a 90 degree angle; however, if the angle is changed, new curves can be created. The following fractals are the result of 13 iterations.
(Gallery: curve with angle 85 · curve with angle 100 · curve with angle 110)
# About the Creator of this Image
SolKoll is interested in fractals, and created this image using an iterated function system (IFS).
# Related Links
### Additional Resources
To read about an alternate method of creating the Harter-Heighway Dragon, see http://sierra.nmsu.edu/morandi/coursematerials/JurassicParkFractal.html
# References
- Wikipedia, "Dragon Curve" page.
- Cynthia Lanius, "Cynthia Lanius' Fractals Unit: A Jurassic Park Fractal."
# Future Directions for this Page
- An animation of the fractal being drawn gradually through increasing iterations (a frame for each individual iteration).
- An animation that draws the curve at the 13th or so iteration, but slowly, to show that the curve never crosses itself.
http://mathoverflow.net/revisions/6114/list
# What are the right categories of finite-dimensional Banach spaces?
This is inspired partly by this question, especially Tom Leinster's answer.
Let me start with some background. I apologize that this will be rather long, since I'm hoping for input from people who probably don't know much about the following.
Classically, there are two obvious categories whose objects are Banach spaces (let's say all Banach spaces are real for simplicity): in the first, morphisms are bounded linear maps and isomorphisms are exactly what are usually called isomorphisms of Banach spaces; in the second, morphisms are linear contractions (bounded linear maps with norm at most 1) and isomorphisms are linear isometries. These categories distinguish between the "isomorphic" and "isometric" theories of Banach spaces.
Now if I'm interested in finite-dimensional spaces, then the "isomorphic" category is not rigid enough, because any two n-dimensional Banach spaces are isomorphic. But the isometric category is too rigid for most purposes. So we get more quantitative about our isomorphisms. One way to do this is with the Banach-Mazur distance. If X and Y are both n-dimensional Banach spaces, $$d(X,Y) = \inf_T (\lVert T \rVert \lVert T^{-1} \rVert),$$ where the infimum is over all linear isomorphisms $T:X\to Y$. Then $\log d$ is a metric on the class of isometry classes of n-dimensional Banach spaces.
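As a quick numerical illustration of the definition (my own example, not from the question): taking $T$ to be the identity map from $\ell_1^n$ to $\ell_2^n$ gives the upper bound $d(\ell_1^n,\ell_2^n) \le \lVert T\rVert \lVert T^{-1}\rVert = \sqrt{n}$, which is in fact the exact value. The Monte-Carlo estimate and the choice $n = 16$ below are illustrative.

```python
import math, random

random.seed(0)
n = 16

# Identity map T : l_1^n -> l_2^n. Since ||x||_2 <= ||x||_1 (with equality on
# basis vectors), ||T|| = 1, so d(l_1^n, l_2^n) <= ||T^{-1}||.

def l2_to_l1_norm_estimate(samples=20000):
    """Monte-Carlo lower estimate of ||T^{-1}|| = sup ||x||_1 over the l_2 sphere."""
    best = 0.0
    for _ in range(samples):
        x = [random.gauss(0.0, 1.0) for _ in range(n)]
        r = math.sqrt(sum(v * v for v in x))
        best = max(best, sum(abs(v) for v in x) / r)
    return best

# The supremum is attained at x = (1, ..., 1)/sqrt(n), giving exactly sqrt(n):
witness = [1.0 / math.sqrt(n)] * n
exact = sum(abs(v) for v in witness)   # = sqrt(16) = 4.0

print(l2_to_l1_norm_estimate(), exact)
```

So the bound $\lVert T\rVert \lVert T^{-1}\rVert = \sqrt{n}$ agrees with the known value $d(\ell_p^n, \ell_2^n) = n^{|1/p - 1/2|}$ at $p=1$.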
Theorems about spaces of arbitrary dimension which include some constants independent of the dimension are characterized as "isomorphic results". One example is Kashin's theorem: There exists a constant c>0 such that for every n, $\ell_1^n$ has a subspace X with $\dim X = m= \lfloor n/2 \rfloor$ such that $d(X,\ell_2^m) < c$. (Here $\ell_p^n$ denotes R^n with the $\ell_p$ norm $\lVert x \rVert_p = (\sum |x_i|^p)^{1/p}$.) Thus $\ell_1^n$ contains an n/2-dimensional subspace isomorphic to Hilbert space in a dimension-independent way.
On the other hand there are "almost isometric" results, typified by Dvoretzky's theorem: There exists a function $f$ such that for every n-dimensional Banach space X and every $\varepsilon > 0$, X has a subspace Y with $\dim Y = m \ge f(\varepsilon) \log (n+1)$ such that $d(Y,\ell_2^m) < 1+\varepsilon$. Thus any space contains subspaces, of not too small dimension, which are arbitrarily close to being isometrically Hilbert spaces.
So my question, finally, is: are there natural categories in which to interpret such results? I suppose that the objects should not be individual spaces, but sequences of spaces with increasing dimensions. In particular, as the two results quoted above highlight, the sequence of n-dimensional Hilbert spaces $\ell_2^n$ should play a distinguished role. But I have no idea what the morphisms should be to accommodate quantitative control over norms in these ways.
http://physics.stackexchange.com/questions/tagged/spacetime+gravity
Tagged Questions
- **effect of gravity on chemical reaction rates** [closed]
  a chemical reaction is done on earth in very vacuum and that chemical reaction is also done in space so that gravity higgs field not affect that reaction. Which reaction will be fast wrt gravity and ...
- **What is the cause the light is affected by gravity?** [duplicate]
  I know that photons have no mass and that a photons exist only moving at the speed of light. So what is the cause that a massive astronomical object can bend a ray of light? I have two thoughts, but I ...
- **When spacetime expands to the point where galaxy clusters are not observable, will there by any interaction?**
  It's my understanding that in a few billion years, clusters of galaxies won't be able to directly observe one another due to the expansion of spacetime overcoming gravity between those clusters. ...
- **Einstein's theory tells us that gravity is a curve in space and time but how does that causes attraction in mass?** [duplicate]
  The sun is incredibly massive object and it causes the space around it to bend. This causes the planets to pulled to the sun or the planets move in an elliptical path around the sun. But I don't ...
- **About gravity through space time curvature**
  Is it possible to produce virtual gravity? I mean gravity without the help of mass by curving spacetime with other effects like fast rotating objects?
- **Penrose Conformal diagram for flat 2-dim Lorentz space-time**
  I have the following metric $ds^2 = Tdv^2 + 2dTdv$, defined for $(v,T) \in S^1\times \mathbb{R}$, e.g. $v$ is periodic. This is the according Penrose diagram: Question 1) Is the ...
- **Why are we talking about space curvature as if we know what space is?** [closed]
  1) Why are we talking about space curvature as if we know what space is? Every question about gravity seems to evoke an answer involving "space curvature" which seems like an undefined placeholder ...
- **Can a huge gravitational force cause visible distortions on an object**
  In space, would it be possible to have an object generating such a huge gravitational force so it would be possible for an observer (not affected directly by gravitational force and the space time ...
- **If gravity is a bend in Space-time then what is magnetism?**
  Einstein postulated that gravity bends the geometry of space-time then what does magnetism do in to the geometry of space-time, or is there even a correlation between space-time geometry and ...
- **Do objects with mass "suck in" spacetime?**
  I don't really understand the general theory of relativity (GTR) really deeply, but according to my understanding, the GTR say that gravitation is caused by the curvature of spacetime by objects with ...
- **$\pi$ and the Curvature of Space**
  If one draws a circle on a sphere and measures the ratio of the diameter to the circumference, that value varies depending on the diameter of the circle compared to the diameter of the sphere it is ...
- **Is this alternate theory of gravity as cause instead of effect plausible?**
  I came across this video today on YouTube that presents an interesting alternate theory of Gravity and the "missing" matter in the Universe that Dark Matter/Energy theories try to account for. If I ...
- **Alternate layman's metaphors for illustrating curved space-time**
  The metaphor of a surface (typically a pool table or a trampoline) distorted by a massive object is commonly used as a metaphor for illustrating gravitationally induced space-time curvature. But as ...
- **Does Spacetime have a "This Side Up" arrow?** [duplicate]
  Possible Duplicate: Does the curvature of spacetime theory assume gravity? Forgive my naivete as I am not schooled in Physics or Mathematics. I was watching NOVA's "The Fabric of the ...
- **Why is gravitation force always attractive?**
  Why is the gravitation force always attractive? Is there a way to explain this other than the curvature of space time? PS: If the simple answer to this question is that mass makes space-time curve ...
- **future light cones and light paths**
  I understand that an event, in a four dimensional space-time, produces a light cone. As time increases the cones gets larger on either side of the event (past and future). For example the if the sun ...
- **Gravitational time dilation at the earth's center**
  I would like to know what happens with time dilation (relative to surface) at earth's center. There is a way to calculate it? Is time going faster at center of earth? I've made other questions ...
- **Gravitational time dilation and intelligent life** [closed]
  Thinking in the posibility of extraterrestrial life, I think that gravity could be a fundamental factor for evolution, I mean, in darkest zones of universe, where is dust, little mass accumulation, ...
- **Voyager local time dilation (caused by gravity)**
  Voyager I, as an example, taking account gravity and setting aside effects of speed as cause of time dilation. If it is very far away from earth and sun, so then there must be a difference in the ...
- **Is relativistic motion equivalent to fluctuating gravitational fields?**
  The theory of relativity makes very precise predictions about how an object's motion through space-time affects the passage of time for both the object and observers in other frames of reference. I ...
- **Why does Venus rotate the opposite direction as other planets?**
  Given: Law of Conservation of Angular Momentum. Reverse spinning with dense atmosphere (92 times > Earth & CO2 dominant sulphur based). Surface same degree of aging all over. Theoretical large ...
- **How does the curvature of spacetime induce gravitational attraction?**
  I don't know how to ask this more clearly than in the title.
- **Does the curvature of spacetime theory assume gravity?**
  Whenever I read about the curvature of spacetime as an explanation for gravity, I see pictures of a sheet (spacetime) with various masses indenting the sheet to form "gravity wells." Objects which are ...
- **How exactly does curved space-time describe the force of gravity?**
  I understand that people explain (in layman's terms at least) that the presence of mass "warps" space-time geometry, and this causes gravity. I have also of course heard the analogy of a blanket or ...
http://crypto.stackexchange.com/questions/2272/encryption-algorithm-that-produces-dummy-output-on-incorrect-passwords
# Encryption algorithm that produces dummy output on incorrect passwords
Background: I've been thinking about using encryption in the context of backing up files to untrusted locations (to the point of making the file publicly and widely distributed for practically failsafe backup).
The problem is, once a file is publicly available, it will forever remain so. And in 20 years, when computing power is unpredictably faster, AES-256 encryption might be practically useless - and my private backup file readable to all.
I was thinking, as a deterrent to brute-force attacks on encryption: what if, when a wrong password is tried, the algorithm returned dummy data that would require human examination to determine that it is not what the attacker is looking for?
Example:
I encrypt the plain text (say, an account number) '123456789' with the password 'pass'.
Attacker attempts to brute force the encryption and tries the password 'a'.
The result is '987654321'.
Now, how is the attacker to know that this is not the value I encrypted, and that the password I used was 'a'? Additionally, even if the attacker guesses the password 'pass', how are they to know that '123456789' is the correct value?
This of course is a simple example and most people encrypt somewhat recognizable artifacts; human language text, files recognizable by headers, etc. So this idea could be expanded to not just scramble data upon output, but also to include a variety of generated 'dummy' artifacts (common file formats, samples of language text, etc) not necessarily included in the core algorithm (the dummy data could be user defined). The dummy data output on incorrect passwords could then possibly include valid JPEG files, WORD files, PNG files, and samples of valid text from a variety of languages.
The result would make the encrypted file very hard to brute force without either a huge amount of computing and human power, or specific knowledge of what is encrypted.
Are there any algorithms out there that work like this? Are there any flaws in the idea itself?
## 3 Answers
Yes, this is possible (conditionally). It sounds like you want Format Preserving Encryption. FPE works by encrypting from an arbitrary domain $X$ onto $X$. Consequentially, if plaintext $M \in X$ is encrypted to ciphertext $C \in X$, any decryption of $C$ - even with the wrong key - will yield a decrypted message inside of $X$. Thus an attacker doesn't know anything from a decrypted ciphertext if he only knows the original format of the valid plaintexts. All decrypted ciphertexts look like valid plaintexts to him.
For example, your domain might be "all ASCII strings of length 7", or "a NULL-terminated string of numbers with length less than 10", or some credit-card number format. All plaintexts, ciphertexts, and wrong-key decrypted ciphertexts will be in that domain. From your example, you might have a domain of 9 ASCII numeral digits.
(In case it's not obvious: The way an attack typically works is that the attackers are decrypting a ciphertext onto a plaintext message space $Y$, but they know something about the original plaintext that limits the potential domain to some set $Z$ strictly contained within $Y$ (that is, $Z \subset Y$). On a typical cipher we encrypt $\{0,1\}^n \rightarrow \{0,1\}^m$ (one binary string to another), but it's rare that all plaintexts could actually be any binary value, generally they are only a strict subset of the input domain. So we assume the attacker knows which plaintext formats are valid, and they just check decrypted plaintexts against this smaller domain. FPE addresses that issue by only allowing encryption and decryption from and to domains that are valid plaintexts. The attacker will never be able to look at a decrypted value and conclude that it is an invalid plaintext. Eg, if they are decrypting a credit-card number, they have an idea of what a valid decrypted CC will look like, but with an FPE algorithm all decrypted messages will look like a valid CC.)
Note that this won't mitigate the problem if the attacker knows some part of the original plaintext that does not apply to all valid plaintexts. FPE mitigates attacks where the attacker just knows the original message space for all valid messages, but it's possible they know something about the specific message on hand that doesn't apply to other messages. (For example, they may know that the 3rd digit is an 8 for the plaintext in question, but not necessarily an 8 for all plaintexts.) In this case they would have a smaller domain to check decrypted messages against and their original attack will still help them to some extent.
An FPE algorithm can be implemented from an existing block cipher. There are different schemes for turning a block cipher into an FPE, depending on your domain size and such. Some of the constructions are relatively simple given the block cipher and a function that decides whether a raw block cipher output is inside of the desired domain. The Wikipedia page linked to above contains descriptions of some.
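To make the construction concrete, here is a toy sketch of one common approach, cycle-walking over a small Feistel permutation. The SHA-256-based round function is a stand-in for a real block cipher, and all parameters (round count, domain size, names) are illustrative choices of mine, not from any standard.

```python
import hashlib

DIGITS = 9
DOMAIN = 10 ** DIGITS      # valid plaintexts: the 9-digit numbers 0..999999999
BITS = 30                  # smallest even-width binary domain covering it (2**30 > 10**9)
HALF = BITS // 2
MASK = (1 << HALF) - 1
ROUNDS = 8

def _f(key: bytes, rnd: int, half: int) -> int:
    # Toy keyed round function; a real design would use a block cipher here.
    d = hashlib.sha256(key + bytes([rnd]) + half.to_bytes(4, "big")).digest()
    return int.from_bytes(d[:4], "big") & MASK

def _enc30(x: int, key: bytes) -> int:
    left, right = x >> HALF, x & MASK
    for r in range(ROUNDS):
        left, right = right, left ^ _f(key, r, right)
    return (left << HALF) | right

def _dec30(x: int, key: bytes) -> int:
    left, right = x >> HALF, x & MASK
    for r in reversed(range(ROUNDS)):
        left, right = right ^ _f(key, r, left), left
    return (left << HALF) | right

def encrypt(pt: int, key: bytes) -> int:
    # Cycle-walk: re-encrypt until the value lands back inside the 9-digit domain.
    ct = _enc30(pt, key)
    while ct >= DOMAIN:
        ct = _enc30(ct, key)
    return ct

def decrypt(ct: int, key: bytes) -> int:
    pt = _dec30(ct, key)
    while pt >= DOMAIN:
        pt = _dec30(pt, key)
    return pt
```

Decrypting with any key, right or wrong, yields another 9-digit number, so a wrong guess cannot be ruled out by format alone.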
As for your example: I think that it's rare to see FPE applied to very large domains, like image files. It's usually only applied to small domains, things like credit-card numbers in databases. I don't think I've seen it used in a case where the domain was larger than a couple block cipher blocks. (But whether you can use FPE efficiently on very large domains is a different question.)
Very interesting! It's not exactly what I had in mind, but has very similar properties. Will accept if nothing closer comes up. Thanks. – Jack Singleton Apr 4 '12 at 22:40
FPE is typically used when you already have a database structure, but later discover that some of the fields need to be kept confidential. In order not to have to restructure the database and change the type of those fields, you use a scheme such that the cipher text has the same format as the plain text, with respect to the definitions of the database. However, the question was how to produce cipher text such that it decrypts to seemingly valid plain text using any key (or at least any key derived from plausible passwords), and it's a future computer that determines if it does or not. – Henrick Hellström Apr 5 '12 at 7:29
FPE ensures that all keys will always decrypt to plaintext that was considered valid at encryption time, which is what the question asked. The future attacker is assumed to know the PT domain, but FPE negates that advantage. They may also know more about the PT than just the domain, as I outlined, but the OP's question seems about the PT domain, not message-specific knowledge (see the account number example). For very complicated domains, defining it carefully and using FPE efficiently would be very difficult. But it's still a theoretical construction that closely matches the OP's question. – B-Con Apr 5 '12 at 16:24
Your problem, the way I read it, could be described as follows: You are currently using password encryption for protecting the confidentiality of files on a known format. You have concerns regarding the long term confidentiality of those files, given that you don't know what computers will be able to do in the future. Ideally, you want the confidentiality to be preserved even if the encryption scheme is broken.
• I wouldn't bet it would be very problematic for a futuristic computer that is able to crack AES, to also tell invalid WORD or PNG files from valid ones. If you fear that human inspection would possibly be sufficient to tell a valid one from an invalid one, you obviously can't rule out that advances in both hardware and software will make it possible for the computer to do anything a human could do.
• If you are using password based encryption, it is possible that the weakest link is the password and not the cipher you use for bulk encryption.
• If you require indefinite confidentiality, using a stronger symmetric block cipher (such as triple-AES or whatever) won't help, unless it is provably secure even if the adversary has unlimited resources. If the assumption is that any symmetric block cipher will be broken eventually, and that this is no good, only provable security will do.
There are ways to hide one alternative plain text in the cipher text, which are intended for situations where you yourself might be coerced into revealing the password, and in such case might reveal the password that decrypts the cipher text into the alternative plain text. The common denominator for such schemes is however that the length of the cipher text is doubled if you want to include one alternative plain text, tripled if you want to include two, etc. Clearly, this makes such schemes infeasible if you want to include an arbitrarily large number of alternative plain text.
That said, there are two things you possibly could do:
1. Use an One Time Pad. If you do that, your main problem would be how to generate the One Time Pad and where to store it. If all the adversary has is OTP encrypted cipher text, the plain text could be anything (up to the length of the cipher text), because the pad could be anything.
2. Use a dedicated perfect compression function that effectively removes all redundancy from the plain text before encryption. The idea in such case would be that any automated test of the validity of the decrypted text would look for certain patterns that would have to be present if it were valid plain text, and a compression function could theoretically remove any such pattern (because such patterns consist in redundant information, and that's what a compression function is supposed to remove).
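A minimal sketch of option 1, the One Time Pad (the function names are mine; the pad must be truly random, as long as the message, used once, and kept secret). The last lines demonstrate why OTP ciphertext alone reveals nothing: for any candidate message of the same length there is a pad that "decrypts" the ciphertext to it.

```python
import os

def otp_encrypt(plaintext: bytes):
    pad = os.urandom(len(plaintext))                 # the one-time pad
    ciphertext = bytes(p ^ k for p, k in zip(plaintext, pad))
    return ciphertext, pad

def otp_decrypt(ciphertext: bytes, pad: bytes) -> bytes:
    return bytes(c ^ k for c, k in zip(ciphertext, pad))

msg = b"attack at dawn"
ct, pad = otp_encrypt(msg)

# Perfect deniability: pick any same-length alibi message and derive a pad for it.
alibi = b"retreat at six"
fake_pad = bytes(c ^ m for c, m in zip(ct, alibi))
assert otp_decrypt(ct, fake_pad) == alibi
```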
2. sounds a lot like performing double encryption to me - and doing double encryption using two known good ciphers that use different techniques might be an answer (as CPU power is not the problem, finding a vulnerability in the cipher might be more dangerous) – owlstead Apr 4 '12 at 20:26
No, compression is not encryption. Compression will just remove redundancies in such way that it is possible to deterministically restore them. However, if you have an ideal compression function, it will be such that it the corresponding decompression function will output plain text that "looks OK" from any random bit string. – Henrick Hellström Apr 4 '12 at 21:17
"I wouldn't bet it would be very problematic for a futuristic computer that is able to crack AES, to also tell invalid WORD or PNG files from valid ones." - I don't see how computing power would help with this... if the attacker does not know what is in the file they are decrypting, how can they know what is an 'invalid' WORD file and what is a 'valid' one? – Jack Singleton Apr 4 '12 at 22:05
@JackSingleton: A computer can fairly easily spot a WORD file that contains random characters, just by running a spell check. If your decryption algorithm outputs random but correctly spelled words, it can run a grammar check. Etc. – Henrick Hellström Apr 4 '12 at 22:42
@owlstead: If by a perfect cipher you mean one that is IND-CPA, IND-CCA and IND-CCA2 given contemporary security parameters, it does in no way require the plain text to look fully random. – Henrick Hellström Apr 5 '12 at 7:06
It's certainly possible to design an encryption algorithm so it's possible to identify a "correct" decryption. For example the algorithm could append a hash to each block before the actual encryption, and check the hash after decryption.
However, I think this would actually make the overall system less secure. One of the problems a brute-force attack has is to determine whether the decrypted text is correct (and therefore its task is done). Adding a categorical check makes this trivial.
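For concreteness, the append-a-hash idea looks like this (a sketch of mine; the surrounding encryption layer is omitted, and the names are illustrative):

```python
import hashlib

TAG_LEN = 32  # SHA-256 digest size

def seal(plaintext: bytes) -> bytes:
    # Prepend a hash tag to the block before the (omitted) encryption step.
    return hashlib.sha256(plaintext).digest() + plaintext

def open_sealed(block: bytes):
    tag, plaintext = block[:TAG_LEN], block[TAG_LEN:]
    # After decryption, a wrong key garbles the block and the tag check fails.
    if hashlib.sha256(plaintext).digest() != tag:
        return None
    return plaintext
```

This is exactly the "categorical check" warned about above: it also tells a brute-forcer instantly when a guessed key is right.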
http://math.stackexchange.com/questions/201541/an-equation-in-a-component-of-identity-in-a-lie-group/201664
# an equation in a component of identity in a lie group
Could anyone help me with how to solve the following:
Prove that there exists a solution to the equation $x^2=y$ in the identity component of a Lie group. I don't know how to start this one; what is special about the identity component? Well, for $y=e$ we get $x=e$.
It is not clear where this question comes from, can you give more context? One could imagine trying to trace solutions by moving $y$ around, starting at $y=e$ (where by the way there are in general more solutions than just $x=e$), which is an approach that would require a connected group. But it is not at all clear that this is actually a very promising angle of attack. – Marc van Leeuwen Sep 24 '12 at 8:27
The component of the identity is a normal subgroup of the Lie group, so can we just try to see whether this kind of equation has a solution in such a group? – miosaki Sep 24 '12 at 10:34
## 1 Answer
The result is false: the matrix $$y=\begin{pmatrix}-1&0\\0&-2\end{pmatrix}$$ lies in the identity component of $G=\mathbf{GL}(2,\mathbf R)$ (which is the subset of matrices with positive determinant), but the equation $x^2=y$ for $x\in G$ has no solution, as can be checked by writing the equations for the matrix coefficients explicitly and showing the absence of real solutions.
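The check alluded to can be carried out via Cayley–Hamilton rather than by brute-forcing the coefficient equations; the short verification below is my own, not part of the original answer.

```python
import math

# Suppose a real 2x2 matrix X satisfies X^2 = Y = diag(-1, -2).
# Cayley-Hamilton gives X^2 = t*X - d*I with t = tr(X), d = det(X), hence
#   tr(Y) = t^2 - 2d   and   det(Y) = d^2.
# Here tr(Y) = -3 and det(Y) = 2, so d = +/- sqrt(2) and t^2 = -3 + 2d.
for d in (math.sqrt(2), -math.sqrt(2)):
    t_squared = -3 + 2 * d
    assert t_squared < 0   # tr(X)^2 would have to be negative: impossible over R
print("no real square root of diag(-1, -2)")
```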
See also this question relating this equation to the image of the exponential map, and this other question.
http://www.reference.com/browse/Algorithmic+entropy
# Kolmogorov complexity
In algorithmic information theory (a subfield of computer science), the Kolmogorov complexity (also known as descriptive complexity, Kolmogorov-Chaitin complexity, stochastic complexity, algorithmic entropy, or program-size complexity) of an object such as a piece of text is a measure of the computational resources needed to specify the object. For example, consider the following two strings of length 64, each containing only lowercase letters, numbers, and spaces:
`abababababababababababababababababababababababababababababababab`
`4c1j5b2p0cv4w1 8rx2y39umgw5q85s7ur qbjfdppa0q7nieieqe9noc4cvafzf`
The first string admits a short English language description, namely "ab 32 times", which consists of 11 characters. The second one has no obvious simple description (using the same character set) other than writing down the string itself, which has 64 characters.
More formally, the complexity of a string is the length of the string's shortest description in some fixed universal description language. The sensitivity of complexity relative to the choice of description language is discussed below. It can be shown that the Kolmogorov complexity of any string cannot be too much larger than the length of the string itself. Strings whose Kolmogorov complexity is small relative to the string's size are not considered to be complex. The notion of Kolmogorov complexity is surprisingly deep and can be used to state and prove impossibility results akin to Gödel's incompleteness theorem and Turing's halting problem.
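A common practical proxy for this idea is the length of a compressed encoding: a compressor's output length upper-bounds (but does not compute) the Kolmogorov complexity relative to the compressor's description language. A quick illustration with zlib on the two example strings (my own demonstration):

```python
import zlib

s1 = b"ab" * 32   # "ab 32 times"
s2 = b"4c1j5b2p0cv4w1 8rx2y39umgw5q85s7ur qbjfdppa0q7nieieqe9noc4cvafzf"

c1 = len(zlib.compress(s1, 9))
c2 = len(zlib.compress(s2, 9))
print(len(s1), c1)   # the regular string shrinks substantially
print(len(s2), c2)   # the "random" one barely compresses at all
```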
## Definition
To define Kolmogorov complexity, we must first specify a description language for strings. Such a description language can be based on a programming language such as Lisp, Pascal, or Java Virtual Machine bytecode. If P is a program which outputs a string x, then P is a description of x. The length of the description is just the length of P as a character string. In determining the length of P, the lengths of any subroutines used in P must be accounted for. The length of any integer constant n which occurs in the program P is the number of bits required to represent n, that is (roughly) $\log_2 n$.
We could alternatively choose an encoding for Turing machines, where an encoding is a function which associates to each Turing Machine M a bitstring <M>. If M is a Turing Machine which on input w outputs string x, then the concatenated string <M> w is a description of x. For theoretical analysis, this approach is more suited for constructing detailed formal proofs and is generally preferred in the research literature. In this article we will use an informal approach.
Any string s has at least one description, namely the program
` function GenerateFixedString()`
` return s`
If a description of s, d(s), is of minimal length—i.e. it uses the fewest number of characters—it is called a minimal description of s. Then the length of d(s)—i.e. the number of characters in the description—is the Kolmogorov complexity of s, written K(s). Symbolically,
$K(s) = |d(s)|.$
We now consider how the choice of description language affects the value of K and show that the effect of changing the description language is bounded.
Theorem. If K1 and K2 are the complexity functions relative to description languages L1 and L2, then there is a constant c (which depends only on the languages L1 and L2) such that
$\forall s \; |K_1(s) - K_2(s)| \leq c.$
Proof. By symmetry, it suffices to prove that there is some constant c such that for all bitstrings s,
$K_1(s) \leq K_2(s) + c.$
To see why this is so, note that there is a program in the language L1 which acts as an interpreter for L2:
` function InterpretLanguage(string p)`
where p is a program in L2. The interpreter is characterized by the following property:
Running InterpretLanguage on input p returns the result of running p.
Thus if P is a program in L2 which is a minimal description of s, then InterpretLanguage(P) returns the string s. The length of this description of s is the sum of
1. The length of the program InterpretLanguage, which we can take to be the constant c.
2. The length of P which by definition is K2(s).
This proves the desired upper bound.
See also invariance theorem.
## History and context
Algorithmic information theory is the area of computer science that studies Kolmogorov complexity and other complexity measures on strings (or other data structures).
Kolmogorov complexity was first developed by Ray Solomonoff in 1960, who described it in "A Preliminary Report on a General Theory of Inductive Inference" as a by-product of his invention of algorithmic probability. He gave a more complete description in his 1964 publications, "A Formal Theory of Inductive Inference, Part I and Part II", in Information and Control.
Andrey Kolmogorov independently developed this complexity measure of information content, first describing it in 1965 (Problems Inform. Transmission, 1 (1965), 1-7). Gregory Chaitin also arrived at it independently, submitting two papers on it in 1965: a preliminary investigation published in 1966 (J. ACM, 13 (1966)) and a more complete discussion in 1969 (J. ACM, 16 (1969)).
When Kolmogorov became aware of Solomonoff's work, he acknowledged Solomonoff's priority (IEEE Trans. Inform Theory, 14:5(1968), 662-664). For several years, Solomonoff's work was better known in the Soviet Union than in the Western World. The general consensus in the scientific community, however, was to associate this type of complexity with Kolmogorov, who was concerned with randomness of a sequence while Algorithmic Probability became associated with Solomonoff, who focussed on prediction using his invention of the universal a priori probability distribution.
There are several other variants of Kolmogorov complexity or algorithmic information. The most widely used one is based on self-delimiting programs and is mainly due to Leonid Levin (1974).
An axiomatic approach to Kolmogorov complexity based on Blum axioms (Blum 1967) was introduced by Mark Burgin in a paper presented for publication by Andrey Kolmogorov (Burgin 1982). This axiomatic approach was further developed in the book (Burgin 2005) and applied to software metrics (Burgin and Debnath, 2003; Debnath and Burgin, 2003).
Naming this concept "Kolmogorov complexity" is an example of the Matthew effect.
## Basic results
In the following, we will fix one definition and simply write K(s) for the complexity of the string s.
It is not hard to see that the minimal description of a string cannot be too much larger than the string itself: the program GenerateFixedString above that outputs s is a fixed amount larger than s.
Theorem. There is a constant c such that
$\forall s \; K(s) \leq |s| + c.$
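The witness for this theorem is the GenerateFixedString program above. A minimal Python sketch (names mine): for a string whose repr adds only surrounding quotes, wrapping it in a print statement adds a fixed overhead of 17 characters, an explicit constant c.

```python
import io
import contextlib

def trivial_description(s: str) -> str:
    """A Python program that outputs s verbatim -- an explicit witness
    that K(s) <= |s| + c, with c the fixed wrapping overhead."""
    return f"print({s!r}, end='')"

for s in ["hello", "a" * 1000, "0101"]:
    prog = trivial_description(s)
    buf = io.StringIO()
    with contextlib.redirect_stdout(buf):
        exec(prog)               # run the description...
    assert buf.getvalue() == s   # ...and it really outputs s
    # overhead: len("print('', end='')") == 17, independent of s
    assert len(prog) == len(s) + 17
```

The constant 17 is specific to this choice of description language; any other language would give a different, but still fixed, constant.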
### Incomputability of Kolmogorov complexity
The first result is that there is no way to effectively compute K.
Theorem. K is not a computable function.
In other words, there is no program which takes a string s as input and produces the integer K(s) as output. We prove this by contradiction: we construct a program that produces a string which, by the definition of K, could only be produced by a longer program. Suppose there is a program
` function KolmogorovComplexity(string s)`
that takes as input a string s and returns K(s). Now consider the program
` function GenerateComplexString(int n)`
` for i = 1 to infinity:`
` for each string s of length exactly i`
` if KolmogorovComplexity(s) >= n`
` return s`
` quit`
This program calls KolmogorovComplexity as a subroutine. This program tries every string, starting with the shortest, until it finds a string with complexity at least n, then returns that string. Therefore, given any positive integer n, it produces a string with Kolmogorov complexity at least as great as n. The program itself has a fixed length U. The input to the program GenerateComplexString is an integer n; here, the size of n is measured by the number of bits required to represent n, which is log2(n). Now consider the following program:
` function GenerateParadoxicalString()`
` return GenerateComplexString(n0)`
This program calls GenerateComplexString as a subroutine and also has a free parameter n0. This program outputs a string s whose complexity is at least n0. By a suitable choice of the parameter n0 we will arrive at a contradiction. To choose this value, note that s is described by the program GenerateParadoxicalString, whose length is at most
$U + \log_2(n_0) + C$
where C is the "overhead" added by the program GenerateParadoxicalString. Since n grows faster than log2(n), there exists a value n0 such that
$U + \log_2(n_0) + C < n_0.$
But this contradicts the definition of having a complexity at least n0. That is, by the definition of K(s), the string s returned by GenerateParadoxicalString can only be generated by a program of length n0 or longer, yet GenerateParadoxicalString itself is shorter than n0. Thus the program named "KolmogorovComplexity" cannot actually computably find the complexity of arbitrary strings.
This is proof by contradiction where the contradiction is similar to the Berry paradox: "Let n be the smallest positive integer that cannot be defined in fewer than twenty English words." It is also possible to show the uncomputability of K by reduction from the uncomputability of the halting problem H, since K and H are Turing-equivalent.
### Chain rule for Kolmogorov complexity
The chain rule for Kolmogorov complexity states that
$K(X,Y) = K(X) + K(Y|X) + O(\log(K(X,Y))).$
It states that the shortest program that reproduces X and Y is no more than a logarithmic term larger than a program to reproduce X and a program to reproduce Y given X. Using this statement one can define an analogue of mutual information for Kolmogorov complexity.
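This analogue of mutual information can be approximated in practice by letting a real compressor stand in for K, the idea behind the normalized compression distance of Cilibrasi and Vitányi. A rough sketch in Python using zlib (a heuristic proxy, not the true K):

```python
import random
import zlib

def c(b: bytes) -> int:
    """Compressed length in bytes: a crude, computable stand-in for K."""
    return len(zlib.compress(b, 9))

def mutual_info_proxy(x: bytes, y: bytes) -> int:
    """Analogue of algorithmic mutual information I(x:y) ~ K(x) + K(y) - K(x,y)."""
    return c(x) + c(y) - c(x + y)

# Two independent pseudo-random strings share almost no information,
# while a string shares a great deal of information with itself.
a = bytes(random.Random(0).randrange(256) for _ in range(2000))
b = bytes(random.Random(1).randrange(256) for _ in range(2000))
x = b"the quick brown fox jumps over the lazy dog " * 50

print(mutual_info_proxy(x, x) > mutual_info_proxy(a, b))  # True
```

The compressor only detects the regularities it was designed for, so this proxy bounds the true quantities from one side only.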
## Compression
It is however straightforward to compute upper bounds for K(s): simply compress the string s with some method, implement the corresponding decompressor in the chosen language, concatenate the decompressor to the compressed string, and measure the resulting string's length.
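The procedure just described can be made concrete. A minimal Python sketch using zlib as the compressor; the 100-byte decompressor overhead is an assumed placeholder, since the true constant depends on the chosen description language:

```python
import zlib

def kolmogorov_upper_bound(s: bytes) -> int:
    """Upper bound on K(s): compressed length plus a constant accounting
    for the (fixed-size) decompressor. 100 bytes is a placeholder."""
    DECOMPRESSOR_OVERHEAD = 100  # assumed, not a real measured constant
    return len(zlib.compress(s, 9)) + DECOMPRESSOR_OVERHEAD

s = b"ab" * 10_000                       # a highly regular 20,000-byte string
compressed = zlib.compress(s, 9)
assert zlib.decompress(compressed) == s  # the description is lossless
print(kolmogorov_upper_bound(s) < len(s))  # bound is far below |s|
```

For incompressible strings the same procedure still yields a valid bound, just one slightly above |s|.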
A string s is compressible by a number c if it has a description whose length does not exceed |s| − c. This is equivalent to saying K(s) ≤ |s| − c. Otherwise s is incompressible by c. A string incompressible by 1 is said to be simply incompressible; by the pigeonhole principle, incompressible strings must exist, since there are 2^n bit strings of length n but only 2^n − 1 shorter strings, that is, strings of length n − 1 or less.
For the same reason, "most" strings are complex in the sense that they cannot be significantly compressed: K(s) is not much smaller than |s|, the length of s in bits. To make this precise, fix a value of n. There are 2^n bitstrings of length n. The uniform probability distribution on the space of these bitstrings assigns to each string of length exactly n equal weight 2^(−n).
Theorem. With the uniform probability distribution on the space of bitstrings of length n, the probability that a string is incompressible by c is at least 1 − 2^(−c+1) + 2^(−n).
To prove the theorem, note that the number of descriptions of length not exceeding n − c is given by the geometric series:
$1 + 2 + 2^2 + \cdots + 2^{n-c} = 2^{n-c+1} - 1.$
There remain at least
$2^n - 2^{n-c+1} + 1$
many bitstrings of length n that are incompressible by c. To determine the probability, divide by 2^n.
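The counting in this proof can be checked numerically for concrete values of n and c; a small Python sketch:

```python
# Numeric check of the counting argument for concrete n and c.
n, c = 10, 3

# Number of candidate descriptions of length <= n - c (geometric series).
num_short_descriptions = sum(2**i for i in range(n - c + 1))
assert num_short_descriptions == 2**(n - c + 1) - 1

# Each description yields at most one string, so at least this many
# strings of length n are incompressible by c:
incompressible = 2**n - num_short_descriptions
assert incompressible == 2**n - 2**(n - c + 1) + 1

# Under the uniform distribution the probability matches the stated bound.
prob = incompressible / 2**n
assert abs(prob - (1 - 2**(-c + 1) + 2**(-n))) < 1e-12
print(prob)  # 0.7509765625
```

So for n = 10 and c = 3, at least 769 of the 1024 strings of length 10 are incompressible by 3.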
This theorem is the justification for the various challenges in the comp.compression FAQ. Despite this result, it is sometimes claimed by certain individuals (considered cranks) that they have produced algorithms which uniformly compress data without loss. See lossless data compression.
## Chaitin's incompleteness theorem
We know that most strings are complex in the sense that they cannot be described in any significantly "compressed" way. However, it turns out that the fact that a specific string is complex cannot be formally proved, if the string's length is above a certain threshold. The precise formalization is as follows. First fix a particular axiomatic system S for the natural numbers. The axiomatic system has to be powerful enough so that to certain assertions A about complexity of strings one can associate a formula FA in S. This association must have the following property: if FA is provable from the axioms of S, then the corresponding assertion A is true. This "formalization" can be achieved either by an artificial encoding such as a Gödel numbering or by a formalization which more clearly respects the intended interpretation of S.
Theorem. There exists a constant L (which only depends on the particular axiomatic system and the choice of description language) such that there does not exist a string s for which the statement
$K(s) \geq L$
(as formalized in S) can be proven within the axiomatic system S.
Note that by the abundance of nearly incompressible strings, the vast majority of those statements must be true.
The proof of this result is modeled on a self-referential construction used in Berry's paradox. The proof is by contradiction. If the theorem were false, then
Assumption (X): For any integer n there exists a string s for which there is a proof in S of the formula "K(s) ≥ n" (which we assume can be formalized in S).
We can find an effective enumeration of all the formal proofs in S by some procedure
` function NthProof(int n)`
which takes as input n and outputs some proof. This function enumerates all proofs. Some of these are proofs for formulas we do not care about here (examples of proofs which will be listed by the procedure NthProof are the various known proofs of the law of quadratic reciprocity, those of Fermat's little theorem, or the proof of Fermat's last theorem, all translated into the formal language of S). Some of these are proofs of complexity formulas of the form K(s) ≥ n, where s and n are constants in the language of S. There is a program
` function NthProofProvesComplexityFormula(int n)`
which determines whether the nth proof actually proves a complexity formula K(s) ≥ L. The strings s and the integer L in turn are computable by programs:
` function StringNthProof(int n)`
` function ComplexityLowerBoundNthProof(int n)`
Consider the following program
` function GenerateProvablyComplexString(int n)`
` for i = 1 to infinity:`
` if NthProofProvesComplexityFormula(i) and ComplexityLowerBoundNthProof(i) >= n`
` return StringNthProof(i)`
` quit`
Given an n, this program tries every proof until it finds a string and a proof in the formal system S of the formula K(s) ≥ L for some L ≥ n. The program terminates by our Assumption (X). Now this program has a length U. There is an integer n0 such that U + log2(n0) + C < n0, where C is the overhead cost of
` function GenerateProvablyParadoxicalString()`
` return GenerateProvablyComplexString(n0)`
` quit`
The program GenerateProvablyParadoxicalString outputs a string s for which there exists an L such that K(s) ≥ L can be formally proved in S with L ≥ n0. In particular, K(s) ≥ n0 is true. However, s is also described by a program of length U + log2(n0) + C, so its complexity is less than n0. This contradiction proves that Assumption (X) cannot hold.
Similar ideas are used to prove the properties of Chaitin's constant.
The minimum message length principle of statistical and inductive inference and machine learning was developed by C.S. Wallace and D.M. Boulton in 1968. MML is Bayesian (it incorporates prior beliefs) and information-theoretic. It has the desirable properties of statistical invariance (the inference transforms with a re-parametrisation, such as from polar coordinates to Cartesian coordinates), statistical consistency (even for very hard problems, MML will converge to the underlying model) and efficiency (the MML model will converge to the true underlying model about as quickly as is possible). C.S. Wallace and D.L. Dowe showed a formal connection between MML and algorithmic information theory (or Kolmogorov complexity) in 1999.
## Kolmogorov randomness
Kolmogorov randomness (also called algorithmic randomness) defines a string (usually of bits) as being random if and only if it is shorter than any computer program that can produce that string. This definition of randomness is critically dependent on the definition of Kolmogorov complexity. To make this definition complete, a computer has to be specified, usually a Turing machine. According to the above definition of randomness, a random string is also an "incompressible" string, in the sense that it is impossible to give a representation of the string using a program whose length is shorter than the length of the string itself. However, according to this definition, most strings shorter than a certain length turn out to be (Chaitin-Kolmogorovically) random, because the best one can do with very small strings is to write a program that simply prints these strings.
## References
• Blum, M. (1967) On the Size of Machines, Information and Control, v. 11, pp. 257-265
• Burgin, M. (1982) Generalized Kolmogorov complexity and duality in theory of computations, Notices of the Russian Academy of Sciences, v.25, No. 3, pp.19-23
• Mark Burgin (2005), Super-recursive algorithms, Monographs in computer science, Springer.
• Burgin, M. and Debnath, N. Hardship of Program Utilization and User-Friendly Software, in Proceedings of the International Conference “Computer Applications in Industry and Engineering”, Las Vegas, Nevada, 2003, pp. 314-317
• Thomas M. Cover, Joy A. Thomas. Elements of information theory, 1st Edition. New York: Wiley-Interscience, 1991. ISBN 0-471-06259-6.
2nd Edition. New York: Wiley-Interscience, 2006. ISBN 0-471-24195-4.
• Debnath, N.C. and Burgin, M., (2003) Software Metrics from the Algorithmic Perspective, in Proceedings of the ISCA 18th International Conference “Computers and their Applications”, Honolulu, Hawaii, pp. 279-282
• Rónyai Lajos, Ivanyos Gábor, Szabó Réka, Algoritmusok. TypoTeX, 1999.
• Ming Li and Paul Vitányi, An Introduction to Kolmogorov Complexity and Its Applications, Springer, 1997. Introduction chapter full-text
• Yu Manin, A Course in Mathematical Logic, Springer-Verlag, 1977.
• Michael Sipser, Introduction to the Theory of Computation, PWS Publishing Company, 1997.
## External links
• The Legacy of Andrei Nikolaevich Kolmogorov
• Chaitin's online publications
• Solomonoff's IDSIA page
• Generalizations of algorithmic information by J. Schmidhuber
• Ming Li and Paul Vitanyi, An Introduction to Kolmogorov Complexity and Its Applications, 2nd Edition, Springer Verlag, 1997.
• Tromp's lambda calculus computer model offers a concrete definition of K()
• Universal AI based on Kolmogorov Complexity by M. Hutter, ISBN 3-540-22139-5
• Minimum Message Length and Kolmogorov Complexity (by C.S. Wallace and D.L. Dowe, Computer Journal, Vol. 42, No. 4, 1999).
• David Dowe's Minimum Message Length (MML) and Occam's razor pages.
• P. Grunwald, M. A. Pitt and I. J. Myung (ed.), Advances in Minimum Description Length: Theory and Applications, M.I.T. Press, April 2005, ISBN 0-262-07262-9.
http://math.stackexchange.com/questions/159215/inherited-morita-similar-rings?answertab=active
# Inherited Morita similar rings
Let $R$ and $S$ be Morita similar rings.
Suppose the ring $R$ has the following property: every right ideal is injective. How do I prove that the ring $S$ has this property?
Suppose the ring $R$ has the following property: $R$ is finitely generated. How do I prove that the ring $S$ has this property?
## 1 Answer
Every right ideal of $R$ is injective iff $R$ is semisimple iff all right $R$ modules are semisimple.
Since the Morita equivalence sends semisimple modules to semisimple modules, all of $S$'s right modules are semisimple as well, so $S$ is semisimple.
Your second question is a little strange... for Morita theory we usually require $R$ to have unity and so $R$ will always be cyclic... and $S$ too.
If you mean: "Finitely generated modules will correspond to each other through a Morita equivalence between $R$ and $S$." then let's try that.
Prove that $M$ is f.g. iff for every collection of submodules $\{M_i\mid i\in I\}$ of $M$, $\sum M_i=M$ implies $M$ is a sum of finitely many $M_i$. This provides a module theoretic description of finite generation that you can see is preserved by Morita equivalence.
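One way to fill in that suggested argument (notation mine, not from the answer):

```latex
($\Rightarrow$) Let $M = Rx_1 + \cdots + Rx_k$ and suppose $\sum_{i\in I} M_i = M$.
Each generator lies in a finite subsum: $x_j \in \sum_{i\in F_j} M_i$ with
$F_j \subseteq I$ finite. Taking $F = F_1 \cup \cdots \cup F_k$ gives
$M = \sum_{i\in F} M_i$.

($\Leftarrow$) Apply the condition to the family of all finitely generated
submodules of $M$: their sum is $M$, so finitely many of them already sum
to $M$, and a finite sum of finitely generated submodules is finitely
generated.
```

Since the characterization mentions only the lattice of submodules, it is visibly preserved by any Morita equivalence.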
http://mathhelpforum.com/calculus/111291-equation-tangent-line.html
1. ## equation of tangent line
the equation of the tangent line to f(x)=sqrt(x) at x=16
y=
b=
using this, find approximation for sqrt(16.2)
sooo lost =(
2. Originally Posted by sjara
the equation of the tangent line to f(x)=sqrt(x) at x=16
y=
b=
using this, find approximation for sqrt(16.2)
sooo lost =(
Can you find the equation of the tangent line? If you can, then plug x=16.2 into the equation of the tangent line.
3. this is what i've got so far..
f '(x)=1/2x^-1/2
so i plugged in 16 to get y
so now its y-4=m(x-16)
but i dont know how to get m
4. Originally Posted by sjara
this is what i've got so far..
f '(x)=1/2x^-1/2
so i plugged in 16 to get y
so now its y-4=m(x-16)
but i dont know how to get m
Okay, not to worry you, but maybe the most important point from Calc I is the fact that the derivative of a function is the slope of the function... so the derivative at a point x is the slope of the function at the point x.
A tangent line has the same slope as the curve at the point of intersection, thus it's slope is equal to the derivative
If you don't remember anything else from Calc I, remember that
You plugged 16 into f(x) to get y, so y=4, and m is the derivative, which you have there, evaluated at x=16
So $f'(x)=\frac{1}{2\sqrt{x}}$
And $m=f'(16)=\frac{1}{2\sqrt{16}}$
5. i've been reading this book for hours and didnt understand the wording until you explained it the way you did. thanks again
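The linearization worked out in this thread can be checked numerically; a short Python sketch (variable names are mine):

```python
import math

# Tangent line to f(x) = sqrt(x) at x = 16:
# slope m = f'(16) = 1/(2*sqrt(16)) = 1/8, point (16, 4).
m = 1 / (2 * math.sqrt(16))

def tangent(x):
    return 4 + m * (x - 16)      # y - 4 = m(x - 16)

approx = tangent(16.2)           # 4 + 0.2/8 = 4.025
exact = math.sqrt(16.2)
print(approx, exact, abs(approx - exact))
```

The tangent-line value 4.025 agrees with sqrt(16.2) to about four significant figures, which is the point of the linear approximation.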
http://nrich.maths.org/7074/index?setlocale=en_US
### Cubic Spin
Prove that the graph of f(x) = x^3 - 6x^2 +9x +1 has rotational symmetry. Do graphs of all cubics have rotational symmetry?
### Sine Problem
In this 'mesh' of sine graphs, one of the graphs is the graph of the sine function. Find the equations of the other graphs to reproduce the pattern.
### Parabolic Patterns
The illustration shows the graphs of fifteen functions. Two of them have equations y=x^2 and y=-(x-4)^2. Find the equations of all the other graphs.
# Agile Algebra
##### Stage: 5 Challenge Level:
The following equations are difficult to solve by direct attack but if you look for symmetric features and make simple substitutions they become much easier to solve.
(1) $\frac{2x}{2x^2-5x+3}+\frac{13x}{2x^2+x+3}=6.$
(2) $x^4 -8x^3 + 17x^2 -8x + 1 = 0.$
(3) $(x-4)(x-5)(x-6)(x-7) = 1680.$
(4) $(8x+7)^2(4x+3)(x+1)=\frac{9}{2}.$
NOTES AND BACKGROUND
This is an example of a process which occurs frequently in mathematics. Let's refer to two frames of reference as A and B and say we have a problem stated in A, then the technique is to map the given relationships to B, work in B and then map the results back to A. All these equations have symmetry of one sort or another. By using the symmetry to make a substitution you can change the variable and get an equation which is easier to solve. After that you have to use the solutions you have found and go back to find the corresponding solutions of the original equation.
To find out more about this general technique see the article "The Why and How of Substitution".
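As an illustration of the technique, here is a sketch (mine, not part of the problem) solving equation (2) with the symmetric substitution y = x + 1/x: since the polynomial is palindromic, dividing by x² and using x² + 1/x² = y² − 2 reduces it to y² − 8y + 15 = 0, and each y recovers x from x² − yx + 1 = 0.

```python
import math

def quadratic_roots(a, b, c):
    """Both real roots of a*x^2 + b*x + c = 0 (assumes a real discriminant)."""
    d = math.sqrt(b * b - 4 * a * c)
    return [(-b - d) / (2 * a), (-b + d) / (2 * a)]

roots = []
for y in quadratic_roots(1, -8, 15):          # y = 3 or y = 5
    roots.extend(quadratic_roots(1, -y, 1))   # back-substitute: x^2 - y*x + 1 = 0

def p(x):
    return x**4 - 8 * x**3 + 17 * x**2 - 8 * x + 1

assert all(abs(p(x)) < 1e-6 for x in roots)   # all four values really are roots
print(sorted(roots))
```

The four roots come in reciprocal pairs, as the symmetry of the equation predicts.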
The NRICH Project aims to enrich the mathematical experiences of all learners. To support this aim, members of the NRICH team work in a wide range of capacities, including providing professional development for teachers wishing to embed rich mathematical tasks into everyday classroom practice. More information on many of our other activities can be found here.
http://www.physicsforums.com/showthread.php?t=665075&page=3
Physics Forums
Mentor
## Why does an object start to fall under gravity
Quote by Adeste I agree 100% with what you say I am not trying to replace mechanics (forces) with potential changes for moving objects. Therefore I am not trying to prove a path or trajectory mechanics both works and is very sophisticated at this. My only concern is if a particle starts moving from rest ie stops remaining stationary. My model predicts it does not move, Any path would prove the model wrong. I am only trying to prove my model wrong. Any path would do.
Then I don't understand the point. You already seem aware that your model doesn't correctly describe the motion of a general moving particle, so why would you be at all concerned that it also doesn't correctly describe the motion of a stationary particle?
It is possible to use energy principles only to correctly describe a particle in general, you just have to use the Lagrangian approach. You should hold off trying to invent your own approach until you have actually learned Lagrangian mechanics.
Quote by DaleSpam Then I don't understand the point. You already seem aware that your model doesn't correctly describe the motion of a general moving particle, so why would you be at all concerned that it also doesn't correctly describe the motion of a stationary particle? It is possible to use energy principles only to correctly describe a particle in general, you just have to use the Lagrangian approach. You should hold off trying to invent your own approach until you have actually learned Lagrangian mechanics.
My model does absolutely describe the motion of a general moving particle.
I apologise if Lagrangian mechanics shows this. I leave you to judge.
However I conclude that gravitaional force is not equal to mg but ma where a is the actual acceleration of the actual particle not a theoretical value.
To be precise f=mg does not work for a stationary particle but does at all other times.
Quote by Adeste My model does absolutely describe the motion of a general moving particle. I apologise if Lagrangian mechanics shows this. I leave you to judge. However I conclude that gravitaional force is not equal to mg but ma where a is the actual acceleration of the actual particle not a theoretical value. To be precise f=mg does not work for a stationary particle but does at all other times.
Your "model" claims that particle with zero internal energy will never accelerate from rest. Depending on how you define internal energy, this is either not true, or it is impossible to have a particle with no energy (if you want to consider rest energy of a particle in SR). If you have a theory of physics that predicts things that are never observed, your theory is wrong.
If the force due to gravity is the mass times the actual acceleration of the particle, then a particle at rest would have no weight. Again, this is pure silliness. If there were some observation of this, then maybe it would be interesting, but there hasn't been and this isn't.
$f=mg$ does work for a stationary particle. The force due to gravity on a particle at rest is $mg$, which is opposed by some normal force that is equal and opposite, and therefore the sum of all forces is 0.
Your "model" is not a model because it does not describe anything real. You are trying to find a way to solve a problem that arises because of a false assumption. The resolution should be obvious. I'm pretty much done with this thread since you appear to be sticking to your guns and uninterested in actual study. Your "model" predicts things that don't happen. Give it up.
Quote by DrewD Your "model" claims that particle with zero internal energy will never accelerate from rest. Depending on how you define internal energy, this is either not true, or it is impossible to have a particle with no energy (if you want to consider rest energy of a particle in SR). If you have a theory of physics that predicts things that are never observed, your theory is wrong. If the force due to gravity is the mass times the actual acceleration of the particle, then a particle at rest would have no weight. Again, this is pure silliness. If there were some observation of this, then maybe it would be interesting, but there hasn't been and this isn't. $f=mg$ does work for a stationary particle. The force do to gravity of a particle at rest is $mg$ which opposed by some normal force that is equal and opposite and therefore the sum of all forces is 0. Your "model" is not a model because it does not describe anything real. You are trying to find a way to solve a problem that arises because of a false assumption. The resolution should be obvious. I'm pretty much done with this thread since you appear to be sticking to your guns and uninterested in actual study. Your "model" predicts things that don't happen. Give it up.
Can I say how much I have enjoyed this tread and all the input from contributors.
If as you say I predict things that at this moment in time can not be observed makes my theory wrong so be it. I am not, as I am sure you guess, a professional physicist . I am not trying to create a new model for the scientific community. I am only excising my enquiring mind
You are right, I am sure, that it is pure silliness to suggest a particle (not an object) at rest has no weight. I was hoping for more informative reasoning rather than "silliness", particularly as you can produce no experimental evidence to show me wrong.
You keep stating that f=mg for a particle is correct. You are right: it is, for an object (stationary or moving) and for a moving particle, but not for a stationary particle, as I have shown. So there are no challenges to mechanics; there never were. I always said the effect was probably not observable.
I am still not sure if the effect can be observed and measured that is for better physicists than me to work out. Also my conclusion is only F=MA which seems highly satisfactory.
Please before you go could you explain my false assumption. I have only used conservation of energy, Heisenberg's Energy-Time uncertainty principle and 2nd law of thermodynamics. oh and causality.
Even if you cannot do this as you perceive me as a waste of space and do not wish to waste your time I really do thank you and the others for your efforts and time indulging my enquiring mind. I have learned a great deal
If that sounds sarcastic it really is not. It is entirely genuine and I wish you all well.
Mentor
Quote by Adeste My model does absolutely describe the motion of a general moving particle.
No, it doesn't. Your model is incomplete meaning that it is underdetermined. While the correct solution is compatible with your model there are also incorrect solutions which are compatible with your model. For example:
$$x(t)=y(t)=\frac{E}{gm} \left( 1- e^{-\sqrt{2}gt} \right)$$
This is not a correct solution, but it satisfies your model by maintaining a constant total energy with KE decreasing as the object moves to an area of higher PE.
Your model is WRONG, even for moving particles. It does not have enough information to select the correct trajectory which is compatible with energy considerations. For that, you need the principle of least action.
Quote by Adeste However I conclude that gravitaional force is not equal to mg but ma where a is the actual acceleration of the actual particle not a theoretical value. To be precise f=mg does not work for a stationary particle but does at all other times.
This conclusion is also WRONG. Please work on learning actual mainstream physics rather than trying to invent new models. The current models have been well tested for centuries, and they work. There is good reason to learn them very well before attempting to make any improvements. Your proposed model is NOT an improvement, it is a mistake.
Recognitions: Gold Member Science Advisor I don't think he's switched to 'receive' on this one.
Quote by DaleSpam No, it doesn't. Your model is incomplete meaning that it is underdetermined. While the correct solution is compatible with your model there are also incorrect solutions which are compatible with your model. For example: $$x(t)=y(t)=\frac{E}{gm} \left( 1- e^{-\sqrt{2}gt} \right)$$ This is not a correct solution, but it satisfies your model by maintaining a constant total energy with KE decreasing as the object moves to an area of higher PE. Your model is WRONG, even for moving particles. It does not have enough information to select the correct trajectory which is compatible with energy considerations. For that, you need the principle of least action. This conclusion is also WRONG. Please work on learning actual mainstream physics rather than trying to invent new models. The current models have been well tested for centuries, and they work. There is good reason to learn them very well before attempting to make any improvements. Your proposed model is NOT an improvement, it is a mistake.
Hi, my model is not wrong; it needs no information on moving or stationary objects, as it makes no difference to the accepted model except when a particle is not moving. It makes no new prediction for moving particles and does not interfere in any way with accepted observable models.
It only says that when a particle (not an object) is stationary it will not experience a force. As I am led to believe this is never observable, I believe this cannot be shown to be wrong. Clearly you could say it was irrelevant, and you may be correct; time will tell.
You are correct that the present models have been rigorously tested and work; I have said this before. Theories on the present testable/observable world are not changed in my model in any way, stationary or moving.
I am not trying to improve the models science uses; it is a personal model of the world to aid my understanding. I thank you for your help in refining my model and sorting out in my mind the problem of action at a distance, which always seemed nonsense.
I really thank you for your time and patience while I indulged my enquiring mind.
I wish you well
Quote by Adeste Hi my model is not wrong it needs no information on moving or stationary objects as it makes no difference to the accepted model except when a particle is not moving. It makes no new prediction for moving particles it does not interfere in any way with excepted observable models.
Yes, your model as you have presented here is wrong for moving objects. I mathematically proved it by showing a trajectory which satisfies your model but is not correct. Your model differs significantly and does make new predictions compared to correct models, as shown above.
If you disagree and believe that your model is correct, then you need to demonstrate in which way my posted trajectory violates your model. If my posted trajectory does not violate your model then your model is wrong, as I have claimed.
Quote by sophiecentaur I don't think he's switched to 'receive' on this one.
As I am sure you have noticed, I have finished my enquiries; my model is complete. Can I thank you also for your thoughts and time.
I wish you well; I really enjoyed it
ps great signature
Quote by DaleSpam Yes, your model as you have presented here is wrong for moving objects. I mathematically proved it by showing a trajectory which satisfies your model but is not correct.
My model does not predict any trajectory; I apologise if I presented it in a way that suggested it did. I specifically said it does not apply to moving particles.
Again thanks
Quote by Adeste I specifically said it does not apply to moving particles.
I am calling "BS" on this:
Quote by Adeste as a body moves to an area of lower gravitational it loses gravitational potential energy which is converted into KE and hence it moves faster hence accelerates
The whole point of this thread is that you believe that the above model correctly describes the physics of a moving body. You were then concerned by the fact that this model admits a non-physical solution for a stationary body, namely the solution where it doesn't move at all.
I demonstrated that it also admits a non-physical solution for a moving body. Therefore, since it admits non-physical solutions for moving bodies it should come as no surprise that it admits non-physical solutions for stationary bodies also.
There is no difference wrt moving and stationary bodies, the model fails for both.
Quote by Adeste My model does not predict any trajectory sorry if I presented it in a way such that you think it did I apologise. I specifically said it does not apply to moving particles. Again thanks
Haha wait, what? You have a model that doesn't predict things? A stationary particle has a trajectory: it is one with momentum equal to zero.
I know I said I wouldn't come back, but this is starting become funny.
Adeste's "model" predicts things that are counter to current theory, BUT since his/her model only applies to particles that are defined by the fact that they follow this model, there is no problem with the fact that current theory predicts different things. So we have a theory that ONLY applies to stationary particles that are not effected by Newton's laws and the theory's conclusion is that these particles are not effected by Newton's laws! Inevitably, they will also not be effected by QM because those are different particles for which this theory doesn't apply.
I think this might actually turn out to be a good example to use in HS physics to show why energy conservation is not sufficient to describe reality and what the difference between science and pseudoscience is.
Quote by Adeste If Δx = 0 then Δpe = 0 ∴ Δke = 0 ∴ Δv = 0 ∴ acceleration = 0
I think this is the key line of reasoning in your model.
Δx = 0 → Δpe = 0 holds due to the assumption that potential energy is only a function of x (which holds in the case of a uniform gravitational field).
Δpe = 0 → Δke = 0 holds by definition of the total energy and the assumption of its conservation (this statement also holds true in the above discussion).
Δke = 0 → Δv = 0 holds due to the assumption that kinetic energy is only a function of velocity (which is true in the above discussion also).
Δv = 0 → acceleration = 0 holds only if acceleration is a function of velocity alone (or of the change in velocity over a time period of constant length). This last implication does not hold in a uniform gravitational field.
The instantaneous acceleration of an object does not depend on the instantaneous velocity of the object (for uniform gravitational fields). Therefore it is not possible to obtain from the velocity of an object (at one point in time) any information about its acceleration (for uniform gravitational fields).
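This last point can be checked numerically (illustrative numbers): at the apex of a vertical throw the velocity is momentarily zero, yet the finite-difference acceleration is still $-g$, so velocity carries no information about acceleration.

```python
g, v0 = 9.8, 9.8               # throw speed chosen so the apex occurs at t = 1 s

def v(t):
    """Velocity of a ball thrown straight up at speed v0 (up is positive)."""
    return v0 - g * t

t_apex = v0 / g
assert abs(v(t_apex)) < 1e-12  # instantaneously at rest at the apex...

dt = 1e-6
a_apex = (v(t_apex + dt) - v(t_apex - dt)) / (2 * dt)   # central difference
assert abs(a_apex + g) < 1e-6  # ...but still accelerating downward at g
```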
http://mathoverflow.net/questions/79215/pathologies-of-analytic-non-algebraic-varieties
## Pathologies of analytic (non-algebraic) varieties.
Note: By an "analytic non-algebraic" surface below I mean a two dimensional compact analytic variety $X$ (over $\mathbb{C}$) which is not an algebraic variety.
A property of Nagata's example (see the end of the post for the construction) of a non-algebraic normal analytic surface $X$ is the following:
($\star$) $\quad$ There is a point $P$ on $X$ such that every (compact) algebraic curve $C$ on $X$ passes through $P$.
In a paper I am writing I also constructed (to my surprise) some examples of non-algebraic normal analytic surfaces which have this peculiar property.
Questions: Is this sort of behaviour "normal" for such surfaces? Or, more precisely, if an analytic surface does not satisfy ($\star$), is it necessarily algebraic? How about for higher dimensions?
Nagata's Construction (following Bădescu's book on surfaces): Start with a smooth plane cubic $C$ in $\mathbb{P}^2$ and a point $P$ on $C$ such that $P - O$ is not a torsion point of $C$ (where $O$ is any of the inflection points of $C$, taken as the origin of the group law). Let $X_1$ be the blow up of $\mathbb{P}^2$ at $P$, and for each $i \geq 1$, let $X_{i+1}$ be the blow up of $X_i$ at the point of intersection of the strict transform of $C$ and the exceptional divisor on $X_i$. Each blow up decreases the self-intersection number of the strict transform $C_i$ of $C$ by $1$, so that on $X_{10}$ the self-intersection number of $C_{10}$ is $-1$. $X$ is the blow down of $X_{10}$ along $C_{10}$. By some theorems of Grauert and Artin, $X$ is a normal analytic surface.
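The self-intersection bookkeeping in the construction is just arithmetic; a throwaway check (a plane cubic has $C^2 = \deg(C)^2 = 9$, and each blow-up at a point of the strict transform drops it by one):

```python
# Self-intersection of the strict transform of the cubic C through the
# tower of blow-ups X_1, ..., X_10 in Nagata's construction.
self_int = 3 ** 2                # C^2 = deg(C)^2 = 9 for a plane cubic in P^2
for i in range(1, 11):           # one blow-up for each of X_1, ..., X_10
    self_int -= 1                # blowing up a simple point on C: C^2 -> C^2 - 1
print(self_int)                  # -1, so C_10 is contractible (Grauert/Artin)
```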
-
I don't believe this is true. For example, Kodaira has constructed examples of nonalgebraic elliptic surfaces, which necessarily have disjoint fibres consisting of algebraic curves. I'm away from my books and papers at the moment, but I think you can find this in his papers, or perhaps in the book by Barth, Peters and Van de Ven. – Donu Arapura Oct 27 2011 at 0:08
@Donu: Right, they are the Hopf surfaces. – Francesco Polizzi Oct 27 2011 at 0:14
Depending on your interpretation of vacuous statements you might also be interested in complex tori of dimension 2. A general torus of dimension 2 (or higher) contains no closed subvarieties except disjoint unions of points. The condition $(*)$ is then vacuously satisfied for every point on such a torus. – Gunnar Magnusson Oct 27 2011 at 5:26
these beautiful answers seem to suggest modifying the question to include the hypothesis your surface has two algebraically independent meromorphic functions. I believe it is then the case that it arises from blowing down a curve on an algebraic surface. you might ask, in order to obtain a non algebraic surface, whether that exceptional curve must meet all other curves on the algebraic resolution. – roy smith Oct 27 2011 at 18:43
@roy smith: You are almost psychically (is that a word?) to the point - the examples I was considering have this property (with a minor correction in your last sentence - "... must meet all other non-exceptional curves on the ...") and I have been sort of debating with myself if I should ask another question. – auniket Oct 27 2011 at 19:47
## 1 Answer
The answer is no.
A counterexample is provided by the so-called Hopf surfaces (they were actually constructed by Kodaira, see Donu Arapura's comment).
A Hopf surface of type $\alpha=(\alpha_1, \alpha_2)$, where $0 < |\alpha_1| \leq |\alpha_2| < 1$, is the compact complex surface $H_{\alpha}$ obtained as the quotient of $\mathbb{C}^2 \setminus \{(0,0)\}$ by the infinite cyclic group generated by the automorphism $$(z_1, z_2) \to (\alpha_1 z_1, \alpha_2 z_2).$$
One can prove that $H_{\alpha}$ is a compact complex surface diffeomorphic to $S^1 \times S^3$, so it cannot admit any Kahler metric. In particular, it is not algebraic.
However, $H_{\alpha}$ does not satisfy your property. In fact, there is the following result:
The Hopf surface $H_{\alpha}$ is an elliptic fibre space over $\mathbb{P}^1$ if and only if $\alpha_1^l=\alpha_2^k$ for some $l, k \in \mathbb{Z}$. Otherwise $H_{\alpha}$ contains exactly two compact curves, which are disjoint (they are the images of the $z_1$-axis and the $z_2$-axis).
So $H_{\alpha}$ contains either infinitely many or exactly two disjoint elliptic curves.
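A numerical spot-check of the invariance behind the theorem (a sketch, not a proof): when $\alpha_1^l = \alpha_2^k$, each curve $z_1^l = c\, z_2^k$ is carried to itself by the generating automorphism, so it descends to a compact curve on $H_{\alpha}$; when no such relation holds, the image lands on a different curve of the pencil, and only the two axes survive.

```python
a1 = 0.5
a2 = a1 ** 2                 # alpha_1^2 == alpha_2: the fibre-space case (l=2, k=1)
phi = lambda z1, z2: (a1 * z1, a2 * z2)

c = 3.0 + 1.0j               # each c gives one curve z1^2 == c * z2 of the pencil
z1 = 1.2 + 0.7j
z2 = z1 ** 2 / c             # pick a point on that curve
w1, w2 = phi(z1, z2)
assert abs(w1 ** 2 - c * w2) < 1e-12   # image stays on the SAME curve

# With no power relation alpha_1^l == alpha_2^k (e.g. 0.5 and 0.3), the
# image has a different constant, so the curve is not phi-invariant:
b2 = 0.3
u1, u2 = a1 * z1, b2 * z2
assert abs(u1 ** 2 / u2 - c) > 1e-3    # constant changed: not invariant
```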
For more details, see [Barth-Peters-Van de Ven, Compact Complex Surfaces, Chapter V].
-
I am amazed at the elegance of your crisp yet thorough answer, Francesco: it is simply perfect. Bravo! – Georges Elencwajg Oct 27 2011 at 9:53
Dear Georges, thank you for your kind words. I just used the description of Hopf surfaces given in the beautiful book by Barth-Peters and Van de Ven. – Francesco Polizzi Oct 27 2011 at 10:18
http://physics.stackexchange.com/questions/31407/how-can-a-pion-have-a-mass-given-its-a-field-mediator-and-created-destroyed?answertab=votes
# How can a pion have a mass, given it's a “field mediator” and created/destroyed continuously?
Maybe some of my assumptions here are basically wrong, but isn't it true that
• pion is the "mediator" for the strong force field.
• the quantum field theory basically says that there are no fields, instead all forces are caused by interchanging of mediator particles all the time.
So to me this look like two particles in a nucleus, which are hold together by the strong force, are "sending" these pion particles all the time. But all references say that these particles have mass! Why doesn't this mean that mass is created/destroyed all the time? Is mass of the whole nuclei affected in some way by the constant stream of pions? If the question is basically irrelevant, what are the most important parts that I am missing here?
-
whyever would you think that a theory called Quantum Field Theory says that there are no fields? In fact, the fields are the fundamental objects, whereas particles are 'just' excitations; more so, virtual particles acting as force mediator aren't really particles, but (arguably) artifacts of perturbation theory – Christoph Jul 5 '12 at 20:33
I'll obviously have to study this more, but aren't for example pions real, detected, observable particles? I've read we can even "observe" effects of free pions from cosmic rays? – Cray Jul 5 '12 at 20:43
pions are, but virtual pions are not – Christoph Jul 5 '12 at 20:46
Learn some QM, they said; it'll be fun, they said. :P – Cray Jul 5 '12 at 20:59
## 2 Answers
The fundamental force carriers are gauge bosons and the gauge-symmetries forbid them to have a mass. The only way they can gain a mass is through the Higgs mechanism, as in the case of the W and the Z, i.e. the gauge symmetry has to be spontaneously broken.
So in a sense you are right, the fundamental force mediators should be massless but not for the reason you have given and there is a get out clause if the Higgs mechanism is at play.
But the pions are not fundamental force mediators and are not gauge bosons, so they are not required to be massless for this reason. However, there is an interesting twist to all this: one can in fact understand the pions as the Nambu-Goldstone bosons of the spontaneously broken chiral symmetry (broken by the non-zero vacuum expectation value of the quark condensate). If the chiral symmetry were exact then the pions would indeed be required to be massless (by Goldstone's theorem). However, the chiral symmetry is only an approximate symmetry, as it is explicitly broken by the quark masses. For this reason the pions are allowed to have a (small) mass and are thus called 'pseudo'-Nambu-Goldstone bosons.
I hope the above helps to explain the situation in relation to the masses of the force mediators and the pions. However, I sense that your real confusion is that you cannot understand how a force mediator can have a mass, as you think this will in some way affect the interacting particles in a way that is not observed. You seem to be worried about the interacting particles having to create and destroy mass. But why be so concerned about mass? Even with a massless force mediator the interacting particles still need to create and destroy the energy of the transferred mediator, and mass is simply one form of energy, so by your reasoning you should be equally concerned about both massive and massless force carriers.
But your reasoning should just not concern you: why are you worried about this constant exchange of energy? After all, the particles are interacting, and how else could they do this? A very simple analogy of how exchange of a massive particle could create a force: if two people sat in boats and kept throwing a heavy ball between each other, they would move apart, and the exchange of the massive ball would appear to be producing a repulsive force.
The above analogy does not get too close to the real situation and would for example fail to explain attractive forces, but it at least shows that you should not be concerned about the transfer of mass or energy in such a situation. To really understand how virtual particles can transmit a force then you have to do the calculations and understand that virtual particles are really not particles at all but are non-particle like disturbances in their respective fields. You should also remember that any interacting particle is constantly creating and destroying virtual particles in its vicinity - the creation and annihilation of virtual particles are simply part and parcel of the definition of the particle in the first place.
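The boat analogy can be made quantitative with nothing but momentum conservation (masses and speeds below are made up for illustration):

```python
M = 100.0       # person + boat mass (kg), illustrative
m = 10.0        # the heavy ball (kg)
v_ball = 5.0    # ball's speed over the water after the throw (m/s)

# Thrower: total momentum must stay zero, so the boat recoils backwards.
v_thrower = -(m * v_ball) / M            # -0.5 m/s

# Catcher: (M + m) * v_catcher == m * v_ball after absorbing the ball.
v_catcher = (m * v_ball) / (M + m)       # about +0.45 m/s

# Opposite drifts: repeated exchanges look like a repulsive force.
assert v_thrower < 0 < v_catcher
print(v_thrower, round(v_catcher, 3))    # -0.5 0.455
```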
-
You are right, the creation/destruction of energy is equally confusing to me, I just took mass as an example, it seemed easier to construct a question about. But is the bottomline this: that all force mediator particles are in fact "virtual", and could never be detected, even if one could somehow, say, get in between two protons in a nucleus and try to detect what is going on there? (surely impossible proposition, but if it was possible) ? Also, then what does the explanation of a field by quantified mediators really achieve? What does it explain if they are all virtual? – Cray Jul 6 '12 at 6:43
Yes, all force mediators are virtual particles. A particle is defined as an asymptotic state infinitely far from other particles. So virtual particles are not these and can not be detected as such. They should simply be thought of as disturbances in the field and the name particle is a bit of a misnomer. I would say the use in thinking of particle mediators is in the Feynman diagram approach where we represent the interaction of fields by exchange of one or more virtual particles to represent a perturbative calculation. But this is more a mathematical tool than a true picture of reality. – Mistake Ink Jul 6 '12 at 11:43
Two points:
• Pion exchange (or more generally light meson exchange) is a reasonable approximation (AKA an "effective theory") for the strong force as it acts on nucleons in the nucleus, but it is not the real strong force, which is mediated by gluons.
In other words, pions are not really force mediators at all.
• There isn't actually a problem with massive mediators. The weak force is carried by the W and Z bosons, with masses of 80 and 90 GeV respectively.
-
I thought that force particles could be massive (because they're only virtual anyway) but only under the condition that the force is limited in range. Does that make any sense? – AlanSE Jul 5 '12 at 20:25
@AlanSE Yeah, heavy carriers introduce range limits, and the pion mass is more or less responsible for the range of the inter-nucleon force. As I say the pion exchange model is a good effective theory. – dmckee♦ Jul 5 '12 at 20:28
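The mass/range link in the comment above is the usual Yukawa estimate, range ≈ ħ/(mc) = (ħc)/(mc²); plugging in the standard charged-pion rest energy gives roughly the inter-nucleon distance:

```python
# Yukawa range estimate for a force carried by a particle of mass m:
#   R ~ hbar / (m c) = (hbar c) / (m c^2)
hbar_c = 197.327        # hbar * c in MeV * fm
m_pi_c2 = 139.57        # charged-pion rest energy in MeV
R = hbar_c / m_pi_c2
print(round(R, 2))      # 1.41 fm, about the inter-nucleon spacing in a nucleus
```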
So a widely used approximation is one that assumes creation/destruction of mass/energy? Despite the energy conservation principle? That's the part I don't get: how can we say that any field mediators (i.e., particles that are created all the time, I guess?) can have mass, if that contradicts energy conservation? – Cray Jul 5 '12 at 20:38
@Cray: the hand-wavy argument is that because of uncertainty, anything goes as long as the action doesn't exceed $\hbar$ – Christoph Jul 5 '12 at 20:43
http://stats.stackexchange.com/questions/13266/simple-linear-regression-output-interpretation/13269
# Simple linear regression output interpretation
I have run a simple linear regression of the natural log of 2 variables to determine if they correlate. My output is this:
````R^2 = 0.0893
slope = 0.851
p < 0.001
````
I am confused. Looking at the $R^2$ value, I would say that the two variables are not correlated, since it is so close to $0$. However, the slope of the regression line is almost $1$ (despite looking as though it's almost horizontal in the plot), and the p-value indicates that the regression is highly significant.
Does this mean that the two variables are highly correlated? If so, what does the $R^2$ value indicate?
I should add that the Durbin-Watson statistic was tested in my software, and did not reject the null hypothesis (it equalled $1.357$). I thought that this tested for independence between the $2$ variables. In this case, I would expect the variables to be dependent, since they are $2$ measurements of an individual bird. I'm doing this regression as part of a published method to determine body condition of an individual, so I assumed that using a regression in this way made sense. However, given these outputs, I'm thinking that maybe for these birds, this method isn't suitable. Does this seem a reasonable conclusion?
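For intuition, here is a synthetic data set with the same character as these results: a true slope near 0.85 buried in large noise, so the fit is highly significant yet $R^2$ stays small (illustrative fake data, not the bird measurements):

```python
import math
import random

random.seed(42)
n = 500
beta, sigma = 0.85, 2.7                      # noise chosen so R^2 comes out ~0.09
x = [random.gauss(0, 1) for _ in range(n)]
y = [beta * xi + random.gauss(0, sigma) for xi in x]

# ordinary least squares by hand
mx, my = sum(x) / n, sum(y) / n
sxx = sum((xi - mx) ** 2 for xi in x)
syy = sum((yi - my) ** 2 for yi in y)
sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))

slope = sxy / sxx
r2 = sxy ** 2 / (sxx * syy)
t = math.sqrt(r2 * (n - 2) / (1 - r2))       # t-statistic for testing slope == 0

# significant slope (large t, hence tiny p) despite a small R^2
assert t > 2 and r2 < 0.3
print(round(slope, 2), round(r2, 3), round(t, 1))
```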
-
The Durbin-Watson statistic is a test for serial correlation: that is, to see whether adjacent error terms are mutually correlated. It says nothing about the correlation between your X and your Y! Failing the test is an indication that the slope and p-value should be interpreted with caution. – whuber♦ Jul 19 '11 at 21:29
Ah, ok. That makes a little more sense than whether the two variables themselves are correlated...after all, I thought that's what I was trying to find using the regression. And that failing the test indicates I should be cautious interpreting the slope and p-value makes even more sense in this case! Thanks @whuber! – Mog Jul 20 '11 at 0:34
I would just like to add a slope can be very significant (p-value<.001) even though the relationship is weak, especially with a large sample size. This was hinted at in most of the answers as that the slope (even if it's significant) says nothing about the strength of the relationship. – Glen Jul 20 '11 at 12:30
## 3 Answers
The estimated value of the slope does not, by itself, tell you the strength of the relationship. The strength of the relationship depends on the size of the error variance and the range of the predictor. Also, a significant $p$-value doesn't necessarily tell you that there is a strong relationship; the $p$-value is simply testing whether the slope is exactly 0. For a sufficiently large sample size, even small departures from that hypothesis (e.g. ones not of practical importance) will yield a significant $p$-value.
Of the three quantities you presented, $R^2$, the coefficient of determination, gives the greatest indication of the strength of the relationship. In your case, $R^{2} = .089$ means that $8.9\%$ of the variation in your response variable can be explained by a linear relationship with the predictor. What constitutes a "large" $R^2$ is discipline dependent. For example, in social sciences $R^2 = .2$ might be "large" but in controlled environments like a factory setting, $R^2 > .9$ may be required to say there is a "strong" relationship. In most situations $.089$ is a very small $R^2$, so your conclusion that there is a weak linear relationship is probably reasonable.
-
Thanks Macro. Very helpful answer. I'm glad you included the part about what, exactly, the p-value is testing. It makes a lot of sense that the p-value would be so low considering how close to 1 the slope is. It seems to me, in light of your answer and @jedfrancis', the r^2 value describes that 'cloud' of data points around the line of regression. Excellent! That's much more clear now! – Mog Jul 19 '11 at 19:56
@Macro (+1), fine answer. But how does the "strength of the relationship" depend on the "size of the intercept"? AFAIK the intercept says nothing at all about correlation or "strength" of a linear relationship. – whuber♦ Jul 19 '11 at 21:32
@whuber, you're right - the intercept is irrelevant and definitely doesn't change the correlation - I was thinking about the regression function $y = 10000 + x$ vs. $y = x$ and thinking somehow of the second one being a stronger relationship (all else held equal), since a greater amount of the magnitude of $y$ was due to $x$ in the latter case. Doesn't make much sense now that I think about it. I've edited the post. – Macro Jul 19 '11 at 21:36
@macro Excellent answer, but I would stress (for those new to this subject) that R^2 can be very low even with a strong relationship, if the relationship is nonlinear, and particularly if it is nonmonotonic. My favorite example of this is the relationship between stress and exam score; very low stress and very high stress tend to be worse than moderate stress. – Peter Flom Jul 20 '11 at 10:19
@Macro It's hard to see how $R^2$ would be a "measure of linear dependence" because by definition all dependence in these regression models is linear in the parameters. For instance, if $y=\alpha+\beta x+\gamma x^2+\varepsilon$, $R^2$ could be high due to the quadratic relationship between $x$ and $y$. It seems to me rather that $R^2$ is trying to address, in a quantitative way, the split of a response into a sum of deterministic and stochastic terms. It's even a stretch to conceive of it as quantifying a "strength of the relationship"; it seems more a measure of residual variance. – whuber♦ Jul 20 '11 at 20:44
For a linear regression, the fitted slope is going to be the correlation (which, when squared, gives the coefficient of determination, the $R^2$) times the empirical standard deviation of the regressand (the $y$) divided by the empirical standard deviation of the regressor (the $x$). Depending on the scaling of the $x$ and $y$, you can have a fit slope equal to one but an arbitrarily small $R^2$ value.
In short, the slope is not a good indicator of model 'fit' unless you are certain that the scales of the dependent and independent variables must be equal to each other.
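The identity slope = correlation × sd(y)/sd(x) is easy to confirm on synthetic data (a quick pure-Python check, with deliberately different scales for x and y):

```python
import math
import random

random.seed(1)
n = 200
x = [random.gauss(0, 3) for _ in range(n)]           # sd(x) != sd(y) on purpose
y = [0.4 * xi + random.gauss(0, 1) for xi in x]

mx, my = sum(x) / n, sum(y) / n
sxx = sum((xi - mx) ** 2 for xi in x)
syy = sum((yi - my) ** 2 for yi in y)
sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))

slope = sxy / sxx                                    # fitted OLS slope
r = sxy / math.sqrt(sxx * syy)                       # sample correlation

# slope == correlation * sd(y) / sd(x), exactly (an algebraic identity)
assert abs(slope - r * math.sqrt(syy / sxx)) < 1e-12
```

Rescaling x or y changes the slope arbitrarily while leaving r untouched, which is why the slope alone says nothing about fit.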
-
The $R^2$ value tells you how much variation in the data is explained by the fitted model.
The low $R^2$ value in your study suggests that your data are probably spread widely around the regression line, meaning that the regression model can explain only a very small share (8.9%) of the variation in the data.
Have you checked to see whether a linear model is appropriate? Have a look at the distribution of your residuals, as you can use this to assess the fit of the model to your data. Ideally, your residuals should not show a relation with your $x$ values, and if it does, you may want to think of rescaling your variables in a suitable way, or fitting a more appropriate model.
-
Thanks @jed. Yes, I'd checked the normality of the residuals, and all was well. Your suggestion that the data is spread widely around that regression line is exactly right - the data points looks like a cloud around the line of regression plotted by the software. – Mog Jul 19 '11 at 19:51
Welcome to our site, @jed, and thanks for your reply! Please note that the slope itself says almost nothing about the correlation, apart from its sign, because correlation does not depend on the units in which X and Y are measured but the slope does. – whuber♦ Jul 19 '11 at 21:29
@whuber is saying that the value of the slope does not tell you anything about the strength of the association unless variables are standardized. See shabbychefs answer. – wolf.rauch Jul 19 '11 at 23:54
@wolf.rauch gotcha – jedfrancis Jul 20 '11 at 1:56
@jed It would be good if you were to correct your reply. – whuber♦ Jul 20 '11 at 13:08
http://www.sagemath.org/doc/reference/combinat/sage/combinat/skew_partition.html
# Skew Partitions
A skew partition $$skp$$ of size $$n$$ is a pair of partitions $$[p_1, p_2]$$ where $$p_1$$ is a partition of the integer $$n_1$$, $$p_2$$ is a partition of the integer $$n_2$$, $$p_2$$ is an inner partition of $$p_1$$, and $$n = n_1 - n_2$$. We say that $$p_1$$ and $$p_2$$ are respectively the outer and inner partitions of $$skp$$.
A skew partition can be depicted by a diagram made of rows of cells, in the same way as a partition. Only the cells of the outer partition $$p_1$$ which are not in the inner partition $$p_2$$ appear in the picture. For example, this is the diagram of the skew partition [[5,4,3,1],[3,3,1]].
```sage: print SkewPartition([[5,4,3,1],[3,3,1]]).diagram()
   **
   *
 **
*
```
A skew partition can be connected, which can easily be described in graphic terms: for each pair of consecutive rows, there are at least two cells (one in each row) which have a common edge. This is the diagram of the connected skew partition [[5,4,3,1],[3,1]]:
```sage: print SkewPartition([[5,4,3,1],[3,1]]).diagram()
   **
 ***
***
*
sage: SkewPartition([[5,4,3,1],[3,1]]).is_connected()
True
```
The first example of a skew partition is not a connected one.
Applying a reflection with respect to the main diagonal yields the diagram of the conjugate skew partition, here [[4,3,3,2,1],[3,2,2]]:
```sage: SkewPartition([[5,4,3,1],[3,3,1]]).conjugate()
[[4, 3, 3, 2, 1], [3, 2, 2]]
sage: print SkewPartition([[5,4,3,1],[3,3,1]]).conjugate().diagram()
   *
  *
  *
**
*
```
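Outside of Sage, conjugation is easy to reproduce in plain Python; this sketch (not Sage's implementation) transposes each component partition and matches the output above:

```python
def conjugate(p):
    """Transpose of an integer partition given as a weakly decreasing list."""
    width = p[0] if p else 0
    return [sum(1 for part in p if part >= j) for j in range(1, width + 1)]

def skew_conjugate(outer, inner):
    # conjugating a skew partition conjugates outer and inner separately
    return [conjugate(outer), conjugate(inner)]

print(skew_conjugate([5, 4, 3, 1], [3, 3, 1]))
# [[4, 3, 3, 2, 1], [3, 2, 2]] -- same as the Sage output above
```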
The outer corners of a skew partition are the corners of its outer partition. The inner corners are the internal corners of the outer partition when the inner partition is taken off. Shown below are the coordinates of the inner and outer corners.
```sage: SkewPartition([[5,4,3,1],[3,3,1]]).outer_corners()
[(0, 4), (1, 3), (2, 2), (3, 0)]
sage: SkewPartition([[5,4,3,1],[3,3,1]]).inner_corners()
[(0, 3), (2, 1), (3, 0)]
```
EXAMPLES:
There are 9 skew partitions of size 3, with no empty row nor empty column:
```sage: SkewPartitions(3).cardinality()
9
sage: SkewPartitions(3).list()
[[[3], []],
[[2, 1], []],
[[3, 1], [1]],
[[2, 2], [1]],
[[3, 2], [2]],
[[1, 1, 1], []],
[[2, 2, 1], [1, 1]],
[[2, 1, 1], [1]],
[[3, 2, 1], [2, 1]]]
```
There are 4 connected skew partitions of size 3:
```sage: SkewPartitions(3, overlap=1).cardinality()
4
sage: SkewPartitions(3, overlap=1).list()
[[[3], []], [[2, 1], []], [[2, 2], [1]], [[1, 1, 1], []]]
```
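These counts can be cross-checked outside Sage with a brute-force enumeration over pairs of ordinary partitions. The sketch below is not Sage's algorithm; it relies on the fact that a skew partition of size n with no empty row or column fits inside an n x n box:

```python
def box_partitions(max_rows, max_cols):
    """All partitions (tuples of weakly decreasing positive parts)
    with at most max_rows parts, each part at most max_cols."""
    out = [()]
    def extend(prefix, largest):
        if len(prefix) == max_rows:
            return
        for part in range(1, largest + 1):
            out.append(prefix + (part,))
            extend(prefix + (part,), part)
    extend((), max_cols)
    return out

def conjugate(p):
    """Column lengths of the diagram of p."""
    return [sum(1 for r in p if r > c) for c in range(p[0])] if p else []

def count_skew_partitions(n, overlap=0):
    """Number of skew partitions of size n with no empty row or column,
    all consecutive-row overlaps being at least `overlap`."""
    if n == 0:
        return 1  # the empty skew partition
    count = 0
    for outer in box_partitions(n, n):
        if not outer:
            continue
        for inner in box_partitions(len(outer), outer[0]):
            if sum(outer) - sum(inner) != n:
                continue
            pad = list(inner) + [0] * (len(outer) - len(inner))
            if any(pad[i] > outer[i] for i in range(len(outer))):
                continue  # inner must fit inside outer
            if any(pad[i] == outer[i] for i in range(len(outer))):
                continue  # no empty row
            co, ci = conjugate(outer), conjugate(inner)
            ci = ci + [0] * (len(co) - len(ci))
            if any(ci[j] == co[j] for j in range(len(co))):
                continue  # no empty column
            if len(outer) > 1 and min(outer[i + 1] - pad[i]
                                      for i in range(len(outer) - 1)) < overlap:
                continue
            count += 1
    return count
```

This reproduces the cardinalities 9 and 4 above (and 28 for size 4).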
This is the conjugate of the skew partition [[4,3,1],[2]]:
```sage: SkewPartition([[4,3,1],[2]]).conjugate()
[[3, 2, 2, 1], [1, 1]]
```
Geometrically, we just applied a reflection with respect to the main diagonal on the diagram of the partition. Of course, this operation is an involution:
```sage: SkewPartition([[4,3,1],[2]]).conjugate().conjugate()
[[4, 3, 1], [2]]
```
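Conjugating a skew partition amounts to conjugating the outer and inner partitions separately. A minimal pure-Python sketch (the function names are ours, not Sage's):

```python
def conjugate(p):
    """Conjugate of an ordinary partition: its list of column lengths."""
    return [sum(1 for row in p if row > c) for c in range(p[0])] if p else []

def skew_conjugate(outer, inner):
    """Conjugate of the skew partition [outer, inner]."""
    return [conjugate(outer), conjugate(inner)]
```

Applying it twice returns the original pair, as in the involution example above.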
The jacobi_trudi() method computes the Jacobi-Trudi matrix, whose determinant is the corresponding skew Schur function. See Macdonald, I. G. (1995), “Symmetric Functions and Hall Polynomials”, Oxford Science Publications, for a definition and discussion.
```sage: SkewPartition([[4,3,1],[2]]).jacobi_trudi()
[h[2] h[] 0]
[h[5] h[3] h[]]
[h[6] h[4] h[1]]
```
This example shows how to compute the corners of a skew partition.
```sage: SkewPartition([[4,3,1],[2]]).inner_corners()
[(0, 2), (1, 0)]
sage: SkewPartition([[4,3,1],[2]]).outer_corners()
[(0, 3), (1, 2), (2, 0)]
```
sage.combinat.skew_partition.SkewPartition(skp)¶
Returns the skew partition object corresponding to skp.
EXAMPLES:
```sage: skp = SkewPartition([[3,2,1],[2,1]]); skp
[[3, 2, 1], [2, 1]]
sage: skp.inner()
[2, 1]
sage: skp.outer()
[3, 2, 1]
```
class sage.combinat.skew_partition.SkewPartition_class(skp)¶
Bases: sage.combinat.combinat.CombinatorialObject
TESTS:
```sage: skp = SkewPartition([[3,2,1],[2,1]])
sage: skp == loads(dumps(skp))
True
```
cells()¶
Returns the coordinates of the cells of self. Coordinates are given as (row-index, column-index) and are 0 based.
EXAMPLES:
```sage: SkewPartition([[4, 3, 1], [2]]).cells()
[(0, 2), (0, 3), (1, 0), (1, 1), (1, 2), (2, 0)]
sage: SkewPartition([[4, 3, 1], []]).cells()
[(0, 0), (0, 1), (0, 2), (0, 3), (1, 0), (1, 1), (1, 2), (2, 0)]
sage: SkewPartition([[2], []]).cells()
[(0, 0), (0, 1)]
```
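In plain Python the cell list is a double loop over rows; a sketch (the function name is ours, not Sage's):

```python
def skew_cells(outer, inner):
    """0-based (row, column) coordinates of the cells of the skew partition:
    in row i, the columns from inner[i] (or 0) up to outer[i] - 1."""
    pad = list(inner) + [0] * (len(outer) - len(inner))
    return [(i, j) for i, row_len in enumerate(outer)
            for j in range(pad[i], row_len)]
```

This matches the three doctests above cell for cell.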
column_lengths()¶
Returns the column lengths of the skew partition.
EXAMPLES:
```sage: SkewPartition([[3,2,1],[1,1]]).column_lengths()
[1, 2, 1]
sage: SkewPartition([[5,2,2,2],[2,1]]).column_lengths()
[2, 3, 1, 1, 1]
```
columns_intersection_set()¶
Returns the set of cells in the columns of the outer partition lambda which intersect the skew partition.
EXAMPLES:
```sage: skp = SkewPartition([[3,2,1],[2,1]])
sage: cells = Set([ (0,0), (0, 1), (0,2), (1, 0), (1, 1), (2, 0)])
sage: skp.columns_intersection_set() == cells
True
```
conjugate()¶
Returns the conjugate of the skew partition skp.
EXAMPLES:
```sage: SkewPartition([[3,2,1],[2]]).conjugate()
[[3, 2, 1], [1, 1]]
```
diagram()¶
Returns the Ferrers diagram of self.
EXAMPLES:
```sage: print SkewPartition([[5,4,3,1],[3,3,1]]).ferrers_diagram()
   **
   *
 **
*
sage: print SkewPartition([[5,4,3,1],[3,1]]).diagram()
   **
 ***
***
*
sage: Partitions.global_options(diagram_str='#', convention="French")
sage: print SkewPartition([[5,4,3,1],[3,1]]).diagram()
#
###
 ###
   ##
sage: Partitions.global_options.reset()
```
ferrers_diagram()¶
Returns the Ferrers diagram of self.
EXAMPLES:
```sage: print SkewPartition([[5,4,3,1],[3,3,1]]).ferrers_diagram()
   **
   *
 **
*
sage: print SkewPartition([[5,4,3,1],[3,1]]).diagram()
   **
 ***
***
*
sage: Partitions.global_options(diagram_str='#', convention="French")
sage: print SkewPartition([[5,4,3,1],[3,1]]).diagram()
#
###
 ###
   ##
sage: Partitions.global_options.reset()
```
inner()¶
Returns the inner partition of the skew partition.
EXAMPLES:
```sage: SkewPartition([[3,2,1],[1,1]]).inner()
[1, 1]
```
inner_corners()¶
Returns a list of the inner corners of the skew partition skp.
EXAMPLES:
```sage: SkewPartition([[4, 3, 1], [2]]).inner_corners()
[(0, 2), (1, 0)]
sage: SkewPartition([[4, 3, 1], []]).inner_corners()
[(0, 0)]
```
is_connected()¶
Returns True if self is a connected skew partition.
A skew partition is said to be connected if for each pair of consecutive rows, there are at least two cells (one in each row) which have a common edge.
EXAMPLES:
```sage: SkewPartition([[5,4,3,1],[3,3,1]]).is_connected()
False
sage: SkewPartition([[5,4,3,1],[3,1]]).is_connected()
True
```
is_overlap(n)¶
Returns True if n <= self.overlap().
See also
overlap()
EXAMPLES:
```sage: SkewPartition([[5,4,3,1],[3,1]]).is_overlap(1)
True
```
jacobi_trudi()¶
Returns the Jacobi-Trudi matrix of self, whose determinant is the skew Schur function expressed in the complete homogeneous symmetric functions (h[] denotes $h_0 = 1$, and entries with negative index are 0).
EXAMPLES:
```sage: SkewPartition([[3,2,1],[2,1]]).jacobi_trudi()
[h[1] 0 0]
[h[3] h[1] 0]
[h[5] h[3] h[1]]
sage: SkewPartition([[4,3,2],[2,1]]).jacobi_trudi()
[h[2] h[] 0]
[h[4] h[2] h[]]
[h[6] h[4] h[2]]
```
k_conjugate(k)¶
Returns the k-conjugate of the skew partition.
EXAMPLES:
```sage: SkewPartition([[3,2,1],[2,1]]).k_conjugate(3)
[[2, 1, 1, 1, 1], [2, 1]]
sage: SkewPartition([[3,2,1],[2,1]]).k_conjugate(4)
[[2, 2, 1, 1], [2, 1]]
sage: SkewPartition([[3,2,1],[2,1]]).k_conjugate(5)
[[3, 2, 1], [2, 1]]
```
outer()¶
Returns the outer partition of the skew partition.
EXAMPLES:
```sage: SkewPartition([[3,2,1],[1,1]]).outer()
[3, 2, 1]
```
outer_corners()¶
Returns a list of the outer corners of the skew partition skp.
EXAMPLES:
```sage: SkewPartition([[4, 3, 1], [2]]).outer_corners()
[(0, 3), (1, 2), (2, 0)]
```
overlap()¶
Returns the overlap of self.
The overlap of two consecutive rows in a skew partition is the number of columns the two rows have in common; when the rows occupy disjoint sets of columns it is negative, minus the number of columns in the gap between them. Concretely, the overlap of rows i and i+1 is outer[i+1] - inner[i].
The overlap of a skew partition is the minimum of the overlaps of consecutive rows, or +Infinity if there is at most one row. The skew partition is connected exactly when its overlap is positive.
EXAMPLES:
```sage: SkewPartition([[],[]]).overlap()
+Infinity
sage: SkewPartition([[1],[]]).overlap()
+Infinity
sage: SkewPartition([[10],[]]).overlap()
+Infinity
sage: SkewPartition([[10],[2]]).overlap()
+Infinity
sage: SkewPartition([[10,1],[2]]).overlap()
-1
sage: SkewPartition([[10,10],[1]]).overlap()
9
```
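The overlap is easy to compute directly from the two lists of parts. A small sketch reproducing the examples above (the function name is ours, not Sage's):

```python
def skew_overlap(outer, inner):
    """Minimum over consecutive rows of outer[i+1] - inner[i];
    +Infinity when there are fewer than two rows."""
    if len(outer) <= 1:
        return float('inf')
    pad = list(inner) + [0] * (len(outer) - len(inner))
    return min(outer[i + 1] - pad[i] for i in range(len(outer) - 1))
```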
pieri_macdonald_coeffs()¶
Computes the coefficients which appear in the Pieri formula for Macdonald polynomials, as given in Macdonald's book (Chapter VI.6, formula 6.24(ii)).
EXAMPLES:
```sage: SkewPartition([[3,2,1],[2,1]]).pieri_macdonald_coeffs()
1
sage: SkewPartition([[3,2,1],[2,2]]).pieri_macdonald_coeffs()
(q^2*t^3 - q^2*t - t^2 + 1)/(q^2*t^3 - q*t^2 - q*t + 1)
sage: SkewPartition([[3,2,1],[2,2,1]]).pieri_macdonald_coeffs()
(q^6*t^8 - q^6*t^6 - q^4*t^7 - q^5*t^5 + q^4*t^5 - q^3*t^6 + q^5*t^3 + 2*q^3*t^4 + q*t^5 - q^3*t^2 + q^2*t^3 - q*t^3 - q^2*t - t^2 + 1)/(q^6*t^8 - q^5*t^7 - q^5*t^6 - q^4*t^6 + q^3*t^5 + 2*q^3*t^4 + q^3*t^3 - q^2*t^2 - q*t^2 - q*t + 1)
sage: SkewPartition([[3,3,2,2],[3,2,2,1]]).pieri_macdonald_coeffs()
(q^5*t^6 - q^5*t^5 + q^4*t^6 - q^4*t^5 - q^4*t^3 + q^4*t^2 - q^3*t^3 - q^2*t^4 + q^3*t^2 + q^2*t^3 - q*t^4 + q*t^3 + q*t - q + t - 1)/(q^5*t^6 - q^4*t^5 - q^3*t^4 - q^3*t^3 + q^2*t^3 + q^2*t^2 + q*t - 1)
```
quotient(k)¶
The quotient map extended to skew partitions.
EXAMPLES:
```sage: SkewPartition([[3, 3, 2, 1], [2, 1]]).quotient(2)
[[[3], []], [[], []]]
```
r_quotient(length)¶
This method is deprecated.
EXAMPLES:
```sage: SkewPartition([[3, 3, 2, 1], [2, 1]]).r_quotient(2)
doctest:1: DeprecationWarning: r_quotient is deprecated. Use quotient instead.
See http://trac.sagemath.org/5790 for details.
[[[3], []], [[], []]]
```
row_lengths()¶
Returns the row lengths of the skew partition.
EXAMPLES:
```sage: SkewPartition([[3,2,1],[1,1]]).row_lengths()
[2, 1, 1]
```
rows_intersection_set()¶
Returns the set of cells in the rows of the outer partition lambda which intersect the skew partition.
EXAMPLES:
```sage: skp = SkewPartition([[3,2,1],[2,1]])
sage: cells = Set([ (0,0), (0, 1), (0,2), (1, 0), (1, 1), (2, 0)])
sage: skp.rows_intersection_set() == cells
True
```
size()¶
Returns the size of the skew partition.
EXAMPLES:
```sage: SkewPartition([[3,2,1],[1,1]]).size()
4
```
to_dag()¶
Returns a directed acyclic graph corresponding to the skew partition.
EXAMPLES:
```sage: dag = SkewPartition([[3, 2, 1], [1, 1]]).to_dag()
sage: dag.edges()
[('0,1', '0,2', None), ('0,1', '1,1', None)]
sage: dag.vertices()
['0,1', '0,2', '1,1', '2,0']
```
to_list()¶
Returns self as a list of lists.
EXAMPLES:
```sage: s = SkewPartition([[4,3,1],[2]])
sage: s.to_list()
[[4, 3, 1], [2]]
sage: type(s.to_list())
<type 'list'>
```
sage.combinat.skew_partition.SkewPartitions(n=None, row_lengths=None, overlap=0)¶
Returns the combinatorial class of skew partitions.
EXAMPLES:
```sage: SkewPartitions(4)
Skew partitions of 4
sage: SkewPartitions(4).cardinality()
28
sage: SkewPartitions(row_lengths=[2,1,2])
Skew partitions with row lengths [2, 1, 2]
sage: SkewPartitions(4, overlap=2)
Skew partitions of 4 with overlap of 2
sage: SkewPartitions(4, overlap=2).list()
[[[4], []], [[2, 2], []]]
```
class sage.combinat.skew_partition.SkewPartitions_all¶
Bases: sage.combinat.combinat.CombinatorialClass
TESTS:
```sage: S = SkewPartitions()
sage: S == loads(dumps(S))
True
```
Element¶
alias of SkewPartition_class
list()¶
TESTS:
```sage: SkewPartitions().list()
Traceback (most recent call last):
...
NotImplementedError
```
class sage.combinat.skew_partition.SkewPartitions_n(n, overlap=0)¶
Bases: sage.combinat.combinat.CombinatorialClass
The combinatorial class of skew partitions with given size (and minimal horizontal overlap).
Element¶
alias of SkewPartition_class
cardinality()¶
Returns the number of skew partitions of the integer n.
EXAMPLES:
```sage: SkewPartitions(0).cardinality()
1
sage: SkewPartitions(4).cardinality()
28
sage: SkewPartitions(5).cardinality()
87
sage: SkewPartitions(4, overlap=1).cardinality()
9
sage: SkewPartitions(5, overlap=1).cardinality()
20
sage: s = SkewPartitions(5, overlap=-1)
sage: s.cardinality() == len(s.list())
True
```
class sage.combinat.skew_partition.SkewPartitions_rowlengths(co, overlap=0)¶
Bases: sage.combinat.combinat.CombinatorialClass
The combinatorial class of all skew partitions with given row lengths.
Element¶
alias of SkewPartition_class
list()¶
Returns a list of all the skew partitions that have row lengths given by the composition self.co.
EXAMPLES:
```sage: SkewPartitions(row_lengths=[2,2]).list()
[[[2, 2], []], [[3, 2], [1]], [[4, 2], [2]]]
sage: SkewPartitions(row_lengths=[2,2], overlap=1).list()
[[[2, 2], []], [[3, 2], [1]]]
```
sage.combinat.skew_partition.from_row_and_column_length(rowL, colL)¶
Construct a skew partition from its row lengths and column lengths.
INPUT:
• rowL – a composition or a list of positive integers
• colL – a composition or a list of positive integers
OUTPUT:
• The unique skew partition with row lengths rowL and column lengths colL, if it exists.
• A ValueError is raised if rowL and colL are not compatible.
EXAMPLES:
```sage: from sage.combinat.skew_partition import from_row_and_column_length
sage: print from_row_and_column_length([3,1,2,2],[2,3,1,1,1]).diagram()
  ***
 *
**
**
sage: from_row_and_column_length([],[])
[[], []]
sage: from_row_and_column_length([1],[1])
[[1], []]
sage: from_row_and_column_length([2,1],[2,1])
[[2, 1], []]
sage: from_row_and_column_length([1,2],[1,2])
[[2, 2], [1]]
sage: from_row_and_column_length([1,2],[1,3])
Traceback (most recent call last):
...
ValueError: Sum mismatch : [1, 2] and [1, 3]
sage: from_row_and_column_length([3,2,1,2],[2,3,1,1,1])
Traceback (most recent call last):
...
ValueError: Incompatible row and column length : [3, 2, 1, 2] and [2, 3, 1, 1, 1]
```
Warning
If some rows and columns have length zero, there is no way to recover the skew partition unambiguously, so a ValueError is raised. For example, here are two skew partitions with the same row and column lengths:
```sage: skp1 = SkewPartition([[2,2],[2,2]])
sage: skp2 = SkewPartition([[2,1],[2,1]])
sage: skp1.row_lengths(), skp1.column_lengths()
([0, 0], [0, 0])
sage: skp2.row_lengths(), skp2.column_lengths()
([0, 0], [0, 0])
sage: from_row_and_column_length([0,0], [0,0])
Traceback (most recent call last):
...
ValueError: row and column length must be positive
```
TESTS:
```sage: all(from_row_and_column_length(p.row_lengths(), p.column_lengths()) == p
... for i in range(8) for p in SkewPartitions(i))
True
```
sage.combinat.skew_partition.row_lengths_aux(skp)¶
EXAMPLES:
```sage: from sage.combinat.skew_partition import row_lengths_aux
sage: row_lengths_aux([[5,4,3,1],[3,3,1]])
[2, 1, 2]
sage: row_lengths_aux([[5,4,3,1],[3,1]])
[2, 3]
```
http://physics.stackexchange.com/questions/16520/potential-energy-of-a-charged-ring
# Potential energy of a charged ring
Consider a ring of radius $R$ and linear charge density $\rho$. What will be the potential energy of the ring in its self-field?
The best I can do: $$dq = \rho R \cdot \, d \alpha$$
$$E_p = 2 \pi R \cdot 2 \int_{0}^{\pi - \delta \alpha} \frac{\rho^2 R^2}{r(\alpha)} \, d\alpha$$
where
$$r(\alpha) = \sqrt{2} R \sqrt {1+ \cos (\alpha)}$$
Edit:
This is wrong - $E_p$ doesn't even have the correct dimensions.
## 2 Answers
You've got basically the right idea. Just for clarity, let me recap the setup: suppose that your ring is centered at the origin and oriented in the xy plane. Consider two differential elements of charge, $\mathrm{d}q$ located at $(R,0)$, and $\mathrm{d}q'$, located at $(R\cos\phi,R\sin\phi)$. The potential energy of these two charge elements is
$$\mathrm{d}^2U = k\frac{\mathrm{d}q\mathrm{d}q'}{r}$$
The distance between the two differential charges is
$$r = \sqrt{(R - R\cos\phi)^2 + (R\sin\phi)^2} = R\sqrt{2 - 2\cos\phi}$$
You can easily generalize this to apply to any two differential elements of charge located at angles $\theta_1$ and $\theta_2$, by just replacing $\phi$ with the angular difference between them, $\theta_1 - \theta_2$.
Now in theory, you should be able to determine the potential energy of the ring by integrating over all possible pairs of charge elements:
$$\begin{align}U &= \iint\mathrm{d}^2U\\ &= k\int_0^{2\pi}\int_0^{2\pi}\frac{\rho R\mathrm{d}\theta_1\rho R\mathrm{d}\theta_2}{R\sqrt{2 - 2\cos(\theta_1 - \theta_2)}}\\ &= k\rho^2 R\int_0^{2\pi}\int_0^{2\pi}\frac{\mathrm{d}\theta_1\mathrm{d}\theta_2}{\sqrt{2 - 2\cos(\theta_1 - \theta_2)}} \end{align}$$
But oops, guess what, the integral doesn't converge! So it's clearly not that easy.
In fact, it actually makes sense that this integral shouldn't converge. Think about the potential energy contributed by a pair of charge elements $\mathrm{d}q_1$ and $\mathrm{d}q_2$ which are very close to each other. The denominator of $\mathrm{d}^2U$ becomes very small, and as the separation goes to zero, the contribution to the potential energy becomes infinite. It turns out that if you're doing the equivalent calculation for a surface or volume charge distribution, the "spread" of the charge in 2 or 3 dimensions is enough to keep the integral from diverging, but not so with a line charge. So the bottom line is that the potential energy is infinite.
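The divergence can be watched numerically. Using the symmetry in $\theta_1 - \theta_2$, the double integral above reduces (up to constant factors) to $I(\epsilon) = \int_\epsilon^{2\pi-\epsilon} \mathrm{d}\phi/\sqrt{2 - 2\cos\phi}$ with a short-distance cutoff $\epsilon$. Since $\sqrt{2 - 2\cos\phi} = 2\sin(\phi/2)$, it has the closed form $-2\ln\tan(\epsilon/4) \approx 2\ln(4/\epsilon)$, growing without bound as $\epsilon \to 0$. A quick sketch (not part of the original answer):

```python
import math

def ring_integral(eps, n=200_000):
    """Midpoint-rule estimate of I(eps): the integral of
    1 / sqrt(2 - 2 cos(phi)) for phi from eps to 2*pi - eps."""
    h = (2.0 * math.pi - 2.0 * eps) / n
    return sum(h / math.sqrt(2.0 - 2.0 * math.cos(eps + (i + 0.5) * h))
               for i in range(n))

def ring_integral_exact(eps):
    """Closed form: the antiderivative of 1/(2 sin(phi/2)) is
    ln tan(phi/4), which evaluates to -2 ln tan(eps/4)."""
    return -2.0 * math.log(math.tan(eps / 4.0))
```

Shrinking the cutoff by a factor of 10 adds about $2\ln 10 \approx 4.6$ to the integral, the logarithmic divergence described above.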
In practice, this isn't really an issue because any realistic charge distribution is constructed by pushing together existing pieces. You can never actually get the pieces to be right next to each other, so you don't have the problem with $r = 0$ in the denominator. It does interest some theorists, though, to figure out what's going on with this sort of situation and whether it makes sense on some fundamental level to have a theory in which a simple, sensible calculation like this turns out to be infinite.
Yep, this is the stuff I remember. Would it also be fair to say that an infinitely thin wire loop has infinitely small capacitance? That sounds much less exotic IMO. I think the same can be said for an infinite line charge. We often look for the field around a line charge, but rarely question the potential (per length) of the line itself. – AlanSE Nov 7 '11 at 16:50
Thanks David. I tried to re-write your solution for the 2D case. So assume that the inner area of the circle is also charged. Then:
$dq = r \rho \, d\alpha \, dr$
and the final integral is
$U = \frac{1}{2} \cdot \rho^2 \int_{0}^{2 \pi} \int_{0}^{2 \pi} \int_{0}^{R} \int_{0}^{R} \frac{r_1 r_2 \, dr_1 \, dr_2 \, d\theta_1 \, d\theta_2 }{\sqrt{(r_1-r_2 \cos(\theta_1 -\theta_2))^2 + r_2^2 \sin^2 (\theta_1 -\theta_2)}}$
here $1/2$ takes into account double counting of interactions.
I cannot evaluate this integral - nor yours - but I can see that there are cases where the denominator is equal to zero.
It seems to me that such an approach will always give divergence. The only thing which works is Poisson's equation: it gives a finite potential in 3D for a charged ball (interior also charged).
The reason for that could be the abstractness of $\rho$ - in reality we always have discrete electrons - and if we wish to take that into account, maybe we have to work within a quantum theory of electricity.
If you use a log potential in 2d, you get a convergent answer, the same as the self-potential of a cylinder with uniform charge density on the surface. – Ron Maimon Aug 1 '12 at 6:04
http://physics.stackexchange.com/questions/20998/when-do-thermal-and-chemical-equilibrium-not-coincide?answertab=oldest
# When do thermal and chemical equilibrium not coincide?
What is an example for a system, which is in chemical equilibrium, but not in thermodynamical equilibrium?
And what about the other way around?
It seems to me that as long as parameters like temperature $T$ and pressure $p$ are changing, there cannot be chemical equilibrium, since chemical reactions depend on these quantities. Hence, if there is chemical equilibrium, the parameters are not changing, enforcing thermodynamical equilibrium.
## 2 Answers
I think generically the answer is: they're in thermal equilibrium if they have equal temperatures, in chemical equilibrium if they have equal chemical potentials, and in both thermal and chemical equilibrium if both are equal. They do not need to be at equal pressures, which is what I assume you mean by $p$. Imagine a tall column of gas in a gravitational field. Divide it (mentally) into two systems, the one at the top and the one at the bottom. The pressure at the top and bottom will differ, but they will be in chemical equilibrium.
Chemical equilibrium is a subset of thermodynamic equilibrium. So there are no examples of systems in thermodynamic equilibrium that are not also in chemical equilibrium.
Here's an example of a hypothetical system in chemical equilibrium but not in thermodynamic equilibrium. A system containing two phases, one a condensed phase containing two equilibrating chemicals having no significant volume changes accompanying the reaction, the other an inert gas phase, under varying external pressure. The system is always in thermal equilibrium with its surroundings.
This system would not be in thermodynamic equilibrium because as the external pressure changed the gas phase would be changing its volume. It remains in chemical equilibrium because the condensed phase housing the chemical equilibrium would not respond to the pressure change, and the gas phase hosts no chemical equilibria.
This is basically a cheat because the system is, practically speaking, two separate systems that are in physical contact. Maybe someone else will be more imaginative.
http://mathoverflow.net/questions/15087/computing-fundamental-groups-and-singular-cohomology-of-projective-varieties/15114
## Computing fundamental groups and singular cohomology of projective varieties
Are there any general methods for computing fundamental group or singular cohomology (including the ring structure, hopefully) of a projective variety (over C of course), if given the equations defining the variety?
I seem to recall that, if the variety is smooth, we can compute the H^{p,q}'s by computer -- and thus the H^n's by Hodge decomposition -- is this correct? However this won't work if the variety is not smooth -- are there any techniques that work even for non-smooth things?
Also I seem to recall some argument that, at least if we restrict our attention to smooth things only, all varieties defined by polynomials of the same degrees will be homotopy equivalent. The homotopy should be gotten by slowly changing the coefficients of the polynomials. Is something like this true? Does some kind of argument like this work?
Just to clarify: are you primarily asking about the smooth case, or the non-smooth case? and are you asking for a theorem which says "singular coho is isomorphic to some coho theory for the coordinate ring", or actual algorithms? – Yemon Choi Feb 12 2010 at 9:00
I am asking about anything that can be said about either case, but I am more interested in the non-smooth case. Algorithms are better, but I'd be interested in relevant non-algorithmic statements also. – Kevin Lin Feb 12 2010 at 9:05
Well: in the smooth case you have the HKR theorem, whereby the Hochschild coho of the coordinate ring is isomorphic to the exterior powers of the Kahler module of said ring; so in principle I guess one could read off the de Rham cohomology of the original space as a real manifold. (But I'm rapidly straying away from what I know and into vague armchair-punditry) – Yemon Choi Feb 12 2010 at 9:20
@Yemon: The Hochschild homology (the usual HKR involdes homology) vanishes above the complex dimension, yet a projective variery has non vanishing up to the real dimension. You need to compute the hypercohomology of the algebraic de Rham complex to get something useful. But there is very little gain in using HKR when you can start with the algebraic de Rham complex directly! – Mariano Suárez-Alvarez Feb 12 2010 at 14:18
@Mariano: thanks for the corrections. I wasn't thinking very clearly when I wrote the above comment! – Yemon Choi Feb 12 2010 at 18:10
## 9 Answers
This is an interesting question. To repeat some of the earlier answers, one should be able to get one's hands on a triangulation algorithmically using real algebro-geometric methods, and thereby compute singular cohomology and (a presentation for) the fundamental group. But this should probably be a last resort in practice. For smooth projective varieties, as people have noted, one can compute the Hodge numbers by writing down a presentation for the sheaf p-forms and then apply standard Groebner basis techniques to compute sheaf cohomology. This does work pretty well on a computer. For specific classes, there are better methods. For smooth complete intersections, there is a generating function for Hodge numbers due to Hirzebruch (SGA 7, exp XI), which is extremely efficient to use.
As for the fundamental group, if I had to compute it for a general smooth projective variety, I would probably use a Lefschetz pencil to write down a presentation.
For singular varieties, one can still define Hodge numbers using the mixed Hodge structure on cohomology. The sums of these numbers are still the Betti numbers. I expect these Hodge numbers are still computable, but it would be somewhat unpleasant to write down a general algorithm. The first step is to build a simplicial resolution using resolution of singularities. My colleagues who know about resolutions assure me that this can be done algorithmically nowadays.
(This is my first reply in this forum. Hopefully it'll go through.)
Welcome to Math Overflow! Thanks for your response! – Kevin Lin Feb 22 2010 at 0:20
I just want to assure you that everything in this situation is computable. For any real semi-algebraic set, there is an algorithm called cylindrical decomposition which breaks it into contractible pieces, glued along contractible pieces. See Algorithms in Real Algebraic Geometry, by Basu, Pollack and Roy. The $\mathbb{C}$-points of a $\mathbb{C}$-variety are, in particular, a semi-algebraic set, by restriction of scalars.
So you can compute cohomology, and you can compute a presentation of $\pi_1$. Of course, as always when dealing with groups in terms of generators and relations, it will probably not be computable to determine whether that group is trivial, or is isomorphic to some other group given by generators and relations.
I am pretty sure that this is not how anyone actually computes these things though. I hope someone will give an answer that reflects the actual state of the art.
David, do you happen to know what happens in characteristic $p>0$?Thanks. – Hailong Dao Feb 12 2010 at 15:03
Regarding your third paragraph, let p : X ---> B be a smooth, proper map of varieties over C, and for the heck of it say B is smooth. Here I'm thinking B is your space of possible coefficients in the equation, and the fibers of p are the varieties you're talking about. Then on complex points, p is proper submersion between manifolds, and hence a fibration of topological spaces; thus the fibers of p will be homotopy equivalent provided B is connected, canonically homotopy equivalent (up to 2nd order homotopy) if B is simply-connected, and really canonically homotopy equivalent if B is contractible, e.g. an affine space.
Cool. That seems like it should work, though I should think about it some more to really convince myself. Question: are all affine varieties really contractible? I guess it'd be a similar argument...? – Kevin Lin Feb 12 2010 at 16:33
Affine varieties are typically not contractible, e.g. a punctured positive genus curve. Affine spaces (as in Dustin's example) are contractible. – Emerton Feb 12 2010 at 16:46
Yeah, ok, that's what I was thinking. What's an "affine space" then? Oh -- I guess you just mean A^n? Silly me. – Kevin Lin Feb 12 2010 at 17:11
Apparently you can compute the h^{p,q}'s of smooth things in, for example, Macaulay. Here's an example: computing the h^{p,q}'s of a quintic hypersurface in P^4.
The trick I know (learned it from Ron Livne) is to project it to some space with known homotopy / homology, throw away the ramification and branch loci to get a covering map (and you had better pray it's Galois - otherwise the mess is even bigger), and then bring the ramification back as extra relations.
e.g. here is a computation of the homotopy group of an elliptic curve E:
You have a degree 2 projection to a P1 with four ramification points. The homotopy group of P1 minus the four branch points is freely generated by loops about 3 of these points.
Claim: the homotopy group of E minus the ramification locus is the kernel of the map from the free group on three generators $F(a,b,c)$ to $\mathbb{Z}/2$ given by adding the exponents of all the letters and reducing mod 2 (e.g. $abbac^{-1}b$ maps to 4 mod 2 = 0).
Sketch of proof: think of $a, b, c$ as paths in E minus the ramification points which have to glue to a closed loop, and to project to the generators of the homotopy of P1 minus the branch points (i.e. they are "half loops" / sheet interchanges about the ramification points).
Finally we have to "fill" the ramification points - i.e. to bring in the extra relations $a^2, b^2, c^2$. After adding these relations, our group is generated by $ab, ba, ac, ca, bc, cb$. Hence - since e.g. $(bc)(cb) = 1$ - it is generated by $ab, ac, bc$; hence - since $(ab)(bc) = ac$ - it is generated by $ab, ac$. We now observe that the map which sends $x$ to $axa$ sends an element to its inverse; which shows that
$$(ab)(ca)(ab)^{-1} = (ab)(ca)(ba) = (bc)^{-1}(ba) = (cb)(ba) = ca.$$
Note that this is the simplest example one can give - this is a painful trick.
Not the cleanest answer (if it's too messy to follow, I can clean it up a bit) but look at section 2.4 (starting on page 14) of these notes from a complex algebraic geometry course that I took. Also, section/chapter 6 on page 33 picks up the thread after some diversions about curves. But roughly, the cohomology (specifically, the Hodge decomposition) depends only on the Jacobian Ideal.
Oh, and of course, computing the dimension of graded pieces of a zero-dimensional ring are pretty much what Groebner bases are best at. – Charles Siegel Feb 12 2010 at 13:36
First we assume that your equations have rational coefficients; if this is not so then you can probably 'approximate' your variety by a variety defined over the rationals without changing its topology (though you have to be careful here).
Now, multiplying the equations by the common denominator, you obtain equations with integral coefficients. Then you can consider these equations over finite fields (and you get certain 'reductions mod p' of your variety).
I believe (and can probably prove) that for p large enough, the etale cohomology of this reduction is isomorphic to that of the original variety; and the etale cohomology over the complex numbers is isomorphic to the singular cohomology with l-adic coefficients.
Lastly, you can compute the Betti numbers over a finite field via computing the number of points of this variety over extensions of this field. I am not sure yet that this algorithm is optimal.:)
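As a toy illustration of the last step (this is not from the answer itself), here is a brute-force point count for a Weierstrass cubic over a small prime field; for a nonsingular curve, Hasse's bound keeps the count within $2\sqrt{p}$ of $p+1$, and the deviations $a_p$ are exactly what the zeta function packages:

```python
import math

def count_points(a, b, p):
    """Brute-force count of projective points of y^2 = x^3 + a*x + b
    over F_p: affine solutions plus the single point at infinity."""
    sq = {}
    for y in range(p):
        r = y * y % p
        sq[r] = sq.get(r, 0) + 1          # number of square roots of r mod p
    affine = sum(sq.get((x * x * x + a * x + b) % p, 0) for x in range(p))
    return affine + 1
```

For example, $y^2 = x^3 - x$ over $\mathbb{F}_5$ has 8 points.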
Here you have to compute the number of solutions in all finite fields of char p. Is there an algorithm for that? – algori Feb 20 2010 at 13:50
You don't actually have to count solutions in ALL fields of a given characteristic. These numbers of points are encoded into the zeta-function of the variety, which is a rational function by the results of Grothendieck and Dwork. So in order to compute all these numbers, you only need to know the first few of them (the quantity required seems to be the sum of dimensions of cohomology groups; probably some apriori bounds for this exist). So, everything is finite. Yet I am very far from being sure that this is the best possible algorithm, and do not know how to optimize it. – Mikhail Bondarko Feb 23 2010 at 20:39
I believe you can have an elliptic curve and a singular cubic curve both described by equations of degree three. Some people who know more should be able to answer this. If you change the coefficients I guess the homotopy type can change; just imagine that some subvariety degenerates into something singular and then expands back out to something else on the other side.
Even at the algebraic level I think you can have (a minimal set of) polynomials of different degrees defining the same ideal.
Another thought (or variant of your question) ... if you start with some subvariety Y and take a hyperplane section X, then X is cut out by the same things that cut out Y together with another linear polynomial. All the cohomology of X is determined except in the middle dimension by the Lefschetz hyperplane theorem. How do the coefficients describing the hyperplane determine this middle cohomology, including the cup products? (I guess the answer is well known?)
Yes, that's right, varieties defined by polynomials of the same degrees can have different homotopy types if we do not assume they are smooth, and your example illustrates that. At least intuitively, a singularity may change the homotopy type because it may shrink a cycle to, say, a point, thus killing that cycle in homology or cohomology. – Kevin Lin Feb 12 2010 at 16:41
Here's an interesting special case. If X is a simple convex polytope then the Betti numbers h(X) of the associated toric variety can be computed from the face vector f(X). In fact, h(X) = Cf(X) where C is a matrix of binomial coefficients. This is closely related to Peter McMullen's shelling argument for proving the Dehn-Sommerville equations.
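As a quick sketch of that linear relation (my own code, not from the answer): for a simplicial $d$-polytope with f-vector $(f_0,\dots,f_{d-1})$, the standard identity $\sum_k h_k t^k = \sum_{i=0}^{d} f_{i-1}(t-1)^{d-i}$ (with $f_{-1}=1$) produces the h-vector, and for the toric variety of a simple polytope the even Betti numbers are $b_{2k} = h_k$ of the dual simplicial polytope.

```python
from math import comb

def h_vector(f):
    """h-vector of the boundary of a simplicial d-polytope from its f-vector."""
    d = len(f)
    faces = [1] + list(f)              # prepend f_{-1} = 1 for the empty face
    h = [0] * (d + 1)
    for i, fi in enumerate(faces):     # accumulate coefficients of f_{i-1}*(t-1)^(d-i)
        n = d - i
        for j in range(n + 1):
            h[j] += fi * comb(n, j) * (-1) ** (n - j)
    return h

# boundary of the tetrahedron: toric variety is P^3, Betti numbers 1,1,1,1
assert h_vector([4, 6, 4]) == [1, 1, 1, 1]
# octahedron (dual to the cube): toric variety is (P^1)^3, Betti numbers 1,3,3,1
assert h_vector([6, 12, 8]) == [1, 3, 3, 1]
```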
Here's another special case. For certain general convex polytopes the Betti numbers are not a linear function of the flag vector. This was done using explicit calculations (using Macaulay as I recall) by Mark McConnell. However, the middle perversity intersection homology (mpih) Betti numbers of the associated toric variety are a linear function of the flag vector.
The ring structure on the homology of the toric variety associated with a simple polytope is closely associated with the volume of the polytope, as it varies when the facets are moved in and out.
Finally, if the variety is defined over say the rationals then one can reduce mod p and start counting points, and then apply the Weil conjectures to determine the Betti numbers. In fact, this is a quick way to determine the Betti numbers of a smooth toric variety.
http://stats.stackexchange.com/questions/6493/weakly-informative-prior-distributions-for-scale-parameters/28420
# Weakly informative prior distributions for scale parameters
I have been using log-normal distributions as prior distributions for scale parameters (for normal distributions, t distributions, etc.) when I have a rough idea about what the scale should be, but want to err on the side of saying I don't know much about it. I use it because that choice makes intuitive sense to me, but I haven't seen others use it. Are there any hidden dangers to this?
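For concreteness, here is the kind of prior I mean (a Python sketch; the rough guess of 5 and the log-scale spread of 2 are made-up values). Centering the log-normal so its median equals the guess, with a large spread on the log scale, encodes "roughly this size, but I'm not sure":

```python
import math
import random
import statistics

random.seed(1)

guess = 5.0      # rough idea of what the scale should be
sdlog = 2.0      # large spread on the log scale = "I don't know much"

# log-normal draws: the median is exactly `guess`, the support is (0, inf)
draws = [guess * math.exp(random.gauss(0.0, sdlog)) for _ in range(100_000)]

assert all(x > 0 for x in draws)             # always valid as a scale parameter
assert 4.5 < statistics.median(draws) < 5.5  # centered on the rough guess
```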
Interesting. I am doing numerical stuff; is there an advantage to these distributions besides conjugacy? – John Salvatier Jan 24 '11 at 19:40
Conjugate priors for a particular likelihood, like the normal, are just priors that lead to a posterior distribution in the same family as the prior, given a set of data. If you use a conjugate prior you don't have to get into the mess of doing the integration to calculate the posterior. It makes things convenient, but these days MCMC makes it much easier to use a wide variety of possible priors. – Michael Chernick May 10 '12 at 20:20
## 3 Answers
I would recommend using a "Beta distribution of the second kind" ($Beta_2$ for short) for a mildly informative distribution, and to use the conjugate inverse gamma distribution if you have strong prior beliefs. The reason I say this is that the conjugate prior is non-robust in the sense that, if the prior and data conflict, the prior has an unbounded influence on the posterior distribution. Such behaviour is what I would call "dogmatic", and not justified by mild prior information.
The property which determines robustness is the tail-behaviour of the prior and of the likelihood. A very good article outlining the technical details is here. For example, a likelihood can be chosen (say a t-distribution) such that as an observation $y_i \rightarrow \infty$ (i.e. becomes arbitrarily large) it is discarded from the analysis of a location parameter (much in the same way that you would intuitively do with such an observation). The rate of "discarding" depends on how heavy the tails of the distribution are.
Some slides which show an application in the hierarchical modelling context can be found here (they show the mathematical form of the $Beta_2$ distribution), with a paper here.
If you are not in the hierarchical modeling context, then I would suggest comparing the posterior (or whatever results you are creating) with what you get using the Jeffreys prior for a scale parameter, which is given by $p(\sigma)\propto\frac{1}{\sigma}$. This can be created as a limit of the $Beta_2$ density as both its parameters converge to zero. For an approximation you could use small values. But I would try to work out the solution analytically if at all possible (and if not a complete analytical solution, get the analytical solution as far progressed as you possibly can), because you will not only save yourself some computational time, but you are also likely to understand what is happening in your model better.
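To see that limiting behaviour numerically (my own sketch; I write the $Beta_2(a,b)$ density as $x^{a-1}(1+x)^{-(a+b)}/B(a,b)$): for tiny $a, b$ the density ratios approach those of $1/\sigma$.

```python
import math

def beta2_pdf(x, a, b):
    """Density of the Beta distribution of the second kind (beta-prime)."""
    log_B = math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)
    return math.exp((a - 1) * math.log(x) - (a + b) * math.log1p(x) - log_B)

a = b = 1e-3                                   # both parameters near zero
ratio = beta2_pdf(2.0, a, b) / beta2_pdf(1.0, a, b)

# the Jeffreys prior 1/sigma would give exactly (1/2)/(1/1) = 0.5
assert abs(ratio - 0.5) < 0.005
```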
A further alternative is to specify your prior information in the form of constraints (mean equal to $M$, variance equal to $V$, IQR equal to $IQR$, etc., with the values of $M,V,IQR$ specified by yourself), and then use the maximum entropy distribution (search any work by Edwin Jaynes or Larry Bretthorst for a good explanation of what Maximum Entropy is and what it is not) with respect to the Jeffreys "invariant measure" $m(\sigma)=\frac{1}{\sigma}$.
MaxEnt is the "Rolls-Royce" version, while the $Beta_2$ is more a "sedan" version. The reason for this is that the MaxEnt distribution "assumes the least" subject to the constraints you have put into it (e.g., no constraints means you just get the Jeffreys prior), whereas the $Beta_2$ distribution may contain some "hidden" features which may or may not be desirable in your specific case (e.g., if the prior information is more reliable than the data, then $Beta_2$ is bad).
The other nice property of the MaxEnt distribution is that if there are no unspecified constraints operating in the data generating mechanism, then the MaxEnt distribution is overwhelmingly the most likely distribution that you will see (we're talking odds way over billions and trillions to one). Therefore, if the distribution you see is not the MaxEnt one, then there are likely additional constraints which you have not specified operating on the true process, and the observed values can provide a clue as to what that constraint might be.
A really great answer. Thanks. – John Salvatier Jan 25 '11 at 16:53
@probabilityislogic Nice answer. Do you know where I can find the papers you mention in the third paragraph? The links are not working. – user10525 May 10 '12 at 9:04
@probabilityislogic Perhaps I am missing something but I cannot find a reference to the $Beta_2$ in the BA paper. – user10525 May 11 '12 at 10:16
@Procrastinator Am I right to assume that you want only proper priors? You didn't say it, but if you allow improper priors the already-mentioned Jeffreys priors would work, and I could cite Jeffreys' Theory of Probability, books by Dennis Lindley, or statistics encyclopedias. The way the request is phrased, one could check using Google to find the answer, and if it can't be found there, there is probably nothing in the literature outside the ones you have excluded. – Michael Chernick May 12 '12 at 19:47
The following paper by Daniels compares a variety of shrinkage priors for the variance. These are proper priors but I am not sure how many could be called non-informative if any. But, he also provides a list of noninformative priors (not all proper). Below is the reference.
M. J. Daniels (1999), A prior for the variance in hierarchical models, Canadian J. Stat., vol. 27, no. 3, pp. 567–578.
Priors
1. Flat: $K$ (constant)
2. Location-scale: $\tau^{-2}$
3. Right-invariant Haar: $\tau^{-1}$
4. Jeffreys': $1/(\sigma^2 + \tau^2)$
5. Proper Jeffreys': $\sigma / (2(\sigma^2 + \tau^2)^{3/2})$
6. Uniform shrinkage: $\sigma^2 / (\sigma^2 + \tau^2)$
7. DuMouchel: $\sigma/(2\tau(\sigma+\tau)^2)$
Another more recent paper in a related vein is the following.
A. Gelman (2006), Prior distributions for variance parameters in hierarchical models, Bayesian Analysis, vol. 1, no. 3, pp. 515–533.
(+1) for this finding. – user10525 May 13 '12 at 20:51
Thanks Procrastinator – Michael Chernick May 13 '12 at 21:11
(+1) This is a good find. I've added a stable link to the Daniels paper as well as another reference that seems to complement it. – cardinal May 13 '12 at 23:46
Thanks cardinal. – Michael Chernick May 13 '12 at 23:57
For hierarchical model scale parameters, I have mostly ended up using Andrew Gelman's suggestion of using a folded, noncentral t-distribution. This has worked pretty decently for me.
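For example, the half-Cauchy (a folded t with 1 degree of freedom, a special case of Gelman's suggestion) is easy to sample by folding; a small sketch (the scale of 1.0 is an arbitrary choice):

```python
import math
import random
import statistics

random.seed(0)
scale = 1.0

# half-Cauchy draws: generate a Cauchy by inverse CDF, then fold it at zero
draws = [abs(scale * math.tan(math.pi * (random.random() - 0.5)))
         for _ in range(100_000)]

# the median of a half-Cauchy(0, s) is exactly s
assert 0.95 < statistics.median(draws) < 1.05
```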
http://unapologetic.wordpress.com/2007/06/23/limits-in-functor-categories/?like=1&source=post_flair&_wpnonce=f636fc8316
|
# The Unapologetic Mathematician
## Limits in functor categories
Today I want to give a great example of creation of limits that shows how useful it can be. For motivation, take a set $X$, a monoid $M$, and consider the set $M^X$ of functions from $X$ to $M$. Then $M^X$ inherits a monoid structure from that on $M$. Just define $\left[f_1f_2\right](x)=f_1(x)f_2(x)$ and take the function sending every element to the identity of $M$ as the identity of $M^X$. We’re going to do the exact same thing in categories, but with limits instead of a monoid structure.
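(Just to make the motivating example concrete, here's a throwaway Python sketch, with $M$ the integers under multiplication:)

```python
# M^X with the pointwise product: [f1 f2](x) = f1(x) * f2(x)
def pointwise(f1, f2):
    return lambda x: f1(x) * f2(x)

e = lambda x: 1           # identity of M^X: send every x to the identity of M

f = lambda x: x + 1
g = lambda x: 2 * x + 3

assert pointwise(f, g)(4) == f(4) * g(4) == 55
assert pointwise(f, e)(7) == f(7)                 # e really is an identity
assert pointwise(pointwise(f, g), f)(2) == pointwise(f, pointwise(g, f))(2)
```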
As a preliminary result we need to note that if we have a set of categories $\mathcal{C}_s$ for $s\in S$ each of which has $\mathcal{J}$-limits, then the product category $\prod\limits_{s\in S}\mathcal{C}_s$ has $\mathcal{J}$-limits. Indeed, a functor from $\mathcal{J}$ to the product consists of a list of functors from $\mathcal{J}$ to each category $\mathcal{C}_s$, and each of these has a limiting cone. These clearly assemble into a limiting cone for the overall functor.
The special case we’re interested here is when all $\mathcal{C}$ are the same category. Then the product category $\prod\limits_S\mathcal{C}$ is equivalent to the functor category $\mathcal{C}^S$, where we consider $S$ as a discrete category. If $\mathcal{C}$ has $\mathcal{J}$-limits, then so does $\mathcal{C}^S$ for any set $S$.
Now, any small category $\mathcal{S}$ has a discrete subcategory $|\mathcal{S}|$: its set of objects. There is an inclusion functor $i:\left|\mathcal{S}\right|\rightarrow\mathcal{S}$. This gives rise to a functor $\mathcal{C}^i:\mathcal{C}^\mathcal{S}\rightarrow\mathcal{C}^{|\mathcal{S}|}$. A functor $F:\mathcal{S}\rightarrow\mathcal{C}$ gets sent to the functor $F\circ i:\left|\mathcal{S}\right|\rightarrow\mathcal{C}$. I claim that $\mathcal{C}^i$ creates all limits.
Before I prove this, let’s expand a bit to understand what it means. Given a functor $F:\mathcal{J}\rightarrow\mathcal{C}^\mathcal{S}$ and an object $S\in\mathcal{S}$ we can get a functor $\left[F(\underline{\hphantom{X}})\right](S):\mathcal{J}\rightarrow\mathcal{C}$ that takes an object $J\in\mathcal{J}$ and evaluates $F(J)$ at $S$. This is an $|\mathcal{S}|$-indexed family of functors to $\mathcal{C}$, which is a functor to $\mathcal{C}^{|\mathcal{S}|}$. A limit of this functor consists of a limit for each of the family of functors. The assertion is that if we have such a limit — a $\mathcal{J}$-limit in $\mathcal{C}$ for each object of $\mathcal{S}$ — then these limits over each object assemble into a functor in $\mathcal{C}^\mathcal{S}$, which is the limit of our original $F$.
We have a limiting cone $\lambda_{J,S}:L(S)\rightarrow[F(J)](S)$ for each object $S\in\mathcal{S}$. What we need is an arrow $L(s):L(S_1)\rightarrow L(S_2)$ for each arrow $s:S_1\rightarrow S_2$ in $\mathcal{S}$ and a natural transformation $L\rightarrow F(J)$ for each $J\in\mathcal{J}$. Here’s the diagram we need:
We consider an arrow $j:J_1\rightarrow J_2$ in $\mathcal{J}$. The outer triangle is the limiting cone for the object $S_1$, and the inner triangle is the limiting cone for the object $S_2$. The bottom square commutes because $F$ is functorial in $\mathcal{S}$ and $\mathcal{J}$ separately. The two diagonal arrows towards the bottom are the functors $F(J_1)$ and $F(J_2)$ applied to the arrow $s$. Now for each $J$ we get a composite arrow $\left[F(J)\right](s)\circ\lambda_{J,S_1}:L(S_1)\rightarrow\left[F(J)\right](S_2)$, which is a cone on $\left[F(\underline{\hphantom{X}})\right](S_2)$. Since $\lambda_{J,S_2}:L(S_2)\rightarrow\left[F(J)\right](S_2)$ is a limiting cone on this functor we get a unique arrow $L(s):L(S_1)\rightarrow L(S_2)$.
We now know how $L$ must act on arrows of $\mathcal{S}$, but we need to know that it’s a functor — that it preserves compositions. To do this, try to see the diagram above as a triangular prism viewed down the end. We get one such prism for each arrow $s$, and for composable arrows we can stack the prisms end-to-end to get a prism for the composite. The uniqueness from the universal property now tells us that such a prism is unique, so the composition must be preserved.
Finally, for the natural transformations required to make this a cone, notice that the sides of the prism are exactly the naturality squares for a transformation from $L$ to $F(J_1)$ and $F(J_2)$, so the arrows in the cones give us the components of the natural transformations we need. The proof that this is a limiting cone is straightforward, and a good exercise.
The upshot of all this is that if $\mathcal{C}$ has $\mathcal{J}$-limits, then so does $\mathcal{C}^\mathcal{S}$. Furthermore, we can evaluate such limits “pointwise”: $\left[\varprojlim_\mathcal{J}F\right](S)=\varprojlim_\mathcal{J}(F(S))$.
As another exercise, see what needs to be dualized in the above argument (particularly in the diagram) to replace “limits” with “colimits”.
Posted by John Armstrong | Category theory
## 9 Comments »
1. wow, your blog is all math. this is very much an interest to you isn’t it? are you a professor? and are you from maryland?
Comment by | June 23, 2007 | Reply
2. I grew up a long time in Maryland and did my undergraduate work at College Park. I completed my Ph.D. at Yale, and yes, I’ll be a professor in the math department at Tulane in the fall. Until then I’m hanging around central Maryland where my parents still live.
Comment by | June 23, 2007 | Reply
3. Hi John,
Under suitable assumptions on your category C (e.g., if C has arbitrary *coproducts* (!)), there’s another way of deriving this result, using the fact that the functor C^S –> C^|S| is monadic, hence preserves and reflects (creates) any limits which happen to exist. (I haven’t checked; have you talked about the Eilenberg-Moore category of algebras somewhere on your blog?) Similarly, if C has arbitrary products, then C^S –> C^|S| is comonadic.
Comment by Todd Trimble | June 24, 2007 | Reply
4. No, I haven’t gone into Eilenberg-Moore, nor monads. In fact, I haven’t quite talked about monoidal categories yet, which would be a precursor (in my mind) to that sort of thing. It’s a good point, though.
Comment by | June 24, 2007 | Reply
5. [...] the proof of this is so similar to how we established limits in categories of functors, I’ll just refer you back to that and suggest this as practice with those [...]
Pingback by | June 24, 2007 | Reply
6. Lots of new non-parsing formulas in this one
Comment by Avery | January 8, 2008 | Reply
If I is a small category and it has an initial object, i, how can I prove that any functor F:I->C has a limit and that the limit is F(i)?
Comment by Lucia | February 2, 2009 | Reply
8. Lucia, I am not in the business of doing your homework.
Comment by | February 2, 2009 | Reply
9. Thank you you were very helpful:)
Comment by Lucia | February 2, 2009 | Reply
http://mathematica.stackexchange.com/questions/4600/how-to-invert-an-integral-equation
|
# How to invert an integral equation
There have been numerous times when I've needed to invert an integral equation, i.e. I have something like $$f(x) = g_1(x)\int_{0}^x g_2(x') dx'$$ for arbitrary functions $g_1$ and $g_2$, and I would like to find $x$ for a given $f$. The way I've gotten around this is just making some sort of table of f(x) and x with spacing up to some required precision. Is there a more efficient method of doing it (numerically or analytically)?
When I try to use `NSolve[ f(x) == a, x ]`, Mathematica complains `NIntegrate::nlim: x = x is not a valid limit of integration.`
## 2 Answers
My interpretation of the question is that you want to find $x$ for given $f$, $g_2$ and $g_1$. Then just define $F=f/g_1$ and differentiate with respect to $x$ on both sides:
$\frac{d}{dx}F(x)=g_2(x)$
Now solve this equation for $x$. There's no integration involved.
Just implemented this and it works well. Using a simple finite-difference routine to calculate the derivative of 'F' is computationally trivial compared to the iterative numerical integrations. – zhermes Apr 22 '12 at 4:59
````
f[x_?NumericQ] := f[x] = g1[x] NIntegrate[g2[t], {t, 0, x}];
g1[x_] := Sin[x]; g2[x_] := Cos[x];
k = FindRoot[f[x] == Pi/9, {x, 1}]
Pi/9 - f[x] /. k[[1]]
(*
->
{x -> 0.632072}
-5.55112*10^-17
*)
````
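The same computation in plain Python (my own sketch of the quadrature-plus-root-finding idea: trapezoid rule for the inner integral, bisection for the root; it assumes $f$ is monotone on the bracket):

```python
import math

def f(x, n=2000):
    """f(x) = g1(x) * integral_0^x g2(t) dt, via the trapezoid rule."""
    g1, g2 = math.sin, math.cos
    if x == 0.0:
        return 0.0
    h = x / n
    integral = h * (0.5 * g2(0.0) + sum(g2(i * h) for i in range(1, n)) + 0.5 * g2(x))
    return g1(x) * integral

def invert(target, lo=0.0, hi=1.5, tol=1e-10):
    """Solve f(x) = target by bisection, assuming f is increasing on [lo, hi]."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# agrees with the FindRoot result above
assert abs(invert(math.pi / 9) - 0.632072) < 1e-3
```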
http://stats.stackexchange.com/questions/20936/how-do-i-handle-very-different-weights-in-a-least-squares-fit
# How do I handle very different weights in a least squares fit?
I'm performing a weighted linear least squares fit, where the weights correspond to the number of counts of a specific observation. Due to the nature of the data, it is possible that a small handful of observations get weighted much more than the rest, so that the regression is dominated by, say, two data points only, which obviously leads to bad results.
I've run a few tests and I noticed that by e.g. capping the weights, I can improve the results, though this feels rather hack-ish to me. Are there better ways to avoid a small number of data points to out-weigh the rest of the data, or, if capping the weights is a good approach, is there non-ad hoc way to determine the optimal value of the cap?
EDIT
Here's what some sample data and fits look like with (a) no weights, and (b) weights. The data were generated from fairly realistic simulations, so I know the ground truth (red line).
My problem is that the weight of some data points (#2 at x=10 for example) can be sufficiently large to dominate the fit. However, I also don't want the very-low-count data to weigh in too much, otherwise I get a really crappy fit as well.
Have you tried applying a function to the weight? Reasonable choices would be a log or sigmoid. – rm999 Jan 11 '12 at 17:08
If you don't like the weights, why are you using them? Capping the weights is essentially using a different weighting function, and is also suspect in the sense that you are in effect giving more weight to the infrequent events (typically, outliers) than to the most frequent events. "Improving" the results by modifying the weights also makes me wonder whether you have a predetermined result to arrive at in mind, and are trying to come up with a weighting function that will achieve the result that you want to get. Else, how do you know that changing the weights improves the results? – Dilip Sarwate Jan 11 '12 at 17:42
Is it really the case that a small handful of observations is getting weighted more heavily if the weight is the number of counts of the observation? From your description, it seems to me that each physical observation is really $n_i$ actual observations, in which case each actual observation is getting a weight of one (using your weighting scheme.) – jbowman Jan 11 '12 at 18:19
Is there something different about the frequently occurring observations that you could account for in the model? i.e. a reason why you may expect them to be "different" to the rest? If not, then you cannot reasonably claim that they are "skewing" the results. – probabilityislogic Jan 11 '12 at 20:40
@DilipSarwate: Since I'm testing this on simulated data, I know exactly which result I should expect, and I want to find out how I can get a trustworthy fit to data I know before I start working with the real data coming off the sensor. Also, you're right in that I'm using the weights to penalize the low-weight data, so making them more important is indeed an issue. – Jonas Jan 12 '12 at 17:13
## 1 Answer
You can try applying a function to the weight. Reasonable choices would be a log or sigmoid.
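A sketch of the effect (Python; the data are made up: nine points exactly on y = 2x + 1 plus one bad point carrying a count of 1000, and the transform 1 + log w is one arbitrary choice of damping):

```python
import math

def wls_line(xs, ys, ws):
    """Weighted least squares for y = a + b*x, via the normal equations."""
    S   = sum(ws)
    Sx  = sum(w * x for w, x in zip(ws, xs))
    Sy  = sum(w * y for w, y in zip(ws, ys))
    Sxx = sum(w * x * x for w, x in zip(ws, xs))
    Sxy = sum(w * x * y for w, x, y in zip(ws, xs, ys))
    b = (S * Sxy - Sx * Sy) / (S * Sxx - Sx ** 2)
    return (Sy - b * Sx) / S, b

xs = list(range(10))
ys = [2.0 * x + 1.0 for x in xs]
ws = [1.0] * 10
ys[5], ws[5] = 20.0, 1000.0          # one heavily-counted bad observation

_, b_raw = wls_line(xs, ys, ws)
_, b_log = wls_line(xs, ys, [1.0 + math.log(w) for w in ws])

# damping the weights pulls the slope back toward the true value of 2
assert abs(b_log - 2.0) < abs(b_raw - 2.0)
```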
Thanks for the suggestion. Log won't work in my case, because it gives too much weight for the low-count data, but sigmoid seems to be exactly what I was looking for. Is there any good rule of thumb for choosing the transition point? – Jonas Jan 16 '12 at 13:57
This sure sounds ad hoc to me. – whuber♦ Jan 16 '12 at 15:25
http://math.stackexchange.com/questions/132937/show-that-for-a-finite-metric-space-a-every-subset-is-open/132939
# Show that for a finite metric space A, every subset is open
Let A be a finite metric space. I want to prove that every subset of A is open. I let B be any subset of A. Since A is finite, I know that A\B is also finite. I'm stuck here: how can this help me reach a proof? I beg your help.
## 4 Answers
Every finite metric space is equivalent to a discrete space.
That's not a proof, is it? You just reworded the question and asserted it as a theorem. – nik Apr 17 '12 at 15:26
Well, I can't quite agree; mathematics is all about transformation of problem statements. In this case I transformed the problem into the direct application of a well-known theorem. – akkkk Apr 17 '12 at 15:42
But your theorem is exactly what is being asked, the definition of a discrete space is "every subset is open"... If you asked "Show that theorem X is true", and someone answered "It's obvious by theorem X", how would you feel? – nik Apr 17 '12 at 15:43
Viewing problems in a more general light can sometimes help. In this case, an abstract question (about open and closed sets) is asked, and I clarified it by the more intuitive understanding of discrete spaces. And apparently the question owner was helped. – akkkk Apr 17 '12 at 16:31
Hint: If $(A,d)$ is a finite metric space and $x \in A$ and we let $$\delta=\min_{y \in A \setminus \{x\}}d(x,y)$$ then what is in $B(x,\delta)$?
Massive hint: In a metric space, finite point sets are closed. So suppose that you have a subset $B$ of $A$. Then $A \setminus B$ is a finite point set so.....
A space is discrete iff every singleton set is open. Let M be a finite metric space and $x\in M$. Let $\epsilon$ be the minimum distance from x to the other points of M; then $B_{\epsilon}(x)$ contains x only, so $\{x\}$ is open for every x. So M is discrete.
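The argument is easy to check on a concrete finite metric space (a Python sketch; the five points in the plane are my own example):

```python
import math

points = [(0, 0), (1, 0), (0, 1), (2, 2), (-1, 3)]   # a finite metric space

def ball(center, radius):
    """Open ball in the subspace metric on `points`."""
    return {p for p in points if math.dist(center, p) < radius}

for x in points:
    eps = min(math.dist(x, y) for y in points if y != x)
    assert ball(x, eps) == {x}   # each singleton is open, hence every subset is
```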
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 10, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.960157573223114, "perplexity_flag": "head"}
|
http://mathhelpforum.com/differential-equations/72318-problem-stability-theorem.html
# Thread:
1. ## Problem With Stability Theorem
Okay, I have the following autonomous differential equation:
$dx/dt = f(x) = x^2-5x+6$
I found the steady states of the equation to be x*1=2 and x*2=3
When I sub them into $f'(x^*) = 2x-5$, I get $f'(x^*1) = -1$ and $f'(x^*2)= 1$.
Now I was told that the stability theorem states that if $f'(x^*) < 0$, then x* is stable, and when $f'(x^*) > 0$, x* is unstable; however, when I create my phase line diagram it shows that x*2 is stable.
Could someone shed some light on this?
Thanks,
JS
2. Originally Posted by jschlarb
Okay, I have the following autonomous differential equation:
$dx/dt = f(x) = x^2-5x+6$
I found the steady states of the equation to be x*1=2 and x*2=3
When I sub them into $f'(x^*) = 2x-5$, I get $f'(x^*1) = -1$ and $f'(x^*2)= 1$.
Now I was told that the stability theorem states that if $f'(x^*) < 0$, then x* is stable, and when $f'(x^*) > 0$, x* is unstable; however, when I create my phase line diagram it shows that x*2 is stable.
Could someone shed some light on this?
Thanks,
JS
Here's the direction field: it shows that $x = 2$ is stable and $x = 3$ unstable.
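A quick numerical check (Python, my own sketch) agrees: the sign of $f'(x^*)$ and a forward-Euler simulation both say that $x=2$ attracts while $x=3$ repels.

```python
def f(x):  return x * x - 5 * x + 6
def fp(x): return 2 * x - 5

assert fp(2) < 0 and fp(3) > 0        # stability theorem: 2 stable, 3 unstable

def euler(x, dt=1e-3, t_max=10.0, cap=4.0):
    """Forward-Euler integration of dx/dt = f(x), stopping if |x| reaches cap."""
    t = 0.0
    while t < t_max and abs(x) < cap:
        x += dt * f(x)
        t += dt
    return x

assert abs(euler(2.5) - 2.0) < 0.01   # falls back to the stable point
assert euler(3.1) >= 4.0              # runs away from the unstable point
```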
3. I think I found my problem: when I made a rough sketch of the graph, I drew it curving down instead of curving up :S.
Thanks!
4. I also have another part of the question I need help with:
It asks to use separation of variables to determine the solution explicitly, however when I integrate it, I get an equation that doesn't make sense:
$1=Ke^t$
I separated the variables to get an equation which I needed to integrate using partial fractions. I determined that A=1 and B=-1, which gave me the final equation
$\ln|x-2|-\ln|x-3|+c = t+c$
Taking the exponential of both sides and simplifying got me that. Could there have been a mistake in my integration? (I apologize, I'm not too familiar with the math functions on this site, so I wasn't able to show you what I did.)
5. Originally Posted by jschlarb
I also have another part of the question I need help with:
It asks to use separation of variables to determine the solution explicitly, however when I integrate it, I get an equation that doesn't make sense:
$1=Ke^t$
I separated the variables to get an equation which I needed to integrate using partial fractions. I determined that A=1 and B=-1, which gave me the final equation
$\ln|x-2|-\ln|x-3|+c = t+c$
Taking the exponential of both sides and simplifying got me that. Could there have been a mistake in my integration?
Almost - your partial fractions have the signs flipped: $\frac{1}{(x-2)(x-3)} = \frac{1}{x-3} - \frac{1}{x-2}$, so A=-1 and B=1. Put the constants together and call them $\ln c$ (it cleans things up a bit.) Then
$\ln \left| \frac{x-3}{x-2}\right| = \ln c + t$
and solving for x gives
$x = \frac{2 c e^t - 3}{ c e^t - 1}$
You can see if $c = 0$ you have your second critical point whereas if you let $c = \frac{1}{\bar{c}}$ then
$x = \frac{2e^t - 3\bar{c}}{ e^t - \bar{c}}$
and letting $\bar{c} = 0$ gives your first. (Note that as $t \to \infty$ such a solution tends to $x = 2$, matching the stability you found.)
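A numerical sanity check (Python, my own sketch): the branch $x(t) = (2ce^t - 3)/(ce^t - 1)$ does satisfy $dx/dt = x^2 - 5x + 6$, here with $c = 2$ and a central-difference derivative.

```python
import math

def x_sol(t, c=2.0):
    """Candidate closed-form solution x(t) = (2 c e^t - 3) / (c e^t - 1)."""
    u = c * math.exp(t)
    return (2.0 * u - 3.0) / (u - 1.0)

def rhs(x):
    return x * x - 5.0 * x + 6.0

h = 1e-6
for t in (0.5, 1.0, 2.0):
    dxdt = (x_sol(t + h) - x_sol(t - h)) / (2.0 * h)   # central difference
    assert abs(dxdt - rhs(x_sol(t))) < 1e-5
```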
http://mathhelpforum.com/pre-calculus/47722-odd-polynomial-functions.html
1. ## Odd Polynomial Functions
Prove algebraically and explain graphically why a polynomial that is an odd function is no longer an odd function when a non-zero constant is added. Provide examples.
Thanks in advance :P
2. Originally Posted by ahling
Prove algebraically and explain graphically why a polynomial that is an odd function is no longer an odd function when a non-zero constant is added. Provide examples.
Thanks in advance :P
If a polynomial P is an odd function, then P(-x)=-P(x).
So if you add a constant, say m, then let Q(x)=P(x)+m.
The thing is now to prove that Q is not odd.
Q(-x)=P(-x)+m=-P(x)+m, while -Q(x)=-(P(x)+m)=-P(x)-m. Since $m\neq 0$, we have $-P(x)+m \neq -P(x)-m$ (the two sides differ by $2m$), so $Q(-x)\neq -Q(x)$.
Do you quite understand ?
Graphically, adding a constant means to level up (or down) the curve. Will it still be symmetric wrt the center of the graph ?
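A quick numeric illustration of the argument, using the hypothetical choices $P(x)=x^3$ and $m=5$:

```python
# Concrete check: P(x) = x^3 is an odd polynomial; Q(x) = P(x) + m
# with a non-zero constant m fails the oddness test at every x.
def P(x):
    return x**3

m = 5

def Q(x):
    return P(x) + m

for x in [1, 2, 3.5]:
    assert P(-x) == -P(x)        # P is odd
    assert Q(-x) != -Q(x)        # Q is not odd
    assert Q(-x) + Q(x) == 2*m   # the failure is by exactly 2m
```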
http://conservapedia.com/Rational_number
# Rational number
### From Conservapedia
*This article/section deals with mathematical concepts appropriate for a student in mid to late high school.*
The rational numbers are the numbers representable as ratios of integers, that is, fractions. Mathematicians denote the set of rational numbers with an ornate capital letter: $\mathbb{Q}$. They are the 3rd item in this hierarchy of types of numbers:
• The "natural numbers", 1, 2, 3, ... (There is controversy about whether zero should be included. It doesn't matter.)
• The "integers"—positive, negative, and zero
• The "rational numbers", which can be represented as fractions, like 355/113
• The "real numbers", including irrational numbers
• The "complex numbers", which give solutions to polynomial equations
Rational numbers are actually defined as equivalence classes of ratios of integers, so that 2/3 and 4/6 are the same number.
If a rational number were to be represented as a decimal to infinite precision, that decimal would either terminate at some point (e.g. 1.25) or would eventually get into an endless repeating pattern (e.g. 1.250909090909..., which is 1376/1100).
Zero in the denominator of a rational number is not allowed. All rational numbers are finite.
## Countability and density
An important theoretical property of the rationals is that they are countable. That is, the entire set of rational numbers can be put into a one-to-one correspondence with the integers. This is surprising at first glance, since the integers are "sparse" while the rationals seem to fill out the real line. To see this correspondence, we need to list all rational numbers in some order. List the fractions in lowest terms whose numerator and denominator sum to one: 0/1 is the only such. Follow those with the fractions in lowest terms whose numerator and denominator sum to two: 1/1 is the only such, since 0/2 is not reduced. Follow those with the fractions in lowest terms whose numerator and denominator sum to three: they are 1/2 and 2/1. Continue without end. (The negative rationals can then be handled by interleaving each $-p/q$ immediately after $p/q$.)
A more subtle theoretical property is that the rationals comprise a countable dense subset of the reals. This is used in some advanced theorems of topology.
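The enumeration described above is easy to carry out mechanically. A short sketch in Python, covering the nonnegative rationals as in the text:

```python
from math import gcd

def rationals():
    """Yield the nonnegative rationals in lowest terms, ordered by
    numerator + denominator, exactly as described in the text."""
    s = 1
    while True:
        for num in range(s):
            den = s - num
            if gcd(num, den) == 1:   # skip non-reduced fractions like 0/2
                yield (num, den)
        s += 1

gen = rationals()
first = [next(gen) for _ in range(8)]
assert first == [(0, 1), (1, 1), (1, 2), (2, 1),
                 (1, 3), (3, 1), (1, 4), (2, 3)]
```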
## Algebraic Properties
The algebraic implications of the rational numbers are both interesting and far-reaching. The rational numbers are a specific example of a field of fractions, namely the field of fractions of the integers. The integers constitute a ring: there are two defined operations, addition and multiplication, which follow a certain set of rules (or, as mathematicians would say, have a certain structure). Specifically, the integers are an integral domain, a commutative ring with no zero divisors (there are no nonzero integers $a, b$ such that $ab = 0$). The integers, however, are not a field, because not every integer has an integer multiplicative inverse (in fact, only $1$ and $-1$ do).
As mentioned above, we can define an equivalence relation on $\mathbb{Z}\times(\mathbb{Z}\backslash\{0\})$ (the set of pairs of integers in which the second entry is nonzero, thus avoiding division by zero) where two pairs $(a,b)$ and $(c,d)$ (which can be written in the more familiar form $\frac{a}{b}$ and $\frac{c}{d}$) are equivalent if $ad - bc = 0$. For example, the two fractions $\frac{1}{4}$ and $\frac{2}{8}$ are equivalent because $1 \cdot 8 - 4 \cdot 2 = 0$. When two pairs (or fractions) are equivalent, we call them members of the same equivalence class. The field of fractions is the set of these equivalence classes; it is a simple exercise to show that they constitute a field.

On the equivalence classes we define addition ($\frac{a}{b} + \frac{c}{d} = \frac{ad+cb}{bd}$) and multiplication ($\frac{a}{b}\cdot\frac{c}{d} = \frac{ac}{bd}$). These operations are intentionally designed to mimic the addition and multiplication of fractions taught in grade school; keep in mind, however, that the underlying ring does not have to be the integers (the same construction produces a field from any integral domain). Since the integers embed naturally in their field of fractions (via the map sending $a\in \mathbb{Z}$ to $(a,1) \equiv \frac{a}{1}$), we only need to exhibit a multiplicative inverse for each nonzero element; all the other field axioms follow from the ring and integral-domain structure of the integers. Since every fraction $\frac{a}{b}$ with $a \neq 0$ has the multiplicative inverse $\frac{b}{a}$, we satisfy all the conditions for a field and complete the proof. If the field of fractions is taken over a bona fide field, we just get back the same field (technically a new field, but one isomorphic to the original); so the field of fractions of the rationals is again the rationals.
Therefore, the rational numbers (following the definition above, the field of fractions of the integers) can be thought of as a "completion" of the integers, in the sense that we construct a field from the ring of integers. It is also important to note that the rationals form the smallest field in which the integers can be embedded; in fact, for any integral domain, its field of fractions is the smallest field into which it can be embedded. Since both the integers and the rationals are infinite in size, "smallest" here means that there is no proper subfield $F \subset \mathbb{Q}$ with $\mathbb{Z} \subset F$.
The concept of a field of fractions may seem heavy-handed for something as intuitive as the rational numbers. However, when we deal with very abstract integral domains that look little like the integers we have known since kindergarten, their fields of fractions can look very bizarre, and it is useful to be able to rely on our intuition about rational numbers to understand them.
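A minimal sketch of the construction in Python, using pairs of integers as representatives; Python's built-in `Fraction` type behaves like this construction with automatic reduction:

```python
from fractions import Fraction

# (a, b) represents a/b; equivalence is  a*d - b*c == 0
def equivalent(ab, cd):
    (a, b), (c, d) = ab, cd
    return a*d - b*c == 0

assert equivalent((1, 4), (2, 8))        # 1*8 - 4*2 == 0
assert not equivalent((1, 4), (1, 3))

# addition of representatives: a/b + c/d = (ad + cb)/(bd)
def add(ab, cd):
    (a, b), (c, d) = ab, cd
    return (a*d + c*b, b*d)

# 1/4 + 2/8 = (1*8 + 2*4)/32 = 16/32, which is equivalent to 1/2
assert equivalent(add((1, 4), (2, 8)), (1, 2))

# Fraction performs the same arithmetic on reduced representatives
assert Fraction(1, 4) + Fraction(2, 8) == Fraction(1, 2)
```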
http://mathhelpforum.com/math-software/209043-please-help-piece-matlab-finite-differences-coursework.html
1. ## Please help with a piece of MatLab finite differences coursework :)
Hello everyone. New to this so apologies for any mistakes.
I have attached a piece of coursework I have for Uni and wondered if anyone had any ideas of where to even start. Any help would be massively appreciated.
Cheers.
Oli.
2. ## Re: Please help with a piece of MatLab finite differences coursework :)
Originally Posted by Ojackson
Hello everyone. New to this so apologies for any mistakes.
I have attached a piece of coursework I have for Uni and wondered if anyone had any ideas of where to even start. Any help would be massively appreciated.
Cheers.
Oli.
Start with the stability criterion for whatever numerical method you are going to use.
For the FTCS scheme the stability criterion is that:
$\frac{2D\Delta t}{(\Delta x)^2} \le 1$
which will allow you to calculate $\Delta t_{\max}$.
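As a sketch of how this might look in code (the actual $D$, domain and grid are in the attached coursework sheet, so the numbers below are placeholders), here is the stability bound and one FTCS update for $u_t = D u_{xx}$, written in Python rather than MATLAB:

```python
import numpy as np

# Placeholder values -- the real D, domain and grid come from the
# coursework sheet attached to the thread.
D = 1.0              # diffusion coefficient
L = 1.0              # domain length
nx = 51              # number of grid points
dx = L / (nx - 1)

# FTCS stability: 2*D*dt / dx**2 <= 1  =>  dt_max = dx**2 / (2*D)
dt_max = dx**2 / (2 * D)
dt = 0.9 * dt_max    # stay safely inside the limit

# one FTCS update for u_t = D u_xx with fixed (Dirichlet) end values
u = np.zeros(nx)
u[nx // 2] = 1.0
r = D * dt / dx**2
u[1:-1] = u[1:-1] + r * (u[2:] - 2 * u[1:-1] + u[:-2])
```

In MATLAB the update line is the same idea: `u(2:end-1) = u(2:end-1) + r*(u(3:end) - 2*u(2:end-1) + u(1:end-2));`.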
http://mathoverflow.net/questions/59945/why-are-bad-orbits-excluded-from-symplectic-field-theory-contact-homology-etc/60153
Why are “bad orbits” excluded from symplectic field theory, contact homology etc.?
In the Introduction to Symplectic Field Theory paper, Eliashberg, Givental and Hofer introduce a differential graded algebra generated by periodic orbits of the Reeb vector field on a contact manifold $M$. Instead of taking all periodic orbits, they throw out the "bad orbits". Given a simple periodic orbit $\gamma : S^1 \to M$, denote by $\gamma^k$ the periodic orbit that is a $k$-fold cover of $\gamma$. They define the even covers of an embedded orbit $\gamma$ to be bad if the Conley-Zehnder indices of $\gamma^{2k}$ have different parity from the Conley-Zehnder indices of $\gamma^{2k-1}$. (Note that this is well-defined, since by properties of the Conley-Zehnder index, the even covers all have the same parity, and the odd ones all have the same. Furthermore, the parity of the Conley-Zehnder index is independent of the choice of trivialization of the contact structure along the orbit.)
The question then has three parts:
1. Why must they throw out the "bad orbits"? Is there a possible theory that includes them?
2. Why can they throw out the "bad orbits" and still get a differential on the chain complex? Doesn't the compactness theorem for pseudoholomorphic curves allow for breaking at a "bad orbit"? By analogy, one cannot arbitrarily exclude certain critical points from a Morse complex and still obtain a chain complex.
3. Naively, one would hope that if $a$ is a good orbit, $b$ is a bad orbit, and the moduli space $\mathcal M(a,b)$ is 1-dimensional, then the signed count $\#\mathcal{M}(a,b)/\mathbb{R}$ is zero. Is this true? Is there a way in which it can be arranged to be true?
Some context, or an explanation of your notation, can do wonders in attracting useful answers! – Mariano Suárez-Alvarez Mar 29 2011 at 5:36
(As can proper capitalization and grammar, of course) – Mariano Suárez-Alvarez Mar 29 2011 at 5:37
If you (or someone else willing to put in the time) can turn this into a meaningful question, please use the "edit" link to fix it, and then flag for moderator attention. – S. Carnahan♦ Mar 29 2011 at 9:33
@Scott, I can't find any "edit" link, on this question or on any other one for that matter. (I can find it on math.SE, where it is between "flag" and "link", whereas here I have nothing between "flag" and "cite", so I suspect it's been disabled on MO?) I would be happy to edit this question (and provide an answer). The author is presumably asking about why the so-called "bad orbits" are thrown out of the symplectic field theory chain complex. In dimension 3, these are hyperbolic orbits whose stable/unstable manifolds are not orientable. – Sam Lisi Mar 29 2011 at 13:19
2 Answers
That's because symplectic field theory is secretly an $S^1$-equivariant theory. As a finite-dimensional model, suppose that you have a manifold $M$ with an $S^1$-action and an invariant function $h: M \rightarrow \mathbb{R}$. Take an orbit of the $S^1$-action (not a fixed point) which consists of critical points of $h$, and such that the Hessian is transversally nondegenerate. If the negative eigenspaces of the Hessian form a nontrivial vector bundle over our orbit, the local contribution to the Morse homology is a twisted homology of $S^1$, which vanishes with rational coefficients. This vanishing also holds for equivariant homology. Note that this phenomenon can never happen for free orbits, since the $S^1$-action itself provides a trivialization of the bundle of negative eigenspaces, but it does happen for orbits with finite even stabilizer.
The analogy is of course that $h$ corresponds to the action functional on free loop space. Free orbits of critical points correspond to simple periodic (Reeb) orbits, and ones with finite stabilizers to multiple covers of simple orbits.
I think Eigenbunny's explanation cuts to the heart of the matter. Here are some more specific details in the case of symplectic field theory.
First of all, to get some intuition for what bad orbits look like, in the case of a 3 dimensional contact manifold, the bad orbits are precisely the hyperbolic orbits that have non-orientable stable and unstable manifolds. For simplicity, I will talk about cylinders, but the general case is very similar.
(1) The bad orbits must be thrown out because of an orientation problem. In SFT, the simple orbits are decorated with a choice of marked points on each one, called markers. Also, the domain curves are decorated with asymptotic markers. These are a choice of marked point in the circle at infinity for each of the cylindrical ends of the domain. Now consider a holomorphic half-cylinder asymptotic to the $m$-fold cover of a simple orbit $\gamma$. By the convergence result, the image of the asymptotic marker makes sense, and is a point on $\gamma$. We want the image of the asymptotic marker to match the marker on the orbit. Given an unmarked curve, we have $m$ possible locations for the asymptotic marker on the domain.
To understand how to glue, we want to think of how a level 2 building can arise as the limit of cylinders. Suppose we break at the $m$-fold cover of $\gamma$. The two ends asymptotic to this orbit don't obtain an asymptotic marker -- only a marker relative to each other. We can think of this as having $m$ possible choices of asymptotic marker on each end, and then identifying the ones obtained by a simultaneous rotation, giving us $m$ possible choices in the end. Now, for us to count these curves, we need the rotation of the markers to be consistent with the orientations. In general it is, but for bad orbits, it is orientation reversing.
This business with markers is one of the places where we see the equivariant nature of SFT: we break the $S^1$ symmetry with these markers, and then quotient out by all possible choices to remove the dependence.
For more details, Bourgeois and Mohnke's Coherent orientations in symplectic field theory does a good job explaining this in detail. Also, see the Introduction to SFT paper by Eliashberg, Givental & Hofer (Section 1.8.4 and Remarks 1.9.2, 1.9.6)
(2) This is also because of the $\mathbb Z_m$ action. Consider the simplest example, where we look at the boundary of a 1-dimensional moduli space of cylinders connecting orbits a and a'. (By this, I mean 1 dimensional after the quotient by the translation.) Suppose one boundary component consists of a level 2 building with two cylinders, a to b and b to a', and b is a bad orbit. Suppose b has multiplicity $m=2k$. Then, the boundary building has $m$ different possible decorations by asymptotic markers. If you chase through the orientations, the end result is that these terms all cancel out, thus saving us from having problems with $d^2$.
(3) The answer to the most optimistic version of this question is no: there is no reason for the moduli space to be empty in general. In order to get a signed count of zero, one needs to make sense of orientations. The way orientations are done in SFT, these moduli spaces asymptotic to bad orbits don't even get one, so the signed count doesn't make sense. Arguably, this isn't the only way we might define orientations (to get the same theory). A related situation in which something like this is true is the isomorphism between linearized contact homology and $S^1$-equivariant symplectic homology (actually the $SH^+$ part), due to Bourgeois and Oancea. Their non-equivariant contact homology involves decorating each Reeb orbit with a Morse function and imposing markers. The "bad" orbits end up dying (over $\mathbb Q$) because the differential of the maximum, instead of being $0$, is then twice the minimum.
http://math.stackexchange.com/questions/272779/let-y0-1-rightarrow-mathbb-r-be-a-twice-continuously-differentiable-funct
Let $y:[0,1] \rightarrow \mathbb R$ be a twice continuously differentiable function such that $y''(x)-y(x)<0$
I came across the following problem that says:
Let $y:[0,1] \rightarrow \mathbb R$ be a twice continuously differentiable function such that $y''(x)-y(x)<0$ for all $x \in (0,1)$ and $y(0)=y(1)=0.$ Then which of the following statement(s) is/are true?
(a) $y$ has at least two zeros in $(0,1)$.
(b) $y$ has at least one zero in $(0,1)$.
(c) $y(x)>0$ for all $x \in (0,1)$.
(d) $y(x)<0$ for all $x \in (0,1)$.
Can someone point me in the right direction? Thanks in advance for your time.
This example $y=1-(2x-1)^2$ eliminates all but one. – P.. Jan 8 at 12:55
@Pambos I have one question. Solving $y(x)=0,$ we see that $x=0,1$, which do not lie in $(0,1)$. So how can option $(a)$ be correct? – learner Jan 8 at 13:06
Hint: (a) is not the correct answer. – P.. Jan 8 at 13:07
@Pambos I have got it. It is not $(a),(b)$ or $(d)$. So it must be $(c)$. – learner Jan 8 at 13:11
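The example from the comments, $y = 1-(2x-1)^2$, can be checked numerically: it satisfies the hypotheses and is positive on $(0,1)$, so it rules out (a), (b) and (d) at once:

```python
# y = 1 - (2x - 1)^2 has y(0) = y(1) = 0 and y'' = -8, so on (0,1):
#   y'' - y = -9 + (2x - 1)^2 < 0   since (2x - 1)^2 < 1,
# yet y > 0 everywhere on (0,1) -- eliminating (a), (b) and (d).
def y(x):
    return 1 - (2*x - 1)**2

def ypp(x):
    return -8.0

assert y(0) == 0 and y(1) == 0
for i in range(1, 100):
    x = i / 100
    assert ypp(x) - y(x) < 0   # hypothesis y'' - y < 0 holds
    assert y(x) > 0            # no zero in (0,1): (a), (b), (d) fail
```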
http://en.wikipedia.org/wiki/Grandi's_series
# Grandi's series
In mathematics, the infinite series 1 − 1 + 1 − 1 + …, also written
$\sum_{n=0}^{\infty} (-1)^n$
is sometimes called Grandi's series, after Italian mathematician, philosopher, and priest Guido Grandi, who gave a memorable treatment of the series in 1703. It is a divergent series, meaning that it lacks a sum in the usual sense. On the other hand, its Cesàro sum is 1/2.
## Heuristics
One obvious method to attack the series
1 − 1 + 1 − 1 + 1 − 1 + 1 − 1 + …
is to treat it like a telescoping series and perform the subtractions in place:
(1 − 1) + (1 − 1) + (1 − 1) + … = 0 + 0 + 0 + … = 0.
On the other hand, a similar bracketing procedure leads to the apparently contradictory result
1 + (−1 + 1) + (−1 + 1) + (−1 + 1) + … = 1 + 0 + 0 + 0 + … = 1.
Thus, by applying parentheses to Grandi's series in different ways, one can obtain either 0 or 1 as a "value". (Variations of this idea, called the Eilenberg–Mazur swindle, are sometimes used in knot theory and algebra.)
Treating Grandi's series as a divergent geometric series we may use the same algebraic methods that evaluate convergent geometric series to obtain a third value:
S = 1 − 1 + 1 − 1 + …, so
1 − S = 1 − (1 − 1 + 1 − 1 + …) = 1 − 1 + 1 − 1 + … = S,
resulting in S = 1/2. The same conclusion results from calculating −S, subtracting the result from S, and solving 2S = 1.[1]
The above manipulations do not consider what the sum of a series actually means. Still, to the extent that it is important to be able to bracket series at will, and that it is more important to be able to perform arithmetic with them, one can arrive at two conclusions:
• The series 1 − 1 + 1 − 1 + … has no sum.[1][2]
• ...but its sum should be 1/2.[2]
In fact, both of these statements can be made precise and formally proven, but only using well-defined mathematical concepts that arose in the 19th century. After the late 17th-century introduction of calculus in Europe, but before the advent of modern rigor, the tension between these answers fueled what has been characterized as an "endless" and "violent" dispute between mathematicians.[3][4]
## Early ideas
Main article: History of Grandi's series
## Divergence
In modern mathematics, the sum of an infinite series is defined to be the limit of the sequence of its partial sums, if it exists. The sequence of partial sums of Grandi's series is 1, 0, 1, 0, …, which clearly does not approach any number (although it does have two accumulation points at 0 and 1). Therefore, Grandi's series is divergent.
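A quick numerical check makes both statements concrete: the partial sums never settle, while their running averages (the Cesàro means) tend to 1/2:

```python
# Partial sums of 1 - 1 + 1 - 1 + ... oscillate between 1 and 0,
# so the series diverges; the Cesaro means of those partial sums
# nevertheless converge to 1/2.
N = 10000
partial = 0
partial_sums = []
for n in range(N):
    partial += (-1) ** n
    partial_sums.append(partial)

assert set(partial_sums) == {0, 1}         # never settles: divergent

cesaro = sum(partial_sums) / N             # average of the partial sums
assert abs(cesaro - 0.5) < 1e-3            # Cesaro sum is 1/2
```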
It can be shown that it is not valid to perform many seemingly innocuous operations on a series, such as reordering individual terms, unless the series is absolutely convergent. Otherwise these operations can alter the result of summation. It's easy to see how terms of Grandi's series can be rearranged to arrive at any integer number, not only 0 or 1.
• E. W. Hobson, The theory of functions of a real variable and the theory of Fourier's series (Cambridge University Press, 1907), section 331. The University of Michigan Historical Mathematics Collection [1]
• E. T. Whittaker and G. N. Watson, A course of modern analysis, 4th edition, reprinted (Cambridge University Press, 1962), section 2.1.
## Education
Main article: Grandi's series in education
## Summability
Main article: Summation of Grandi's series
## Related problems
Main article: Occurrences of Grandi's series
## Notes
1. Devlin p. 77
2. Davis p. 152
3. Kline 1983, p. 307
4. Knopp p. 457
## References
• Davis, Harry F. (May 1989). Fourier Series and Orthogonal Functions. Dover. ISBN 0-486-65973-9.
• Devlin, Keith (1994). Mathematics, the science of patterns: the search for order in life, mind, and the universe. Scientific American Library. ISBN 0-7167-6022-3.
• Kline, Morris (November 1983). "Euler and Infinite Series". Mathematics Magazine 56 (5): 307–314. doi:10.2307/2690371. JSTOR 2690371.
• Knopp, Konrad (1990) [1922]. Theory and Application of Infinite Series. Dover. ISBN 0-486-66165-2.
http://mathhelpforum.com/differential-equations/151794-solve-differential-equation-using-power-series.html
1. ## Solve differential equation using "Power Series"
How can I solve the following differential equation using power series?
$(1 - x^2)y'' - 6xy' - 4y = 0$
I know there are other ways to solve this, but I specifically need to know how to do these using power series.
2. This is just a guess, but maybe rewriting the DEQ as
$y'' - \frac{6x}{1-x^2} y' - \frac{4}{1-x^2} y = 0$
Rewrite the fractions as a power series, and then solve?
3. Well, maybe. Most likely what is intended is to assume a solution of the form
$\displaystyle{y=\sum_{j=0}^{\infty}a_{j}x^{j}},$ plug that into the DE, and turn the crank. Am I right, EmilyL?
4. That sound more like it.
5. Originally Posted by Ackbeet
Well, maybe. Most likely what is intended is to assume a solution of the form
$\displaystyle{y=\sum_{j=0}^{\infty}a_{j}x^{j}},$ plug that into the DE, and turn the crank. Am I right, EmilyL?
Yes, exactly! Instead of "a" and "j", she used "c" and "n" respectively, but I don't think those variables really matter. Can you show me how to go about doing this?
6. Sure. What happens when you plug this expression into the DE?
7. What are you using to type out the equations so neatly? Do you need to know the code, or is there something that does it for you?
For plugging into the DE, I'm not sure what exactly to plug in. For the y, I think its what you showed in the previous post. For the y', it is ncx^(n-1) and for y'' it is n(n-1)cx^(n-1), right? Other than that, is there any more plugging in / substituting to do?
8. Right. In order to type up equations in LaTeX, which is what all those nice-looking equations are in, you have to double-click the Reply to Thread option, or click the Go Advanced button after single-clicking Reply to Thread. That brings you to a new screen, where you can click the button that looks like TeX, close to the far right. That will produce a math environment starting and ending brackets. What you type in-between those brackets gets interpreted as LaTeX code. The best way to learn LaTeX is by doing and observing. On this forum, you can double-click formulas to see how somebody typed them. Another little trick: after double-clicking to see LaTeX source code, you can copy and paste the code for yourself. Saves oodles of time!
So, you've got your original DE thus:
$(1-x^{2})\,y''(x)-6\,x\,y'(x)-4\,y(x)=0,$
as well as your series ansatz (ansatz is a terrific German word often used in the context of DE's. It means "your original guess" or "working hypothesis" for the purposes of computation) as follows:
$\displaystyle{y(x)=\sum_{n=0}^{\infty}c_{n}x^{n}},$ to use your teacher's notation.
Plugging the ansatz into the DE is close to what you said, but not quite. You have
$\displaystyle{y'(x)=\sum_{n=0}^{\infty}nc_{n}x^{n-1}},$ as you said, but
$\displaystyle{y''(x)=\sum_{n=0}^{\infty}n(n-1)c_{n}x^{n-2}}.$
Now these series don't really start at n=0 for the derivatives, do they? The $n$ and $n-1$ multiplying stuff changes it to the following:
$\displaystyle{y'(x)=\sum_{n=1}^{\infty}nc_{n}x^{n-1}},$ and
$\displaystyle{y''(x)=\sum_{n=2}^{\infty}n(n-1)c_{n}x^{n-2}}.$
So plugging that into the DE produces the following:
$\displaystyle{(1-x^{2})\,\sum_{n=2}^{\infty}n(n-1)c_{n}x^{n-2}-6\,x\,\sum_{n=1}^{\infty}nc_{n}x^{n-1}-4\,\sum_{n=0}^{\infty}c_{n}x^{n}=0}.$
What happens next?
9. I'm not sure. The example in my book indicates that I would substitute k=n-2, k=n-1, and k=n for each series respectively. However, I'm not sure why that is done and what it achieves.
10. The reason that's the next step is that what you want to do is add all the series together, gather like terms, etc. You can't do that if the series don't all start at the same beginning value. Each series, currently, isn't on the same page with everyone else. So, what do you get when you make those substitutions?
11. $\displaystyle{(1-x^{2})\,\sum_{k=0}^{\infty}(k+2)(K+1)c_{k+2}x^{k}-6\,x\,\sum_{k=0}^{\infty}nc_{k+1}x^{k}-4\,\sum_{k=0}^{\infty}c_{k}x^{k}=0}.$
right?
I'm not sure how you are allowed to substitute different values for n into the same equation. How can one variable have different values within the same equation?
12. I think you're confused about the shifting. Take the first series: if k = n - 2, then n = k + 2. Just plug that in. What do you get?
13. you're right, I edited my previous post, now getting k=0 for the starting point of each sum
14. Close, very close. I'd clean it up just a bit more:
$\displaystyle{(1-x^{2})\,\sum_{k=0}^{\infty}(k+2)(k+1)c_{k+2}x^{k}-6\,x\,\sum_{k=0}^{\infty}(k+1)c_{k+1}x^{k}-4\,\sum_{k=0}^{\infty}c_{k}x^{k}=0}.$
Now what?
15. Now you can put them all together since the boundaries are the same, right?
$\displaystyle{(1-x^{2})(-6X)(-4)\,\sum_{k=0}^{\infty}(k+2)(k+1)c_{k+2}x^{k}\,+(k +1)c_{k+1}x^{k}\,+c_{k}x^{k}=0}.$
Not sure if I dealt with the coefficients right.
Also, can I get rid of the "sum" sign now?
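For reference, once the like powers of $x$ are gathered, the recurrence $(k+2)(k+1)c_{k+2}=(k(k-1)+6k+4)c_k$ falls out, and it can be sanity-checked numerically. This is only a sketch: the initial coefficients $c_0=c_1=1$ are hypothetical stand-ins for whatever the initial conditions would actually give.

```python
from fractions import Fraction

# coefficients for (1 - x^2) y'' - 6x y' - 4y = 0 built from the recurrence
# (k+2)(k+1) c_{k+2} = (k(k-1) + 6k + 4) c_k, with hypothetical c_0 = c_1 = 1
N = 12
c = [Fraction(1), Fraction(1)]
for k in range(N - 2):
    c.append(Fraction(k * (k - 1) + 6 * k + 4, (k + 2) * (k + 1)) * c[k])

def resid_coeff(k):
    # coefficient of x^k in (1 - x^2) y'' - 6x y' - 4y for the truncated series
    r = Fraction(0)
    if k + 2 < len(c):
        r += (k + 2) * (k + 1) * c[k + 2]      # from the shifted y'' sum
    r -= (k * (k - 1) + 6 * k + 4) * c[k]      # from -x^2 y'', -6x y', -4y
    return r

# every power of x below the truncation order cancels
assert all(resid_coeff(k) == 0 for k in range(N - 2))
print("ODE satisfied through x^%d" % (N - 3))
```

Exact rational arithmetic avoids any floating-point noise in the check.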
|
http://math.stackexchange.com/questions/187497/proof-that-determinant-rank-equals-row-column-rank/187512
|
# Proof that determinant rank equals row/column rank
Let $A$ be a $m \times n$ matrix with entries from some field $F$. Define the determinant rank of $A$ to be the largest possible size of a nonzero minor, ie. the largest invertible square submatrix of $A$. It is true that the determinant rank is equal to the rank of a matrix, which we define to be the dimension of the row/column space.
It's not difficult to see that $\text{rank} \geq \text{determinant rank}$. If some submatrix of $A$ is invertible, then its columns/rows are linearly independent, which implies that the corresponding rows/columns of $A$ are also linearly independent.
Is there a nice proof for the converse?
-
## 1 Answer
If the matrix $A$ has rank $k$, then it has $k$ linearly independent rows. Those form a $k\times n$ submatrix, which of course also has rank $k$. But if it has rank $k$, then it has $k$ linearly independent columns. Those form a $k\times k$ submatrix of $A$, which of course also has rank $k$. But a $k\times k$ submatrix with rank $k$ is a full-rank square matrix, therefore invertible, thus it has a non-zero determinant. And therefore the determinant rank has to be at least $k$.
-
Nice! Now that I see this proof, I can't believe I missed it.. – spin Aug 27 '12 at 15:33
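The equality of the two ranks can also be confirmed by brute force on small examples. A sketch in exact rational arithmetic (the test matrix is made up; `det` is a naive Laplace expansion, which is fine at this size):

```python
from fractions import Fraction
from itertools import combinations

def det(M):
    # Laplace expansion along the first row (fine for tiny matrices)
    if len(M) == 1:
        return M[0][0]
    total = Fraction(0)
    for j in range(len(M)):
        minor = [row[:j] + row[j + 1:] for row in M[1:]]
        total += (-1) ** j * M[0][j] * det(minor)
    return total

def determinant_rank(A):
    # largest k such that some k-by-k submatrix has nonzero determinant
    m, n = len(A), len(A[0])
    for k in range(min(m, n), 0, -1):
        for rows in combinations(range(m), k):
            for cols in combinations(range(n), k):
                sub = [[A[i][j] for j in cols] for i in rows]
                if det(sub) != 0:
                    return k
    return 0

def row_rank(A):
    # Gaussian elimination over the rationals
    M = [list(row) for row in A]
    rank = 0
    for col in range(len(M[0])):
        pivot = next((r for r in range(rank, len(M)) if M[r][col] != 0), None)
        if pivot is None:
            continue
        M[rank], M[pivot] = M[pivot], M[rank]
        for r in range(len(M)):
            if r != rank and M[r][col] != 0:
                factor = M[r][col] / M[rank][col]
                M[r] = [a - factor * b for a, b in zip(M[r], M[rank])]
        rank += 1
    return rank

A = [[Fraction(v) for v in row] for row in [[1, 2, 3], [2, 4, 6], [0, 1, 1]]]
assert determinant_rank(A) == row_rank(A) == 2
print("both ranks equal", row_rank(A))
```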
|
http://unapologetic.wordpress.com/2008/05/07/associativity-in-series-ii/?like=1&source=post_flair&_wpnonce=39170bc4ce
|
# The Unapologetic Mathematician
## Associativity in Series II
I’m leaving for DC soon, and may not have internet access all day. So you get this now!
We’ve seen that associativity doesn’t hold for infinite sums the way it does for finite sums. We can always “add parentheses” to a convergent series, but we can’t always remove them.
The first example we mentioned last time. Consider the series with terms $a_k=(-1)^k$:
$\displaystyle\sum\limits_{k=0}^\infty a_k=1+(-1)+1+(-1)+...$
Now let’s add parentheses using the sequence $d(j)=2j+1$. Then $b_j=(-1)^{(2(j-1)+1)+1}+(-1)^{2j+1}=1+(-1)=0$. That is, we now have the sequence
$\displaystyle\sum\limits_{j=0}^\infty b_j=(1+(-1))+(1+(-1))+...=0+0+...=0$
So the resulting series does converge. However, the original series can’t converge.
The obvious fault is that the terms $a_k$ don’t get smaller. And we know that $\lim\limits_{k\rightarrow\infty}a_k$ must be zero, or else we’ll have trouble with Cauchy’s condition. With the parentheses in place the terms $b_j$ go to zero, but when we remove them this condition can fail. And it turns out there’s just one more condition we need so that we can remove parentheses.
So let’s consider the two series with terms $a_k$ and $b_j$, where the first is obtained from the second by removing parentheses using the function $d(j)$. Assume that $\lim_{k\rightarrow\infty}a_k=0$, and also that there is some $M>0$ so that each of the $b_j$ is a sum of fewer than $M$ of the $a_k$. That is, $d(j+1)-d(j)<M$. Then the series either both diverge or both converge, and if they converge they have the same sum.
We set up the sequences of partial sums
$\displaystyle s_n=\sum\limits_{k=0}^na_k$
$\displaystyle t_m=\sum\limits_{j=0}^mb_j$
We know from last time that $t_m=s_{d(m)}$, and so if the first series converges then the second one must as well. We need to show that if $t=\lim\limits_{m\rightarrow\infty}t_m$ exists, then we also have $\lim\limits_{n\rightarrow\infty}s_n=t$.
To this end, pick an $\epsilon>0$. Since the sequence of $t_m$ converge to $t$, we can choose some $N$ so that $\left|t_m-t\right|<\frac{\epsilon}{2}$ for all $m>N$. Since the sequence of terms $a_k$ converges to zero, we can increase $N$ until we also have $\left|a_k\right|<\frac{\epsilon}{2M}$ for all $k>N$.
Now take any $n>d(N)$. Then $n$ falls between $d(m)$ and $d(m+1)$ for some $m$. We can see that $m\geq N$, and that $n$ is definitely above $N$. So the partial sum $s_n$ is the sum of all the $a_k$ up through $k=d(m+1)$, minus those terms past $k=n$. That is
$\displaystyle s_n=\sum\limits_{k=0}^na_k=\sum\limits_{k=0}^{d(m+1)}a_k-\sum\limits_{k=n+1}^{d(m+1)}a_k$
But this first sum is just the partial sum $t_{m+1}$, while each term of the second sum is bounded in size by our assumptions above. We check
$\displaystyle\left|s_n-t\right|=\left|(t_{m+1}-t)-\sum\limits_{k=n+1}^{d(m+1)}a_k\right|\leq\left|t_{m+1}-t\right|+\sum\limits_{k=n+1}^{d(m+1)}\left|a_k\right|$
But since $n$ is between $d(m)$ and $d(m+1)$, there must be fewer than $M$ terms in this last sum, all of which are bounded by $\frac{\epsilon}{2M}$. So we see
$\displaystyle\left|s_n-t\right|<\frac{\epsilon}{2}+M\frac{\epsilon}{2M}=\epsilon$
and thus we have established the limit.
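A quick numerical illustration of the theorem, as a sketch: the alternating harmonic series has terms going to zero, and grouping them in pairs (so each $b_j$ sums fewer than $M=3$ terms) leaves the sum, $\log 2$, unchanged.

```python
import math

# terms of the alternating harmonic series; a_k -> 0 and the sum is log 2
K = 200000
a = [(-1) ** k / (k + 1) for k in range(K)]

# add parentheses in pairs: b_j = a_{2j} + a_{2j+1}
b = [a[2 * j] + a[2 * j + 1] for j in range(K // 2)]

s, t = sum(a), sum(b)
assert abs(s - t) < 1e-9            # grouped and ungrouped partial sums agree
assert abs(t - math.log(2)) < 1e-4  # and both are near log 2
```

Here $t_m=s_{d(m)}$ exactly, so the only discrepancy between `s` and `t` is floating-point rounding.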
Posted by John Armstrong | Analysis, Calculus
## About this weblog
This is mainly an expository blath, with occasional high-level excursions, humorous observations, rants, and musings. The main-line exposition should be accessible to the “Generally Interested Lay Audience”, as long as you trace the links back towards the basics. Check the sidebar for specific topics (under “Categories”).
I’m in the process of tweaking some aspects of the site to make it easier to refer back to older topics, so try to make the best of it for now.
|
http://mathhelpforum.com/advanced-algebra/163750-irreducible-polynomial-mod-p-2-a.html
|
# Thread:
1. ## Irreducible polynomial mod p^2
Consider the polynomial $x^{2} + p$ modulo $p^{2}$.
I would like to show that this polynomial is irreducible. Unfortunately I don't see how any of the usual tricks (Eisenstein, simple enumeration in the case of fields of small characteristic) might apply here.
2. Suppose $x^2+p\pmod{p^2}$ is reducible. Then, $x^2+p\equiv 0\pmod{p^2}$ has a solution. By reducing the modulus, we have $x^2\equiv 0\pmod{p}$, so $x$ is divisible by $p$. Write $x=py$. Putting this back into the original equation gives $(py)^2+p\equiv p^2y^2+p\equiv p \equiv 0\pmod{p^2}$, which is certainly not true.
Therefore, $x^2+p\pmod{p^2}$ is irreducible.
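Since the polynomial is a monic quadratic, reducibility modulo $p^2$ comes down to having a root there, exactly as in the argument above, so the claim is easy to confirm exhaustively for small primes. A quick sketch:

```python
# x^2 + p = 0 (mod p^2) has no solution, so the monic quadratic
# has no linear factor modulo p^2
for p in [2, 3, 5, 7, 11, 13]:
    modulus = p * p
    assert all((x * x + p) % modulus != 0 for x in range(modulus)), p
print("x^2 + p has no root mod p^2 for p up to 13")
```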
|
http://unapologetic.wordpress.com/2011/07/19/the-uniqueness-of-the-exterior-derivative/?like=1&source=post_flair&_wpnonce=18b5244a97
|
# The Unapologetic Mathematician
## The Uniqueness of the Exterior Derivative
It turns out that our exterior derivative is uniquely characterized by some of its properties; it is the only derivation of the algebra $\Omega(M)$ of degree $1$ whose square is zero and which gives the differential on functions. That is, once we specify that $d:\Omega^k(M)\to\Omega^{k+1}(M)$, that $d(\alpha+\beta)=d\alpha+d\beta$, that $d(\alpha\wedge\beta)=d\alpha\wedge\beta+(-1)^p\alpha\wedge d\beta$ if $\alpha$ is a $p$-form, that $d(d\omega)=0$, and that $df(X)=Xf$ for functions $f\in\Omega^0(M)$, then there is no other choice but the exterior derivative we already defined.
First, we want to show that these properties imply another one that’s sort of analytic in character: if $\alpha=\beta$ in a neighborhood of $p$ then $d\alpha(p)=d\beta(p)$. Equivalently (given linearity), if $\alpha=0$ in a neighborhood $U$ of $p$ then $d\alpha(p)=0$. But then we can pick a bump function $\phi$ which is $0$ on a neighborhood of $p$ and $1$ outside of $U$. Then we have $\phi\alpha=\alpha$ and
$\displaystyle\begin{aligned}d\alpha(p)&=\left[d(\phi\alpha)\right](p)\\&=d\phi(p)\wedge\alpha(p)+\phi(p)d\alpha(p)\\&=0+0=0\end{aligned}$
And so we may as well throw this property onto the pile. Notice, though, how this condition is different from the way we said that tensor fields live locally. In this case we need to know that $\alpha$ vanishes in a whole neighborhood, not just at $p$ itself.
Next, we show that these conditions are sufficient for determining a value of $d\omega$ for any $k$-form $\omega$. It will help us to pick a local coordinate patch $(U,x)$ around a point $p$, and then we’ll show that the result doesn’t actually depend on this choice. Picking a coordinate patch gives us a canonical basis of the space $\Omega^k(U)$ of $k$-forms over $U$, indexed by “multisets” $I=\{0\leq i_1<\dots<i_k\leq n\}$. Any $k$-form $\omega$ over $U$ can be written as
$\displaystyle\omega(q)=\sum\limits_I\omega_I(q)dx^{i_1}(q)\wedge\dots\wedge dx^{i_k}(q)$
and so we can calculate
$\displaystyle\begin{aligned}d\omega(p)=&d\left(\sum\limits_I\omega_I(p)dx^{i_1}(p)\wedge\dots\wedge dx^{i_k}(p)\right)\\=&\sum\limits_Id\left(\omega_I(p)dx^{i_1}(p)\wedge\dots\wedge dx^{i_k}(p)\right)\\=&\sum\limits_I\Bigl(d\omega_I(p)\wedge dx^{i_1}(p)\wedge\dots\wedge dx^{i_k}(p)\\&+\omega_I(p)d\left(dx^{i_1}(p)\wedge\dots\wedge dx^{i_k}(p)\right)\Bigr)\\=&\sum\limits_I\Biggl(d\omega_I(p)\wedge dx^{i_1}(p)\wedge\dots\wedge dx^{i_k}(p)\\&+\omega_I(p)\sum\limits_{j=1}^k(-1)^{j-1}dx^{i_1}(p)\wedge\dots\wedge d\left(dx^{i_j}(p)\right)\wedge\dots\wedge dx^{i_k}(p)\Biggr)\\=&\sum\limits_Id\omega_I(p)\wedge dx^{i_1}(p)\wedge\dots\wedge dx^{i_k}(p)\end{aligned}$
where we use the fact that $d\left(dx^i\right)=0$.
Now if $(V,y)$ is a different coordinate patch around $p$ then we get a different decomposition
$\displaystyle\omega(q)=\sum\limits_J\omega_J(q)dy^{j_1}(q)\wedge\dots\wedge dy^{j_k}(q)$
but both decompositions agree on the intersection $U\cap V$, which is a neighborhood of $p$, and thus when we apply $d$ to them we get the same value at $p$, by the “analytic” property we showed above. Thus the value only depends on $\omega$ itself (and the point $p$), and not on the choice of coordinates we used to help with the evaluation. And so the exterior derivative $d\omega$ is uniquely determined by the four given properties.
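In $\mathbb{R}^3$, where $d$ on functions is the gradient and $d$ on $1$-forms corresponds to the curl, the condition $d(df)=0$ becomes the identity $\nabla\times(\nabla f)=0$, which can be checked symbolically. A sketch using SymPy with an arbitrary test function:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
f = sp.sin(x * y) + z**3 * x          # arbitrary smooth test function

# df corresponds to grad f
g = [sp.diff(f, v) for v in (x, y, z)]

# d(df) = 0 corresponds to curl(grad f) = 0
curl = [sp.diff(g[2], y) - sp.diff(g[1], z),
        sp.diff(g[0], z) - sp.diff(g[2], x),
        sp.diff(g[1], x) - sp.diff(g[0], y)]
assert all(sp.simplify(c) == 0 for c in curl)
print("curl(grad f) =", curl)
```

The cancellation is just the symmetry of mixed partial derivatives, which is exactly what $d\left(dx^i\right)=0$ encodes in coordinates.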
Posted by John Armstrong | Differential Topology, Topology
## 3 Comments »
1. Are you going to cover integration of differential forms or de Rham cohomology?
Comment by Andrei | July 20, 2011 | Reply
2. Well, having introduced the exterior derivative and shown that its square is zero, I’m halfway to de Rham cohomology already, so that wouldn’t be a bad guess.
Comment by | July 20, 2011 | Reply
|
http://en.wikipedia.org/wiki/Prismatoid
|
# Prismatoid
In geometry, a prismatoid is a polyhedron where all vertices lie in two parallel planes. (If both planes have the same number of vertices, and the lateral faces are either parallelograms or trapezoids, it is called a prismoid.)
If the areas of the two parallel faces are A1 and A3, the cross-sectional area of the intersection of the prismatoid with a plane midway between the two parallel faces is A2, and the height (the distance between the two parallel faces) is h, then the volume of the prismatoid is given by $V = \frac{h(A_1 + 4A_2 + A_3)}{6}$ (This formula follows immediately by integrating the area parallel to the two planes of vertices by Simpson's rule, since that rule is exact for integration of polynomials of degree up to 3, and in this case the area is at most a quadratic function in the height.)
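The formula is easy to sanity-check against the familiar pyramid and frustum volumes; a small sketch with arbitrary test values:

```python
def prismatoid_volume(A1, A2, A3, h):
    # V = h * (A1 + 4*A2 + A3) / 6
    return h * (A1 + 4 * A2 + A3) / 6

a, h = 3.0, 5.0

# square pyramid: the apex plane has area 0, the mid cross-section has side a/2
V_pyr = prismatoid_volume(a * a, (a / 2) ** 2, 0.0, h)
assert abs(V_pyr - a * a * h / 3) < 1e-12

# frustum cut at half height: bottom side a, top side a/2, mid side 3a/4
V_fru = prismatoid_volume(a * a, (3 * a / 4) ** 2, (a / 2) ** 2, h / 2)
V_exact = a * a * h / 3 - (a / 2) ** 2 * (h / 2) / 3   # big pyramid minus top pyramid
assert abs(V_fru - V_exact) < 1e-12
print(V_pyr, V_fru)
```

Because the cross-sectional side length varies linearly with height, the area is quadratic, and Simpson's rule reproduces the volume exactly.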
## Prismatoid families
Families of prismatoids include:
• Pyramids, where one plane contains only a single point;
• Wedges, where one plane contains only two points;
• Prisms, where the polygons in each plane are congruent and joined by rectangles or parallelograms;
• Antiprisms, where the polygons in each plane are congruent and joined by an alternating strip of triangles;
• Crossed antiprisms;
• Cupolae, where the polygon in one plane contains twice as many points as the other and is joined to it by alternating triangles and rectangles;
• Frusta obtained by truncation of a pyramid;
• Quadrilateral-faced hexahedral prismatoids:
1. Parallelepipeds - six parallelogram faces
2. Rhombohedrons - six rhombus faces
3. Trigonal trapezohedra - six congruent rhombus faces
4. Cuboids - six rectangular faces
5. Quadrilateral frusta - an apex-truncated square pyramid
6. Cubes - six square faces
## Higher dimensions
In general a polytope is prismatoidal if its vertices exist in two parallel hyperplanes. For example, in 4 dimensions, two polyhedra can be placed in 2 parallel 3-spaces, and connected with polyhedral sides.
A tetrahedral-cuboctahedral cupola.
|
http://unapologetic.wordpress.com/2010/10/27/one-complete-character-table-part-2/?like=1&source=post_flair&_wpnonce=c8d80d4293
|
# The Unapologetic Mathematician
## One Complete Character Table (part 2)
Last time we wrote down the complete character table of $S_3$:
$\displaystyle\begin{array}{c|ccc}&e&(1\,2)&(1\,2\,3)\\\hline\chi^\mathrm{triv}&1&1&1\\\mathrm{sgn}&1&-1&1\\\chi^\perp&2&0&-1\end{array}$
which is all well and good except we haven’t actually seen a representation with the last line as its character!
So where did we get the last line? We had the equation $\chi^\mathrm{def}=\chi^\mathrm{triv}+\chi^\perp$, which involves the characters of the defining representation $V^\mathrm{def}$ and the trivial representation $V^\mathrm{triv}$. This equation should correspond to an isomorphism $V^\mathrm{def}\cong V^\mathrm{triv}\oplus V^\perp$.
We know that there’s a copy of the trivial representation as a submodule of the defining representation. If we use the standard basis $\{\mathbf{1},\mathbf{2},\mathbf{3}\}$ of $V^\mathrm{def}$, this submodule is the line spanned by the vector $\mathbf{1}+\mathbf{2}+\mathbf{3}$. We even worked out the defining representation in terms of the basis $\{\mathbf{1}+\mathbf{2}+\mathbf{3},\mathbf{2},\mathbf{3}\}$ to show that it’s reducible.
But what we want is a complementary subspace which is also $G$-invariant. And we can find such a complement if we have a $G$-invariant inner product on our space. And, luckily enough, permutation representations admit a very nice invariant inner product! Indeed, just take the inner product that arises by declaring the standard basis to be orthonormal; it’s easy to see that this is invariant under the action of $G$.
So we need to take our basis $\{\mathbf{1}+\mathbf{2}+\mathbf{3},\mathbf{2},\mathbf{3}\}$ and change the second and third members to make them orthogonal to the first one. Then they will span the orthogonal complement, which we will show to be $G$-invariant. The easiest way to do this is to use $\{\mathbf{1}+\mathbf{2}+\mathbf{3},\mathbf{2}-\mathbf{1},\mathbf{3}-\mathbf{1}\}$. Then we can calculate the action of each permutation in terms of this basis. For example:
$\displaystyle\begin{aligned}\left[\rho((1\,2))\right](\mathbf{1}+\mathbf{2}+\mathbf{3})&=\mathbf{1}+\mathbf{2}+\mathbf{3}\\\left[\rho((1\,2))\right](\mathbf{2}-\mathbf{1})&=\mathbf{1}-\mathbf{2}=-(\mathbf{2}-\mathbf{1})\\\left[\rho((1\,2))\right](\mathbf{3}-\mathbf{1})&=\mathbf{3}-\mathbf{2}=-(\mathbf{2}-\mathbf{1})+(\mathbf{3}-\mathbf{1})\end{aligned}$
and write out all the representing matrices in terms of this basis:
$\displaystyle\begin{aligned}\rho(e)&=\begin{pmatrix}1&0&0\\{0}&1&0\\{0}&0&1\end{pmatrix}\\\rho((1\,2))&=\begin{pmatrix}1&0&0\\{0}&-1&-1\\{0}&0&1\end{pmatrix}\\\rho((1\,3))&=\begin{pmatrix}1&0&0\\{0}&1&0\\{0}&-1&-1\end{pmatrix}\\\rho((2\,3))&=\begin{pmatrix}1&0&0\\{0}&0&1\\{0}&1&0\end{pmatrix}\\\rho((1\,2\,3))&=\begin{pmatrix}1&0&0\\{0}&-1&-1\\{0}&1&0\end{pmatrix}\\\rho((1\,3\,2))&=\begin{pmatrix}1&0&0\\{0}&0&1\\{0}&-1&-1\end{pmatrix}\end{aligned}$
These all have the required form:
$\displaystyle\left(\begin{array}{c|cc}1&0&0\\\hline{0}&\ast&\ast\\{0}&\ast&\ast\end{array}\right)$
where the $1$ in the upper-left is the trivial representation and the $2\times 2$ block in the lower right is exactly the other representation $V^\perp$ we’ve been looking for! Indeed, we can check the values of the character:
$\displaystyle\begin{aligned}\chi^\perp(e)&=\mathrm{Tr}\begin{pmatrix}1&0\\{0}&1\end{pmatrix}=2\\\chi^\perp((1\,2))&=\mathrm{Tr}\begin{pmatrix}-1&-1\\{0}&1\end{pmatrix}=0\\\chi^\perp((1\,2\,3))&=\mathrm{Tr}\begin{pmatrix}-1&-1\\1&0\end{pmatrix}=-1\end{aligned}$
exactly as the character table predicted.
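The six $2\times 2$ blocks in the lower right can be checked directly: they are closed under matrix multiplication, so they really do represent $S_3$, and their traces are exactly the character values the table predicts. A small sketch:

```python
def mul(A, B):
    # 2x2 integer matrix product
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(2))
                       for j in range(2)) for i in range(2))

rho = {
    'e':     ((1, 0), (0, 1)),
    '(12)':  ((-1, -1), (0, 1)),
    '(13)':  ((1, 0), (-1, -1)),
    '(23)':  ((0, 1), (1, 0)),
    '(123)': ((-1, -1), (1, 0)),
    '(132)': ((0, 1), (-1, -1)),
}

mats = set(rho.values())
# closure: products of representing matrices stay within the set of six
assert all(mul(A, B) in mats for A in mats for B in mats)

trace = lambda M: M[0][0] + M[1][1]
# character values on the three conjugacy classes, as in the table
assert trace(rho['e']) == 2
assert trace(rho['(12)']) == trace(rho['(13)']) == trace(rho['(23)']) == 0
assert trace(rho['(123)']) == trace(rho['(132)']) == -1
print("closure and characters check out")
```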
|
http://mathoverflow.net/questions/18688/j-invariant-of-a-supersingular-elliptic-curve/18693
|
## j-invariant of a supersingular elliptic curve
Let $E$ be a supersingular elliptic curve over a finite field. Why is the $j$-invariant always in $\mathbb{F}_{p^2}$?
-
The answers by Pete Clark and Sam Derbyshire below are great, but I'll add here a text reference: Silverman's "The Arithmetic of Elliptic Curves", Chapter V, Section 3, Theorem 3.1. In particular, see the proof of (ii) implies (iii). – Álvaro Lozano-Robledo Jul 27 2011 at 19:24
## 2 Answers
(Note: the following argument uses the fact that an isogeny of elliptic curves is inseparable iff it factors through the Frobenius isogeny. This is a result in Silverman's book, for instance.)
Let $E$ be an elliptic curve over an algebraically closed field $k$ of positive characteristic $p$. Recall that $[p]: E \rightarrow E$ is always an inseparable isogeny. Therefore, by the above, it factors through $F: E \rightarrow E^p$. Moreover $E$ is supersingular iff $E[p](k) = 0$ iff $[p]$ is purely inseparable, iff the dual isogeny to Frobenius $V: E^p \rightarrow E$ (the "Verschiebung") is also inseparable. But again, this means that $V$ factors through the Frobenius isogeny for $E^p$ -- i.e., $E^p \rightarrow E^{p^2}$ -- and since both have degree $p$ this means that $E$ is isomorphic to $E^{p^2}$. Thus on $j$-invariants we have $j(E)^{p^2} = j(E)$, done.
-
Beat me to it. The Frobenius isogeny is always purely inseparable though, right? I think you meant that multiplication by $p$ is purely inseparable iff the dual to the Frobenius is inseparable. – Sam Derbyshire Mar 19 2010 at 0:52
@Sam: yes, you're right. I fixed it accordingly. – Pete L. Clark Mar 19 2010 at 0:58
In characteristic $p$, every map $E_1 \to E_2$ factors as a power of the Frobenius $\varphi_r \colon E_1 \to E_1^{(p^r)}$ followed by a separable morphism $E_1^{(p^r)} \to E_2$, and we find $r$ by looking at the inseparable degree of our map (if the map is separable, then $r=0$, as Pete pointed out).
Now, in the case of interest, if $E$ is supersingular, $\widehat{\varphi}$ is inseparable (as this is equivalent to multiplication by $p$ being purely inseparable). But then $\widehat{\varphi} \colon E^{(p)} \to E$ factors as $E^{(p)} \to E^{(p^2)} \to E$ by comparing degrees, where the first map is the Frobenius and the second is an isomorphism.
It then follows that $j(E) = j(E^{(p^2)}) = j(E)^{p^2}$ so $j(E) \in \mathbb{F}_{p^2}$.
-
Let me return the favor: every isogeny factors as above with the convention that the power of the Frobenius is allowed to be the zeroth power! By the way, it strikes me that it should be possible to "merge" answers and split the credit. That would be appropriate here... – Pete L. Clark Mar 19 2010 at 1:00
Haha, I guess it was confusing; it's clear that we need the zeroth power whenever our map is separable. As for the rest, I don't see any issues; it's a standard answer anyway! – Sam Derbyshire Mar 19 2010 at 1:13
Thanks for the answers guys – Josh Mar 19 2010 at 2:35
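The theorem can also be watched happening numerically. This sketch assumes two classical facts: Deuring's criterion that for $p\geq 5$ the supersingular Legendre parameters $\lambda$ are the roots of the Hasse polynomial $H_p(\lambda)=\sum_{i=0}^m\binom{m}{i}^2\lambda^i$ with $m=(p-1)/2$, and Igusa's result that $H_p$ is squarefree. Then all roots of $H_p$ lying in $\mathbb{F}_{p^2}$ is the same as $H_p \mid x^{p^2}-x$ in $\mathbb{F}_p[x]$, i.e. $x^{p^2}\equiv x \pmod{H_p}$:

```python
from math import comb

def polmul(a, b, p):
    # product of coefficient lists (lowest degree first), coefficients mod p
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] = (out[i + j] + ai * bj) % p
    return out

def polmod(a, mod, p):
    # reduce a modulo the monic polynomial mod, over F_p
    a = [c % p for c in a]
    d = len(mod) - 1
    while len(a) > d:
        lead, shift = a[-1], len(a) - 1 - d
        if lead:
            for i in range(d + 1):
                a[shift + i] = (a[shift + i] - lead * mod[i]) % p
        a.pop()
    return a

def x_power(n, mod, p):
    # x^n in F_p[x]/(mod) by square-and-multiply
    result, base = [1], [0, 1]
    while n:
        if n & 1:
            result = polmod(polmul(result, base, p), mod, p)
        base = polmod(polmul(base, base, p), mod, p)
        n >>= 1
    return result

for p in [5, 7, 11, 13, 17]:
    m = (p - 1) // 2
    H = [comb(m, i) ** 2 % p for i in range(m + 1)]  # Hasse polynomial, monic
    xq = x_power(p * p, H, p)
    # x^(p^2) = x mod H_p: every root of H_p lies in F_{p^2}
    assert (xq + [0] * m)[:m] == ([0, 1] + [0] * m)[:m], p
print("supersingular lambda-invariants lie in F_{p^2} for p up to 17")
```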
|
http://math.stackexchange.com/questions/217085/is-there-an-extensive-cheat-sheet-for-general-topology-questions?answertab=oldest
|
# Is there an extensive 'cheat sheet' for general topology questions?
I'm looking for a collection of results in basic topology. Often I find myself working out some small detail for i.e. continuous functions, and I know I have done the same thing before, but cannot remember if the result was $=$ or just $\subseteq$ or what exact conditions the map has to have for the result to hold.
A small example: If $f\colon X\rightarrow Y$ is a continuous map and $A\subseteq Y$, then in general only $\overline{f^{-1}(A)}\subseteq f^{-1}(\overline{A})$. But if $f$ is also an open map, then $f^{-1}(\overline{A})=\overline{f^{-1}(A)}$.
Do you know a good (not necessarily small) collection of results? Most topology books cover some very elementary ones, but I never find what I'm looking for, or when I eventually find something, it would have been faster if I had actually proven it myself.
Thank you in advance.
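Claims like the one in the example can even be brute-force checked over every topology on a small finite set before hunting for a reference. A sketch (here $X$ has three points and $Y$ two; both the general inclusion for continuous maps and the equality for open continuous maps are verified exhaustively):

```python
from itertools import combinations, product

def powerset(X):
    return [frozenset(c) for r in range(len(X) + 1) for c in combinations(X, r)]

def topologies(X):
    # all collections of subsets containing {} and X, closed under union/intersection
    subs = powerset(X)
    out = []
    for bits in product([0, 1], repeat=len(subs)):
        T = {S for S, b in zip(subs, bits) if b}
        if (frozenset() in T and frozenset(X) in T
                and all(a | b in T and a & b in T for a in T for b in T)):
            out.append(T)
    return out

def closure(A, T, X):
    closed = [frozenset(X) - U for U in T]
    return frozenset.intersection(*[C for C in closed if A <= C])

X, Y = (0, 1, 2), (0, 1)
checked = 0
for TX, TY in product(topologies(X), topologies(Y)):
    for values in product(Y, repeat=len(X)):
        f = dict(zip(X, values))
        preim = lambda B: frozenset(x for x in X if f[x] in B)
        if not all(preim(V) in TX for V in TY):
            continue                         # f not continuous
        is_open = all(frozenset(f[x] for x in U) in TY for U in TX)
        for A in powerset(Y):
            lhs = closure(preim(A), TX, X)   # cl(f^{-1}(A))
            rhs = preim(closure(A, TY, Y))   # f^{-1}(cl A)
            assert lhs <= rhs                # continuity alone
            if is_open:
                assert lhs == rhs            # continuity + openness
        checked += 1
print("verified", checked, "continuous maps")
```

Of course this only rules out finite counterexamples, but it is a cheap way to gain confidence in a half-remembered inclusion.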
-
The resource you are searching for: it will be 10 times better than any one you find on the internet if you write it for yourself :) I'm not trying to be sarcastic, I'm just saying that it's well worth the time investment. Specifically the organization and content will be ideal for you. – rschwieb Oct 19 '12 at 20:42
I find keeping a list of definitions much more valuable. Re-deriving results until they become instinctual is actually a good thing - it means you are increasing your understanding of the subject. (I don't think this is necessarily true for all subjects, but general topology is a case where almost all the basic results follow straight from the definitions.) – Thomas Andrews Oct 19 '12 at 20:47
## 1 Answer
When I took real analysis and topology (undergrad) I used this wikibook for proofs.
It has definitions, theorems (sometimes with proofs), and exercises.
If you do not like your book, try to find the one you like in the library, at least that is what I did.
-
|
http://physics.stackexchange.com/questions/tagged/lie-algebra+group-theory
|
# Tagged Questions
### Definition of Casimir operator and its properties
I'm not sure which is the exact definition of a Casimir operator. In some texts it is defined as the product of generators of the form: $$X^2=\sum X_iX^i$$ But in other parts it is defined as an ...
1answer
86 views
### Different representations of the Lorentz algebra
I've found many definitions of Lorentz generators that satisfy the Lorentz algebra: ...
2answers
181 views
### How to model a symmetry using Lie Groups?
I have been reading lately about Lie groups, and although all books keep listing the groups, and talk about Lie algebras and all that, one thing I still don't know is how it is made, and I guess it's the ...
2answers
164 views
### high spin atoms SU(2) representation
I am very confused that some atoms, called high-spin or magnetic atoms, have spin greater than $\frac{1}{2}$ but are still said to have $SU(2)$ symmetry. Why not $SU(N)$?
2answers
344 views
### Lie bracket for Lie algebra of $SO(n,m)$
How does one show that the bracket of elements in the Lie algebra of $SO(n,m)$ is given by $$[J_{ab},J_{cd}] ~=~ i(\eta_{ad} J_{bc} + \eta_{bc} J_{ad} - \eta_{ac} J_{bd} - \eta_{bd}J_{ac}),$$ ...
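One way to see this bracket concretely is by brute force in the vector representation. The generator convention below, $(J_{ab})^\mu{}_\nu = i(\delta^\mu_a \eta_{b\nu} - \delta^\mu_b \eta_{a\nu})$, is an assumption (one common choice), not something stated in the question excerpt:

```python
import numpy as np

# Metric for SO(1,3); any diagonal signature for SO(n,m) works equally well.
eta = np.diag([1.0, -1.0, -1.0, -1.0])
d = 4

def J(a, b):
    """Vector-rep generator (J_ab)^mu_nu = i(delta^mu_a eta_{b nu} - delta^mu_b eta_{a nu})."""
    M = np.zeros((d, d), dtype=complex)
    for mu in range(d):
        for nu in range(d):
            M[mu, nu] = 1j * ((mu == a) * eta[b, nu] - (mu == b) * eta[a, nu])
    return M

# Check [J_ab, J_cd] = i(eta_ad J_bc + eta_bc J_ad - eta_ac J_bd - eta_bd J_ac)
for a in range(d):
    for b in range(d):
        for c in range(d):
            for e in range(d):
                lhs = J(a, b) @ J(c, e) - J(c, e) @ J(a, b)
                rhs = 1j * (eta[a, e] * J(b, c) + eta[b, c] * J(a, e)
                            - eta[a, c] * J(b, e) - eta[b, e] * J(a, c))
                assert np.allclose(lhs, rhs)
```

Since the check never uses the specific entries of `eta` beyond its being diagonal, swapping in any $(n,m)$ signature exercises the general $SO(n,m)$ case.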
1answer
264 views
### Wigner-Eckart theorem of SU(3)
I have just come across the Wigner-Eckart theorem and am not sure how to apply it. How do I find the matrix elements of $\langle u|T_a|v\rangle$ in terms of tensor components and the Gell-Mann ...
1answer
202 views
### How do I find the tensor components of all weights of a representation of SU(3), e.g. the six dimensional representation (2,0)
How do I find the corresponding tensor component $v^{ij}$ of the six dimensional representation of SU(3) with Dynkin label (2,0)?
1answer
71 views
### Charge of a field under the action of a group
What does it mean for a field (say, $\phi$) to have a charge (say, $Q$) under the action of a group (say, $U(1)$)?
2answers
408 views
### How does non-Abelian gauge symmetry imply the quantization of the corresponding charges?
I read an unjustified treatment in a book, saying that in QED charge is not quantized by the gauge symmetry principle (which is totally clear for me: Q, the generator of $U(1)$, can be anything in ...
0answers
186 views
### Coupling Coefficients in SO(4)
I have two equations (from two distinct authors) for the decomposition of a coupling coefficient of SO(4) (i.e. Wigner 3j-symbol for SO(4)). In the first: ...
2answers
155 views
### Is this a simple Lie algebra?
This question comes from Georgi, Lie Algebras in Particle Physics. Consider the algebra generated by $\sigma_a\otimes1$ and $\sigma_a\otimes \eta_1$ where $\sigma_a$ and $\eta_1$ are Pauli matrices ...
2answers
512 views
### Is the G2 Lie algebra useful for anything?
Seems like all the simpler Lie algebras have a use in one or another branch of theoretical physics. Even the exceptional E8 comes up in string theory. But G2? I've always wondered about that one. ...
http://physics.stackexchange.com/questions/tagged/action?sort=active&pagesize=15
# Tagged Questions
1answer
78 views
### Retrieving Maxwell's equations from the minimum action principle
I'm currently working at the start of Alexei Tsvelik's book Quantum Field Theory in Condensed Matter Physics. I'm kinda stumped on a few essential steps. Starting with the action: S = \int dt \int ...
3answers
1k views
### Derivation of Maxwell's equations from field tensor lagrangian
I've started reading Peskin and Schroeder on my own time, and I'm a bit confused about how to obtain Maxwell's equations from the (source-free) Lagrangian density $L = ...
1answer
51 views
### Why vary the action with respect to the inverse metric?
Whenever I have read texts which employ actions that contain metric tensors, such as the Nambu-Goto, Polyakov or Einstein-Hilbert action, the equations of motion are derived by varying with respect to ...
1answer
45 views
### If I want the action to be a positive number, does that require $\tau_i$ to be bigger than $\tau_f$? [closed]
The action is the length of the geodesic, $S=-E_o\int_i^f d\tau$; we get an action that is minimised for the correct path. If I want the action to be a positive number, does it require that $\tau_i$ be ...
1answer
97 views
### Discretization of action in path integral
I am reading Peskin and Schroeder (path integrals) and it states that discretising the classical action gives: S~=~\int \left(\frac{m}{2}\dot{x}^{2}-V(x)\right) dt ~\rightarrow~ \sum ...
2answers
272 views
### Action for a point particle in a curved spacetime
Is this action for a point particle in a curved spacetime correct? $$\mathcal S =-Mc \int ds = -Mc \int_{\xi_0}^{\xi_1}\sqrt{g_{\mu\nu}(x)\frac{dx^\mu(\xi)}{d\xi} \frac{dx^\nu(\xi)}{d\xi}} \ \ d\xi$$
1answer
85 views
### What is the action for an electromagnetic field if including magnetic charge
Recently, I try to write an action of an electromagnetic field with magnetic charge and quantize it. But it seems not as easy as it seems to be. Does anyone know anything or think of anything like ...
2answers
119 views
### How the boundary term in the variation of the action vanishes
Can someone explain a little more that why the last term in equation (1.5) vanishes? Reference: David Tong, Quantum Field Theory: University of Cambridge Part III Mathematical Tripos, Lecture ...
3answers
216 views
### Noether's current expression in Peskin and Schroeder
In the second chapter of Peskin and Schroeder, An Introduction to Quantum Field Theory, it is said that the action is invariant if the Lagrangian density changes by a four-divergence. But if we ...
1answer
86 views
### Does anybody know of any good sources that explain (generically) how we form Lagrangians/Actions/Superpotentials for different field content?
I regularly find that I'll understand where the field content in a particular physics paper comes from, but then a Lagrangian or action or superpotential is stated and I don't know how it's derived. ...
1answer
73 views
### Varying an action (cosmological perturbation theory)
I am stuck varying an action, trying to get an equation of motion. (Going from eq. 91 to eq. 92 in the image.) This is the action $$S~=~\int d^{4}x \frac{a^{2}(t)}{2}(\dot{h}^{2}-(\nabla h)^2).$$ ...
1answer
170 views
### Polyakov action: difference induced metric and dynamical metric
The Polyakov action is given by: S_p ~=~ -\frac{T}{2}\int d^2\sigma \sqrt{-g}g^{\alpha\beta}\partial_{\alpha}X^{\mu}\partial_{\beta}X^{\nu}\eta_{\mu\nu} ~=~ -\frac{T}{2}\int d^2\sigma ...
2answers
362 views
### Conversion of the Nambu-Goto action into the Polyakov action?
I've read that the Nambu-Goto action containing the induced metric $\gamma_{\alpha\beta}$ $$\tag{1} S_{NG} ~=~ -T\int_{\tau_i}^{\tau_f} d\tau \int_0^{\ell} d\sigma \sqrt{-\gamma}$$ can be converted ...
0answers
68 views
### Solving the path integral for $(ax)^4-(bx)^2$ potential
I need help solving the path integral for a potential of the form $(ax)^4-(bx)^2$. This potential is sometimes known as the Ginzburg-Landau potential. I tried using the approximation in which the ...
3answers
545 views
### Entropy and the principle of least action
Is there any link between the law of maximum entropy and the principle of least action. Is it possible to derive one from the other ?
1answer
84 views
### Calculating the (on-shell) action of a free particle
I am having difficulty with the first problem from Feynman and Hibbs' book. For a free particle $L = (m/2)\dot{x}^2$. Show that the (on-shell) action $S_{cl}$ corresponding to the classical ...
2answers
189 views
### Why is the Lagrangian a negative number?
In the special relativistic action for a massive point particle, $$\int_{t_i}^{t_f}\mathcal {L}dt,$$ why is the Lagrangian $$\mathcal {L}=-E_o\gamma^{-1}$$ a negative number?
1answer
204 views
### Lagrangian for Euler Equations in general relativity
The stress energy tensor for relativistic dust $$T_{\mu\nu} = \rho v_\mu v_\nu$$ follows from the action S_M = -\int \rho c \sqrt{v_\mu v^\mu} \sqrt{ -g } d^4 x = -\int c \sqrt{p_\mu ...
1answer
303 views
### Do an action and its Euler-Lagrange equations have the same symmetries?
Assume a certain action $S$ with certain symmetries, from which according to the Lagrangian formalism, the equations of motion (EOM) of the system are the corresponding Euler-Lagrange equations. Can ...
3answers
116 views
### What is the meaning of the word “Principle” in Physics?
What is the meaning of the word principle in Physics? For example in the "action principle". Is it an action law, an action equation, or an unproved assumption? (I have an idea what an action is). ...
0answers
54 views
### path integrals: how/why can the phase be identified with the action?
In Peskin & Schroeder, chapter 9 introduces the functional methods. The idea, to recall, is simply to sum over all the possible paths: $U(x_a,x_b;T) = \sum_{\text{all paths}} e^{i . ...
1answer
175 views
### To construct an action from a given two-point function
This is really a basic question whose answer I guess may have to do with the way we construct Feynman rules and diagrams. The question is: Suppose I have been given a two-point function (found in some ...
2answers
1k views
### Deriving the Lagrangian for a free particle
I'm a newbie in physics. Sorry, if the following questions are dumb. I began reading "Mechanics" by Landau and Lifshitz recently and hit a few roadblocks right away. Proving that a free particle ...
2answers
75 views
### More general invariance of the action functional
I will formulate my question in the classical case, where things are simplest. Usually when one discusses a continuous symmetry of a theory, one means a one-parameter group of diffeomorphisms of the ...
1answer
203 views
### What variables does the action $S$ depend on?
Action is defined as, $$S ~=~ \int L(q, q', t) dt,$$ but my question is what variables does $S$ depend on? Is $S = S(q, t)$ or $S = S(q, q', t)$ where $q' := \frac{dq}{dt}$? In ...
3answers
145 views
### Is the path of stationary action unique? What are the physical implications of $L_{\dot{x}}=L_x$
Below, for any function $Q$ the notation $Q_x$ means $\frac{\partial Q}{\partial x}$, and $Q_{xx}$ means $\frac{\partial^2 Q}{\partial x^2}$. In physics, the trajectory of a particle is given by the ...
2answers
155 views
### What is the relativistic action of a massive particle?
All Lorentz observers watching a particle move will compute the same value for the quantity $$ds^2 = -(c \, dt)^2 + dx^2 + dy^2 + dz^2,$$ $$ds^2 = g_{\mu\nu}dx^{\mu}dx^{\nu},$$ and $ds/c$ is then ...
0answers
42 views
### Help identifying an expression for the action
I found the following expression for the action of a (free, I think) relativistic particle in my notes but I can't remember from what it came from: S = \int_{0}^{N} \left [ ...
2answers
301 views
### How to apply Noether's theorem
Say I have a point transformation: $$x' ~=~ (1 +\epsilon)x,$$ $$t' ~=~ (1 +\epsilon)^2t,$$ and Lagrangian $$L ~=~ \frac{1}{2}m\dot{x}^2 - \frac{\alpha}{x^2}.$$ How do I go out about showing ...
1answer
152 views
### What's the motivation behind the action principle? [closed]
What's the motivation behind the action principle? Why does the action principle lead to Newton's laws? If Newton's law of motion is more fundamental, why doesn't one derive Lagrangians and ...
2answers
124 views
### Could we get rid of explicit fields derivatives in Quantum Field Theories?
For instance, if we choose the following scalar field Lagrangian, which is (I hope) Lorentz-invariant, where $l$ is a length scale, and with a $(-1,1,1,1)$ metric: \mathfrak{L}(x) \sim ...
2answers
197 views
### How do I show that there exists variational/action principle for a given classical system?
We see variational principles coming into play in different places such as Classical Mechanics (Hamilton's principle which gives rise to the Euler-Lagrange equations), Optics (in the form of Fermat's ...
5answers
1k views
### Hamilton's Principle
Hamilton's principle states that a dynamic system always follows a path such that its action integral is stationary (that is, maximum or minimum). Why should the action integral be stationary? On ...
2answers
261 views
### Gauge fixing and equations of motion
Consider an action that is gauge invariant. Do we obtain the same information from the following: Find the equations of motion, and then fix the gauge? Fix the gauge in the action, and then find the ...
0answers
101 views
### Dirac action and conventions
I have a (possibly) fundamental question, which is driving me crazy. Notation When considering the Dirac action (say reading Peskin's book), one has $\int ...
4answers
379 views
### Physical meaning of action in Lagrangian mechanics
Action is defined as $S = \int_{t_1}^{t_2}L \, dt$ where $L$ is Lagrangian. I know that using Euler-Lagrange equation, all sorts of formula can be derived, but I remain unsure of the physical meaning ...
4answers
454 views
### Is the principle of least action a boundary value or initial condition problem?
Here is a question that's been bothering me since I was a sophomore in university, and should have probably asked before graduating: In analytic (Lagrangian) mechanics, the derivation of the ...
3answers
200 views
### Is the Lagrangian “math” or “science”?
I've seen in class that we can get from Lagrangian to derive equations of motion (I know its used elsewhere in physics, but I haven't seen it yet). It's not clear to me whether the Lagrangian itself ...
1answer
363 views
### Deriving the action and the Lagrangian for a free particle in Relativistic mechanics
My question relates to Landau, Classical Theory of Field, Chapter 2 - Relativistic Mechanics, paragraph 8 - The principle of least action. As stated there, To determine the action integral for a ...
1answer
111 views
### What do I call the inverse of a propagator?
Let's suppose I have a theory described by a Lagrangian as follows: $\mathcal{L} = A_\mu \underbrace{\left( \partial^2 g^{\mu\nu} - \partial^\mu \partial^\nu + m^2 g^{\mu \nu} \right)}_{K^{\mu \nu}} ...
6answers
2k views
### Why the Principle of Least Action?
I'll be generous and say it might be reasonable to assume that nature would tend to minimize, or maybe even maximize, the integral over time of $T-V$. Okay, fine. You write down the action ...
1answer
484 views
### Does Action in Classical Mechanics have a Interpretation? [duplicate]
Possible Duplicate: Hamilton's Principle The Lagrangian formulation of Classical Mechanics seem to suggest strongly that "action" is more than a mathematical trick. I suspect strongly ...
2answers
206 views
### What is the significance of action?
What is the physical interpretation of $$\int_{t_1}^{t_2} (T -V) dt$$ where, $T$ is Kinetic Energy and $V$ is potential energy. How does it give trajectory?
0answers
69 views
### Is there some connection between the Virial theorem and a least action principle?
Both involve some 'averaging' over energies (kinetic and potential) and make some prediction about their mean values. As far as the least action principles, one could think of them as saying that the ...
2answers
615 views
### Hamilton-Jacobi Equation
In the Hamilton-Jacobi equation, we take the partial time derivative of the action. But the action comes from integrating the Lagrangian over time, so time seems to just be a dummy variable here and ...
3answers
389 views
### $\hbar$, the angular momentum and the action
Is there anything interesting to say about the fact that $\hbar$, the angular momentum and the action have the same units or is it a pure coincidence?
2answers
281 views
### Is it circular reasoning to derive Newton's laws from action minimization?
Usually, a typical example of the use of the action principle that I've read a lot is the derivation of Newton's equation (generalized to coordinate $q(t)$). However, in the classical mechanics ...
4answers
725 views
### Why can't any term which is added to the Lagrangian be written as a total derivative (or divergence)?
All right, I know there must be an elementary proof of this, but I am not sure why I never came across it before. Adding a total time derivative to the Lagrangian (or a 4D divergence of some 4 ...
3answers
285 views
### Calculating lagrangian density from first principle
In most field theory texts, they start with the Lagrangian density for spin 1 and spin 1/2 particles. But I couldn't find any text where this Lagrangian density is derived from first principles.
1answer
148 views
### Differentiation of the action functional
In the QFT book by Itzykson and Zuber, the variation of the action functional $I=\int_{t_1}^{t_2}dtL$ is written as: $$\delta I=\int_{t_1}^{t_2}dt\frac{\delta I}{\delta q(t)}\delta q(t)$$ How is ...
http://en.wikipedia.org/wiki/Stimulated_Emission
# Stimulated emission
In optics, stimulated emission is the process by which an atomic electron (or an excited molecular state) interacting with an electromagnetic wave of a certain frequency may drop to a lower energy level, transferring its energy to that field. A photon created in this manner has the same phase, frequency, polarization, and direction of travel as the photons of the incident wave. This is in contrast to spontaneous emission which occurs without regard to the ambient electromagnetic field. However, the process is identical in form to atomic absorption in which the energy of an absorbed photon causes an identical but opposite atomic transition: from the lower level to a higher energy level. In normal media at thermal equilibrium, absorption exceeds stimulated emission because there are more electrons in the lower energy states than in the higher energy states. However, when a population inversion is present the rate of stimulated emission exceeds that of absorption, and a net optical amplification can be achieved. Such a gain medium, along with an optical resonator, is at the heart of a laser or maser. Lacking a feedback mechanism, laser amplifiers and superluminescent sources also function on the basis of stimulated emission.
Stimulated emission was a theoretical discovery by Einstein [1] within the framework of quantum mechanics, wherein the emission is described in terms of photons that are the quanta of the EM field. Stimulated emission can also be described classically, however, without reference to either photons, or the quantum-mechanics of matter.[2]
## Overview
Electrons and how they interact with electromagnetic fields are important in our understanding of chemistry and physics. In the classical view, the energy of an electron orbiting an atomic nucleus is larger for orbits further from the nucleus of an atom. However, quantum mechanical effects force electrons to take on discrete positions in orbitals. Thus, electrons are found in specific energy levels of an atom, such as the two levels considered below.
When an electron absorbs energy either from light (photons) or heat (phonons), it receives that incident quantum of energy. But transitions are only allowed between discrete energy levels such as these two. This leads to emission lines and absorption lines.
When an electron is excited from a lower to a higher energy level, it will not stay that way forever. An electron in an excited state may decay to a lower energy state which is not occupied, according to a particular time constant characterizing that transition. When such an electron decays without external influence, emitting a photon, that is called "spontaneous emission". The phase associated with the photon that is emitted is random. A material with many atoms in such an excited state may thus result in radiation which is very spectrally limited (centered around one wavelength of light), but the individual photons would have no common phase relationship and would emanate in random directions. This is the mechanism of fluorescence and thermal emission.
An external electromagnetic field at a frequency associated with a transition can affect the quantum mechanical state of the atom. As the electron in the atom makes a transition between two stationary states (neither of which shows a dipole field), it enters a transition state which does have a dipole field, and which acts like a small electric dipole, and this dipole oscillates at a characteristic frequency. In response to the external electric field at this frequency, the probability of the atom entering this transition state is greatly increased. Thus, the rate of transitions between two stationary states is enhanced beyond that due to spontaneous emission. Such a transition to the higher state is called absorption, and it destroys an incident photon (the photon's energy goes into powering the increased energy of the higher state). A transition from the higher to a lower energy state, however, produces an additional photon; this is the process of stimulated emission.
## Mathematical model
Stimulated emission can be modelled mathematically by considering an atom that may be in one of two electronic energy states, a lower level state (possibly the ground state) (1) and an excited state (2), with energies E1 and E2 respectively.
If the atom is in the excited state, it may decay into the lower state by the process of spontaneous emission, releasing the difference in energies between the two states as a photon. The photon will have frequency ν and energy hν, given by:
$E_2 - E_1 = h \, \nu_0$
where h is Planck's constant.
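As a quick numerical illustration (the 1.96 eV level spacing of the helium-neon laser transition below is an assumed example, not a value given in the text), the relation fixes the emitted frequency and wavelength:

```python
h = 6.626e-34        # Planck's constant, J*s
c = 2.998e8          # speed of light, m/s
eV = 1.602e-19       # joules per electron-volt

E2_minus_E1 = 1.96 * eV   # assumed level spacing (HeNe laser transition)
nu0 = E2_minus_E1 / h     # photon frequency nu_0 = (E2 - E1) / h, in Hz
lam = c / nu0             # corresponding wavelength, in m

assert 6.2e-7 < lam < 6.5e-7   # ~633 nm, the familiar red HeNe line
```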
Alternatively, if the excited-state atom is perturbed by an electric field of frequency $\nu_0$, it may emit an additional photon of the same frequency and in phase, thus augmenting the external field, leaving the atom in the lower energy state. This process is known as stimulated emission.
In a group of such atoms, if the number of atoms in the excited state is given by N2, the rate at which stimulated emission occurs is given by:
$\frac{\partial N_2}{\partial t} = -\frac{\partial N_1}{\partial t} = - B_{21} \ \rho (\nu) N_2$
where the proportionality constant B21 is known as the Einstein B coefficient for that particular transition, and ρ(ν) is the radiation density of the incident field at frequency ν. The rate of emission is thus proportional to the number of atoms in the excited state N2, and to the density of incident photons.
At the same time, there will be a process of atomic absorption which removes energy from the field while raising electrons from the lower state to the upper state. Its rate is given by an essentially identical equation:
$\frac{\partial N_2}{\partial t} = -\frac{\partial N_1}{\partial t} = B_{12} \ \rho (\nu) N_1$ .
The rate of absorption is thus proportional to the number of atoms in the lower state, N1. Einstein showed that the coefficient for this transition must be identical to that for stimulated emission:
$B_{12} = B_{21}$ .
Thus absorption and stimulated emission are reverse processes proceeding at rates proportional to $N_1$ and $N_2$ respectively. Another way of viewing this is to look at the net stimulated emission or absorption, viewing it as a single process. The net rate of transitions from E2 to E1 due to this combined process can be found by adding their respective rates, given above:
$\frac{\partial N_1 \ (net)}{\partial t} = - \frac{\partial N_2 \ (net)}{\partial t} = B_{21} \ \rho (\nu) (N_2-N_1) = B_{21} \ \rho (\nu) \ \Delta N$.
Thus a net power is released into the electric field equal to the photon energy hν times this net transition rate. In order for this to be a positive number, indicating net stimulated emission, there must be more atoms in the excited state than in the lower level: $\Delta N > 0$. Otherwise there is net absorption and the power of the wave is reduced during passage through the medium. The special condition $N_2 > N_1$ is known as a population inversion, a rather unusual condition that must be effected in the gain medium of a laser.
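A minimal sketch of this sign logic (the numbers and units below are arbitrary illustrations, not values from the article):

```python
def net_rate_21(B21, rho, N2, N1):
    """Net rate of 2 -> 1 transitions, B21 * rho(nu) * (N2 - N1).

    Positive: net stimulated emission (requires a population inversion).
    Negative: net absorption (the usual thermal-equilibrium situation).
    """
    return B21 * rho * (N2 - N1)

assert net_rate_21(1.0, 1.0, N2=10, N1=90) < 0   # equilibrium: net absorption
assert net_rate_21(1.0, 1.0, N2=60, N1=40) > 0   # inversion: net amplification
```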
The notable characteristic of stimulated emission compared to everyday light sources (which depend on spontaneous emission) is that the emitted photons have the same frequency, phase, polarization, and direction of propagation as the incident photons. The photons involved are thus mutually coherent. When a population inversion ($\Delta N > 0$) is present, therefore, optical amplification of incident radiation will take place.
Although energy generated by stimulated emission is always at the exact frequency of the field which has stimulated it, the above rate equation refers only to excitation at the particular optical frequency $\nu_0$ corresponding to the energy of the transition. At frequencies offset from $\nu_0$ the strength of stimulated (or spontaneous) emission will be decreased according to the so-called line shape. Considering only homogeneous broadening affecting an atomic or molecular resonance, the spectral line shape function is described as a Lorentzian distribution:
$g'(\nu) = {1 \over \pi } { (\Gamma / 2) \over (\nu - \nu_0)^2 + (\Gamma /2 )^2 }$
where $\Gamma \,$ is the full width at half maximum or FWHM bandwidth.
The peak value of the Lorentzian line shape occurs at the line center, $\nu = \nu_0$. A line shape function can be normalized so that its value at $\nu_0$ is unity; in the case of a Lorentzian we obtain:
$g(\nu) = { g'(\nu) \over g'(\nu_0) } = { (\Gamma / 2)^2 \over (\nu - \nu_0)^2 + (\Gamma /2 )^2 }$.
Thus stimulated emission at frequencies away from $\nu_0$ is reduced by this factor. In practice there may also be broadening of the line shape due to inhomogeneous broadening, most notably due to the Doppler effect resulting from the distribution of velocities in a gas at a certain temperature. This has a Gaussian shape and reduces the peak strength of the line shape function. In a practical problem the full line shape function can be computed through a convolution of the individual line shape functions involved. Therefore optical amplification will add power to an incident optical field at frequency $\nu$ at a rate given by:
$P =h \nu \ g(\nu) B_{21} \ \rho (\nu) \ \Delta N$.
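The normalized Lorentzian used above is easy to check numerically: by construction it is unity at line center and one half at $\nu_0 \pm \Gamma/2$. The specific $\nu_0$ and $\Gamma$ below are illustrative assumptions:

```python
def g(nu, nu0, Gamma):
    """Lorentzian line shape normalized so that g(nu0) = 1."""
    return (Gamma / 2) ** 2 / ((nu - nu0) ** 2 + (Gamma / 2) ** 2)

nu0, Gamma = 4.74e14, 1.5e9   # line center (Hz) and FWHM (Hz), illustrative

assert abs(g(nu0, nu0, Gamma) - 1.0) < 1e-12              # unity at line center
assert abs(g(nu0 + Gamma / 2, nu0, Gamma) - 0.5) < 1e-12  # half max at the FWHM edges
assert abs(g(nu0 - Gamma / 2, nu0, Gamma) - 0.5) < 1e-12
```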
## Stimulated emission cross section
The stimulated emission cross section (in square meters) is
$\sigma_{21}(\nu) = A_{21} { \lambda^2 \over 8 \pi n^2} g(\nu)$
where
A21 is the Einstein A coefficient (in radians per second),
λ is the wavelength (in meters),
n is the refractive index of the medium (dimensionless), and
g(ν) is the spectral line shape function (in seconds).
## Optical amplification
Under certain conditions, stimulated emission can provide a physical mechanism for optical amplification. An external source of energy stimulates atoms in the ground state to transition to the excited state, creating what is called a population inversion. When light of the appropriate frequency passes through the inverted medium, the photons stimulate the excited atoms to emit additional photons of the same frequency, phase, and direction, resulting in an amplification of the input intensity.
The population inversion, in units of atoms per cubic meter, is
$\Delta N_{21} = \left( N_2 - {g_2 \over g_1} N_1 \right)$
where g1 and g2 are the degeneracies of energy levels 1 and 2, respectively.
### Small signal gain equation
The intensity (in watts per square meter) of the stimulated emission is governed by the following differential equation:
${ dI \over dz} = \sigma_{21}(\nu) \cdot \Delta N_{21} \cdot I(z)$
as long as the intensity I(z) is small enough so that it does not have a significant effect on the magnitude of the population inversion. Grouping the first two factors together, this equation simplifies as
${ dI \over dz} = \gamma_0(\nu) \cdot I(z)$
where
$\gamma_0(\nu) = \sigma_{21}(\nu) \cdot \Delta N_{21}$
is the small-signal gain coefficient (in units of radians per meter). We can solve the differential equation using separation of variables:
${ dI \over I(z)} = \gamma_0(\nu) \cdot dz$
Integrating, we find:
$\ln \left( {I(z) \over I_{in}} \right) = \gamma_0(\nu) \cdot z$
or
$I(z) = I_{in}e^{\gamma_0(\nu) z}$
where
$I_{in} = I(z=0) \,$ is the optical intensity of the input signal (in watts per square meter).
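A sanity check of the exponential solution: integrating $dI/dz = \gamma_0 I$ with a simple Euler step reproduces $I(z) = I_{in} e^{\gamma_0 z}$ (the gain coefficient and input intensity below are illustrative):

```python
import math

gamma0 = 2.0            # small-signal gain coefficient, 1/m (illustrative)
I_in = 1.0              # input intensity, W/m^2 (illustrative)
z_end, steps = 1.0, 100_000
dz = z_end / steps

I = I_in
for _ in range(steps):
    I += gamma0 * I * dz        # Euler step for dI/dz = gamma0 * I

exact = I_in * math.exp(gamma0 * z_end)
assert abs(I - exact) / exact < 1e-3   # matches I_in * e^(gamma0 * z)
```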
### Saturation intensity
The saturation intensity IS is defined as the input intensity at which the gain of the optical amplifier drops to exactly half of the small-signal gain. We can compute the saturation intensity as
$I_S = {h \nu \over \sigma(\nu) \cdot \tau_S }$
where
h is Planck's constant, and
τS is the saturation time constant, which depends on the spontaneous emission lifetimes of the various transitions between the energy levels related to the amplification.
$\nu$ is the frequency in Hz
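For example (all numbers below are assumed for illustration; they are not values given in the article):

```python
h = 6.626e-34     # Planck's constant, J*s
nu = 4.74e14      # transition frequency, Hz (~633 nm)
sigma = 3e-17     # transition cross section, m^2 (assumed)
tau_S = 1e-7      # saturation time constant, s (assumed)

I_S = h * nu / (sigma * tau_S)   # saturation intensity, W/m^2
assert 1e4 < I_S < 1e6           # ~1e5 W/m^2 for these inputs
```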
### General gain equation
The general form of the gain equation, which applies regardless of the input intensity, derives from the general differential equation for the intensity I as a function of position z in the gain medium:
${ dI \over dz} = { \gamma_0(\nu) \over 1 + \bar{g}(\nu) { I(z) \over I_S } } \cdot I(z)$
where $I_S$ is saturation intensity. To solve, we first rearrange the equation in order to separate the variables, intensity I and position z:
${ dI \over I(z)} \left[ 1 + \bar{g}(\nu) { I(z) \over I_S } \right] = \gamma_0(\nu)\cdot dz$
Integrating both sides, we obtain
$\ln \left( { I(z) \over I_{in} } \right) + \bar{g}(\nu) { I(z) - I_{in} \over I_S} = \gamma_0(\nu) \cdot z$
or
$\ln \left( { I(z) \over I_{in} } \right) + \bar{g}(\nu) { I_{in} \over I_S } \left( { I(z) \over I_{in} } - 1 \right) = \gamma_0(\nu) \cdot z$
The gain G of the amplifier is defined as the optical intensity I at position z divided by the input intensity:
$G = G(z) = { I(z) \over I_{in} }$
Substituting this definition into the prior equation, we find the general gain equation:
$\ln \left( G \right) + \bar{g}(\nu) { I_{in} \over I_S } \left( G - 1 \right) = \gamma_0(\nu) \cdot z$
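Because the general gain equation is implicit in $G$, it has to be solved numerically. A minimal bisection sketch (the left-hand side is increasing in $G$, and the root is bracketed by $1$ and the small-signal gain $e^{\gamma_0 z}$):

```python
import math

def gain(gamma0_z, s, gbar=1.0, iters=200):
    """Solve ln(G) + gbar*s*(G - 1) = gamma0_z for G by bisection.

    gamma0_z : small-signal gain coefficient times length, gamma_0(nu) * z
    s        : input intensity relative to saturation, I_in / I_S
    """
    lo, hi = 1.0, math.exp(gamma0_z)   # f(lo) <= 0 <= f(hi)
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if math.log(mid) + gbar * s * (mid - 1) < gamma0_z:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Weak input recovers the small-signal gain G0 = exp(gamma0 * z) ...
assert abs(gain(1.0, 1e-9) - math.e) < 1e-3
# ... while a strongly saturating input drives the gain toward unity.
assert 1.0 < gain(1.0, 100.0) < 1.02
```

The two assertions correspond to the small-signal and large-signal limits discussed in the next two subsections.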
### Small signal approximation
In the special case where the input signal is small compared to the saturation intensity, in other words,
$I_{in} \ll I_S \,$
then the general gain equation gives the small signal gain as
$\ln(G) = \ln(G_0) = \gamma_0(\nu) \cdot z$
or
$G = G_0 = e^{\gamma_0(\nu) z}$
which is identical to the small signal gain equation (see above).
### Large signal asymptotic behavior
For large input signals, where
$I_{in} \gg I_S \,$
the gain approaches unity
$G \rightarrow 1$
and the general gain equation approaches a linear asymptote:
$I(z) = I_{in} + { \gamma_0(\nu) \cdot z \over \bar{g}(\nu) } I_S$
## References
1. Einstein, A (1916). "Strahlungs-emission und -absorption nach der Quantentheorie". Verhandlungen der Deutschen Physikalischen Gesellschaft 18: 318–323. Bibcode:1916DPhyG..18..318E.
2. Fain, B; Milonni, P (1987). "Classical stimulated emission". JOSA B 4 (1): 78–85.
• Saleh, Bahaa E. A. and Teich, Malvin Carl (1991). Fundamentals of Photonics. New York: John Wiley & Sons. ISBN 0-471-83965-5.
• Alan Corney (1977). Atomic and Laser Spectroscopy. Oxford: Oxford University Press. ISBN 978-0-19-921145-6.
## See also
• Absorption
• Active laser medium
• Laser (includes a history section)
• Laser science
• Rabi cycle
• Spontaneous emission
http://math.stackexchange.com/questions/83459/integer-sum-of-irrationals-is-it-if-and-only-if?answertab=active
# Integer sum of irrationals - is it “if and only if”?
Given an integer $n$ and an irrational $r$, $n>r$, $n-r$ is irrational but $r + (n - r)$, the sum of two positive irrationals, is an integer. Is that the only way that two irrationals can sum to an integer?
What if the question is rephrased using rationals instead of integers? Is the form $r + (a/b - r)$ the only way two irrationals can sum to a rational?
Can $r_{1} + (a/b - r_{2})$ ever be a rational? An integer?
-
If $x$ and $y$ are irrationals such that $x+y$ is an integer (say, $n$), then $y= n-x$. So yes, that's the only way in which you can write $n$ as the sum of irrationals: pick one irrational and the second is just $n$ minus the first. – Srivatsan Nov 18 '11 at 20:04
We don't even know if $e+\pi$ is rational or not... – J. M. Nov 18 '11 at 20:05
What on earth do you mean by "the only way two irrationals can sum to a rational"? Yes, $a+b=c$ if and only if $b$ can be written as $c-a$. That has nothing to do with being rational or irrational. – Henning Makholm Nov 18 '11 at 20:34
## 4 Answers
The question is about linear combinations of real numbers, where some of the numbers are irrational, and can be interpreted as asking whether there are non-trivial linear relations (where trivial ones are things like $\pi + (5-\pi) = 5$, that are general algebraic properties of all numbers).
In general the expectation is that that given $k$ different irrational numbers, there will not be any non-trivial relations between them unless the numbers are constructed from fewer than $k$ "truly different" sources of irrationality, with some redundancy added. Relations can be interpreted as linear or polynomial (algebraic) relations with rational coefficients. There could, hypothetically, be suprises, such as nonzero integers $a,b,c$ (if any do exist) for which $a + b\pi + ce=0$, and there are a few known types of surprising relations, such as $r_1 = (\sum 1/n^2)$ being related to $r_2 = \pi^2$ by $6r_1 = r_2$. Nevertheless, in concrete situations there is usually at least a specific conjecture as to whether or not a relation with rational coefficients can exist between the numbers. If there is no apparent reason why a relationship should exist, it usually does not.
The formal description of these ideas is in terms of linear independence (over the rational numbers) and transcendence degree (also relative to the rational numbers) of a finite set of real numbers. The case where all the real numbers satisfy algebraic equations is covered by algebraic number theory and is well understood, as is the case where one number is transcendental. The case where several essentially different transcendental numbers are involved is the subject of transcendence theory, where there are many difficult conjectures and fewer theorems defining what is expected in most cases. A typical irrational-looking expression like $\pi + e^e$ has "probability 1" of being irrational but no known proof that it is irrational. A set of several irrationals such as $\pi$, $e$, $\zeta(5)$ will usually be as transcendental as possible (maximum transcendence degree) but the proof is out of reach.
-
Examples like $(1 + \sqrt{2})$ + $(- \sqrt{2})$ indeed sum to a rational number, but the relationship between the two quantities being summed need not be so obvious at first glance.
Here is a result with which you are probably familiar. For any real number $x$, $$\cos^2(x) + \sin^2(x) = 1.$$ Note that, for almost all $x$, $\cos^2(x)$ and $\sin^2(x)$ are transcendental, and so we have infinitely-many examples of transcendental numbers summing to an integer.
Certainly, some algebra gives $$\cos^2(x) + (1 - \cos^2(x)) = 1,$$ but this can be done with any triple of numbers. The purpose of my example is merely to show that, on the face of it, the relationship between the irrational numbers being summed need not be as readily apprehended in quite the way that one immediately sees that $\sqrt{2}$ and $-\sqrt{2}$ cancel each other.
-
You are correct. – Charles Nov 18 '11 at 21:26
But then the two numbers you are summing are $\sin^2(x)$ and $1-\sin^2(x)$ ..... – N. S. Nov 19 '11 at 2:58
@N. S. Only because you're "familiar" with the Pythagorean identity. What are the odds that adding up the squares of two series that evaluate to transcendental numbers give a rational, nay, integer result? – J. M. Nov 19 '11 at 3:24
@N.S. Strictly speaking, what you say is true. I think my example addresses the spirit of the question, however, which I took to be "Can two irrationals sum to a rational, even if they don't appear to be related in quite the obvious way that $\sqrt{2}$ and $-\sqrt{2}$ are?". – Austin Mohr Nov 19 '11 at 3:32
@QED I have explained my interpretation more clearly in the answer. – Austin Mohr Nov 19 '11 at 5:29
Yes, it is the only way two positive irrationals can sum to an integer.
Say you have two positive irrationals, $r$ and $s$, and their sum $r+s=n$ is an integer. Then since $r+s=n$, we get $s=n-r$, so $r+s$ is $r+(n-r)$.
And if $n$ is assumed rational, but not an integer, the same argument applies.
Can $r_1+(a/b-r_2)$ be rational? I assume you mean $r_1$ and $r_2$ are irrational and positive, and $a/b$ is rational. Then you could have, for example, $r_1=1+r_2$, so that $r_1$ and $r_2$ are not equal, and $r_1+(a/b-r_2)= a/b+1$, which is rational. But this is still a case of $r+(n-r)$, where $n$ is rational and $r$ is not, namely $n$ is the rational number $a/b+1$.
-
See if this makes sense. We'll call your sum x. So our equation is
$r_1+(\frac ab-r_2)=x$
Let's subtract $\frac ab$ from both sides.
$r_1-r_2=x-\frac ab$
If $x$ and $\frac ab$ are rational (I assume you want a and b to be integers), the right side of this equation is rational. So $r_1-r_2$ must be rational as well. Is this what you're looking for?
-
http://math.stackexchange.com/questions/139408/good-reference-on-sample-autocorrelation
# Good reference on sample autocorrelation?
I'm not a statistician, but I'm writing my thesis on mathematical finance, and I think it would be neat to have a short section about the independence of stock returns. I need to get a better understanding of some assumptions (see below) and would like a good book to cite.
I have a model for stock prices $S$ in which the daily ($t_i - t_{i-1}=1$) log-returns
$$X_n = \ln\left(\frac{S(t_n)}{S(t_{n-1})}\right), \ \ n=1,...,N$$
are normally distributed with mean $\mu-\sigma^2/2$ and variance $\sigma^2$. The autocorrelation function with lag 1 is
$$r = \frac{Cov(X_1,X_2)}{Var(X_1)}$$
which I estimate by
$$\hat{r} = \frac{(n+1)\sum_{i=1}^{n-1} \bigl(X_i - \bar{X} \bigr)\bigl(X_{i+1} - \bar{X} \bigr)}{n \sum_{i=1}^{n}\bigl(X_i - \bar{X} \bigr)^2}$$
where
$$\bar{X} = \frac{1}{n}\sum_{i=1}^{n} X_i$$
Now I understand that under some assumptions it holds that
$$\sqrt{n}\,\hat{r} \xrightarrow{d} N(0,1) \quad \text{as } n \rightarrow \infty$$
I would be very glad if someone could point me towards a good book which I can cite in my thesis and read about these assumptions (I guess it has something to do with the central limit theorem).
Thank you in advance!
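For what it's worth, the estimator above and the $\sqrt{n}\,\hat{r}$ statistic are easy to try out on simulated i.i.d. normal log-returns (a quick sketch; the parameter values are arbitrary illustrative choices, not from any dataset):

```python
import math
import random

random.seed(0)
n = 5000
mu, sigma = 0.0005, 0.01  # arbitrary illustrative values

# i.i.d. normal log-returns with mean mu - sigma^2/2 and variance sigma^2
x = [random.gauss(mu - sigma**2 / 2, sigma) for _ in range(n)]

xbar = sum(x) / n
num = (n + 1) * sum((x[i] - xbar) * (x[i + 1] - xbar) for i in range(n - 1))
den = n * sum((xi - xbar)**2 for xi in x)
r_hat = num / den          # lag-1 sample autocorrelation (estimator above)
z = math.sqrt(n) * r_hat   # approximately N(0,1) under independence

print(r_hat, z)
```

Under independence, $\hat{r}$ should be close to zero and $\sqrt{n}\,\hat{r}$ should look like a draw from a standard normal, which is what the simulation shows.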
Crossposting at:
Statistics: http://stats.stackexchange.com/questions/27465/good-reference-on-sample-autocorrelation
Quantative Finance: http://quant.stackexchange.com/questions/3390/good-reference-on-sample-autocorrelation
-
It's poor form to cross post (as you have already been told in both other forums). I don't really care but some people do. As far as your question, in my time series class, we have been told that stock returns (daily change) is almost always white noise, i.e., a trivial model, i.e., they are iid normal with mean 0 and constant variance. And, in the project I did earlier in the semester, this is exactly what I found to be true in the data I looked at. It's so simple, my professor is considering not allowing stock data. Unless I misunderstand, I think what you're doing is way too complex. – Graphth May 1 '12 at 15:45
http://math.stackexchange.com/questions/243872/need-help-simplifying-an-equation
# Need help simplifying an equation.
I'm trying to speed up the following code:
```sum = 0
for (k = 1 ... N) {
f = Fibonacci(k);
for (a = 1 ... 24)
for (b = 1 ... 24)
for (c = 1 ... 24) {
sum = sum + m(a, b, c) // m(a, b, c) <= 24 for all input
* ((f - c) / 24) * ((f - b) / 24) * ((f - a) / 24);
}
}
```
So I have $$\sum_k\sum_a\sum_b\sum_cM_{a, b, c}\Bigg\lfloor\frac{F_k - a}{24}\Bigg\rfloor\Bigg\lfloor\frac{F_k - b}{24}\Bigg\rfloor\Bigg\lfloor\frac{F_k - c}{24}\Bigg\rfloor$$
So then I push the summation over $k$ in and apply $\lfloor x / y\rfloor = \frac{x - (x \text{ mod } y)}{y}$ to get
$$\sum_a\sum_b\sum_cM_{a, b, c}\left(\sum_k\Bigg\lfloor\frac{F_k - a}{24}\Bigg\rfloor\Bigg\lfloor\frac{F_k - b}{24}\Bigg\rfloor\Bigg\lfloor\frac{F_k - c}{24}\Bigg\rfloor\right)$$
$$\frac{1}{24^3}\sum_a\sum_b\sum_cM_{a, b, c}\sum_k\bigl(F_k - a - ((F_k - a) \bmod 24)\bigr)\times\bigl(F_k - b - ((F_k - b) \bmod 24)\bigr) \times \bigl(F_k - c - ((F_k - c) \bmod 24)\bigr)$$
Actually that's about as far as I get. Typing this up helped me find a mistake in a simplification I thought I could make earlier. Can't seem to simplify this further. If it helps at all everything is also mod 1e9.
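One observation that does lead to a speedup: write $F_k = 24q + r$ with $r = F_k \bmod 24$. Then each factor is $\lfloor (F_k - i)/24 \rfloor = q + e_i(r)$, where $e_i(r) = 0$ if $r \ge i$ and $-1$ otherwise. Expanding the product, the inner triple sum becomes a cubic polynomial in $q$ whose four coefficients depend only on $r \in \{0,\dots,23\}$, so they can be precomputed once per residue, after which the loop over $k$ costs $O(1)$ per term. A sketch with a placeholder weight table (my own choice of $m$, since the real table isn't given; note Python's `//` floors, matching the $\lfloor\cdot\rfloor$ in the sums, while C-style integer division truncates toward zero for negative operands):

```python
def m(a, b, c):
    return (a * b + c) % 24 + 1  # placeholder: any table with m(a,b,c) <= 24

# For each residue r = F_k mod 24, precompute the coefficients of
# sum_{a,b,c} m(a,b,c) * (q+e_a)(q+e_b)(q+e_c) as a polynomial in q,
# where e_i = 0 if r >= i else -1 (i ranges over 1..24).
coeffs = []
for r in range(24):
    e = [0 if r >= i else -1 for i in range(1, 25)]
    A = B = C = D = 0
    for a in range(24):
        for b in range(24):
            for c in range(24):
                w = m(a + 1, b + 1, c + 1)
                A += w
                B += w * (e[a] + e[b] + e[c])
                C += w * (e[a]*e[b] + e[a]*e[c] + e[b]*e[c])
                D += w * e[a] * e[b] * e[c]
    coeffs.append((A, B, C, D))

def fast_sum(N):
    total, f1, f2 = 0, 1, 1
    for _ in range(N):
        q, r = divmod(f1, 24)
        A, B, C, D = coeffs[r]
        total += ((A * q + B) * q + C) * q + D  # evaluate the cubic in q
        f1, f2 = f2, f1 + f2
    return total

def slow_sum(N):
    # direct transcription of the original quadruple loop
    total, f1, f2 = 0, 1, 1
    for _ in range(N):
        for a in range(1, 25):
            for b in range(1, 25):
                for c in range(1, 25):
                    total += (m(a, b, c) * ((f1 - a) // 24)
                              * ((f1 - b) // 24) * ((f1 - c) // 24))
        f1, f2 = f2, f1 + f2
    return total
```

After the one-time $24^4$ precomputation, `fast_sum` replaces the per-$k$ triple loop with a single polynomial evaluation, and both functions agree on every $N$.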
-
http://mathoverflow.net/questions/87073/norm-preserving-matrix-fix
Norm preserving matrix fix
Hello,
I'll state the problem first and then give a little motivation.
Let an invertible ("regular") matrix $M \in \mathbb{R}^{n\times n}$ and a norm $||.||$ on $\mathbb{R}^{n}$ be given. Define $$U =\{ L\in \mathbb{R}^{n\times n}: \forall x\in \mathbb{R}^{n} \; ||LMx||=||x|| \}$$ (the set of all "norm fixing" matrices for $M$). The point is to find the best "norm fixing" matrix. I decided that the best one, call it $\bar{L}$, should satisfy: $$\inf_{L\in U} \; \; \sup_{||x||=1} \; ||LMx-Mx|| \; \; = \; \; \sup_{||x||=1} \; ||\bar{L}Mx-Mx||$$
The problem is to find the matrix $\bar{L}$ explicitly. I'm not sure whether $\bar{L}$ is unique, but it exists. I'm most interested in the p-norm with p equal to 1 or 2.
Motivation: I was simulating a physical phenomenon on a computer, and the final equation boiled down to $x_{n+1} = Ax_n$. Often $x$ represents some quantity which is conserved, so I came up with this idea of how to fix an existing numerical scheme into a conservative one (with the least damage possible).
-
Edited your Latex, mostly on MO one needs double backslashes in \\{ and \\}, also put in spacing. Does the phrase "regular matrix" refer to a restriction of some kind? If your $M$ is invertible, and you are using $p=2,$ then all $L = O M^{-1}$ with $O \in O_n.$ – Will Jagy Jan 30 2012 at 22:33
Meanwhile, the symmetry group of the "unit sphere" with $p=1$ or $p=\infty$ is finite. – Will Jagy Jan 30 2012 at 22:36
Will: by "regular matrix" I meant an invertible matrix (sorry, I still have gaps in English terminology), and thanks for the reply. Survit: why would I need an ordering of $U$? $\inf_{L\in U}f(L)=-\sup_{L\in U}(-f(L))$, where $f$ is some function defined on $U$ – Tomas Skrivan Jan 31 2012 at 11:25
sorry, i had previously misread your notation! – S. Sra Jan 31 2012 at 13:13
You should step back and look at the literature on conservative numerical methods for PDE's. By asking your question in this narrow fashion, you're likely to miss out on broader answers. – Brian Borchers Mar 10 2012 at 14:21
http://programmingpraxis.com/2010/02/09/numerical-integration/?like=1&_wpnonce=76a2d9cd23
# Programming Praxis
A collection of etudes, updated weekly, for the education and enjoyment of the savvy programmer
## Numerical Integration
### February 9, 2010
Computing the definite integral of a function is fundamental to many computations. In many cases, the integral of a function can be determined symbolically and the definite integral computed directly. Another approach, which we shall examine in today’s exercise, is to compute the definite integral numerically, an operation that numerical analysts call quadrature. We will consider a function f for which we are to find the definite integral over the range a to b, with a < b.
Recall from your study of calculus that the definite integral is the area under a curve. If we slice the curve into hundreds or thousands or millions of vertical strips, we can approximate the area under the curve by summing the areas of the individual strips, assuming that each is a rectangle with height equal to the value of the curve at the midpoint of the strip; this is, obviously, the rectangular method of quadrature. The trapezoidal method is similar, but uses the average of the value of the curve at the two ends of the strip; the strip then forms a trapezoid with parallel sides, a flat bottom on the horizontal axis, and a slanted top that follows the curve. A final method, called Simpson’s method, fits a parabola to the top of the strip by taking the height of the curve as its value at the left side of the strip plus its value at the right side of the strip plus four times its value at the midpoint of the strip. Accuracy improves as we move from the rectangular method (one point) to the trapezoidal method (two points) to Simpson’s method (three points), because the approximation more closely fits the curve, and also as the number of slices increases. The trapezoid rule is illustrated above right.
Many curves are characterized by large portions that are nearly flat, where the approximations work well, with a few small portions that change rapidly, where the approximations fail. If we increase the number of slices, there is little effect on the flat parts of the curve, so much of the work is wasted. Adaptive quadrature works by measuring the approximations at various slices and narrowing the slices where the curve is changing rapidly, allowing us to control the approximation precisely. Adaptive quadrature is a recursive process; if approximations with n and n/2 slices are close enough, recursion stops, otherwise the range is split in two and adaptive quadrature is applied to each of the two halves separately.
Your task is to write functions that perform numerical integration by the rectangular, trapezoidal and Simpson’s methods, as well as a function that performs adaptive quadrature. Use your functions to calculate the logarithmic integral $\mathrm{li}(x) = \int_2^x \frac{dt}{\log{t}}$ for $x = 10^{21}$, where the logarithmic integral is an approximation of the number of primes less than its argument. When you are finished, you are welcome to read or run a suggested solution, or to post your own solution or discuss the exercise in the comments below.
Posted by programmingpraxis
Filed in Exercises
### 5 Responses to “Numerical Integration”
1. February 9, 2010 at 10:28 AM
[...] Praxis – Numerical Integration By Remco Niemeijer In today’s Programming Praxis exercise we have to do some numerical integration. Let’s get [...]
2. Remco Niemeijer said
February 9, 2010 at 10:28 AM
My Haskell solution (see http://bonsaicode.wordpress.com/2010/02/09/programming-praxis-numerical-integration/ for a version with comments):
```
int combine f a b n = sum $ map g [0..n - 1] where
    w    = (b - a) / n
    lo i = a + w * i
    g i  = w * combine (f $ lo i) (f $ lo i + w/2) (f $ lo i + w)

intRect = int (\_ m _ -> m)
intTrap = int (\l _ h -> (l + h) / 2)
intSimp = int (\l m h -> (l + 4 * m + h) / 6)

intAdapt m f a b epsilon = if abs (g10 - g5) < e then g10 else
    intAdapt m f a mid (Just e) + intAdapt m f mid b (Just e)
    where g5  = m f a b 5
          g10 = m f a b 10
          mid = (a + b) / 2
          e   = maybe 1e-7 id epsilon

approxPi n = round $ intAdapt intSimp (recip . log) 2 n Nothing
```
3. Mike said
April 15, 2010 at 8:40 PM
Python code.
```
def rectangular(f, lo, hi, nsteps=1):
    area = 0
    spread = float(hi-lo)
    for n in range(nsteps):
        x = lo + spread*(2*n+1)/(2*nsteps)
        area += f(x)
    return area*spread/nsteps

def trapezoidal(f, lo, hi, nsteps=1):
    area = -(f(lo) + f(hi))
    spread = float(hi-lo)
    for n in range(nsteps+1):
        x = lo + n*spread/nsteps
        area += 2*f(x)
    return area*float(hi-lo)/(2*nsteps)

def simpson(f, lo, hi, nsteps=1):
    area = -(f(lo) + f(hi))
    spread = float(hi-lo)
    for n in range(2*nsteps+1):
        x = lo + n*spread/(2*nsteps)
        area += 4*f(x) if n&1 else 2*f(x)
    return area*spread/(6*nsteps)

def adaptive(f, lo, hi, integrator=simpson, eps=1e-3):
    lo,hi = float(lo),float(hi)
    area = 0
    todo = [(lo, hi, integrator(f, lo, hi))]
    while todo:
        a,b,slice_area = todo.pop()
        mid = (a+b)/2
        lohalf = integrator(f, a, mid)
        hihalf = integrator(f, mid, b)
        if abs(slice_area - (lohalf + hihalf)) > eps:
            todo.append((a,mid,lohalf))
            todo.append((mid,b,hihalf))
        else:
            area += lohalf + hihalf
    return area

# tests
def cube(x):
    return x*x*x

def invlog(x):
    from math import log
    return 1/log(x)

print rectangular(cube, 0, 1, 10000)
print trapezoidal(cube, 0, 1, 10000)
print simpson(cube, 0, 1, 10000)
print adaptive(invlog, 2, 1e21, eps=1e-7)
```
output:
0.24999999875
0.2500000025
0.25
2.11272694866e+19
4. Graham said
April 13, 2011 at 12:40 AM
A bit late to the party, but my solution is available here.
5. Graham said
September 8, 2011 at 4:41 PM
Github account moved, so gist disappeared. Here’s my submission (in Python 2.x):
```
from __future__ import division
from math import log

def dx(a, b, n):
    return (b - a) / n

def rectangle(f, a, b, n=1):
    x = lambda k: a + (k + 0.5) * dx(a, b, n)
    return dx(a, b, n) * sum(f(x(k)) for k in xrange(n))

def trapezoid(f, a, b, n=1):
    x = lambda k: a + k * dx(a, b, n)
    return dx(a, b, n) * (f(a) + f(b) + sum(0.5 * (f(x(k)) + f(x(k + 1)))
                                            for k in xrange(1, n)))

def simpson(f, a, b, n=1):
    x = lambda k: a + k * dx(a, b, 2 * n)
    return dx(a, b, 2 * n) / 3 * (f(a) + f(b)
        + 2 * sum(f(x(k)) for k in xrange(2, 2 * n, 2))
        + 4 * sum(f(x(k)) for k in xrange(1, 2 * n, 2)))

def adaptive_quad(f, a, b, quad=simpson, eps=1e-7):
    int_with5 = quad(f, a, b, 5)
    int_with10 = quad(f, a, b, 10)
    m = (a + b) / 2
    if abs(int_with5 - int_with10) < eps:
        return int_with10
    else:
        return (adaptive_quad(f, a, m, quad, eps) +
                adaptive_quad(f, m, b, quad, eps))

def approx_pi(b):
    return adaptive_quad(lambda x: 1 / log(x), 2, b)

if __name__ == "__main__":
    cube = lambda x: pow(x, 3)
    print rectangle(cube, 0, 1, 10000)
    print trapezoid(cube, 0, 1, 10000)
    print simpson(cube, 0, 1, 10000)
    print approx_pi(1e21)
```
http://mathhelpforum.com/advanced-algebra/203828-prove-ab-invertible-its-inverse.html
# Thread:
1. ## Prove that AB is invertible and its inverse.
Hi,
I am trying to prove this and found the proof, but I have no idea how we are able to multiply by $(B^{-1}A^{-1})$ in the first step when it is not in the original statement. I understand the computation, but not where this factor comes from. Also, on the second line, where does the $AB$ come from?
Question: Prove: $(AB)^{-1} = B^{-1}A^{-1}$.
Solution: Using the associativity of matrix multiplication,
$(AB)(B^{-1}A^{-1}) = A(BB^{-1})A^{-1} = AIA^{-1} = AA^{-1} = I$
and
$(B^{-1}A^{-1})(AB) = B^{-1}(A^{-1}A)B = B^{-1}IB = B^{-1}B = I.$
Thus $AB$ is invertible and $B^{-1}A^{-1}$ is its inverse.
2. ## Re: Prove that AB is invertible and its inverse.
first of all, we have to know a priori that A and B are invertible, so that $A^{-1}$ and $B^{-1}$ exist.
now:
$(AB)B^{-1} = A(BB^{-1}) = AI = A$
therefore:
$(AB)(B^{-1}A^{-1}) = [(AB)B^{-1}]A^{-1}$ (by associativity)
and we just showed that the term inside the square brackets is A (above), so
$= AA^{-1} = I$
in the same way:
$A^{-1}(AB) = (A^{-1}A)B = IB = B$, so:
$(B^{-1}A^{-1})(AB) = B^{-1}[A^{-1}(AB)]$
$= B^{-1}B = I$
the basic idea is this:
multiplying AB by $B^{-1}A^{-1}$ on the right is the same as multiplying AB by $B^{-1}$ on the right first, and by $A^{-1}$ second.
multiplying AB by $B^{-1}A^{-1}$ on the LEFT is the same as multiplying AB by $A^{-1}$ on the left first, and by $B^{-1}$ second.
the way to remember this is:
if A is "putting on your socks" and B is "putting on your shoes", then to un-do it, you do the opposite actions "in reverse order" (it doesn't make much sense to take off the socks before the shoes, does it now?).
it's very important to know that if A is invertible, and B is invertible, so is AB. this means we can multiply invertible matrices, and get another invertible matrix. it also means that if we know how to compute the inverses of A and B, we get the inverse of AB "for free".
let's see this in action:
suppose that:
$A = \begin{bmatrix}2&2\\2&3 \end{bmatrix}, B = \begin{bmatrix}5&0\\7&1 \end{bmatrix}$.
then $A^{-1} = \begin{bmatrix}\frac{3}{2}&-1\\-1&1 \end{bmatrix}, B^{-1} = \begin{bmatrix}\frac{1}{5}&0\\-\frac{7}{5}&1 \end{bmatrix}$.
now $AB = \begin{bmatrix}24&2\\31&3 \end{bmatrix}$.
multiplying together the two inverses for A and B in reverse order, we have:
$B^{-1}A^{-1} = \begin{bmatrix}\frac{1}{5}&0\\-\frac{7}{5}&1 \end{bmatrix} \begin{bmatrix}\frac{3}{2}&-1\\-1&1 \end{bmatrix} = \begin{bmatrix}\frac{3}{10}&-\frac{1}{5}\\-\frac{31}{10}&\frac{12}{5} \end{bmatrix}$
which is easily seen to be the inverse of AB.
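As a sanity check, the worked example can be verified mechanically with exact rational arithmetic (a small sketch using Python's `fractions`; the 2x2 inverse helper is the usual adjugate formula):

```python
from fractions import Fraction

def matmul(X, Y):
    # plain 2x2 matrix product
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def inv2(X):
    # 2x2 inverse via the adjugate formula: divide the adjugate by det
    a, b = X[0]
    c, d = X[1]
    det = a * d - b * c
    return [[ d / det, -b / det],
            [-c / det,  a / det]]

F = Fraction
A = [[F(2), F(2)], [F(2), F(3)]]
B = [[F(5), F(0)], [F(7), F(1)]]

lhs = inv2(matmul(A, B))        # (AB)^-1
rhs = matmul(inv2(B), inv2(A))  # B^-1 A^-1, inverses in reverse order
assert lhs == rhs
print(lhs)
```

The exact fractions reproduce the matrix computed above, confirming that the product of the inverses in reverse order is indeed the inverse of the product.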
http://math.stackexchange.com/questions/129982/solving-a-pde-in-2-variables-without-boundary-conditions/129985
# solving a PDE in 2 variables without boundary conditions
how could i solve the PDE (without boundary or other initial conditions)
$1= y\partial _{y}f(x,y) -x \partial _{x}f(x,y)$
-
What do you mean by "solve"? Find all functions that solve it? Without conditions, the solution is not unique. – joriki Apr 10 '12 at 10:32
## 2 Answers
One solution would be:
$f(x,y) = xy - \log(x)$
But so would:
$f(x,y) = n\cdot xy - \log(x)$
for any constant $n$. In general, you need a boundary condition to solve a first-order PDE.
-
ok, thanks. Is it possible to get ALL the solutions $f(x,y)$ in a closed expression? :) – Jose Garcia Apr 10 '12 at 10:33
The equation is linear, so taking the difference between two solutions $f_1$ and $f_2$ you have that $y\partial_y(f_1 - f_2) - x\partial_x(f_1-f_2) = 0$, so every solution is a specific solution plus some homogeneous solution. The specific solution is given above by nbubis as $f_0 = \log(|x|)$. Any homogeneous solution has to be constant on the integral curves of $x\partial_x - y\partial_y$, and hence can be written as $g = g(xy,\mathop{sgn}(x))$. That is, as a function depending only on the product $xy$ and on the sign of the variable $x$. The second dependence is because outside $xy=0$... – Willie Wong♦ Apr 10 '12 at 11:29
... the level curves $\{xy = c\}$ (which are the integral curves of $x\partial_x - y\partial_y$) have two connected components, and the function is allowed to be different on the components. The two components can be distinguished by the sign of the $x$ variable there. – Willie Wong♦ Apr 10 '12 at 11:30
This belongs to the class of PDEs treated at http://eqworld.ipmnet.ru/en/solutions/fpde/fpde1211.pdf.
The general solution is $f(x,y)=C(xy)+\int\dfrac{dy}{y}=C(xy)+\ln y$ or $f(x,y)=C(xy)-\int\dfrac{dx}{x}=C(xy)-\ln x$ .
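Both forms are easy to check numerically with central differences (a quick sketch; the particular choice $C(u) = \sin u$ is mine, just to have a concrete arbitrary function):

```python
import math

def check(f, x, y, h=1e-6):
    # finite-difference approximation of y*f_y - x*f_x at (x, y)
    fx = (f(x + h, y) - f(x - h, y)) / (2 * h)
    fy = (f(x, y + h) - f(x, y - h)) / (2 * h)
    return y * fy - x * fx

f1 = lambda x, y: x * y - math.log(x)            # nbubis's particular solution
f2 = lambda x, y: math.sin(x * y) + math.log(y)  # C(xy) + ln y with C = sin

print(check(f1, 1.7, 2.3))  # both values are ~ 1
print(check(f2, 1.7, 2.3))
```

Either choice of $C$ gives a function satisfying $y\partial_y f - x\partial_x f = 1$ at every test point with $x, y > 0$, as the PDE requires.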
-
http://mathoverflow.net/questions/52328/consistent-r-e-extensions-of-non-r-e-theories
## Consistent r.e. extensions of non r.e. theories.
Let $\mathcal{L}$ be some first-order language, and $T$ be a consistent set of formulas of $\mathcal{L}$ which is not recursively enumerable.
Under what conditions will there be $T'\supset T$ such that $T'$ is consistent and recursively enumerable?
-
Are there any specific kinds of conditions you seek? One condition that gives more than you're after: if $T$ is co-r.e. then it has an extension which is not only consistent and r.e., but also complete (hence decidable too). jstor.org/pss/2271337 – Ed Dean Jan 17 2011 at 16:58
## 2 Answers
I'm not sure what kind of conditions you seek, but let me simply point out that one can build a decidable theory $T'$ having continuum many inequivalent subtheories $T$. Since only countably many of these subtheories can be decidable (or even c.e., co-c.e. or even arithmetic, etc.), we would thereby produce continuum many theories with no nice computability features, but which have a decidable extension.
For an example, consider the language having infinitely many constant symbols $c_n$, and let $T'$ be the theory asserting $c_n\neq c_m$ for distinct $n$ and $m$. This is a completely trivial, decidable theory, but every partition of the constant symbols gives rise to a distinct subtheory $T$, which asserts merely that constants in different classes of the partition are distinct. Since there are continuum many partitions, we have continuum many inequivalent subtheories of $T'$, most of which are therefore not c.e.
Similar examples can be made whenever a theory $T'$ asserts the truth of infinitely many logically independent statements, since there would be continuum many subtheories asserting any particular subset of those statements. Most of these subtheories will not be c.e., since there are only countably many c.e. theories, even when $T'$ is decidable.
-
There is an accompanying question that ought to be easier: For any r.e. set there is a theory of the same degree (Feferman). Can we ensure that if $A$ is an r.e. consistent set of sentences, then there is a theory $T'$ extending $A$ and of the same degree? – Andres Caicedo Jan 17 2011 at 23:44
I guess you are requiring theories to be closed under consequence, and in this case the answer seems to be negative, since we can find a c.e. non-computable enumeration of axioms that axiomatizes a complete decidable theory, just by taking complex combinations $\varphi\wedge\cdots\wedge\varphi$ of the axioms so as to code a c.e. non-computable set. – Joel David Hamkins Jan 17 2011 at 23:53
Oh, of course! Too bad. – Andres Caicedo Jan 18 2011 at 0:24
Let $A = T \cup R$ where $R$ is any set of refutable statements from $T$ (i.e., $T \vdash \lnot \varphi$ for all $\varphi$ in $R$). A simple necessary condition for such a $T^{\prime}$ to exist is that $A$ is not recursively (computably) enumerable. If it were, then given a r.e. (c.e.) $T^{\prime} \supseteq T$, we could recursively enumerate all statements of $T$ by listing only the ones in $T^{\prime}$ that also appear in $A$ (i.e., $T = T^{\prime} \cap A$). The reason for this equality is that $T^{\prime}$ is an extension of $T$ so it must contain all the statements in $T$ but then it cannot include any in $R$ by its consistency.
Now let $I$ be the set of all statements that are independent of $T$ (i.e. all $\varphi$ such that $T \nvdash \varphi$ and $T \nvdash \lnot \varphi$). In the case that $T$ is deductively closed meaning $T \vdash \varphi$ implies that $\varphi \in T$, a corollary of the above result is that $I$ cannot be the complement of a recursively enumerable set. This is the instance of letting $R$ be the set of all statements refutable from $T$.
-
http://en.m.wikibooks.org/wiki/Linear_Algebra/Topic:_Markov_Chains
# Linear Algebra/Topic: Markov Chains
Here is a simple game: a player bets on coin tosses, a dollar each time, and the game ends either when the player has no money left or is up to five dollars. If the player starts with three dollars, what is the chance that the game takes at least five flips? Twenty-five flips?
At any point, this player has either \$0, or \$1, ..., or \$5. We say that the player is in the state $s_0$, $s_1$, ..., or $s_5$. A game consists of moving from state to state. For instance, a player now in state $s_3$ has on the next flip a $0.5$ chance of moving to state $s_2$ and a $0.5$ chance of moving to $s_4$. The boundary states are a bit different; once in state $s_0$ or state $s_5$, the player never leaves.
Let $p_{i}(n)$ be the probability that the player is in state $s_i$ after $n$ flips. Then, for instance, we have that the probability of being in state $s_0$ after flip $n+1$ is $p_{0}(n+1)=p_{0}(n)+0.5\cdot p_{1}(n)$. This matrix equation summarizes.
$\begin{pmatrix} 1 &0.5 &0 &0 &0 &0 \\ 0 &0 &0.5 &0 &0 &0 \\ 0 &0.5 &0 &0.5 &0 &0 \\ 0 &0 &0.5 &0 &0.5 &0 \\ 0 &0 &0 &0.5 &0 &0 \\ 0 &0 &0 &0 &0.5 &1 \end{pmatrix} \begin{pmatrix} p_{0}(n) \\ p_{1}(n) \\ p_{2}(n) \\ p_{3}(n) \\ p_{4}(n) \\ p_{5}(n) \end{pmatrix} =\begin{pmatrix} p_{0}(n+1) \\ p_{1}(n+1) \\ p_{2}(n+1) \\ p_{3}(n+1) \\ p_{4}(n+1) \\ p_{5}(n+1) \end{pmatrix}$
With the initial condition that the player starts with three dollars, calculation gives this.
| | | | | | | |
|-----------------------------------------------------------------------------|-----------------------------------------------------------------------------|-----------------------------------------------------------------------------|-----------------------------------------------------------------------------|-----------------------------------------------------------------------------|-----------------------------------------------------------------------------|-----------------------------------------------------------------------------|
| $n=0$ | $n=1$ | $n=2$ | $n=3$ | $n=4$ | $\cdots$ | $n=24$ |
| $\begin{pmatrix} 0 \\ 0 \\ 0 \\ 1 \\ 0 \\ 0 \end{pmatrix}$ | $\begin{pmatrix} 0 \\ 0 \\ 0.5 \\ 0 \\ 0.5 \\ 0 \end{pmatrix}$ | $\begin{pmatrix} 0 \\ 0.25 \\ 0 \\ 0.5 \\ 0 \\ 0.25 \end{pmatrix}$ | $\begin{pmatrix} 0.125 \\ 0 \\ 0.375 \\ 0 \\ 0.25 \\ 0.25 \end{pmatrix}$ | $\begin{pmatrix} 0.125 \\ 0.1875 \\ 0 \\ 0.3125 \\ 0 \\ 0.375 \end{pmatrix}$ | $\cdots$ | $\begin{pmatrix} 0.39600 \\ 0.00276 \\ 0 \\ 0.00447 \\ 0 \\ 0.59676 \end{pmatrix}$ |
As this computational exploration suggests, the game is not likely to go on for long, with the player quickly ending in either state $s_0$ or state $s_5$. For instance, after the fourth flip there is a probability of $0.50$ that the game is already over. (Because a player who enters either of the boundary states never leaves, they are said to be absorbing.)
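The recurrence is just repeated matrix-vector multiplication, so the table is easy to check numerically. Here is a minimal Python sketch using the matrix displayed above (the helper name `step` is ours):

```python
# Transition matrix of the coin-flip game: column j holds the
# distribution after one flip for a player currently in state s_j.
A = [
    [1, 0.5, 0,   0,   0,   0],
    [0, 0,   0.5, 0,   0,   0],
    [0, 0.5, 0,   0.5, 0,   0],
    [0, 0,   0.5, 0,   0.5, 0],
    [0, 0,   0,   0.5, 0,   0],
    [0, 0,   0,   0,   0.5, 1],
]

def step(A, v):
    """One flip of the game: the matrix-vector product A*v."""
    return [sum(A[i][j] * v[j] for j in range(len(v))) for i in range(len(A))]

v = [0, 0, 0, 1, 0, 0]   # the player starts with three dollars
for _ in range(4):
    v = step(A, v)
# v is now the n = 4 column of the table:
# [0.125, 0.1875, 0, 0.3125, 0, 0.375]
```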
This game is an example of a Markov chain, named for A.A. Markov, who worked in the first half of the 1900's. Each vector of $p$'s is a probability vector and the matrix is a transition matrix. The notable feature of a Markov chain model is that it is historyless in that with a fixed transition matrix, the next state depends only on the current state, not on any prior states. Thus a player, say, who arrives at $s_2$ by starting in state $s_3$, then going to state $s_2$, then to $s_1$, and then to $s_2$ has at this point exactly the same chance of moving next to state $s_3$ as does a player whose history was to start in $s_3$, then go to $s_4$, and to $s_3$, and then to $s_2$.
Here is a Markov chain from sociology. A study (Macdonald & Ridge 1988, p. 202) divided occupations in the United Kingdom into upper level (executives and professionals), middle level (supervisors and skilled manual workers), and lower level (unskilled). To determine the mobility across these levels in a generation, about two thousand men were asked, "At which level are you, and at which level was your father when you were fourteen years old?" This equation summarizes the results.
$\begin{pmatrix} 0.60 & 0.29 & 0.16 \\ 0.26 & 0.37 & 0.27 \\ 0.14 & 0.34 & 0.57 \end{pmatrix} \begin{pmatrix} p_{U}(n) \\ p_{M}(n) \\ p_{L}(n) \end{pmatrix} =\begin{pmatrix} p_{U}(n+1) \\ p_{M}(n+1) \\ p_{L}(n+1) \end{pmatrix}$
For instance, a child of a lower class worker has a $.27$ probability of growing up to be middle class. Notice that the Markov model assumption about history seems reasonable— we expect that while a parent's occupation has a direct influence on the occupation of the child, the grandparent's occupation has no such direct influence. With the initial distribution of the respondents' fathers given below, this table lists the distributions for the next five generations.
| | | | | | |
|-----------------------------------------------------------------------------|-----------------------------------------------------------------------------|-----------------------------------------------------------------------------|-----------------------------------------------------------------------------|-----------------------------------------------------------------------------|-----------------------------------------------------------------------------|
| $n=0$ | $n=1$ | $n=2$ | $n=3$ | $n=4$ | $n=5$ |
| $\begin{pmatrix} 0.12 \\ 0.32 \\ 0.56 \end{pmatrix}$ | $\begin{pmatrix} 0.23 \\ 0.34 \\ 0.42 \end{pmatrix}$ | $\begin{pmatrix} 0.29 \\ 0.34 \\ 0.37 \end{pmatrix}$ | $\begin{pmatrix} 0.31 \\ 0.34 \\ 0.35 \end{pmatrix}$ | $\begin{pmatrix} 0.32 \\ 0.33 \\ 0.34 \end{pmatrix}$ | $\begin{pmatrix} 0.33 \\ 0.33 \\ 0.34 \end{pmatrix}$ |
One more example, from a very important subject, indeed. The World Series of American baseball is played between the team winning the American League and the team winning the National League (we follow [Brunner] but see also [Woodside]). The series is won by the first team to win four games. That means that a series is in one of twenty-four states: 0-0 (no games won yet by either team), 1-0 (one game won for the American League team and no games for the National League team), etc. If we assume that there is a probability $p$ that the American League team wins each game then we have the following transition matrix.
$\begin{pmatrix} 0 &0 &0 &0 &\ldots \\ p &0 &0 &0 &\ldots \\ 1-p &0 &0 &0 &\ldots \\ 0 &p &0 &0 &\ldots \\ 0 &1-p &p &0 &\ldots \\ 0 &0 &1-p &0 &\ldots \\ \vdots &\vdots &\vdots &\vdots \end{pmatrix} \begin{pmatrix} p_{\text{0-0}}(n) \\ p_{\text{1-0}}(n) \\ p_{\text{0-1}}(n) \\ p_{\text{2-0}}(n) \\ p_{\text{1-1}}(n) \\ p_{\text{0-2}}(n) \\ \vdots \end{pmatrix} = \begin{pmatrix} p_{\text{0-0}}(n+1) \\ p_{\text{1-0}}(n+1) \\ p_{\text{0-1}}(n+1) \\ p_{\text{2-0}}(n+1) \\ p_{\text{1-1}}(n+1) \\ p_{\text{0-2}}(n+1) \\ \vdots \end{pmatrix}$
An especially interesting special case is $p=0.50$; this table lists the resulting components of the $n=0$ through $n=7$ vectors. (The code to generate this table in the computer algebra system Octave follows the exercises.)
| | | | | | | | | |
|-----|-----------------------------------------------------------------------------|-----------------------------------------------------------------------------|-----------------------------------------------------------------------------|-----------------------------------------------------------------------------|-----------------------------------------------------------------------------|-----------------------------------------------------------------------------|-----------------------------------------------------------------------------|-----------------------------------------------------------------------------|
| | $n=0$ | $n=1$ | $n=2$ | $n=3$ | $n=4$ | $n=5$ | $n=6$ | $n=7$ |
| 0-0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| 1-0 | 0 | 0.5 | 0 | 0 | 0 | 0 | 0 | 0 |
| 0-1 | 0 | 0.5 | 0 | 0 | 0 | 0 | 0 | 0 |
| 2-0 | 0 | 0 | 0.25 | 0 | 0 | 0 | 0 | 0 |
| 1-1 | 0 | 0 | 0.5 | 0 | 0 | 0 | 0 | 0 |
| 0-2 | 0 | 0 | 0.25 | 0 | 0 | 0 | 0 | 0 |
| 3-0 | 0 | 0 | 0 | 0.125 | 0 | 0 | 0 | 0 |
| 2-1 | 0 | 0 | 0 | 0.375 | 0 | 0 | 0 | 0 |
| 1-2 | 0 | 0 | 0 | 0.375 | 0 | 0 | 0 | 0 |
| 0-3 | 0 | 0 | 0 | 0.125 | 0 | 0 | 0 | 0 |
| 4-0 | 0 | 0 | 0 | 0 | 0.0625 | 0.0625 | 0.0625 | 0.0625 |
| 3-1 | 0 | 0 | 0 | 0 | 0.25 | 0 | 0 | 0 |
| 2-2 | 0 | 0 | 0 | 0 | 0.375 | 0 | 0 | 0 |
| 1-3 | 0 | 0 | 0 | 0 | 0.25 | 0 | 0 | 0 |
| 0-4 | 0 | 0 | 0 | 0 | 0.0625 | 0.0625 | 0.0625 | 0.0625 |
| 4-1 | 0 | 0 | 0 | 0 | 0 | 0.125 | 0.125 | 0.125 |
| 3-2 | 0 | 0 | 0 | 0 | 0 | 0.3125 | 0 | 0 |
| 2-3 | 0 | 0 | 0 | 0 | 0 | 0.3125 | 0 | 0 |
| 1-4 | 0 | 0 | 0 | 0 | 0 | 0.125 | 0.125 | 0.125 |
| 4-2 | 0 | 0 | 0 | 0 | 0 | 0 | 0.15625 | 0.15625 |
| 3-3 | 0 | 0 | 0 | 0 | 0 | 0 | 0.3125 | 0 |
| 2-4 | 0 | 0 | 0 | 0 | 0 | 0 | 0.15625 | 0.15625 |
| 4-3 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.15625 |
| 3-4 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.15625 |
Note that evenly-matched teams are likely to have a long series— there is a probability of $0.625$ that the series goes at least six games.
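For a fixed $p$ the chance that the series lasts exactly $n$ games also has a closed form: the winning team takes game $n$ and exactly three of the first $n-1$ games. A short Python check of the $0.625$ figure (the function name is ours):

```python
from math import comb

def series_length_prob(p, n):
    """Probability that a best-of-seven series ends in exactly n games
    (n = 4, ..., 7) when one team wins each game with probability p."""
    q = 1 - p
    # that team wins game n and exactly 3 of the first n-1 games,
    # plus the symmetric term for the other team
    return comb(n - 1, 3) * (p**4 * q**(n - 4) + q**4 * p**(n - 4))

# the four possible lengths exhaust the possibilities
assert abs(sum(series_length_prob(0.5, n) for n in range(4, 8)) - 1) < 1e-12

at_least_six = series_length_prob(0.5, 6) + series_length_prob(0.5, 7)
# at_least_six == 0.625, as claimed above
```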
One reason for the inclusion of this Topic is that Markov chains are one of the most widely-used applications of matrix operations. Another reason is that it provides an example of the use of matrices where we do not consider the significance of the maps represented by the matrices. For more on Markov chains, there are many sources such as (Kemeny & Snell 1960) and (Iosifescu 1980).
## Exercises
Use a computer for these problems. You can, for instance, adapt the Octave script given below.
Problem 1
These questions refer to the coin-flipping game.
1. Check the computations in the table at the end of the first paragraph.
2. Consider the second row of the vector table. Note that this row has alternating $0$'s. Must $p_{1}(j)$ be $0$ when $j$ is odd? Prove that it must be, or produce a counterexample.
3. Perform a computational experiment to estimate the chance that the player ends at five dollars, starting with one dollar, two dollars, and four dollars.
Problem 2
We consider throws of a die, and say the system is in state $s_i$ if the largest number yet appearing on the die was $i$.
1. Give the transition matrix.
2. Start the system in state $s_1$, and run it for five throws. What is the vector at the end?
(Feller 1968, p. 424)
Problem 3
There has been much interest in whether industries in the United States are moving from the Northeast and North Central regions to the South and West, motivated by the warmer climate, by lower wages, and by less unionization. Here is the transition matrix for large firms in Electric and Electronic Equipment (Kelton 1983, p. 43)
| | | | | | |
|----|-------|-------|-------|-------|-------|
| | NE | NC | S | W | Z |
| NE | 0.787 | 0 | 0 | 0.111 | 0.102 |
| NC | 0 | 0.966 | 0.034 | 0 | 0 |
| S | 0 | 0.063 | 0.937 | 0 | 0 |
| W | 0 | 0 | 0.074 | 0.612 | 0.314 |
| Z | 0.021 | 0.009 | 0.005 | 0.010 | 0.954 |
For example, a firm in the Northeast region will be in the West region next year with probability $0.111$. (The Z entry is a "birth-death" state. For instance, with probability $0.102$ a large Electric and Electronic Equipment firm from the Northeast will move out of this system next year: go out of business, move abroad, or move to another category of firm. There is a $0.021$ probability that a firm in the National Census of Manufacturers will move into Electronics, or be created, or move in from abroad, into the Northeast. Finally, with probability $0.954$ a firm out of the categories will stay out, according to this research.)
1. Does the Markov model assumption of lack of history seem justified?
2. Assume that the initial distribution is even, except that the value at $Z$ is $0.9$. Compute the vectors for $n=1$ through $n=4$.
3. Suppose that the initial distribution is this.
| | | | | |
|--------|--------|--------|--------|--------|
| NE | NC | S | W | Z |
| 0.0000 | 0.6522 | 0.3478 | 0.0000 | 0.0000 |
Calculate the distributions for $n=1$ through $n=4$.
4. Find the distribution for $n=50$ and $n=51$. Has the system settled down to an equilibrium?
Problem 4
This model has been suggested for some kinds of learning (Wickens 1982, p. 41). The learner starts in an undecided state $s_U$. Eventually the learner has to decide to do either response $A$ (that is, end in state $s_A$) or response $B$ (ending in $s_B$). However, the learner doesn't jump right from being undecided to being sure $A$ is the correct thing to do (or $B$). Instead, the learner spends some time in a "tentative-$A$" state, or a "tentative-$B$" state, trying the response out (denoted here $t_A$ and $t_B$). Imagine that once the learner has decided, it is final, so once $s_A$ or $s_B$ is entered it is never left. For the other state changes, imagine a transition is made with probability $p$ in either direction.
1. Construct the transition matrix.
2. Take $p=0.25$ and take the initial vector to be $1$ at $s_U$. Run this for five steps. What is the chance of ending up at $s_A$?
3. Do the same for $p=0.20$.
4. Graph $p$ versus the chance of ending at $s_A$. Is there a threshold value for $p$, above which the learner is almost sure not to take longer than five steps?
Problem 5
A certain town is in a certain country (this is a hypothetical problem). Each year ten percent of the town dwellers move to other parts of the country. Each year one percent of the people from elsewhere move to the town. Assume that there are two states $s_T$, living in town, and $s_C$, living elsewhere.
1. Construct the transition matrix.
2. Starting with an initial distribution $s_T=0.3$ and $s_C=0.7$, get the results for the first ten years.
3. Do the same for $s_T=0.2$.
4. Are the two outcomes alike or different?
Problem 6
For the World Series application, use a computer to generate the seven vectors for $p=0.55$ and $p=0.6$.
1. What is the chance of the National League team winning it all, even though they have only a probability of $0.45$ or $0.40$ of winning any one game?
2. Graph the probability $p$ against the chance that the American League team wins it all. Is there a threshold value— a $p$ above which the better team is essentially ensured of winning?
(Some sample code is included below.)
Problem 7
A Markov matrix has each entry positive and each column sums to $1$.
1. Check that the three transition matrices shown in this Topic meet these two conditions. Must any transition matrix do so?
2. Observe that if $A\vec{v}_0=\vec{v}_1$ and $A\vec{v}_1=\vec{v}_2$ then $A^2$ is a transition matrix from $\vec{v}_0$ to $\vec{v}_2$. Show that a power of a Markov matrix is also a Markov matrix.
3. Generalize the prior item by proving that the product of two appropriately-sized Markov matrices is a Markov matrix.
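The column-sum condition is easy to probe numerically before proving it. A quick Python sketch using the coin-game matrix from the start of this Topic (note that this matrix has zero entries, so it satisfies the column-sum condition but not strict positivity):

```python
# Coin-game transition matrix from the start of this Topic.
A = [
    [1, 0.5, 0,   0,   0,   0],
    [0, 0,   0.5, 0,   0,   0],
    [0, 0.5, 0,   0.5, 0,   0],
    [0, 0,   0.5, 0,   0.5, 0],
    [0, 0,   0,   0.5, 0,   0],
    [0, 0,   0,   0,   0.5, 1],
]

def matmul(A, B):
    """Product of two square matrices given as lists of rows."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def column_sums(M):
    return [sum(row[j] for row in M) for j in range(len(M))]

A2 = matmul(A, A)
# every column of A sums to 1, and so does every column of A^2
assert all(abs(s - 1) < 1e-12 for s in column_sums(A))
assert all(abs(s - 1) < 1e-12 for s in column_sums(A2))
```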
Solutions
Computer Code
This script markov.m for the computer algebra system Octave was used to generate the table of World Series outcomes. (The sharp character # marks the rest of a line as a comment.)
```# Octave script file to compute chance of World Series outcomes.
function w = markov(p,v)
q = 1-p;
A=[0,0,0,0,0,0, 0,0,0,0,0,0, 0,0,0,0,0,0, 0,0,0,0,0,0; # 0-0
p,0,0,0,0,0, 0,0,0,0,0,0, 0,0,0,0,0,0, 0,0,0,0,0,0; # 1-0
q,0,0,0,0,0, 0,0,0,0,0,0, 0,0,0,0,0,0, 0,0,0,0,0,0; # 0-1_
0,p,0,0,0,0, 0,0,0,0,0,0, 0,0,0,0,0,0, 0,0,0,0,0,0; # 2-0
0,q,p,0,0,0, 0,0,0,0,0,0, 0,0,0,0,0,0, 0,0,0,0,0,0; # 1-1
0,0,q,0,0,0, 0,0,0,0,0,0, 0,0,0,0,0,0, 0,0,0,0,0,0; # 0-2__
0,0,0,p,0,0, 0,0,0,0,0,0, 0,0,0,0,0,0, 0,0,0,0,0,0; # 3-0
0,0,0,q,p,0, 0,0,0,0,0,0, 0,0,0,0,0,0, 0,0,0,0,0,0; # 2-1
0,0,0,0,q,p, 0,0,0,0,0,0, 0,0,0,0,0,0, 0,0,0,0,0,0; # 1-2_
0,0,0,0,0,q, 0,0,0,0,0,0, 0,0,0,0,0,0, 0,0,0,0,0,0; # 0-3
0,0,0,0,0,0, p,0,0,0,1,0, 0,0,0,0,0,0, 0,0,0,0,0,0; # 4-0
0,0,0,0,0,0, q,p,0,0,0,0, 0,0,0,0,0,0, 0,0,0,0,0,0; # 3-1__
0,0,0,0,0,0, 0,q,p,0,0,0, 0,0,0,0,0,0, 0,0,0,0,0,0; # 2-2
0,0,0,0,0,0, 0,0,q,p,0,0, 0,0,0,0,0,0, 0,0,0,0,0,0; # 1-3
0,0,0,0,0,0, 0,0,0,q,0,0, 0,0,1,0,0,0, 0,0,0,0,0,0; # 0-4_
0,0,0,0,0,0, 0,0,0,0,0,p, 0,0,0,1,0,0, 0,0,0,0,0,0; # 4-1
0,0,0,0,0,0, 0,0,0,0,0,q, p,0,0,0,0,0, 0,0,0,0,0,0; # 3-2
0,0,0,0,0,0, 0,0,0,0,0,0, q,p,0,0,0,0, 0,0,0,0,0,0; # 2-3__
0,0,0,0,0,0, 0,0,0,0,0,0, 0,q,0,0,0,0, 1,0,0,0,0,0; # 1-4
0,0,0,0,0,0, 0,0,0,0,0,0, 0,0,0,0,p,0, 0,1,0,0,0,0; # 4-2
0,0,0,0,0,0, 0,0,0,0,0,0, 0,0,0,0,q,p, 0,0,0,0,0,0; # 3-3_
0,0,0,0,0,0, 0,0,0,0,0,0, 0,0,0,0,0,q, 0,0,0,1,0,0; # 2-4
0,0,0,0,0,0, 0,0,0,0,0,0, 0,0,0,0,0,0, 0,0,p,0,1,0; # 4-3
0,0,0,0,0,0, 0,0,0,0,0,0, 0,0,0,0,0,0, 0,0,q,0,0,1]; # 3-4
w = A * v;
endfunction
```
```> v0=[1;0;0;0;0;0;0;0;0;0;0;0;0;0;0;0;0;0;0;0;0;0;0;0]
> p=.5
> v1=markov(p,v0)
> v2=markov(p,v1)
...
```
Translating to another computer algebra system should be easy— all have commands similar to these.
## References
• Feller, William (1968), An Introduction to Probability Theory and Its Applications, 1 (3rd ed.), Wiley .
• Iosifescu, Marius (1980), Finite Markov Processes and Their Applications, UMI Research Press .
• Kelton, Christina M.L. (1983), Trends on the Relocation of U.S. Manufacturing, Wiley .
• Kemeny, John G.; Snell, J. Laurie (1960), Finite Markov Chains, D. Van Nostrand.
• Macdonald, Kenneth; Ridge, John (1988), "Social Mobility", British Social Trends Since 1900 (Macmillan).
• Wickens, Thomas D. (1982), Models for Behavior, W.H. Freeman .
http://planetmath.org/QuotientRing
# quotient ring
Definition. Let $R$ be a ring and let $I$ be a two-sided ideal of $R$. To define the quotient ring $R/I$, let us first define an equivalence relation in $R$. We say that the elements $a,b\in R$ are equivalent, written as $a\sim b$, if and only if $a-b\in I$. If $a$ is an element of $R$, we denote the corresponding equivalence class by $[a]$. Thus $[a]=[b]$ if and only if $a-b\in I$. The quotient ring of $R$ modulo $I$ is the set $R/I=\{[a]\,|\,a\in R\}$, with a ring structure defined as follows. If $[a],[b]$ are equivalence classes in $R/I$, then
• $[a]+[b]=[a+b]$,
• $[a]\cdot[b]=[a\cdot b]$.
Here $a$ and $b$ are some elements in $R$ that represent $[a]$ and $[b]$. By construction, every element in $R/I$ has such a representative in $R$. Moreover, since $I$ is closed under addition and multiplication, one can verify that the ring structure in $R/I$ is well defined.
A common notation is $a+I=[a]$ which is consistent with the notion of classes $[a]=aH\in G/H$ for a group $G$ and a normal subgroup $H$.
# Properties
1. If $R$ is commutative, then $R/I$ is commutative.
2. The mapping $R\to R/I$, $a\mapsto[a]$ is a homomorphism, and is called the natural homomorphism.
# Examples
1. For a ring $R$, we have $R/R=\{[0]\}$ and $R/\{0\}=R$.
2. Let $R=\mathbb{Z}$, and let $I=2\mathbb{Z}$ be the set of even numbers. Then $R/I$ contains only two classes: one for the even numbers and one for the odd numbers. This quotient ring is in fact a field; it is the only field with two elements (up to isomorphism) and is also denoted by $\mathbb{F}_{2}$.
3. One way to construct the complex numbers is to consider the field $\mathbb{R}[T]/(T^{2}+1)$. This field can be viewed as the set of all polynomials of degree at most $1$, with the usual addition and with multiplication given by $(a+bT)(c+dT)=ac-bd+(ad+bc)T$, which is exactly complex multiplication.
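The multiplication rule in the last example is simple enough to try out in code. A minimal Python sketch (the representation and the name `mult` are ours), encoding the class $[a+bT]$ as the pair $(a,b)$:

```python
def mult(x, y):
    """Multiply [a + bT] and [c + dT] in R[T]/(T^2 + 1),
    reducing T^2 to -1."""
    a, b = x
    c, d = y
    return (a*c - b*d, a*d + b*c)

# [T]^2 = [-1], so (0, 1) plays the role of the imaginary unit i
assert mult((0, 1), (0, 1)) == (-1, 0)
# and the rule agrees with complex multiplication:
assert mult((1, 2), (3, 4)) == (-5, 10)   # (1+2i)(3+4i) = -5+10i
```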
## Info
Owner: mathwizard
Added: 2001-10-23 - 19:32
Author(s): mathwizard
## Versions
(v18) by mathwizard 2013-03-22
http://mathhelpforum.com/calculus/152740-rate-change-problem.html
# Thread:
1. ## rate of change problem
A rocket is fired into the air with an initial velocity of 98 m/s. The height ( h ) of the rocket after t seconds is given by the expression $h = 98t - 4.9t^2$.
a. What is the average rate of change for the first 2 seconds?
b. At what point does the rocket reach its maximum height? Show a graphical and algebraic solution.
c. Over what intervals is the rocket’s height increasing and decreasing?
2. a) The average rate of change is given by:
$\frac{h(2)-h(0)}{2-0}$
b) Look for the maximum point of the parabola. This occurs at $t = -b/(2a)$ if the parabola is $f(t) = at^2 + bt + c$.
Algebraically, differentiate and set the derivative equal to zero, so that the tangent line to the parabola is horizontal. This occurs only at the vertex.
c) Compute $h'(t)$. For increasing intervals, solve $h'(t) > 0$. For decreasing intervals, solve $h'(t) < 0$.
Good luck!!
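All three parts need only the quadratic itself, so the answers are easy to check numerically. A quick Python sketch (the variable names are ours):

```python
def h(t):
    """Height of the rocket after t seconds."""
    return 98*t - 4.9*t**2

# a) average rate of change over the first 2 seconds
avg = (h(2) - h(0)) / 2          # 88.2 m/s

# b) vertex of the parabola h(t) = at^2 + bt with a = -4.9, b = 98,
#    at t = -b/(2a)
t_max = -98 / (2 * -4.9)         # 10 s, where the height is 490 m

# c) h'(t) = 98 - 9.8t, so h is increasing for t < 10 and
#    decreasing for t > 10
```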
http://mathoverflow.net/questions/81235/wanted-an-example-of-a-natural-non-k-ahler-metric-on-a-kahler-manifold/81256
## Wanted: an example of a natural non-Kähler metric on a Kähler manifold
Let $X$ be a Kahler manifold. Associated to any hermitian metric $h$ on $X$ is a smooth real $(1,1)$-form $\omega = -\text{Im } h$, called the Kahler form of $h$. One of several equivalent conditions for the metric $h$ to be a Kahler metric is that $\omega$ is symplectic, or that $\text{d} \omega = 0$.
Morally speaking, this implies that Kahler metrics are rare. In a sense, they are contained in a proper subspace of the cone of hermitian metrics on $X$.
Question: Does anyone know an example of a manifold $X$ and a natural metric $h$ on $X$ which is not Kahler?
For extra credit: Can we find such an example where $X$ is compact?
I'll elaborate a bit on what I mean by "natural" and then provide some motivation for the questions. The following paragraphs are not meant to be mathematically rigorous, but rather heuristic, so please be gentle when you see any inaccuracies.
"Naturality": It is easy to give explicit examples of non-Kahler metrics on a Kahler manifold. Just take any Kahler metric $h$ and multiply it by a positive non-constant function $f$: the Kahler form of the new metric will then satisfy $\text{d} (f \omega) = \text{d} f \wedge \omega + f \text{d} \omega = \text{d} f \wedge \omega$. As $\omega$ is symplectic the wedge product $\text d f \wedge \omega$ can only be zero if $\text d f$ is zero, so the new metric is not Kahler.
This feels like cheating to me. It's like starting a book on linear algebra by defining vector spaces axiomatically and then only giving the trivial space as an example. The example does not advance our understanding in any significant way.
I would like to see an example where the metric $h$ arises in a geometric way or is somehow an obvious candidate for a metric on $X$. For example, consider a Hopf surface $X$, which arises as a quotient of $\mathbb C^2 \setminus \{0\}$ by a group $G$. The naive way to give an example of a metric on $X$ is to find a metric on $\mathbb C^2 \setminus \{0\}$ which is invariant under the action of $G$, and it is perfectly possible to give an explicit example of such a metric by some calculations (see [1] for an example). If only the Hopf surface were Kahler I would accept this as a "real" example.
Motivation: Given a hermitian metric $h$ there are several equivalent definitions of it being Kahler. One can say that its Kahler form is closed, that one can approximate the euclidean metric to the second degree in local coordinates, or that the Chern and Levi-Civita connections of $h$ are the same. This last condition is the one I like the most, because with good will one can interpret it as saying that the complex and Riemannian geometries defined by the metric are the same.
This is all well and good, and I feel I understand the different definitions and the links between them. However, given an explicit metric, I have absolutely no intuition for whether it is Kahler or not. I can't look at a metric and just go "Aha!"; I have to "fly blind" and calculate.
For example, take the Fubini-Study metric on $\mathbb P^n$. It can be obtained by considering a scalar product on $\mathbb C^{n+1}$ and saying that the scalar product of lines in that space is the "angle" between the lines (-ish). This is a very pretty and geometric way of obtaining a metric. Now, the only way I know to show that the metric obtained in this beautiful way is Kahler is by long and violent calculations. I can't give you an a priori plausibility argument for it being Kahler. The same is true for any explicit example of a Kahler metric on any manifold.
I see this as failure on my part, and a sign that I have not really understood Kahler metrics. I think that an explicit example of a natural non-Kahler metric would help me understand complex geometry better.
[1] http://mathoverflow.net/questions/57535/examples-of-non-kahler-surfaces-with-explicit-non-kahler-metric
-
You likely know about these and have rejected them for being not "explicit" enough: what about the (basically unique) Hermitian-Einstein metrics on the blowups $\mathbb{CP}^2\#\overline{\mathbb{CP}^2}$ (Page, classical) and $\mathbb{CP}^2\#2\overline{\mathbb{CP}^2}$ (Chen-LeBrun-Weber, arxiv.org/abs/0705.0710)? These are non-Kaehler and very natural – macbeth Nov 18 2011 at 14:36
You can also consider the other standard way to define the Fubini-Study metric in the usual charts on $\mathbb{CP}^n$, in which case by definition it is locally $\partial\overline{\partial}$-exact therefore globally closed (no violent calculations here). You then need a simple Cauchy-Schwarz argument to show it is positive definite. – YangMills Nov 18 2011 at 17:29
## 4 Answers
In his paper Invariant Kahler metrics and projective embeddings of the flag manifold, Bull. Austral. Math. Soc. 49 (1994), K. Yang considers the flag manifold $$F_{1,2,3}(\mathbb{C}^3):=SU(3)/S(U(1)^3)$$ and determines the space of invariant Hermitian and Kahler metrics on it.
In particular, he shows that a Killing metric is not Kahler.
On the other hand, by applying Kodaira embedding theorem, he proves that $F_{1,2,3}(\mathbb{C}^3)$ is projective algebraic and provides an explicit projective embedding of it.
The computations are made quite explicitly in terms of the Maurer-Cartan form of $SU(3)$.
So this could be one of the examples you are looking for.
-
The (real) six-dimensional complex projective space $\mathbb{CP}^3$ is Kähler relative to the Fubini-Study metric; however it has another natural (at least in my opinion) metric which is nearly Kähler but non-Kähler. Moreover, just like the Fubini-Study metric, it is Einstein. It has the property that its metric cone has $G_2$ holonomy. Equivalently, $\mathbb{CP}^3$ with the nearly Kähler metric admits real Killing spinors. A good place to read about this is this paper by Moroianu, Nagy and Semmelmann: arXiv:math.DG/0611223 and references therein.
-
2
It certainly is a natural metric. Doesn't it arise by considering the twistor space of $S^4$? However, we should point out that the almost Hermitian "package" $(g, J, \omega)$ corresponding to a nearly Kaehler manifold has an almost complex structure $J$ which is NOT integrable. Perhaps this will satisfy Gunnar. But I suspect he wants an (integrable) complex manifold with a Hermitian metric $\omega$ which is not closed. – Spiro Karigiannis Nov 18 2011 at 21:05
Thanks Claudio, Francesco and José for your interesting answers.
This is an "answer in absence" of Demailly; he doesn't use this site, but I thought his remark was nice enough to share. What follows is only his sketch, there are details to fill in that I haven't yet had the time to take care of.
Let $X$ be a smooth projective variety, embedded in $\mathbb P^N$. Take $k$ big enough so that the vector bundle $T_X \otimes \mathcal O(k)$ is generated by its global sections. The Fubini-Study metric on $\mathbb P^N$ now gives a hermitian metric on $X$ by restriction, and on $\mathcal O(k)$ by taking powers of the determinant metric. These induce an $L^2$ metric on the global sections of $T_X \otimes \mathcal O(k)$.
If we tensor by $\mathcal O(-k)$, then we have a surjective bundle map
$$H^0(X, T_X \otimes \mathcal O(k)) \otimes \mathcal O(-k) \to T_X \to 0.$$
The $L^2$ metric and the metric induced by Fubini-Study on $\mathcal O(-k)$ now gives a metric on the tensor product on the left. This induces a quotient metric $h$ on $T_X$. Despite its algebraic origins, this metric $h$ should (almost) never be a Kahler metric.
Moreover, by applying approximation theorems of Tian, Demailly and others, one should be able to prove that these non-Kahler metrics are dense in the cone of hermitian metrics on $X$ -- i.e. starting from any metric on $X$ and using that to define the $L^2$ metric, it should be possible to fabricate a series of non-Kahler metrics as above which converges to the given metric. The process should in fact generalize and yield similar metrics on any holomorphic vector bundle over a projective manifold.
This should imply that any heuristic of the form "my metric is geometric/algebraic, thus Kahler" is doomed to failure.
-
In Left invariant Riemannian metrics on complex Lie groups, M. Goto and K. Uesu prove that if a complex analytic group $G$ admits a left-invariant Kahlerian metric, then $G$ is Abelian. I think this provides lots of natural examples on non-Abelian complex groups, but they are not compact.
-
http://www.all-science-fair-projects.com/science_fair_projects_encyclopedia/Electrical_conductor
# Electrical conduction
Electrical conduction is the movement of a material's charged particles to form an electric current in response to an electric field. The underlying mechanism for this movement depends on the material.
Conduction is well described by Ohm's law, which states that the current is proportional to the applied electric field. The ease with which a current density (current per area) $j$ appears in a material is measured by the conductivity $\sigma$, defined as

$j = \sigma E$

or by its reciprocal, the resistivity $\rho$:

$j = E / \rho$

In anisotropic materials, $\sigma$ and $\rho$ are tensors.
## Solids (including insulating solids)
In crystalline solids, atoms interact with their neighbors, and the energy levels of the electrons in isolated atoms turn into bands. Whether a material conducts or not is determined by its band structure. Electrons, being fermions, follow the Pauli exclusion principle, meaning that two electrons cannot occupy the same state. Thus electrons in a solid fill up the energy bands up to a certain level, called the Fermi energy. Bands which are completely full of electrons cannot conduct electricity, because there is no state of nearby energy to which the electrons can jump. Materials in which all bands are full (i.e. the Fermi energy is between two bands) are insulators.
### Metals
Metals are good conductors because they have unfilled space in the valence energy band. In the absence of an electric field, there exist electrons travelling in all directions and many different velocities up to the Fermi velocity (the velocity of electrons at the Fermi energy). When an electric field is applied, a slight imbalance develops and mobile electrons flow. Electrons in this band can be accelerated by the field because there are plenty of nearby unfilled states in the band.
Resistance comes about in a metal because of scattering of the electrons from defects in the lattice or by phonons. A crude theory of conduction in simple metals is the Drude model, in which scattering is characterized by a relaxation time τ. The conductivity is then given by the formula
$\sigma = \frac{ne^2 \tau}{m}$
where n is the density of conduction electrons, e is the electron charge, and m is the electron mass. A better model is the so-called semiclassical theory, in which the effect of the periodic potential of the lattice on the electrons gives them an effective mass.
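As a sanity check on the Drude formula, one can plug in rough numbers for copper. The carrier density and relaxation time below are typical textbook order-of-magnitude values, assumed here for illustration, not taken from this article:

```python
# Drude conductivity sigma = n e^2 tau / m, evaluated for copper.
# n and tau are typical textbook values (assumptions, not from the text).
e = 1.602e-19    # electron charge, C
m = 9.109e-31    # electron mass, kg
n = 8.5e28       # conduction-electron density of copper, m^-3
tau = 2.5e-14    # relaxation time, s

sigma = n * e**2 * tau / m          # conductivity, S/m
rho = 1.0 / sigma                   # resistivity, ohm*m
print(f"sigma = {sigma:.2e} S/m, rho = {rho:.2e} ohm*m")
```

The result, roughly $6 \times 10^7$ S/m, is close to the measured room-temperature conductivity of copper, which is why the Drude model works as a first approximation for simple metals.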
### Semiconductors
A solid with filled bands is an insulator, but at finite temperature, electrons can be thermally excited from the valence band to the next highest, the conduction band. The fraction of electrons excited in this way depends on the temperature and the band gap, the energy difference between the two bands. Exciting these electrons into the conduction band leaves behind positively charged holes in the valence band, which can also conduct electricity. See semiconductor for more details.
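To see how strongly the band gap controls this thermal excitation, here is a small sketch of the Boltzmann-like suppression factor $e^{-E_g/2kT}$ at room temperature. The band-gap values are typical textbook numbers, assumed here for illustration:

```python
import math

k_B = 8.617e-5        # Boltzmann constant, eV/K
T = 300.0             # room temperature, K

# Typical band gaps in eV (textbook values, assumed for illustration).
gaps = {"germanium": 0.66, "silicon": 1.12, "diamond": 5.5}

# Boltzmann-like factor exp(-Eg / (2 k T)) governing the fraction of
# thermally excited carriers in an intrinsic semiconductor.
frac = {name: math.exp(-gap / (2 * k_B * T)) for name, gap in gaps.items()}
for name, f in frac.items():
    print(f"{name:9s} {f:.1e}")
```

The factor spans tens of orders of magnitude between germanium and diamond, which is why a wide-gap material behaves as an insulator at room temperature.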
In semiconductors, impurities greatly affect the concentration and type of charge carriers. Donor (n-type) impurities have extra valence electrons with energies very close to the conduction band which can be easily thermally excited to the conduction band. Acceptor (p-type) impurities capture electrons from the valence band, allowing the easy formation of holes. If an insulator is doped with enough impurities, a Mott transition can occur, and the insulator turns into a conductor.
### Superconductors
In metals and certain other materials, a transition occurs at low temperature to the superconducting state. By an interaction mediated by some other part of the system (in metals, phonons), the electrons pair up into Cooper pairs. The bosonic Cooper pairs form a superfluid which has zero resistance.
## Electrolytes
Electric currents in electrolytes are flows of electrically charged atoms (ions). For example, if an electric field is placed across a solution of Na+ and Cl–, the sodium ions will move steadily towards the negative electrode (cathode), while the chlorine ions will move towards the positive electrode (anode). If the conditions are right, redox reactions will take place at the electrode surfaces, releasing electrons from the chlorine and allowing electrons to be absorbed by the sodium.
Water-ice and certain solid electrolytes called proton conductors contain positive hydrogen ions which are free to move. In these materials, currents of electricity are composed of moving protons.
In certain electrolyte mixtures, populations of brightly-colored ions form the moving electric charges. The slow migration of these ions during an electric current is one example of a situation where a current is directly visible to human eyes.
## Gases and plasmas
In neutral gases, electrical conductivity is very low. They act as a dielectric or insulator, up until the electric field reaches a breakdown value, freeing the electrons from the atoms in an avalanche process thus forming a plasma. This plasma provides mobile electrons and positive ions, acting as a conductor which supports electric currents and forms a spark, arc or lightning. In ordinary air below the breakdown field, the dominant source of electrical conduction is via mobile ions produced by radioactive gases and cosmic rays.
Plasma is the state of matter where some of the electrons in a gas are stripped or "ionized" from their molecules or atoms. A plasma can be formed by high temperature, or by application of an electric field as noted above. Electrical conduction in a plasma is due to the motion of the electrons and the negatively- or positively-charged ions.
## Vacuum
Since a vacuum normally contains no charged particles, vacuums normally behave as good insulators. However, any metal electrode surfaces present in a vacuum can make a vacuum into a conductor by providing a cloud of free electrons through the process of thermionic emission. Externally heated electrodes can generate an electron cloud, or electrodes themselves can produce an electron cloud via spontaneous heating, for example, during a vacuum arc. Vacuum tubes and sprytrons are some of the electronic switching and amplifying devices based on vacuum conductivity.
http://math.stackexchange.com/questions/184709/solving-a-literal-equation-containing-fractions
# Solving a literal equation containing fractions.
I know this might seem very simple, but I can't seem to isolate `x`.
$$\frac{1}{x} = \frac{1}{a} + \frac{1}{b}$$
Please show me the steps to solving it.
-
## 3 Answers
You should combine $\frac1a$ and $\frac1b$ into a single fraction using a common denominator as usual:
$$\begin{eqnarray} \frac1x& = &\frac1a + \frac1b \\ &=&\frac{b}{ab} + \frac{a}{ab} \\ &=& \frac{b+a}{ab} \end{eqnarray}$$
So we get: $$x = {ab\over{b+a}}.$$
Okay?
-
and x~=0. Correct ! – PooyaM Aug 20 '12 at 17:31
How exactly did you flip ${1\over{x}} = {b + a\over{ab}}$ to $x = {ab\over{b+a}}$ ? – Dan the Man Aug 20 '12 at 17:34
@Dan: Suppose that $\frac{u}v=\frac{x}y$. Multiply through by $vy$ to get $uy=xv$, then divide through by $ux$ to get $\frac{y}x=\frac{v}u$. Alternatively, multiply both sides of the original equation by $\frac{y}x$ to get $\frac{uy}{vx}=1$, then multiply both sides of that equation by $\frac{v}u$ to get $\frac{y}x=\frac{v}u$. Whenever two non-zero fractions are equal, their reciprocals (obtained by turning them upside down) are also equal. – Brian M. Scott Aug 20 '12 at 17:41
Awesome. Thank you. – Dan the Man Aug 20 '12 at 17:43
$\frac{1}{x} = \frac{b}{ab} + \frac{a}{ab}$
$\frac{1}{x} = \frac{a + b}{ab}$
$x = \frac{ab}{a + b}$
Note that $\frac{1}{x} = \frac{1}{a} + \frac{1}{b}$ is solvable if and only if $\frac{1}{a} + \frac{1}{b} \neq 0$; this means $a \neq -b$, and hence $a + b \neq 0$.
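A quick way to check the result $x = \frac{ab}{a+b}$ is with exact rational arithmetic; the helper name below is just for illustration:

```python
from fractions import Fraction

def harmonic_x(a, b):
    """Solve 1/x = 1/a + 1/b exactly (requires a + b != 0)."""
    return Fraction(a * b, a + b)

x = harmonic_x(3, 6)
print(x)  # -> 2
assert 1 / x == Fraction(1, 3) + Fraction(1, 6)
```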
-
$\frac{1}{x} = \frac{a+b}{ab}$, with $x \neq 0$

$x = \frac{ab}{a+b}$, with $x \neq 0$
-
http://math.stackexchange.com/questions/tagged/determinant
# Tagged Questions
Question about determinants, computation or theory. If $E$ is a vector space of dimension $d$, then we can compute the determinant of a $d$-uple $(v_1,\ldots,v_d)$ with respect to a basis.
- Prove if we have a square unitary Matrix $Q$, then $\det(Q) = e^{i\theta}$
- Linear Algebra determinant and rank relation
- Computing Resultant
- Column entries of a matrix sum to zero, so what are the properties?
- $n$-linear alternating form with $\dim{V}<n$ $\overset{?}{\text{is}}$ the $0$-form
- When does a matrix $A$ with ones on and above the diagonal have $\det(A)=1$?
- On integral of a function over a simplex
- Determinants: A Special Condition
- How to show that $\det(A+I)\ne 0$
- Odditiy: An Analysis of Skew-Symmetric $n\times n$ Matrices
- Evaluation of a specific determinant.
- Determinant of a matrix with variables in it
- Multiplication of determinants
- How to calculate the determinant of a matrix using Laplace?
- Simplest way to calculate a determinant [duplicate]
- What is the limit $\lim\limits_{(x,y)\to(1,1),\ (x,y)\in S}(1-x^py^q)(1-x^ry^s)\sum_{p/q\le m/n\le r/s}x^my^n$?
- Skew symmetric matrix decomposes
- Linear algebra determinants
- How to calculate the determinant of a matrix with …
- prove that determinant is a quadratic form
- Determinant problem
- Product of two matrices equals zero
- Characteristic and minimal polynomial of a special matrix
- Characteristic value or eigenvalues and determinant
- Questions about matrices and determinants - constant variable multiplication
- Calculate the determinant when the sum of odd rows $=$ the sum of even rows
- Generalizing formula for calculating determinant of specific matrix
- Computing determinant of this matrix
- Solving linear equations with Vandermonde
- Prove/disprove: if $\det(A+X) = \det(B + X)$ for all $X$, then $A=B$
- Calculating the determinant of this matrix
- Determinant of matrix?
- Divide and Conquer matrices to calculate determinant.
- a problem on solving a determinant equation [duplicate]
- Simple/Concise proof of Muir's Identity
- Different form of determinant, does it make mine wrong?
- Determinant is correct but wrong when I try and check it
- Problem related to a complex matrix
- Definition of minimal and characteristic polynomials
- How can I prove $\det(\overline M)=\overline{\det(M)}$?
- Is this determinant bounded?
- show that $v(E) = a_1a_2a_3…a_nv(B^n)$
- How to show by induction that, for $0<\theta<\pi$, $\det A_n=\frac{\sin (n+1)\theta}{\sin \theta}$
- Find the smallest square matrix in which some objects fit following some rules
- Hyperdeterminants
- What's the trick for proving one eigenvalue of orthogonal matrix is $-1$ if the determinant is $-1$?
- Proof relation between Levi-Civita symbol and Kronecker deltas in Group Theory
- Problem with Jacobi's formula for determinants
- proof about deteminant of a complex linear transformation
- deteminant of a block skew-symmetric matrix
http://mathhelpforum.com/algebra/56609-find-zeroes-function-difficult.html
# Thread:
1. ## Find zeroes of this function (difficult)
Find the zeros of function
f(x) = x^2(2e^{2x}) + 2xe^{2x} + e^{2x} + 2xe^{2x}
Attached is what I got. Is it correct?
2. Originally Posted by mwok
Find the zeros of function
f(x) = x^2(2e^{2x}) + 2xe^{2x} + e^{2x} + 2xe^{2x}
Attached is what I got. Is it correct?
Note that $e^{2x}\neq0~\forall~x\in\mathbb{R}$
Thus, when solving for the zeros, we can rewrite $2x^2e^{2x}+2xe^{2x}+e^{2x}+2xe^{2x}=0\implies e^{2x}\left[2x^2+4x+1\right]=0$ as $2x^2+4x+1=0$
This doesn't factor nicely, so I would recommend using the quadratic formula.
--Chris
3. Ahh, you have to take the commons out and then use the quad formula. Nice!
Question...I used the quad formula and ended up with
x = (-2 +/- sqrt(2)) / 2
Do I make it equal zero and then solve it further OR is that the direct zero?
Also, what about the other factor e^2x....how would you solve that for zero?
e^2x = 0
I'm guessing you have to use the log equation on that?
4. Originally Posted by mwok
Ahh, you have to take the commons out and then use the quad formula. Nice!
Question...I used the quad formula and ended up with
x = (-2 +/- sqrt(2)) / 2
Do I make it equal zero and then solve it further OR is that the direct zero?
Also, what about the other factor e^2x....how would you solve that for zero?
e^2x = 0
I'm guessing you have to use the log equation on that?
Those are the direct zeros.
Just keep in mind that there are two of them: $x=\tfrac{1}{2}\left[-2+\sqrt{2}\right]$ or $x=-\tfrac{1}{2}\left[2+\sqrt{2}\right]$
--Chris
5. Originally Posted by Chris L T521
Those are the direct zeros.
Just keep in mind that there are two of them: $x=\tfrac{1}{2}\left[-2+\sqrt{2}\right]$ or $x=-\tfrac{1}{2}\left[2+\sqrt{2}\right]$
--Chris
Right. What about the other factor though (e^2x)? Do I solve it with a log equation?
6. Originally Posted by mwok
Also, what about the other factor e^2x....how would you solve that for zero?
e^2x = 0
I'm guessing you have to use the log equation on that?
As I mentioned earlier, $e^{2x}{\color{red}\neq}0$ for all values of $x\in\mathbb{R}$.
This can be shown by solving that equation:
$e^{2x}=0\implies 2x=\ln 0$.
However, $\ln 0$ is undefined. Thus, there is no value of x that can cause this equation to be true.
--Chris
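The algebra in this thread is easy to verify numerically; a short sketch (not part of the original posts):

```python
import math

# Zeros of f(x) = e^(2x) * (2x^2 + 4x + 1): since e^(2x) is never zero,
# they are exactly the roots of 2x^2 + 4x + 1 = 0 from the quadratic formula.
a, b, c = 2.0, 4.0, 1.0
disc = math.sqrt(b * b - 4 * a * c)            # sqrt(8)
roots = [(-b + disc) / (2 * a), (-b - disc) / (2 * a)]
print(roots)  # (-2 + sqrt(2))/2 and -(2 + sqrt(2))/2

# Plugging each root back into f confirms it is a zero.
for x in roots:
    f = math.exp(2 * x) * (2 * x * x + 4 * x + 1)
    assert abs(f) < 1e-10
```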
http://andrewgelman.com/2012/01/04/bayesian-page-rank/
# Statistical Modeling, Causal Inference, and Social Science
## Bayesian Page Rank?
Posted on 4 January 2012, 9:06 am
Loren Maxwell writes:
I am trying to do some studies on the PageRank algorithm with applying a Bayesian technique. If you are not familiar with PageRank, it is the basis for how Google ranks their pages. It basically treats the internet as a large social network with each link conferring some value onto the page it links to. For example, if I had a webpage that had only one link to it, say from my friend’s webpage, then its PageRank would be dependent on my friend’s PageRank, presumably quite low. However, if the one link to my page was off the Google search page, then my PageRank would be quite high since there are undoubtedly millions of pages linking to Google and few pages that Google links to.
The end result of the algorithm, however, is that all the PageRank values of the nodes in the network sum to one and the PageRank of a specific node is the probability that a “random surfer” will end up on that node.
For example, in the attached spreadsheet, Column D shows each node while Column E shows the probability that a random surfer will land on the node. Columns F through H are used to calculate the PageRank, Columns K and L are the links (with Column K linking to Column L), while Column M is also used to calculate the PageRank. There is a macro in the workbook that is activated by pressing “Control-Shift-P” that will copy Column H to Column E 100 times, which is usually more than enough iterations for the PageRank to converge.
You can change the links around or add nodes if you like to play around with the spreadsheet. The only requirements are that each node links to at least one other node (not a true PageRank requirement, but it keeps that math easy) and that the initial values in Column E equal 1 (usually simply set to 1/N).
The dampening factor (Cell B3) represents the chance that a random surfer will jump to another page at random rather than follow a link from the current node. The chance is 1 – the dampening factor.
In this specific example, a random surfer has a 38.2% chance to land on node A at any given time (Node A’s PageRank from Column E) and only a 3.8% chance to be on node D.
My question is that if the data represented a certain time, in my specific case the scheduling of sports teams for a specific season, how can I incorporate a Bayesian method into the PageRank algorithm that represents the probability of landing on a certain node for the following season when only a few games have been played. Basically, I envision the old network as a list of nodes with their associated probabilities for last season that can be compared with a list of corresponding nodes with their associated probabilities for the current season, however I am unsure as to how I might reflect this using Bayesian analysis.
It would be helpful to also see what your thoughts would be on dropping and adding nodes in between the periods as well.
I’d start by estimating the probabilities using a simple (non-network) model, getting posterior simulations from this Bayesian inference, then running your Page Rank algorithm separately for each simulation. You can then use the output from these simulations to summarize your uncertainty. It’s not perfect—you’re not using the network model to get your uncertainties—but it seems like a reasonable start.
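For concreteness, the iteration the spreadsheet performs (repeatedly copying Column H back to Column E until the values stop changing) is ordinary damped power iteration. A minimal sketch in Python; the link list and the 0.85 dampening factor below are made up for illustration, not taken from the spreadsheet:

```python
# Damped PageRank by power iteration: start from a uniform vector and
# repeatedly redistribute rank along the out-links until convergence.
# The link structure here is illustrative only.

def pagerank(links, damping=0.85, iters=100):
    nodes = sorted({n for pair in links for n in pair})
    rank = {n: 1.0 / len(nodes) for n in nodes}
    out = {n: [dst for src, dst in links if src == n] for n in nodes}
    for _ in range(iters):
        new = {n: (1.0 - damping) / len(nodes) for n in nodes}
        for n in nodes:
            share = rank[n] / len(out[n])  # every node links somewhere
            for dst in out[n]:
                new[dst] += damping * share
        rank = new
    return rank

links = [("A", "B"), ("B", "A"), ("B", "C"), ("C", "A"), ("D", "A")]
ranks = pagerank(links)
assert abs(sum(ranks.values()) - 1.0) < 1e-9  # ranks form a probability vector
```

As in the spreadsheet example, the ranks sum to one, and a node that many others link to (here A) ends up with the largest share.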
Filed under Bayesian Statistics, Sports
### 6 Comments
1. David Robinson says:
A number of folks have looked at Bayesian algorithms as an alternative or to augment the PageRank algorithms. Probably the most common method is some variation of Latent Dirichlet Allocation (LDA). It actually works fairly well and even in the simplest implementation can handle billions of pages on a desktop computer. The key intuition comes from the consideration of the conditional relationship of mypage with yourpage; the context of mypage relative to yourpage is explicitly considered in LDA-type algorithms. The logic in your [Loren] discussion already captures this conditional dependence and LDA captures the Bayesian structure in a more formal sense. My stuff is unfortunately not accessible and probably already dated relative to what’s out there, but here’s a link to a commercial venture in the area: http://www.scriptol.com/seo/lda.php . FWIW I’m not affiliated with SEO in any capacity, I just grabbed this as a typical example.
• Bob Carpenter says:
That scriptol page was very confused about what Bayesian inference is. As far as I know, no one’s carried out proper Bayesian inference (integrating over the posterior) with LDA because of the problem with modes. Usually they use approximations of a single mode for inference, often without incorporating any estimates of uncertainty.
There are really two issues brought up by Loren Maxwell in the original question. One is how to deal with the time evolution of pages and incorporate information about links last year to predict links this year. Presumably some kind of time-series model would be appropriate here with the assumption that this year looks like last year plus some randomness.
The second is how to make the model Bayesian, which would mean defining a full joint probability model and then computing the conditional probability of parameters given observed data. PageRank’s usually not presented this way. It’s usually defined as the result of fixing model parameters (a stochastic transition matrix based on link counts and a smoothing fresh start vector), running a simulation, then computing empirical estimates of the percentage of time spent on a single page. If you run long enough, the empirical estimates converge to the fixed point of
$\pi = \pi \theta + \epsilon$,
where $\pi$ is a simplex parameter representing the percentage of time spent on each page in the simulation and $\theta$ is a stochastic transition matrix and $\epsilon$ is the smoothing (background random jump or “dampening”) model. It would be easy enough to put a prior on $\theta$ here based on last year’s data. I’d also look into using predictors like team winningness, national TV appearances, home market, etc. One could also try to put priors on $\epsilon$ or the parameter of interest $\pi$ itself — you can think of the $\epsilon$ as a kind of prior on the transition matrix $\pi \theta + \epsilon$.
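The fixed point written above need not be found by simulation: since $\pi = \pi \theta + \epsilon$ is linear in $\pi$, it can be solved directly. A small sketch, where the transition matrix and the dampening value are invented for illustration:

```python
import numpy as np

d = 0.85                          # dampening factor (assumed, not from the post)
P = np.array([[0.0, 1.0, 0.0],    # row-stochastic link matrix (illustrative)
              [0.5, 0.0, 0.5],
              [1.0, 0.0, 0.0]])
n = P.shape[0]

theta = d * P                     # damped transition part
eps = np.full(n, (1 - d) / n)     # uniform restart ("smoothing") part

# pi = pi @ theta + eps  is the linear system  (I - theta)^T pi = eps
pi = np.linalg.solve((np.eye(n) - theta).T, eps)

assert abs(pi.sum() - 1.0) < 1e-9  # pi is a simplex vector, as claimed
```

Because $P$ is row-stochastic, the solution automatically sums to one, matching the simulation-based definition.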
2. lylebot says:
I have to say I’m a bit amused by this from the email:
It basically treats the internet as a large social network
Is a social network even a real thing? PageRank treats the web as a citation graph. Most models of social networks treat social interaction as a citation graph as well. The citation graph is the thing!
• re lylebot says:
“Social network” perhaps allowed the writer to loosely talk about graphs. Most understand how “social networks” are usually formalized.
3. ted says:
Given my understanding of the reader’s question, we have two Markov chains here: the one for last year’s data and the one for the early part of this year’s data (a random walk over a graph is just a Markov chain).
So to incorporate a prior, we can simply make a new chain that’s a convex combination of last year’s chain and this year’s chain. Or equivalently, take a convex combination of the probability transition matrices for the Markov chains and run PageRank on it.
Does that do the trick?
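ted's convex-combination idea can be sketched in a few lines; the transition matrices and the 0.7 weight on the prior season below are invented purely for illustration:

```python
import numpy as np

def stationary(P, d=0.85):
    """Damped stationary distribution of a row-stochastic matrix P."""
    n = P.shape[0]
    eps = np.full(n, (1 - d) / n)
    return np.linalg.solve((np.eye(n) - d * P).T, eps)

last_season = np.array([[0.0, 1.0],      # illustrative transition matrices,
                        [1.0, 0.0]])     # not real schedule data
this_season = np.array([[0.5, 0.5],
                        [0.2, 0.8]])

w = 0.7                                  # weight on the prior season: a modelling choice
blend = w * last_season + (1 - w) * this_season
pi = stationary(blend)                   # PageRank of the blended chain

assert abs(pi.sum() - 1.0) < 1e-9
```

A convex combination of row-stochastic matrices is again row-stochastic, so the blended chain is a valid Markov chain and the usual fixed-point computation applies unchanged.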
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 8, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.92733234167099, "perplexity_flag": "middle"}
|
http://cms.math.ca/Reunions/hiver11/abs/dg
|
2011 CMS Winter Meeting
Delta Chelsea Hotel, Toronto, December 10 - 12, 2011 www.smc.math.ca//Reunions/hiver11
Discrete Geometry
Org: Elissa Ross and Walter Whiteley (York)
ANTHONY NIXON, Fields Institute, Toronto
Frameworks with Mixed Dimensional Constraints
In this talk I will introduce the idea of bar-joint frameworks in 3-dimensions where the joints are partitioned into two sets, one supported on a fixed 2-dimensional surface and the other on a fixed 1-dimensional surface. I will describe how the basics of rigidity theory adapt to these settings; in particular, relevant classes of graphs are shown to be 2-coloured graphs with various sparsity counts for the whole graph and for each monochromatic subgraph. I will focus on two particular examples of this idea: a line at a 45 degree angle to a plane and a line through the (translational) axis of a cylinder. I will discuss progress towards, and barriers to, finding combinatorial characterisations, or Laman-type theorems, for when such frameworks are rigid.
MEGAN OWEN, Fields Institute
Applications of the space of metric trees
The space of metric $n$-trees is a polyhedral complex, in which each polyhedron corresponds to a different tree topology. Under the construction of Billera, Holmes, and Vogtmann, this space is non-positively curved, so there is a unique geodesic (shortest path) between any two trees. Furthermore, the (Frechet) mean of a set of trees in this space is well-defined and computable by an iterative algorithm. We present properties of this mean tree, including some non-Euclidean "sticky" behaviour, as well as applications to biological problems in phylogenetics and medical imaging.
VINCENT PILAUD, Fields Institute Toronto
Coxeter brick polytopes
We define the brick polytope of a subword complex on a finite Coxeter group. This construction provides polytopal realizations for a certain class of subword complexes containing in particular the cluster complexes of S. Fomin and A. Zelevinsky. For the latter, the brick polytopes coincide with the generalized associahedra of C. Hohlweg, C. Lange, and H. Thomas. We obtain a vertex description of these polytopes and explain some of their combinatorial properties. Joint work with Christian Stump.
ELISSA ROSS, Fields Institute
The rigidity of periodic bar body frameworks
From the perspective of rigidity theory, d-dimensional bar body frameworks are a well understood class of structures. That is, good combinatorial characterizations exist to predict the generic rigidity of such graphs in any dimension. In this talk, we consider the question of infinite periodic bar body frameworks, which we study as multigraphs on a torus. We describe necessary conditions for the rigidity of these frameworks, and outline what is known about the sufficiency of these conditions.
BERND SCHULZE, Fields Institute
Predicting flexibility in periodic frameworks with added symmetry
Recent work from authors across disciplines has made substantial contributions to counting rules (Maxwell type theorems) which predict when an infinite periodic framework would be rigid or flexible while preserving the periodic pattern. Other work has shown that for finite frameworks, introducing symmetry modifies the previous general counts, and under some circumstances this symmetrized Maxwell type count can predict added finite flexibility in the structure. In this talk we combine these approaches to present new Maxwell type counts for the columns and rows of a modified orbit rigidity matrix for frameworks that have both a periodic structure and additional symmetry within the periodic cells. In a number of cases, this count for the combined group of symmetry operations demonstrates that there is added finite flexibility in what would have been rigid when realized without the symmetry. Given that many crystal structures have these added symmetries, and that their flexibility may be key to their physical and chemical properties, these results are of both practical and theoretic interest. This talk is based on joint work with Elissa Ross and Walter Whiteley.
CSABA TOTH, University of Calgary
New bounds for untangling geometric graphs
Suppose that we are given a straight-line drawing $D$ of a planar graph $G$ such that some pairs of edges cross. Since $G$ is planar, it can be redrawn (by relocating some of its vertices) such that no two edges cross anymore. The process of redrawing $G$ to obtain a crossing-free straight-line drawing is called the {\em untangling} of $G$. For every $n\in \mathbb{N}$, there is a planar graph $G_0$ with $n$ vertices and a straight-line drawing $D_0$ of $G_0$ such that in any {\em crossing-free} straight-line drawing of $G_0$, at most $O(n^{0.4981})$ vertices lie at the same position as in $D_0$. For every planar graph $G$ with $n$ vertices and every straight-line drawing $D$ of $G$ (with possible edge crossings), there is a {\em crossing-free} straight-line drawing of $G$ such that at least $\Omega(n^{0.3766})$ vertices are at the same position as in $D$. (Joint work with Arikushi, Cano, and Urrutia.)
RYAN TRELFORD, University of Calgary
The Equivalence of The Illumination and Separation Conjectures
Let $K$ be a $d$-dimensional convex body in $\mathbb{E}^d$, and let $Q\in\mathbb{E}^d\setminus K$. A point $P$ on the boundary of $K$ is said to be illuminated by $Q$ if the ray emanating from $Q$ through $P$ intersects the interior of $K$. One can ask what is the smallest positive integer $n$ such that there exists a set of distinct points $\{Q_1,\ldots,Q_n\}$ whereby every boundary point of $K$ is illuminated by at least one of the $Q_i$'s. The Illumination Conjecture (formulated by I. Gohberg, H. Hadwiger, and A. Markus) states that $n\leq 2^d$. Surprisingly, $2^d$ is also the conjectured maximum number of hyperplanes that are necessary to separate any interior point $O$ of $K$ from any face of $K$. In this talk, I will outline K. Bezdek's proof that the Illumination Conjecture and the Separation Conjecture are indeed equivalent.
VIKTOR VIGH, Fields Institute
On the diminishing process of B. Tóth
B. Tóth suggested some 20 years ago the following random process to investigate: let $B=B_0$ be the unit circular disc in $\mathbb{R}^2$ centered at the origin, and define $B_n$ recursively as follows: we choose a random point $p_n$ from $B_{n-1}$ according to the uniform distribution and let $B_n= B_{n-1} \cap (p_n + B)$. $B_n$ is a disc-polygon, and the first interesting questions are: what can we say about the expectations of different geometric quantities of $B_n$ (e.g. number of vertices, diameter)? These problems turned out to be very tough; no relevant results are known about the process. In this talk we consider some closely related problems.
In the first part of the talk we replace the unit disc in the process by regular simplices (in any dimension) and by regular polygons (in the plane). Asymptotic results are given for the speed of the process, and we examine the limit distribution of the center in the case of simplices.
In the second part of the talk we consider another model to obtain random disc-polygons: we fix a spindle convex disc $S$ in the plane, we choose $n$ independent random points from $S$ according to the uniform distribution, and we define $S_n$ as the spindle convex hull of the chosen points. We show asymptotic results for the expectation of the number of the vertices of $S_n$, if $S$ has smooth enough boundary, or if $S$ is a disc-polygon.
The talk is based on joint work with G. Ambrus, F. Fodor and P. Kevei.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 55, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9004415273666382, "perplexity_flag": "head"}
|
http://mathhelpforum.com/discrete-math/79761-induction-n-2-2n-n-2-a.html
|
# Thread:
1. ## Induction n^2 > 2n for n>2
Prove n^2 > 2n for n>2 by induction
Check n = 3
9 > 6
Say it is true for n
now check for n+1
This is where I am worried that what I'm doing isn't enough proof
(n+1)^2 > 2(n+1)
n^2+2n+1 > 2n +2
by cancelling
n^2 > 1
we know that n > 2 so n^2 is always going to be greater than 1.
Does this suffice?
2. Hi
The idea behind what you use is a non-inductive proof, which is:
Let $n>2$ be an integer. Then $n^2>2n \Leftrightarrow n>2$ (the equivalence holds because $n$ is a positive integer). Since we assumed $n>2$, we have what we want.
In a usual proof by induction, you assume your induction hypothesis (case $n$) and try to show the statement for $n+1,$ using the case $n.$
So for your proof, writing $(n+1)^2=n^2+2n+1$ is a good thing. What you have to do now is to obtain $n^2+2n+1>2(n+1),$ using the induction hypothesis: $n^2>2n$
3. Ok how bout this
Step 1 as above
Step 2 (I need to prove (n+1)^2 > 2(n+1)
(n+1)^2=n^2 +2n +1
>2n + 2n +1 (Using induction)
= 4n +1
= 2(n+1) +2n -1 (2n-1) is always positive for n>2
>2(n+1)
Should i be using k instead?
4. What you've done is correct, no need to choose another letter.
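Not a substitute for the induction, but as a quick numeric sanity check of both the claim and the key step in post 3 (that $4n+1 > 2(n+1)$ because $2n-1 > 0$), one can run, say:

```python
# Numerically spot-check the inequality and the inductive step
# for a range of values n > 2.
for n in range(3, 1000):
    assert n * n > 2 * n            # the claim: n^2 > 2n for n > 2
    assert 4 * n + 1 > 2 * (n + 1)  # the step used in post 3
```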
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 9, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9230841398239136, "perplexity_flag": "middle"}
|
http://mathoverflow.net/revisions/20268/list
|
## Return to Question
Guillemin and Sternberg wrote the following in 1987 in a short article called "Some remarks on I.M. Gelfand's works" accompanying Gelfand's Collected Papers, Volume I:
The theory of commutative normed rings [i.e., (complex) Banach algebras], created by Gelfand in the late 1930s, has become today one of the most active areas of functional analysis. The key idea in Gelfand's theory -- that maximal ideals are the underlying "points" of a commutative normed ring -- not only revolutionized harmonic analysis but had an enormous impact in algebraic geometry. (One need only look at the development of the concept of the spectrum of a commutative ring and the concept of scheme in the algebraic geometry of the 1960s and 1970s to see how far beyond the borders of functional analysis Gelfand's ideas penetrated.)
I was skeptical when reading this, which led to the following:
Basic Question: Did Gelfand's theory of commutative Banach algebras have an enormous impact, or any direct influence whatsoever, in algebraic geometry?
I elaborate on the question at the end, after some background and context for my skepticism.
In the late 1930s, Gelfand proved the special case of the Mazur-Gelfand Theorem that says that a Banach division algebra is $\mathbb{C}$. In the commutative case this applies to quotients by maximal ideals, and Gelfand used this fact to consider elements of a (complex, unital) commutative Banach algebra as functions on the maximal ideal space. He gave the maximal ideal space the coarsest topology that makes these functions continuous, which turns out to be a compact Hausdorff topology. The resulting continuous homomorphism from a commutative Banach algebra $A$ with maximal ideal space $\mathfrak{M}$ to the Banach algebra $C(\mathfrak{M})$ of continuous complex-valued functions on $\mathfrak{M}$ with sup norm is now often called the Gelfand transform (sometimes denoted $\Gamma$, short for Гельфанд). It is very useful.
However, it is my understanding that Gelfand wasn't the first to consider elements of a ring as functions on a space of ideals. Hilbert proved that an affine variety can be considered as the set of maximal ideals of its coordinate ring, and thus gave a way to view abstract finitely generated commutative complex algebras without nilpotents as algebras of functions. On the Wikipedia page for scheme I find that Noether and Krull pushed these ideas to some extent in the 1920s and 1930s, respectively, but I don't know a source for this. Another related result is Stone's representation theorem from 1936, and a good summary of this circle of ideas can be found in Varadarajan's Euler book.
Unfortunately, knowing who did what first won't answer my question. I have not been able to find any good source indicating whether algebraic geometers were influenced by Gelfand's theory, or conversely.
Elaborated Question: Were algebraic geometers (say from roughly the 1940s to the 1970s) influenced by Gelfand's theory of commutative Banach algebras as indicated by Guillemin and Sternberg, and if so can anyone provide documentation? Conversely, was Gelfand's theory influenced by algebraic geometry (from before roughly 1938), and if so can anyone provide documentation?
# Did Gelfand's theory of commutative Banach algebras influence algebraic geometers?
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 12, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9500211477279663, "perplexity_flag": "head"}
|
http://physics.stackexchange.com/questions/29019/when-a-high-speed-neutrino-just-misses-an-old-neutron-star-why-isnt-it-trapped?answertab=votes
|
# When a high speed neutrino just misses an old neutron star, why isn't it trapped?
Suppose a neutrino is seen travelling so fast that its Lorentz gamma factor is 100,000. It races past an old, no longer active neutron star, narrowly missing it. As far as the neutrino is concerned, it is the NEUTRON STAR that is moving at extreme speed, and its mass is 100,000 times larger than 2 solar masses. Therefore, from the speeding neutrino's perspective, the neutron star should appear to be a black hole definitely large enough to trap the neutrino.
So how come the speeding neutrino continues its travel right past the old stellar remnant?
Is there an agreed name for this question or paradox?
-
Cool question. Let me make sure I understand you though. With black-holes, there is a radius where light can orbit the black hole with the right impact-parameter. Your question is, if a neutrino travels by a neutron star at almost $c$ at the right impact parameter, it should have an orbit for that star, right? – kηives May 26 '12 at 17:19
@Qmechanic's link is the full answer to the "[it] should appear to be a black hole" part of this question (namely that, no, it shouldn't). I'm not sure if the rest of the question is different or not. Opinions from the relativity experts among us? – dmckee♦ May 26 '12 at 17:40
A boosted object does not collapse because there is momentum as well as energy, and both are gravitating--- you can't use estimates of energy only when objects are fast moving. This is addressed by answers to the previous question. – Ron Maimon May 26 '12 at 20:35
## 3 Answers
Have a look at Can a black hole form due to Lorentz contraction?
This isn't exactly the same as your question, but the answer is the same. It's popularly believed that the mass is the only thing that determines whether a black hole will form or not, but this isn't true. Einstein's equation relates the curvature to a quantity called the stress-energy tensor:
$$G_{\alpha\beta} = 8\pi T_{\alpha\beta}$$
where $G_{\alpha\beta}$ is the Einstein tensor that describes the curvature and $T_{\alpha\beta}$ is the stress-energy tensor. The mass contributes only one component (out of ten) to the tensor. In the rest frame of the neutron star the mass is the dominant component, but when you boost the neutron star the other components are non-zero and they balance out any relativistic change in the mass.
See in particular Ron Maimon's comment to my previous answer for more info about the other components of the stress-energy tensor.
-
See answers to my question If two ultra-relativistic billiard balls just miss, will they still form a black hole? for a closely related situation with a different answer. This raises the question whether the combination of neutron star and neutrino in the center of mass has enough total mass (energy) inside the "hoop" to form a black hole.
-
The laws which govern the “increase of mass” due to motion are not the same as simply increasing $m$ by adding material. Namely, a moving mass never turns into a black hole if it is not a black hole when at rest.
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 4, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9273466467857361, "perplexity_flag": "middle"}
|
http://www.citizendia.org/Multipath
|
This article is about the wireless propagation phenomenon. For the computing storage term, see Multipath I/O. In Computer storage, multipath I/O is a fault-tolerance and performance enhancement technique whereby there is more than one physical path between the CPU in a computer
In wireless telecommunications, multipath is the propagation phenomenon that results in radio signals' reaching the receiving antenna by two or more paths. Wireless communication is the transfer of information over a distance without the use of electrical conductors or " Wires quot Radio propagation is a term used to explain how Radio waves behave when they are Transmitted, or are propagated from one point on the Earth Radio is the transmission of signals by Modulation of electromagnetic waves with frequencies below those of visible Light. In Telecommunication, signalling (UK spelling or signaling (US spelling has the following meanings The use of signals for controlling communications An antenna is a Transducer designed to transmit or Receive electromagnetic waves In other words antennas convert electromagnetic waves into Causes of multipath include atmospheric ducting, ionospheric reflection and refraction, and reflection from terrestrial objects, such as mountains and buildings. In Telecommunication, an atmospheric duct is a horizontal Layer in the lower atmosphere in which the vertical Refractive index gradients are such that Ionospheric reflection: Of electromagnetic waves propagating in the Ionosphere, a redirection i Refraction is the change in direction of a Wave due to a change in its Speed. Reflection is the change in direction of a Wave front at an interface between two different media so that the wave front returns into the medium from which A mountain is a Landform that extends above the surrounding Terrain in a limited area with a peak In Architecture, Construction, Engineering and real estate development the word building may refer to one of the following Any man-made
The effects of multipath include constructive and destructive interference, and phase shifting of the signal. In physics interference is the addition ( superposition) of two or more Waves that result in a new wave pattern The phase of an oscillation or wave is the fraction of a complete cycle corresponding to an offset in the displacement from a specified reference point at time t = 0 In Telecommunication, signalling (UK spelling or signaling (US spelling has the following meanings The use of signals for controlling communications This causes Rayleigh fading, named after Lord Rayleigh. Rayleigh fading is a statistical model for the effect of a propagation environment on a Radio signal such as that used by Wireless John William Strutt 3rd Baron Rayleigh OM (12 November 1842 &ndash 30 June 1919 was an English Physicist who with William Ramsay, discovered The standard statistical model of this gives a distribution known as the Rayleigh distribution. In Probability theory and Statistics, the Rayleigh distribution is a Continuous Probability distribution.
Rayleigh fading with a strong line-of-sight component is said to have a Rician distribution, or to be Rician fading.
In facsimile and television transmission, multipath causes jitter and ghosting, seen as a faded duplicate image to the right of the main image. Ghosts occur when transmissions bounce off a mountain or other large object while also arriving at the antenna by a shorter, direct route, with the receiver picking up two signals separated by a delay.
Radar multipath echoes from an actual target cause ghosts to appear.
In radar processing, multipath causes ghost targets to appear, deceiving the radar receiver. These ghosts are particularly bothersome since they move and behave like the normal targets (which they echo), and so the receiver has difficulty in isolating the correct target echo. These problems can be overcome by incorporating a ground map of the radar's surroundings and eliminating all echoes which appear to originate below ground or above a certain height.
In digital radio communications (such as GSM), multipath can cause errors and affect the quality of communications. The errors are due to intersymbol interference (ISI). Equalisers are often used to correct the ISI. Alternatively, techniques such as orthogonal frequency-division multiplexing and rake receivers may be used.
In a Global Positioning System receiver (GPSR), multipath signals can cause a stationary receiver's output to indicate as if it were randomly jumping about or creeping. When the unit is moving, the jumping or creeping is hidden, but it still degrades the displayed accuracy.
## Mathematical modeling
Mathematical model of the multipath impulse response.
The mathematical model of multipath can be presented using the method of the impulse response used for studying linear systems.
Suppose we transmit a single, ideal Dirac pulse of electromagnetic power at time 0, i.e.
$x(t) = \delta(t)$
At the receiver, due to the presence of the multiple electromagnetic paths, more than one pulse will be received (we suppose here that the channel has infinite bandwidth, so the pulse shape is not modified at all), and each one of them will arrive at a different time. In fact, since electromagnetic signals travel at the speed of light, and since every path has a geometrical length possibly different from that of the other ones, there are different air travelling times (consider that, in free space, light takes about 3 μs to cross a 1 km span). Thus, the received signal will be expressed by
$y(t)=h(t)=\sum_{n=0}^{N-1}{\rho_n e^{j\phi_n} \delta(t-\tau_n)}$
where $N$ is the number of received impulses (equivalent to the number of electromagnetic paths, and possibly very large), $\tau_n$ is the time delay of the generic $n$th impulse, and $\rho_n e^{j\phi_n}$ represents the complex amplitude (i.e., magnitude and phase) of the generic received pulse. As a consequence, $y(t)$ also represents the impulse response function $h(t)$ of the equivalent multipath model.
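To make the model concrete, the sum above can be evaluated directly in code. This is only an illustrative sketch: the three path delays and complex gains below are made-up values, not data from any measured channel.

```python
import cmath

# Hypothetical channel: three paths with delays tau_n (seconds) and
# complex amplitudes rho_n * e^{j*phi_n}. All values are illustrative.
delays = [0.0, 1.0e-6, 2.5e-6]
gains = [1.0 * cmath.exp(0.0j),   # direct path
         0.6 * cmath.exp(0.8j),   # first echo
         0.3 * cmath.exp(2.1j)]   # second echo

def unit_sample(t, eps=1e-9):
    """Discrete stand-in for the ideal Dirac pulse x(t) = delta(t)."""
    return 1.0 if abs(t) < eps else 0.0

def received(t):
    """y(t) = sum_n rho_n e^{j phi_n} x(t - tau_n)."""
    return sum(g * unit_sample(t - d) for g, d in zip(gains, delays))

# Multipath time: delay between the first and last received impulses.
T_M = delays[-1] - delays[0]      # 2.5 microseconds here
```

Sampling `received` at t = 1.0e-6 s isolates the second path, returning only its complex gain.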
More generally, in the presence of time variation of the geometrical reflection conditions, this impulse response is time varying, and as such we have
$\tau_n = \tau_n(t)$
$\rho_n = \rho_n(t)$
$\phi_n = \phi_n(t)$
Very often, just one parameter is used to denote the severity of multipath conditions: it is called the multipath time, $T_M$, and it is defined as the time delay between the first and the last received impulses:
$T_M = \tau_{N-1} - \tau_0$
Mathematical model of the multipath channel transfer function.
In practical conditions and measurements, the multipath time is computed by taking as the last impulse the first one that allows a determined fraction (e.g. 99%) of the total transmitted power (scaled by the atmospheric and propagation losses) to be received.
Restricting our attention to linear, time-invariant systems, we can also characterize the multipath phenomenon by the channel transfer function $H(f)$, which is defined as the continuous-time Fourier transform of the impulse response $h(t)$:
$H(f)=\mathfrak{F}(h(t))=\int_{-\infty}^{+\infty}{h(t)e^{-j 2\pi f t} d t}=\sum_{n=0}^{N-1}{\rho_n e^{j\phi_n} e^{-j2 \pi f \tau_n}}$
where the last right-hand term of the previous equation is easily obtained by remembering that the Fourier transform of a Dirac delta is a complex exponential function, an eigenfunction of every linear system.
The obtained channel transfer characteristic has the typical appearance of a sequence of peaks and valleys (also called notches); it can be shown that, on average, the distance (in Hz) between two consecutive valleys (or two consecutive peaks) is roughly inversely proportional to the multipath time. The so-called coherence bandwidth is thus defined as
$B_C \approx \frac{1}{T_M}$
For example, with a multipath time of 3 μs (corresponding to 1 km of added on-air travel for the last received impulse), there is a coherence bandwidth of about 330 kHz.
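Plugging the numbers in, a trivial back-of-the-envelope check of the figures quoted above:

```python
# Coherence bandwidth for a multipath time of 3 microseconds.
T_M = 3e-6                 # s
B_C = 1.0 / T_M            # Hz, using the approximation B_C ~ 1 / T_M

# Sanity check on the "1 km of added on-air travel" remark: the distance
# light covers in T_M seconds, with c ~ 3e8 m/s.
c = 3e8
extra_path = c * T_M       # about 900 m, i.e. roughly 1 km
```

B_C comes out at about 333 kHz, matching the quoted 330 kHz figure to the precision of the approximation.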
## See also
• Choke ring antenna, a design that can reject extraneous multipath signals
• Diversity schemes
• Fading
• Multipath I/O
• Olivia MFSK
• Orthogonal frequency-division multiplexing
• Ultra wide-band
## References
This article contains material from Federal Standard 1037C, which, as a work of the United States Government, is in the public domain.
http://math.stackexchange.com/questions/51180/undetstanding-of-vectors-tangent-vectors-tangent-covectors
# Understanding of Vectors, Tangent Vectors, Tangent Covectors
To my limited knowledge, I only know a vector as a certain fixed number of real numbers put together. For example $[1,2.3,6.4,0.75]$ is a vector. A vector of dimension $N$ is any of the elements of the set $\mathbb{R} \times \mathbb{R}\times$...($N$ times)..$\mathbb{R}$. OK, I also know that sequences and functions of a real variable can also be vectors; for example, the collection of all square integrable functions or the collection of all square summable sequences can be vector spaces. I can imagine the notion of addition of vectors in all these examples as addition of corresponding elements of two vectors to form the corresponding element of the sum vector; for example, the addition of two square integrable functions $f$ and $g$ is nothing but the pointwise addition of the values of the functions to form the values of the resultant sum function $f+g$.
The pointwise addition is a must for me to imagine a vector. But I find it hard, and feel clueless, when I try to read and understand concepts like tangent vectors and tangent covectors. I can't even try to explain my difficulty; I hope someone understands my problem and puts things in a way that lets me overcome this. I was also reading this answer by Aaron here, but I am very far from understanding things like "These tangent vectors act on functions by taking the directional derivative of a function at a point. If you take a tangent covector, it no longer acts on functions, it just acts on vectors." and "a "dual" space V∗ which consists of linear functions V→𝔽 (where 𝔽 is the underlying field)." I do not understand how linear functions V→𝔽 can be called vectors. I can go to the extent of reading references, but I have failed a few times and need some advice.
-
## 1 Answer
The main thing to understand here is that "vector" merely refers to any element of any vector space. Anything that can be added together and multiplied by a scalar (with the appropriate conditions on these operations) is a vector; there's no requirement of points or components. Linear functions are vectors because you can add them together and multiply them by scalars. That's all there is to it. Trying to "imagine a vector" merely distracts you from what's going on.
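One concrete way to see this, a toy sketch in code with made-up coefficient vectors: linear functions on $V=\mathbb{R}^3$ can themselves be added pointwise and multiplied by scalars, which is exactly what the definition of a vector space asks for.

```python
# A linear functional on R^3: f(v) = a . v for a coefficient list a.
def functional(a):
    return lambda v: sum(ai * vi for ai, vi in zip(a, v))

def add(f, g):
    # Pointwise sum: (f + g)(v) = f(v) + g(v)
    return lambda v: f(v) + g(v)

def scale(c, f):
    # Scalar multiple: (c f)(v) = c * f(v)
    return lambda v: c * f(v)

f = functional([1, 0, 2])
g = functional([0, 3, -1])
h = add(f, scale(2, g))   # again a linear functional, coefficients [1, 6, 0]
```

So the functionals form a vector space under these operations; that is the only sense in which they are "vectors".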
-
thank you for the answer. – Rajesh D Jul 13 '11 at 8:16
@joriki Just out of curiosity what happens when you think of the concept vector? Does an example come to mind, the definition or something different? – Mark Jul 13 '11 at 8:33
@Mark: an arrow :-) – joriki Jul 13 '11 at 8:50
@joriki I was hoping for something concrete like an example, but I guess an arrow is as concrete as it gets. :) – Mark Jul 13 '11 at 9:10
@Mark: I think the things that come to my mind when I think "vector" are roughly an arrow, a column vector of reals, and the definition of a vector space, in that order. I should add that my academic training was a theoretical physicist :-) – joriki Jul 13 '11 at 9:40
http://mathhelpforum.com/calculus/163406-polar-coordinates-changing-bounds-integration-print.html
# Polar coordinates: changing bounds of integration
• November 15th 2010, 11:31 PM
isuckatcalc
polar coordinates-changing bounds of integration
When I change the integral to polar form, how do I adjust the bounds of integration? Do I have to look at the graph of the new function and determine r and theta from that, or is there a formula to do so? I mean, I know how to change x and y into r cos(theta) and r sin(theta), but the bounds of integration for theta always involve pi, which I can't get from just some number.
I don't have a specific example, but say the bounds of integration for theta were -3 to 3. Is there a formulaic way of figuring out the new bounds, or would I need to look at the graph of the function to determine that?
And when changing the bounds for r, same question basically: is it just using x = r cos(theta), etc., or is it using the graph to see where r goes?
Thanks for any help.
• November 16th 2010, 12:50 AM
mr fantastic
Quote:
Originally Posted by isuckatcalc
When I change the integral to polar form, how do I adjust the bounds of integration? Do I have to look at the graph of the new function and determine r and theta from that, or is there a formula to do so? I mean, I know how to change x and y into r cos(theta) and r sin(theta), but the bounds of integration for theta always involve pi, which I can't get from just some number.
I don't have a specific example, but say the bounds of integration for theta were -3 to 3. Is there a formulaic way of figuring out the new bounds, or would I need to look at the graph of the function to determine that?
And when changing the bounds for r, same question basically: is it just using x = r cos(theta), etc., or is it using the graph to see where r goes?
Thanks for any help.
You will need to post a concrete question that exemplifies what you are asking.
• November 16th 2010, 12:55 AM
DrSteve
You need to make at least a rough sketch of the graph.
The bounds for theta do not have to involve pi. In textbooks they often do because "nice numbers" make the problems seem easier.
After drawing the picture, remember that r is the distance outward from the origin, and theta is the angle made with the positive x-axis.
• November 16th 2010, 05:20 AM
isuckatcalc
updated with problem
OK, so I have:
sin(x^2+y^2)
y is bounded from 0 to root(9-x^2)
x is bounded from -3 to 3
So changing x^2 and y^2 inside the sin function would leave me with r^2, as the cos^2 and sin^2 go to 1.
So I am now integrating sin(r^2)? Now I am confused, because I don't know what the graph looks like. I can't use my calculator because in polar I can't graph with r as a variable.
Also, again, how does this now affect the bounds of integration?
Any starting tips would be great; I don't need it to be solved (though feel free :))
• November 16th 2010, 06:40 AM
HallsofIvy
Square both sides of $y= \sqrt{9- x^2}$ to get $y^2= 9- x^2$ or $x^2+ y^2= 9$. Recognize that? It is the circle with center at the origin and radius 3. It intersects the x-axis, nicely enough, at x= -3 and x= 3. Since y is the positive square root, your region of integration is the upper semicircle.
Now you should be able to see what bounds r and $\theta$ have. To integrate $\sin(r^2)$, remember that the differential of area, dx dy, becomes $r\,dr\,d\theta$ in polar coordinates. Use the substitution $u= r^2$.
(What kind of graphing calculator do you have? Most, if not all, modern graphing calculators have a "mode" key that allows you to change to graphing equations in polar coordinates.)
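(Not part of the original thread, just a check on the hint above: the substitution $u=r^2$ gives $\int_0^\pi\int_0^3 \sin(r^2)\,r\,dr\,d\theta = \frac{\pi}{2}(1-\cos 9)$, and a generic midpoint Riemann sum reproduces that value.)

```python
import math

# Midpoint Riemann sum for the radial integral of sin(r^2) * r on [0, 3].
# The integrand has no theta dependence, so the angular integral is just pi.
def polar_sum(n=4000):
    dr = 3.0 / n
    radial = sum(math.sin(((i + 0.5) * dr) ** 2) * (i + 0.5) * dr * dr
                 for i in range(n))
    return math.pi * radial

exact = (math.pi / 2) * (1 - math.cos(9))   # closed form from u = r^2
```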
• November 16th 2010, 09:01 AM
isuckatcalc
Awesome, thank you so much. That makes perfect sense. I just have to practice on some other examples now.
I have a TI-89, though I just got it and am far from knowing how to use all of it. I know how to graph polar functions though.
• November 16th 2010, 10:53 AM
isuckatcalc
a new concern
OK, so I got that one all figured out, and I'm working on another.
It is:
x+y dxdy
x bounded from y to root(2-y^2)
y bounded from 0 to 1
So I converted to polar, and get the double integral of (cos+sin)r^2 drdtheta.
I integrate that and get (1/3)r^3(cos+sin) + theta.
Using the same method as above, I figured the shape is a circle with a radius of root 2,
so r is bounded from 0 to root 2.
But now how do I find the theta bounds? I have the answer given to me but I can't figure out how to get to it. I made the assumption it was from 0 to pi again, staying on the positive side of the x-axis. But that's not getting me the correct answer.
• November 17th 2010, 01:48 AM
HallsofIvy
I assume you have drawn a picture and recognize that the region to be integrated over is the section of the circle with radius $\sqrt{2}$ and bounded by the lines y= 0 and y= x. At y= 0, $\theta= 0$ of course. The slope of y= x is 1 and the slope is the tangent of the angle. What is that angle? (Another way of looking at it: y= x is exactly half way between x= 0 and y= 0.)
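(Again just a numerical cross-check, not from the thread: with $\theta$ running from $0$ to $\pi/4$ and $r$ from $0$ to $\sqrt{2}$, the polar integral separates into $\int_0^{\sqrt{2}} r^2\,dr \cdot \int_0^{\pi/4}(\cos\theta+\sin\theta)\,d\theta = \frac{2\sqrt{2}}{3}\cdot 1$, and a brute-force sum over the original Cartesian bounds agrees.)

```python
import math

# Midpoint Riemann sum of (x + y) over y in [0, 1], x in [y, sqrt(2 - y^2)],
# i.e. exactly the Cartesian bounds the problem started with.
def cartesian_sum(n=400):
    total, dy = 0.0, 1.0 / n
    for j in range(n):
        y = (j + 0.5) * dy
        lo, hi = y, math.sqrt(2.0 - y * y)
        dx = (hi - lo) / n
        total += sum((lo + (i + 0.5) * dx + y) * dx * dy for i in range(n))
    return total

exact = 2.0 * math.sqrt(2.0) / 3.0   # the polar-coordinates answer
```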
http://physics.stackexchange.com/questions/43860/if-wave-packets-spread-why-dont-objects-disappear?answertab=votes
# If wave packets spread, why don't objects disappear?
If you have an electron moving in empty space, it will be represented by a wave packet. But packets can spread over time, that is, their width increases, with the uncertainty in position increasing. Now, if I throw a basketball, why doesn't the basketball's packet spread as well? Wouldn't that cause its uncertainty in position to increase so much to the point that it disappears?
EDIT: I realize I wasn't clear what I meant by disappear. Basically, suppose the wave packet is spread over the entire Solar System. Your field of vision covers only an extremely tiny part of the Solar System. Therefore, the probability that you will find the basketball that you threw in your field of vision is very small.
-
Are you asking why macroscopic objects don't show quantum behaviour? If so, have a quick search of this site as that question has been asked many times. Try a search for "decoherence". – John Rennie Nov 10 '12 at 11:07
– Qmechanic♦ Nov 10 '12 at 20:20
because the wavepackets also collapse. – Ron Maimon Nov 11 '12 at 4:17
## 4 Answers
It is true that the spreading depends on the mass, as @twistor59 has already noticed, but the more important fact is that the basketball is an open system, and interaction with its surroundings means (due to decoherence) that the state of the basketball is not described by quantum wavefunction theory [*]. Using the Wigner-Moyal formulation of quantum mechanics it is possible to show that the basketball always has a well-defined position $x(t)$ at each instant.
[*] Wavefunction theory only applies to isolated quantum systems.
-
What if throw the basketball in a perfect vacuum, completely empty of anything or any field? – Ignacio Nov 10 '12 at 17:50
@Ignacio: The basketball is not in a pure state. If you manage to put it in a perfect vacuum (which is impossible) it will remain in the same non-pure state because a perfect vacuum cannot change a state. – juanrga Nov 10 '12 at 18:12
@juanrga: So when you cut all "interaction with its surrounds", it remains in a mixed state? – Vladimir Kalitvianski Nov 10 '12 at 19:30
@VladimirKalitvianski: I cannot say. It depends on the system, the kind of mixed state, and the dynamics. – juanrga Nov 11 '12 at 12:55
@juanrga So from what I understand it's wrong to say that macroscopic objects exhibit quantum behaviour, but with effects that are too small to be detected. Instead, they really aren't quantum at all. Is this correct? – Ignacio Nov 11 '12 at 20:33
The short answer is that it won't disappear because the integral of the probability density is still 1 even for a highly spread wavepacket, i.e. the object will still be found somewhere.
Slightly longer answer is that, if I start with a Gaussian wavepacket with width $a$, then after time $t$, the width will have spread to $$\sqrt{\frac{a^2+\hbar^2t^2/m^2}{a}}$$ The incredible smallness of $\hbar$ makes the spread negligble for something as massive as a basketball.
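To put numbers on that smallness, here is a rough evaluation of the formula, writing $\sqrt{a}$ for the initial width; the masses and times below are purely illustrative choices, not values from the answer:

```python
import math

hbar = 1.0545718e-34   # J s

def width(sqrt_a, m, t):
    """sqrt((a^2 + hbar^2 t^2 / m^2) / a), with a = (initial width)^2."""
    a = sqrt_a ** 2
    return math.sqrt((a ** 2 + (hbar * t / m) ** 2) / a)

year = 3.15e7          # seconds, roughly

# An electron localized to ~1 angstrom spreads to macroscopic size in a second.
w_electron = width(1e-10, 9.11e-31, 1.0)

# A 0.6 kg ball "localized" to 1 angstrom stays sub-micron even over
# the age of the universe (~1.4e10 years).
w_ball = width(1e-10, 0.6, 1.4e10 * year)
```

w_electron comes out above $10^6$ m, while w_ball stays below a micron: the $\hbar/m$ factor does all the work.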
-
Ok, but in the formula the spread can grow indefinitely, given enough time. – Ignacio Nov 10 '12 at 17:43
That's true, but if you put some numbers in ($\sqrt{a}=10^{-10}meter; m=0.1kg$, you can calculate how many times the age of the universe you have to wait for it to spread by a significant fraction. The $\hbar$ kills you. (However, @juanrga 's argument is probably stronger). – twistor59 Nov 10 '12 at 18:21
Spreading of the wave packet does not mean the electron spreads and disappears.
If a basketball initially has uncertainty $\Delta V$ in its velocity, then with time the ball's position uncertainty will grow as $\Delta V\cdot t$. Eventually this uncertainty gets so large that the ball disappears from your sight (I mean, you will not find it where you expect it to be without knowing the initial velocity spread).
-
AFAIK there is no settled answer to this question; that is why there are a few "theories" that try to answer it, like GRW.
EDIT:
Let me just elaborate, because I gave you the most interesting answer without directly addressing your question. There is no need for you to throw the ball. The atoms of the ball are constrained by their mutual potential, so the waves do not spread in the sense of free particles. @twistor59's answer is the heuristic one usually given in textbooks, but obviously there are no 100 g particles as such. The main issue is why we don't see the ball at 100 meters or wherever, since wavefunctions have non-zero probability throughout space. The measurement problem is a close relative of your "proper" question, but a bit different. GRW is more concerned with that.
-
http://physics.stackexchange.com/questions/18228/irreducible-tensors-concept
# Irreducible tensors concept
This might be a slightly naive question, but I am having difficulty grasping the concept of irreducible tensors. In particular, why do we decompose tensors into symmetric and antisymmetric parts? I have not found a justification for this in my readings and would be happy to gain some intuition here.
-
## 3 Answers
You can decompose a rank two tensor $X_{ab}$ into three parts:
$$X_{ab} = X_{[ab]} + (1/n)\delta_{ab}\delta^{cd}X_{cd} + (X_{(ab)}-1/n \delta_{ab}\delta^{cd}X_{cd})$$
The first term is the antisymmetric part (the square brackets denote antisymmetrization). The second term is the trace, and the last term is the trace free symmetric part (the round brackets denote symmetrization). n is the dimension of the vector space.
Now under, say, a rotation $X_{ab}$ is mapped to $\hat{X}_{ab}=R_{a}^{c}R_{b}^{d}X_{cd}$ where $R$ is the rotation matrix. The important thing is that, acting on a generic $X_{ab}$, this rotation will, for example, take symmetric trace free tensors to symmetric trace free tensors etc. So the rotations aren't "mixing" up the whole space of rank 2 tensors, they're keeping certain subspaces intact.
It is in this sense that rotations acting on rank 2 tensors are reducible. It's almost like separate group actions are taking place, the antisymmetric tensors are moving around between themselves, the traceless symmetrics are doing the same. But none of these guys are getting rotated into members "of the other team".
If, however, you look at what the rotations are doing to just, say the symmetric trace free tensors, they're churning them around amongst themselves, but they're not leaving any subspace of them intact. So in this sense, the action of the rotations on the symmetric traceless rank 2 tensors is "irreducible". Ditto for the other subspaces.
-
Can you convince me that the symmetric part alone (without subtracting off the trace) is reducible? Is it as simple as saying "there is a subspace $\delta_{ab} Tr(X)$ of $X_{(ab)}$ which transforms to itself under rotations"? – levitopher Jan 17 '12 at 21:37
@cduston Yes, the orthogonality property of the rotation matrix means that their action on $\delta_{ab}$ preserves $\delta_{ab}$ and hence the one dimensional subspace of multiples of $\delta_{ab}$ is preserved. – twistor59 Jan 18 '12 at 9:05
@twistor59: How would you mathematically prove that the traceless symmetric part is irreducible? – ramanujan_dirac Feb 21 at 4:01
@ramanujan_dirac Just thinking about SO3, probably the easiest way is to re-examine the argument that led to the (antisymm+symm_traceless+trace) decomposition and try to apply it again. We started with a 9 dim space of rank 2 tensors, and found the 1, 4 and 5 dimensional subspaces by using the invariant tensors $\delta_{ab}$ and $\epsilon_{abc}$. Given that these are the only invariant tensors, then if we wanted an inv't subspace of the 5dim space of symm traceless tensors, we'd have to get it by applying these tensors to the symm. traceless tensors, which can only give trivial answers. – twistor59 Feb 22 at 13:24
Physicists are always interested in what properties of a physical system are invariant under symmetries. If it's tricky to see the symmetry then they'll rearrange the system to make the symmetry more obvious.
For example, consider a covariant rank two tensor like $T^{ab}$. In general the components of this tensor will change if the tensor is rotated in 3D. It's hard to see what might be invariant under rotation.
Now consider a symmetric tensor $S^{ab}$. Again, the components of this tensor will change when it is rotated. However, the property of being symmetric is preserved. So from a physicist's point of view this is interesting. Similarly, an antisymmetric tensor $A^{ab}$ remains antisymmetric when rotated.
So we have nice properties that are preserved for these special classes of tensor, but not for $T^{ab}$. But as you probably know, any tensor $T^{ab}$ can be written as a sum of symmetric and antisymmetric parts, $T^{ab}=S^{ab}+A^{ab}$. So now we know that $T^{ab}$ can be written as a sum of two parts, each of which behaves more simply when rotated. This simplifies the analysis of what happens to $T^{ab}$ when it is rotated.
Once we've done that once the obvious question is "can we do this again"? It'd be nice if we could break $T^{ab}$ into more pieces that behave as simply as possible under rotations.
There's another class of tensor that behaves nicely under rotation: the diagonal tensors of the form $\alpha\delta_{ab}$. Under rotations, they simply map to themselves. That's as simple as it gets. There's also a kind of converse class: the symmetric tensors of trace zero. These keep their trace of zero when they are rotated. But here's the nice bit: any symmetric tensor can be written as the sum of a diagonal tensor and a trace-zero symmetric tensor. So now we've broken down $T^{ab}$ into three pieces, each of which has a nice invariance property with respect to rotations.
Can we keep going? Well it turns out for covariant rank two tensors in 3D this is as far as we can go. If we try to break up the antisymmetric matrices, say, as the sum of two pieces from a pair of complementary classes, we'll always find that some rotation will move an element of one class into the other. So three classes is as far as we can go. The elements of these classes are the irreducible tensors.
The space of covariant rank two tensors has dimension 9. It is the sum of three spaces: the diagonal tensors (a space of dimension 1), the antisymmetric tensors (dimension 3) and the symmetric trace-zero tensors (dimension 5). 1+3+5=9.
For tensors of different rank, and in different dimensions, you get different irreducible tensors.
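The "subspaces kept intact" statement above can be checked numerically. A small pure-Python sketch (the sample tensor and rotation angle are arbitrary choices, not canonical ones):

```python
import math

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def rotate(R, X):
    # Rank-2 tensor transformation X_ab -> R_a^c R_b^d X_cd, i.e. R X R^T.
    Rt = [[R[j][i] for j in range(3)] for i in range(3)]
    return matmul(matmul(R, X), Rt)

def decompose(X):
    # Split X into antisymmetric part, trace scalar, traceless symmetric part.
    tr = sum(X[i][i] for i in range(3)) / 3.0
    anti = [[(X[i][j] - X[j][i]) / 2.0 for j in range(3)] for i in range(3)]
    sym0 = [[(X[i][j] + X[j][i]) / 2.0 - (tr if i == j else 0.0)
             for j in range(3)] for i in range(3)]
    return anti, tr, sym0

c, s = math.cos(0.7), math.sin(0.7)              # arbitrary rotation about z
R = [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]

X = [[1.0, 2.0, 3.0], [4.0, 5.0, 6.0], [7.0, 8.0, 9.0]]
anti, tr, sym0 = decompose(X)
anti_r, sym0_r = rotate(R, anti), rotate(R, sym0)
```

The rotated antisymmetric part is still antisymmetric, and the rotated traceless symmetric part is still symmetric with zero trace, so the rotation never mixes the three subspaces.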
-
To get the intuition, it is better to become familiar with the Wigner–Eckart theorem and spherical tensors, which generalize these ideas.
-
http://mathoverflow.net/questions/14508/galois-representations-attached-to-newforms
## Galois representations attached to newforms
Suppose that $f$ is a weight $k$ newform for $\Gamma_1(N)$ with attached $p$-adic Galois representation $\rho_f$. Denote by $\rho_{f,p}$ the restriction of $\rho_f$ to a decomposition group at $p$. When is $\rho_{f,p}$ semistable (as a representation of $\mathrm{Gal}(\overline{\mathbf{Q}}_p/\mathbf{Q}_p)$)?
To make things really concrete, I'm happy to assume that $k=2$ and that the $q$-expansion of $f$ lies in $\mathbf{Z}[[q]]$.
Certainly if $N$ is prime to $p$ then $\rho_{f,p}$ is in fact crystalline, while if $p$ divides $N$ exactly once then $\rho_{f,p}$ is semistable (just thinking about the Shimura construction in weight 2 here, and the corresponding reduction properties of $X_1(N)$ over $\mathbf{Q}$ at $p$). For $N$ divisible by higher powers of $p$, we know that these representations are de Rham, hence potentially semistable. Can we say more? For example, are there conditions on "numerical data" attached to $f$ (e.g. slope, $p$-adic valuation of $N$, etc.) which guarantee semistability or crystallinity over a specific extension? Can we bound the degree and ramification of the minimal extension over which $\rho_{f,p}$ becomes semistable in terms of numerical data attached to $f$? Can it happen that $N$ is highly divisible by $p$ and yet $\rho_{f,p}$ is semistable over $\mathbf{Q}_p$?
I feel like there is probably a local-Langlands way of thinking about/ rephrasing this question, which may be of use...
As a possible example of the sort of thing I have in mind: if $N$ is divisible by $p$ and $f$ is ordinary at $p$ then $\rho_{f,p}$ becomes semistable over an abelian extension of $\mathbf{Q}_p$ and even becomes crystalline over such an extension provided that the Hecke eigenvalues of $f$ for the action of $\mu_{p-1}\subseteq (\mathbf{Z}/N\mathbf{Z})^{\times}$ via the diamond operators are not all 1.
-
can you give some references of the known results (for a learner) such as the pst of the attached Galois reps? Thank you! – natura Feb 10 2010 at 18:17
## 2 Answers
The right way to do this sort of question is to apply Saito's local-global theorem, which says that the (semisimplification of the) Weil-Deligne representation built from $D_{pst}(\rho_{f,p})$ by forgetting the filtration is precisely the one attached to $\pi_p$, the representation of $GL_2(\mathbf{Q}_p)$ attached to the form via local Langlands. Your suggestions about the $p$-adic valuation of $N$ and so on are rather "coarse" invariants---$\pi_p$ tells you everything and is the invariant you really need to study.
So now you can just list everything that's going on. If $\pi_p$ is principal series, then $\rho$ will become crystalline after an abelian extension---the one killing the ramification of the characters involved in the principal series. If $\pi_p$ is a twist of Steinberg by a character, $\rho_{f,p}$ will become semistable non-crystalline after you've made an abelian extension making the character unramified. And if $\pi_p$ is supercuspidal, $\rho_{f,p}$ will become crystalline after a finite non-trivial extension that could be either abelian or non-abelian, and figuring out which is a question about $\pi_p$ (it will be a base change from a quadratic extension if $p>2$ and you have to bash out the possibilities).
Seems to me then that semistable $\rho$s will show up precisely when $\pi_p$ is either unramified principal series or Steinberg, so the answer to your question is (if I've got everything right) that $\rho_{f,p}$ will be semistable iff either $N$ (the level of the newform) is prime to $p$, or $p$ divides $N$ exactly once and the component at $p$ of the character of $f$ is trivial. Any other observations you need should also be readable from this sort of data in the same way.
One consequence of this I guess is that $\rho_{f,p}$ is semi-stable iff the $\ell$-adic representation attached to $f$ is semistable at $p$.
-
Since $f$ is potentially semi-stable, you can look at its attached filtered $(\varphi, N, \mathrm{Gal}(L/\mathbf{Q}_p))$-representation (where $\rho_{f,p}$ becomes semi-stable when restricted to $G_L$). If its $N$ is zero, then it is potentially crystalline; otherwise it is not.
As for the ordinary case, I'm not sure what definition you're using. Under Greenberg's definition, an ordinary $p$-adic Galois representation is semi-stable (see Perrin-Riou's article in the Bures). Also, the Tate curve is ordinary at $p$, but not potentially crystalline (once something is semi-stable and non-crystalline, it can't be potentially crystalline).
-
Thanks for your thoughts, Rob. I was thinking of ordinary as $p$ not dividing $a_p$ and was concentrating on weight 2. My statement about crystalline/ semistable over an abelian extension comes from analyzing the ordinary parts of the p-divisible groups of modular abelian varieties, as in the work of Mazur-Wiles and Tilouine. I agree that the filtered module contains all the information I'm asking about, but this seems like more of a rephrasing of the question, as knowledge of the filtered module requires understanding $L$ first... – B. Cais Feb 7 2010 at 18:11
If $p$ doesn't divide $a_p$, then $\rho=\rho_{f,p}$ is ordinary (in the sense of Greenberg) by a theorem of Wiles and hence is semistable. If, in addition, $p$ divides $N$ and $f$ is new, then $\rho$ is not crystalline and hence can't become crystalline over any extension since it is already semistable. Am I missing something? – Rob Harron Feb 7 2010 at 19:42
Rob: What Wiles shows is that $p$ not dividing $a_p$ implies potentially ordinary, as he only works with his Galois representations up to $\overline{\mathbf{Q}}_p$-equivalence. The point is that the ordinary filtration on the Galois side may not be defined over $\mathbf{Q}_p$. Any weight 2 newform of level $p^r$ with primitive nebentypus has associated Galois representation that is potentially crystalline but non-crystalline over $\mathbf{Q}_p$ if $r>0$. Wiles + Perrin-Riou as you indicate only gives $p$ not dividing $a_p$ implies potentially semistable, which one has anyway... – B. Cais Feb 8 2010 at 0:49
I see, your nebentype is ramified, so Wiles' result only gives nearly ordinary (I guess this is where your condition on the action of the Diamond operators swoops in basically killing the Tate curve case; this is what I was missing). Sorry bout that. – Rob Harron Feb 8 2010 at 5:31
On the automorphic side, what's happening here is that one can check via a representation-theoretic calculation that if f has level p^r and character of conductor p^r too (r>=1) then pi_p must be principal series associated to one ramified (conductor p^r) and one unramified character. Note also that whether or not p divides a_p has consequences on the shape of the Galois representation, but doesn't seem to me to have any consequences for D_{pst}(rho_{f,p}) (in some vague sense). – Kevin Buzzard Feb 8 2010 at 7:53
http://physics.stackexchange.com/questions/1354/best-example-of-energy-entropy-competition/1359
# Best example of energy-entropy competition? [closed]
What are the best examples in practical life of an energy-entropy competition which favors entropy over energy? My initial thought is a clogged drain -- too unlikely for the hair/spaghetti to align itself along the pipe -- but this is probably far from an optimal example. Curious to see what you got. Thanks.
-
I'd say all living organisms seem to enter this category. But it's hard to prove quantitatively. Obviously, when we eat, we don't do so to increase our energy (except when growing); we eat to diminish or keep our entropy low. – Raskolnikov Nov 27 '10 at 20:13
Personally, I eat because I am either hungry or else I have sweet tooth. I don't recall ever eating in order to diminish my entropy :-) – Marek Nov 28 '10 at 6:42
@Raskolnikov: I've heard several mentions of life and low-entropy, but I've yet to see one that goes any deeper than "a living being is an ordered system". I'd be curious to know what motivates this idea. – Bruce Connor Dec 25 '10 at 0:42
@Bruce, eg measurement of entropy of some denaturation reactions. example: hardboiling an egg. – Georg Jan 18 '11 at 20:58
## closed as not constructive by dmckee♦ May 2 at 1:52
## 7 Answers
One example I know is the so-called Brazil nut effect. When you place balls of two different sizes in a container and shake it, the larger ones will go up (even if they are denser than the smaller ones). So the final energy of the system after introducing noise is clearly greater than the initial energy. I believe that the phenomenon needs to be entropy-driven. However, I don't know a proof.
-
This is not very correct. Put lead brazil nuts in, instead of normal ones. They will fall to the bottom. The effect is more related to packing than entropy (small shapes pack better than large ones). – Sklivvz♦ Nov 27 '10 at 21:22
@Sklivvz: Of course for a given ball size ratio there is a density threshold. No, it is not that related to packing - even average density (with air included) is larger at the top. – Piotr Migdal Nov 27 '10 at 21:36
General question/comment (again, not enough points). What exactly (quantitatively) is meant when we say that "energy-entropy competition favors entropy"? In the appropriate ensemble, entropy and energy are always competing. In equilibrium a situation is reached that balances them exactly. Who is to say who "won"? If instead we are talking about a non-equilibrium situation, then what you are asking is for a process where entropy increases. Then there are plenty of examples - see for example Sklivvz's answer. Similarly, take any example of heat conduction from a hot object to a cold one, etc. – Greg P Nov 27 '10 at 21:45
Thanks. The comment on denser nuts is crucial here, in order for the big nuts at top to be energetically less favorable. Maybe it should be called the "wing nut effect." (Ising model is surely not from everyday life.) By the way, the clogged drain rather similar -- the probable configuration of particles makes the lower-energy packings less favorable. – Eric Zaslow Nov 28 '10 at 14:04
The air in this room. Because of gravity, the lowest energy configuration is clearly one in which all the molecules lie on the floor. But from the entropic point of view they should be exploring all of their phase space and bouncing around the room.
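To put rough numbers on this balance (an illustrative estimate, not part of the original answer): for an isothermal ideal gas the competition gives the Boltzmann profile $n(h)\propto e^{-mgh/k_BT}$, whose scale height $k_BT/mg$ is several kilometres, far larger than any room — so entropy wins indoors.

```python
import math

# Scale height of an isothermal ideal-gas atmosphere: h0 = k·T/(m·g).
# For heights far below h0 the energy cost m·g·h is tiny compared with k·T,
# so entropy keeps the molecules spread through the room instead of on the floor.
k = 1.380649e-23   # J/K, Boltzmann constant
T = 300.0          # K, room temperature
m = 4.8e-26        # kg, a typical "air" (N2/O2) molecule
g = 9.81           # m/s^2

h0 = k * T / (m * g)
print(h0)  # ≈ 8.8 km — gravity only wins on scales far above a room

# relative density at ceiling height (h = 3 m): n(h)/n(0) = exp(−m·g·h/(k·T))
print(math.exp(-m * g * 3.0 / (k * T)))  # ≈ 0.99966 — essentially uniform
```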
I would like to say also that examples such as the clogged drain and so on should be viewed purely as analogies. Entropy is a real physical quantity that can be calculated. As an actual physics problem, the calculation of the entropy of the drain + hair + water system would be a monster and there is no telling what the result will be! Is it even in equilibrium?
The Brazil nut effect cited above is a non-equilibrium effect, so although it is interesting I don't think it necessarily means anything about energy-entropy competition. The balance of energy and entropy happens when a system (at constant temperature for example) reaches equilibrium, minimizing its free energy. The shaken Brazil nuts are a non-equilibrium problem, and thus are not minimizing their free energy! But very interesting nevertheless, all the more so since it is non-equilibrium.
-
Every time I edit to say "the Brazil nut effect cited below" Piotr's post ends up above mine, and when I changed it back to "cited above" it appeared below :) – Greg P Nov 27 '10 at 20:39
+1 for really simple & important example. When in comes to brazil nut effect, I don't get why it can't be an equilibrium? – Piotr Migdal Nov 27 '10 at 20:41
As you shake the nuts, you add energy to the system that is dissipated in the nuts (thermalized). If we start in the equilibrium state (which by density arguments has the larger nuts on the bottom), shake the nuts to bring the large ones to the top, then let them cool to their original temperature, we have a non-equilibrium state. It would take years for the Brazil nuts to find their way spontaneously back to the bottom where they 'want' to be! Accordingly, the simulations of 'Brazil nut effect' are not equilibrium Monte Carlo sims but rather include steady input of energy (non-equilibrium). – Greg P Nov 27 '10 at 20:58
Thanks for the clarification David! – Greg P Nov 27 '10 at 20:58
Blackbody radiation: anything hotter than its environment radiates energy thus increasing the entropy of the universe. Entropy wins :-)
The Sun :-) The Sun's energy does not increase the Earth's total energy! In fact, the Earth radiates almost exactly the same amount of energy as it receives from the Sun. What we really gain from the Sun is that we use the sun rays' low entropy to power life on Earth, and the Earth radiates high-entropy infrared back into space.
-
+1 The second one is very nice. – mbq♦ Nov 28 '10 at 10:00
@Sklivvz: Even though it is true I have no idea what it have to do with energy-entropy competition... – Piotr Migdal Nov 28 '10 at 10:51
@Piotr Migdal: The first example is an example where nature favors entropy over energy: energy decreases, entropy increases. The second is an example where nature favors energy over entropy: energy is constant, entropy decreases. – Sklivvz♦ Nov 28 '10 at 10:59
I don't think blackbody radiation increases entropy. Think of a gas inside a container of ideal mirrors. Let's say I drop a lot o photons inside this (isolated) system. From the point I put them it, up until the point when the photons get absorbed by the gas, the system was isolated. If blackbody radiation increased entropy, than absorption of photons would have to decrease entropy. If that were the case, than the system I mentioned above will have decreased its own entropy spontaneously. – Bruce Connor Dec 25 '10 at 0:39
One of the nicest examples I know is the Kosterlitz–Thouless phase transition in the XY model. What is cool is that the transition is driven by the condensation of vortices, which have an energy that diverges logarithmically with the size of the system. You would think they couldn't contribute at all because of this, but it turns out their entropy also diverges in the same way, so the free energy $F=E-TS \propto (c-k_BT)\log(R)$, where $c$ is a parameter and $R$ is the size of the system. At sufficiently large $T$ the entropy term wins and the system undergoes a transition through the formation of vortices.
p.s. after rereading the question I realize my answer does not involve "practical life" but I'll leave it anyway.
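A toy numerical rendering of that sign change (the coefficients $c$ and $k_B$ below are made up; only the $(c-k_BT)\log R$ form comes from the argument above):

```python
import math

# Single-vortex balance: E ≈ c·ln R and S ≈ k_B·ln R, so the free energy
# F = E − T·S = (c − k_B·T)·ln R flips sign at T* = c/k_B.
k_B, c = 1.0, 2.0   # assumed units, for illustration only

def vortex_free_energy(T, R):
    return (c - k_B * T) * math.log(R)

R = 1e6
print(vortex_free_energy(1.0, R) > 0)  # below T* = 2: free vortices are suppressed
print(vortex_free_energy(4.0, R) < 0)  # above T*: entropy wins, vortices proliferate
```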
-
Well, as a physicist working mainly in the field of statistical physics, I obviously have to mention the Ising model. In this case the model is actually tractable (at least in two dimensions) and can tell us a whole lot about (not just) the energy-entropy battle. It's obvious that the ground state (for the ferromagnetic case) is all spins pointing one way (say up). Now, if you point some spins the other way then you lose in terms of energy (something like the number of neighbors times the number of wrong spins), but you actually gain hugely in entropy (because of the translational invariance of lattice models). This argument can be made very precise when working with the polymer model (which is isomorphic to the Ising model) and considering the low-temperature cluster expansion.
I am sorry, but I am not really able to provide nice references. Wikipedia articles are pretty bad. Maybe I should spend some time bringing them up to the current state of knowledge (that is to say, knowledge available since the 1970s) about cluster expansions. For now, if anyone is interested, just read the basic paper on the topic.
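For a feel of how the balance plays out, here is a brute-force sketch (my own toy illustration, not from any reference): it exactly enumerates all $2^9$ states of a $3\times 3$ periodic ferromagnetic Ising model and shows the magnetization that energy favors at low temperature being destroyed by entropy at high temperature.

```python
import itertools, math

def energy(spins, L):
    # nearest-neighbor ferromagnetic Ising energy with periodic boundaries, J = 1
    E = 0
    for i in range(L):
        for j in range(L):
            s = spins[i * L + j]
            E -= s * spins[((i + 1) % L) * L + j]   # bond to the site below
            E -= s * spins[i * L + (j + 1) % L]     # bond to the site at right
    return E

def avg_abs_magnetization(L, T):
    # exact Boltzmann average of |m| over all 2^(L*L) configurations
    Z, m_acc = 0.0, 0.0
    for spins in itertools.product((-1, 1), repeat=L * L):
        w = math.exp(-energy(spins, L) / T)
        Z += w
        m_acc += abs(sum(spins)) / (L * L) * w
    return m_acc / Z

print(avg_abs_magnetization(3, 0.5))   # low T: energy wins, |m| near 1
print(avg_abs_magnetization(3, 10.0))  # high T: entropy wins, |m| small
```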
-
If you are willing to go down to microscopic scales, a nice example of "entropy winning" is the phenomenon of depletion forces. Large particles in a suspension of smaller ones feel an effective attractive force, even if the interaction between all particles is just hard-wall. The attractive force arises because the volume available to the smaller particles increases when the larger ones get sufficiently close, and hence their entropy increases. See e.g.
Sho Asakura and Fumio Oosawa,
"Interaction between particles suspended in solutions of macromolecules",
J. Pol. Sci. 33 (1958) 183-192.
Depletion forces can be measured directly and are quite important for biological systems.
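A sketch of the Asakura–Oosawa estimate in the simplest geometry (ideal depletants, hard spheres; all numerical values below are illustrative assumptions):

```python
import math

# Asakura–Oosawa depletion attraction between two hard spheres (radius R)
# in an ideal-gas bath of small depletants (radius r, number density n):
# U(d) = -n·kT × (overlap volume of the two exclusion spheres of radius R + r)
# for 2R < d < 2(R + r), and zero beyond.
def lens_volume(a, d):
    """Overlap volume of two spheres of radius a whose centers are d apart."""
    if d >= 2 * a:
        return 0.0
    return math.pi * (2 * a - d) ** 2 * (d + 4 * a) / 12

def depletion_potential(d, R, r, n, kT=1.0):
    if d < 2 * R:
        return float('inf')   # hard-core overlap forbidden
    return -n * kT * lens_volume(R + r, d)

R, r, n = 1.0, 0.1, 0.5       # assumed values, in units of kT and colloid radius
print(depletion_potential(2 * R, R, r, n))        # attraction is strongest at contact
print(depletion_potential(2 * (R + r), R, r, n))  # no force once a depletant fits between
```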
-
Thanks. A kind of thermodynamic Casimir effect. – Eric Zaslow Dec 25 '10 at 2:12
In my view it's not really correct to talk about an "energy-entropy competition". I guess the reason people tend to think in those terms is because physicists like to speak of systems at a constant temperature minimising the Helmholtz free energy $A=U - TS$. This looks like it's an energy term ($U$) added to a term involving the entropy $S$, with the temperature acting as a kind of conversion factor. So it's easy to think that the system must be trying to minimise its energy and maximise its entropy at the same time, putting both factors in competition with each other.
But really that's not what's happening at all. What's happening is that the system we're interested in is exchanging energy with a heat bath in order to remain at a constant temperature, and the entropy of the combined system (system+heat bath) is increasing towards a maximum. If an amount $\Delta U$ of energy is transferred from the heat bath to the system of interest, causing a change $\Delta S$ in the entropy of the system of interest, then the total entropy has changed by $\Delta S - \Delta U/T$.
Now if we multiply this by $-T$ we get $\Delta U - T\Delta S$, which is equal to $\Delta A$, the change in Helmholtz free energy. Since the total entropy must be maximised, and since $T$ is assumed constant, $A$ must therefore be minimised. But now we can see that not only is the $TS$ term an entropy, but so is the $U$ one. The change in $U$ just represents the change in the entropy of the heat bath - it's just that multiplying it by $T$ has obscured that. The competition is not between the system's energy and its entropy, but between the system's entropy and the heat bath's entropy. The reason for multiplying by $-T$ is purely historical and has always struck me as rather unhelpful. Apart from anything else it's only permissible if $T$ is constant, whereas the $\Delta S - \Delta U/T$ formula works even if it isn't.
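This bookkeeping is easy to check numerically: with made-up $(U, S)$ values for a handful of candidate states, maximizing $\Delta S - \Delta U/T$ selects exactly the state that minimizes $A = U - TS$.

```python
# Illustrative check (state values are made up): maximizing the total entropy
# ΔS_total = ΔS − ΔU/T picks out the same state as minimizing A = U − T·S.
T = 300.0  # K, fixed by the heat bath

# candidate states of the system: (U in joules, S in joules per kelvin)
states = [(0.0, 0.0), (100.0, 0.5), (240.0, 1.0), (600.0, 1.5)]

by_free_energy   = min(states, key=lambda s: s[0] - T * s[1])
by_total_entropy = max(states, key=lambda s: s[1] - s[0] / T)
print(by_free_energy == by_total_entropy)  # → True
```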
Having said all that I can now try to answer the question. What we need is an example of a system where the $TS$ term dominates over the $U$ one. One way to do this is just to prevent any heat from being transferred between the system and the heat bath. For any system that's completely isolated from its environment, the $\Delta U/T$ term will go to zero and only the $\Delta S$ term will be left. So any chemical reaction or other physical process performed in a thermally insulated container will be entirely dominated by the entropy term.
But perhaps a more satisfying example would be an endothermic process such as cooking an egg. In this case heat flows from the heat bath into the system ($\Delta U > 0$), so the bath's entropy change $-\Delta U/T$ is negative. This is offset by a greater increase in the entropy of the system - so we can say the entropy of the system has "won" over the entropy of the heat bath. Or in free energy terms, the $TS$ term has "won" over the $U$ one.
-
http://mathhelpforum.com/calculus/24861-solids-revolution.html
# Thread:
1. ## Solids of Revolution
Could use some help on this one:
To make a doughnut for breakfast, rotate about the y-axis the disk bounded by (x-7)^2 + y^2 = 9 , centered at (7,0). Write a definite integral that gives the volume of your breakfast. Evaluate the integral.
Thanks!
2. To find the volume of a doughnut generated by revolving a disk of radius $r$ about a line at a distance $b$ from the center of the disk, use the shell integral below; its value agrees with the Theorem of Pappus, $V = 2\pi b \cdot \pi r^2$. Here the disk has radius $r = 3$ and its center is $b = 7$ units from the y-axis.
$V=2{\pi}\int_{-r}^{r}(b-x)(2\sqrt{r^{2}-x^{2}})dx$
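With $r=3$ and $b=7$ the odd term integrates to zero and the even term is $b$ times the area of the disk, so the integral evaluates to $2\pi \cdot b \cdot \pi r^2 = 2\pi^2 b r^2 = 126\pi^2 \approx 1243.57$. A quick numerical check (my own sketch):

```python
import math

# Midpoint-rule check of V = 2π ∫_{-r}^{r} (b − x) · 2√(r² − x²) dx, r = 3, b = 7.
r, b = 3.0, 7.0
n = 100000
dx = 2 * r / n
V = 0.0
for k in range(n):
    x = -r + (k + 0.5) * dx
    V += (b - x) * 2 * math.sqrt(r * r - x * x) * dx
V *= 2 * math.pi

print(V)                           # ≈ 1243.57
print(2 * math.pi**2 * b * r**2)   # 126π², the Pappus value
```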
http://physics.stackexchange.com/questions/tagged/differentiation+quantum-mechanics
# Tagged Questions
### Derivatives of operators
How do derivatives of operators work? Do they act on the terms in the derivative or do they just get "added to the tail"? Is there a conceptual way to understand this? For example: say you had the ...
### Notation for differential operators and wave function math
I know that $[\frac {d^2}{dx^2}]\psi$ is $\frac {d^2\psi}{dx^2}$ but what about this one $[\frac {d^2\psi}{dx^2}]\psi^*$? Is it this like $\frac {d^2\psi\psi^*}{dx^2}$ or this like \$\frac ...
### Which Schrodinger equation is correct?
In the coordinate representation, in 1D, the wave function depends on space and time, $\Psi(x,t)$, accordingly the time dependent Schrodinger equation is H\Psi(x,t) = ...
http://math.stackexchange.com/questions/138946/two-homotopy-colimit-questions
# Two Homotopy Colimit Questions
I have two questions about homotopy colimits:
1. What can we say about $\operatorname{hocolim}_j\operatorname{colim}_i F(i,j)$? Iterated homotopy colimits commute, but what can we say when the inner one is a regular colimit? In the case I'm looking at, the homotopy colimit is a homotopy pushout and the colimit is filtered, but I'm quite interested in the general case too. An answer in either the general model theoretic case or the specific case of spaces would be interesting to me.
2. Since $\Sigma X$ is the homotopy pushout of $*\leftarrow X\to *$ and homotopy colimits commute, it follows that if $F\colon\mathbf I\to\mathbf{Top}$, then $$\operatorname{hocolim}_i \Sigma F(i)=\Sigma(\operatorname{hocolim}_i F(i))$$ However, it seems to me that we should be able to get this result using the fact that $\Sigma$ is a Quillen left adjoint functor. Left adjoints preserve colimits, and it seems reasonable to suspect that Quillen left adjoints would preserve homotopy colimits. Technically we should be talking about the total left derived functor, so my question should really read: If $T\colon\mathbf C\to\mathbf D$ is a left Quillen functor and $F\colon\mathbf I\to\mathbf C$ is a diagram, is it true that: $$\operatorname{hocolim}_i (\mathbb LT) F(i)=\mathbb LT(\operatorname{hocolim}_i F(i))$$
-
## 2 Answers
One way I've thought about this, if the diagram category is a Reedy category: $\mathrm{hocolim}_iF(i)$ is $\mathrm{colim}G(i)$ where $G$ is a cofibrant replacement for $F$. Then $TG$ is also a cofibrant replacement for $(\mathbb{L}T)F$ (as $T$ sends cofibrations to cofibrations and pushouts to pushouts).
So
$$\mathrm{hocolim}_i (\mathbb{L}T)F(i) = \mathrm{colim}_i TG(i) = T \mathrm{colim}_i G(i) = \mathbb{L}T \mathrm{hocolim}_i F(i).$$
(Edited to make everything homotopical.)
-
Thank you. This was my first thought, but I wasn't sure if TG was the cofibrant replacement for $TF$. I'll have to think about why this happens. – SL2 Apr 30 '12 at 17:27
You need to check two things: 1) That $TG$ is a cofibrant diagram, and 2) That $TG$ is weakly equivalent to $\mathbb{L}TF$. For 1), use that $G$ is a cofibrant diagram and that $T$ preserves colimits, cofibrations and trivial cofibrations. For 2) the $\mathbb{L}$ is necessary since $T$ is not necessarily homotopical. But $\mathbb{L}TF = \mathbb{L}TG = TG$, since $G$ has cofibrant image. – Thomas Belulovich Apr 30 '12 at 17:36
For (1), you might try figuring out whether the natural map $\operatorname{hocolim}_i F(i) \rightarrow \operatorname{colim}_i F(i)$ is an equivalence. For example, you may be in luck and your diagram may be cofibrant in the projective model structure on diagrams.
Alternatively, a homotopy pushout can be constructed using a double mapping cylinder, and it seems not too difficult to verify directly that this commutes with ordinary colimits. You might want to be careful when making a general statement about this, because sometimes when people say hocolim they're thinking of different definitions that are only weakly equivalent; if this is the case then the homotopy type of $\operatorname{colim}_i \operatorname{hocolim}_j F(i,j)$ is not even well-defined. I believe that with the usual bar construction definition of hocolim (which gives a double mapping cylinder in the case you want), at least on a finite diagram, these operations actually do commute. (Hopefully I'm not messing up some subtle point-set topology in saying this.)
-
Hmmm, that's a good point that it isn't even well-defined. So I guess the answer to the first question is 'not much' unless the diagram is cofibrant. – SL2 Apr 30 '12 at 19:11
http://mathoverflow.net/questions/6890/generalizations-of-the-birkhoff-von-neumann-theorem/98983
## Generalizations of the Birkhoff-von Neumann Theorem
The famous Birkhoff-von Neumann theorem asserts that every doubly stochastic matrix can be written as a convex combination of permutation matrices.
The question is to point out different generalizations of this theorem, different "non-generalizations", namely cases where an expected generalization is false, and to briefly describe the context of these generalizations.
A related MO question: http://mathoverflow.net/questions/73805/sampling-from-the-birkhoff-polytope
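For concreteness, the classical statement is effective: the standard constructive proof repeatedly picks a permutation supported on the positive entries (one exists by Hall's theorem) and subtracts off the largest multiple it can. A sketch (brute-force matching, fine for small $n$; my own illustration):

```python
import itertools

def birkhoff_decomposition(M, tol=1e-12):
    """Greedily write a doubly stochastic matrix as a convex combination
    of permutation matrices (brute-force matching; fine for small n)."""
    n = len(M)
    R = [row[:] for row in M]  # residual matrix
    terms = []                 # list of (coefficient, permutation) pairs
    while max(max(row) for row in R) > tol:
        # find a permutation supported on the positive entries of R;
        # Birkhoff's proof guarantees one exists while R is nonzero
        for perm in itertools.permutations(range(n)):
            if all(R[i][perm[i]] > tol for i in range(n)):
                break
        c = min(R[i][perm[i]] for i in range(n))
        terms.append((c, perm))
        for i in range(n):
            R[i][perm[i]] -= c
    return terms

M = [[0.5, 0.3, 0.2],
     [0.2, 0.5, 0.3],
     [0.3, 0.2, 0.5]]
terms = birkhoff_decomposition(M)
print(sum(c for c, _ in terms))  # coefficients sum to 1 (up to rounding)
```

Each step zeroes at least one entry of the residual, so at most $n^2$ permutations appear, recovering Birkhoff's bound up to the sharper $n^2 - 2n + 2$ of Marcus–Ree.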
-
1
These are all very nice answers. To encourage more I start a little bounty where as before I will "accept" one useful answer. – Gil Kalai Dec 4 2009 at 9:54
## 11 Answers
I am cheating a little to give this answer, because I am fairly sure that it is part of Gil's motivation in asking the question. The most natural generalization of the Birkhoff hypothesis to quantum probability is only true for qubits. (It might also be true for a qubit tensor a classical system; I did not check that case.)
A quantum measurable space is a von Neumann algebra. We are most interested in the finite-dimensional case, where classically "measurable space" is just a fancy name for the random variables on a finite set. A finite-dimensional von Neumann algebra is a direct sum of matrix algebras. In particular, $M_2$ is called a qubit and $M_d$ is called a qudit.
To make a long story short, the Birkhoff hypothesis can be stated for a direct sum of $a$ copies of $M_b$, or $aM_b$. In this setting, a doubly stochastic map $E$ is a linear map from $aM_b$ to itself that preserves trace, that preserves the identity element, and that is completely positive. In this setting, $E$ is completely positive if it takes positive semidefinite elements of $aM_b$ to positive semidefinite elements, and if $E \otimes I$ also has that property on the algebra $aM_b \otimes N$ for another von Neumann or $C^*$-algebra $N$. The natural analogue of permutation matrices are the *-algebra automorphisms of $aM_b$. These are permutations of the matrix blocks, composed with maps of the form $E(x) = uxu^*$, where $u$ is a unitary element of $aM_b$. The question as before is whether the doubly stochastic maps are the convex hull of the automorphisms.
This Birkhoff hypothesis is true for $M_2$, false for $M_d$ for $d \ge 3$, and I should check it for $nM_2$. It is true for $aM_1 = a\mathbb{C}$, because then it is the usual Birkhoff-von Neumann theorem.
I am left wondering about two infinite classical versions of Birkhoff's theorem, for the algebras $\ell^\infty(\mathbb{N})$ and $L^\infty([0,1])$. In the former case, one would ask whether any stochastic map that preserves counting measure (even though counting measure is not normalized) is an infinite convex sum of permutations of $\mathbb{N}$. In the latter case, whether any stochastic map that preserves Lebesgue measure is a convex integral of measure-preserving permutations of $[0,1]$. Addendum: At least the discrete infinite case is addressed, with generally positive results, in this review and in this older review. The older paper also raises the continuous question but with no results. However, with some more Googling I found this counterexample paper.
Since Gil asks for a reference, a recent one is Unital Quantum Channels - Convex Structure and Revivals of Birkhoff's Theorem, by Mendl and Wolf.
Here also is a more orthodox combinatorial generalization of the Birkhoff theorem, and also another case that I once encountered that is between a generalization and a non-generalization. Since Gil now offers a bounty, maybe it's better to merge this answer with the other one.
A doubly stochastic matrix can be interpreted as a flow through a directed graph, with unit capacities. (See Unimodular matrix in Wikipedia; I learned about this long ago from Jesus de Loera.) Any such graph has a polytope of flows, called a network flow polytope. Any network flow polytope has integer vertices, because it is a totally unimodular polytope.
A totally unimodular polytope is a polytope whose facets have integer equations, and with the property that any maximal, linearly independent collection of facets intersects in an integer point because their matrix has determinant $\pm 1$. In particular the vertices are such intersections, so the vertices are all integral. This is a vast generalization of Birkhoff's theorem that comes from generalizing one of the proofs of Birkhoff's theorem.
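For concreteness, the classical peeling proof of the Birkhoff-von Neumann theorem turns directly into an algorithm. Here is a short Python sketch (an illustration only; the function name is mine): repeatedly find a perfect matching in the support of the matrix, subtract the largest multiple of the corresponding permutation matrix that keeps every entry nonnegative, and record the pieces.

```python
import numpy as np

def birkhoff_decomposition(M, tol=1e-9):
    """Write a doubly stochastic matrix M as sum_k w_k P_k, where the
    P_k are permutation matrices, by repeatedly peeling off a
    permutation supported on the positive entries of M."""
    M = M.astype(float).copy()
    n = M.shape[0]
    pieces = []
    while M.max() > tol:
        # Find a perfect matching in the support of M via Kuhn's
        # augmenting-path algorithm; one exists by Hall's theorem as
        # long as M is a nonnegative multiple of a doubly stochastic
        # matrix.
        match = [-1] * n  # match[c] = row currently matched to column c

        def augment(r, seen):
            for c in range(n):
                if M[r, c] > tol and not seen[c]:
                    seen[c] = True
                    if match[c] == -1 or augment(match[c], seen):
                        match[c] = r
                        return True
            return False

        for r in range(n):
            augment(r, [False] * n)
        perm = [0] * n
        for c in range(n):
            perm[match[c]] = c  # row match[c] is sent to column c
        # Peel off the largest multiple of this permutation matrix
        # that keeps every entry of M nonnegative.
        w = min(M[r, perm[r]] for r in range(n))
        for r in range(n):
            M[r, perm[r]] -= w
        pieces.append((w, tuple(perm)))
    return pieces
```

Applied to a doubly stochastic matrix, the recorded weights sum to 1 and the permutations reconstruct the input; the peeling argument uses at most $(n-1)^2+1$ permutations, by Carathéodory's theorem applied to the $(n-1)^2$-dimensional Birkhoff polytope.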
Example: An alternating-sign matrix is equivalent to a square ice orientation of a square grid. The square ice orientations can be defined by a network flow, so you obtain an alternating-sign-matrix polytope. The generalized Birkhoff theorem in this case says that every vertex of the polytope is an alternating-sign matrix, in fact that every integer point of the $n$-dilated polytope is a sum of $n$ alternating-sign matrices.
The other case that I encountered was the polytope of fractional perfect matchings of a non-bipartite set with $2n$ elements. By contrast, the Birkhoff polytope is the case of a bipartite set with $n$ elements of each type. By definition, it is the polytope of non-negative weights assigned to the edges of the complete graph on $2n$ vertices, such that the total weight at each vertex is 1. Strictly speaking, the Birkhoff theorem is false; not every vertex is a perfect matching. Instead, all of the vertices are combinations of matched pairs, and odd cycles with weight $\frac12$.
At first glance this looks like bad news for the application of computing a perfect matching or the optimum perfect matching of a graph. Indeed, if instead you take the convex hull of the perfect matchings, the result is a polytope with exponentially many facets. However, a good algorithm exists anyway; there is a version of the simplex algorithm that only ever uses polynomially many of the facets.
-
Actually the motivation for the question is a recent extension of the Birkhoff-von Neumann theorem by Ellis, Friedgut and Pilpel that I heard yesterday. I thought it could be useful to try to collect various such generalizations from various directions and it is also true that Greg and I discussed over the years no less than three different directions where this theorem is extended, one of which is the "non-generalization" Greg mentions. (Greg, is there any reference or link?) (In any case, what is "cheating" about it?) – Gil Kalai Nov 27 2009 at 11:59
It is cheating in the sense that you already knew this answer. – Greg Kuperberg Nov 27 2009 at 17:15
Here is a problem that contains interesting links imaph.tu-bs.de/qi/problems/30.html – Guillaume Aubrun Dec 6 2009 at 19:17
One recent extension of Birkhoff's theorem is by Ellis, Friedgut and Pilpel. The context is Erdos-Ko-Rado theorems for permutations. See Section 5 and especially Definition 17 and Theorem 29 of this paper.
Let me also mention this paper by Jessica Striker regarding the alternating-sign-matrix polytope.
A conjecture extending the Birkhoff-von Neumann theorem to finite Coxeter groups (the A_n family gives the original case), which is proved in all cases but one, can be found in this paper by N. McCarthy, D. Ogilvie, I. Spitkovsky, and N. Zobin.
-
There is a relevant paper by Gromova regarding high dimensional matrices:
Gromova, M. B., The Birkhoff-von Neumann theorem for polystochastic matrices. (Russian) Operations research and statistical simulation, No. 2 (Russian), pp. 3–15, 149. Izdat. Leningrad. Univ., Leningrad, 1974. It was translated to English in the early 90s.
The MR review by George P. Barker reads: In Section 1 the author gives the basic definitions and introduces a notion of the spectrum of a multidimensional matrix as a vector. The basic theorem which gives necessary and sufficient conditions for a vector to be the spectrum of an extremal matrix is then formulated. The necessary condition of the basic theorem consists of a direct generalization of the Birkhoff-von Neumann theorem.
A related paper is: Brualdi, R. A.; Csima, J. Extremal plane stochastic matrices of dimension three. Linear Algebra and Appl. 11 (1975), no. 2, 105–133.
and an older paper with some relevance is: Jurkat, W. B.; Ryser, H. J. Extremal configurations and decomposition theorems. I. J. Algebra 8 1968 194–222.
-
The remaining case is actually H4. For this case the conjecture is false, since I found 1063 orbits of facets. In the same paper, it is claimed that for F4 the convex hull is given by the Birkhoff tensors. But this is false since I found another orbit defined by a matrix of rank 3. See details there
-
This is remarkable. Thanks, Mathieu! – Gil Kalai Feb 11 2011 at 15:12
Dave Perkinson and coauthors have studied sub-polytopes of the Birkhoff-von Neumann polytope, in the sense that they consider convex hulls of the permutation matrices corresponding to certain subgroups of $S_n$. See, e.g., his paper with Jeff Hood for the case $A_n$.
-
Not sure if this is what you're looking for, but the statement for symmetric doubly stochastic matrices is that every such one can be written as a convex combination of $(\sigma + \sigma^t)/2$ where $\sigma$ is a permutation matrix. Not all of these are necessarily vertices, but it definitely is not an integral polytope, so maybe this would be a "non-generalization."
-
There exists a "true" generalization within quantum theory given by John Watrous in arXiv 0807.2668v1, where it is shown that a mixture of a doubly stochastic channel with a completely depolarizing channel satisfies the "quantum version" of Birkhoff's theorem (convex combination of unitary channels).
-
This is a weaker result: The convex hull of the unitary channels contains a homothetically shrunken copy of all of the doubly stochastic channels. – Greg Kuperberg Nov 28 2009 at 16:08
I have found a paper on a generalization of the Birkhoff-von Neumann theorem here:
http://cowles.econ.yale.edu/conferences/2009/sum-09/theory/che.pdf
The authors are Eric Budish, Yeon-Koo Che, Fuhito Kojima, and Paul Milgrom.
Here is the Abstract:
The Birkhoff-von Neumann Theorem shows that any bistochastic matrix can be written as a convex combination of permutation matrices. In particular, in a setting where n objects must be assigned to n agents, one object per agent, any random assignment matrix can be resolved into a deterministic assignment in accordance with the specified probability matrix. We generalize the theorem to accommodate a complex set of constraints encountered in many real-life market design problems. Specifically, the theorem can be extended to any environment in which the set of constraints can be partitioned into two hierarchies. Further, we show that this bihierarchy structure constitutes a maximal domain for the theorem, and we provide a constructive algorithm for implementing a random assignment under bihierarchical constraints. We provide several applications, including (i) single-unit random assignment, such as school choice; (ii) multi-unit random assignment, such as course allocation and fair division; and (iii) two-sided matching problems, such as the scheduling of inter-league sports matchups. The same method also finds applications beyond economics, generalizing previous results on the minimize-makespan problem in the computer science literature.
I have also found a master's thesis that involves a generalization from a matrix to a hypermatrix, a matrix in higher dimensions. So one example would be a cubic array of numbers instead of a square. He proves a generalization to the three-dimensional matrices, which are called blocks. There are some open questions there as well. I found it interesting, as I have wondered about extending the two dimensions of matrices to three coordinates and seeing what happened. It is available here:
https://ritdml.rit.edu/dspace/bitstream/1850/5967/1/NReffThesis05-18-2007.pdf
-
It would have been nice if you quoted a sentence or two from the abstract, or gave us a notion what the paper is about. Life is too short to download PDFs just in case there might be something of interest there. – Harald Hanche-Olsen Nov 26 2009 at 20:28
I have added the abstract – Kristal Cantwell Nov 27 2009 at 2:27
Dear Kristal, the link is broken; can you add the name of the author? – Gil Kalai Feb 11 2011 at 15:15
I have replaced the URL and added the names of the authors. – Kristal Cantwell Feb 11 2011 at 19:55
The book Combinatorial Matrix Classes by Richard Brualdi has a number of generalizations of a combinatorial nature, around Chapter 8. Also, the least common denominator of the vertices of a rational polytope is related to the period of the Ehrhart quasipolynomial, see for example Exercise 3.25 in the book of Beck and Robins, http://math.sfsu.edu/beck/papers/noprint.pdf .
-
Thanks, Brendan, best --Gil – Gil Kalai Aug 28 2011 at 17:54
The asymptotic quantum Birkhoff conjecture by Smolin, Verstraete, and Winter was disproved by Haagerup and Musat. The following paper http://www.iro.umontreal.ca/~qip2012/SUBMISSIONS/short/qip2012_submission_105.pdf presents the state of the art and gives some references.
-
The paper DESIGNING RANDOM ALLOCATION MECHANISMS: THEORY AND APPLICATIONS by Eric Budish, Yeon-Koo Che, Fuhito Kojima, and Paul Milgrom also describes an extension of the Birkhoff-von Neumann theorem.
This is related to the following: there is some relation between the Birkhoff-von Neumann theorem and Scarf's theorem, which asserts that balanced games have a nonempty core. The fact that the house allocation game is balanced follows directly from the B-vN theorem; showing that the game describing the stable-marriage problem is balanced requires a certain generalization.
-
http://nrich.maths.org/4722
# An Introduction to Differentiation
##### Stage: 4 and 5
Article by Vicky Neale
This article is a gentle introduction to differentiation, a tool that we shall use to find gradients of graphs. It is intended for someone with no knowledge of calculus, so should be accessible to a keen GCSE student or a student just beginning an A-level course. There are a few exercises. Where you need the answer for later parts of the article, solutions are provided, but you are strongly encouraged to try the questions as you go: none of them is particularly hard, and you will get a much better idea of what is going on if you try things out for yourself. Use the solutions to check your answers, rather than to avoid doing the questions!
To work out how fast someone has travelled, knowing how far they went and how long it took them, we work out $$\textrm{speed} = \frac{\textrm{distance}}{\textrm{time}}.$$ On a distance-time graph, this is equivalent to working out the gradient.
If the person was travelling at a constant speed, then the graph will be a straight line, and so it's quite easy to work out the gradient. For example, in the graph above we can work out the gradient of each straight line section. But what if they were travelling at varying speeds? Then the graph will be a curve, and it's not quite so obvious how we can get the gradient.
To find the gradient at a particular point, we need to work out the gradient of the tangent to the graph at that point - that is, the gradient of the straight line that just touches the graph there.
Note that a straight line has the same gradient all the way along, whereas a curve has a varying gradient; we find the gradient at some specified point.
But actually trying to draw this tangent is both fiddly and inaccurate. What would be really useful would be a more precise way of working out the gradient of a curve at a particular point. We have such a formula when the curve is a straight line: you may be used to the expression "(change in $y$)/(change in $x$)". But to do something similar for a curve, we're going to need differentiation.
The idea of differentiation is that we draw lots of chords, that get closer and closer to being the tangent at the point we really want. By considering their gradients, we can see that they get closer and closer to the gradient we want.
Do you agree that if we could work out the gradients of different chords as they approximate the tangent better and better, and if they tend to a limit, then we could work out the gradient of the tangent? By "tend to a limit", I mean that they get closer and closer, and in fact get as close as we like.
For example, suppose I had chords that got closer and closer to the tangent, and their gradients were 1, $\frac{1}{2}$, $\frac{1}{4}$, $\frac{1}{8}$, $\frac{1}{16}$, $\ldots$. Do you see that these are getting closer and closer to $0$, and no matter how close I want to get, I can find a chord with a gradient that close? I'm deliberately being a little bit vague here, because making this rigorous is quite hard (it comes up in the first year of most university maths courses), but as long as you get the general idea of what "tends to a limit" means, that's fine for now. Ok, so we've got the general principle. But can we actually use it? Let's have a go with a fairly nice curve: $y=x^2$.
#### Exercise 1
(i) Sketch the curve $y=x^2$ - you'll need a nice, large graph for the next part, so fill the piece of paper! You could find several points on the curve and join them with a nice smooth curve, or perhaps you could use a graphic calculator or graphing software on a computer (but you'll need a printout for part (ii)).
(ii) Try to work out the gradient at some points, by drawing tangents on your graph as well as you can. Try several different points, and see whether you can spot a pattern.
Let's be bold, and try to find the gradient at a general point $A$ at $(x,y)=(x,x^2)$. To do this, we're going to need another point $B$ at $(x+h,y+k)=(x+h,(x+h)^2)$. Remember the idea? We're going to find the gradient of the chord between $A$ and $B$, and then we're going to let $h$ tend to $0$ (that is, we'll move $B$ closer and closer to $A$) and see whether we can figure out the limit of the gradients.
What is the gradient of the chord $A B$? Well, the chord is just a straight line, so its gradient is (change in $y$)/(change in $x$). The change in $x$ is easy: that's just $h$. What about the change in $y$? Well, the $y$-value at $A$ is $x^2$, and the $y$-value at $B$ is $(x+h)^2$, so the change is $(x+h)^2-x^2=x^2+2h x+h^2-x^2=2h x+h^2$ (multiply out the brackets yourself if you're not sure about this!). So the gradient of the chord $AB$ is $(2h x+h^2)/h = 2x+h$. So far, so good. Now, as $h$ tends to 0, can you see that $2x+h$ is going to tend to $2x$? So as we move $B$ towards $A$, the gradients of the chords tend to $2x$, so the gradient of the curve at the point $(x,y)$ is $2x$. And I never got my pencil and ruler out to actually draw some tangents!
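If you have a computer handy, you can also watch this limit appear numerically. Here is a short Python sketch (just an illustration, not part of the argument) that works out the gradients of chords of $y=x^2$ at $x=3$ for smaller and smaller values of $h$:

```python
def chord_gradient(f, x, h):
    # Gradient of the chord joining (x, f(x)) and (x + h, f(x + h)).
    return (f(x + h) - f(x)) / h

f = lambda x: x ** 2
for h in [1.0, 0.1, 0.01, 0.001]:
    print(h, chord_gradient(f, 3.0, h))
```

Each printed gradient is exactly $2x+h=6+h$ (up to rounding), so the values close in on the limit $6$, in agreement with the formula $2x$.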
#### Exercise 2
How does this answer compare with your experimentation in Exercise 1?
Now let's try a curve that's a little bit more complicated (but not much): $y=x^3$.
#### Exercise 3
Repeat Exercise 1, but this time using $y=x^3$.
For this curve, our general point $A$ is going to be $(x,y)=(x,x^3)$, and our point $B$ will be $(x+h,y+k)=(x+h,(x+h)^3)$. What's the gradient of $A B$? The change in $x$ is just $h$, again.
#### Exercise 4
By multiplying out the brackets (or using the Binomial Theorem, if you know about this), work out $(x+h)^3$.
This time the change in $y$ is $(x+h)^3-x^3=x^3+3h x^2+3h^2 x+h^3-x^3=3h x^2+3h^2 x+h^3$. So the gradient of $A B$ is $(3h x^2+3h^2 x+h^3)/h=3x^2+3h x+h^2$. Now, what happens as $h$ tends to 0? Well, certainly the $h^2$ bit is going to tend to $0$. (Are you happy with this?) But also, so is the $3h x$ bit - even if $x$ is quite big, when $h$ gets absolutely tiny, $3h x$ is going to be pretty small. Try this with some numbers if you don't believe me! So as $h$ tends to 0, the gradient of $A B$ tends to $3x^2$, so this is the gradient of $y=x^3$ at $(x,y)$.
#### Exercise 5
Compare this answer with your experimentation in Exercise 3.
#### Exercise 6
Work out $(x+h)^4$ (I promise not to do any more of these, but this one shouldn't be too bad!).
#### Exercise 7
Using the ideas from above and your answer to Exercise 6, work out the gradient of $y=x^4$ at $(x,y)$.
#### Exercise 8
Draw up a table like this one:
| $y=$ | Gradient |
| --- | --- |
| $x^2$ | $2x$ |
| $x^3$ | |
| $x^4$ | |
| $x^n$ | |
Fill in the answers you've got so far. Can you spot a pattern? Can you guess what the gradient's going to be for $y=x^n$?
You may by now have spotted that to do this more generally we're going to need to work out $(x+h)^n$. To do this properly, we'd need the Binomial Theorem. I'm not going to go into details about that now; instead, we're going to cheat slightly (but I promise it does work really!). Hopefully you worked out $(x+h)^3$ and $(x+h)^4$ earlier. Did you notice that we got something of the form $x^n+n h x^{n-1}+h^2\times(\textrm{some other stuff})$? (Yes, I know, "some other stuff" isn't very mathematical, but that's where we'd use the Binomial Theorem if we were being rigorous.)

This time, our point $A$ is $(x,y)=(x,x^n)$, and our point $B$ is $(x+h,(x+h)^n)$. Again, the change in $x$ is $h$, and when we work out the change in $y$, we're going to get $(x+h)^n-x^n=n h x^{n-1}+h^2(\textrm{some other stuff})$. So when we work out the gradient of $A B$, we're going to have $(n h x^{n-1}+h^2(\textrm{some other stuff}))/h=n x^{n-1}+h(\textrm{some other stuff})$. Now let's think about what happens as $h$ tends to 0. Well, as we hopefully agreed earlier, $h$ times anything fixed is going to tend to 0 as $h$ tends to $0$, and whilst the (some other stuff) isn't actually fixed, the only thing in it that changes is anything involving $h$, so that's just going to get smaller too. So the gradient of $A B$ tends to $n x^{n-1}$ as $h$ tends to $0$, so the gradient of $y=x^n$ is $n x^{n-1}$ at $(x,y)$. Does this agree with your guess in Exercise 8?
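We can also check the formula $n x^{n-1}$ numerically: take a small value of $h$ and compare the chord gradient with the formula. A short Python sketch (again, only an illustration):

```python
def chord_gradient(f, x, h):
    # Gradient of the chord joining (x, f(x)) and (x + h, f(x + h)).
    return (f(x + h) - f(x)) / h

# Compare the chord gradient (with a tiny h) against n * x**(n - 1) at x = 2.
for n in [2, 3, 4, 5]:
    f = lambda t, n=n: t ** n
    print(n, chord_gradient(f, 2.0, 1e-6), n * 2.0 ** (n - 1))
```

The two columns agree to several decimal places; the small discrepancy is the $h(\textrm{some other stuff})$ term, which shrinks as $h$ does.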
#### Exercise 9
Work out the gradients of
(i) $y=2x^2$;
(ii) $y=17x^2$;
(iii) $y=-x^2$;
(iv) $y=a x^2$ where $a$ is some fixed number;
(v) $y=2x^3$;
(vi) $y=a x^3$;
(vii) $y=a x^n$.
#### Exercise 10
What would happen if we tried to work out the gradient of $y=x^3+x^2$? Think carefully about what you'd get if you used the technique above. Now can you work out the gradient of $y=x^n+x^m$ without really doing any work? (If you need to, start writing it all out, and see whether you can spot how to make it easier.)
#### Exercise 11
What happens if you use our rule on a straight line $y=a x+b$? Does this give the answer you'd expect? What about $y=7$, or $y=15$?
#### Exercise 12 (A little harder)
Try using the technique we've used above to work out the gradient of the chord $A B$ on the curve $y=\frac{1}{x}$, and see whether you can work out the gradient of the curve at $(x,y)$. How does this compare with the formula? (Note that $y=\frac{1}{x}=x^{-1}$, so you can substitute $n=-1$ into the formula above, although we haven't actually proved that it should work, because we don't know what $(x+h)^{-1}$ is.)
This technique we've developed to find the gradient of a curve is called differentiation. Hopefully you now understand how to differentiate any polynomial. You don't have to do it from first principles each time: once we've proved the basic results, we can just quote the fact that $a x^n$ differentiates to $n a x^{n-1}$ and so on. It's possible to differentiate other curves too; for example, we could find the gradient of the curves $y=\frac{1}{x^n}$ (maybe you've already guessed how to do this), $y=\sin x$, or $y=2^x$. However, these require a little bit more technical machinery, so we'll leave them for now.
As a quick aside, let's very briefly mention integration, as it's the 'other' part of calculus that comes up at A-level, although we shan't go into any details here. Let's imagine a slightly different scenario: here, we know how fast someone travelled, and how long for, and want to work out how far they went. This time we use $$\textrm{distance} = \textrm{speed}\times\textrm{time}$$ (just rearranging the formula from above). This time, we could use a speed-time graph and work out the area under the graph to find the distance. If the lines surrounding the region are all straight, then this isn't too hard - you've probably done questions like this that involve you having to find the areas of triangles, rectangles and trapezia.
But what if the line is curved? You might have come across the idea of approximating the area by roughly splitting it into triangles, rectangles, and trapezia,
but this effectively means pretending that the curve is made up of several straight sections, and this is never going to be precise. We can use integration to find the area under the curve without this approximation.
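To see how the straight-sided approximation behaves, here is a short Python sketch (an illustration only) that approximates the area under $y=x^2$ between $x=0$ and $x=3$ using more and more trapezia:

```python
def trapezium_area(f, a, b, n):
    # Approximate the area under y = f(x) between a and b
    # using n trapezia of equal width.
    width = (b - a) / n
    total = 0.0
    for i in range(n):
        x0 = a + i * width
        x1 = x0 + width
        total += 0.5 * (f(x0) + f(x1)) * width
    return total

f = lambda x: x ** 2
for n in [1, 10, 100, 1000]:
    print(n, trapezium_area(f, 0.0, 3.0, n))
```

The approximations close in on $9$, which is the exact area that integration gives; no finite number of trapezia gets there exactly, which is why the limiting process of integration is needed.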
I've said that we use differentiation to find speed on a distance-time graph, and integration on a speed-time graph. This sort of suggests that they're related - a little bit like the link between addition and subtraction, where we can use one to "undo" the other. There is a theorem called the Fundamental Theorem of Calculus (sounds impressive, doesn't it?!) that explains this relationship more precisely, and that's why I wanted to mention integration briefly too.
Differentiation (and calculus more generally) is a very important part of mathematics, and comes up in all sorts of places, not only in mathematics but also in physics (and the other sciences), engineering, economics, $\ldots$ The list goes on!
The NRICH Project aims to enrich the mathematical experiences of all learners. To support this aim, members of the NRICH team work in a wide range of capacities, including providing professional development for teachers wishing to embed rich mathematical tasks into everyday classroom practice. More information on many of our other activities can be found here.
http://scicomp.stackexchange.com/questions/3521/why-does-matlabs-quadprog-outperform-mosek-for-my-problem
# Why does MATLAB's quadprog outperform MOSEK for my problem?
For a problem I am trying to solve it appears MOSEK's Quadratic Program solver is 100 times slower than MATLAB's Interior Point solver.
Has anyone encountered this behavior in the past, or maybe could guess what sort of problem might cause this behavior?
The problem is of the form:
\begin{align} \text{min }& 0.5 x^T Q x + c^T x \\ \text{s.t. }& A x \leq b \end{align}
With more linear constraints than variables.
-
Can you give an example problem where you are seeing this performance difference, as well as the version of MOSEK you are using and hardware? I'm happy to invite a MOSEK developer on here to respond but your question will need a bit more detail. – Aron Ahmadia Oct 23 '12 at 8:38
I've tried to clean your question up to make it more useful to future visitors, as it was first stated it was pretty unclear what you were asking. Please try to spend time making your question both detailed and general enough that it will be of future use to other users. – Aron Ahmadia Oct 24 '12 at 1:26
Thanks :) of course, I would have put emphasis on the form of the problem if I knew that the primal\dual selection is critical. – noam Oct 24 '12 at 1:28
## 1 Answer
For problems of this form, you should solve the dual problem using MOSEK. In some cases this can provide several orders of magnitudes of speedup. MOSEK is tuned for the more common case
\begin{align} \text{min }& 0.5 x^T Q x + c^T x \\ \text{s.t. }& A x = b \\ & x \geq 0 \end{align}
where there are many more variables than constraints.
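To illustrate what dualizing by hand looks like (a sketch, assuming $Q$ is positive definite so that $Q^{-1}$ exists): minimizing the Lagrangian $L(x,\lambda) = 0.5 x^T Q x + c^T x + \lambda^T (A x - b)$ over $x$ gives $x = -Q^{-1}(c + A^T \lambda)$, and substituting back yields the dual problem

\begin{align} \text{max }& -0.5\,(c + A^T \lambda)^T Q^{-1} (c + A^T \lambda) - b^T \lambda \\ \text{s.t. }& \lambda \geq 0, \end{align}

which has one variable per inequality constraint and only nonnegativity constraints, matching the form above.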
If you contact MOSEK support at [email protected] and are willing to give us your problem, then we can most likely tell what you should change to get better performance from MOSEK. If you are not willing to provide any information (size and density of $Q$ and $A$, or other special structure to the problem), then it is hard to help you.
-
Your guess is correct! Indeed my problem is more of the first form. However, when I add the line param.MSK_IPAR_INTPNT_SOLVE_FORM=res.symbcon.MSK_SOLVE_DUAL; to make mosek solve the dual problem, I still get Optimizer - solved problem : the primal and no improvement in times – noam Oct 23 '12 at 22:53
@user3207, Erling, thanks for the detailed answer (and the astute guesswork). I've modified noam's question to provide the details he should have originally put in, and excised them from your answer. Feel free to edit further if you'd like to fix anything. – Aron Ahmadia Oct 24 '12 at 1:29
MOSEK only dualize linear problems. We should print out warning when we do not dualize as requested. The upcoming version 7 does that. It is not that hard to dualize a QP by hand. It is shown in the MOSEK manuals for instance. – user3207 Oct 24 '12 at 13:06
@user3207 Thanks! – noam Oct 24 '12 at 14:38
http://terrytao.wordpress.com/tag/mirror-symmetry/
What’s new
Updates on my research and expository papers, discussion of open problems, and other maths-related topics. By Terence Tao
## At the AustMS conference
2 October, 2009 in math.AT, math.GR, math.GT, math.PR, talk, travel | Tags: AustMS, fundamental group, Langlands program, mirror symmetry, random groups | by Terence Tao | 17 comments
This week I was in my home town of Adelaide, Australia, for the 2009 annual meeting of the Australian Mathematical Society. This was a fairly large meeting (almost 500 participants). One of the highlights of such a large meeting is the ability to listen to plenary lectures in fields adjacent to one’s own, in which speakers can give high-level overviews of a subject without getting too bogged down in the technical details. From the talks here I learned a number of basic things which were well known to experts in that field, but which I had not fully appreciated, and so I wanted to share them here.
The first instance of this was from a plenary lecture by Danny Calegari entitled “faces of the stable commutator length (scl) ball”. One thing I learned from this talk is that in homotopy theory, there is a very close relationship between topological spaces (such as manifolds) on one hand, and groups (and generalisations of groups) on the other, so that homotopy-theoretic questions about the former can often be converted to purely algebraic questions about the latter, and vice versa; indeed, it seems that homotopy theorists almost think of topological spaces and groups as being essentially the same concept, despite looking very different at first glance. To get from a space ${X}$ to a group, one looks at homotopy groups ${\pi_n(X)}$ of that space, and in particular the fundamental group ${\pi_1(X)}$; conversely, to get from a group ${G}$ back to a topological space one can use the Eilenberg-Maclane spaces ${K(G,n)}$ associated to that group (and more generally, a Postnikov tower associated to a sequence of such groups, together with additional data). In Danny’s talk, he gave the following specific example: the problem of finding the least complicated embedded surface with prescribed (and homologically trivial) boundary in a space ${X}$, where “least complicated” is measured by genus (or more precisely, the negative component of Euler characteristic), is essentially equivalent to computing the commutator length of the element in the fundamental group ${\pi_1(X)}$ corresponding to that boundary (i.e. the least number of commutators one is required to multiply together to express the element); and the stable version of this problem (where one allows the surface to wrap around the boundary ${n}$ times for some large ${n}$, and one computes the asymptotic ratio between the Euler characteristic and ${n}$) is similarly equivalent to computing the stable commutator length of that group element.
(Incidentally, there is a simple combinatorial open problem regarding commutator length in the free group, which I have placed on the polymath wiki.)
This theme was reinforced by another plenary lecture by Ezra Getzler entitled “${n}$-groups”, in which he showed how sequences of groups (such as the first ${n}$ homotopy groups ${\pi_1(X),\ldots,\pi_n(X)}$) can be enhanced into a more powerful structure known as an ${n}$-group, which is more complicated to define, requiring the machinery of simplicial complexes, sheaves, and nerves. Nevertheless, this gives a very topological and geometric interpretation of the concept of a group and its generalisations, which are of use in topological quantum field theory, among other things.
Mohammed Abouzaid gave a plenary lecture entitled “Functoriality in homological mirror symmetry”. One thing I learned from this talk was that the (partially conjectural) phenomenon of (homological) mirror symmetry is one of several types of duality, in which the behaviour of maps into one mathematical object ${X}$ (e.g. immersed or embedded curves, surfaces, etc.) is closely tied to the behaviour of maps out of a dual mathematical object ${\hat X}$ (e.g. functionals, vector fields, forms, sections, bundles, etc.). A familiar example of this is in linear algebra: by taking adjoints, a linear map into a vector space ${X}$ can be related to an adjoint linear map mapping out of the dual space ${X^*}$. Here, the behaviour of curves in a two-dimensional symplectic manifold (or more generally, Lagrangian submanifolds in a higher-dimensional symplectic manifold) is tied to the behaviour of holomorphic sections on bundles over a dual algebraic variety, where the precise definition of “behaviour” is category-theoretic, involving some rather complicated gadgets such as the Fukaya category of a symplectic manifold. As with many other applications of category theory, it is not just the individual pairings between an object and its dual which are of interest, but also the relationships between these pairings, as formalised by various functors between categories (and natural transformations between functors). (One approach to mirror symmetry was discussed by Shing-Tung Yau at a distinguished lecture at UCLA, as transcribed in this previous post.)
There was a related theme in a talk by Dennis Gaitsgory entitled “The geometric Langlands program”. From my (very superficial) understanding of the Langlands program, the behaviour of specific maps into a reductive Lie group ${G}$, such as representations in ${G}$ of a fundamental group, étale fundamental group, class group, or Galois group of a global field, is conjecturally tied to specific maps out of a dual reductive Lie group ${\hat G}$, such as irreducible automorphic representations of ${\hat G}$, or of various structures (such as derived categories) attached to vector bundles on ${\hat G}$. There are apparently some tentatively conjectured links (due to Witten?) between Langlands duality and mirror symmetry, but they seem at present to be fairly distinct phenomena (one is topological and geometric, the other is more algebraic and arithmetic). For abelian groups, Langlands duality is closely connected to the much more classical Pontryagin duality in Fourier analysis. (There is an analogue of Fourier analysis for nonabelian groups, namely representation theory, but the link from this to the Langlands program is somewhat murky, at least to me.)
Related also to this was a plenary talk by Akshay Venkatesh, entitled “The Cohen-Lenstra heuristics over global fields”. Here, the question concerned the conjectural behaviour of class groups of quadratic fields, and in particular sought to explain the numerically observed phenomenon that about ${75.4\%}$ of all quadratic fields ${{\Bbb Q}[\sqrt{d}]}$ (with $d$ prime) enjoy unique factorisation (i.e. have trivial class group). (Class groups, as I learned in these two talks, are arithmetic analogues of the (abelianised) fundamental groups in topology, with Galois groups serving as the analogue of the full fundamental group.) One thing I learned here was that there was a canonical way to randomly generate a (profinite) abelian group, by taking the product of randomly generated finite abelian ${p}$-groups for each prime ${p}$. The way to canonically randomly generate a finite abelian ${p}$-group is to take large integers ${n, d}$, and look at the cokernel of a random homomorphism from ${({\mathbb Z}/p^n{\mathbb Z})^d}$ to ${({\mathbb Z}/p^n{\mathbb Z})^d}$. In the limit ${n,d \rightarrow \infty}$ (or by replacing ${{\mathbb Z}/p^n{\mathbb Z}}$ with the ${p}$-adics and just sending ${d \rightarrow \infty}$), this stabilises and generates any given ${p}$-group ${G}$ with probability
$\displaystyle \frac{1}{|\hbox{Aut}(G)|} \prod_{j=1}^\infty (1 - \frac{1}{p^j}), \ \ \ \ \ (1)$
where ${\hbox{Aut}(G)}$ is the group of automorphisms of ${G}$. In particular this leads to the strange identity
$\displaystyle \sum_G \frac{1}{|\hbox{Aut}(G)|} = \prod_{j=1}^\infty (1 - \frac{1}{p^j})^{-1} \ \ \ \ \ (2)$
where ${G}$ ranges over all finite abelian ${p}$-groups; I do not know how to prove this identity other than via the above probability computation, the proof of which I give below the fold.
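The trivial-group case of (1) is easy to test numerically: the cokernel of a random homomorphism is trivial exactly when the matrix is invertible over ${{\mathbb Z}/p^n{\mathbb Z}}$, which happens if and only if its reduction mod $p$ has full rank, so the probability should approach $\prod_{j=1}^\infty (1-p^{-j})$, which is (1) with ${|\hbox{Aut}(G)|=1}$. A Monte Carlo sketch in Python (not from the talk; the matrix size $d$, modulus $p$, and trial count are arbitrary illustrative choices):

```python
import numpy as np

def rank_mod_p(M, p):
    """Rank of an integer matrix over the field Z/pZ (p prime),
    computed by Gaussian elimination."""
    M = M.copy() % p
    rows, cols = M.shape
    r = 0
    for c in range(cols):
        pivot = next((i for i in range(r, rows) if M[i, c]), None)
        if pivot is None:
            continue
        M[[r, pivot]] = M[[pivot, r]]                 # move pivot row up
        M[r] = (M[r] * pow(int(M[r, c]), -1, p)) % p  # scale pivot to 1
        for i in range(rows):
            if i != r and M[i, c]:
                M[i] = (M[i] - int(M[i, c]) * M[r]) % p
        r += 1
    return r

p, d, trials = 2, 10, 5_000
rng = np.random.default_rng(0)
hits = sum(rank_mod_p(rng.integers(0, p, (d, d)), p) == d
           for _ in range(trials))
predicted = float(np.prod([1.0 - p**-j for j in range(1, 60)]))
print(hits / trials, predicted)   # both close to 0.29 for p = 2
```

Already at $d = 10$ the empirical fraction of trivial cokernels is close to the infinite product $\prod_{j\ge 1}(1-2^{-j}) \approx 0.2888$.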
Based on the heuristic that the class group should behave “randomly” subject to some “obvious” constraints, it is expected that a randomly chosen real quadratic field ${{\Bbb Q}[\sqrt{d}]}$ has unique factorisation (i.e. the class group has trivial ${p}$-group component for every ${p}$) with probability
$\displaystyle \prod_{p \hbox{ odd}} \prod_{j=2}^\infty (1 - \frac{1}{p^j}) \approx 0.754,$
whereas a randomly chosen imaginary quadratic field ${{\Bbb Q}[\sqrt{-d}]}$ has unique factorisation with probability
$\displaystyle \prod_{p \hbox{ odd}} \prod_{j=1}^\infty (1 - \frac{1}{p^j}) = 0.$
The former claim is conjectural, whereas the latter claim follows from (for instance) Siegel’s theorem on the size of the class group, as discussed in this previous post. Ellenberg, Venkatesh, and Westerland have recently established some partial results towards the function field analogues of these heuristics.
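The numerical constant in the real quadratic heuristic can be reproduced by truncating the product over odd primes. A small self-contained Python sketch (the prime bound and truncation threshold are arbitrary choices; the tail of the product converges quickly):

```python
def odd_primes(limit):
    """Odd primes up to limit, by a simple sieve of Eratosthenes."""
    sieve = bytearray([1]) * (limit + 1)
    sieve[0] = sieve[1] = 0
    for i in range(2, int(limit ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = bytearray(len(sieve[i * i :: i]))
    return [p for p in range(3, limit + 1, 2) if sieve[p]]

def cohen_lenstra_constant(limit=200_000):
    """Truncation of  prod_{p odd prime} prod_{j >= 2} (1 - p^(-j))."""
    c = 1.0
    for p in odd_primes(limit):
        t = 1.0 / (p * p)           # start at j = 2
        while t > 1e-18:
            c *= 1.0 - t
            t /= p
    return c

c = cohen_lenstra_constant()
print(c)   # approximately 0.7545
```

This recovers the ${\approx 75.4\%}$ figure quoted above.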
## Distinguished Lecture Series II: Shing-Tung Yau, “The Basic Tools to Construct Geometric Structures”
17 May, 2007 in DLS, math.GT, math.MG, math.MP | Tags: general relativity, gluing, mirror symmetry, Shing-Tung Yau, string theory, symmetric spaces, T-duality | by Terence Tao | 7 comments
On Thursday, Yau continued his lecture series on geometric structures, focusing a bit more on the tools and philosophy that goes into actually building these structures. Much of the philosophy, in its full generality, is still rather vague and not properly formalised, but is nevertheless supported by a large number of rigorously worked out examples and results in special cases. A dominant theme in this talk was the interaction between geometry and physics, in particular general relativity and string theory.
As usual, there are likely to be some inaccuracies in my presentation of Yau’s talk (I am not really an expert in this subject), and corrections are welcome. Yau’s slides for this talk are available here.
http://math.stackexchange.com/questions/4920/second-derivatives-using-implicit-differentiation
# Second Derivatives Using Implicit Differentiation
According to my textbook, the second derivative of $y^{2}+xy-x^{2}=9$ is $\frac{90}{(2y+x)^{3}}$. The problem states "Express $\frac{d^{2}y}{dx^{2}}$ in terms of $x$ and $y$." I've tried for two days straight now, and I can't get that answer. I am convinced the book has a typo.
What is the question? or what would you like to know? – anon Sep 18 '10 at 15:13
muad: "express $\frac{\mathrm{d}^2 y}{\mathrm{d}x^2}$ in terms of $x$ and $y$", per the OP. In other words, what is $y^{\prime\prime}(x)$ in terms of $x$ and $y$. – J. M. Sep 18 '10 at 16:06
## 2 Answers
Based on your work as linked in the comments on J.M.'s answer, you've very nearly got it (except for a typo in one of the differentiation steps: there's a dy/dx where there should be a d/dx, but the mathematics that follows is correct as if it were d/dx). The numerator you have is $$\begin{align} -2 ((2 x - y)^2 &+ (2 x - y) (2 y + x) - (2 y + x)^2) \\ &=-10x^2+10xy+10y^2 \\ &=10(y^2+xy-x^2) \\ &=10\cdot9 \\ &=90. \end{align}$$
One little question: How did you get from the first step to the second step? When I factor it out, I get $2(5y^2+4xy-5x^2)$ – G.P. Burdell Sep 18 '10 at 19:50
I threw it into Mathematica and applied `Expand[]`. Ignoring the -2 outside, the first item is $(2x-y)^2=4x^2-4xy+y^2$, the second is $(2x-y)(2y+x)=3xy-2y^2+2x^2$, and the third is $-(2y+x)^2=-4y^2-4xy-x^2$. All together, $(4+2-1)x^2+(-4+3-4)xy+(1-2-4)y^2=5(x^2-xy-y^2)$. – Isaac Sep 18 '10 at 20:02
I hate basic errors. Thanks Isaac! – G.P. Burdell Sep 18 '10 at 20:09
Hint: treat $y$ as a function $y(x)$, so differentiating $xy(x)$ should give something like $x y^{\prime}(x)+y(x)$. Differentiate expressions twice, and solve for $y^{\prime\prime}(x)$
I've tried that but I get a different result than the answer. I'll upload my work, just gimme an hour or so in LaTeX – G.P. Burdell Sep 18 '10 at 16:37
Hmm, if you didn't get $\frac{10(y(x+y)-x^2)}{(2y+x)^3}$ ... something has gone horribly wrong. – J. M. Sep 18 '10 at 16:53
M.: the numerator in your comment is $10(y(x+y)-x^2)=10(y^2+xy-x^2)=90$. – Isaac Sep 18 '10 at 18:32
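For what it's worth, the whole computation can be checked mechanically with a computer algebra system. A sympy sketch (an independent check, not the textbook's method):

```python
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')

# The implicit relation y^2 + x*y - x^2 = 9.
eq = y(x)**2 + x*y(x) - x**2 - 9

# First derivative: differentiate the relation and solve for y'(x).
yp = sp.solve(sp.diff(eq, x), sp.diff(y(x), x))[0]   # (2x - y)/(x + 2y)

# Second derivative: differentiate y' and eliminate the remaining y'(x).
ypp = sp.simplify(sp.diff(yp, x).subs(sp.diff(y(x), x), yp))

# Compare with the closed form, in terms of plain symbols X, Y.
X, Y = sp.symbols('X Y')
ypp_XY = ypp.subs(y(x), Y).subs(x, X)
expected = 10*(Y**2 + X*Y - X**2) / (X + 2*Y)**3
print(sp.simplify(ypp_XY - expected))   # 0
# On the curve, Y**2 + X*Y - X**2 = 9, so y'' = 90/(2*Y + X)**3.
```

Substituting the constraint into the numerator is exactly the step carried out in the accepted answer above.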
http://en.wikipedia.org/wiki/Arithmetic_hierarchy
# Arithmetical hierarchy
(Redirected from Arithmetic hierarchy)
In mathematical logic, the arithmetical hierarchy, arithmetic hierarchy or Kleene-Mostowski hierarchy classifies certain sets based on the complexity of formulas that define them. Any set that receives a classification is called arithmetical.
The arithmetical hierarchy is important in recursion theory, effective descriptive set theory, and the study of formal theories such as Peano arithmetic.
The Tarski-Kuratowski algorithm provides an easy way to get an upper bound on the classifications assigned to a formula and the set it defines.
The hyperarithmetical hierarchy and the analytical hierarchy extend the arithmetical hierarchy to classify additional formulas and sets.
## The arithmetical hierarchy of formulas
The arithmetical hierarchy assigns classifications to the formulas in the language of first-order arithmetic. The classifications are denoted $\Sigma^0_n$ and $\Pi^0_n$ for natural numbers n (including 0). The Greek letters here are lightface symbols, which indicates that the formulas do not contain set parameters.
If a formula $\phi$ is logically equivalent to a formula with only bounded quantifiers then $\phi$ is assigned the classifications $\Sigma^0_0$ and $\Pi^0_0$.
The classifications $\Sigma^0_n$ and $\Pi^0_n$ are defined inductively for every natural number n using the following rules:
• If $\phi$ is logically equivalent to a formula of the form $\exists n_1 \exists n_2\cdots \exists n_k \psi$, where $\psi$ is $\Pi^0_n$, then $\phi$ is assigned the classification $\Sigma^0_{n+1}$.
• If $\phi$ is logically equivalent to a formula of the form $\forall n_1 \forall n_2\cdots \forall n_k \psi$, where $\psi$ is $\Sigma^0_n$, then $\phi$ is assigned the classification $\Pi^0_{n+1}$.
Also, a $\Sigma^0_n$ formula is equivalent to a formula that begins with some existential quantifiers and alternates $n-1$ times between series of existential and universal quantifiers; while a $\Pi^0_n$ formula is equivalent to a formula that begins with some universal quantifiers and alternates similarly.
Because every formula is equivalent to a formula in prenex normal form, every formula with no set quantifiers is assigned at least one classification. Because redundant quantifiers can be added to any formula, once a formula is assigned the classification $\Sigma^0_n$ or $\Pi^0_n$ it will be assigned the classifications $\Sigma^0_m$ and $\Pi^0_m$ for every m greater than n. The most important classification assigned to a formula is thus the one with the least n, because this is enough to determine all the other classifications.
## The arithmetical hierarchy of sets of natural numbers
A set X of natural numbers is defined by formula φ in the language of Peano arithmetic if the elements of X are exactly the numbers that satisfy φ. That is, for all natural numbers n,
$n \in X \Leftrightarrow \mathbb{N} \models \phi(\underline n),$
where $\underline n$ is the numeral in the language of arithmetic corresponding to $n$. A set is definable in first order arithmetic if it is defined by some formula in the language of Peano arithmetic.
Each set X of natural numbers that is definable in first order arithmetic is assigned classifications of the form $\Sigma^0_n$, $\Pi^0_n$, and $\Delta^0_n$, where $n$ is a natural number, as follows. If X is definable by a $\Sigma^0_n$ formula then X is assigned the classification $\Sigma^0_n$. If X is definable by a $\Pi^0_n$ formula then X is assigned the classification $\Pi^0_n$. If X is both $\Sigma^0_n$ and $\Pi^0_n$ then $X$ is assigned the additional classification $\Delta^0_n$.
Note that it rarely makes sense to speak of $\Delta^0_n$ formulas; the first quantifier of a formula is either existential or universal. So a $\Delta^0_n$ set is not defined by a $\Delta^0_n$ formula; rather, there are both $\Sigma^0_n$ and $\Pi^0_n$ formulas that define the set.
A parallel definition is used to define the arithmetical hierarchy on finite Cartesian powers of the natural numbers. Instead of formulas with one free variable, formulas with k free number variables are used to define the arithmetical hierarchy on sets of k-tuples of natural numbers.
## Relativized arithmetical hierarchies
Just as we can define what it means for a set X to be recursive relative to another set Y by allowing the computation defining X to consult Y as an oracle, we can extend this notion to the whole arithmetic hierarchy and define what it means for X to be $\Sigma^0_n$, $\Delta^0_n$ or $\Pi^0_n$ in Y, denoted respectively $\Sigma^{0,Y}_n$, $\Delta^{0,Y}_n$ and $\Pi^{0,Y}_n$. To do so, fix a set of integers Y and add a predicate for membership in Y to the language of Peano arithmetic. We then say that X is in $\Sigma^{0,Y}_n$ if it is defined by a $\Sigma^0_n$ formula in this expanded language. In other words X is $\Sigma^{0,Y}_n$ if it is defined by a $\Sigma^{0}_n$ formula allowed to ask questions about membership in Y. Alternatively one can view the $\Sigma^{0,Y}_n$ sets as those sets that can be built starting with sets recursive in Y and alternatively projecting and taking the complements of these sets up to n times.
For example, let Y be a set of integers. Let X be the set of numbers divisible by an element of Y. Then X is defined by the formula $\phi(n)=\exists m \exists t (Y(m)\wedge m\times t = n)$ so X is in $\Sigma^{0,Y}_1$ (actually it is in $\Delta^{0,Y}_0$ as well since we could bound both quantifiers by n).
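This example is concrete enough to run: the defining formula only queries the oracle for membership, and (as noted) both quantifiers can be bounded by n. A Python sketch with a hypothetical finite oracle Y:

```python
def in_X(n, Y):
    """n is divisible by some element of the oracle set Y.
    The formula only asks Y membership questions, and both
    quantifiers are bounded by n."""
    return any(m in Y and m * t == n
               for m in range(1, n + 1)
               for t in range(1, n + 1))

Y = {3, 7}                      # a hypothetical finite oracle
members = [n for n in range(1, 22) if in_X(n, Y)]
print(members)                  # [3, 6, 7, 9, 12, 14, 15, 18, 21]
```

With an infinite oracle Y the same definition works as long as membership queries "m in Y" are answered by the oracle; the bounded search itself never changes.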
## Arithmetic reducibility and degrees
Arithmetical reducibility is an intermediate notion between Turing reducibility and hyperarithmetic reducibility.
A set is arithmetical (also arithmetic and arithmetically definable) if it is defined by some formula in the language of Peano arithmetic. Equivalently X is arithmetical if X is $\Sigma^0_n$ or $\Pi^0_n$ for some integer n. A set X is arithmetical in a set Y, denoted $X \leq_A Y$, if X is definable by some formula in the language of Peano arithmetic extended by a predicate for membership in Y. Equivalently, X is arithmetical in Y if X is in $\Sigma^{0,Y}_n$ or $\Pi^{0,Y}_n$ for some integer n. A synonym for $X \leq_A Y$ is: X is arithmetically reducible to Y.
The relation $X \leq_A Y$ is reflexive and transitive, and thus the relation $\equiv_A$ defined by the rule
$X \equiv_A Y \Leftrightarrow X \leq_A Y \wedge Y \leq_A X$
is an equivalence relation. The equivalence classes of this relation are called the arithmetic degrees; they are partially ordered under $\leq_A$.
## The arithmetical hierarchy of subsets of Cantor and Baire space
The Cantor space, denoted $2^{\omega}$, is the set of all infinite sequences of 0s and 1s; the Baire space, denoted $\omega^{\omega}$ or $\mathcal{N}$, is the set of all infinite sequences of natural numbers. Note that elements of the Cantor space can be identified with sets of integers and elements of the Baire space with functions from integers to integers.
The ordinary axiomatization of second-order arithmetic uses a set-based language in which the set quantifiers can naturally be viewed as quantifying over Cantor space. A subset of Cantor space is assigned the classification $\Sigma^0_n$ if it is definable by a $\Sigma^0_n$ formula. The set is assigned the classification $\Pi^0_n$ if it is definable by a $\Pi^0_n$ formula. If the set is both $\Sigma^0_n$ and $\Pi^0_n$ then it is given the additional classification $\Delta^0_n$. For example let $O\subset 2^{\omega}$ be the set of all infinite binary strings which aren't all 0 (or equivalently the set of all non-empty sets of integers). As $O=\{ X\in 2^\omega | \exists n (X(n)=1) \}$ we see that $O$ is defined by a $\Sigma^0_1$ formula and hence is a $\Sigma^0_1$ set.
Note that while both the elements of the Cantor space (regarded as sets of integers) and subsets of the Cantor space are classified in arithmetical hierarchies, these are not the same hierarchy. In fact the relationship between the two hierarchies is interesting and non-trivial. For instance the $\Pi^0_n$ elements of the Cantor space are not (in general) the same as the elements $X$ of the Cantor space such that $\{X\}$ is a $\Pi^0_n$ subset of the Cantor space. However, many interesting results relate the two hierarchies.
There are two ways that a subset of Baire space can be classified in the arithmetical hierarchy.
• A subset of Baire space has a corresponding subset of Cantor space under the map that takes each function from $\omega$ to $\omega$ to the characteristic function of its graph. A subset of Baire space is given the classification $\Sigma^0_n$, $\Pi^0_n$, or $\Delta^0_n$ if and only if the corresponding subset of Cantor space has the same classification.
• An equivalent definition of the arithmetical hierarchy on Baire space is given by defining the arithmetical hierarchy of formulas using a functional version of second-order arithmetic; then the arithmetical hierarchy on subsets of Cantor space can be defined from the hierarchy on Baire space. This alternate definition gives exactly the same classifications as the first definition.
A parallel definition is used to define the arithmetical hierarchy on finite Cartesian powers of Baire space or Cantor space, using formulas with several free variables. The arithmetical hierarchy can be defined on any effective Polish space; the definition is particularly simple for Cantor space and Baire space because they fit with the language of ordinary second-order arithmetic.
Note that we can also define the arithmetic hierarchy of subsets of the Cantor and Baire spaces relative to some set of integers. In fact boldface $\boldsymbol{\Sigma}^0_n$ is just the union of $\Sigma^{0,Y}_n$ for all sets of integers Y. Note that the boldface hierarchy is just the standard hierarchy of Borel sets.
## Extensions and variations
It is possible to define the arithmetical hierarchy of formulas using a language extended with a function symbol for each primitive recursive function. This variation slightly changes the classification of some sets.
A more semantic variation of the hierarchy can be defined on all finitary relations on the natural numbers; the following definition is used. Every computable relation is defined to be $\Sigma^0_0$ and $\Pi^0_0$. The classifications $\Sigma^0_n$ and $\Pi^0_n$ are defined inductively with the following rules.
• If the relation $R(n_1,\ldots,n_l,m_1,\ldots, m_k)\,$ is $\Sigma^0_n$ then the relation $S(n_1,\ldots, n_l) = \forall m_1\cdots \forall m_k R(n_1,\ldots,n_l,m_1,\ldots,m_k)$ is defined to be $\Pi^0_{n+1}$
• If the relation $R(n_1,\ldots,n_l,m_1,\ldots, m_k)\,$ is $\Pi^0_n$ then the relation $S(n_1,\ldots,n_l) = \exists m_1\cdots \exists m_k R(n_1,\ldots,n_l,m_1,\ldots,m_k)$ is defined to be $\Sigma^0_{n+1}$
This variation slightly changes the classification of some sets. It can be extended to cover finitary relations on the natural numbers, Baire space, and Cantor space.
## Meaning of the notation
The following meanings can be attached to the notation for the arithmetical hierarchy on formulas.
The subscript $n$ in the symbols $\Sigma^0_n$ and $\Pi^0_n$ indicates the number of alternations of blocks of universal and existential number quantifiers that are used in a formula. Moreover, the outermost block is existential in $\Sigma^0_n$ formulas and universal in $\Pi^0_n$ formulas.
The superscript $0$ in the symbols $\Sigma^0_n$, $\Pi^0_n$, and $\Delta^0_n$ indicates the type of the objects being quantified over. Type 0 objects are natural numbers, and objects of type $i+1$ are functions that map the set of objects of type $i$ to the natural numbers. Quantification over higher type objects, such as functions from natural numbers to natural numbers, is described by a superscript greater than 0, as in the analytical hierarchy. The superscript 0 indicates quantifiers over numbers, the superscript 1 would indicate quantification over functions from numbers to numbers (type 1 objects), the superscript 2 would correspond to quantification over functions that take a type 1 object and return a number, and so on.
## Examples
• The $\Sigma^0_1$ sets of numbers are those definable by a formula of the form $\exists n_1 \cdots \exists n_k \psi(n_1,\ldots,n_k,m)$ where $\psi$ has only bounded quantifiers. These are exactly the recursively enumerable sets.
• The set of natural numbers that are indices for Turing machines that compute total functions is $\Pi^0_2$. Intuitively, an index $e$ falls into this set if and only if for every $m$ "there is an $s$ such that the Turing machine with index $e$ halts on input $m$ after $s$ steps". A complete proof would show that the property displayed in quotes in the previous sentence is definable in the language of Peano arithmetic by a $\Sigma^0_1$ formula.
• Every $\Sigma^0_1$ subset of Baire space or Cantor space is an open set in the usual topology on the space. Moreover, for any such set there is a computable enumeration of Gödel numbers of basic open sets whose union is the original set. For this reason, $\Sigma^0_1$ sets are sometimes called effectively open. Similarly, every $\Pi^0_1$ set is closed and the $\Pi^0_1$ sets are sometimes called effectively closed.
• Every arithmetical subset of Cantor space or Baire space is a Borel set. The lightface Borel hierarchy extends the arithmetical hierarchy to include additional Borel sets. For example, every $\Pi^0_2$ subset of Cantor or Baire space is a $G_\delta$ set (that is, a set which equals the intersection of countably many open sets). Moreover, each of these open sets is $\Sigma^0_1$ and the list of Gödel numbers of these open sets has a computable enumeration. If $\phi(X,n,m)$ is a $\Sigma^0_0$ formula with a free set variable X and free number variables $n,m$ then the $\Pi^0_2$ set $\{X \mid \forall n \exists m \phi(X,n,m)\}$ is the intersection of the $\Sigma^0_1$ sets of the form $\{ X \mid \exists m \phi(X,n,m)\}$ as n ranges over the set of natural numbers.
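The first example above (the $\Sigma^0_1$ sets are exactly the recursively enumerable sets) corresponds to a familiar programming pattern: an unbounded search for a witness, which halts on members and diverges on non-members. A Python sketch (the predicate and the cutoff are illustrative choices, not part of the definition):

```python
from itertools import count

def semi_decide(phi, n, max_steps=None):
    """Witness search for the Sigma^0_1 predicate  (exists k) phi(n, k),
    with phi decidable.  Returns True when a witness is found; otherwise
    runs forever (or, with max_steps set, gives up and returns None --
    a semi-decision procedure never returns a negative answer)."""
    ks = count() if max_steps is None else range(max_steps)
    for k in ks:
        if phi(n, k):
            return True
    return None

# The r.e. set {n : n is a perfect square}, written as an unbounded search.
is_square = lambda n, k: k * k == n
print(semi_decide(is_square, 49))           # True  (witness k = 7)
print(semi_decide(is_square, 50, 10_000))   # None  (no verdict, only a cutoff)
```

Of course this particular set is in fact decidable (the search can be bounded by n); genuinely $\Sigma^0_1$-complete sets, such as the halting problem, admit no such bound.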
## Properties
The following properties hold for the arithmetical hierarchy of sets of natural numbers and the arithmetical hierarchy of subsets of Cantor or Baire space.
• The collections $\Pi^0_n$ and $\Sigma^0_n$ are closed under finite unions and finite intersections of their respective elements.
• A set is $\Sigma^0_n$ if and only if its complement is $\Pi^0_n$. A set is $\Delta^0_n$ if and only if the set is both $\Sigma^0_n$ and $\Pi^0_n$, in which case its complement will also be $\Delta^0_n$.
• The inclusions $\Delta^0_n \subsetneq \Pi^0_n$ and $\Delta^0_n \subsetneq \Sigma^0_n$ hold for $n \geq 1$.
• The inclusions $\Pi^0_n \subsetneq \Pi^0_{n+1}$ and $\Sigma^0_n \subsetneq \Sigma^0_{n+1}$ hold for all $n$ and the inclusion $\Sigma^0_n \cup \Pi^0_n \subsetneq \Delta^0_{n+1}$ holds for $n \geq 1$. Thus the hierarchy does not collapse.
## Relation to Turing machines
The Turing computable sets of natural numbers are exactly the sets at level $\Delta^0_1$ of the arithmetical hierarchy. The recursively enumerable sets are exactly the sets at level $\Sigma^0_1$.
No oracle machine is capable of solving its own halting problem (a variation of Turing's proof applies). The halting problem for a $\Delta^{0,Y}_n$ oracle in fact sits in $\Sigma^{0,Y}_{n+1}$.
Post's theorem establishes a close connection between the arithmetical hierarchy of sets of natural numbers and the Turing degrees. In particular, it establishes the following facts for all n ≥ 1:
• The set $\emptyset^{(n)}$ (the nth Turing jump of the empty set) is many-one complete in $\Sigma^0_n$.
• The set $\mathbb{N} \setminus \emptyset^{(n)}$ is many-one complete in $\Pi^0_n$.
• The set $\emptyset^{(n-1)}$ is Turing complete in $\Delta^0_n$.
The polynomial hierarchy is a "feasible resource-bounded" version of the arithmetical hierarchy in which polynomial length bounds are placed on the numbers involved (or, equivalently, polynomial time bounds are placed on the Turing machines involved). It gives a finer classification of some sets of natural numbers that are at level $\Delta^0_1$ of the arithmetical hierarchy.
## References
• Japaridze, Giorgie (1994), "The logic of arithmetical hierarchy", Annals of Pure and Applied Logic 66 (2): 89–112, doi:10.1016/0168-0072(94)90063-9, Zbl 0804.03045 .
• Moschovakis, Yiannis N. (1980), Descriptive Set Theory, Studies in Logic and the Foundations of Mathematics 100, North Holland, ISBN 0-444-70199-0, Zbl 0433.03025 .
• Nies, André (2009), Computability and randomness, Oxford Logic Guides 51, Oxford: Oxford University Press, ISBN 978-0-19-923076-1, Zbl 1169.03034 .
• Rogers, H., jr (1967), Theory of recursive functions and effective computability, Maidenhead: McGraw-Hill, Zbl 0183.01401 .
http://mathoverflow.net/questions/35514?sort=votes
## Pair of curves joining opposite corners of a square must intersect---proof?
Reposting something I posted a while back to Google Groups.
In his 'Ordinary Differential Equations' (sec. 1.2) V.I. Arnold says "... every pair of curves in the square joining different pairs of opposite corners must intersect".
This is obvious geometrically but I was wondering how one could go about proving this rigorously. I have thought of a proof using Brouwer's Fixed Point Theorem which I describe below. I would greatly appreciate the group's comments on whether this proof is right and if a simpler proof is possible.
We take a square with side of length 1. Let the two curves be $(x_1(t),y_1(t))$ and $(x_2(t),y_2(t))$ where the $x_i$ and $y_i$ are continuous functions from $[0,1]$ to $[0,1]$. The condition that the curves join different pairs of opposite corners implies, $$(x_1(0),y_1(0)) = (0,0)$$ $$(x_2(0),y_2(0)) = (1,0)$$ $$(x_1(1),y_1(1)) = (1,1)$$ $$(x_2(1),y_2(1)) = (0,1)$$
The two curves will intersect if there are numbers $a$ and $b$ in $[0,1]$ such that
$$p(a,b) = x_2(b) - x_1(a) = 0$$ $$q(a,b) = y_1(a) - y_2(b) = 0$$
We define the two functions
$$f(a,b) = a + p(a,b)/2 + |p(a,b)| (1/2 - a)$$ $$g(a,b) = b + q(a,b)/2 + |q(a,b)| (1/2 - b)$$
Then $(f,g)$ is a continuous function from $[0,1]\times [0,1]$ into itself and hence must have a fixed point by Brouwer's Fixed Point Theorem. But at a fixed point of $(f,g)$ it must be the case that $p(a,b)=0$ and $q(a,b)=0$ so the two curves intersect.
Figuring out what $f$ and $g$ to use and checking the conditions in the last paragraph is tedious. Can there be a simpler proof?
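As a numerical sanity check (not a proof), one can run the construction on a concrete pair of curves, say the two diagonals $c_1(t)=(t,t)$ and $c_2(t)=(1-t,t)$: on a grid, $(f,g)$ does map the square into itself, and its fixed point picks out the intersection. A Python sketch:

```python
# Sample curves joining opposite corners: c1(t) = (t, t), c2(t) = (1 - t, t).
def p(a, b): return (1 - b) - a          # x2(b) - x1(a)
def q(a, b): return a - b                # y1(a) - y2(b)
def f(a, b): return a + p(a, b) / 2 + abs(p(a, b)) * (0.5 - a)
def g(a, b): return b + q(a, b) / 2 + abs(q(a, b)) * (0.5 - b)

N = 200
grid = [(i / N, j / N) for i in range(N + 1) for j in range(N + 1)]

# (f, g) maps the closed unit square into itself ...
assert all(0 <= f(a, b) <= 1 and 0 <= g(a, b) <= 1 for a, b in grid)

# ... and the fixed point is where p = q = 0, i.e. at the intersection.
best = min(grid, key=lambda ab: abs(f(*ab) - ab[0]) + abs(g(*ab) - ab[1]))
print(best)   # (0.5, 0.5): the two diagonals cross at the centre
```

The check that fixed points force $p=q=0$ (the boundary cases $a\in\{0,1\}$, $b\in\{0,1\}$ being excluded by the sign of $p$ and $q$ there) is exactly the "tedious" verification mentioned above.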
My guess is you need to appeal to some form of the Jordan curve theorem. – Qiaochu Yuan Aug 13 2010 at 18:05
I second the Jordan Curve Theorem suggestion. – Thierry Zell Aug 13 2010 at 18:15
I am looking for any rigorous proof, the more elementary the better. – Jyotirmoy Bhattacharya Aug 13 2010 at 18:23
See John Stillwell's answer below, which explains how you can reduce to the case where the curves are piecewise linear, which certainly makes things simpler! – Emerton Aug 14 2010 at 3:52
I like the proofs based on $K_5$ and the polygonal Jordan curve theorem, but if all of them are unwound, the Brouwer fixed point theorem proof is the most direct and transparent. – Victor Protsak Aug 14 2010 at 6:27
## 10 Answers
Since the full Jordan curve theorem is quite subtle, it might be worth pointing out that theorem in question reduces to the Jordan curve theorem for polygons, which is easier.
Suppose on the contrary that the curves $A,B$ joining opposite corners do not meet. Since $A,B$ are closed sets, their minimum distance apart is some $\varepsilon>0$. By compactness, each of $A,B$ can be partitioned into finitely many arcs, each of which lies in a disk of diameter $<\varepsilon/3$. Then, by a homotopy inside each disk we can replace $A,B$ by polygonal paths $A',B'$ that join the opposite corners of the square and are still disjoint.
Also, we can replace $A',B'$ by simple polygonal paths $A'',B''$ by omitting loops. Now we can close $A''$ to a polygon, and $B''$ goes from its "inside" to "outside" without meeting it, contrary to the Jordan curve theorem for polygons.
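Following this reduction, the polygonal statement can even be verified mechanically: some segment of the first path must properly cross some segment of the second. A sketch using the standard orientation test (the example paths are illustrative; degenerate collinear touchings are omitted for brevity):

```python
# Two polygonal paths joining opposite corners of the unit square must
# contain a crossing pair of segments; check this by brute force.

def orient(p, q, r):
    # > 0: left turn, < 0: right turn, 0: collinear
    return (q[0]-p[0])*(r[1]-p[1]) - (q[1]-p[1])*(r[0]-p[0])

def segments_cross(a, b, c, d):
    d1, d2 = orient(c, d, a), orient(c, d, b)
    d3, d4 = orient(a, b, c), orient(a, b, d)
    # proper crossing: each segment's endpoints straddle the other's line
    return ((d1 > 0) != (d2 > 0)) and ((d3 > 0) != (d4 > 0))

def polylines_cross(P, Q):
    return any(segments_cross(P[i], P[i+1], Q[j], Q[j+1])
               for i in range(len(P)-1) for j in range(len(Q)-1))

A = [(0, 0), (0.2, 0.7), (0.8, 0.4), (1, 1)]   # joins (0,0) to (1,1)
B = [(1, 0), (0.5, 0.5), (0, 1)]               # joins (1,0) to (0,1)
print(polylines_cross(A, B))  # True
```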
-
This is a great argument! – Victor Protsak Aug 14 2010 at 6:14
This is gorgeous! It's possibly worth pointing out quite how elementary the Jordan curve theorem for polygons is, to show how little is being black-boxed here. Fix some point $x_0$ not on $C$, and for any (other) point $x$ not on $C$, look at the line segment from $x$ to $x_0$; count the parity of how many times it crosses $C$ (counting double/none if it hits vertices of $C$). This is well-defined and locally constant on $\mathbb{R}\setminus C$ (this is where we use that $C$ is a polygon); so as a locally constant, surjective function to $\{0,1\}$, it disconnects $\mathbb{R}^2$. – Peter LeFanu Lumsdaine Aug 14 2010 at 6:47
Dear Peter: Your line segment may cross $C$ infinitely many times. [You probably mean $\mathbb{R}^2\setminus C$, not $\mathbb{R}\setminus C$.] – Pierre-Yves Gaillard Aug 14 2010 at 11:18
Peter: That proof goes through even when the polygon isn't simple, e.g. a polygonal figure-eight, where the parity function disconnects the plane into more than two components. So, there's a bit more work to do. – Per Vognsen Aug 14 2010 at 14:49
@Pierre: thanks, yes, I did mean $\mathbb{R}^2$; and while the segment can't cross $C$ infinitely often (a polygon has finitely many edges by definition, at least in the conventions I know) it could contain an edge of $C$ as a subsegment, in which case we do have to look at what directions the adjacent edges of $C$ go off in. @Per: you're right, of course; this doesn't establish the full Jordan curve theorem; I was just thinking of what was necessary for the application at hand (which just needs disconnectedness plus the fact that the second curve's endpoints are in opposite components). – Peter LeFanu Lumsdaine Aug 14 2010 at 17:34
ORIGINAL: This follows from the fact that the complete graph $K_5$ on five vertices cannot be imbedded in $\mathbb S^2,$ in itself an application of Jordan Curve. If your two square curvy diagonals stay inside the square without intersecting, a fifth point outside the square can be joined to the four vertices by disjoint arcs, thus creating a complete graph on five vertices. Very nice book by James Munkres, "Topology: a first course" where, on page 386 exercise 5, he does the graph on five vertices. Note that the concept of inside for the square uses elementary ideas such as convexity.
EDIT: As mentioned by Henry Wilton in comment below, there may be other routes here. In particular, I have a book by Robin J. Wilson just called Introduction to Graph Theory, second edition, and in section 13, pages 64-67, in which he develops Euler's formula for planar graphs and as a quick corollary shows that $K_5$ and $K_{3,3}$ are nonplanar, these being Theorem 13A and Corollary 13E. It is anybody's guess whether JCT is used implicitly in defining "faces" properly for Euler's formula. I don't know.
-
That's a neat idea! – Somnath Basu Aug 13 2010 at 18:43
Do you really need the Jordan Curve Theorem to see that $K_5$ is non-planar? I ask because Thomason gives a proof of the JCT using the fact that $K_{3,3}$ is non-planar. – HW Aug 14 2010 at 1:41
very nice proof! – Changwei Zhou Aug 14 2010 at 2:24
Henry, I don't think I know how to sort out first principles here. The Munkres book, same page, in exercise 4, has the reader use JCT to show that $K_{3,3}$ is non-planar. I'm going to guess that the three facts are roughly equivalent in the sense of quick proofs in both directions for any pair. So the questions become, does the nonplanarity of the complete graph on five vertices imply JCT quickly, and is there some trickery where each nonplanar graph gives the other, all "quickly." – Will Jagy Aug 14 2010 at 2:45
Just to clarify slightly, Thomason quickly observes that if a graph is planar then it has a polygonal embedding in the plane; so you can talk about faces before you've proved the JCT. – HW Aug 14 2010 at 5:07
There is a simple proof that a game of Hex must have a winner, which implies the result you want. See here: Brouwer's Fixed Point Theorem and the Jordan Curve Theorem, Lemma 5.5. The Brouwer fixed point theorem and the Jordan curve theorem follow from this.
This proof is based on the paper The Game of Hex and the Brouwer Fixed-Point Theorem (by David Gale. The American Mathematical Monthly, Vol. 86, No. 10. (Dec., 1979), pp. 818-827).
Edit: Actually the reference shows that the Game of Hex always has a winner => Brouwer Fixed Point theorem => a pair of curves in the square joining opposite corners must intersect. So it does use Brouwer's fixed point theorem, but gives an elementary proof of it.
-
As much as I like the connection with Hex, Gale comments on, but doesn't give the proof of, the additional statement "not both" in the Hex theorem, which is the Hex analogue of the intersection property. Thus for the purposes of this question, Hex is a distraction. – Victor Protsak Aug 14 2010 at 6:09
Yes, at first I thought that they proved it directly from the game of Hex. The result asked for would follow from the result that you can't have a state in which both players have a winning line, but the references only show directly that at least one person must win. However, the first link does have a short proof of the result asked for, albeit using the fixed point theorem (and a proof of that). So, it's not as direct and elementary as I thought at first. Still, it answers the question. – George Lowther Aug 14 2010 at 12:19
This should probably go in a comment, but I don't have enough reputation points.
Note that there is a pair of connected sets in the square containing opposite pairs of corners that don't intersect. There are pictures in the reference below.
Robert J. MacG. Dawson, Paradoxical Connections. The American Mathematical Monthly Vol. 96, No. 1 (Jan., 1989), pp. 31-33. http://www.jstor.org/stable/2323252
-
+1 for retaining something from one of our conversations . . . – Danny Calegari Oct 1 2011 at 12:15
You could use Brouwer degree for a more intuitive proof:
The degree of the usual diagonals intersecting each other is 1. One at a time, deform the diagonals via straight-line homotopies to the desired curves. This should preserve Brouwer degree. Lastly, Brouwer degree is well-defined even for continuous functions (using a smooth approximation), and non-zero Brouwer degree implies an intersection.
Alternately, you could artificially avoid the phrase "Brouwer degree" and directly track what happens to the intersection point under the homotopy.
-
+1 for the name, a combination of "Anton Gorodetsky" and "Sergei Lukyanenko" -- any relation? – Pete L. Clark Aug 13 2010 at 21:27
:) No, not as far as I know. – Anton Lukyanenko Aug 13 2010 at 22:10
I'm not sure that the last paragraph avoiding the degrees works: the curves may intersect in more than one point, so it's challenging to "directly track what happens to the intersection point". – Victor Protsak Aug 14 2010 at 6:19
A homological proof would use the intersection form of the torus. If you consider these paths as based loops on the torus, you see that they are represented as $(1,1)$ and $(1,-1)$ in terms of the standard homology generators. Knowing that the intersection form is $\begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}$, we find that the intersection index is
$$Q(v,w) = (1,1)\begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}(1,-1)^t = 2.$$
They already intersect once at the origin, so they must intersect somewhere else in the square. However, you must already have had to compute the intersection form.
-
How about the following, using the Nested Intervals Theorem (which was in my 2nd year Calculus text) which says the intersection of a nested sequence of closed intervals in $\mathbb{R}$ is non-empty. Here goes the proof:
We construct recursively a nested sequence $I_j := [a_j, b_j]$ of closed intervals for $j \geq 0$. Let $I_0 := [0,1]$. For every $j \geq 0$, construct $I_{j+1}$ as follows: let $m_j$ be the midpoint of $I_j$. If the curves intersect at $t = m_j$, then we are done, so stop the sequence. Otherwise set $I_{j+1}$ to be $[a_j, m_j]$ or $[m_j, b_j]$ depending on whether the curves "switch from left to right" on the first sub-interval or the 2nd (let's say you always make sure that $c_1$ is to the "left" of $c_2$ at $t = a_j$ and to the "right" of $c_2$ at $t = b_j$).
If the sequence is finite, then the curves must intersect, as noted above. So assume the sequence is infinite. The Nested Intervals Theorem and the fact that the length decreases by a factor of 2 at every step implies that $\cap_{j=0}^\infty I_j = \lbrace t\rbrace$ for some $t \in [0,1]$. Then we must have $c_1(t) = c_2(t)$.
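Under the answer's simplifying assumption that both curves are graphed over the same parameter and "left of" is captured by a single sign, the nested-intervals construction is exactly bisection. A sketch with illustrative curves $c_1(t) = (t^2, t)$ and $c_2(t) = (1-t, t)$:

```python
# Bisection sketch of the nested-intervals argument. Here y1(t) = y2(t) = t,
# so "c1 left of c2" is captured by the sign of h(t) = x1(t) - x2(t).
# The concrete curves are illustrative, not from the answer.

def x1(t): return t * t          # curve 1: (t^2, t), from (0,0) to (1,1)
def x2(t): return 1.0 - t        # curve 2: (1-t, t), from (1,0) to (0,1)

def h(t): return x1(t) - x2(t)   # negative: c1 left of c2; positive: right

a, b = 0.0, 1.0                  # h(0) = -1 < 0 and h(1) = 1 > 0
for _ in range(60):              # each step halves the interval
    m = (a + b) / 2
    if h(m) == 0:
        a = b = m
        break
    if h(m) < 0:
        a = m
    else:
        b = m
t = (a + b) / 2
print(t, abs(h(t)))              # h(t) ~ 0: the curves meet at (x1(t), t)
```

For these curves the limit parameter solves $t^2 = 1 - t$, i.e. $t = (\sqrt{5}-1)/2$.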
-
See
R. Maehara, The Jordan Curve Theorem via the Brouwer Fixed Point Theorem, Amer. Math. Monthly 91, 641--643 (1984)
which is available on Andrew Ranicki's website.
-
Hi all, this is a really interesting question. But it only involves basic homotopy theory, not anything as subtle as the Jordan curve theorem.
Proof:
Let the paths be parameterized as $v(t)$ and $w(s)$, with $t,s \in I := [0,1]$.
Assume the paths never intersect. Then the map $f : I \times I \to S^1$ given by $f(s,t) = (v(t)-w(s))/|v(t)-w(s)|$ is well defined.
Think of $I \times I$ as being a homotopy between the paths
$$a_1(t) = \begin{cases} (0,2t) & 0 \le t \le 1/2 \\ (2t-1,1) & 1/2 \le t \le 1 \end{cases}$$
and
$$a_2(t) = \begin{cases} (2t,0) & 0 \le t \le 1/2 \\ (1,2t-1) & 1/2 \le t \le 1 \end{cases}$$
(i.e., the two boundary components of $I \times I$, viewed as paths from $(0,0)$ to $(1,1)$)
Then we see that $f(a_1(t))$ is a path that starts at the north pole of the circle, ends at the south pole, and traverses clockwise, whereas $f(a_2(t))$ starts and ends the same but traverses counterclockwise. Now $f(I \times I)$ provides a homotopy between these paths; however, they are not homotopic as paths in the circle. This gives a contradiction, and hence the paths must intersect.
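The winding count in this argument can be checked numerically: restrict $f$ to the boundary of $I \times I$ (which only involves the curves' values relative to the opposite corners, so it is defined even when the curves do intersect) and accumulate the angle. A sketch with the two straight diagonals as illustrative paths:

```python
# Winding of f(s,t) = (v(t) - w(s)) / |v(t) - w(s)| around the boundary
# of the parameter square. Nonzero winding means f cannot extend to the
# whole square, so the curves must meet. Example curves are illustrative.
import math

def v(t): return (t, t)          # diagonal (0,0) -> (1,1)
def w(s): return (1 - s, s)      # anti-diagonal (1,0) -> (0,1)

def angle(s, t):
    x = v(t)[0] - w(s)[0]
    y = v(t)[1] - w(s)[1]
    return math.atan2(y, x)

# Walk the boundary of I x I once.
N = 1000
loop = ([(0, k / N) for k in range(N)] +        # s = 0, t: 0 -> 1
        [(k / N, 1) for k in range(N)] +        # t = 1, s: 0 -> 1
        [(1, 1 - k / N) for k in range(N)] +    # s = 1, t: 1 -> 0
        [(1 - k / N, 0) for k in range(N)])     # t = 0, s: 1 -> 0

total = 0.0
prev = angle(*loop[0])
for s, t in loop[1:] + [loop[0]]:
    a = angle(s, t)
    d = a - prev
    # wrap the increment into (-pi, pi]
    while d <= -math.pi: d += 2 * math.pi
    while d > math.pi: d -= 2 * math.pi
    total += d
    prev = a

print(round(total / (2 * math.pi)))  # winding +-1 (nonzero), as required
```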
-
Isn't a path in $S^1$ contractible to a point? – Piero D'Ancona Sep 2 2010 at 12:24
By 'paths', i mean paths starting at the top of the circle, and ending at the bottom. – Ryan Mickler Sep 2 2010 at 13:29
Perhaps i should have been more clear. f(IxI) is a homotopy 'of paths that start at the top of the circle and end at the bottom'. – Ryan Mickler Sep 2 2010 at 13:34
I guess a more elegant way to see it, is to consider again, the map f: I x I -> S^1, when restricted to the boundary, d(IxI) ~= S^1, we find (from the argument above) that f_d(IxI) : S^1 -> S^1 winds once, but f_IxI gives a homotopy from this map to the constant map, hence a contradiction. – Ryan Mickler Sep 2 2010 at 14:01
After thinking about it some more: this question is really a generalization of the intermediate value theorem. The IVT is really a homotopy theory question, where you are detecting pi_0(R-{0}); in this case, we are detecting pi_1(R^2-{0}). – Ryan Mickler Sep 3 2010 at 4:50
This is the main step of the proof that the plane (in this case, the square) has topological dimension 2. You can find a proof (as elementary as I could make it) in my text Measure, Topology, and Fractal Geometry. In particular, no previous knowledge of algebraic topology is required.
-
Did you mean to say that no previous knowledge of algebraic topology is required? – Victor Protsak Sep 2 2010 at 15:21
thanks, corrected. – Gerald Edgar Sep 3 2010 at 12:19
http://physics.stackexchange.com/questions/tagged/electronics+electric-circuits
# Tagged Questions
1answer
43 views
### When does Thevenin's theorem not apply (modelling a power source with a ohmic internal resistance)
Most physics textbooks say that a power source can be modelled as an EMF with an internal resistance. This is also known as Thevenin's theorem or Norton's theorem. However I have read in some sources ...
1answer
70 views
### Full wave rectification [duplicate]
In the construction of a full wave rectifier, why is there central tapping in the secondary winding of the power transformer? What's the pure reason behind it? If there was no central tapping, what ...
2answers
91 views
### Electronic filter
Can you explain, please, step by step, how an electronic filter works? For example, a high-pass filter. I know it's a trivial thing, but I can't get it completely. Don't bring me formulas, etc. ...
1answer
169 views
### What is the role of center-tapping in a full wave rectifier?
Note: I have already tried googling. Although similar questions have been asked on different forums, I couldn't find a detailed explanation, which I could really understand. Circuit diagram ...
1answer
165 views
### Explanation of the Graetz circuit
My knowledge of circuits is pretty rudimentary and I've never really understood circuits, so I'm having trouble with the concept of Graetz circuits: When you register the voltage on the resistor R ...
0answers
58 views
### Explanation of NMOS processes
So, the gate (poly-Si + $SiO_2$) and the $p^{-}$ silicon operate as a capacitor. But how are voltages and charges applied? In order for inversion to occur, there should be charges formed in the poly-Si, ...
2answers
170 views
### Nodal Analysis of an electrical circuit
I have several doubts about solving circuits. Can any circuit be solved using Nodal Analysis? If some circuit can be solved using Nodal Analysis, can it be solved using Mesh Analysis too? Why do we ...
0answers
120 views
### When the p-n junction of a transistor is reverse biased? [closed]
When the p-n junction of a transistor is reverse biased? A. current flows from the p-type to the n-type. B. no current flows from the p-type to the n-type. C. conduction of current occurs. D. ...
4answers
1k views
### How does power consumption vary with the processor frequency in a typical computer?
I am looking for an estimate on the relationship between the rate of increase of power usage as the frequency of the processor is increased. Any references to findings on this would be helpful.
2answers
214 views
### What are the effects of cosmic rays on consumer electronics?
When electronics/computer companies design a new chip, processor/ memory card/ or a solar cell, do they study the effect of cosmic rays on such electronically sensitive materials? If not, why not?
0answers
161 views
### Algorithmic approach for applying Kirchhoff's Rules to circuit analysis
Working with Kirchhoff's rules, I am attempting to devise an algorithmic approach to finding the unknowns of the problems. I am of a Computer Science background and I am finding it difficult to ...
1answer
137 views
### Could a variable capacitor divider replace a Variac?
Hmmm... You can definitely drop down the voltage, and ideal capacitors don't dissipate any power. So it seems, at first glance, that you could use a capacitor divider as a lossless voltage step-down ...
0answers
234 views
### Turn-on delay time for Laser diode
Do you know any simple explanation of the reason why the turn-on delay time of a laser diode decreases while we increase the bias current? Turn-on delay is the time that the laser needs from the ...
0answers
74 views
### Simplifying a biquadratic filter equation [closed]
I am trying to calculate input impedance of a multiple feedback low pass filter. What I need is the symbolic expression as simple as possible so that later I fill in the values and get the impedance ...
5answers
3k views
### What would be the effective resistance of the ladder of resistors having n steps
I'm a tutor. This is a high-school-level problem. In high school, everyone might have solved the problem of the effective resistance of a ladder of resistors having infinitely many steps. Now the problem is ...
1answer
155 views
### Waveforms for a given ideal inverters circuit
I have the following circuit. All the inverters are ideal CMOS. I need to draw the waveforms at each point (A, B, D, E, Vo), given the waveform at the input. What I want to ask is if what I did so ...
http://math.stackexchange.com/questions/232326/graph-theory-clique-concepts?answertab=oldest
# Graph Theory: Clique concepts
I was trying to solve a basic clique problem but I have gotten stuck on the following points:
• `what is the minimum size of the largest clique in any graph with N nodes and M edges`
• `To Find the largest clique in a graph`
Please tell me the difference between the above two statements in the context of the `basic clique problem`. I am a newbie in `Graph Theory`.
-
For the first problem, you may find Turan's Theorem useful. – Austin Mohr Nov 7 '12 at 19:04
## 1 Answer
Let $\mathscr{G}_{N,M}$ be the family of all graphs with $N$ nodes and $M$ edges. Each $G\in\mathscr{G}_{N,M}$ contains some cliques. Let $c(G)$ be the maximum number of nodes in any clique in $G$. The problem asks you to find
$$\min\{c(G):G\in\mathscr{G}_{N,M}\}\;.$$
In words: if a graph has $N$ nodes and $M$ edges, what is the smallest possible size of its largest clique?
A small example may help. Suppose that $N=M=6$. One of the graphs in $\mathscr{G}_{6,6}$ is the graph $G$ consisting of two disjoint triangles; each of those triangles is a clique of size $3$, so $c(G)=3$. But another graph in $\mathscr{G}_{6,6}$ is the circuit $C_6$ of $6$ nodes, like a necklace; its maximal cliques are pairs of adjacent vertices, so $c(C_6)=2$. You can’t have maximal cliques any smaller than that in a graph that has at least one edge, so
$$\min\{c(G):G\in\mathscr{G}_{6,6}\}=2\;.$$
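For graphs of this size, $c(G)$ is easy to compute by brute force; the following sketch (exponential in general, fine for $N=6$) confirms both values in the example:

```python
# Brute-force clique number: two disjoint triangles have largest clique 3,
# while the 6-cycle has largest clique 2.
from itertools import combinations

def clique_number(n, edges):
    E = {frozenset(e) for e in edges}
    # try the largest candidate sizes first
    for k in range(n, 0, -1):
        for S in combinations(range(n), k):
            if all(frozenset((u, v)) in E for u, v in combinations(S, 2)):
                return k
    return 0

two_triangles = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5)]
hexagon = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 0)]
print(clique_number(6, two_triangles), clique_number(6, hexagon))  # 3 2
```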
-
http://mathoverflow.net/questions/72039?sort=votes
# Stieltjes Transform of $F^{*}PF$ as a function of the Stieltjes Transform of $P$ where $F$ is drawn from an $n \times n$ Gaussian-like random matrix distribution
I am trying to calculate the Stieltjes Transform of $F^{*}PF$ as a function of the Stieltjes Transform of $P$ where $F$ is drawn from an $n \times n$ Gaussian-like random matrix distribution. I am looking for the solution to look like this:
Let us call the Stieltjes Transform of $F^{*}PF$ to be $S_{F^{*}PF}=t(z)$. I want to show that
\begin{equation} t^2(z)=-\frac{1}{z}S_P\left(-\frac{1}{t(z)}\right) \nonumber \end{equation} where $S_P(z)$ is the Stieltjes Transform of $P.$
I know I have to use the Marchenko–Pastur theorem but couldn't figure out how.
I considered the Marchenko–Pastur theorem and the iteration they talk about as $B_n=A_n+1/nX_m^{*}T_mX_m$ and compared this to $F^{*}PF$, which means $A_n$ is zero and $X_m=\sqrt{n}F.$ Therefore,
\begin{equation} t(z)=-\frac{1}{z-\int \frac{\tau dH(\tau)}{1+\tau t(z)}} \end{equation}
I cannot go on from here.
-
## 1 Answer
Let us call the Stieltjes Transform of $F_i^{*}P_iF_i$ to be $S_{F_i^{*}P_iF_i}=t(z)$. We want to show that
\begin{equation} t^2(z)=-\frac{1}{z}S_{P_i}\left(-\frac{1}{t(z)}\right) \nonumber \end{equation} where $S_{P_i}(z)$ is the Stieltjes Transform of $P_i.$
We consider the Marchenko–Pastur theorem and see that $A_n$ is zero and $X_m=\sqrt{n}F_i.$ Therefore,
\begin{equation} t(z)=-\frac{1}{z-\int \frac{\tau dH(\tau)}{1+\tau t(z)}} \label{case2_hasibi} \end{equation}
where $H(\tau)$ is the empirical (eigenvalue) distribution of $P_i.$ In general we know that
\begin{eqnarray} \int dH(\tau)&=&1=\int \frac{(\tau-y)\, dH(\tau)}{\tau-y} \nonumber\\ &=&\int \frac{\tau\, dH(\tau)}{\tau-y}-\int \frac{y\, dH(\tau)}{\tau-y} \nonumber\\ &=&\int \frac{\tau\, dH(\tau)}{\tau-y}-y \int \frac{dH(\tau)}{\tau-y} \nonumber\\ &=&\int \frac{\tau\, dH(\tau)}{\tau-y}-yS_{Z}(y). \nonumber \end{eqnarray} By writing the last equation for $Z=P_i$ and $y=-\frac{1}{t(z)}$, we have \begin{eqnarray} 1&=&\int \frac{\tau\, dH(\tau)}{\tau+\frac{1}{t(z)}}+\frac{1}{t(z)} \int \frac{dH(\tau)}{\tau+\frac{1}{t(z)}} \nonumber\\ &=&t(z)\int \frac{\tau\, dH(\tau)}{\tau t(z)+1}+\frac{1}{t(z)} S_{P_i}\!\left(-\frac{1}{t(z)}\right). \nonumber \end{eqnarray} Then \begin{equation} \frac{1}{t(z)}=\int \frac{\tau\, dH(\tau)}{\tau t(z)+1}+\frac{1}{t^2(z)} S_{P_i}\!\left(-\frac{1}{t(z)}\right), \nonumber \end{equation} so that $\int \frac{\tau\, dH(\tau)}{\tau t(z)+1}=\frac{1}{t(z)}-\frac{1}{t^2(z)} S_{P_i}\!\left(-\frac{1}{t(z)}\right).$ Substituting this integral into the equation for $t(z)$ above gives \begin{equation} t(z)=-\frac{1}{z-\int \frac{\tau\, dH(\tau)}{1+\tau t(z)}}=-\frac{1}{z-\left[\frac{1}{t(z)}-\frac{1}{t^2(z)} S_{P_i}\!\left(-\frac{1}{t(z)}\right)\right]}. \end{equation} Simplifying both sides, we have \begin{equation} -t(z)z+1-\frac{1}{t(z)}S_{P_i}\!\left( -\frac{1}{t(z)} \right) =1. \nonumber \end{equation} And so \begin{equation} t(z)z=-\frac{1}{t(z)}S_{P_i}\!\left( -\frac{1}{t(z)} \right), \nonumber \end{equation} which means that \begin{equation} t^2(z)=-\frac{1}{z}S_{P_i}\!\left( -\frac{1}{t(z)} \right). \end{equation}
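As a numerical sanity check of the final relation, take the simplest case $P_i = I$, so $S_{P_i}(y) = \frac{1}{1-y}$ and $F_i^{*}P_iF_i = F_i^{*}F_i$ follows the Marchenko–Pastur law with ratio $1$. The sketch below (my own setup: a real $n \times n$ Gaussian $F_i$ scaled by $1/\sqrt{n}$) compares both sides of $t^2(z) = -\frac{1}{z}S_{P_i}(-1/t(z))$ at a point outside the spectrum's support:

```python
# Empirical check of t(z)^2 = -(1/z) S_P(-1/t(z)) for P = I.
import numpy as np

rng = np.random.default_rng(0)
n = 1000
F = rng.standard_normal((n, n)) / np.sqrt(n)
eigs = np.linalg.eigvalsh(F.T @ F)        # Marchenko-Pastur, support [0, 4]

z = -1.0                                   # a real point outside the support
t = np.mean(1.0 / (eigs - z))              # empirical Stieltjes transform
S_P = lambda y: 1.0 / (1.0 - y)            # Stieltjes transform of delta_1

lhs = t**2
rhs = -(1.0 / z) * S_P(-1.0 / t)
print(lhs, rhs)                            # agree up to finite-n fluctuations
```

For $P_i = I$ and $z = -1$ the relation reduces to $t^2 + t - 1 = 0$, so $t$ should be close to $(\sqrt{5}-1)/2 \approx 0.618$.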
-
http://mathoverflow.net/questions/61059/what-are-the-lengths-that-can-be-constructed-with-straightedge-but-without-compas
## What are the lengths that can be constructed with straightedge but without compass?
Most field theory textbooks will describe the field of constructible numbers, i.e. complex numbers corresponding to points in the Euclidean plane that can be constructed via straightedge and compass. This field is the smallest field of characteristic 0 that is closed under square root (i.e. is Pythagorean) and is closed under conjugation.
I'm interested in knowing: what is the field of numbers that can be constructed if we disallow the compass and use only a straightedge?
I have not checked this, but it seems that this question led Hilbert to formulate his 17th problem, particularly the version involving polynomials with rational coefficients (rather than the real coefficients, which Artin proved). I'm also interested in knowing more about this history.
-
This might well be my ignorance or a misunderstanding, but could you give some examples of things that one can construct in your setting. – quid Apr 8 2011 at 13:23
For example you can construct $\sqrt{2}$ by drawing an isosceles right-angled triangle. – Colin Tan Apr 8 2011 at 13:25
To piggyback on unknown's question, is it even obvious that this is a field? I think that one has to be very careful about the ground rules; for example, are you given the operation of moving / duplicating a segment? If not, then what you can construct seems to me to depend on how your ‘numbers’ are originally positioned. (Of course, switching which tool we're allowed, you doubtless already know about en.wikipedia.org/wiki/….) – L Spice Apr 8 2011 at 13:38
4
"Maybe by straightedge I mean at least something that can translate lengths. Not just to connect points." Maybe you need to think about what you mean to ask more precisely before you post the question? – Todd Trimble Apr 8 2011 at 13:56
Note that moving to a "marked ruler" is a fairly dangerous generalization of the notion of "straightedge". In particular, if you allow yourself a marked straightedge and a compas, then you can trisect and angle: en.wikipedia.org/wiki/… – Theo Johnson-Freyd Apr 8 2011 at 14:34
## 4 Answers
In view of the many comments, I will make a (I hope correct) summary of these comments in CW mode; everybody please feel free to edit:
1. If one starts with a completely 'blank sheet of paper' it seems that one can do almost nothing with a straightedge alone.
2. However, as mentioned by François Brunault, given certain 'initial constellations' one can construct some additional interesting points using a straightedge alone (see here (in French)).
3. Daniel Briggs suggested to 'add' just one circle with known center (the unit circle). If one does this, then by the Poncelet-Steiner theorem (mentioned by François Brunault) one can already construct everything one can construct with straightedge and compass.
4. L. Spice mentioned that by the Mohr-Mascheroni theorem the converse situation (only a compass no straightedge) allows also to construct everything one can construct with straightedge and compass.
5. The book Leçons sur les constructions géométriques by Lebesgue is entirely devoted to the question of geometric constructions using various instruments. The table of contents of this book (in French, again) is available here.
-
I fixed the links, they did not work. – Pieter Naaijkens Apr 8 2011 at 15:44
I added the reference to Lebesgue's book. – François Brunault Apr 8 2011 at 15:56
3: You must add one circle with its center point identified. – Will Jagy Apr 8 2011 at 20:48
In particular, all classical constructible points can also be constructed with a straightedge and a "rusty" compass (that has a fixed radius, since this obviously gives one circle). For example, a (straight) knife and a fork would work. I think that this formulation is closest to the original question. (A marked straightedge is much more than translating length with a compass.) – thei Apr 8 2011 at 23:00
@Will Jagy, yes I know (the 'the unit circle' was meant to convey this, having its center in 'the origin'). Though, on second thought, I agree this should be said explictly; I add it. Thanks for pointing it out. – quid Apr 9 2011 at 1:05
It depends on what you regard as a starting point.
For ruler and compass, we start with the points 0 and 1.
For an unmarked ruler, this is not a good start, because an unmarked ruler is good at conserving cross-ratios, but if you start with two (or three including $\infty$) points, there is no cross-ratio yet to conserve.
With a marked ruler, this problem disappears because you obviously get all the integers and then are able to construct parallel lines by building a complete quadrilateral over three equidistant points. So you already get all the rational numbers.
-
The short answer is that nothing is constructible. As is standard, we begin with the two points $(0,0)$ and $(1,0)$. Then we can draw a line between them, and that's it. We can't draw any more lines, and hence we can't construct any new points. The Euclidean rules say that we are only allowed to draw a new line if we are joining two already-constructed points, and a point can only be constructed if it is the intersection of two lines (or, irrelevant to this discussion, two circles or a line and a circle).
However, suppose you begin with a finite collection of points $(x_1,y_1),\ldots, (x_n,y_n)$. Let $C$ be the set of points constructible from this set using only a straightedge (unmarked). If a point $(x,y)$ is in $C$, then $x$ and $y$ can be formed from $x_1,\ldots,x_n,y_1,\ldots,y_n$ using the operations $+$, $-$, $\cdot$, and $\div$ (since new points are created as intersections between lines). However the converse of this is not true (as the two-point example shows). I suppose you could say more about what $C$ looks like, but it would probably be messy.
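The claim that straightedge-constructed coordinates stay in the field generated by the starting coordinates can be made concrete: intersecting two lines uses only $+$, $-$, $\cdot$, $\div$. A small sketch over $\mathbb{Q}$ using exact rationals (the example points are mine):

```python
# A new straightedge point is the intersection of two lines through old
# points, computed with field operations only; exact over the rationals.
from fractions import Fraction as Fr

def line_through(p, q):
    # line a*x + b*y = c through p and q
    a = q[1] - p[1]
    b = p[0] - q[0]
    return a, b, a * p[0] + b * p[1]

def intersect(l1, l2):
    a1, b1, c1 = l1
    a2, b2, c2 = l2
    det = a1 * b2 - a2 * b1
    if det == 0:
        return None                       # parallel or identical lines
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

p1, p2 = (Fr(0), Fr(0)), (Fr(1), Fr(1))
p3, p4 = (Fr(1), Fr(0)), (Fr(0), Fr(1))
# the diagonals of the unit square meet at (1/2, 1/2)
print(intersect(line_through(p1, p2), line_through(p3, p4)))
```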
-
Bogomolny's site "Cut the Knot" has lots of interesting math... http://www.cut-the-knot.org/impossible/straightedge.shtml
-