url (string, lengths 17-172) | text (string, lengths 44-1.14M) | metadata (string, lengths 820-832) |
---|---|---|
http://mathoverflow.net/questions/59245?sort=votes
|
## What relations exist among (quasi-) modular forms of different levels?
Given a (quasi-) modular form $f(\tau)$ for some congruence subgroup (say) $\Gamma(k)$, we know that $f(N\tau)$ is a (quasi-) modular form for $\Gamma(N k)$. Is there anything known about when we can do a partial reverse, that is, when we can take linear combinations of (quasi-) modular forms for some higher level subgroup to obtain one of strictly lower level?
An example is the following:
Let $E(q) = \sum_{k=0}^\infty \sigma_1(2k+1)q^{2k+1}$ and $A(q) = \sum_{k=1}^\infty \sigma_1(k)q^k$. Then $E(q)$ is modular with respect to a non-trivial character, and both $A(q^2)$ and $A(q^4)$ are quasi-modular of level 2 and 4, respectively (though not of pure weight).
However: it turns out that $$E(q) + 3A(q^2) - 2A(q^4) = A(q)$$ which shows that a linear combination of higher level terms (and one which is modular with respect to a non-trivial character) yields one of lower level.
Is this simply random chance? Are there known relations of this type?
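For readers who want to see the identity concretely, here is a short Python sketch (an editorial addition, not part of the original post) that verifies the stated relation coefficient by coefficient up to a finite cutoff. Since $\sigma_1$ is multiplicative, the identity can also be checked prime power by prime power, but the brute-force comparison is the quickest sanity check.

```python
# Sanity check of E(q) + 3 A(q^2) - 2 A(q^4) = A(q), comparing q-coefficients
# up to a cutoff N. Here sigma1(k) is the sum of the divisors of k.
N = 200

def sigma1(k):
    return sum(d for d in range(1, k + 1) if k % d == 0)

def lhs_coeff(n):
    """Coefficient of q^n on the left-hand side."""
    E = sigma1(n) if n % 2 == 1 else 0          # E(q) has only odd powers
    A2 = sigma1(n // 2) if n % 2 == 0 else 0    # A(q^2) contributes at even n
    A4 = sigma1(n // 4) if n % 4 == 0 else 0    # A(q^4) contributes when 4 | n
    return E + 3 * A2 - 2 * A4

assert all(lhs_coeff(n) == sigma1(n) for n in range(1, N + 1))
print("identity verified up to q^%d" % N)
```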
-
## 1 Answer
There are (at least) two answers: there is a Galois theory of modular functions (as in G. Shimura's 1971 book, or in Lang's book on elliptic functions), and a theory of "newforms" (Atkin-Lehner, and also Casselman in a representation-theoretic context).
Thinking in terms of general Galois theory, it ought not be so surprising that various sums of non-invariant things become invariant, I suppose, but the particulars are often non-trivial.
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 9, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9284429550170898, "perplexity_flag": "head"}
|
http://math.stackexchange.com/questions/135917/calculating-a-real-integral-using-complex-integration
|
# Calculating a real integral using complex integration
$$\int^\infty_0 \frac{dx}{x^6 + 1}$$
Does someone know how to calculate this integral using complex integrals? I don't know how to deal with the $x^6$ in the denominator.
-
## 2 Answers
Thankfully the integrand is even, so we have
$$\int^\infty_0 \frac{dx}{x^6 + 1} = \frac{1}{2}\int^\infty_{-\infty} \frac{dx}{x^6 + 1}. \tag{1}$$
To find this, we will calculate the integral
$$\int_{\Gamma_R} \frac{dz}{z^6+1},$$
where $\Gamma_R$ is the semicircle of radius $R$ in the upper half-plane, $C_R$, together with the line segment between $z=-R$ and $z=R$ on the real axis.
(Figure omitted: the contour $\Gamma_R$, i.e. the semicircular arc $C_R$ together with the segment $[-R, R]$. Image courtesy of Paul Scott.)
Then
$$\int_{\Gamma_R} \frac{dz}{z^6+1} = \int_{-R}^{R} \frac{dx}{x^6+1} + \int_{C_R} \frac{dz}{z^6+1}.$$
We need to show that the integral over $C_R$ vanishes as $R \to \infty$. Indeed, the ML estimate (a consequence of the triangle inequality) gives
$$\begin{align} \left| \int_{C_R} \frac{dz}{z^6+1} \right| &\leq L(C_R) \cdot \max_{C_R} \left| \frac{1}{z^6+1} \right| \\ &\leq \frac{\pi R}{R^6 - 1}, \end{align}$$
where $L(C_R) = \pi R$ is the length of $C_R$. Since this bound tends to $0$ as $R \to \infty$, we may conclude that
$$\lim_{R \to \infty} \int_{\Gamma_R} \frac{dz}{z^6+1} = \int_{-\infty}^{\infty} \frac{dx}{x^6+1}. \tag{2}$$
The integral on the left is evaluated by the residue theorem. For $R > 1$ we have
$$\int_{\Gamma_R} \frac{dz}{z^6+1} = 2\pi i \sum_{k=0}^{2} \operatorname{Res}\left(\frac{1}{z^6+1},\zeta^k \omega\right),$$
where $\zeta = e^{i\pi/3}$ is a primitive sixth root of unity and $\omega = e^{i\pi/6}$. Note that this is because $\omega$, $\zeta\omega$, and $\zeta^2 \omega$ are the only poles of the integrand inside $\Gamma_R$. The sum of the residues can be calculated directly, and we find that
$$\int_{\Gamma_R} \frac{dz}{z^6+1} = 2\pi i \sum_{k=0}^{2} \operatorname{Res}\left(\frac{1}{z^6+1},\zeta^k \omega\right) = \frac{\pi}{3 \sin(\pi/6)} = \frac{2\pi}{3}.$$
Thus, from $(1)$ and $(2)$ we conclude that
$$\int_{0}^{\infty} \frac{dx}{x^6+1} = \frac{\pi}{3}.$$
In general,
$$\int_{0}^{\infty} \frac{dx}{x^{2n}+1} = \frac{\pi}{2 n \sin\left(\frac{\pi}{2n}\right)}$$
for $n \geq 1$.
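As a quick numerical cross-check of this general formula (added here for illustration; it is not part of the original answer), direct quadrature agrees with the closed form:

```python
# Numerical check of int_0^infty dx / (x^(2n) + 1) = pi / (2 n sin(pi/(2n))).
import numpy as np
from scipy.integrate import quad

for n in range(1, 6):
    numeric, _err = quad(lambda x: 1.0 / (x**(2 * n) + 1.0), 0, np.inf)
    exact = np.pi / (2 * n * np.sin(np.pi / (2 * n)))
    print(n, numeric, exact)   # the two columns agree to quadrature accuracy
```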
-
Beautiful solution. (I started to write up a solution, but it wasn't this elegant.) – Nicholas Stull Apr 23 '12 at 19:16
Thanks, @Nicholas :) – Antonio Vargas Apr 23 '12 at 19:17
Very nice mate, cheers. Haven't done residues yet... I was thinking I would have to get the 6th roots of $-1$ to find all the singularities... and then use CIF multiple times to get the integral. Residues let you skip all that stuff, yes? – Jim_CS Apr 23 '12 at 19:55
@Jim_CS, yes, precisely. The residue theorem can essentially be seen as an application of Cauchy's integral formula (in addition to many other interesting interpretations). – Antonio Vargas Apr 23 '12 at 20:11
$$\int_0^\infty\frac{dx}{x^6+1}=\frac{1}{2}\lim_{R\to\infty}I_R$$ where $$I_R:=\int_{-R}^{R}\frac{dx}{x^6+1}.$$ Let us integrate $f(z):=\frac{1}{1+z^6}$ along the closed oriented curve constituted by the upper semicircle $C_R$ with center $0$ and radius $R>1$ and the interval $[-R,R]$.
Applying the residue theorem we get $$I_R+\int_{C_R}f(z)\,dz=2\pi i\sum_{k=0}^{2}\textrm{Res}\left(f;\exp\left(\tfrac{1+2k}{6}i\pi\right)\right).\qquad(*)$$
Noting that $\lim_{R\to\infty}\int_{C_R}f(z)\,dz=0,$ from $(*)$ you get your integral.
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 27, "mathjax_display_tex": 13, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9153953790664673, "perplexity_flag": "head"}
|
http://mathematica.stackexchange.com/questions/17342/how-do-i-get-the-equivalent-of-the-latex-tag-in-mathematica?answertab=votes
|
How do I get the equivalent of the $\LaTeX$ \tag{} in Mathematica?
$a+b=x \tag{1}$
I'm not necessarily asking how to use the $\LaTeX$ `\tag{}` itself in Mathematica, I'm just looking for some feature that will allow me to mark an equation like that.
-
`CellLabel` gets close but as `Right` is not a valid setting for `CellLabelPositioning` it cannot, AFAIK, be made to look as you show. – Mr.Wizard♦ Jan 6 at 7:31
There's also the Automatic Numbering. – Gustavo Bandeira Jan 6 at 7:34
2 Answers
I do not think what you are asking for is possible. But you can do what you want by numbering an equation in Mathematica like this (screenshot omitted).
I do not know if this will meet your needs.
Using Mathematica like Latex is not really practical or useful. I found it is better to just use Latex for typesetting, and use Mathematica for doing the computation and analysis.
-
Yes. But I wasn't expecting to use $\LaTeX$ to do that. I was expecting for any alternative that would lead me to do that. Thanks for the answer. – Gustavo Bandeira Jan 6 at 7:40
As others have pointed out, automatic numbering is nicely taken care of through the built-in `DisplayFormulaNumbered` style. Custom numbering (both a separate counter and counting sequence) can be introduced into stylesheets, for example, I have made a `Theorem` style before.
However, for one-off equation numbers / labels, like the $\LaTeX$ `\tag` option, you just want to manually set the `CellFrameLabels` option. This can be done using the option inspector or by using `CellPrint` or Show Expression. For example
````
CellPrint[Cell[BoxData[
  FormBox[RowBox[{"a", "+", "b", "=", "c"}], TraditionalForm]],
 "DisplayFormula", CellFrameLabels -> {{None, "(1a)"}, {None, None}}]]
````
produces the framed display formula with the "(1a)" label (output cell image omitted). Selecting the cell and looking at the option inspector shows how you could edit this option with a GUI (screenshot omitted).
However, referring to that equation using the standard CellTags and Automatic Numbering / CounterBox combination seems to be a bit tricky... There might be a nice way of doing it, but nothing immediately comes to mind.
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 6, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9342105984687805, "perplexity_flag": "middle"}
|
http://math.stackexchange.com/questions/94838/what-are-the-properties-of-a-prime-number/94899
|
# What are the properties of a prime number?
For instance, we know that odd numbers behave like:
$$x = 2y + 1 \quad\text{where}\quad x,y\in\mathbb Z$$
For even numbers:
$$a = 2b \quad\text{where}\quad a,b\in\mathbb Z$$
-
Clearly all primes other than $2$ must be of the form $2k + 1$ for some integer $k$ (else it is divisible by $2$). This is the same as saying all primes (other than 2) are congruent to 1 (mod 2). Similarly, all primes (other than 3) are congruent to 1 or 2 (mod 3). And all primes (other than 2) are congruent to 1 or 3 (mod 4)... etc. What exactly are you looking for? There is no 'general' form of a prime number, if that's what you're asking. – Daniel Freedman Dec 29 '11 at 0:26
There is no comparably simple rule for prime numbers. But google the term "sieve of Eratosthenes". So much has been written about their properties that brilliant people spend their lives studying them without reading most of what's been published. In around 300 BC, Euclid proved that infinitely many of them exist. Maybe that's the most interesting result that can be presented at an elementary level. – Michael Hardy Dec 29 '11 at 0:28
@Daniel, that answers my question. I am indeed looking for a "general" form. Almost like a formula with what I have. – jak Dec 29 '11 at 0:32
– alex.jordan Dec 29 '11 at 0:32
@jak, would "Is there a general formula for prime numbers?" be a more accurate title for your question? – Rahul Narain Dec 29 '11 at 10:01
## 9 Answers
$p \ne ab$ whenever $a, b \in \mathbb N$ with $a, b > 1$.
-
There's no nice, algebraic formula for primes; there are some examples at the Wikipedia article on the subject, but they are all ugly and impractical. Overall, the simplest way to define the primes is as numbers with only 2 divisors.
You can, of course, say things such as "all primes except 2 are odd numbers", which follows from the definition, but this doesn't tell you anything about which odd numbers are prime, and there are no clear patterns.
-
Re "the simplest way to define the primes is as numbers with only 2 divisors": AFAIK that's usual the definition of irreducible. AFAIK the usual definition of prime is that a number is prime iff its being a divisor of $ab$ implies it's a divisor of $a$ or of $b$. (That is, however, equivalent to primality.) – msh210 Dec 29 '11 at 1:02
@msh210: Your "usual definition" leads to $1$ being a prime, which is not a commonly held meaning. – Henning Makholm Dec 29 '11 at 1:12
@HenningMakholm, yes, I didn't state I was talking about integers $>1$. I also didn't state I was talking about integers at all. But I was, and I was. (More generally, a nonzero non-unit is prime if....) – msh210 Dec 29 '11 at 1:14
@msh210: Any reference in support of the usual in the usual definition? – Did Dec 29 '11 at 9:29
@DidierPiau, not really: it'd require a survey of all sources of definitions for the terms. However, I strongly suspect most elementary abstract algebra books agree on this. – msh210 Dec 29 '11 at 17:42
To add to the fact that there is no general formula for primes, it may help to trace back the history of prime numbers. Euclid defined primes in Elements, Book VII, Definition 11 as:
A prime number is that which is measured by a unit alone.
which in turn relies on definitions of number and unit.
Definition 1 from the same book:
A unit is that by virtue of which each of the things that exist is called one.
Definition 2:
A number is a multitude composed of units.
As far as formal definition, Metamath Proof Explorer defines it as such: primes.
As a sidenote, although it is not a property of prime numbers, Goldbach's conjecture states that:
Every even integer greater than 2 can be expressed as the sum of two primes.
-
Here's another property somewhat related to primes that I studied in the morning:
For any given integer $m$, there is no non-constant polynomial $p(x)$ with integer coefficients such that $p(n)$ is prime for all integers $n\ge m$.
References: 1) 104 Number Theory Problems: From the Training of the USA IMO Team
by Titu Andreescu, Dorin Andrica and Zuming Feng.
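A concrete illustration of this fact (an added example, not taken from the reference above) is Euler's polynomial $n^2+n+41$: it is prime for $n=0,\dots,39$ but composite at $n=40$, where it equals $41^2$.

```python
# Euler's polynomial n^2 + n + 41: prime for n = 0..39, composite at n = 40.
def is_prime(m):
    if m < 2:
        return False
    d = 2
    while d * d <= m:
        if m % d == 0:
            return False
        d += 1
    return True

values = [n * n + n + 41 for n in range(41)]
print(all(is_prime(v) for v in values[:40]))       # True
print(is_prime(values[40]), values[40], 41 * 41)   # False 1681 1681
```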
Edit: I was browsing through the internet when I stumbled upon this book dedicated to primes. (I am unable to comment on it, as I have no background in advanced maths.)
-
I am sure there are many other properties and I guess they are too many to list. – Eisen Dec 29 '11 at 8:17
Wilson's theorem: a natural number $n > 1$ is a prime number iff $(n-1)!\ \equiv\ -1 \pmod n$
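Here is a tiny brute-force check of Wilson's criterion against ordinary primality testing (an illustrative addition; the factorial grows quickly, so this is only sensible for small $n$):

```python
# Wilson's theorem: n > 1 is prime  iff  (n-1)! = -1 (mod n).
from math import factorial
from sympy import isprime

for n in range(2, 200):
    wilson = factorial(n - 1) % n == n - 1    # (n-1)! congruent to -1 mod n
    assert wilson == isprime(n)
print("Wilson's criterion matches primality for 2 <= n < 200")
```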
-
A formula might be created using the $\bmod$ function:
${\rm PrimeNumber} \bmod n > 0$ for all $n \in \mathbb{N}$ with $n > 1$ and $n \neq {\rm PrimeNumber}$.
-
In a way there are more prime numbers than there are square numbers. This is not an exact statement, but the sum of the reciprocals of the primes diverges, whereas the sum of the reciprocals of the squares converges.
-
Since people are talking about "formulas for primes" and someone mentioned polynomials, it's amusing to mention this fact: There actually is a 4th- (if I recall correctly) -degree polynomial in 14 (if I recall correctly) variables, with integer coefficients---let us call it $f(p,x_1,\ldots,x_{13})$---such that $$p\text{ is prime if and only if }\exists x_1\ \cdots\ \exists x_{13}\ f(p,x_1,\ldots,x_{13})=0.$$ (Or maybe I should have "$14$" where "$13$" appears?)
The polynomial, in all its splendor, is too long to write in this margin. But if you read about Hilbert's 10th problem you'll probably come across it.
Later note: That such a polynomial exists is what is expressed by saying that the set of all prime numbers is a "Diophantine set".
-
For reference: The table displayed in this question (taken from Ribenboim's The New Book of Prime Number Records) lists a few known facts on such polynomials. In particular, there are a polynomial in $12$ variables of degree $13697$ and a (not explicitly written) polynomial in $42$ variables of degree $5$ representing the primes. – t.b. Dec 30 '11 at 5:18
I strongly recommend that you take a look at Zagier's paper on the first 50 million prime numbers to learn about some other characterizations of the notion of "prime number".
As to congruences that characterize primality, Wilson's theorem provides one of the most well-known (as the fact that it's already been mentioned above/below testifies). Nonetheless, there is an interesting near-miss by Subbarao. You can read about this in one of those volumes by Ross Honsberger. Furthermore, if you are curious enough you may want to visit Scott Kominers' homepage: there is a generalization of the said criterion of Subbarao in a note of his that was recently published in INTEGERS.
Here you have some other equivalences of the definition of prime number:
A. $p$ is a positive prime number iff $\phi(p) = p-1$.
When I told my teacher about this finding of mine, he generously answered with Lehmer's Totient Problem:
B. $p$ is a positive prime number iff $p$ is the least factor $>1$ of some natural number.
C. Later...
-
– Brian M. Scott Feb 24 '12 at 10:25
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 33, "mathjax_display_tex": 3, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.947629451751709, "perplexity_flag": "middle"}
|
http://mathoverflow.net/questions/73193/dirichlet-series-of-the-reciprocal-radical-function
|
## Dirichlet series of the reciprocal radical function
Define $rad(n):=\prod_{p|n}p$ and $a_n:=\frac{n}{rad(n)}.$ For example, $a_n=1$ whenever $n$ is a squarefree integer. The associated Dirichlet series $$F(s):=\sum_{n} \frac{a_n}{n^s}=\prod_{p} \left(1+\frac{1}{p^s} \frac{1}{1-p^{1-s}}\right)$$ has abscissa of convergence $s_0=1.$ Are there any results regarding the distribution of $a_n$, e.g. whether $\sum \limits_{n \leq x} a_n \ll x (\log x)^A$ for some real constant $A>0$?
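To make the definitions concrete, the following small script (an editorial addition, not part of the question) computes $rad(n)$, $a_n = n/rad(n)$ and the partial sums $\sum_{n\le x} a_n$ for a few cutoffs $x$:

```python
# Compute rad(n), a_n = n / rad(n), and partial sums of a_n.
from sympy import primefactors

def rad(n):
    r = 1
    for p in primefactors(n):
        r *= p
    return r

a = [0] + [n // rad(n) for n in range(1, 10001)]   # a_n for 1 <= n <= 10000
for x in (10, 100, 1000, 10000):
    print(x, sum(a[1:x + 1]))
```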
-
## 1 Answer
This problem was studied by De Bruijn, see
N. G. de Bruijn and J. H. van Lint, "On the number of integers $\le x$ whose prime factors divide $n$", Acta Arith. 8 (1963) 349–356
The main result is that $$\log\left(S(x)\right)=\log\left(\sum_{n\le x} \frac{1}{rad(n)}\right)\sim \left(\frac{8\log x}{\log \log x}\right)^{1/2}$$ and that $$\sum_{n\le x} \frac{n}{rad(n)}=o\left(xS(x)\right).$$ See also the article "Idempotents and Nilpotents Modulo n" for a discussion of this and similar problems (and more complete references).
-
I take it the appearances of $n$ in the last term of the first display should instead be $x$? – Gerry Myerson Aug 19 2011 at 12:52
Thank you, it is fixed. – Gjergji Zaimi Aug 19 2011 at 23:29
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 11, "mathjax_display_tex": 3, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8683494925498962, "perplexity_flag": "middle"}
|
http://math.stackexchange.com/questions/40830/how-to-define-a-partial-derivative-invariantly?answertab=votes
|
# How to define a partial derivative invariantly?
Let $M$ be a smooth manifold and $f$, $g$ be smooth functions in some neighbourhood of a point $x_0\in M$, $\nabla g\ne0$.
1) How to define $\displaystyle \frac{\partial f}{\partial g}$ invariantly? If $M$ is a domain in $\mathbb R^n$ then the derivative in the direction of $\nabla g$ seems to give an answer: $\displaystyle \frac{(\nabla f,\nabla g)}{|\nabla g|^2}$. But to calculate $\nabla g$ and $|\nabla g|$ on a manifold one needs a metric. On the other hand, if we consider smooth coordinates $(g_1=g, g_2,\ldots,g_n)$ in some neighbourhood of $x_0$, then the partial derivative $\displaystyle \frac{\partial f}{\partial g}$ seems to be defined in the standard way. But the question arises: would the value $\displaystyle \frac{\partial f}{\partial g}$ be independent of the choice of $g_2,\ldots,g_n$? If not, what is the correct way to do it? Is there some reference?
2) Let $f_1, f_2,\ldots,f_n$ be smooth coordinates in some neighbourhood of $x_0$. What is the object $(\displaystyle \frac{\partial f_1}{\partial g},\ldots,\frac{\partial f_n}{\partial g})$? Would it be by some chance a section of some good fiber bundle? Is there some reference where such objects are considered?
Thanks in advance!
Addition of May 27
Suppose now that there is a Riemannian metric on $M$. Then what would be the most natural definition of the partial derivative $\displaystyle \frac{\partial f}{\partial g}$? For example, to take $\displaystyle \frac{\partial f}{\partial g}=df(\nabla g)=(df,*dg)$ does not seem right, since that would mean $\displaystyle \frac{\partial f}{\partial g}=\displaystyle \frac{\partial g}{\partial f}$. If we take $\displaystyle \frac{(\nabla f,\nabla g)}{|\nabla g|}$ it would be a sort of directional derivative. So the right question here seems to be: what value of $a$ is it best to take for $\displaystyle \frac{(\nabla f,\nabla g)}{|\nabla g|^a}$ to be a partial derivative? Or is there no "best" choice? I tried to apply dimensional analysis, and it seems $a=1$ is the choice to have the result behave like $\nabla f$, but I'm not sure, because maybe one has to assign some dimension to the metric.
-
## 3 Answers
The partial derivative $\frac{\partial f}{\partial g}$ is not invariant. One way to think about partial derivatives (the way relevant to how you've phrased the question) is that you are picking out a particular coefficient in the Jacobian $df_p : T_p(M) \to T_p(\mathbb{R}) \cong \mathbb{R}$ presented as a linear combination of some basis for the cotangent space, and this construction depends on the entire basis (not just $dg_p$). If $M$ is Riemannian then the cotangent space can be equipped with an inner product, so then you don't need the entire dual basis, just a particular cotangent vector.
A nicer notion of partial derivative is to pick a tangent vector $v \in T_p(M)$ and consider $df_p(v)$. This is invariant in the sense that it comes from the evaluation map $T_p(M) \times T_p(M)^{\ast} \to \mathbb{R}$.
-
This is only a partial answer, but that might be considered appropriate for this question. :)
Partial derivatives can only be defined if you have a coordinate system: vary one coordinate, keeping all the other ones fixed. So yes, it definitely depends on the choice of $g_2,\dots,g_n$.
-
I had to think about this recently. My conclusion was that it can't be done: the invariant object is $dg$, and to pass from $dg$ to $\dfrac{\partial}{\partial g}$ you need to choose an isomorphism from the cotangent space to the tangent space. This can be done if you, for example, choose a basis for your cotangent space or have a (pseudo-)Riemannian metric.
In full: Let $M$ be a smooth manifold (without boundary) and let $g_1, \ldots, g_n : U \to \mathbb{R}$ be a family of smooth functions defined on some open set $U \subseteq M$. Then, if at $p \in U$ the differentials $dg_1, \ldots, dg_n$ form a basis of the cotangent space $T^*_p M$, by the inverse function theorem, $(g_1, \ldots, g_n) : U \to \mathbb{R}^n$ is locally invertible with smooth inverse. We may therefore assume $U$ is small enough that $(g_1, \ldots, g_n) : U \to \mathbb{R}^n$ is a diffeomorphism onto its image, and the dual basis $\dfrac{\partial}{\partial g_1}, \ldots, \dfrac{\partial}{\partial g_n}$ is just what you expect under this chart. If we choose a different chart $(\tilde{g}_1, \ldots, \tilde{g}_n)$, then we have the relation $$\dfrac{\partial}{\partial \tilde{g}_j} = \sum_{k=1}^{n} \dfrac{\partial g_k}{\partial \tilde{g}_j} \dfrac{\partial}{\partial g_k}$$ so in particular even if you set $g_1 = \tilde{g}_1$, in general $\dfrac{\partial}{\partial g_1} \ne \dfrac{\partial}{\partial \tilde{g}_1}$, even though $dg_1 = d\tilde{g}_1$!
As for your second question, I'm not sure what $\dfrac{\partial}{\partial g} (f_1, \ldots, f_n)$ could be. I mean, it's the coefficients of $\dfrac{\partial}{\partial g}$ with respect to the basis $\dfrac{\partial}{\partial f_j}$, but interpreting it that way doesn't seem to yield anything useful.
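The chart dependence described in this answer is easy to see symbolically. The following SymPy sketch (an added illustration, not part of the original answer) works in $\mathbb{R}^2$ with $f = xy$: both charts share the first coordinate $g_1 = x$, but swapping the second coordinate from $y$ to $y - x$ changes $\partial f/\partial g_1$.

```python
# Two charts on R^2 sharing the first coordinate g1 = x:
#   chart A: (g1, g2) = (x, y)        chart B: (g1, g2) = (x, y - x)
# Express f = x*y in each chart and differentiate with respect to g1.
import sympy as sp

x, y, g1, g2 = sp.symbols('x y g1 g2')
f = x * y

# Chart A: x = g1, y = g2  =>  df/dg1 = y
fA = f.subs({x: g1, y: g2})
print(sp.diff(fA, g1).subs({g1: x, g2: y}))        # y

# Chart B: x = g1, y = g2 + g1  =>  df/dg1 = x + y
fB = f.subs({x: g1, y: g2 + g1})
print(sp.diff(fB, g1).subs({g1: x, g2: y - x}))    # x + y
```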
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 57, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9486030340194702, "perplexity_flag": "head"}
|
http://mathoverflow.net/questions/34435?sort=oldest
|
## Automatic proving some expression is positive
Is there any automated way (i.e., some algorithm) to prove that a certain algebraic expression is always non-negative in some range? If so, is there any implementation you would suggest? My concrete problem is that I want to prove that for $f \in [0,1], 1 \leq a \leq L-2$ the following is true:
$$2^{(-a - L)} f^{-a} (1 + f)^{(-1 - a)} \left\{2^{(1 + a)} f^a (1 + f)^L (1 + 2 f) \left(-(1 + f)^{(1 + a)} + 2^a (1 + f^{(1 + a)})\right)\right.$$ $$+ 2^L (1 + f)^a \left[-2^a (1 + f) \left(-f^{(1 + 2 a)} + f^L + 3 f^{(a + L)} + 3 f^{(1 + a + L)}\right) \right.$$ $$\left.\left.+ (1 + f)^a \left((-3 + f) f^{(1 + a)} - a (-1 + f) (1 + 2 f) (f^a - f^L) + f^L (2 + 3 f (3 + f))\right)\right]\right\} \ge 0$$
-
What is the range on $L$? $L \in \mathbb{R}$ ? – Joseph O'Rourke Aug 3 2010 at 21:00
Fixed up your math display for you. It looks like there is a lot more simplification that can be done for this expression. (For starters there is the manifestly positive multiplier outside the outermost curly brackets.) – Willie Wong Aug 3 2010 at 21:14
This is of no help, but when $f=1$, the expression is zero. – Joseph O'Rourke Aug 3 2010 at 21:15
L is an integer L >= 3. I plotted for some values of (a,L) and it looks like it is a function that is increasing up some point then decreasing back to zero (unimodal). Also: I have the feeling that it goes by some convexity argument. I've encountered similar expressions which I could prove it was positive by applying things of type (1+f^k)/2 >= ((1+f)/2)^k But since the expressions got larger for this application (this is a part of a Machine Learning problem) I don't know how to find the correct convexity arguments anymore. – Renato Aug 3 2010 at 22:25
Either Joseph's comment, or Willie's correction of the original expression is wrong (now, when $f=1$, the last factor in the first line is $0$, the second line depends on $L$, and the third does not). Renato, please, reenter the original expression without erasing Willie's. – fedja Aug 4 2010 at 3:18
## 3 Answers
Some CASes do implement mechanisms that can sometimes answer such questions. For example, in Maple you could specify the ranges of your parameters using assume() and then test for positivity using is() or coulditbe(). Mind you, it won't always work, and sometimes you might need to help the computer along. In some ways, using a CAS like this effectively is just as much of an art as doing the math all by yourself.
In any case, you should start by simplifying your expression: the factor of $2^{-a-L} f^{-a} (1+f)^{-1-a}$ in the beginning, being a product of powers of non-negative numbers, is always non-negative and can therefore be omitted. A decent CAS ought to figure that out for you, but there's no point in even bothering it with such things.
Also, this is a generic solving technique: if you can write your expression as a product, you can try to determine the sign of each factor separately. Ditto if you have a sum and can show each term to be non-negative, although in that case even one bad term can spoil the whole sum.
More generally, try applying the intermediate value theorem. In particular, if a function has no zeros and no discontinuities on an interval, and is positive for some value on that interval, it must be positive on all of it. It's often easier, for both humans and computers, to just look for the zeros of a function than to directly deduce its sign.
-
You may want to look at global optimization methods. If you can compute a positive lower bound for the global minimum, then you're done.
-
Here is a possible approach, more ad hoc than those previously suggested. Let $E=E(f,L,a)$ be the expression without the "manifestly positive" factor that Willie Wong noticed is irrelevant. Hope that establishing it for integers $a$ will lead to settling it for real $a$ (that's just a hope). So focus on integral $a$. Because $a=1$ is a bit different, separate that case off. So now explore $E(f,L,a)$ for $1 < a \le L-2$, where both $a$ and $L$ are integers. For $L$ even, $$E = -2^a \; f^{a+1} \; (1+f)^{a+1} \; \mathrm{poly}(f^{L+1}),$$ where $\mathrm{poly}(f^{L+1})$ is a polynomial in $f$ of degree $L+1$. For $L$ odd, $$E = -2^a \; f^a \; (1+f)^{a+2} \; \mathrm{poly}(f^L).$$ Examples, $L$ even: $$L=6,a=2: \quad E = -8 f^2 (f+1)^3 \left(34 f^7-31 f^6-56 f^5+59 f^4+10 f^3+23 f^2-20 f-19\right).$$ $$L=6,a=3: \quad E = -16 f^3 (f+1)^4 \left(46 f^7-27 f^6-76 f^5+19 f^4+110 f^3-5 f^2-48 f-19\right).$$ $$L=6,a=4: \quad E = -32 f^4 (f+1)^5 \left(44 f^7-15 f^6-90 f^5+55 f^4+80 f^3+15 f^2-66 f-23\right).$$
$L$ odd: $$L=7,a=2: \quad E = -8 f^2 (f+1)^4 \left(74 f^7-127 f^6+91 f^4-46 f^3+39 f^2+4 f-35\right).$$
$$L=7,a=3: \quad E = -16 f^3 (f+1)^5 \left(106 f^7-147 f^6-36 f^5+55 f^4+74 f^3+27 f^2-48 f-31\right).$$
$$L=7,a=4: \quad E = -32 f^4 (f+1)^6 \left(118 f^7-143 f^6-56 f^5+15 f^4+174 f^3-f^2-76 f-31\right).$$
$$L=7,a=5: \quad E = -64 f^5 (f+1)^7 \left(104 f^7-105 f^6-106 f^5+137 f^4+28 f^3+73 f^2-90 f-41\right).$$
Now the task is prove that $\mathrm{poly}(\;)$ is negative for $f$ in $[0,1]$. As observed previously, $f=1$ is a root, so $(f-1)$ is a factor. Just taking the last polynomial above as an example, it has a root at $f=-0.346213$ and is negative between there and $f=1$. It seems feasible to analyze the structure of $\mathrm{poly}(\;)$ and prove that it has no roots in $[0,1]$, which would settle it for integers $a>1$.
Of course I am aware that I am leaving much to hope and further work.
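For what it is worth, the remaining step of certifying that a given $\mathrm{poly}(\,)$ has no roots in $[0,1)$ can be automated with exact root counting. The sketch below (an editorial addition, using the $L=7$, $a=5$ polynomial above as the example) divides out the known factor $(f-1)$ and counts the real roots of the quotient in $[0,1]$ with SymPy:

```python
# Count real roots of the L=7, a=5 polynomial in [0, 1] exactly.
import sympy as sp

f = sp.symbols('f')
p = sp.Poly(104*f**7 - 105*f**6 - 106*f**5 + 137*f**4 + 28*f**3
            + 73*f**2 - 90*f - 41, f)

q, r = sp.div(p, sp.Poly(f - 1, f))   # p = (f - 1) q; zero remainder means f = 1 is a root
print(r)                              # remainder
print(q.count_roots(0, 1))            # number of real roots of q in [0, 1]
print(q.eval(0), p.eval(0))           # sign checks at f = 0
```

If the reported root count is zero and the quotient has a definite sign at $f=0$, the sign of $\mathrm{poly}(\,)$ on $[0,1)$ follows.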
-
Thanks. We are trying to investigate those polynomials. Actually, we are happy with $a,L \in \mathbb{Z}$ since this is enough for our application. The question somewhat reduces to proving that some polynomial has no roots in an interval. – Renato Aug 4 2010 at 15:27
@Renato: Yes, I would say your question precisely reduces to showing that these polynomials have no root in [0,1]. Some of them have no real roots at all, e.g., for L=7, a=3,4, after factoring out (f-1). These may be easier to tackle first. – Joseph O'Rourke Aug 5 2010 at 13:02
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 35, "mathjax_display_tex": 12, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9449227452278137, "perplexity_flag": "head"}
|
http://mathoverflow.net/questions/38289/comparing-lower-central-series-and-augmentation-ideal-completions/38318
|
## Comparing lower central series and augmentation ideal completions
Let $G$ be a group. Let $G^p$ be the completion of $G$ with respect to the mod $p$ lower central series of $G$, i.e. $G^p=\varprojlim_{q} G/\gamma_qG$, where $\gamma_qG$ is generated by all $\{[x_1,\cdots,x_s]^{p^t}:sp^t\geq q,\ x_i\in G\}$ and $[x_1,\cdots,x_s]$ is the iterated commutator $[\cdots[x_1,x_2],\cdots,x_s]$.
Is it true that the group ring of the completion ${\mathbb{Z}}/p[G^p]$ is the completion of the group ring ${\mathbb{Z}}/p[G]$ with respect to the powers of the augmentation ideal? i.e. ${\mathbb{Z}}/p[G^p]\cong \varprojlim_{q}{\mathbb{Z}}/p[G]/I^q$, where $I$ is the augmentation ideal of the group ring ${\mathbb{Z}}/p[G]$?
-
## 2 Answers
As Simon points out, the answer is no in a simple case and if you think about his argument the answer should probably be no as soon as $G^p$ is infinite. However, there is a statement that is very close to the question which is true: The completion of the group ring in the $I$-adic topology is isomorphic to $\varprojlim_q\mathbb Z/p[G/\gamma_qG]$. This follows from a result of Jennings-Lazard that the topology on $G$ defined by the $p$-lower central series equals that induced by the $I$-adic filtration on the group ring (see Quillen: On the associated graded ring of a group ring. J. Algebra 10 for an even more precise statement).
-
It seems there is a contradiction. If the group $G$ is not abelian and not finitely generated, then $\varprojlim_{q}{\mathbb{Z}}/p[G/\gamma_qG] \not\cong \varprojlim_{q}{\mathbb{Z}}/p[G]/I^q$. However, for the same $G$, if we consider $\varprojlim_{q}{\mathbb{Z}}/p[G]/I^q$ as the completion of the group ring ${\mathbb{Z}}/p[G]$ in the $I$-adic topology, it is isomorphic to $\varprojlim_{q}{\mathbb{Z}}/p[G/\gamma_qG]$. – Colin Tan Sep 12 2010 at 5:21
To clarify: By completion of the group ring in the $I$-adic topology do you mean $\varprojlim_{q}{\mathbb{Z}}/p[G]/I^q$ or something else? Thanks. – Colin Tan Sep 12 2010 at 5:33
Yes, I mean exactly what you propose. I have to admit that I am a little bit queasy about the non-fg case but I do not see the contradiction you suggest, could you be a bit more specific? I am not suggesting (if that is what you are saying) that $\varprojlim_q \mathbb Z/p[G/\gamma_qG] = \mathbb Z/p[G^p]$. – Torsten Ekedahl Sep 12 2010 at 12:15
Briefly, the question is like this. Let $G$ be a group. Is the $I$-adic completion of the group ring ${\mathbb{Z}}/p[G]$, denoted by $\varprojlim_{q}{\mathbb{Z}}/p[G]/I^q$, isomorphic to ${\mathbb{Z}}/p[\varprojlim_{q}G/\gamma_qG]\cong\varprojlim_{q}{\mathbb{Z}}/p[G/\gamma_qG]$? – Colin Tan Sep 14 2010 at 6:47
Your stated isomorphism is not true. To the left you have only finite sums, to the right you have some infinite sums. – Torsten Ekedahl Sep 18 2010 at 21:24
I don't quite follow your definition of the mod $p$ lower central series as $s$ only seems to appear once in the definition. However whatever it is the answer is no.
If $G=\mathbb{Z}$ then the $I$-adic completion of $\mathbb{Z}/p[G]$ is isomorphic to a power series in one variable $T=x-1$ with coefficients in $\mathbb{Z}/p$ --- here $x$ is a generator of $G$.
Thus this $I$-adic completion is a commutative Noetherian algebra that is not finitely generated over the base field so cannot be a group algebra of any group since commutativity would imply that the group is abelian and Noetherianity would imply the group has no strictly ascending chains of subgroups. Amongst abelian groups only finitely generated groups have this latter property.
-
I should perhaps add that these $I$-adically completed things are studied mostly under the name of Iwasawa algebras. See math.uiuc.edu/documenta/vol-coates/… for a survey of algebraic results. – Simon Wadsley Sep 10 2010 at 16:22
First, thank you for your answer and the reference. :) I have corrected the mistake in $\gamma_qG$. In the case where $G$ is not abelian, under what conditions (such as being finitely generated and non-abelian) will the answer be yes? – Colin Tan Sep 11 2010 at 0:32
I think Torsten has now basically answered that question. Probably never yes for infinite groups but you can say something. – Simon Wadsley Sep 11 2010 at 9:46
Is the group $G/\gamma_qG$ nilpotent for an arbitrary group $G$? I think if $G$ is nilpotent, then $\varprojlim_{q}{\mathbb{Z}}/p[G]/I^q$ is isomorphic to $\varprojlim_{q}{\mathbb{Z}}/p[G/\gamma_qG]$. Am I right? Any comments are welcome. Thanks – Colin Tan Sep 12 2010 at 8:14
I think that all these things will be true when $G^p$ is profinite (in fact necessarily pro-$p$ in that case). Moreover, $G$ finitely generated will suffice for this. I suggest consulting a book on profinite or pro-$p$ groups such as those by J.S. Wilson or Dixon, du Sautoy, Mann and Segal. – Simon Wadsley Sep 12 2010 at 13:25
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 53, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9409432411193848, "perplexity_flag": "head"}
|
http://mathoverflow.net/questions/50243/questions-on-central-simple-algebras/112763
|
## questions on central simple algebras
Suppose $A$ is a finite dimensional central simple algebra over a field $F$, $B$ is a simple subalgebra of $A$. $C$ is the centralizer of $B$ in $A$.
From the Wedderburn-Artin theorem and the double centralizer theorem we know
1. $A\cong M_n(\Delta_A)$,where $\Delta_A$ is a central division algebra over $F$
2. $B\cong M_m(\Delta_B)$ ,where $\Delta_B$ is a division algebra over $F$
3. $C\cong M_k(\Delta_C)$ ,where $\Delta_C$ is a division algebra over $F$
so what is the relationship between $\Delta_A$, $\Delta_B$ and $\Delta_C$, and $m,n,k$?
If we write their Schur indices, the dimension of a division algebra over a maximal subfield, as $Ind_A$, $Ind_B$, $Ind_C$, what is the relationship between these three numbers?
-
## 2 Answers
The double centralizer theorem (cf. Knapp, Anthony W. (2007), Advanced algebra, Cornerstones p 115 Theorem 2.43) implies that $$\textrm{dim}_F B \cdot \textrm{dim}_F C = \textrm{dim}_F A.$$ Thus we should have $$m^2\textrm{dim}_F \Delta_B \cdot k^2\textrm{dim}_F \Delta_C = n^2\textrm{dim}_F \Delta_A.$$ Furthermore, $C$ is simple, and $B$ is the centralizer of $C$ in $A$.
As a corollary of the double centralizer theorem, If $\Delta$ is a central finite dimensional division algebra over a field $F$, and if $K$ is the maximal subfield of $\Delta$, then $$\textrm{dim}_F\Delta=(\textrm{dim}_F K)^2.$$ So, for $\Delta_A$, we have $\textrm{dim}_F \Delta_A = (Ind_A)^2$. For $\Delta_B$ and $\Delta_C$, note that $Z(B)=B\cap C = Z(C)$. Thus, $$\textrm{dim}_F\Delta_B=\textrm{dim}_{B\cap C}\Delta_B\cdot\textrm{dim}_F(B\cap C)=(Ind_B)^2\cdot\textrm{dim}_F(B\cap C)$$ $$\textrm{dim}_F\Delta_C=\textrm{dim}_{B\cap C}\Delta_C\cdot\textrm{dim}_F(B\cap C)=(Ind_C)^2\cdot\textrm{dim}_F(B\cap C).$$
-
I don't know if we may expect a "simple"(pun unintended) answer. You can take $B={\mathbb H}$ the hamiltonian quaternions, embed it in $M_4({\mathbb R})$ and the latter can be embedded in $M_n({\mathbb R})$ any old how. (Then $C$ may be a bit more complicated).
-
I don't understand why you would further embed $M_4(\mathbb{R})$ in $M_n(\mathbb{R})$. Based on the OP, the field $\mathbb{F}=\mathbb{R}$, and your example is $B=\mathbb{H}$, $A=M_4(\mathbb{R})$; $C$ is the centralizer of $\mathbb{H}$ in $M_4(\mathbb{R})$. By direct computation, $C$ is isomorphic to $\mathbb{H}$ with different matrices corresponding to $i$, $j$, and $k$. – i707107 Feb 26 at 21:27
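The centralizer computation mentioned in this comment can be checked numerically. The sketch below (an added illustration) embeds $\mathbb{H}$ in $M_4(\mathbb{R})$ by left multiplication on the basis $1,i,j,k$ and computes the dimension of its centralizer as the nullity of a linear system; the expected value is $4$, so that $\dim_F B \cdot \dim_F C = 4 \cdot 4 = 16 = \dim_F M_4(\mathbb{R})$, in line with the double centralizer theorem.

```python
# Centralizer of H (embedded in M_4(R) by left multiplication) computed as the
# nullspace of X -> (L_i X - X L_i, L_j X - X L_j) via Kronecker products.
import numpy as np

Li = np.array([[0., -1, 0, 0], [1, 0, 0, 0], [0, 0, 0, -1], [0, 0, 1, 0]])
Lj = np.array([[0., 0, -1, 0], [0, 0, 0, 1], [1, 0, 0, 0], [0, -1, 0, 0]])

I4 = np.eye(4)
# vec(L X - X L) = (I kron L - L^T kron I) vec(X)
M = np.vstack([np.kron(I4, Li) - np.kron(Li.T, I4),
               np.kron(I4, Lj) - np.kron(Lj.T, I4)])
nullity = 16 - np.linalg.matrix_rank(M)
print(nullity)   # dimension of the centralizer of H in M_4(R); expected 4
```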
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 53, "mathjax_display_tex": 5, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9172207713127136, "perplexity_flag": "head"}
|
http://nrich.maths.org/5963/note
|
### Rotating Triangle
What happens to the perimeter of triangle ABC as the two smaller circles change size and roll around inside the bigger circle?
### Doodles
A 'doodle' is a closed intersecting curve drawn without taking pencil from paper. Only two lines cross at each intersection or vertex (never 3), that is the vertex points must be 'double points' not 'triple points'. Number the vertex points in any order. Starting at any point on the doodle, trace it until you get back to where you started. Write down the numbers of the vertices as you pass through them. So you have a [not necessarily unique] list of numbers for each doodle. Prove that 1)each vertex number in a list occurs twice. [easy!] 2)between each pair of vertex numbers in a list there are an even number of other numbers [hard!]
### Russian Cubes
How many different cubes can be painted with three blue faces and three red faces? A boy (using blue) and a girl (using red) paint the faces of a cube in turn so that the six faces are painted in order 'blue then red then blue then red then blue then red'. Having finished one cube, they begin to paint the next one. Prove that the girl can choose the faces she paints so as to make the second cube the same as the first.
# A Long Time at the Till
### Why do this problem?
This problem provides an introduction to advanced mathematical behaviour which might not typically be encountered until university. The content level is secondary, but the thinking is sophisticated and will benefit the mathematical development of school-aged mathematicians. It will be of particular interest to students who want to learn to think like mathematicians and can be used at any point in the curriculum. It will need to be used with students who are already used to engaging with sustained mathematical tasks.
Essentially the task involves carefully reading and then reflecting upon the merits of two very different solutions to a 'difficult-to-solve-but-easy-to-understand' problem. This is of value because mathematicians don't simply stop once an answer is found; reflecting on the method of solution is a key part of advanced mathematical activity. It will help train school students in the art of assessing their own solutions, which will inevitably lead to better performance in exams.
Note: The Full Solutions are to be found here - just click on the 'View full solution' link.
### Possible approaches
This task ideally requires at least two students to work together so that ideas arising can be discussed. We suggest two different ways of using the problem:
1. Filling time for early-finishers/mathematics club
Print out a few copies of the problem and solutions to have to hand. Give them out to groups of keen early-finishers to consider in 'spare' lesson time over the course of a week. Give them space to discuss the two solutions, help each other to understand the subtleties and then to discuss the relative merits of the solutions. The problem will automatically generate discussion amongst students, but you might like them to 'report' back to you or others with things that they have discovered or explored.
2. Whole-class activity
Set the background task itself as a homework problem with a fixed time-limit, stressing that only a partial solution is expected. Students should come prepared to report on the ways that they tried to solve the problem and the things that they have discovered about the problem.
Back in the lesson, group students into pairs or fours. Hand out printed copies of the solutions. Give the groups half an hour or so to try to understand the solutions with the explicit task of writing down 5 short bullet points which explain the key aspects of the solution method. Some students will prefer to discuss solutions together as they work through them whereas others will prefer to work alone. Both approaches are fine, so you might wish to group students according to their preferred style.
Next spend 10 minutes sharing the different lists of bullet points to create a 'shared' list for each problem on the board.
Spend the remaining time back in groups considering the suggested variations on the background problem. Note that some of these are significantly easier problems to solve because of their simplified prime factorisation. As a focus for the activity, set the explicit task: "which of the variation problems would you choose to solve, and why?"
### Key questions
What are the 'key steps' in the solutions, and what are the 'details'?
Can you follow the overall 'strategy' of the two solutions?
Which of the two solutions seems more 'reusable' for similar variants on the background problem?
Which of the two solutions do you prefer? Why?
Of the suggested variants, which seems likely to be the easiest to analyse? Why would you think that?
### Possible extension
A simple-to-set extension is to ask students to solve one or more of the suggested variations on the background problems
Another more sophisticated extension is to ask: what would make a variation of the background problem difficult or easy to solve? Can you create a much simpler problem which has a unique solution?
### Possible support
Recall that we only recommend that you use this task with students already used to sustained mathematical engagement with tasks.
To help students to get started with thinking about the background task, suggest that they work in pence and convert the two conditions into equations involving whole numbers. Stress that the sum will be $711$ but the product will be $711,000,000$ due to multiplying by $100$ four times. Suggest also that prime factorisation will be useful and a clear recording system will be necessary to keep track of calculations.
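For teachers who would like something to check final answers against, here is a rough brute-force search (an editorial addition, not part of the NRICH materials): working entirely in pence, it looks for four whole-number prices with sum $711$ and product $711{,}000{,}000$.

```python
# Brute-force search for four prices (in pence) with sum 711 and
# product 7.11 * 100**4 = 711,000,000.
TARGET_SUM, TARGET_PROD = 711, 711_000_000

solutions = []
for a in range(1, TARGET_SUM + 1):
    if TARGET_PROD % a:
        continue
    for b in range(a, TARGET_SUM - a + 1):
        if (TARGET_PROD // a) % b:
            continue
        for c in range(b, TARGET_SUM - a - b + 1):
            d = TARGET_SUM - a - b - c
            if d >= c and a * b * c * d == TARGET_PROD:
                solutions.append((a, b, c, d))
print(solutions)   # prices in pence; divide by 100 for pounds
```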
In assessing the solutions encourage students to go through the solutions carefully line by line and to ask for clarification when there is a line that they do not understand.
The NRICH Project aims to enrich the mathematical experiences of all learners. To support this aim, members of the NRICH team work in a wide range of capacities, including providing professional development for teachers wishing to embed rich mathematical tasks into everyday classroom practice. More information on many of our other activities can be found here.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 3, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9374842047691345, "perplexity_flag": "middle"}
|
http://mathoverflow.net/questions/98196/multiple-linear-regression-estimation-without-full-recalc
|
## Multiple Linear Regression Estimation without full recalc
Ok, so I am running a classic linear regression where $\hat{\beta} = (X'X)^{-1}X'y$.
Due to performance issues, I would like to estimate betahat with an additional data point (x1,x2,x3,x4,...,y) without recalculating based on the whole history.
Could I do some sort of multiplication based on Xmu or std deviation of the variables? And then what is the probability that betahat is predicting the true beta from this point? This would be used so that I can judge when I need to do a full recalculation.
Thank you for any help or suggestions, Jeremy
-
## 1 Answer
There's a very large literature on updating solutions to least squares problems as new data are added. The naive formula $\hat{\beta}=(X^{T}X)^{-1}X^{T}y$ can be problematic in practice because of numerical issues with ill-conditioning. The QR factorization is typically a much better choice in practice. There are lots of papers on updating the QR factorization as data are added. See for example:
http://eprints.ma.man.ac.uk/1192/01/covered/MIMS_ep2008_111.pdf
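The QR-updating route above is the numerically robust one. Purely for illustration (this is an added sketch, not the method recommended in the answer), here is the other classical approach the question hints at: a rank-one Sherman-Morrison update of $(X'X)^{-1}$, i.e. recursive least squares. It avoids refitting on the whole history, at the cost of the conditioning issues already mentioned.

```python
# Recursive least squares: update beta-hat and P = (X'X)^{-1} when one new
# row (x_new, y_new) arrives, without touching the old data.
import numpy as np

def rls_update(beta, P, x_new, y_new):
    """One Sherman-Morrison / RLS step. x_new has shape (p,), y_new is scalar."""
    Px = P @ x_new
    k = Px / (1.0 + x_new @ Px)              # gain vector
    beta = beta + k * (y_new - x_new @ beta)
    P = P - np.outer(k, Px)
    return beta, P

# Quick check against a full refit on random data.
rng = np.random.default_rng(0)
X, y = rng.normal(size=(100, 4)), rng.normal(size=100)
P = np.linalg.inv(X.T @ X)
beta = P @ X.T @ y
x_new, y_new = rng.normal(size=4), rng.normal()
beta_upd, _ = rls_update(beta, P, x_new, y_new)

X2, y2 = np.vstack([X, x_new]), np.append(y, y_new)
beta_full = np.linalg.lstsq(X2, y2, rcond=None)[0]
print(np.allclose(beta_upd, beta_full))   # True (up to floating point)
```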
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9146888256072998, "perplexity_flag": "middle"}
|
http://mathhelpforum.com/number-theory/206021-can-there-ever-more-primes-n-2n-than-0-n.html
|
1. ## Can there EVER be more primes from n to 2n than from 0 to n?
Thank you for considering my question. It is a very simple question. But it is one I need to find the answer to for something I am working on.
The question is this:
Is it conceivably, morally, imaginably POSSIBLE for there ever to be more primes from n to 2n than from 0 to n? Like, for example, could there be more primes from 18 to 36 than from 0 to 18 (of course, we know it's not true in this particular case, but I want to generalize it)?
Yes, it seems counter-intuitive to imagine that this would be the case. Yes, it smacks in the face of our immediate reason. But can it be PROVEN? HAS it been proven? And proven by whom?
I couldn't for the life of me try to prove it myself.
It might involve assuming a zero derivative for pi(n), which might be logarithmic. But we know only that it's close to some logarithmic function n/(ln n). Not that it's in fact a logarithmic function itself (or DO we?).
This would be the last step in a very important theorem I am working on. Please keep in mind that I have no degree in anything and have no formal training in mathematics. So any gaping holes in my knowledge should be made understandable to you.
Thank you.
2. ## Re: Can there EVER be more primes from n to 2n than from 0 to n?
I don't think this is known. Your statement is roughly equivalent to asking whether $\pi(2n) - \pi(n) > \pi(n)$. The Second Hardy Littlewood Conjecture states that $\pi(x + y) \leq \pi(x) + \pi(y)$ (so let x = y = n), but it's not known whether the conjecture is true.
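As a purely empirical illustration (an added sketch, and of course not evidence either way for the conjecture), one can tabulate both counts for small $n$ and look for violations of $\pi(2n) - \pi(n) \le \pi(n)$:

```python
# Compare the number of primes in (n, 2n] with the number in [1, n].
from sympy import primepi

violations = [n for n in range(1, 2001)
              if primepi(2 * n) - primepi(n) > primepi(n)]
print(violations)   # only the trivial n = 1 is expected to appear in this range
```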
3. ## Re: Can there EVER be more primes from n to 2n than from 0 to n?
[0, 2) and [2, 4), is one corner case. Or, do you guys consider 1 to be a prime? (I heard that analysis folks do that).
Salahuddin
http://maths-on-line.blogspot.in/
4. ## Re: Can there EVER be more primes from n to 2n than from 0 to n?
Thanks, guys. Salahuddin559: even if we consider 1 a prime, it wouldn't present a problem as far as the particular thesis I am working on. So let us say we discount 1 as a prime in my question.
Originally Posted by Petek
I don't think this is known. Your statement is roughly equivalent to asking whether $\pi(2n) - \pi(n) > \pi(n)$. The Second Hardy Littlewood Conjecture states that $\pi(x + y) \leq \pi(x) + \pi(y)$ (so let x = y = n), but it's not known whether the conjecture is true.
Petek, is it known whether or not $\pi(n)$ is itself a simple logarithmic function (i.e. one not involving trigonometric functions)? What about n/(ln n) - $\pi(n)$? Would that difference be itself a logarithmic function?
If there ever were a case where $\pi(2n) - \pi(n) > \pi(n)$, then if we define $\pi(n)$ continuously, then there must be some point $\pi(x)$ where there are exactly as many primes from 0 to x as from x to 2x, even if both that number of primes and that number x are transcendental numbers. In that case, we could easily re-solve for n/(ln n) and say to ourselves, "Okay, in this case $\pi(n)$ is n times some transcendental number t divided by the logarithm of n times that transcendental number t". That is (nt)/(ln (nt)). But if we use that to solve $\pi(>x)$ for some number higher than x, we are never going to find that $\pi(n)$ is higher from (>x) to (2 (>x)) than from 0 to (>x), unless x happens to be a certain very rarely occurring type of number.
I'll have to work this out in more detail. This will be just one more obstacle to overcome.
Thank you.
5. ## Re: Can there EVER be more primes from n to 2n than from 0 to n?
Please define what you mean by a "simple logarithmic function" and a "logarithmic function" in this context. Thanks.
6. ## Re: Can there EVER be more primes from n to 2n than from 0 to n?
Can't we use the sieve of Eratosthenes here? It also gives the prime density as, on average, the product of (1 - 1/P_i). Basically, for the first n numbers, the primes used here will be additional, since those are prime for 0 .. n, but not for n .. 2n. Basically we have to prove whether this ratio can vary more than the number of terms in it (for n and for 2n)?
Salahuddin
Maths online
7. ## Re: Can there EVER be more primes from n to 2n than from 0 to n?
Originally Posted by Petek
Please define what you mean by a "simple logarithmic function" and a "logarithmic function" in this context. Thanks.
By that I mean a function that can be expressed as any formula that you wish, as long as A) it contains the value "ln (n)" or "ln (nr)" [where "r" represents any positive real number], and (b) there is no zero derivative anywhere in that formula.
Edit: Or, rather, a zero cannot be DERIVED anywhere in that formula upon a first derivative.
8. ## Re: Can there EVER be more primes from n to 2n than from 0 to n?
Salahuddin559: I just realized you were absolutely right about the case of 4, if we define pi(n) continuously. This is because, if we define pi(n) continuously, then pi(4) wouldn't be 2, but 2 and some change. Perhaps 2.5 (after all, 4 is halfway between two primes). And since pi(2) is exactly one - i.e. even where we define pi(n) continuously - then, pi(4) - pi(2) would be 1 and some change. Say, 1.5. Which is certainly larger than 1. But I guess the next question would be: is that the last time that happens?
9. ## Re: Can there EVER be more primes from n to 2n than from 0 to n?
Sorry, I don't have any other suggestions or comments.
http://mathoverflow.net/questions/19269?sort=oldest
## What are some examples of narrowly missed discoveries in the history of mathematics?
What are the examples of some mathematicians coming very close to a very promising theory or a correct proof of a big conjecture but not making or missing the last step?
-
6
Shizhuo- The original title of this post was so unhelpful, I went in and changed it myself (if you don't like what I chose, of course, you're free to change it to something else). When choosing a title, imagine yourself in the position of someone reading on the first page. If you read "Can you tell the similar phenomenon in the history of mathematics?" would you have any clue as to what the question was about other than something about the history of mathematics? – Ben Webster♦ Mar 25 2010 at 17:27
3
Thank you for changing the title! – Shizhuo Zhang Mar 25 2010 at 18:39
With Ben's title change, I at least now understand what the question intends to ask. But the actual question in the text I still don't understand. Maybe since this is CW I should just change it, but I can't think of a way to make it a significantly better question. – Theo Johnson-Freyd Mar 25 2010 at 19:24
I've removed the "motivational" part of the question, since it doesn't seem particularly helpful or correspond with the answers already given. – Victor Protsak Jun 20 2010 at 16:48
## 14 Answers
Freeman Dyson discusses a few examples of this in his article Missed Opportunities. One that I thought was particularly striking was that mathematicians could have discovered special relativity decades before Einstein just by staring at Maxwell's equations hard enough, and also on the basis that the representation theory of the Poincare group is simpler than the representation theory of the Galilean group.
-
2
I always thought that Poincare did discover special relativity years before Einstein. His discovery just wasn't noticed our understood well enough. – Johannes Hahn Mar 25 2010 at 20:07
6
My understanding is that Lorentz did not understand the point of the formal time parameter he introduced. He regarded it as a computational convenience, whereas Einstein's contribution was to think of it as "real" time. – Qiaochu Yuan Mar 25 2010 at 23:52
5
Yes, but the point is that Einstein basically had no mathematical work left to do, but he did not cite any of the results that he basically lifted from Lorentz and Poincaré. – Harry Gindi Mar 26 2010 at 0:33
6
Did he lift those results, or rediscover them independently? They're not obvious but not that deep either. – Mark Meckes Mar 26 2010 at 13:46
2
And let's not forget Minkowski in connection with special relativity. As for representation theory of the Poincare and Galilean group, that's wishful thinking in the extreme: the conceptual development of representation theory to the point that we can compare them took a lot of highly nontrivial effort, even if we tend to forget it nowadays. – Victor Protsak Jun 20 2010 at 16:09
Emil L. Post was very close to proving Gödel's incompleteness theorem, and the existence of algorithmically unsolvable problems in the early 1920s. He realized that one could enumerate all algorithms, and hence obtain an unsolvable problem by diagonalization. Moreover, the "problem" can be viewed as a computable list of questions $Q_1,Q_2,Q_3,\ldots$ for which the sequence of answers (yes or no) is not computable. It follows that there cannot be any complete formal system that proves all true sentences of the form "The answer to $Q_i$ is yes" or "The answer to $Q_i$ is no," because this would solve the unsolvable problem.
But then Post was stuck because he needed to formalize the notion of computation. He had in fact (an equivalent of) the right definition, but logicians were not ready for a definition of computation, and did not believe there was such a thing until the Turing machine concept came along in 1936. Gödel avoided this problem when he proved his theorem (1930) by proving incompleteness of a particular system (Principia Mathematica).
-
Church's $\lambda$-calculus is essentially simultaneous to Turing's machines, though, IIRC? – Mariano Suárez-Alvarez Mar 25 2010 at 4:33
Yes, it is, and of course it is "Church's thesis" that lambda-calculus captures the concept of computability. However, logicians weren't convinced that the concept of computability was formalizable until the Turing machine arrived and Gödel threw his weight behind it. – John Stillwell Mar 25 2010 at 4:50
I've had a conversation with John Conway in which he mentioned that von Neumann was very frustrated that he missed Gödel's incompleteness theorem. – Victor Protsak Jun 21 2010 at 3:03
Von Neumann certainly would have been frustrated to have missed Gödel's second incompleteness theorem -- the one about the unprovability of consistency. He pointed it out to Gödel in a letter before Gödel had published it, but Gödel still got full credit for the theorem. – John Stillwell Jun 21 2010 at 3:55
As mentioned in Categories for the Working Mathematician, Bourbaki nearly had the right definition of an adjunction 10-15 years before the proper definition was formulated by Dan Kan. In fact, Bourbaki actually proved the Special Adjoint functor theorem (cf. Categories for the Working Mathematician - notes at the end of the chapter on Adjunctions)
Edit: To explain a little more (from Mac Lane), Bourbaki had figured out most of these notions, but since they had not formulated the general adjoint functor theorem, only the special adjoint functor theorem, they restricted their definition of a "universal construction" to those adjunctions for which the SAFT holds. I find it pretty impressive that they were able to do all of this without the benefit of a formal framework for category theory.
-
Bourbaki is not an actual person. Who exactly had the right definition before the proper definition was formulated? – Rune Mar 25 2010 at 18:39
10
A useful piece of information with regards to contacting him is that Serre is not dead. – Mariano Suárez-Alvarez Mar 25 2010 at 19:51
1
@fpqc, just a little correction: Looking at the notes at the end of the section on limits in Mac Lane, I see that Bourbaki formulated some sort of solution set for the construction of universal arrows (books.google.co.il/…). At least in Mac Lane (p. 130), SAFT does not include a solution set condition (as opposed to the AFT). Also, I think that there is no reference to the AFT/SAFT in the notes at the end of the section on adjunctions (pp. 107--108). – unknown (google) Mar 25 2010 at 20:11
2
Rune- The Bourbakistes. – Ben Webster♦ Mar 25 2010 at 21:30
2
@fpqc: Thank you for your suggestion. It seems that such editing requires deleting half of the post, and I feel uncomfortable to make such a massive change. I also have some ideas of what should be added: The fact that Bourbaki was looking for something too general, and the final verdict given by Mac Lane: "good general theory does not search for the maximum generality, but for the right generality." I think that it would not be appropriate for someone other than the OP to make such a large change in the answer. [Just to put things in context, I really like your answer and I have +1'ed it] – unknown (google) Mar 25 2010 at 22:48
When I took a course in set theory, I was told that in the early 1920's, Thoralf Skolem essentially proved what is now Gödel's Completeness Theorem, but was unaware of its significance because the concept of completeness was not understood fully at the time.
-
I might be wrong but I thought that what he proved was the compactness theorem. – Tran Chieu Minh Mar 26 2010 at 10:03
I think you are right. They are equivalent, though. – Ketil Tveiten Mar 26 2010 at 16:42
2
Ketil Tveiten: I don't think they're equivalent. Compactness theorem can be stated and proved in a simple way without defining a formal proof system at all. In that case, the theorem claims that if any finite subset of a set of formulas has a model satisfying it, then there's a model satisfying the whole set. – Zsbán Ambrus Jun 20 2010 at 12:02
They are deducible from eachother. That usually means they are equivalent, no? – Ketil Tveiten Jun 21 2010 at 11:15
1
See the discussion in mathoverflow.net/questions/9309/… – Dan Petersen Jun 21 2010 at 14:35
In his final scientific work, the Two New Sciences, Galileo almost discovered Cantor's theory of infinite cardinalities ...
See: http://en.wikipedia.org/wiki/Galileo%27s_paradox
-
12
I fundamentally disagree with this statement. It's like claiming that Jules Verne almost put a man on the moon. – Franz Lemmermeyer Mar 25 2010 at 19:36
4
Perhaps you would prefer the wording that I use when I teach a first course in set theory? "In his final scientific work, the Two New Sciences, Galileo blew his chance of becoming a famous set theorist." – Simon Thomas Mar 26 2010 at 3:02
Archimedes and the Integral calculus?
-
2
How can you almost discover calculus without knowing what a function is? – Franz Lemmermeyer Jun 20 2010 at 6:45
2
@Franz Yeah, RIGHT? My question exactly! Archimedes almost discovering integration is like saying Newton almost discovered general relativity except for some erroneous base assumptions and the absence of multilinear algebra. – Andrew L Jun 20 2010 at 7:13
13
To Franz Lemmermeyer: you know, I'm not really sure Newton and Leibniz knew what functions are either. – Zsbán Ambrus Jun 20 2010 at 11:57
10
Here is what David Mumford says about a proposition of Archimedes (cf. EMS Newsletter, Dec 08): "...he [Archimedes] is evaluating a Riemann sum of $\int_0^\theta\sin(\phi)d\phi$, /.../ No historian will convince me that his idea is not that same of mine when looking at this mathematical proposition." It seems that Archimedes did indeed discover some of the basic ideas of calculus, expressed in a geometric language. – A Stasinski Jun 20 2010 at 14:19
5
I'm not a historian, but I'm pretty sure that both derivatives and integrals were studied in some form before Newton and Leibniz. Their accomplishments were 1. discovering the fundamental theorem of calculus, and 2. giving a systematic treatment of derivatives and integrals. – Andy Putman Jun 20 2010 at 18:15
Leibniz was extremely close to 'discovering' modal logic. He definitely understood the difference between intension and extension, and knew about valuations as functions from possible worlds. And I don't mean that one can "extract" this from his writings by reading hard between the lines -- it is quite explicit, and in many manuscripts. He did not, of course, have the formal tools in hand needed to formalize these ideas, but then again no one did until the work of Frege after 1880 and Russell after 1900. That's a 200 year gap!
-
1
And determinants, too! (Found in his manuscripts and published in the 19th century.) – Victor Protsak Jun 20 2010 at 16:14
Fulkerson came extremely close to a proof of the Perfect Graph Conjecture eventually proved by Lovasz. He was stuck on one relatively easy lemma, which he had apparently become convinced was false.
I don't know a reliable source for this, but I have heard that after learning that the conjecture was proven, Fulkerson went to his office and finished his own proof later that day.
-
One of the important conjectures in set theory in the 60s was that strongly inaccessible cardinals are measurable. This was disproved by Tarski in 1960. Soon after this Erdős and Hajnal realized that they were very close to Tarski's result in their paper "On the structure of set mappings" which appeared in 1958.
Another miss was in the paper of Erdős, Hajnal, and Milner ("On sets of almost disjoint subsets of a set") from 1968 where they went pretty close to discovering Silver's famous theorem on GCH at singular cardinals of uncountable cofinality.
-
Schubert came extremely close to discovering the JSJ-decomposition of 3-manifolds in his paper "Knoten und Vollringe" (1953). With a little more work, one could turn Schubert's paper into something equivalent to the JSJ-decomposition applied to knot and link complements in $S^3$. That would have allowed people to conjecture the JSJ-decomposition for 3-manifolds, around 20 years earlier than it was.
It's interesting to speculate whether or not the connection between 3-manifold theory and hyperbolic geometry would have been made much earlier, as some of the ingredients were already in place -- Seifert-Weber space, and the Gieseking manifold, but I do not think people knew finite-volume hyperbolic manifolds to be atoroidal until much later.
-
5
A decade after Schubert, Waldhausen also came close to the JSJ decomposition, in his work on graph manifolds for example. It almost seems as if Seifert back in the 1930s could have discovered the JSJ decomposition. – Allen Hatcher Jun 20 2010 at 17:33
Isaac Barrow very nearly had the fundamental theorem of calculus a generation before Newton and Leibniz, but he failed to see the relationship between the quadrature problem and the fluxion problem, despite making huge contributions to the development of the theory of limits in both.
-
In their attempts of justifying Euclid's fifth postulate, Girolamo Saccheri and Johann Heinrich Lambert proved many theorems of non-Euclidean geometry. However, they were so convinced that the fifth postulate must be true that they stopped short of actually discovering the new geometry.
-
A famous example is that John Conway, who fathered the concept of a Skein relation, didn't discover the Jones (or HOMFLYPT) polynomials. By just searching for knot invariants defined via Skein relations, he would doubtlessly have found them, and the history of low-dimensional topology would have looked quite different.
-
In the book The Scientists by John Gribbin, he mentions that, in his search for the theory of general relativity, Einstein apparently wrote down a correct equation that would have led him to correctly discovering the rest of the equations for general relativity very quickly. But, he did not see the equation for what it was and ran down the wrong path for two entire years before coming back to the correct equation. Here's the quote from the book:
"Einstein himself is often presented as the prime example of someone who did great things alone, without the need for a community. This myth was fostered, perhaps even deliberately, by those who have conspired to shape our memory of him. Many of us were told a story of a man who invented general relativity out of his own head, as an act of pure individual creation, serene in his contemplation of the absolute as the First World War raged around him.
It is a wonderful story, and it has inspired generations of us to wander with unkempt hair and no socks around shrines like Princeton and Cambridge, imagining that if we focus our thoughts on the right question we could be the next great scientific icon. But this is far from what happened. Recently my partner and I were lucky enough to be shown pages from the actual notebook in which Einstein invented general relativity, while it was being prepared for publication by a group of historians working in Berlin. As working physicists it was clear to us right away what was happening: the man was confused and lost - very lost. But he was also a very good physicist (though not, of course, in the sense of the mythical saint who could perceive truth directly). In that notebook we could see a very good physicist exercising the same skills and strategies, the mastery of which made Richard Feynman such a great physicist. Einstein knew what to do when he was lost: open his notebook and attempt some calculation that might shed some light on the problem.
So we turned the pages with anticipation. But still he gets nowhere. What does a good physicist do then? He talks with his friends. All of a sudden a name is scrawled on the page: 'Grossman!!!' It seems that his friend has told Einstein about something called the curvature tensor. This is the mathematical structure that Einstein had been seeking, and is now understood to be the key to relativity theory.
Actually I was rather pleased to see that Einstein had not been able to invent the curvature tensor on his own. Some of the books from which I had learned relativity had seemed to imply that any competent student should be able to derive the curvature tensor given the principles Einstein was working with. At the time I had had my doubts, and it was reassuring to see that the only person who had ever actually faced the problem without being able to look up the answer had not been able to solve it. Einstein had to ask a friend who knew the right mathematics.
The textbooks go on to say that once one understands the curvature tensor, one is very close to Einstein's theory of gravity. The questions Einstein is asking should lead him to invent the theory in half a page. There are only two steps to take, and one can see from this notebook that Einstein has all the ingredients. But could he do it? Apparently not. He starts out promisingly, then he makes a mistake. To explain why his mistake is not a mistake he invents a very clever argument. With falling hearts, we, reading the notebook, recognize his argument as one that was held up to us as an example of how not to think about the problem. As good students of the subject we know that the argument being used by Einstein is not only wrong but absurd, but no one told us it was Einstein himself who invented it. By the end of the notebook he has convinced himself of the truth of a theory that we, with more experience of this kind of stuff than he or anyone could have had at the time, can see is not even mathematically consistent. Still, he convinced himself and several others of its promise, and for the next two years they pursued this wrong theory. Actually the right equation was written down, almost accidentally, on one page of the notebook we looked at. But Einstein failed to recognize it for what it was, and only after following a false trail for two years did he find his way back to it. When he did, it was questions his good friends asked him that finally made him see where he had gone wrong."
-
4
This is very interesting but not really a near miss. You explain that general relativity is in fact a near near-miss. – Roland Bacher Jun 21 2010 at 16:09
http://mathhelpforum.com/algebra/21678-help-please-few-problems.html
# Thread:
1. ## Help please, few problems
1.) $\frac{4k+2}{5} > 2$, i.e. 4k+2 divided by 5 is greater than 2. How do I solve that?
2.) A person's body mass index is given by the expression $W/h^2$, where W is the weight in kilogrammes and h is the height in metres. A normal BMI is at least 18.5 but less than 25. If a ballplayer is 2 metres tall, then what can his weight be if his BMI is normal? Hint: since you know his height, first calculate his $h^2$. Now you have an expression for his BMI with only one variable in it; this is supposed to be at least 18.5 but less than 25. Any help on this would be great too.
Thanks in advance, You guys rock!
2. Originally Posted by MathMack
1.) $\frac{4k+2}{5} > 2$, i.e. 4k+2 divided by 5 is greater than 2. How do I solve that?
2.) A person's body mass index is given by the expression $W/h^2$, where W is the weight in kilogrammes and h is the height in metres. A normal BMI is at least 18.5 but less than 25. If a ballplayer is 2 metres tall, then what can his weight be if his BMI is normal? Hint: since you know his height, first calculate his $h^2$. Now you have an expression for his BMI with only one variable in it; this is supposed to be at least 18.5 but less than 25. Any help on this would be great too.
Thanks in advance, You guys rock!
Question 2: First of all you know that his height is 2m, and his BMI equals his weight divided by height squared. You also know that his BMI has to be between 18.5 and 25.
$W/2^2 =BMI$
$W/4 =BMI$
Next you have to rearrange the equation to find his weight:
$W= 4*BMI$
Can you work it out from here?
3. No, I'm really lost. Could you help me out more? Appreciate it! Thanks for your help
4. Let W equal weight, and H equal height.
$W/H^2 =BMI$
$W/2^2 =BMI$
Now rearrange the question
$H^2*BMI =W$
$4*BMI =W$
The lowest his BMI can be is 18.5:
$4*18.5 =W$
$4*18.5 =74$
So, the lowest weight he can be is 74kg.
The highest his BMI can be is 25:
$4*25 =W$
$4*25 =100$
So the highest weight he can be is 100kg.
He must weigh between 74kg and 100kg.
Does this make sense? I'm not really good at explaining things, sorry. Maybe someone else can explain it better.
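Putting the two boundary computations into one line (reading the problem's "at least 18.5 but less than 25" as $18.5 \leq BMI < 25$):
$18.5 \leq \frac{W}{4} < 25 \quad\Longleftrightarrow\quad 74 \leq W < 100,$
so strictly speaking the weight can be anything from 74 kg up to, but not including, 100 kg.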
5. Originally Posted by MathMack
1.) $\frac{4k+2}{5} > 2$, i.e. 4k+2 divided by 5 is greater than 2. How do I solve that?
...
Hello,
I'll show you how to get the solution step by step:
$\frac{4k+2}{5}>2$ Multiply by 5. Since 5 is positive the ">"-sign doesn't change.
$4k+2 > 10$ Subtract 2 at both sides
$4k > 8$ Divide by 4. Since 4 is positive the ">"-sign doesn't change
$\boxed{k>2}$ That's it!
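As a quick check of the answer against the original inequality: at the boundary $k = 2$ we get $\frac{4\cdot 2 + 2}{5} = \frac{10}{5} = 2$, which is not greater than 2, while for any $k > 2$ we get $\frac{4k+2}{5} > \frac{10}{5} = 2$. So the solution set is exactly $k > 2$.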
http://cms.math.ca/Events/summer12/abs/nt.html
2012 CMS Summer Meeting
Regina Inn and Ramada Hotels (Regina~Saskatchewan), June 2 - 4, 2012 www.cms.math.ca//Events/summer12
Number Theory
Org: Mark Bauer (Calgary), Richard McIntosh (Regina) and Eric Roettger (Mount Royal)
MICHAEL BENNETT, University of British Columbia
A problem of Erdos and Graham revisited [PDF]
We construct, given an integer $r \geq 5$, an infinite family of $r$ non-overlapping blocks of five consecutive integers with the property that their product is always a perfect square. In this particular situation, this answers a question of Erdős and Graham in the negative. We survey more general results in the literature and sketch what we hope are promising directions. This is joint work with Ronald van Luijk.
DAN BROWN, Certicom (subsidiary of Research in Motion)
Cryptography in Diophantine Cloak [PDF]
Some important cryptographic problems can be easily expressed as Diophantine problems: quite simply for public-key cryptography, or using the notion of straight line program for symmetric-key cryptography. This talk will review some theorems about solving the factoring and Rivest--Shamir--Adleman (RSA) problems using a straight line program. This talk will also relate the security of the U.S. Federal Information Processing Standard (FIPS) 186-3 Digital Signature Algorithm (DSA) to the well-known discrete logarithm problem, and to a not-so-well-known Diophantine problem: the one-up problem.
PAUL BUCKINGHAM, University of Alberta/PIMS
Connecting homomorphisms associated to Tate sequences [PDF]
The Tate sequence is the result of a unification of local and global class field theory, and describes the cohomology of the $S$-units in a Galois extension of number fields. In the traditional construction, $S$ was assumed to be large enough that the $S$-class-group was trivial. A refinement of Ritter and Weiss removed that assumption, so that their Tate sequence involved both the $S$-units and the $S$-class-group, giving rise to connecting homomorphisms not previously studied. We will provide the first descriptions of some of these connecting homomorphisms, and discuss some consequences.
MICHAEL COONS, University of Waterloo and Fields Institute
The rational-transcendental dichotomy of Mahler functions [PDF]
In the late 1920s and early 1930s, Mahler wrote a series of articles concerning the algebraic character of values of power series which satisfy a certain type of functional equation; these functional equations (and functions) are now called Mahler-type functional equations (and Mahler functions). He was able to show that if a Mahler function $f(z)$ is transcendental then the number $f(a)$ is transcendental for all but finitely many nonzero algebraic numbers $a$ in the radius of convergence of $f(z)$. Of course this result relies on the transcendence of a series, which may itself be difficult to ascertain. Some decades after Mahler's original investigations, Nishioka showed that a Mahler function was either transcendental or rational. Thus to show transcendence it is enough to show irrationality. In this talk, we will give a new (and much simpler) proof of Nishioka's theorem and discuss some refinements and generalizations.
KARL DILCHER, Dalhousie University
Congruences for sums of reciprocals [PDF]
The sums of reciprocals modulo $p$ over integers in $N$ subintervals of equal length of the interval $1\leq j\leq p-1$ are closely related to the Fermat quotients, and they have been studied in connection with the classical theory of Fermat's last theorem. In this talk we present new classes of linear relations between these sums for both even and odd $N$, and it is shown that for each even $N$ there are at least $\lfloor N/4\rfloor$ linearly independent relations. (Joint work with Ladislav Skula).
MATTHEW GREENBERG, Calgary
PATRICK INGRAM, Colorado State University
The arithmetic of Henon maps [PDF]
We will survey various recent results in the arithmetic dynamics of Henon maps.
MICHAEL JACOBSON, University of Calgary
Tabulating Class Groups of Real Quadratic Fields [PDF]
Class groups of real quadratic fields have been studied since the time of Gauss, and in modern times have been used in applications such as integer factorization and public-key cryptography. Tables of class groups are used to provide valuable numerical evidence in support of a number of unproven heuristics and conjectures, including those due to Cohen and Lenstra. In this talk, we discuss recent progress in our efforts to extend existing, unconditionally correct tables of real quadratic fields. This includes incorporating ideas of Sutherland for computing orders of elements in a group, as well as constructing an unconditional verification algorithm using the trace formula of Maass forms based on ideas of Booker.
This is joint work with C. Bian, A. Booker, A. Shallue, and A. Strömbergsson.
DAVID JAO, University of Waterloo
Isogeny-based Cryptography [PDF]
Cryptosystems based on isogenies between elliptic curves have recently been proposed as plausible alternatives to traditional public-key cryptosystems. These systems are of particular interest because they are conjectured to be resistant to attacks by quantum computers. We survey the existing constructions of isogeny-based public-key cryptosystems and describe the fastest known attacks against them. In the case of ordinary curves, we present an algorithm for evaluating isogenies, whose running time is provably subexponential under GRH. For supersingular curves, we propose a public-key cryptosystem based on pairs of isogenies over a curve with disjoint kernels, having performance competitive with standard cryptosystems, and describe our recent performance optimizations.
Joint work with A. Childs, L. De Feo, J. Plût, and V. Soukharev.
KEITH JOHNSON, Dalhousie University
Integer valued polynomials on noncommutative rings [PDF]
Rings of polynomials taking integral values on specified sets have been of interest to algebraists and number theorists at least since the work of Polya and Ostrowski in 1919. In the past this has usually been restricted to subsets of commutative rings, particularly rings of algebraic integers. We will discuss some examples involving noncommutative rings and in particular will give a description of the ring of rational polynomials taking integral values on $n \times n$ lower triangular matrices.
RICHARD MCINTOSH, University of Regina
p-adic equations for power sums [PDF]
For odd primes $p$ and positive integers $k$, define $S_k=\sum_{r=1}^{p-1}r^{-k}$. Applying the $p$-adic logarithm to the identity $\prod_{r=1}^{p-1}(1-{p\over r})=1$, we obtain $\sum_{k=1}^\infty p^k{S_k\over k}=0$, where the convergence is $p$-adic. (This means that the equation holds modulo $p^m$ for arbitrarily large $m$.) In this talk I will give some other $p$-adic equations for the power sums $S_k$. For example, $\sum_{k=1}^\infty p^k(-1)^{k-1}B_{k-1}S_k=0$, where $B_n$ is the $n$th Bernoulli number.
RENATE SCHEIDLER, University of Calgary
Cubic Function Field Tabulation and 3-Ranks of Hyperelliptic Curves [PDF]
We present an algorithm for tabulating all cubic function fields of square-free discriminant $D(x) \in \mathbb{F}_q(x)$ up to a given discriminant degree bound $B$ so that the hyperelliptic curve $y^2 = -3D(x)$ has only one infinite place. Our method is an extension of Belabas' technique for tabulating cubic number fields and requires $O(B^4 q^B)$ operations in $\mathbb{F}_q$ as $B \rightarrow \infty$. The main ingredient is a function field analogue of the Davenport-Heilbronn correspondence between triples of $\mathbb{F}_q(x)$-conjugate cubic function fields and certain equivalence classes of binary cubic forms over $\mathbb{F}_q(x)$, described via reduced representatives.
Our method additionally finds for any $r \in \mathbb{Z}^{\geq 0}$ all hyperelliptic curves $y^2 = -3D(x)$ whose class group has 3-rank $r$. For $q \equiv -1 \pmod{3}$, our numerical data largely supports the predicted heuristics of Friedman-Washington and partial results on the distribution of the counts of such curves due to Ellenberg-Venkatesh-Westerland. For $q \equiv 1 \pmod{3}$, our data seems to agree with a result due to Achter as well as recent conjectures due to Garton that incorporate into the Friedmann-Washington heuristics a correction factor first proposed by Malle for the number field scenario.
CAMERON STEWART, University of Waterloo
Well spaced integers generated by an infinite set of primes [PDF]
In this talk we discuss an old question of Wintner and its resolution by Tijdeman as well as recent developments due to the speaker and Jeongsoo Kim. We shall prove that there is an infinite set of prime numbers with the property that the sequence of positive integers made up from the set is well spaced. This is joint work with Jeongsoo Kim.
COLIN WEIR, University of Calgary
Decomposing the Jacobians of Hermitian Curves [PDF]
Hermitian curves are examples of maximal curves - they contain as many points as possible when considered over $\mathbb{F}_{q^2}$. As such, they are well studied objects. For example, it is known that the Jacobian of a Hermitian curve is isogenous to a product of super-singular elliptic curves. However, it is not known in general how their Jacobians decompose up to isomorphism (instead of isogeny). We explore this problem by instead considering the decomposition of the p-torsion group scheme of their Jacobians. This approach allows us to translate this problem into one that is purely combinatoric. This gives rise to an explicit decomposition with several interesting consequences. This is joint work with Rachel Pries.
HUGH WILLIAMS, University of Calgary
Compact Representations of Certain Algebraic Integers [PDF]
Suppose we have a real quadratic number field of discriminant d. If we have a principal ideal I, it usually requires an exponential (in log d) amount of time to write out a generator of I in the conventional way. However, there exists a representation of this generator, called a compact representation, which can be written out in polynomial time. In this talk I discuss algorithms for finding compact representations of such a generator, when we are given an approximate value of the logarithm of the absolute value of it and an integral basis of I. I go on to point out several improvements that have been made to algorithms used in the past.
http://www.physicsforums.com/showthread.php?t=454639
Physics Forums
## Prove using the definition of a limit, Please help! :)
1. The problem statement, all variables and given/known data
Prove using only the definition of a limit, that the sequence:
$$\frac{n}{(n+1)^{1/2}} - \frac{n}{(n+2)^{1/2}}$$ converges.
2. Relevant equations
Let E>0 and choose a special N (depending on E) such that whenever n>N the difference from the limit is less than E...
3. The attempt at a solution
I know that the limit is 0, but I'm having trouble finding the special N. The algebra for this is horrible and I've spent a long while working on it. Please help. It will be greatly appreciated.
hint #1: show that $$\sqrt{\frac{n}{n+1}}-\sqrt{\frac{n}{n+2}}<\sqrt{\frac{n}{n+1}-\frac{n}{n+2}}$$ and use this fact to simplify what needs to be less than E. That is, find N such that $$\sqrt{\frac{n}{n+1}-\frac{n}{n+2}}$$ is less than E for all n > N and you've got it.
But I don't have a square root in the numerator; it's just in the denominator. I'm slightly confused, but will keep looking at it - in case it was me. :( Here's what I did: I have that my above sequence is equal to $$\frac{3n^2}{n^2+3n+2}$$ < $$\frac{3n^2}{n^2+3n}$$ < E and my N = $$\frac{2E}{3-E}$$
## Prove using the definition of a limit, Please help! :)
Quote by silvermane But I don't have a square root in the numerator; it's just in the denominator.
Sorry. i misread it.
I'm slightly confused, but will keep looking at it - in case it was me. :( Here's what I did: I have that my above sequence is equal to $$\frac{3n^2}{n^2+3n+2}$$ < $$\frac{3n^2}{n^2+3n}$$ < E and my N = $$\frac{2E}{3-E}$$
So you've got it now?
Quote by pellman Sorry. i misread it. So you've got it now?
Well, I want to make sure that what I'm doing is correct. I've gone to my professor's office, and he wasn't very helpful to me. (there's a language barrier)
Either way, does my N make sense and is mathematically correct?
I wanted to get an N without squaring it as well, so I don't think what I've done above is allowed to be done.
Thank you so much for your help so far :)
Well, actually no. How did you get $$\frac{3n^2}{n^2+3n+2}$$ ?
I squared everything and then simplified. I've come to realize, however, that it's not something I should have done - I wasn't thinking clearly at the time. I'm stuck when it comes down to the algebra: I know the limit is 0, so I just need to simplify my expression... I can then find a comparison to get a good N for my proof. I'm starting to become very frustrated with this problem. :( So far, I have this: $$\frac{n}{(n+1)^{1/2}}$$ - $$\frac{n}{(n+2)^{1/2}}$$ = $$\frac{(n^3 + n^2)^{1/2} - (n^3 + 2n^2)^{1/2}}{(n^2 + 3n +2)^{1/2}}$$ Could I say that $$\frac{(n^3 + n^2)^{1/2} - (n^3 + 2n^2)^{1/2}}{(n^2 + 3n +2)^{1/2}}$$ < $$\frac{(n^3 + n^2)^{1/2}}{(n^2 + 3n +2)^{1/2}}$$, then $$\frac{(n^3 + n^2)^{1/2}}{(n^2 + 3n +2)^{1/2}}$$ < $$\frac{n^{3/2}}{(3n)^{1/2}}$$ = $$\frac{n}{3^{1/2}}$$ But I think this won't work with the definition of a limit. Someone please help! Any hint would do, I don't want an answer. :(
lol. I don't think this converges. the numerator goes to infinity faster than the denominator. Numerator is ~n^1. Denominator is ~n^(1/2) .
If it does diverge, could I show that using the definition of a limit and reach a contradiction? I've been working on this for days, and the way the question is worded, it leads the student to think that the series converges. This is just a problem to help me prepare for the final, since it is a problem in the practice final, and it models the true quite closely. If you think my math above is correct, then it must diverge under the definition of the limit. But then I thought about it some more, and was wondering if this was true: $$\frac{(n^3 + n^2)^{1/2} - (n^3 +2n^2)^{1/2}}{(n^2 + 3n + 2)^{1/2}}$$ < $$\frac{(n^2)^{1/2} - (2n^2)^{1/2}}{(n^2 + 3n)^{1/2}}$$ = $$\frac{n(1-\sqrt{2})}{(n^2 + 3n)^{1/2}}$$ I'm really trying my best to work this out, but I can't see what I'm doing wrong here. If anyone could put me in the right direction, they would be a lifesaver. Thank you for helping me this much already pellman!
I think it is simpler than that. Let $$a_n = \frac{n}{\sqrt{n+1}}-\frac{n}{\sqrt{n+2}}$$ It should be straightforward to show that $$a_{n+1} > a_n$$ for all n greater than some N. I bet N is rather low, probably 1 or 2. This will require some rather tedious algebra or some clever shortcuts, if there are any. That's what I'd try first. It might be difficult though. I don't have time to work through it myself.
You are overcomplicating things.
$$a_n = \frac{n}{\sqrt{n+1}}-\frac{n}{\sqrt{n+2}}$$
$$a_n = \frac{n \left( \sqrt {n+2} - \sqrt{n+1}\right)}{\sqrt{n+1} \sqrt{n+2}}$$
Rationalize to get
$$a_n = \frac{n}{\sqrt{n+1} \sqrt{n+2}\left( \sqrt {n+2} + \sqrt{n+1}\right)}$$
From here it is straightforward to show that
$$a_n < \frac{1}{2 \sqrt{n}}$$
So it is easy to show the limit is zero.
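To fill in the "easy to show" step (a sketch using only the rationalized form above): since $$\sqrt{n+1}\,\sqrt{n+2} > n \qquad \text{and} \qquad \sqrt{n+2}+\sqrt{n+1} > 2\sqrt{n},$$ we get $$a_n < \frac{n}{n\cdot 2\sqrt{n}} = \frac{1}{2\sqrt{n}},$$ and so, given E > 0, taking N = 1/(4E^2) works: for every n > N, $$0 < a_n < \frac{1}{2\sqrt{n}} < E.$$ (Any larger N, such as the 1/E^2 mentioned later in the thread, works as well.)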
Sheesh. I'm getting old.
Quote by ╔(σ_σ)╝ You are over complicating things. $$a_n = \frac{n}{\sqrt{n+1}}-\frac{n}{\sqrt{n+2}}$$ $$a_n = \frac{n \left( \sqrt {n+2} - \sqrt{n+1}\right)}{\sqrt{n+1} \sqrt{n+2}}$$ Rationalize to get $$a_n = \frac{n}{\sqrt{n+1} \sqrt{n+2}\left( \sqrt {n+2} + \sqrt{n+1}\right)}$$ From here it is straight forward to show that $$a_n < \frac{1}{2 \sqrt{n}}$$ So it is easy to show the limit is zero.
lol I actually did something like that, and worked it out last night. I ended up getting 1/E^2 :)
Combining and THEN taking the conjugate was what needed to be done. Thank you so much for your help! I feel very prepared for my final now
PS: you're not old, you're awesome :) Heck, I couldn't figure it out, and MANY others I went to didn't even know where to start. I'm very happy that I was able to finally get it, and I think you should be too. :) You deserve a pat on the back!
http://mathoverflow.net/revisions/55279/list
## Return to Answer
4 added 292 characters in body
I'm a bit confused by the quoted wikipedia entry, because the category of reduced rings also has coproducts (take the tensor product, and then pass to the quotient by the nilradical), and hence the category of reduced schemes, the category of varieties over a field, and so on, all admit fibre products. [Added: See Jim Borger's series of comments below for a discussion of why, nevertheless, there may be a purely categorical description of the sense in which constructions in the category of reduced rings can be "wrong", while constructions in the category of all rings are the right ones.]
So the answer to the question of why we need nilpotents is not that it is necessary for the existence of fibre products.
Grothendieck introduced nilpotents for many reasons, a number of which are discussed in the other answers: to get correct counting in degenerate situations, it is typically necessary to allow nilpotents; they are also the bedrock of deformation theory and other applications of analytic ideas in algebraic geometry.
It might be helpful to recall another motivation, which forms a significant part of Grothendieck's overall strategy for studying algebraic geometry: Suppose that we want to prove a property about a morphism $f: X \to S$. A typical approach is to first show that it is a local property, in some sense, so that we can reduce to the case when $S = $ Spec $\mathcal O_{S,s}$ for some point $s \in S$, and hence assume that $S$ is local; and then to use a flat descent argument to pass from $\mathcal O_{S,s}$ to its completion, and thus assume that $S$ is the Spec of a complete local ring. We then write this complete local ring as the projective limit of its quotients by the powers of the maximal ideal, and so reduce to the case when $S$ is the Spec of an Artinian local ring. Since such a Spec has a single point, we can then hope to reduce to checking our property on the fibre over this one point, which reduces us to the case when $S$ is the Spec of a field.
This is a powerful method, which absolutely requires us to be able to do geometry over an Artinian ring (and hence requires us to allow nilpotents). It comes up in lots of places, e.g. in establishing basic properties of abelian schemes, by reducing to the abelian variety case. See the answers to this question for some examples.
3 deleted 1 characters in body
I'm a bit confused by the quoted wikipedia entry, because the category of reduced rings also has coproducts (take the tensor product, and then pass to the quotient by the nilradical), and hence the category of reduced schemes, the category of varieties over a field, and so on, all admit fibre produts.
So the answer to the question of why we need nilpotents is not that it is necessary for the existence of fibre products.
Grothendieck introduced nilpotents for many reasons, a number of which are discussed in the other answers: to get correct counting in degenerate situations, it is typically necessary to allow nilpotents; they are also the bedrock of deformation theory and other applications of analytic ideas in algebraic geometry.
It might be helpful to recall another motivation, which forms a significant part of Grothendieck's overall strategy for studying algebraic geometry: Suppose that we want to prove a property about a morphism $f: X \to S$. A typical approach is to first show that it is a local property, in some sense, so that we can reduce to the case when Spec $\mathcal O_{S,s}$ for some point $s \in S$, and hence assume that $S$ is local; and then to use a flat descent argument to pass from $\mathcal O_{S,s}$ to its completion, and thus assume that $S$ is the Spec of a complete local ring. We then write this complete local ring as the projective limit of the quotients by its maximal ideals, and so reduce to the case when $S$ is the Spec of an Artinian local ring. Since such a Spec has a single point, we can then hope to reduce to checking our property on the fibre over this one point, which reduces us to the case when $S$ is the Spec of a field.
This is a powerful method, which absolutely requires us to be able to do geometry over an Artinian ring (and hence requires us to allow nilpotents). It comes up in lots of places, e.g. in establishing basic properties of abelian schemes, by reducing to the abelian variety case. See the anwers to this question for some examples.
2 added 10 characters in body
I'm a bit confused by the quoted wikipedia entry, because the category of reduced rings also has coproducts (take the tensor product, and then pass to the quotient by the nilradical), and hence the category of reduced schemes, the category of varieties over a field, and so on, all admit fibre produts.
So the answer to the question of why we need nilpotents is not that it is necessary for the existence of fibre products.
Grothendieck introduced nilpotents for many reasons, a number of which are discussed in the other answers: to get correct counting in degenerate situations, it is typically necessary to allow nilpotents; they are also the bedrock of deformation theory and other applications of analytic ideas in algebraic geometry.
It might be helpful to recall another motivation, which forms a significant part of Grothendieck's overall strategy for studying algebraic geometry: Suppose that we want to prove a property about a morphism $f: X \to S$. A typical approach is to first show that it is a local property, in some sense, so that we can reduce to the case when Spec $\mathcal O_{S,s}$ for some point $s \in S$, and hence assume that $S$ is local; and then to use a flat desecent argument to pass from $\mathcal O_{S,s}$ to its completion, and thus assume that $S$ is the Spec of a complete local ring. We then write this complete local ring as the projective limit of the quotients by its maximal ideals, and so reduce to the case when $S$ is the Spec of an Artinian local ring. Since such a Spec has a single point, we can then hope to reduce to checking our property on the fibre over this one point, which reduces us to the case when $S$ is the Spec of a field.
This is a powerful method, which absolutely requires us to be able to do geometry over an Artinian ring (and hence requires us to allow nilpotents). It comes up in lots of places, e.g. in establishing basic properties of abelian schemes, by reducing to the abelian variety case. See the anwers to this question for some examples.
1
I'm a bit confused by the quoted wikipedia entry, because the category of reduced rings also has coproducts (take the tensor product, and then pass to the quotient by the nilradical), the category of reduced schemes, the category of varieties over a field, and so on, all admit fibre produts.
So the answer to the question of why we need nilpotents is not that it is necessary for the existence of fibre products.
Grothendieck introduced nilpotents for many reasons, a number of which are discussed in the other answers: to get correct counting in degenerate situations, it is typically necessary to allow nilpotents; they are also the bedrock of deformation theory and other applications of analytic ideas in algebraic geometry.
It might be helpful to recall another motivation, which forms a significant part of Grothendieck's overall strategy for studying algebraic geometry: Suppose that we want to prove a property about a morphism $f: X \to S$. A typical approach is to first show that it is a local property, in some sense, so that we can reduce to the case when Spec $\mathcal O_{S,s}$ for some point $s \in S$, and hence assume that $S$ is local; and then to use a flat desecent argument to pass from $\mathcal O_{S,s}$ to its completion, and thus assume that $S$ is the Spec of a complete local ring. We then write this complete local ring as the projective limit of the quotients by its maximal ideals, and so reduce to the case when $S$ is the Spec of an Artinian local ring. Since such a Spec has a single point, we can then hope to reduce to checking our property on the fibre over this one point, which reduces us to the case when $S$ is the Spec of a field.
This is a powerful method, which absolutely requires us to be able to do geometry over an Artinian ring (and hence requires us to allow nilpotents). It comes up in lots of places, e.g. in establishing basic properties of abelian schemes, by reducing to the abelian variety case. See the anwers to this question for some examples.
http://mathhelpforum.com/differential-geometry/185435-question-monotone-sequences.html
# Thread:
1. ## Question on monotone sequences
Let A be an infinite subset of the real numbers that is bounded above and let u=supA. Show that there exists an increasing sequence $(x_n)$ with $x_n \in A \forall n \in \mathbb{N}$ such that $u=lim(x_n)$.
The only way I can think of starting this question is to form some sort of sequence involving u, but can't think of how to do this. Help?
2. ## Re: Question on monotone sequences
Originally Posted by worc3247
Let A be an infinite subset of the real numbers that is bounded above and let u=supA. Show that there exists an increasing sequence $(x_n)$ with $x_n \in A \forall n \in \mathbb{N}$ such that $u=lim(x_n)$.
The only way I can think of starting this question is to form some sort of sequence involving u, but can't think of how to do this. Help?
Hint:
Use recursion.
Maybe I'm mistaken here, but here it goes:
Let us defined the following sequence:
$x_{n+1}=\sqrt{u+x_n}$, $x_1=\sqrt{u}\in A$ (because $A\neq\emptyset$).
Now, try to prove $x_n$ is monotonic increasing and bounded; and in the last stage find the limit.
3. ## Re: Question on monotone sequences
Originally Posted by Also sprach Zarathustra
$x_{n+1}=\sqrt{u+x_n}$, $x_1=\sqrt{u}\inA$( it because $A\neq\emptyset$).
How do you know that $x_n\in A, \forall n$?
@worc3247: use the definition of $\sup$: for all $\varepsilon>0$ we can find $x_{\varepsilon}\in A$ such that $u-\varepsilon<x_{\varepsilon}$. Now take $\varepsilon =\frac 1{n+1}$ for $n\in\mathbb{N}$.
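To spell out how this hint produces the required sequence (a sketch; "increasing" is meant here in the non-decreasing sense discussed below): for each $n$ pick $y_n \in A$ with $u - \frac{1}{n} < y_n \leq u$, and set
$x_n = \max\{y_1, \ldots, y_n\}.$
Each $x_n$ lies in $A$ (it is the maximum of finitely many elements of $A$), the sequence $(x_n)$ is non-decreasing by construction, and from $u - \frac{1}{n} < y_n \leq x_n \leq u$ we get $x_n \to u$. As noted further down, picking the $y_n$ uses the axiom of countable choice.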
4. ## Re: Question on monotone sequences
Originally Posted by worc3247
Let A be an infinite subset of the real numbers that is bounded above and let u=supA. Show that there exists an increasing sequence $(x_n)$ with $x_n \in A \forall n \in \mathbb{N}$ such that $u=lim(x_n)$.
That statement is false.
Let $A=[0,1]\cup\{2\}$.
There is no increasing (as opposed to non-decreasing) sequence in $A$ converging to $2$.
(I do not consider a constant sequence as increasing.)
5. ## Re: Question on monotone sequences
Well, it is. Otherwise, we have to say strictly increasing.
By the way, girdav's solution requires the axiom of countable choice. Maybe there is a way without it... I don't see it (except for a set with an ending interval).
6. ## Re: Question on monotone sequences
Originally Posted by pece
Well, it is. Otherwise, we have to say strictly increasing.
By the way, girdav's solution requires the axiom of countable choice. Maybe there is a way without it... I don't see it (except for a set with an ending interval).
I beg your pardon. There is a whole school of analysis for which that statement is false. There are just four types of monotone sequences: increasing, decreasing, non-increasing, or non-decreasing. When we see the word 'increasing' it means what it says: the sequence increases. If two consecutive terms are equal then there is no increase, is there? So that sequence is possibly non-decreasing but it is certainly not increasing.
7. ## Re: Question on monotone sequences
Sorry. For us French, increasing means non-decreasing for you. I translated it too literally.
In that case, it's clearly non-decreasing that would be used.
8. ## Re: Question on monotone sequences
The crucial point, as Plato said, is that you cannot prove the original statement- it is false.
If we are given that the sup of the set, u, is not in the set, then it is true. For any n > 0, there exists a member of the set in the interval from u - 1/n to u; otherwise u - 1/n would be an upper bound on the set. Call that number $x_n$. The (non-decreasing) sequence $\{x_n\}$ converges to u. A strictly increasing sequence can be derived from that sequence.
(We need u not in the set since otherwise, as in Plato's example, $x_n$ might be u itself.)
I have seen textbooks in English that use "non-decreasing; increasing" and others that use "increasing; strictly increasing". Normally which convention is being used is made clear.
http://physics.stackexchange.com/questions/tagged/navier-stokes
# Tagged Questions
The Navier-Stokes equations describe fluid flows in continuum mechanics.
1answer
37 views
### boundary conditions for liquid with surface tension
so one uses equations of motion to describe liquids (e.g. Navier–Stokes equations). These are equations for $\vec{v}(\vec{r},t)$ with boundary conditions on the surface $S$ of the liquid (e.g. ...
0answers
25 views
### What is problem that is interest to clay math institute about Navier Stokes equations? [duplicate]
Why are the Navier-Stokes equations of interest to the Clay Mathematics Institute? What is the question posed by the Clay Institute? My question is clear; what is not ...
1answer
29 views
### Additional boundary conditions for inclined flow?
I am solving an inclined flow problem, and am stuck. The problem is to find the volumetric flow rate of inclined flow in a square channel. Once I have the velocity profile, I can just integrate over ...
2answers
175 views
### General procedure for solving fluid flow problems
Could someone help me devise a short series of steps for solving an arbitrary fluid flow problem? Often the most difficult part of these problems is just figuring out what path to take in solving ...
1answer
172 views
### What type of PDE are Navier-Stokes equations, and Schrödinger equation?
What type of PDE are Navier-Stokes equations, and Schrödinger equation? I mean, are they parabolic, hyperbolic, elliptic PDEs?
1answer
280 views
### Exact Solutions to the Navier-Stokes Equations
There are a number of exact solutions to the Navier-Stokes equations. How many exact solutions are currently known? Is it possible to enumerate all of the solutions to the Navier-Stokes equations?
3answers
106 views
### Navier-Stokes system
I have to study this system, whose name is Navier-Stokes. Can you please explain what $p$, $u$ and $(u \cdot \nabla)u$ mean? What do they represent in reality? Tell me please, how should I read the ...
1answer
55 views
### What is the term for heat generation by a flowing fluid?
I would like to know more about the heat distribution over time in a flowing liquid. To this end, I consider the Navier-Stokes equation (where the coefficients may be temperature dependent) and the ...
1answer
211 views
### What is the mystery of turbulence?
One of the great unsolved problems in physics is turbulence but I'm not too clear what the mystery is. Does it mean that the Navier-Stokes equations don't have any turbulent phenomena even if we solve ...
3answers
119 views
### Unclear how heat interacts with Navier Stokes
I am playing around with a Navier-Stokes solver and I'm having trouble introducing heat. Am I right in thinking this would be introduced in the ${\bf f}$ term of ${\partial{\bf u}\over\partial t} =$ ...
0answers
43 views
### Is sonoluminescence relevant to the behaviour of Navier-Stokes (or converse)?
More precisely, could Sonoluminescence be a singularity of Navier-Stokes(NS)? Is there some other connection that might be interesting, or is it completely irrelevant? Wiki page mentions NS, but says ...
1answer
179 views
### Gravity duals to Navier Stokes and interpretation of non linear contributions
I have been reading the paper The Incompressible Non-Relativistic Navier-Stokes Equation from Gravity. In it they state, "An instability, if it occurs, must necessarily break a symmetry ... ...
2answers
136 views
### Where can I check a solution to 3D Navier Stokes?
A few years ago I developed a solution to the Navier-Stokes equations and as of yet have not been able to locate a similar version of the solution. I would like to know if anyone has seen a solution ...
0answers
90 views
### Is there a nice way to write Navier-Stokes equations in exterior calculus
I'm considering to study some high-dimensional Navier-Stokes equations. One problem is to do write the viscous equation for vorticity, helicity and other conserved quantities. I think it might be ...
3answers
418 views
### Why are Navier-Stokes equations needed?
Can't we picture air or water molecules individually? Then, why are Navier-Stokes equations needed, after all? Can't we just aggregate individual ones? Or is it computationally difficult, or ...
1answer
1k views
### Boussinesq approximation for the Navier Stokes' equation - discrepancy
In the Navier Stokes' equation: $\rho_0 \left( \frac{\partial v}{\partial t} + v \cdot \nabla v\right) = -\nabla p + \mu \nabla^2 v + \hat{f}$ I included the temperature variation of density as ...
1answer
295 views
### About Turbulence modeling
There is a paper titled "Lagrangian/Hamiltonian formalism for description of Navier-Stokes fluids" in PRL. After reading the paper, the question arises how far can we investigate turbulence with this ...
1answer
125 views
### Validity of the Multi-Species Navier-Stokes Equations for real gases
I'm wondering what are the validity limits of Multi-Species Navier-Stokes equations. I'm aware of the limit for rarefied gases. But is there any new limit that arises in the context of real gases? I ...
0answers
71 views
### Energies decay in 3D homogeneous rotating turbulence
In three-dimensional rotating homogeneous turbulence governed by the hyper-viscous Navier-Stokes equation with an additional Coriolis force in a three-periodic setup: \begin{equation} ...
1answer
250 views
### Why does a transformation to a rotating reference frame NOT break temporal scale invariance?
Naively, I thought that transforming a scale invariant equation (such as the Navier-Stokes equations for example) to a rotating reference frame (for example the rotating earth) would break the ...
1answer
444 views
### Is there an analytical solution for fluid flow in a square duct?
I couldn't find one but assumed it must exist. Tried to find it on the back of an envelope, but got to an ugly differential equation I can't solve. I'm assuming a square duct of infinite length, ...
0answers
101 views
### How to integrate twice of this viscous term?
I am reading a paper, and I do not understand why the author said the following term, when integrated twice, will become $\int\limits_\Omega {\rm{d}}\Omega\, {\bf{\psi}}^{\bf{u}}\cdot\nabla$ ...
2answers
2k views
### Convective and Diffusive terms in Navier Stokes Equations
My question has 2 parts: I just followed the derivation of Navier Stokes (for Control Volume CFD analysis) and was able to understand most parts. However, the book I use (by Versteeg) does not ...
1answer
193 views
### Reynolds number with hyper-viscosity
Is it possible to evaluate a Reynolds number when viscosity operator is substituted by hyper-viscosity operator at the power H (Laplacien to the power H) in the incompressible Navier-Stokes equations ...
3answers
346 views
### Occurrence of turbulences in Fluid Dynamics from the equations of motion?
How can it be shown that turbulences occur in Fluid Dynamics? I think people imply that they develop because of the $\text{rot}$ terms in the equations of motion, i.e. the Navier-Stokes equations, ...
1answer
335 views
### Free boundary conditions
I am trying to simulate liquid film evaporation with free boundary conditions (in cartesian coordinates) and my boundary conditions are thus: $$\frac{\partial h}{\partial x} = 0, \qquad (1)$$ ...
1answer
204 views
### Lagrangian for Euler Equations in general relativity
The stress energy tensor for relativistic dust $$T_{\mu\nu} = \rho v_\mu v_\nu$$ follows from the action S_M = -\int \rho c \sqrt{v_\mu v^\mu} \sqrt{ -g } d^4 x = -\int c \sqrt{p_\mu ...
1answer
343 views
### Would a solution to the Navier-Stokes Millennium Problem have any practical consequences?
I know the problem is especially of interest to mathematicians, but I was wondering if a solution to the problem would have any practical consequences. Upon request: this is the official problem ...
1answer
484 views
### Friction term in Navier-Stokes equation
The friction term in Navier-Stokes equation assumes that the viscosity coefficients are the same for the longitudinal and transverse directions. This doesn't seem intuitive, because the former is ...
1answer
301 views
### How to derive the Karman-Howarth-Monin relation for anisotropic turbulence?
I find the derivation of the Karman-Howarth-Monin relation in the book Turbulence from Frisch (1995) a bit to short. Can someone point me to a more detailed derivation of that relation, if possible in ...
3answers
425 views
### How to calculate the upper limit on the number of days weather can be forecast reliably?
To put it bluntly, weather is described by the Navier-Stokes equation, which in turn exhibits turbulence, so eventually predictions will become unreliable. I am interested in a derivation of the ...
http://scicomp.stackexchange.com/questions/2391/derive-pca-with-svd
# Derive PCA with SVD
The context: I have a big matrix, 20K * 50K, and I want to reduce its dimensionality. In R, it's impossible to apply PCA with more variables (columns) than observations (rows). Therefore, I am trying a partial SVD of this matrix; for instance, I calculate the top 100 left and right singular vectors. But with these singular vectors, how can I obtain the new data? For example, for PCA we have loadings for calculating the scores in the new transformed coordinate system; can we do the same thing with SVD?
Thanks for your kind response !
-
## 2 Answers
Suppose your data matrix is $\mathbf{X} \in \mathbb{R}^{n \times m}$, where $m$ is the number of data points, and $n$ is the dimensionality of the data. Let the SVD of $\mathbf{X}$ be
\begin{align} \mathbf{X} = \mathbf{U}\boldsymbol{\Sigma}\mathbf{V}^{T}. \end{align}
Assume for the sake of simplicity that the data points that make up $\mathbf{X}$ have not been translated (for example, if you normally subtract the mean, assume it's zero). Also, assume that the singular values are in decreasing order from left to right. Let $\mathbf{U}_{k}$ be the matrix consisting of the leftmost $k$ columns of $\mathbf{U}$ (i.e., the left singular vectors corresponding to the $k$ largest singular values). Then PCA defines the following transformations:
• Mapping into a lower dimensional coordinate system: given $\mathbf{x} \in \mathbb{R}^{n}$, where $n$ corresponds to the dimensionality of your original data, the mapped (or transformed) data will be $\mathbf{y} \in \mathbb{R}^{k}$, where $\mathbf{y} = \mathbf{U}_{k}^{T}\mathbf{x}$
• Mapping the lower dimensional, transformed data back to the original host space and coordinate system: given the transformed data point $\mathbf{y} \in \mathbb{R}^{k}$, the representation of that point in the original host space and coordinate system is $\tilde{\mathbf{x}} = \mathbf{U}_{k}\mathbf{y}$.
• Projecting onto a lower dimensional subspace: given $\mathbf{x} \in \mathbb{R}^{n}$, PCA defines an orthogonal projector $\mathbf{P}_{k} = \mathbf{U}_{k}\mathbf{U}_{k}^{T}$, so that the projected data point $\tilde{\mathbf{x}} \in \mathbb{R}^{n}$ is defined by $\tilde{\mathbf{x}} = \mathbf{P}_{k}\mathbf{x}$. Projection is equal to the second mapping in this list composed with the first mapping in this list.
If you translate all of your data by a point $\mathbf{x}_{0} \in \mathbb{R}^{n}$, just replace $\mathbf{x}$ in the above list with the expression $(\mathbf{x} - \mathbf{x}_{0})$, and the same expressions will hold.
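For readers who want to see the three mappings above in code, here is a minimal numpy sketch (my own illustration, not part of the original answer; the toy data and variable names are mine). It treats data points as columns of $\mathbf{X}$, as in the explanation above.

```python
import numpy as np

# Toy setup: n-dimensional data, m points stored as COLUMNS of X, reduce to k dimensions.
rng = np.random.default_rng(0)
n, m, k = 50, 200, 5
X = rng.standard_normal((n, m))          # assumed already centered (x0 = 0)

U, s, Vt = np.linalg.svd(X, full_matrices=False)
Uk = U[:, :k]                            # top-k left singular vectors

x = X[:, 0]                              # one data point
y = Uk.T @ x                             # map into the k-dimensional coordinate system
x_back = Uk @ y                          # map back to the original coordinate system
x_proj = (Uk @ Uk.T) @ x                 # orthogonal projection P_k x

print(np.allclose(x_back, x_proj))       # True: projecting = mapping down, then mapping back
```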
-
Thanks at first, but still I don't understand well your explanation. Especially the first point, i.e. mapping into a lower dimensional coordinate system. Suppose the X is a matrix 8 by 10, and we apply a partial SVD on it, taking only top 5 singular values, and singular right/left vectors. Then according to my experiment, U will be 8 by 5, and V is 10 by 5. Now I have a test dataset, A to be transformed, and its size is 2 by 10 (same columns as attributes), clearly the U^T won't have an appropriate size for matrix multiplication with A. Am I right ? – Ensom Hodder May 30 '12 at 14:56
I think you're treating data points as rows in your data matrix, but the explanation above assumes that each data point is a column of the data matrix, in which case you'd have to transpose matrices appropriately. – Geoff Oxberry♦ May 30 '12 at 15:00
You're right ! That's my fault :-p Anyway, thanks a lot for your kind response ! – Ensom Hodder May 30 '12 at 15:51
By the way, for someone uses row for observations, and columns for attributes(variables), we may obtain the transformed data by this formula : $\mathbf{y} = \mathbf{x}\mathbf{V}_{k}^{T}$ – Ensom Hodder May 30 '12 at 16:05
Check out gensim; Latent Semantic Analysis is the linguistics code-word for iterative SVD.
-
http://mathforum.org/mathimages/index.php?title=Lissajous_Curve&diff=34006&oldid=33398
# Lissajous Curve
## Current revision
Lissajous Box
Field: Geometry
Image created by: Michael Trott (www.wolfram.com); ask for permission before using it elsewhere.
This is a beautiful Lissajous Box. The curves on its sides are Lissajous Curves with a frequency ratio of 10:7.
# Basic Description
In physics, harmonic vibration is a type of periodic motion where the restoring force is proportional to the displacement. If you haven't seen harmonic vibrations before, please read through this helper page before proceeding, since this concept is crucial in our following discussion of Lissajous Curves.
Lissajous Curves, or Lissajous Figures, are beautiful patterns formed when two harmonic vibrations along perpendicular lines are superimposed. The following animation tells us how to generate one Lissajous Curve:
Figure 1: How to generate a Lissajous Curve (animation)
In the animation above, points X and Y are simple harmonic oscillators in x and y directions. They have the same magnitude of 10, but their angular frequencies are different. As we can see in the animation, the x - vibrator completes 3 cycles from the beginning to the end, while the y - vibrator completes only 2. In fact, these vibrators follow the equations of motion x = sin (3t ), and y = sin (2t ), respectively.
Now, we will try to get the superposition of these two vibrations, which is what we really care about. To get this superposition, we can draw from X a line perpendicular to x-axis, and from Y a line perpendicular to y-axis, and locate their intersection P. By simple geometry, P will have the same x-coordinate as X, and y-coordinate as Y, so it combines the motion of X and Y. As we can see in Figure 1, the trace of P turns out to be a complicated and beautiful curve, which we refer to as the "Lissajous Curve". More specifically, it's one Lissajous Curve in a big family, since we can easily generate more Lissajous Curves with other angular frequencies and phases using the same mechanism.
Mathematically speaking, since the motion of point P consists of two component vibrations, whose equations of motions are already known to us, we can easily get the parametric equations of P 's motion:
$\left.\begin{array}{rcl} x & \mbox{=} & A \sin(at + \phi) \\ y & \mbox{=} & B \sin(bt) \end{array}\right.$
in which A and B are magnitudes of two harmonic vibrations, a and b are their angular frequencies, and φ is their phase difference. If you are unfamiliar with these terms, please refer to this helper page.
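As a quick illustration, here is a minimal Python sketch (my own, assuming numpy; variable names are mine) that samples these parametric equations for one choice of parameters:

```python
import numpy as np

# Sample the parametric equations x = A*sin(a*t + phi), y = B*sin(b*t)
# using the Figure 1 values A = B = 10, a = 3, b = 2, phi = 0.
A, B = 10.0, 10.0
a, b = 3, 2
phi = 0.0

t = np.linspace(0.0, 2.0 * np.pi, 2000)   # one full cycle for integer a, b
x = A * np.sin(a * t + phi)
y = B * np.sin(b * t)

# (x, y) now traces the curve of Figure 1; it can be plotted, e.g. with matplotlib's plt.plot(x, y)
print(x.min(), x.max(), y.min(), y.max())
```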
The Lissajous Curve in Figure 1 has A = B = 10, a = 3, b = 2, and φ = 0. As we have stated before, we can get more Lissajous Curves by changing these parameters. The following images show some of these figures:
Figure 2-a: Lissajous Curve, a = 1, b = 2; Figure 2-b: Lissajous Curve, a = 3, b = 4; Figure 2-c: Lissajous Curve, a = 5, b = 4
# A Dip Into the History
Figure 3-a: Photograph of Jules Lissajous (year and photographer unknown)
Lissajous Curves were named after French mathematician Jules Antoine Lissajous (1822–1880)[1], who devised a simple optical method to study compound vibrations. Lissajous entered the Ecole Normale Superieure in 1841, and later became a professor of physics at the Lycee Saint-Louis in Paris, where he studied vibrations and sound.
During that age, people were enthusiastic about standardization in science. And the science of acoustics was no exception, since musicians and instrument makers were crying out for a standard in pitches. In response to their demand, Lissajous invented the Lissajous Tuning Forks, which turned out to be a great success since they not only allowed people to visualize and analyse sound vibrations, but also showed the beauty of math through interesting patterns.
The structure and usage of Lissajous Tuning Forks are shown in Figure 3-b. Each tuning fork is manufactured with a small piece of mirror attached to one prong, and a small metal ball attached to the other as counterweight. Two tuning forks like this are placed besides each other, oriented in perpendicular directions. A beam of light is bounced off the two mirrors in turn and directed to a screen. If we put a magnifying glass between the second tuning fork and the screen (to make the small deflections of light beam visible to human eyes), we can actually see Lissajous Curves forming on the screen.
Figure 3-b: Demonstration of Lissajous Tuning Forks
The idea of visualizing sound vibrations may not be surprising nowadays, but it was ground-breakingly new in Lissajous' age. Moreover, as we are going to see in the More Mathematical Explanation section, the appearances of Lissajous Curves are extremely sensitive to the frequency ratio of tuning forks. The most stable and beautiful patterns only appear when the two forks vibrate at frequencies of simple ratios, such as 2:1 or 3:2. These frequency ratios correspond to the musical intervals of the octave and perfect fifth, respectively. So, by observing the Lissajous Curve formed by an unadjusted fork and a standard fork of known frequency, people were able to make tuning adjustments far more accurately than tuning by ear.
Because of his contributions to acoustic science, Lissajous was honored as member of a musical science commission set up by the French Government in 1858, which also featured great composers such as Hector Berlioz (1803-1869) and Gioachino Rossini (1792-1868).
Acknowledgement: Most of the historical information in this section comes from this website and from Trigonometric Delights, by Eli Maor[2][3].
# A More Mathematical Explanation
In previous sections, we have encountered this question many times:
• What determines the appearance of Lissajous Curves?
In this section, we are going to answer this question in two ways. The first method is simple and direct, but is limited to several special cases. The second one applies to almost all Lissajous Curves, but as a result it's more subtle and complicated.
## First Method: Direct Elimination of t
Since Lissajous Curves are defined by the following parametric equations:
$\left.\begin{array}{rcl} x & \mbox{=} & \sin(a \cdot t + \phi) \\ y & \mbox{=} & \sin(b \cdot t) \end{array}\right.$
In principle, one can use trigonometric formulas to eliminate t from these equations and get a relationship between x and y. See the following examples:
(Note: in all examples below, we are going to assume that A = B = 1, since changing these magnitudes will only make the curves dilate or contract in horizontal or vertical direction. They don't affect the structure of Lissajous curves.)
### Example 1: line segment
Figure 3-a: Lissajous Curve 1 - Line Segment
If in addition to A = B = 1, we have a = b = 1, and φ = 0, then the parametric equations will become:
$\left.\begin{array}{rcl} x & \mbox{=} & \sin(t) \\ y & \mbox{=} & \sin(t) \end{array}\right.$
This set of equations tells us:
$x = y$
Moreover, since the range of sin(x ) is from -1 to 1, we have:
$-1 \leq x \leq 1$
Together, they give us the line segment shown in Figure 3-a.
### Example 2: circle
(Starting in this example we will use some trigonometric formulas to help us reduce the equations. These formulas, together with some explanations, can be found here[4].)
Figure 3-b: Lissajous Curve 2 - Circle
In this case, we still have a = b = 1. But instead of letting φ = 0, we change it to $\pi \over 2$:
$\left. \begin{array}{rcl} x & \mbox{=} & \sin(t + {\pi \over 2}) \\ y & \mbox{=} & \sin(t) \end{array}\right.$
Using the trigonometric identity $\sin (t + {\pi \over 2}) = \cos(t)$, we can get:
$\left. \begin{array}{rcl} x & \mbox{=} & \cos(t) \\ y & \mbox{=} & \sin(t) \end{array}\right.$
Since $\sin^2(\theta) + \cos^2(\theta) = 1$, we can eliminate t and get the following equation:
$x^2 + y^2 = 1$
which gives us the circle shown in Figure 3-b.
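A quick numeric spot check of this elimination (a sketch of my own, assuming numpy; not part of the original page):

```python
import numpy as np

# Spot check of Example 2: x = sin(t + pi/2), y = sin(t) should satisfy x^2 + y^2 = 1.
t = np.linspace(0.0, 2.0 * np.pi, 1001)
x = np.sin(t + np.pi / 2.0)
y = np.sin(t)
print(np.max(np.abs(x**2 + y**2 - 1.0)))   # ~1e-16, i.e. the unit circle
```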
### Example 3: parabola
Figure 3-c: Lissajous Curve 3 - Parabola
This time, if we change the parameters into a = 1, b = 2, and φ = $\pi \over 4$, then the parametric equations will become:
$x = \sin(t + {\pi \over 4})$
$y = \sin(2t)$
from Eq. 1a we can get:
$2x^2 - 1 = 2\sin^2(t + {\pi \over 4}) - 1$
Using the trigonometric identity $\cos (2\theta) = 1 - 2 \sin^2(\theta)$, we can get:
$2x^2 - 1 = - \cos (2t + {\pi \over 2})$
Applying the formula $\cos(\theta + {\pi \over 2}) = - \sin(\theta)$:
$2x^2 - 1 = \sin(2t)$
Combining it with Eq. 1b, we can again eliminate t and get the following equation:
$y = 2x^2 - 1$
with x confined between -1 and 1. This gives us the parabola in Figure 3-c.
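If you would rather let a computer algebra system do the elimination, here is a small sympy sketch (my own, not from the page) that should reduce the difference between the two sides to zero:

```python
import sympy as sp

# Symbolic elimination for Example 3: with x = sin(t + pi/4) and y = sin(2t),
# the difference y - (2*x**2 - 1) should simplify to zero.
t = sp.symbols('t', real=True)
x = sp.sin(t + sp.pi / 4)
y = sp.sin(2 * t)
print(sp.simplify(y - (2 * x**2 - 1)))     # expected output: 0
```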
### Conclusion: pros and cons
In the examples above, we can clearly see some advantages of the direct elimination method: it's clear, accurate, and easy to understand. However, these advantages are quickly shadowed by the complexity of calculation when we get to larger frequency ratios. For example, see the following parametric equations of a Lissajous Curve:
$\left. \begin{array}{rcl} x & \mbox{=} & \sin(9t) \\ y & \mbox{=} & \sin(8t) \end{array}\right.$
In principle, this could be solved by expanding the x- and y- function into powers of $\sin(t)$ and $\cos(t)$:
$x = \sin 9t = \sin^9 t - {9\cdot8 \over 2!}\sin^7 t\cos^2 t + {9\cdot8\cdot7\cdot6 \over 4!}\sin^5 t \cos^4 t -{9\cdot8\cdot7\cdot6\cdot5\cdot4 \over 6!}\sin^3 t\cos^6 t + {9! \over 8!}\sin t\cos^8 t$
$y = \sin 8t = -{8! \over 7!}\sin^7 t\cos t + {8\cdot7\cdot6\cdot5\cdot4 \over 5!}\sin^5 t \cos^3 t - {8\cdot7\cdot6 \over 3!}\sin^3 t\cos^5 t + 8\sin t\cos^7 t$
Notice that in these equations, if we consider sin(t ) and cos(t ) as unknowns, then we have a set of two polynomial equations with two unknowns, and in principle we can solve for sin(t ) and cos(t ) in terms of x and y. Then, the identity $\sin^2(t) + \cos^2(t) = 1$ will give us a direct relationship between x and y. However, in practice, few people are willing to carry on with the algebra, because the calculations involved are so cumbersome. To make things worse, as Galois theory tells us, not every polynomial equation of degree 5 or higher can be solved by an exact expression for its roots [5]. So there is no guarantee that our effort will lead us to the answer. Even if it does, the relationship between x and y is going to be too complicated to tell us anything useful about the shape of the curve. So the method of elimination fails here, and we would like a new way to study these curves.
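The multiple-angle expansions quoted above can at least be double-checked by machine. The following sympy sketch (mine; only a sanity check, not a way around the elimination problem) rewrites sin(9t) and sin(8t) as polynomials in sin(t) and cos(t) and spot-checks the identities numerically:

```python
import sympy as sp

# Rewrite sin(9t) and sin(8t) as polynomials in sin(t), cos(t), then spot-check the identity.
t = sp.symbols('t', real=True)
for n in (9, 8):
    poly = sp.expand_trig(sp.sin(n * t))        # polynomial in sin(t) and cos(t)
    print(n, sp.expand(poly))                   # compare with the expansions in the text
    for t0 in (0.3, 1.1, 2.7):
        assert abs((poly - sp.sin(n * t)).subs(t, t0).evalf()) < 1e-12
```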
## Second Method: Experiment and Observation
As shown in the previous discussion, the attempt to directly solve Lissajous Curves fails when we try to deal with large angular frequencies, so we have to find another way to study them. One such way is to study them through experiment and observation. That is, we can use computer software to draw some Lissajous Curves with different parameters, and see how they affect the appearance of Lissajous Curves.
When we study something with multiple variable parameters, it's much easier to study these parameters separately, rather than together. There are three variable parameters, a, b, and φ , in Lissajous Curves. So in the rest of this section we will fix the phase difference φ to study the angular frequencies a and b, and then fix the angular frequencies to study the phase difference.
### Study a and b with φ fixed
The following table shows the Lissajous Curves:
$\left.\begin{array}{rcl} x & \mbox{=} & \sin(a \cdot t + \phi) \\ y & \mbox{=} & \sin(b \cdot t) \end{array}\right.$
with angular frequencies a and b varying from 1 to 5, and phase difference φ fixed at 0:
Figure 4-a: A table of Lissajous Curves with different angular frequency ratios
There are many interesting properties associated with this table:
1. All Lissajous Curves in the table are confined in a 2 * 2 square box. The curves can touch, but cannot go beyond, the lines x = 1, x = –1, y = 1, and y = –1, because the amplitudes of both horizontal and vertical vibrations are set to 1.
2. The Lissajous Curve with a = b = 1 is identical to the curves with a = b = 2, a = b = 3 ... Similarly, the Lissajous Curve with a = 1, b = 2 is identical to the curve with a = 2, b = 4, as shown in Figure 4-b. In other words, the only thing that matters is the ratio between a and b. It can be shown that Lissajous Curves with the same angular frequency ratio must have the same appearance. For example, if we do the substitution t = 2u in the Lissajous Curve with a = 1 and b = 2: $\left. \begin{array}{rcl} x & \mbox{=} & \sin(t) \\ y & \mbox{=} & \sin(2t) \end{array}\right.$ we will get: $\left.\begin{array}{rcl} x & \mbox{=} & \sin(2u) \\ y & \mbox{=} & \sin(4u) \end{array}\right.$ which is nothing different from the Lissajous Curve with a = 2 and b = 4, because whether we use the symbol t or u doesn't matter here. This analysis can be generalized to all Lissajous Curves with rational frequency ratios. (Figure 4-b: Property #2)
3. The Lissajous Curve with a = 1 and b = 2 is the reflection of the Lissajous Curve with a = 2 and b = 1 about line y = x, as shown in Figure 4-c. In fact, if we exchange the values of a and b in any Lissajous Curve, the result will be the original curve "flipped" about line y = x. To prove this, let's see the Lissajous Curve: $\left. \begin{array}{rcl} x & \mbox{=} & \sin(at) \\ y & \mbox{=} & \sin(bt) \end{array}\right.$ if we replace a with b, and b with a, we will get the following Lissajous Curve: $\left. \begin{array}{rcl} x & \mbox{=} & \sin(bt) \\ y & \mbox{=} & \sin(at) \end{array}\right.$ However, the same resulting curve could also be achieved by replacing x with y, and y with x in the original curve. In other words, the exchange of a and b is equivalent to the exchange of x and y. Moreover, in Cartesian coordinates, exchanging x and y in the equation of the curve is equivalent to flipping the curve about line y = x. So exchanging a and b is also equivalent to flipping about line y = x. (Figure 4-c: Property #3)
From these properties, we can see that many Lissajous Curves with different angular frequencies are actually the same thing, and we do not need to study all of them. In fact, we can use the following family to represent all Lissajous Curves:
$\left.\begin{array}{rcl} x & \mbox{=} & \sin(rt) \\ y & \mbox{=} & \sin(t) \end{array}\right.$
in which r stands for the angular frequency ratio of two component vibrations. It could be rational or irrational. Here we are only studying the rational case because it's simpler, and the irrational version comes in later sections. The argument for this representation goes as following:
For any Lissajous Curve,
$\left. \begin{array}{rcl} x & \mbox{=} & \sin(at) \\ y & \mbox{=} & \sin(bt) \end{array}\right.$
in which a and b are integers. We can assume that a ≤ b , since if a > b then we can exchange their values, and according to property #3 the curve will only be flipped about line y = x. This doesn't affect the curve's structure, which is what we really care about.
The next step is to divide both angular frequencies by b. According to property #2, the curve will not change, and we will get:
$\left. \begin{array}{rcl} x & \mbox{=} & \sin({a \over b}t) \\ y & \mbox{=} & \sin(t) \end{array}\right.$
The last set of parametric equations belongs to the family we mentioned above. So we only need to study this family of Lissajous Curves, since others can all be reduced to this case.
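A small numeric illustration of the substitution t = 2u used in property #2 (a sketch of my own, assuming numpy): the curves for (a, b) = (1, 2) and (a, b) = (2, 4) trace the same set of points.

```python
import numpy as np

# Substituting t = 2u in the (a, b) = (1, 2) curve gives exactly the (a, b) = (2, 4) curve.
u = np.linspace(0.0, 2.0 * np.pi, 4001)
curve_12 = np.column_stack((np.sin(1 * (2 * u)), np.sin(2 * (2 * u))))   # (a, b) = (1, 2) at t = 2u
curve_24 = np.column_stack((np.sin(2 * u), np.sin(4 * u)))               # (a, b) = (2, 4) at t = u
print(np.max(np.abs(curve_12 - curve_24)))                               # 0.0
```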
The following animation shows some of the Lissajous Curves in this family, with the frequency ratio a / b varying continuously from 0 to 1:
Figure 4-d: Lissajous Curves with varying frequency ratio (animation)
Surprisingly, as we can see in the animation, most of these Lissajous Curves are rather convoluted. But there are some simple ones scattered in them. A more careful examination shows that, when these simple patterns occur, the frequency ratio must be equal to a simple fraction. In fact, if we reduce the frequency ratio a / b to simplest fraction (that is, a and b have a greatest common divisor of 1), then the larger a and b are, the more complicated our Lissajous Curve is going to be. This phenomenon is not hard to understand if we look at the generation process of Lissajous Curves once again.
Suppose the two component vibrations start at t = t0. As long as the frequency ratio is rational, the moving point will eventually return to its starting place, and make a closed Lissajous Curve. Suppose this happens at t = t1. So the time period between t0 and t1 is a complete cycle of this Lissajous Curve. Moreover, since the starting point and ending point overlap, we must have:
$\left. \begin{array}{rcl} x (t_0) & \mbox{=} & x (t_1) \\y(t_0) & \mbox{=} & y(t_1) \end{array}\right.$
Substitute into the parametric equations of Lissajous Curve with rational frequency ratio, we can get:
$\left. \begin{array}{rcl} \sin({a \over b}t_0) & \mbox{=} & \sin({a \over b}t_1) \\ \sin(t_0) & \mbox{=} & \sin(t_1) \end{array}\right.$
which leads to:
${a \over b}(t_1 - t_0) = 2k_1 \pi$
$(t_1 - t_0) = 2k_2 \pi$
in which k1 and k2 are integers. The other possibility $t_1 + t_0 = (2k + 1)\pi$ is omitted because they represent the intersections inside one cycle. At these intersections, although the positions overlap, the velocities don't. So the Lissajous Curve is not closed at these points.
Substituting Eq. 2b into Eq. 2a, we get:
${a \over b} = {k_1 \over k_2}$
since a / b is assumed to be an irreducible fraction (if they weren't, we could divide them by their common factor without changing the Lissajous Curve), the smallest k1 and k2 that satisfy this equation are k1 = a and k2 = b. Substituting back into Eq. 2b, we can get:
$(t_1 - t_0) = 2b \pi$
So the larger b is, the longer it's going to take before the Lissajous Curve closes and repeats itself, and the more convoluted it's going to be. For a simple angular frequency ratio like 1/2, the vibrations soon start to repeat, and the Lissajous Curve is simple, as shown in the previous table. However, a ratio like 37/335 will make the curve much more complicated. In an extreme case, if the ratio is irrational, then both a and b will be "infinitely large", and the curve is no longer closed. This special case is treated later in this section.
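Here is a short numeric check of the closing time $2b\pi$ (my own sketch, assuming numpy, not from the original page), using the ratio a / b = 3/2: position and velocity both repeat after $2b\pi = 4\pi$.

```python
import numpy as np

# Closing-time check for x = sin((a/b)*t), y = sin(t) with a/b = 3/2 (so b = 2):
# position AND velocity repeat after 2*b*pi, so the curve closes there.
a, b = 3, 2
r = a / b
t0 = 0.7                                   # arbitrary starting parameter
t1 = t0 + 2 * b * np.pi

pos = lambda t: np.array([np.sin(r * t), np.sin(t)])
vel = lambda t: np.array([r * np.cos(r * t), np.cos(t)])
print(np.allclose(pos(t0), pos(t1)), np.allclose(vel(t0), vel(t1)))   # True True
```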
In conclusion, the angular frequency ratio a / b, reduced to simplest fraction, determines the complexity of Lissajous Curves. Large a and b lead to complicated Lissajous Curves; small a and b give us simple ones. This is why Lissajous Tuning Forks are so suitable for tuning notes. In music theory, most of the important intervals are simple fractions. For example, the interval of a perfect octave is 1:2, perfect fifth is 2:3, perfect fourth is 3:4, and so on[6]. These intervals all correspond to simple Lissajous Curves with distinctive features.
### Study φ with a and b fixed
In the previous subsection we have figured out how angular frequencies of a Lissajous Curve affect its appearance. Now we are going to fix the angular frequencies to study the third, and last, variable parameter: the phase difference φ .
The following animation shows the Lissajous Curve
$\left. \begin{array}{rcl} x & \mbox{=} & \sin(t + \phi) \\y & \mbox{=} & \sin(3t) \end{array}\right.$
with φ varying continuously from $0$ to $2\pi$:
Figure 5-a: Lissajous Curve with varying phase difference φ (animation)
An interesting fact to notice is that the animation above looks more like a rotating 3-D curve, rather than a changing 2-D one. The reason for this illusion is related to another way to define Lissajous Curves. In the beginning of this page, we introduced the following definition:
A Lissajous Curve is the superposition of two harmonic vibrations in perpendicular directions.
However, this is not the only available definition for Lissajous Curves. These curves can also be viewed as the projection of a 3-D harmonic height function over a circular base. The following set of images explain this definition in more detail:
Figure 5-b: Circular base of harmonic height function; Figure 5-c: Raising process; Figure 5-d: Projection onto the y-z plane
The first step to generate this harmonic height function is to draw a circular base in x-y plane, as shown in Figure 5-b. The parametric equation of this circular base is:
$\left. \begin{array}{rcl} x & \mbox{=} & \cos(t + \phi) \\ y & \mbox{=} & \sin(t + \phi) \end{array}\right.$
The variable parameter φ here doesn't change the shape of the circle, as we still have the relationship $x^2 + y^2 = 1$. But if we change the value of φ, the circle rotates about the origin O: changing φ by Δφ sends every point on the circular base to a rotated point on the same circle. Of course we can't see the motion here, because O is also the center of the circle. However, this rotation is going to make a difference later.
The next step to generate this harmonic height function is to raise (or lower) each point in this circular base to a certain height. This height is determined by the function:
$z = \sin(3t)$
The raising process is shown in Figure 5-c. Note that if we change φ now, the rotation is visible, since the curve's rotational symmetry is broken in the raising process.
Finally, if we make the projection of that rotating height curve onto the y-z plane, as shown in Figure 5-d, we can see that it's exactly same as the animation in Figure 5-a. In other words, this Lissajous Curve can be viewed as the projection of this 3-D height function. Changing the value of φ makes the 3-D curve rotate, and in turn changes the 2-D curve. In fact, this is why we had the 3-D illusion in Figure 5-a.
Algebraic analysis agrees with this result. As we have seen, the parametric equations of this harmonic height function are:
$\left.\begin{array}{rcl} x & \mbox{=} & \cos(t + \phi) \\ y & \mbox{=} & \sin(t + \phi) \\ z & \mbox{=} & \sin(3t)\end{array}\right.$
To project it onto the y-z plane, we can fix its x component to be 0:
$\left.\begin{array}{rcl} x & \mbox{=} & 0 \\ y & \mbox{=} & \sin(t + \phi) \\ z & \mbox{=} & \sin(3t)\end{array}\right.$
Compare this projection to the Lissajous Curve we had before:
$\left. \begin{array}{rcl} x & \mbox{=} & \sin(t + \phi) \\y & \mbox{=} & \sin(3t) \end{array}\right.$
we can see that they are indeed the same thing.
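The projection argument can also be spelled out numerically; the following sketch (mine, assuming numpy) builds the 3-D height curve for a = 1, b = 3 and confirms that dropping its x component reproduces the Lissajous Curve:

```python
import numpy as np

# Build the 3-D height curve (cos(t+phi), sin(t+phi), sin(3t)) and check that its
# projection onto the y-z plane is the Lissajous Curve (sin(t+phi), sin(3t)).
phi = 0.9
t = np.linspace(0.0, 2.0 * np.pi, 1000)
height_curve = np.column_stack((np.cos(t + phi), np.sin(t + phi), np.sin(3 * t)))
projection = height_curve[:, 1:]                     # drop the x component
lissajous = np.column_stack((np.sin(t + phi), np.sin(3 * t)))
print(np.max(np.abs(projection - lissajous)))        # 0.0
```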
Although we used a special case a = 1, b = 3 in our discussion, the result applies to all Lissajous Curves with rational frequency ratios. The following images show the 3-D height function of some other Lissajous Curves:
Figure 5-e: a = 2, b = 3, φ = 0; Figure 5-f: a = 3, b = 5, φ = $\pi$/10
### To put it all together: a java applet
So far we have talked a lot about the appearance of Lissajous Curves. We know that some simple cases can be solved by direct elimination of t in the parametric equations, that the frequency ratio of a Lissajous Curve determines its complexity, and that the phase difference φ affects a Lissajous Curve by rotating its corresponding 3-D height function. There is an interactive java applet that puts these all together: it allows the user to change both angular frequencies from 1 to 9, and to animate the curve by changing φ[7].
### What happens when things get irrational?
We have limited previous discussions to Lissajous Curves with rational frequency ratios. So, one may naturally wonder, what happened to all those with irrational frequency ratios? Well, they have all died painfully because of their irrationality ...
Just kidding. They are still there, waiting for us to study. For example, see the following Lissajous Curve:
$\left. \begin{array}{rcl} x & \mbox{=} & \sin(2t) \\y & \mbox{=} & \sin(\pi t) \end{array}\right.$
It's a known fact that $\pi$ is irrational. So the frequency ratio $2 \over \pi$ here is also irrational, and this curve is going to be radically different from any one we have encountered so far. See the animation below to get a sense of what it looks like:
Figure 6: Lissajous Curve with irrational angular frequency ratio (animation)
Figure 6 shows the trace of this Lissajous Curve in accelerating motion. In the beginning, it looks just like an ordinary Lissajous Curve. However, soon we can see the difference: this curve is never closed! It keeps going on and on, and eventually fills the whole 2*2 box. In fact, it can be shown that no Lissajous Curve with an irrational frequency ratio can close. Without loss of generality, let the parametric equation of such a Lissajous Curve be:
$\left.\begin{array}{rcl} x & \mbox{=} & \sin(rt) \\ y & \mbox{=} & \sin(t) \end{array}\right.$
in which r is an irrational number.
Now we suppose that the curve is closed, and try to derive a contradiction. Let the starting time be t0 and the closing time be t1 as before, so we must have:
$\left.\begin{array}{rcl} x(t_0) & \mbox{=} & x(t_1) \\ y(t_0) & \mbox{=} & y(t_1) \end{array}\right.$
which gives us:
$\left.\begin{array}{rcl} \sin(rt_0) & \mbox{=} & \sin(rt_1) \\ \sin(t_0) & \mbox{=} & \sin(t_1) \end{array}\right.$
which leads to:
$r(t_1 - t_0) = 2p \pi$
$(t_1 - t_0) = 2q \pi$
in which p and q are integers. The other possibility $t_1 + t_0 = (2q + 1)\pi$ is omitted due to the same reason discussed before.
Substituting Eq. 3b into Eq. 3a, we can get:
$r = {p \over q}$
However, recall that we assumed r to be irrational, which means it cannot be written as an integer fraction ${p \over q}$. So the equation above cannot be true, and this Lissajous Curve is never closed.
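No finite computation can prove this, but a rough numeric illustration (my own sketch, assuming numpy) is still instructive: for the ratio 2/π, the fraction of cells of a grid over the 2*2 box visited by the curve keeps growing as we let t run longer, consistent with the curve eventually filling the whole box.

```python
import numpy as np

# For the irrational ratio 2/pi, measure which cells of a 40x40 grid over the
# 2*2 box have been visited by time t_max; the fraction keeps growing toward 1.
def coverage(t_max, cells=40, samples_per_unit=20):
    t = np.linspace(0.0, t_max, int(samples_per_unit * t_max) + 1)
    x, y = np.sin(2 * t), np.sin(np.pi * t)
    i = np.clip(((x + 1.0) / 2.0 * cells).astype(int), 0, cells - 1)
    j = np.clip(((y + 1.0) / 2.0 * cells).astype(int), 0, cells - 1)
    return len(set(zip(i.tolist(), j.tolist()))) / cells**2

for t_max in (10, 100, 1000, 10000):
    print(t_max, coverage(t_max))
```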
# Why It's Interesting
As a family of beautiful figures, Lissajous Curves are themselves an interesting subject to study. Moreover, they also have some practical applications, including oscilloscopes and harmonographs.
## Application to Oscilloscopes
Figure 7-a: Structure of an oscilloscope
An oscilloscope is a type of electronic instrument that allows observation of constantly varying signal voltages. The following image shows the simplified structure of a typical cathode ray oscilloscope:
In Figure 7-a, the electron gun at left generates a beam of electrons when heated, which is then directed through a deflecting system. The deflecting system is made of two sets of parallel metal plates, one for deflection in the x-direction and the other for the y-direction. A signal voltage applied to the X-plates gives them an electric potential difference, generates a uniform electric field between them, and makes the electron beam deflect in the x-direction. The same holds for the Y-plates. The angle of deflection is proportional to the voltage applied.
After passing the deflecting system, the electron beam is then directed to a screen, which is covered by fluorescent material so that we can see green light on the places hit by electrons. If there is no voltage applied to the deflecting system, then the electron beam hits the screen right at the center. If there is voltage applied, then the electrons will hit somewhere else. So the oscilloscope makes signal voltages visible to us.
Now if we apply a sinusoidal signal on each set of the plates, then both the X-plates and the Y-plates will have varying electronic fields between them, and the electron beam will oscillate in both directions. As a result, the trace on the screen should be the superposition of these two oscillations. As we have discussed before, this is a Lissajous Curve. The following images show some of the Lissajous Curves achieved on oscilloscopes:
Figure 7-b Figure 7-c Figure 7-d
Similar to Lissajous Tuning Forks, Lissajous Figures on the oscilloscope can give us some information about the two component vibrations. For example, just by looking at the Lissajous Curve in Figure 7-b, experienced observers can tell that the frequency ratio between its two component vibrations is 1:3, and the phase difference is $\pi \over 2$. Engineers and physicists often use this method to analyze signals and waves.
## Application to Harmonographs
A harmonograph is a mechanical apparatus that employs pendulums to create geometric images. The drawings created are typically Lissajous curves, or related drawings of greater complexity. See the following video to get a sense of how it works[8]:
As we can see in the video, a typical three pendulum rotary harmonograph consists of a table, a drawing board, a pen, and 3 pendulums. Two of them are linear pendulums oriented in perpendicular directions, and they control the motion of the pen. The third pendulum is free to swing in both directions, and it's connected to the drawing board.
Harmonographs can be used to draw Lissajous Curves. We only need to fix the pendulum connecting to the drawing board, and assume that there is no friction in the other two pendulums. In mechanics, it is a known fact that motion of a frictionless pendulum can be viewed as simple harmonic motion, provided that the swinging angle is small[9]. So the pen's motion is the superposition of two perpendicular harmonic vibrations, which is a Lissajous Curve by definition.
However, in practice, friction cannot be completely eliminated. So the two linear pendulums are actually doing damped oscillations, rather than simple harmonic motion. Physicists give us the following equation of motion for damped harmonic oscillations[10]:
$x(t) = A e ^{- \gamma t} \sin(\omega t + \phi)$
in which $\gamma$ is called the damping constant. The larger $\gamma$ is, the more heavily this oscillator is damped, and the faster its magnitude decreases.
Since both linear pendulums are doing damped harmonic oscillations, the pen should have the following equation of motion:
$\left.\begin{array}{rcl} x & \mbox{=} & A e ^{- \gamma _1 t} \sin(\omega _1 t + \phi) \\ y & \mbox{=} & B e ^{- \gamma _2 t} \sin(\omega _2 t) \end{array}\right.$
If $\gamma _1$ = $\gamma _2$, then the common factor $e ^ {- \gamma t}$ can be extracted from the equations above, and we are left with the parametric equations of a Lissajous Curve:
$(x(t),y(t)) = e^ {- \gamma t}(A \sin(\omega _1 t + \phi),B \sin(\omega _2 t))$
which gives us a Lissajous Curve with exponentially decreasing magnitudes. For example, see the following computer simulations of the harmonograph with A = B =10, ω1 = 3, ω2 = 2, and φ = π/2:
Figure 8-aLissajous Curve with decreasing magnitudes Figure 8-bThe corresponding frictionless case
Figure 8-a shows the damped case with $\gamma$ = 0.04, and Figure 8-b shows the corresponding frictionless motion with $\gamma$ = 0. One can clearly see that they have similar shapes, except the magnitude of the curve in Figure 8-a decreases a little bit after each cycle, which is exactly what we mean by damping.
If $\gamma _1 \neq \gamma _2$, then things get more complicated, because the shape of the curve is distorted during the damping process. For example, see the following images:
Figure 8-cThe frictionless case Figure 8-d$\gamma _1 = \gamma _2 = 0.04$ Figure 8-e$\gamma _1 = 0.01$, $\gamma _2 = 0.04$
The three curves above all have A = B = 10, ω1 = ω2 = 1, and φ = π/4. Figure 8-c shows the frictionless case, which is a Lissajous Curve we have discussed before. Figure 8-d shows the damped case with equal damping constants, and one can see that the motion decreases uniformly in both directions. Figure 8-e shows the damped case with unequal damping constants. Since the motion in the y-direction is damped much more heavily than in the x-direction, the shape of this curve is distorted towards the x-axis during the damping process.
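For readers who want to experiment, the three damped traces above are easy to reproduce numerically. The following sketch (an added illustration using the stated parameters A = B = 10, ω1 = ω2 = 1, φ = π/4) plots the frictionless, equally damped, and unequally damped cases side by side:

```python
import numpy as np
import matplotlib.pyplot as plt

A = B = 10
w1 = w2 = 1
phase = np.pi / 4
t = np.linspace(0, 120, 120_000)

# (gamma1, gamma2) pairs corresponding to Figures 8-c, 8-d and 8-e
cases = [(0.0, 0.0), (0.04, 0.04), (0.01, 0.04)]
fig, axes = plt.subplots(1, 3, figsize=(12, 4))
for ax, (g1, g2) in zip(axes, cases):
    x = A * np.exp(-g1 * t) * np.sin(w1 * t + phase)
    y = B * np.exp(-g2 * t) * np.sin(w2 * t)
    ax.plot(x, y, linewidth=0.5)
    ax.set_title(f"gamma1 = {g1}, gamma2 = {g2}")
    ax.set_aspect("equal")
plt.show()
```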
If we add more complexity by releasing the free pendulum connecting to the drawing board, then the curve will be the superposition of all these motions. We are not going to study the math behind this, since it's way too complicated with more than 10 variable parameters. However, as we have seen in the video, more complexity also gives us more beautiful and interesting images. The following images are works created by harmonographs. Some of them are computer simulations, others are real pictures from harmonograph makers:
# References
1. ↑ Jules Antoine Lissajous, from Wikipedia. This is a biography of Jules Lissajous, discoverer of Lissajous Curves.
2. ↑ Lissajous tuning forks: the standardization of musical sound, from the Whipple Collections. This is a brief introduction to Lissajous' tuning forks and his contribution to acoustic science.
3. ↑ Trigonometric Delights, by Eli Maor, Princeton Press. Pg. 145 - 149: Jules Lissajous and his figures.
4. ↑ List of Trigonometric identities, from Wikipedia. This page lists some of the trigonometric formula we used the derive the shape of Lissajous curves.
5. ↑ Polynomial, from Wikipedia. This briefly explains why we can't find a general solution for equations of powers higher than 5.
6. ↑ Interval (music), from Wikipedia. This article explains more about musical notes and their frequency intervals.
7. ↑ Animated Lissajous figures. This is the source of the embedded java applet.
8. ↑ Three Pendulum Harmonograph, from youtube. This is the source of the embedded video.
9. ↑ Pendulum, from Wikipedia. This article explains the physics behind pendulums.
10. ↑ Damping, from Wikipedia. This article explains how we derive the equation of motion for damped oscillators.
|
http://mathhelpforum.com/advanced-statistics/35755-distribution-density-expected.html
|
# Thread:
1. ## Distribution to density to expected
A random variable X has the cumulative distribution function
$F(x) = 0$ for $x<1$
$F(x) = \frac{x^2 - 2x + 2}{2}$ for $1 \leq x < 2$
$F(x) = 1$ for $x \geq 2$
Calculate the variance of X.
The answer specifies that the density function is
$f(x) = 0.5$ if x=1
$f(x) = x-1$ if 1<x<2
$f(x) = 0$ otherwise
Then
$E(X) = 0.5 + \int_1^2 x(x-1) dx$
$E(X^2) = 0.5 + \int_1^2 x^2(x-1) dx$
I got the f(x) = x-1 part, and I got how to calculate the variance after you have the expected values, but I'm lost on other questions.
My questions are:
Where do we get $f(x) = 0.5$ if x=1? F(1) = 0.5, but I can't figure out why f(1) would equal 0.5.
What is the rule for putting parts of the stepwise density function into the expected value equations? I don't know what the rule is called so I don't know how to review it. We're adding the slope at a single point to the slope over a big area, which is something I can't quite work out visually.
2. Originally Posted by Boris B
A random variable X has the cumulative distribution function
$F(x) = 0$ for $x<1$
$F(x) = \frac{x^2 - 2x + 2}{2}$ for $1 \leq x < 2$
$F(x) = 1$ for $x \geq 2$
Calculate the variance of X.
The answer specifies that the density function is
$f(x) = 0.5$ if x=1
$f(x) = x-1$ if 1<x<2
$f(x) = 0$ otherwise
Then
$E(X) = 0.5 + \int_1^2 x(x-1) dx$
$E(X^2) = 0.5 + \int_1^2 x^2(x-1) dx$
I got the f(x) = x-1 part, and I got how to calculate the variance after you have the expected values, but I'm lost on other questions.
My questions are:
Where do we get $f(x) = 0.5$ if x=1? F(1) = 0.5, but I can't figure out why f(1) would equal 0.5.
What is the rule for putting parts of the stepwise density function into the expected value equations? I don't know what the rule is called so I don't know how to review it. We're adding the slope at a single point to the slope over a big area, which is something I can't quite work out visually.
$F(x) = 0$ for $x<1$ implies that $\Pr(X < 1) = 0$.
$F(x) = \frac{x^2 - 2x + 2}{2}$ for $1 \leq x < 2$ implies that $\Pr(X \leq 1) = \frac{1^2 - 2(1) + 2}{2} = \frac{1}{2}$.
It follows that $\Pr(X = 1) = \Pr(X \leq 1) - \Pr(X < 1) = \frac{1}{2}$.
3. Originally Posted by Boris B
A random variable X has the cumulative distribution function
$F(x) = 0$ for $x<1$
$F(x) = \frac{x^2 - 2x + 2}{2}$ for $1 \leq x < 2$
$F(x) = 1$ for $x \geq 2$
Calculate the variance of X.
The answer specifies that the density function is
$f(x) = 0.5$ if x=1
$f(x) = x-1$ if 1<x<2
$f(x) = 0$ otherwise
Then
$E(X) = 0.5 + \int_1^2 x(x-1) dx$
$E(X^2) = 0.5 + \int_1^2 x^2(x-1) dx$
I got the f(x) = x-1 part, and I got how to calculate the variance after you have the expected values, but I'm lost on other questions.
My questions are:
Where do we get $f(x) = 0.5$ if x=1? F(1) = 0.5, but I can't figure out why f(1) would equal 0.5.
The $1/2$ comes from the jump discontinuity in the cumulative distribution $F(x)$ at $x=1$, indicating a probability mass of $1/2$ at that point.
Another way of thinking about this is to think of the density as a generalised function; then we may represent the density of a piecewise continuous cumulative distribution as the sum of a continuous function and delta functionals at the discontinuities, of amplitude equal to the size of the jumps.
RonL
4. Originally Posted by Boris B
A random variable X has the cumulative distribution function
$F(x) = 0$ for $x<1$
$F(x) = \frac{x^2 - 2x + 2}{2}$ for $1 \leq x < 2$
$F(x) = 1$ for $x \geq 2$
Calculate the variance of X.
The answer specifies that the density function is
$f(x) = 0.5$ if x=1
$f(x) = x-1$ if 1<x<2
$f(x) = 0$ otherwise
If this last is the given answer for the "density" then it is wrong, as as given it is not a density since:
$\int_{-\infty}^{\infty} f(x)~dx=1/2$
The density is the generalised function:
$f(x)=g(x)+(1/2)\delta(x-1)$
where
$g(x) = x-1, \ \ 1<x<2.$
$g(x) = 0, \ \ \mbox{otherwise}.$
RonL
5. Originally Posted by Boris B
What is the rule for putting parts of the stepwise density function into the expected value equations? I don't know what the rule is called so I don't know how to review it. We're adding the slope at a single point to the slope over a big area, which is something I can't quite work out visually.
If you use the generalised function approach you have a perfectly normal looking equation for the expected value. The problem arises if you represent the distribution as mixed discrete/continuous, when you have to work with the two components separately.
RonL
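As a numerical cross-check (an addition, not part of the original thread), the mixed discrete/continuous computation can be carried out directly: the point mass of 1/2 at x = 1 contributes 0.5·1 to E(X) and 0.5·1² to E(X²), while the continuous part (x − 1) is integrated over (1, 2):

```python
from scipy.integrate import quad

mass, atom = 0.5, 1.0                       # probability mass 1/2 sitting at x = 1
g = lambda x: x - 1.0                       # continuous part of the density on (1, 2)

EX  = mass * atom    + quad(lambda x: x * g(x), 1, 2)[0]
EX2 = mass * atom**2 + quad(lambda x: x**2 * g(x), 1, 2)[0]
var = EX2 - EX**2
print(EX, EX2, var)                         # E(X) = 4/3, E(X^2) = 23/12, Var(X) = 5/36
```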
|
http://physics.stackexchange.com/questions/39124/what-is-a-killing-vector-field/39130
|
# What is a Killing vector field?
I recently read a post in physics.stackexchange that used the term "Killing vector". What is a Killing vector/Killing vector field?
-
+1, I also saw it when I was looking up the surface gravity of a black hole. Good question. – ja72 Oct 5 '12 at 12:50
please could you summarise the initial research effort you made to find out what a killing vector is? – EnergyNumbers Oct 5 '12 at 12:55
Note Killing is a name associated with the concept, as a quick look at Wikipedia will inform you. – Emilio Pisanty Oct 5 '12 at 13:17
Wikipedia is generally a trustworthy source of information for scientific concepts. If you want to know about the definition of something, we do expect you to check there before posting a question here. If there is an article that directly answers your question, as in this case, it's not really a good question for this site. – David Zaslavsky♦ Oct 5 '12 at 17:22
## 3 Answers
I think https://en.wikipedia.org/wiki/Killing_vector_field answers your question pretty good:
"Killing fields are the infinitesimal generators of isometries; that is, flows generated by Killing fields are continuous isometries of the manifold. More simply, the flow generates a symmetry, in the sense that moving each point on an object the same distance in the direction of the Killing vector field will not distort distances on the object."
A Killing vector field $X$ satisfies $L_X g=0$ where $L$ is the Lie derivative, or more explicitly $\nabla_\mu X_\nu + \nabla_\nu X_\mu =0$.
So, put informally: when you move the metric $g$ a little bit along $X$ and $g$ doesn't change, $X$ is a Killing vector field.
For example the Schwarzschild metric https://en.wikipedia.org/wiki/Schwarzschild_metric has two obvious Killing vector fields $\partial_t$ and $\partial_\phi$, since $g$ is independent of $t$ and $\phi$.
Edit: On recommendation I add a nice link to a discussion of how to use Killing vector fields: see the answer of Willie Wong at Killing vector fields
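To make the defining equation concrete, here is a small symbolic check (an added illustration, not from the original answer) using the coordinate formula $(\mathcal L_X g)_{\mu\nu} = X^\alpha\partial_\alpha g_{\mu\nu} + g_{\alpha\nu}\partial_\mu X^\alpha + g_{\mu\alpha}\partial_\nu X^\alpha$, applied to the round metric on the unit 2-sphere: $\partial_\phi$ is a Killing vector field, $\partial_\theta$ is not.

```python
import sympy as sp

theta, phi = sp.symbols('theta phi')
coords = [theta, phi]
g = sp.diag(1, sp.sin(theta)**2)          # round metric on the unit 2-sphere

def lie_derivative_of_metric(X, g, coords):
    """(L_X g)_{mu nu} = X^a d_a g_{mu nu} + g_{a nu} d_mu X^a + g_{mu a} d_nu X^a"""
    n = len(coords)
    L = sp.zeros(n, n)
    for mu in range(n):
        for nu in range(n):
            term = sum(X[a] * sp.diff(g[mu, nu], coords[a]) for a in range(n))
            term += sum(g[a, nu] * sp.diff(X[a], coords[mu]) for a in range(n))
            term += sum(g[mu, a] * sp.diff(X[a], coords[nu]) for a in range(n))
            L[mu, nu] = sp.simplify(term)
    return L

print(lie_derivative_of_metric([0, 1], g, coords))   # X = d/dphi   -> zero matrix (Killing)
print(lie_derivative_of_metric([1, 0], g, coords))   # X = d/dtheta -> nonzero (not Killing)
```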
-
It should also be noted that one can use that fact to construct conserved quantities and sometimes make a system integrable. In the case of the Schwarzschild Metric, those killing fields relate to conservation of energy and angular momentum (respectively). – Benjamin Horowitz Feb 12 at 18:36
Another definition is;
If $V$ is a vector field whose flow $\phi$ is a one parameter group of isometries, then $V$ is called a Killing vector field (or just a Killing vector).
$V$ is a Killing vector if and only if $L_Vg=0$, where $L_V$ denotes the Lie derivative along $V$.
Here I am giving you a good paper for reference: http://www.physics.ohio-state.edu/~mathur/grnotes2.pdf
-
If any set of points is displaced by $x^i dx_i$ where all distance relationships are unchanged (i.e., there is an isometry), then the vector field is called a Killing vector.
|
http://physics.stackexchange.com/questions/26950/extensions-of-dhr-superselection-theory-to-long-range-forces
|
# Extensions of DHR superselection theory to long range forces
For Haag-Kastler nets $M(O)$ of von-Neumann algebras $M$ indexed by open bounded subsets $O$ of the Minkowski space in AQFT (algebraic quantum field theory) the DHR (Doplicher-Haag-Roberts) superselection theory treats representations that are "localizable" in the following sense.
The $C^*-$algebra
$$\mathcal{A} := clo_{\| \cdot \|} \bigl( \bigcup_{\mathcal{O}}\mathcal{M}(\mathcal{O}) \bigr)$$
is called the quasi-local algebra of the given net.
For a vacuum representation $\pi_0$, a representation $\pi$ of the local algebra $\mathcal{A}$ is called (DHR) admissible if $\pi | \mathcal{A}(\mathcal{K}^{\perp})$ is unitarily equivalent to $\pi_0 | \mathcal{A}(\mathcal{K}^{\perp})$ for all double cones $K$.
Here, $\mathcal{K}^{\perp}$ denotes the causal complement of a subset of the Minkowski space.
The DHR condition says that all expectation values (of all observables) should approach the vacuum expectation values, uniformly, when the region of measurement is moved away from the origin.
The DHR condition therefore excludes long range forces like electromagnetism from consideration, because, by Stokes' theorem, the electric charge in a finite region can be measured by the flux of the field strength through a sphere of arbitrary large radius.
In his recent talk
• Sergio Doplicher: "Superselection structure in Local Quantum Theories with (neutral) massless particle"
at the conference Modern Trends in AQFT, it would seem that Sergio Doplicher announced an extension of superselection theory to long range forces like electromagnetism, which has yet to be published.
I am interested in any references to or explanations of this work, or similar extensions of superselection theory in AQFT to long range forces. (And of course also in all corrections to the characterization of DHR superselection theory I wrote here.)
And also in a heads up when Doplicher and coworkers publish their result.
-
## 2 Answers
Now I wish I had paid more attention while I was sitting in the audience. :-) Unfortunately, I'm not familiar enough with the original DHR analysis to have retained more than just the broad outlines of the arguments anyway.
With that disclaimer, I do (imperfectly) recall some apparently important points. Doplicher drew attention to the parallels between their new analysis and the analysis of Buchholz and Fredenhagen (CMP 84 1, doi), which relied only on spacelike wedges for a notion of localization, rather than the double diamonds of DHR. Starting from wedges, localization properties can be refined to spacelike cones and, under fortuitous circumstances, to arbitrarily small bounded regions. On the other hand, the new analysis makes use of localization in future pointed light cones. The analogs of spacelike cones are now played by hyperbolic cones, which are thickenings of cones defined on 3-hyperbolids asymptotic to the given light cone. I'm afraid I cannot be more specific, but this notion seems to have come up independently in hyperbolic 3-geometry.
As to the results, I recall that they are very similar to the results of the previous DHR or BF analyses. In particular, no exotic statistics appear and only the standard (para)bose and (para)fermi cases are possible. I can't recall any result that is different from the previous analyses (though that could be just my memory).
-
Well, that's a good start :-) But I'll leave the question open for now. – Tim van Beek Sep 28 '11 at 9:49
Slides by Buchholz on this project are available here: http://www.univie.ac.at/qft-lhc/?page_id=10
-
Thanks for the tip! I am curious about Buchholz's conclusion "Origin of infrared difficulties can be traced back to unreasonable idealization of observations covering all of Minkowski space". I remember asking about this idealization in all QFT calculations in an introductory class, but had no idea that this would reappear in such a context :-) – Tim van Beek Oct 7 '11 at 13:54
I think the key physical insights are that 1) for an observer the relevant part of Minkowski spacetime where he can perform measurements (observables) is its causal future (future lightcone $V_+$) and 2) photons from the past lightcone $V_-$ cannot enter $V_+$ which provides, in a sense, a geometric infrared cutoff. But of course, if you want to compare measurements for different observers, you need to consider all of Minkowski space. – Eric Oct 10 '11 at 7:21
|
http://mathoverflow.net/questions/44838?sort=oldest
|
A simple problem in markov chains
I'm trying to understand a 1954 paper of Kubo entitled "Note on the stochastic theory of resonance absorption". The specific problem can be stated mathematically as follows: let $X(t)$ be a random process taking $n$ positive real values $\{\omega_1,\ldots,\omega_n\}$. Suppose that $X(t)$ is Markov and its probability transitions $P_{ij}(t) := P(X(t)= \omega_j | X(0) = \omega_i)$ satisfy
$P'_{jk}(t)= -c_kP_{jk}(t)+\sum_m P_{jm}(t)c_m P_{mk}$,
where $p_{ii}=0$ and $\sum_j p_{ij} = 0$.
We want to find the expectation value of $M(t):=\exp(i\int_0^t X(t')dt')$ in terms of parameters $\omega_i$, $c_i$ and $p_{ij}$.
Kubo's strategy seems to be to condition on $X(0) = \omega_i \wedge X(t) = \omega_j$ so he can find a link to the $P_{ij}(t)$, but I don't understand this too much... specifically he introduces a function $Q_{ij}(t)$ which is the average of $M(t)$ on the condition that the process is in $\omega_i$ at time $t=0$ and is found in the state $\omega_k$ at time $t$. He concludes that
$Q'_{jk}(t) = (i\omega_k-c_k)Q_{jk} (t) + \sum_m Q _{jm}(t)c_m p_{mk}$
I can't figure out how to get to this conclusion.
-
There should be an answer! The problem is so easily formulated: given the Markov process $X(t)$ taking $n$ positive real values, find the expectation value of $M(t):=exp(i\int_0^t X(t') dt')$. – The man in the box Nov 9 2010 at 9:06
Does the answer below correspond to what you were asking for? – Didier Piau Apr 11 2011 at 17:10
1 Answer
Indeed, these formulas are standard. Their derivation in a slighly more general setting than yours is as follows.
For every $t\ge0$, let $$M_t=\displaystyle\exp\left(\int_0^tv(X_s)\mathrm{d}s\right),$$ for a given function $v$ defined on the state space of the process $(X_t)$. (In your setting, every $X_t$ is real valued and $v(x)=\mathrm{i}x$ for every $x$ but these details are irrelevant.) For every states $x$ and $y$, let $$A_t(y)=[X_t=y],\qquad Q_t(x,y)=E(M_t1_{A_t(y)}\vert X_0=x).$$ Let $Q_t$ denote the associated square matrix (indexed by the state space, possibly infinite). For instance, $Q_0$ is the identity matrix. Note also that in the expression of $Q_t(x,y)$, $[X_0=x]$ appears as a conditioning while $A_t(y)=[X_t=y]$ is the event to which the expectation is restricted and that these are different operations hence your interpretation of Kubo's method should be rephrased.
The dynamics of $(Q_t)$ is driven by a linear differential equation $Q'_t=GQ_t$, where $G$ is a deformation of the infinitesimal generator of the process $(X_t)$. To identify $G$, one can compute $Q_{t+s}$ at the order $s$, for $s > 0$, when $s$ is small.
To do so, call $r(x,y)$ the transition rate of $(X_t)$ from $x$ to $y\ne x$, and $c(x)$ the sum over $y\ne x$ of $r(x,y)$. (In Kubo's setting as reproduced in your post, $c(x)$ is your $c_x$ and $r(x,y)$ is your $c_xp_{xy}$. By the way, the sum over $y\ne x$ of your $p_{xy}$ should be $1$ instead of $0$ and you should make up your mind between the notations $p_{xy}$ and $P_{xy}$.)
Then, conditioning on $[X_0=x]$, one can decompose the expectation which defines $Q_{t+s}(x,y)$ along the values of $X_s$. This decomposition goes as follows. For every $z\ne x$, $X_s=z$ with probability $r(x,z)s+o(s)$, and $X_s=x$ with probability $1-c(x)s+o(s)$. Furthermore, for every $z\ne x$, $M_{t+s}=(1+o(1))M_t$ on $[X_0=x,X_s=z]$. And on $[X_0=X_s=x]$, the probability of a double transition in the time interval $[0,s]$ is $o(s)$, hence $M_{t+s}=(1+v(x)s+o(s))M_{t+s}/M_s$ where $M_{t+s}/M_s$ is distributed like $M_t$ conditional on $[X_0=x]$.
All this leads to $$Q_{t+s}(x,y)=Q_t(x,y)(1+v(x)s)(1-c(x)s)+\sum_zQ_t(z,y)r(x,z)s+o(s).$$ When $s\to0$, one gets $$Q'_t(x,y)=(v(x)-c(x))Q_t(x,y)+\sum_zr(x,z)Q_t(z,y).$$ In other words, $G(x,x)=v(x)-c(x)$ for every $x$ and $G(x,y)=r(x,y)$ for every $y\ne x$. These are the equations in Kubo's paper.
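As a sanity check on these formulas (an added illustration, not part of the original answer), one can compare the matrix-exponential solution $Q_t=e^{tG}$ of $Q_t'=GQ_t$ with a direct Monte Carlo simulation of the chain; the parameters below are an arbitrary two-state example:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)

# hypothetical two-state example: values omega, jump rates c, jump matrix (rows sum to 1)
omega = np.array([1.0, 3.0])
c = np.array([0.7, 1.3])
jump = np.array([[0.0, 1.0],
                 [1.0, 0.0]])

# deformed generator from the answer: G[x,x] = i*omega[x] - c[x],  G[x,y] = c[x]*jump[x,y]
G = np.diag(1j * omega - c) + np.diag(c) @ jump

T, x0 = 2.0, 0
analytic = expm(T * G)[x0].sum()          # E[M_T | X_0 = x0] = sum_y Q_T(x0, y)

def sample(x0, T):
    """Simulate one path of the chain and return exp(i * integral of omega along it)."""
    x, t, phase = x0, 0.0, 0.0
    while True:
        dt = rng.exponential(1.0 / c[x])
        if t + dt >= T:
            phase += omega[x] * (T - t)
            return np.exp(1j * phase)
        phase += omega[x] * dt
        t += dt
        x = rng.choice(2, p=jump[x])

mc = np.mean([sample(x0, T) for _ in range(100_000)])
print("matrix exponential:", analytic)
print("Monte Carlo       :", mc)
```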
-
|
http://crypto.stackexchange.com/questions/tagged/diffie-hellman+dsa
|
# Tagged Questions
1 answer · 118 views
### Request for 1024-bit primes $p$ , subgroup $q$ and subgroup generator $g$
I need to find a prime $p$ of $1024$ bits with a $160$ bit sub group size $q$, such that $q|p-1$ , and $g$ is the generator of the sub group size $q$. I'm looking for the numeric values of $p$ , $q$ ...
2 answers · 201 views
### Besides key and ciphertext sizes what are other advantages of elliptic curve versions of various protocols?
There are elliptic curve variants of Diffie-Hellman, ElGamal, DSA and possibly other protocols/algorithms. I know that these elliptic curve variants have smaller key and ciphertext sizes which will ...
|
http://mathhelpforum.com/pre-calculus/147493-partial-fractions.html
|
# Thread:
1. ## partial fractions
how do I get the answer shown in the attached problem? when I tried it out, I couldn't get the numerator to 1.
Thanks!
2. Remember that:
$a^2 - b^2 = (a+b)(a-b)$
Can you do it now?
3. Originally Posted by calculus0
how do I get the answer shown in the attached problem? when I tried it out, I couldn't get the numerator to 1.
Thanks!
It will be easier to address your specific trouble if you show all of your working.
4. We write
$\frac{2}{1-x^2}=\frac{2}{(1-x)(1+x)}=\frac{A}{1-x}+\frac{B}{1+x}_{.}$
With a bit of algebraic manipulation of the above equality (actually an identity), we can find out that
$A = B = 1$. (There are several ways to do this. One of them is writing the right hand side as a single fraction, and we get the equations by equating the resulting numerator with 2.)
So we have
$-1+\frac{2}{1-x^2} = -1 + \frac{2}{(1-x)(1+x)} = -1 +\frac{1}{1-x}+\frac{1}{1+x}_{.}$
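For readers who want to check such decompositions mechanically, SymPy's `apart` reproduces the result (a small added illustration, not part of the thread):

```python
import sympy as sp

x = sp.symbols('x')
print(sp.apart(2 / (1 - x**2), x))        # equivalent to 1/(1-x) + 1/(1+x)
print(sp.apart(-1 + 2 / (1 - x**2), x))   # the full expression from the last line above
```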
|
http://math.stackexchange.com/questions/263492/can-you-explain-this-card-trick?answertab=votes
|
# Can you explain this card trick?
Can you explain the card trick that is explained here?
Edit: Here's a summary of the trick as explained in the video:
Start by asking a spectator to pick any three cards they like out of a standard 52-card deck, without showing them to you, and write them down (to make sure they won't forget them). (Ed. note: You can shuffle the deck if you like, or even let the spectator shuffle it, but you don't have to.)
Divide the remaining cards into four piles, so that first pile will have 10 cards, the second and third piles will have 15 cards each, and fourth pile, set aside, will have the remaining 9 cards.
Now, tell the spectator to put the first card they picked on top of the first pile, then cut the second pile anywhere they want and put the top half on top of the first pile (and the card they picked). Then they should put the second card they picked on top of what remains of the second pile, cut the third pile anywhere they want and put the top half on top of the second pile, and finally put the third card they picked on what remains of the third pile and place the entire fourth pile on top of it. (Ed. note: You could have the spectator cut the fourth pile too, if you wanted; it shouldn't matter as long as all nine cards of it eventually end up on top of the third pile.)
Now collect the three piles of cards together so that pile #3 ends up on top of the deck, pile #2 in between and pile #1 on the bottom. Next, take four cards off the top of the deck and place them on the bottom. Deal the cards from the top of the deck alternatingly into two piles, the first pile face up and the second pile face down. Tell the spectator in advance to say "stop" if they see any of their cards in the face-up pile (which they won't).
Once you've dealt out the entire deck, set the face-up pile aside, pick up the face-down pile and repeat the process, dealing it into two smaller piles, the first pile face up and the other face down. Again, tell the spectator to say "stop" if they see any of their three cards in the face-up pile — they won't. Keep repeating this process until you're down to just three face-down cards. Show those cards to the spectator; they'll be exactly the ones they picked and wrote down.
-
I disagree with closing this question. It would be much better to make the question self-contained, by describing the trick in detail, so people didn't have to follow the link. But it is a real question with a mathematical answer, as Marvis has shown. – Ross Millikan Dec 22 '12 at 4:17
It was such an involved trick that I thought the video would be the most efficient way to explain all that is going on. – stan Dec 22 '12 at 4:30
If you want to type it out, have at it. Otherwise, close it. – stan Dec 22 '12 at 4:56
Come on! This question was asked only 8 hours ago, but the external content is not accessible. This is really bad! Present the card trick here.. – stefan Dec 22 '12 at 9:59
Link still working for me. – stan Dec 22 '12 at 16:31
## 3 Answers
As Ross rightly points out, the cuts are just to seemingly make it random though it doesn't affect anything. Irrespective of the cuts, note that there are $15$ cards between the first card the contestant places and the second card the contestant places. Similarly, irrespective of the cuts, note that there are $15$ cards between the second card the contestant places and the third card the contestant places.
Let us label the cards as follows. Let $a_k^{j}$ be the $k^{th}$ card from top in the hand of the performer after the $j^{th}$ up-down phase. Initially, i.e. after the contestant places the cards and before the first up-down starts, $j=0$.
Now let the cards in the last pile, i.e. the pile containing $9$ cards (the only pile on which the contestant doesn't place any card), be $a_1^{0}, a_2^{0}, \ldots, a_9^{0}$ starting from the top most card. Let the third card the contestant places on the third pile be $a_{10}^{0}$. Then there are $15$ cards followed by the second card the contestant places on the second pile. Accounting for the $15$ cards in between, the second card is $a_{26}^{0}$. Now there are $15$ cards followed by the first card the contestant places on the first pile. Accounting for the $15$ cards in between, the first card is $a_{42}^{0}$. Hence, now the contestant cards are $\color{red}{a_{10}^{0}, a_{26}^{0}, a_{42}^{0}}$.
Now the performer moves $4$ cards to the rear. Hence, now the contestant cards are $a_6^{0}, a_{22}^{0}$ and $a_{38}^{0}$.
$$\color{red}{\{a_{10}^{0}, a_{26}^{0}, a_{42}^{0}\} \to \{a_{6}^{0}, a_{22}^{0}, a_{38}^{0}\}}$$
Now in the first up-down phase all the odd number cards are eliminated i.e. $a_{2k-1}^{0}$ gets eliminated. However, on the pile with the cards closed, the order has reversed i.e. $a_2^{0}$ is the bottom most card, followed by $a_4^{0}$ and so on and the top-most card is $a_{52}^{0}$. Now reordering the card so that the topmost card is now $a_1^{1}$, we find that the card $a_{2k}^{0}$ gets mapped to $a_{27-k}^{1}$. Hence, the contestant cards are now at $a_{24}^{1}, a_{16}^{1}$ and $a_8^{1}$. $$\color{red}{\{a_{6}^{0}, a_{22}^{0}, a_{38}^{0}\} \to \{a_{24}^{1}, a_{16}^{1}, a_8^{1}\}}$$ There are now $26$ cards left.
Now in the second up-down phase all the odd number cards are eliminated i.e. $a_{2k-1}^{1}$ gets eliminated. As before, on the pile with the cards closed, the order has reversed i.e. $a_2^{1}$ is the bottom most card, followed by $a_4^{1}$ and so on and the top-most card is $a_{26}^{1}$. Now reordering the card so that the topmost card is now $a_1^{2}$, we find that the card $a_{2k}^{1}$ gets mapped to $a_{14-k}^{2}$. Hence, the contestant cards are now at $a_{2}^{2}, a_{6}^{2}$ and $a_{10}^{2}$. $$\color{red}{\{a_{24}^{1}, a_{16}^{1}, a_8^{1}\} \to \{a_{2}^{2}, a_{6}^{2}, a_{10}^{2}\}}$$
There are now $13$ cards left.
Now in the third up-down phase all the odd number cards are eliminated i.e. $a_{2k-1}^{2}$ gets eliminated. As before, on the pile with the cards closed, the order has reversed i.e. $a_2^{2}$ is the bottom most card, followed by $a_4^{2}$ and so on and the top-most card is $a_{12}^{2}$. Now reordering the card so that the topmost card is now $a_1^{3}$, we find that the card $a_{2k}^{2}$ gets mapped to $a_{7-k}^{3}$. Hence, the contestant cards are now at $a_{6}^{3}, a_{4}^{3}$ and $a_{2}^{3}$. $$\color{red}{\{a_{2}^{2}, a_{6}^{2}, a_{10}^{2}\} \to \{a_6^3,a_4^3, a_2^3\}}$$ There are now $6$ cards left.
Hence, the last up-down has the open cards as $a_1^{3}$, $a_3^{3}$ and $a_5^{3}$; the closed cards being $a_2^{3}$, $a_4^{3}$ and $a_6^{3}$, which are precisely the contestant cards.
EDIT
Below is an attempt to explain this pictorially. The document was created using $\LaTeX$ and below is a screenshot.
-
+1. Well done.. – Ross Millikan Dec 22 '12 at 5:20
+1 Now I call that "making an effort"...wow! – DonAntonio Dec 22 '12 at 10:38
If you follow carefully, the cuts are an illusion. Before you start dealing the cards up and down you have a deck with 15 cards (including the five moved from the final 9), the spectator's card, 15 more, the spectator's card, 15 more, the spectator's card, and 4. The cards face up are all the ones in odd positions, but the spectator's cards are in places 16,32,48, so they don't show up. The even cards are now in their original positions divided by 2, so the spectator's cards are at 8, 16, 24. Each deal takes out the odd cards. Four deals leave you with just the spectator cards.
-
@stan: I had missed the reversal that Marvis points out. +1 to him. – Ross Millikan Dec 22 '12 at 3:47
As mentioned, the key is to realize the special cards are sitting in positions 15, 31, and 47 (where the bottom card is 1 and the top card is 52).
When the up/downs begin, keeping in mind the inherent order reversal that occurs when we pick up the down pile and start the next iteration, the positioning of the cards just goes like this:
So, at the 4th iteration, the only cards in the down pile are the ones in positions 15, 31, 47---the special cards!
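If you prefer a computational check, the whole trick is easy to simulate. The sketch below (an added illustration; the card labels "A", "B", "C" and the random cut points are arbitrary) follows the steps exactly, random cuts included, and the three chosen cards always end up as the last face-down cards:

```python
import random

def run_trick(seed=None):
    rng = random.Random(seed)
    chosen = ["A", "B", "C"]
    others = list(range(49))                    # the 49 cards that were not picked
    rng.shuffle(others)

    # the four piles; index 0 is the top of each pile
    p1, p2, p3, p4 = others[:10], others[10:25], others[25:40], others[40:]

    # first chosen card on pile 1, then a cut of pile 2 goes on top of it
    p1 = [chosen[0]] + p1
    k = rng.randint(1, len(p2) - 1)
    p1, p2 = p2[:k] + p1, p2[k:]

    # second chosen card on pile 2, then a cut of pile 3 goes on top of it
    p2 = [chosen[1]] + p2
    k = rng.randint(1, len(p3) - 1)
    p2, p3 = p3[:k] + p2, p3[k:]

    # third chosen card on pile 3, then the whole of pile 4 goes on top of it
    p3 = p4 + [chosen[2]] + p3

    # collect: pile 3 on top, pile 2 in the middle, pile 1 at the bottom,
    # then move four cards from the top to the bottom
    deck = p3 + p2 + p1
    deck = deck[4:] + deck[:4]

    # deal alternately face up / face down, keep the face-down pile;
    # picking that pile up reverses its order (the last card dealt ends up on top)
    while len(deck) > 3:
        deck = deck[1::2][::-1]
    return sorted(deck, key=str)

print(all(run_trick(s) == ["A", "B", "C"] for s in range(1000)))   # True
```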
-
|
http://www.physicsforums.com/showpost.php?p=3796410&postcount=5
|
Quote by faen Yes I thought about this too. But the result seems strange to me, because it now predicts that time goes slower in the rest frame and not in the moving frame, just because we changed the location of the experiment. For example if we had done the experiment in frame A instead of B, t and t' would change place in the equation predicting opposite results, as such: $$t'=\frac{t}{\sqrt{1-\frac{v^2}{c^2}}}.$$. How is this possible?
According to Special Relativity, time does not go slower in any rest frame you choose, it only goes slower for objects/observers/clocks that are moving in any rest frame you choose. It's no different than saying that the rocket is moving in the ground's rest frame and the ground is moving in the rocket's rest frame. How is that possible?
But it does help to consider that in the ground's rest frame, the clock on the rocket is ticking slower and it is moving away from the clock on the ground. So between each tick of the rocket clock, as it observes the ground clock, it has moved farther away and so the image of the ground clock's ticks have farther to travel resulting in a longer time compared to its own ticks. It turns out that the ratio of the ground clock's tick interval compared to the rocket clock's tick interval as determined by the rocket is the same as the ratio of the rocket clock's tick interval compared to the ground clock's tick as determined by the ground. Work it out, you'll see that this is true.
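To "work it out" numerically (an added illustration; the speed β = 0.6 is an arbitrary choice): each observer sees the other clock's tick images separated by the time-dilated interval plus the extra light-travel time accumulated between ticks, and both computations give the same ratio, the relativistic Doppler k-factor.

```python
import numpy as np

beta = 0.6                              # relative speed as a fraction of c
gamma = 1.0 / np.sqrt(1.0 - beta**2)
tau = 1.0                               # proper interval between ticks of either clock

# Ground watching the rocket clock: each rocket tick takes gamma*tau of ground time,
# and the rocket recedes by beta*gamma*tau light-seconds in between, so successive
# tick images arrive this far apart:
ground_sees_rocket = gamma * tau + beta * gamma * tau

# The rocket performs the identical computation for the ground clock:
rocket_sees_ground = gamma * tau + beta * gamma * tau

# Both ratios equal the relativistic Doppler k-factor:
k = np.sqrt((1 + beta) / (1 - beta))
print(ground_sees_rocket, rocket_sees_ground, k * tau)   # all three agree (2.0 for beta = 0.6)
```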
|
http://mathhelpforum.com/advanced-statistics/142719-characteristic-functions.html
|
# Thread:
1. ## Characteristic functions
Given that $E(e^{itX}) = e^{-|t|}$ (the characteristic function of a Cauchy rv), use this fact to find the pdf of $\bar{X}$ for all $n$.
I have no idea how to do this, and it was on the test. I got 4/25 points, and the professor doesn't give the correct answers.
I just started computing moments, and trying to integrate stuff, but he just wrote NO! on my paper. We have never done this in class or in the book, not making excuses, but I would just like to know how to do it in case it comes up again on the final.
Thank you.
2. $\phi_{\bar{X}}(t) = E(e^{it\frac{X_{1}+X_{2}+...+X_{n}}{n}})$
assuming independent observations
$= E(e^{it\frac{X_{1}}{n}})E(e^{it\frac{X_{2}}{n}}) \cdot \cdot \cdot E(e^{it\frac{X_{n}}{n}}) = \phi_{X} \Big(\frac{t}{n}\Big)^{n}$ since all n random variables follow Cauchy's distribution
$= (e^{-|\frac{t}{n}|})^{n} = e^{-|t|}$
so the sample mean follows the same Cauchy distribution
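A quick numerical check of this conclusion (an added illustration, not part of the thread): draw many sample means of $n$ standard Cauchy variables and compare their empirical characteristic function with $e^{-|t|}$:

```python
import numpy as np

rng = np.random.default_rng(0)
n, reps = 50, 100_000
xbar = rng.standard_cauchy((reps, n)).mean(axis=1)   # sample means of n Cauchy draws

for t in (0.5, 1.0, 2.0):
    empirical = np.exp(1j * t * xbar).mean().real    # empirical characteristic function
    print(t, empirical, np.exp(-abs(t)))             # the two columns agree closely
```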
|
http://mathoverflow.net/questions/22562/convex-hull-of-finite-set-is-compact
|
## convex hull of finite set is compact [closed]
In a Banach space, is the convex hull of finite set compact?
-
The convex hull of a set of $n$ points is the image of the $n$-simplex in $\mathbb R^n$ under a continuous function, so yes. – Mariano Suárez-Alvarez Apr 26 2010 at 5:46
Mariano: do you want to write this as an answer (community wiki if you wish) so I can vote it up? I think it's slightly more to the point than Pete's answer (no offence, Pete!) which works fine but I think is slightly over-elaborate. To be fair, the same idea underlies both. – Yemon Choi Apr 26 2010 at 8:05
I am also not keen on questions which give little to no indication of (a) why the questioner wants to know (b) what they've tried doing (c) what level they are at. – Yemon Choi Apr 26 2010 at 8:06
@Yemon: I agree so much that I deleted my answer. – Pete L. Clark Apr 26 2010 at 16:58
## 5 Answers
Suppose $X$ is your Banach space and let $\{x_1,\dots,x_n\}$ be a finite subset of $X$. Let $$S=\{(t_1,\dots,t_n)\in\mathbb R^n:t_1,\dots,t_n\geq0,\,t_1+\cdots+t_n=1\}$$ be the standard simplex in $\mathbb R^n$. The map $$\phi:(t_1,\dots,t_n)\in S\mapsto t_1x_1+\cdots+t_nx_n\in X$$ is evidently continuous and its image is $\mathrm{conv}\{x_1,\dots,x_n\}$. Since $S$ is compact, so is $\mathrm{conv}\{x_1,\dots,x_n\}$.
-
Just to make perfectly clear what is used in the proof: $\phi$ is continuous because $X$ is a topological vector space and $S$ is compact as $X$ is a separated space. – Torsten Ekedahl Apr 26 2010 at 15:46
$S$ is compact because it is closed and bounded in $\mathbb R^n$, rather! :) – Mariano Suárez-Alvarez Apr 26 2010 at 15:48
Sorry, misread I thought $S$ was the image. – Torsten Ekedahl Apr 26 2010 at 16:48
Of course yes: the n points lie in the finite-dimensional linear subspace generated by themselves (remember that any norm, when restricted to a finite-dimensional linear subspace, gives rise to the same topology on that space).
-
Actually, the convex hull of a sequence of points $(x_n)$ is (relatively) compact when $x_n\rightarrow 0$, and this easily gives a positive answer to your question (but is somewhat overkill). In fact, a closed convex set K in a Banach space is compact if and only if it's contained in the closed convex hull of a sequence $(x_n)$ with $x_n\rightarrow 0$. See, for example, Lindenstrauss and Tzafriri, vol I, Proposition 1.e.2.
-
Actually, the statement in the question holds for all topological vector spaces, not just Banach spaces. The generalisation here does not apply so widely. I think you need local convexity. – George Lowther Nov 4 2011 at 12:21
Just apply Induction for finite dim spaces with Dim (n points).
-
Sorry, but I did not answer the question as well as another one, ever. I do not understand why this "Just apply Induction for finite dim spaces with Dim (n points)." appeared as my answer... Oleg Reinov
-
You should contact the moderators if there is an identity problem. – S. Carnahan♦ Nov 5 2011 at 1:46
|
http://mathhelpforum.com/differential-geometry/142210-measure-theory.html
|
Thread:
1. Measure Theory
Quick question: Does the sigma-algebra generated by a countable number of sets have a countable number of elements?
2. It can be shown that any infinite sigma algebra has cardinality greater than or equal to that of the continuum.
3. Thanks, but how? I tried proving that you can't have a bijection from the natural numbers to the sigma-algebra, with no sucess...
4. Given a countably infinite $\sigma$-algebra $\Sigma$ over an infinite set X, let $\{E_n\}_{n\in\mathbb{N}}$ be an enumeration of $\Sigma$. Wlog, we can take these to be disjoint. Then, for every binary sequence, we may produce another set in $\Sigma$. Namely, consider a sequence $b=b_1b_2b_3\ldots$, and let $F_b=\bigcup_{b_i=1}E_i$.
5. Originally Posted by Tikoloshe
Given a countably infinite $\sigma$-algebra $\Sigma$ over an infinite set X, let $\{E_n\}_{n\in\mathbb{N}}$ be an enumeration of $\Sigma$. Wlog, we can take these to be disjoint. Then, for every binary sequence, we may produce another set in $\Sigma$. Namely, consider a sequence $b=b_1b_2b_3\ldots$, and let $F_b=\bigcup_{b_i=1}E_i$.
This is not immediate. I don't think it works quite that simply, there is no guarantee that two sequences will give different sets.
You probably need to use the axiom of choice. Let X be a set and $\mathcal{F}$ an infinite sigma algebra over X. Then define the equivalence relation ~ by x~y if and only if, for every $A \in \mathcal{F}$, $x \in A \iff y \in A$. Then each equivalence class E(x) is measurable, as it is the countable intersection of sets in the sigma-algebra containing x.
Let I be the set of representatives from each equivalence class (Axiom of Choice here). Then $X= \bigcup_{x \in I} E(x)$ is a disjoint union. Hence if we suppose that $\mathcal{F}$ is countable then I must either be finite or countable. Thus we have an injection from $\{0,1\}^I$ to $\mathcal{F}$ as we can pick an element of $\mathcal{F}$ by taking $\bigcup_{x\in S \subset I}E(x)$ (as they are equivalence classes, you do have a guarantee of getting distinct sets).
This is impossible as $|\{0,1\}^I|> |I|$ if I is countable or finite.
6. Originally Posted by Focus
This is not immediate. I don't think it works quite that simply, there is no guarantee that two sequences will give different sets.
I don’t understand your objection. If $\{E_i\}_{i\in\mathbb{N}}$ is a countable sequence of disjoint, nonempty sets, then using different index sets, the unions over the $E_i$ will be distinct sets. I.e., if $A,B\subset\mathbb{N}$ where $A\neq B$, then $\bigcup_{i\in A}E_i\neq\bigcup_{i\in B}E_i$. Given an arbitrary countable sequence in a $\sigma$-algebra, you can of course make a new sequence of pairwise disjoint, nonempty sets (without AC).
Originally Posted by Focus
You probably need to use the axiom of choice.
I don’t think AC is necessary for the proof (c.f. mine above). Also, here is another proof which takes a different approach.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 29, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9290536642074585, "perplexity_flag": "head"}
|
http://math.stackexchange.com/questions/99462/alternative-solutions-to-a-probability-problem/99514
|
# Alternative solutions to a probability problem
Some fellow students ran a contest this past summer. I had stopped following it by the time this problem came out. I looked back through some of the older problems, saw this and thought I'd try to solve it.
Let $S$ be an infinite sequence of random variables, all $0$ or $1$, such that the probability of $S_n$ being $0$ is $1/n^2$. Let $p$ be the expected value of the probability that the product of the elements of a randomly chosen subsequence of $S$ is nonzero.
Regarding how subsequences are chosen, keep in mind this problem was written by high schoolers like me (so no measure theory or the like). I don't know much about infinite discrete probability spaces, so I just approached the problem as if it were asking for the limit of the value for the finite case with a uniform distribution. They seem to have interpreted it the same way. If another interpretation makes sense and is interesting, by all means, use it.
The value I obtained for $p$ was $$\frac{1}{\Gamma\left(1 + \frac{1}{\sqrt{2}} \right) \Gamma\left( 1 - \frac{1}{\sqrt{2}} \right)} = \frac{\sqrt{2} \sin\left( \frac{\pi}{\sqrt{2}} \right)}{\pi}$$ which the problem writer agreed with. My solution is not very insightful, however, and the problem writer only remembered the answer, not the apparently quite slick solution. Another one of the problem writers mentioned the Borel–Cantelli lemma, but while the given example is very similar (and probably the inspiration for the problem), it doesn't seem close enough to really help. I'd like to know if anyone can come up with an easier solution than the following (which is not very slick).
Write $$p_n = \frac{\sum_a \prod_i \left( 1 - \frac{1}{a_i^2} \right)}{2^n}$$ where the sum is over all subsequences of the first $n$ random variables, and $$p = \lim_{n\to\infty} p_n$$
Then the numerator can be rewritten as $$\prod_i \left( 1 + \left(1 - \frac{1}{i^2} \right) \right) = \prod_i \left( 2 - \frac{1}{i^2} \right) = 2^n \prod_i \left( 1 - \frac{1}{\sqrt{2} i} \right) \left( 1 + \frac{1}{\sqrt{2} i} \right)$$
Using the same manipulations as those discussed here, we get $$\frac{2^n \Gamma\left( n + 1 - \frac{1}{\sqrt{2}} \right) \Gamma\left( n + 1 + \frac{1}{\sqrt{2}} \right)}{\Gamma\left( 1 + \frac{1}{\sqrt{2}} \right) \Gamma\left( 1 - \frac{1}{\sqrt{2}} \right) \Gamma(1 + n)^2}$$ Dividing by $2^n$ and taking the limit as $n \to \infty$ gives the desired result.
So, is there a better, less algebraic and more probabilistic solution to this problem?
-
Could you make the question self-contained, instead of referring to some content outside of the site? – Did Jan 16 '12 at 6:32
What does a randomly chosen subset of $S$ mean? First, $S$ is a sequence, not a set. Second, if we interpret this as the set of values $S_n$ for $n$ in a randomly chosen subset $K\subseteq\mathbb N$, how does one choose randomly $K$? // Likewise: what is $a_i$? – Did Jan 16 '12 at 6:57
$a$ refers to a sequence of indices of the random variables, e. g., $(1, 2, 3)$ corresponds to the subsequence $(S_1, S_2, S_3)$. – Nick Haliday Jan 16 '12 at 7:22
A full new paragraph in the edited version explains how to choose $K$. (Note that the request to avoid infinite discrete probability spaces may seem odd regarding a problem about almost surely infinite subsets of $\mathbb N$.) – Did Jan 16 '12 at 7:38
I said I don't know much about them. If that approach is simpler, than feel free to use it, even if it yields a different answer for some reason. If there's a better, more formal way of defining this problem, I'd love to hear it. – Nick Haliday Jan 16 '12 at 7:51
## 1 Answer
One should add the hypothesis that the random variables $S_n$ are independent and that the choice of a random subset is independent of them.
Formal setting
One is given on the one hand an infinite sequence $(A_n)_{n\geqslant1}$ of independent events such that $\mathrm P(A_n)=p_n$ with $p_n=1-\frac1{n^2}$, and on the other hand an infinite sequence $(Y_n)_{n\geqslant1}$ of independent Bernoulli random variables such that $\mathrm P(Y_n=0)=\mathrm P(Y_n=1)=\frac12$. The sequences $(A_n)_{n\geqslant1}$ and $(Y_n)_{n\geqslant1}$ are independent.
Here is why this models the situation you have in mind. The sequence $(A_n)_{n\geqslant1}$ models the random variables $(X_n)_{n\geqslant1}$ through the relation $A_n=[X_n=1]$. The sequence $(Y_n)_{n\geqslant1}$ defines a random subset $N=\{n\in\mathbb N\mid Y_n=1\}\subseteq\mathbb N$ and yields at once all the finitary representations used in your post because, for every fixed $n\geqslant1$, the random set $N\cap\{1,2,\ldots,n\}$ is uniformly distributed on the $2^n$ subsets of $\{1,2,\ldots,n\}$. To see this, note that for every $B\subseteq\{1,2,\ldots,n\}$, $$\mathrm P(N\cap\{1,2,\ldots,n\}=B)=\prod\limits_{k\in B}\mathrm P(k\in N)\cdot\prod\limits_{k\leqslant n,\ k\notin B}\mathrm P(k\notin N),$$ which is $$\mathrm P(N\cap\{1,2,\ldots,n\}=B)=\prod\limits_{k\in B}\mathrm P(Y_k=1)\cdot\prod\limits_{k\leqslant n,\ k\notin B}\mathrm P(Y_k=0)=\frac1{2^n}.$$ Now, one asks for the probability of the event $$A=\bigcap\limits_{n\in N}A_n.$$
Solution of the problem
For a given $N$, the independence hypothesis on the random variables $(X_n)_{n\geqslant1}$ implies that $$\mathrm P(A\mid N)=\prod\limits_{n\in N}\mathrm P(A_n)=\prod\limits_{n\in N}p_n=\prod\limits_{n\geqslant1}(1-(1-p_n)\mathbf 1_{n\in N}).$$ The independence hypothesis on the random variables $(Y_n)_{n\geqslant1}$ implies that $$\mathrm P(A)=\mathrm E(\mathrm P(A\mid N))=\prod\limits_{n\geqslant1}(1-(1-p_n)\mathrm P(n\in N)),$$ that is, $$\mathrm P(A)=\prod\limits_{n\geqslant1}(1-\tfrac12(1-p_n))=\prod\limits_{n\geqslant1}\frac{1+p_n}2=\prod\limits_{n\geqslant1}\left(1-\frac1{2n^2}\right).$$ Finally, the representation of the sine function as an infinite product for $z=\frac1{\sqrt2}$ indicates that the infinite product in the RHS above is indeed $\frac{\sin z}{z}=\frac{\sqrt2}{\pi}\sin\left(\frac{\pi}{\sqrt2}\right)$.
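As a numerical sanity check (an added illustration), the partial products converge to the closed form:

```python
import numpy as np

n = np.arange(1, 10**6 + 1)
partial_product = np.prod(1 - 1 / (2 * n**2))
closed_form = np.sqrt(2) * np.sin(np.pi / np.sqrt(2)) / np.pi
print(partial_product, closed_form)   # both approximately 0.358
```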
-
I like the use of the infinite product representation for sine. I'd seen it before but it didn't even cross my mind for some reason. – Nick Haliday Jan 16 '12 at 14:52
http://nrich.maths.org/6962/solution
# I Like ...
##### Stage: 1 Challenge Level:
Milo thought that we would have to test all 25 numbers to be sure what his rule was. Do you agree?
Helen of Lea Valley Primary school had a good idea:
For the "I like these numbers" I put $\{2, 4, 6, 8, 10, 12, 14, 16, 18, 20, 22, 24\}$ because they are even and for "I don't like" I put $\{1, 3, 5, 7, 9, 11, 13, 15, 17, 19, 21, 23, 25\}$ because they are odd.
Kate of Bilton Junior School exclaimed:
It is the five times table, well it has to be! What do you think?
Rischabh of the European School of Varese was unsure and wants more information:
It is either odd numbers, or the five times table. I would like to try number $7$. If it goes on the left, it means it's the odd numbers, and if it goes on the right, it's the five times table. I will also try $10$ to be sure.
What if the number $7$ and the number $10$ went into the "I don't like" section. Are there any other possibilities?
Simran said: The rule might be to select the numbers with a units digit of $5$.
On the other hand what might the rule be if we tried $7$ and $10$ and both numbers went into the "I like" section?
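A small illustration of the idea of testing candidate rules (this snippet is an editorial addition, not part of the NRICH page) lists which numbers from $1$ to $25$ each rule mentioned above would "like", and shows how trying $7$ and $10$ tells the rules apart:

```python
# Candidate rules for the "I like these numbers" sorting game on 1..25.
rules = {
    "odd numbers":       lambda n: n % 2 == 1,
    "five times table":  lambda n: n % 5 == 0,
    "units digit is 5":  lambda n: n % 10 == 5,
}

for name, rule in rules.items():
    liked = [n for n in range(1, 26) if rule(n)]
    print(f"{name:16s} likes {liked}")
    print(f"{'':16s} test 7 -> {rule(7)}, test 10 -> {rule(10)}")
```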
http://agtb.wordpress.com/2009/11/27/economists-and-complexity/
Turing's Invisible Hand
Economists and Complexity
November 27, 2009 by Noam Nisan
One of the main culture clashes between computer scientists and economists on the CS-econ frontier is whether “complexity of equilibria” matters. The CS-y view of the matter is captured in Kamal Jain’s quote: “If your laptop can’t find it then neither can the market”. Economists mostly don’t care since they see equilibrium reached every day, contrary to what CS says. As Jeff Ely quips: “Solving the n-body problem is beyond the capabilities of the world’s smartest mathematicians. How do those rocks-for-brains planets manage to pull it off?” TCS folks who see complexity as the map of the world can’t really understand this indifference, as Lance Fortnow tweets: “I’m an economist so I can ignore computational constraints / I’m a computer scientist, so I can ignore gravity.”
Computational Complexity map of the world
The beautiful thing about studying systems at equilibrium is precisely the fact that this abstracts away the gory details of the process (aka dynamics aka algorithm) of reaching this equilibrium. In principle, there are many different difficult-to-understand dynamics that all reach the same easy-to-understand equilibrium point. This is all very nice as long as the equilibrium is indeed magically reached somehow by the “real” dynamics. The obvious crucial question is whether this is the case in practice. It seems that the general tendency among economists is to claim “yes”, to which the natural next question in line is “why”? As computer scientists show, this is not a general characteristic of large games or markets. Understanding the properties of interesting games and markets that make them actually reach equilibrium should be enlightening. Maybe it is because economists choose to consider only those that do turn out to converge quickly, ignoring another large and interesting class of strategic scenarios? Maybe it is because economists are thinking about “smallish” games and so their insights will not carry over to more complex realistic scenarios? Maybe there is some deeper interesting structure that guarantees fast convergence? Distinguishing between these possibilities is especially crucial as we aim to analyze the new artificial games and markets that are to be found — and designed — on the Internet as well as elsewhere. Which economic and game-theoretic sensibilities will still hold in these complex unnatural circumstances?
Complexity is all about understanding such processes. While the foremost question dealt with by computational complexity is that of “time” — how long does a computational process need in order to find the solution — in our case to reach (close-to) equilibrium — this is not the only type of question and insight provided by complexity theory. As one can see in the “map above”, there are a stunning variety of complexity classes, each attempting to capture a different facet of the challenge of finding the solution: how much space (memory) is needed? Can we even tell when we reach a solution? Does randomization help? Is it helped by parallelism? Are approximations easier? Does the solution have this or that particular structure? In the case of finding equilibria, the classes PPAD and PLS give a very nuanced explanation of what is involved. There are also “concrete” models that study explicitly specific parameters such as communication or queries. One may dislike the fact that this complexity analysis does not restrict attention to natural dynamics but allows arbitrary and unnatural algorithms. The kind of natural dynamics that economists usually have in mind are some kind of best-replying in case of games and some kind of a Walrasian auctioneer in markets. The problem is that there are many variants of these that make sense: fictitious play, various forms of population dynamics, more sophisticated types of learning such as regret-minimization, and all these can be enhanced with various orderings, smoothing attempts, tie-breaking rules, strategic look-ahead, re-starts, actions of the central planner, not to mention other more or less complex optimization and learning attempts. The strength of complexity analysis is that it applies to all of these. Any “lower bounds” are definitive: any practical system can be simulated by a computer, and thus no dynamics can succeed in general. (Emphasis on “in general” — as mentioned above, the problems that you may be interested in may have special structure — so what is it?) A statement of an “upper bound” may be less interesting as stated, but immediately raises the challenge of either finding a natural algorithm=process=dynamic or pinpointing the complexity reason explaining why this is impossible.
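To make the point about dynamics concrete, here is a toy sketch (an editorial addition assuming Python with numpy; nothing here comes from the post itself): alternating best-reply dynamics settles immediately in a small coordination game but cycles forever in matching pennies, so convergence is a property of the particular game rather than of the dynamic.

```python
import numpy as np

def best_reply_dynamics(A, B, steps=12, start=(0, 0)):
    """A, B: payoff matrices of the row and column player. Returns visited profiles."""
    i, j = start
    history = [(i, j)]
    for t in range(steps):
        if t % 2 == 0:
            i = int(np.argmax(A[:, j]))      # row player best-replies to column j
        else:
            j = int(np.argmax(B[i, :]))      # column player best-replies to row i
        history.append((i, j))
    return history

coordination = np.array([[2, 0], [0, 1]])    # both players prefer to match
pennies = np.array([[1, -1], [-1, 1]])       # zero-sum: no pure equilibrium

print(best_reply_dynamics(coordination, coordination))   # settles at (0, 0)
print(best_reply_dynamics(pennies, -pennies))            # keeps cycling
```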
This is a good point to refute several irrelevant objections to the applicability of computational complexity for analyzing dynamics of games and markets. The first is the notion that humans or markets can undergo processes that cannot be computed. Frankly, there is no evidence of this; there is certainly much about the world that we do not understand well enough to simulate, but there is no indication of any natural process that is inherently more powerful than computers. This is the modern version of the Church-Turing thesis. (It seems that some quantum phenomena cannot be simulated by usual computers, but that doesn’t change the principle — it just would replace classical computers with quantum ones in the few places that it matters.) Even if you do subscribe to metaphysical dualism, do you want to base economics on it? The second type of objection concerns the standard technical notions used in complexity, which obviously leave much wiggle-room: “polynomial time”, with its hidden constants, is not synonymous with efficient; worst-case analysis is not always the right notion, etc. The point is that these are simply concise notions that usually seem to capture the issues well. There always are cases where more refined notions are needed, and in such cases complexity theory can provide more precise answers: for example in the analysis of basic problems like sorting or matrix multiplication, very exact results are obtained (with no hidden constants); similarly, cryptography is not based on worst-case analysis, etc. There is no indication so far that the usual somewhat coarse notions of computational complexity miss something significant when applied to games or markets — quite the contrary in fact. If such evidence emerges then complexity will not become irrelevant; simply more refined analysis will be used.
Take for example the stunning difference between reaching a Nash equilibrium and reaching a Correlated equilibrium. While the latter is reached by various natural regret-minimization dynamics, there are no “non-trivial” dynamics that, in general, reach the former. Let me say a word about this “non-triviality” by giving an example of what I would consider a trivial process: suppose that each player chooses a strategy at random every “turn”, unless his last strategy was already a best-reply to those of the others (up to some $\epsilon$ indifference). At some time, the players will happen to “hit” an ($\epsilon$-) equilibrium. This type of dynamics, which simply searches over the whole space of strategy profiles, provides no insight and is not useful in most practical situations. The point of the triviality is not that of the random choices but rather that of essentially trying every possibility. Many other proposed “dynamics” for Nash equilibrium or for market equilibrium are similarly trivial — in some cases they resort to simply trying all possibilities (in some approximate sense). The dynamics for correlated-Nash are not like this at all — they only look at a tiny “small-dimensional” fraction of the space of possibilities. Why is that? Complexity theory explains this phenomenon clearly: correlated equilibria can be found in polynomial time, but finding a Nash equilibrium (or many types of market-equilibrium) is PPAD-complete.
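For contrast, here is a hedged sketch (an editorial addition assuming numpy and scipy, not code from the post) of why correlated equilibria are computationally easy: they are exactly the feasible points of an explicit linear program, so an off-the-shelf LP solver finds one. On the game of chicken below it should return the welfare-maximizing correlated equilibrium, with probability 1/2 on both players swerving and 1/4 on each asymmetric profile.

```python
import numpy as np
from scipy.optimize import linprog

def correlated_equilibrium(U1, U2):
    """Return a welfare-maximizing correlated equilibrium of the bimatrix game (U1, U2)."""
    m, n = U1.shape
    num_vars = m * n                          # distribution p[i, j], flattened row-major
    A_ub, b_ub = [], []

    # Row player: for every recommendation i and deviation i2,
    #   sum_j p[i, j] * (U1[i2, j] - U1[i, j]) <= 0.
    for i in range(m):
        for i2 in range(m):
            if i2 == i:
                continue
            row = np.zeros(num_vars)
            for j in range(n):
                row[i * n + j] = U1[i2, j] - U1[i, j]
            A_ub.append(row)
            b_ub.append(0.0)

    # Column player: the analogous constraints over deviations j -> j2.
    for j in range(n):
        for j2 in range(n):
            if j2 == j:
                continue
            row = np.zeros(num_vars)
            for i in range(m):
                row[i * n + j] = U2[i, j2] - U2[i, j]
            A_ub.append(row)
            b_ub.append(0.0)

    res = linprog(c=-(U1 + U2).flatten(),     # maximize total payoff (any objective works)
                  A_ub=np.array(A_ub), b_ub=b_ub,
                  A_eq=[np.ones(num_vars)], b_eq=[1.0],
                  bounds=[(0, 1)] * num_vars)
    return res.x.reshape(m, n)

# Chicken: actions (swerve, dare), payoffs (6,6), (2,7), (7,2), (0,0).
U1 = np.array([[6, 2], [7, 0]])
print(correlated_equilibrium(U1, U1.T).round(3))   # ~[[0.5, 0.25], [0.25, 0.0]]
```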
Posted in Uncategorized | Tagged complexity of equilibria, CS vs Econ, Theory vs. Practice | 22 Comments
22 Responses
1. on November 28, 2009 at 12:07 am | Reply Joe
The claim that worst-case complexity is a concise notion that seems to capture the issues well is surprising. Many natural problems (e.g. protein folding) have NP-complete formulations and yet are “solved” by nature regularly. Furthermore, many NP-complete problems are routinely solved by computer programs such as integer linear programming, SAT solving, scheduling problems, etc. (Not to mention the simplex algorithm, whose versions that actually run are proven to be exponential.)
Worst-case complexity analysis is usually the best we can do. For that reason, it is important to do it. Nevertheless, claiming that it captures the world well is far-fetched and requires strong arguments. It seems entirely possible that what we see in the markets today comes from some distribution on which finding equilibrium is not computationally hard.
While it is true that more refined analysis is needed, we do not know how to do it. Furthermore, an economist should not necessarily wait for us to catch up with finding a refined analysis. Economists should describe the world the best way they can. If it turns out that the world “computes” solutions to hard problems, it is up to us to explain the discrepancy.
• on November 28, 2009 at 6:07 am | Reply noamnisan
You seem to have missed my point. It is not that if a problem is NPC then it can’t be solved in reality — it is just that there must be an explanation for this, which usually means understanding the special property of the realistic problems that makes them easy. Protein folding by nature would certainly be a good example where the mystery is strengthened by the basic complexity results and where we should eventually be able to understand the special physical/chemical properties that make the dynamics converge. LP is actually a good example for the rare cases where there was always ample evidence for a gap between the basic complexity results and reality; the recent more refined smoothed analysis seems to close the gap.
No one is suggesting not to study equilibria in cases where we don’t understand the dynamics leading to them. The point is that exactly in such cases studying the dynamics is doubly interesting and would usually mean understanding something new about the problem itself, not about complexity theory.
2. [...] This post was mentioned on Twitter by Lance Fortnow, Maryland Thermoform. Maryland Thermoform said: "Economists and Complexity « Algorithmic Game Theory" http://tinyurl.com/yb5kbhr MDThermo [...]
3. From my naive point of view it seems likely that case analysis may make more sense in economics than in scientific applications, because if there’s a worst case to exploit then one can expect the arbitrageurs to eventually find it and exploit it.
4. on November 28, 2009 at 3:31 am | Reply alex
I thought economists typically modeled things using very simple games, for which computing the equilibrium is typically possible.
• on November 28, 2009 at 6:11 am | Reply noamnisan
Indeed so. The question is what happens with the larger games of reality. If convergence depends crucially on smallness then the small models don’t say much. This does not seem to be the common case: good “small” economic models do explain also larger realistic games.
5. “but finding a Nash equilibrium (or many types of market-equilibrium) is PPAD-complete”
Only 2-player Nash is PPAD-complete. For 3 or more players, it is FIXP-complete — a class due to Etessami and Yannakakis (FOCS 2007).
Problems in PPAD always have rational solutions, but a 3-player Nash instance may have only irrational solutions (Nash himself gave such an example).
• on November 28, 2009 at 6:18 am | Reply noamnisan
As usual with continuous quantities, some notion of approximation is implied. I avoided getting into this issue since I don’t believe that it is crucial here, but see a previous post on approximation: http://agtb.wordpress.com/2009/06/07/approximate-nash/
6. Thanks Noam, and agreed.
Perhaps I should have stated my main point differently — that FIXP is an important class for understanding the complexity of equilibria but needs to be more well known and more widely explored.
7. on November 28, 2009 at 11:18 pm | Reply Kamal Jain
If some problem is complexity-wise hard, but somewhere some system solves the instances efficiently, then it means that we should either give up the Church-Turing thesis, or question the complexity class, or try to understand what kind of instances that system is getting.
So in case scientists believe that economic systems solve instances of a computationally hard problem, that may mean the scientific understanding is lacking about what kind of situations arise in practice which lead to solvable instances of a hard problem. Could scientists in general and economists in particular say, “hey, in practice we do not get general instances of the Nash problem but usually get instances which have these additional attributes which make them easy to solve”?
One goal of any theoretical discipline is to build a conceptual understanding of the real world. If something is computationally hard but the real world solves it, what does that mean? It means that we are still lacking a conceptual understanding of an important dimension of the real world.
8. I think Alex’s point is valid to some extent. He is saying that it is important for a subset of economists and computer scientists to think hard about the special cases of equilibrium problems that seem to arise in practice which enable them to be solved fast. However, the practicing economist could/should continue to use the notion of equilibria without constantly worrying about the computational complexity issues.
9. Excuse the question (I know very little about economics) but what is an example of a real-world, large-scale system that one can indisputably claim has come to equilibrium? Global and national markets (stock markets, commodities markets, currency markets, …) fluctuate daily, sometimes by significant amounts. Occasionally this can be attributed to a change in information, but usually not. Do (experimental) economists really believe the stock market is in equilibrium? (I know about the efficient market hypothesis, which makes for nice theory. What I’m wondering about is how anyone justifies that hypothesis in practice.)
• on December 29, 2009 at 10:42 pm | Reply Stephen
To say something has come to an equilibrium definitively, we would need to know what each of the participants is maximizing. Indisputability is too high a bar for large real-world systems. Because of this, most of the work has been done in experimental settings. For markets for physical goods, Vernon Smith did good experimental work about the convergence to equilibrium with participants having only limited information.
The real-world example that I think would be easiest to go after would be equilibrium in routing games. Looking at traffic flows after a major road or bridge closure for instance could be informative.
10. on November 29, 2009 at 3:08 am | Reply Paul Beame
In standard models for Nash equilibria (even approximate ones) one assumes that the players have perfect information about the game. Without perfect information the difference between PPAD-hard (finding strategies approximately in equilibrium) and FIXP-hard (approximating an actual equilibrium) seems to go away.
Suppose that one does not have perfect information and the payoffs given are only approximate. It is certainly no harder to compute strategies that are an approximate equilibrium for some possible payoffs consistent with the input. Can it be significantly easier?
This is an interesting discussion and I am sorry I just saw it. Here is my 2 cents anyway.
1. I certainly agree that economic systems (as well as other systems in nature) cannot “solve” computationally infeasible problems. It is an interesting question if computational complexity considerations of this type should play a role in modeling economic or, say, biological/physical phenomena.
2. For modeling in science as well as in economics, computational complexity is sort of a second-order matter. If you want to predict the value of a stock in a week based on some data you have today, the first-order question is what is the quality of the solution you can give based on the data, even with unlimited computational power. The (related) question of computational complexity comes next.
3. It is possible that if you have the full data (and a complete understanding of the mechanism) the process is computationally feasible, but when you need to optimize over some hidden variables the process becomes computationally infeasible.
4. When it comes to game theory there are much deeper issues that may lead to skepticism concerning the relevance of computational complexity considerations. Game theory (unlike closely related fields like statistics and optimization) has hardly led to computations which are relevant to predictions or precise normative suggestions. You may argue that the value of zero-sum games has descriptive and normative value, but when you go beyond that (say to Nash equilibrium or correlated equilibrium) these are hardly ever used for any computations at all.
Game theory is mainly used to conceptually understand (or discuss, at least) various issues in strategic behavior.
5. Of course, I think that the quantitative (mathematical) study of various aspects of game theory and economics, including the computational complexity aspect, and also various other mathematical aspects like dynamics, learnability, stability, and more, is interesting (which is why I am doing it). Such a study can contribute to game theory/economics and can also bear fruit in math/TCS.
Also the computer/Internet revolution gives wonderful new opportunities to experiment and gather empirical data on strategic behavior. This was not possible in the early days of game theory.
The main bottleneck remains: Putting complexity aside (and even in small cases where we can compute everything), we do not have and are not close to having models that give quantitative computational tools for either predicting strategic behavior, or suggesting such a behavior.
12. [...] discussion of computational complexity and economics. See especially the comment by [...]
13. [...] of such a difference and more over argue that this difference is at the root of economists’ refusal to view excessive computational complexity as a critique of an equilibrium…. The “usual” argument against taking worst-case complexity results seriously is that [...]
14. [...] the core algorithmic/complexity questions of the computational difficulty of finding equilibria in games and markets we have seen only little progress, mostly for some market variants. There [...]
17. [...] mathematical deduction systems, or computation by natural systems (see workshop above) or economic markets, has been a surprising and central phenomena, validating the formal study of Turing machines or [...]
18. Awesome post!
When trying to convince economists of the relevance of complexity theory in print, is there a standard reference to cite that makes this sort of argument? I am familiar with Roughgarden’s survey, but it still seems a bit too computer-science-y to convince economists. Is there something that is specifically targeted at economists, uses their language, and (preferably) is published in places they read? In a similar vein, do you know any surveys or historical overviews of how computer scientists introduced intractability results into economics and how economists responded? These would be great sources to have!
I ask this because I am currently struggling with how to convince evolutionary biologists to care about computational complexity, and a lot of the same arguments that economists give come up there as well.
http://unapologetic.wordpress.com/2007/07/28/the-simplicial-category/
# The Unapologetic Mathematician
## The Simplicial Category
There’s another approach to the theory of monoids which finds more direct application in topology and homology theory (which, yes, I’ll get to eventually) — the “simplicial category” $\mathbf{\Delta}$. Really it’s an isomorphic category to $\mathrm{Th}(\mathbf{Mon})$, but some people think better in these other terms. I personally like the direct focus on the algebra, coupled with the diagrammatics so reminiscent of knot theory, but for thoroughness’ sake I’ll describe the other approach.
Note that the objects of $\mathrm{Th}(\mathbf{Mon})$ correspond exactly with the natural numbers. Each object is the monoidal product of some number of copies of the generating object $M$. We’re going to focus here on the model of $\mathbb{N}$ given by the ordinal numbers. That is, the object $M^{\otimes n}$ corresponds to the ordinal number $\mathbf{n}$, which is a set of $n$ elements with its unique (up to isomorphism) total order. In fact, we’ve been implicitly thinking about an order all along. When we draw our diagrams, the objects consist of a set of marked points along the upper or lower edge of the diagram, which we can read in order from left to right.
Let’s pick a specific representation of each ordinal to be concrete about this. The ordinal $\mathbf{n}$ will be represented by the set of natural numbers from ${0}$ to $n-1$ with the usual order relation. The monoidal structure will just be addition — $\mathbf{m}\otimes\mathbf{n}=\mathbf{m+n}$.
The morphisms between ordinals are functions which preserve the order. A function $f:X\rightarrow Y$ between ordinals satisfies this property if whenever $i\leq j$ in $X$ then $f(i)\leq f(j)$ in $Y$. Note that we can send two different elements of $X$ to the same element of $Y$, just as long as we don’t pull them past each other.
So what sorts of functions do we have to play with? Well, we have a bunch of functions from $\mathbf{n}$ to $\mathbf{n+1}$ that skip some element of the image. For instance, we could send $\mathbf{3}$ to $\mathbf{4}$ by sending ${0}$ to ${0}$, skipping $1$, sending $1$ to $2$, and sending $2$ to $3$. We’ll say $\delta^n_i:\mathbf{n}\rightarrow\mathbf{n+1}$ for the function that skips $i$ in its image. The above function is then $\delta^3_1$. For a fixed $n$, the index $i$ can run from ${0}$ to $n$.
We also have a bunch of functions from $\mathbf{n+1}$ to $\mathbf{n}$ that repeat one element of the image. For example, we could send $\mathbf{4}$ to $\mathbf{3}$ by sending ${0}$ to ${0}$, $1$ and $2$ both to $1$, and $3$ to $2$. We’ll say $\sigma^n_i:\mathbf{n+1}\rightarrow\mathbf{n}$ for the function that repeats $i$ in its image. The above function is then $\sigma^3_1$. Again, for a fixed $n$, the index $i$ can run from ${0}$ to $n-1$.
Notice in particular that "skipping" and "repeating" are purely local properties of the function. For instance, $\delta^0_0$ is the unique function from $\mathbf{0}$ (the empty set) to $\mathbf{1}$, which clearly skips $0\in\mathbf{1}$. Then $\delta^n_i$ can be written as $1_i\otimes\delta^0_0\otimes1_{n-i}$, since it leaves the numbers from ${0}$ to $i-1$ alone, sticks in a new $i$, and then just nudges over everything from (the old) $i$ to $n$. Similarly, $\sigma^1_0$ is the unique function from $\mathbf{2}$ to $\mathbf{1}$ that sends both elements in its domain to $0\in\mathbf{1}$. Then all the other $\sigma^n_i$ can be written as $1_i\otimes\sigma^1_0\otimes1_{n-i-1}$.
Now every order-preserving function is determined by the set of elements of the range that are actually in the image of the function along with the set of elements of its domain where it does not increase. That is, if we know where it skips and where it repeats, we know the whole function. This tells us that we can write any function as a composition of $\delta$ and $\sigma$ functions. These basic functions satisfy a few identities:
• If $i\leq j$ then $\delta^{n+1}_i\circ\delta^n_j=\delta^{n+1}_{j+1}\circ\delta^n_i$.
• If $i\leq j$ then $\sigma^{n-1}_j\circ\sigma^n_i=\sigma^{n-1}_i\circ\sigma^n_{j+1}$.
• If $i<j$ then $\sigma^n_j\circ\delta^n_i=\delta^{n-1}_i\circ\sigma^{n-1}_{j-1}$.
• If $i=j$ or $i=j+1$ then $\sigma^n_j\circ\delta^n_i=1$.
• If $i>j+1$ then $\sigma^n_j\circ\delta^n_i=\delta^{n-1}_{i-1}\circ\sigma^{n-1}_j$.
We could check all these by hand, and if you like that sort of thing you’re welcome to it. Instead, I’ll just assume we’ve checked the second one for $n=2$ and the fourth one for $n=1$.
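For readers who prefer to let a machine do the bookkeeping, here is a short editorial sketch (not part of the original post) that encodes the ordinal $\mathbf{n}$ as $\{0,\dots,n-1\}$, the maps $\delta^n_i$ and $\sigma^n_i$ as tuples of images, and verifies all five identities above for small $n$:

```python
def delta(n, i):                  # the injection n -> n+1 that skips i in its image
    return tuple(k if k < i else k + 1 for k in range(n))

def sigma(n, i):                  # the surjection n+1 -> n that repeats i in its image
    return tuple(k if k <= i else k - 1 for k in range(n + 1))

def compose(g, f):                # (g o f)(k) = g(f(k)), with maps stored as tuples
    return tuple(g[x] for x in f)

def identity(n):
    return tuple(range(n))

for n in range(1, 7):
    for i in range(n + 2):
        for j in range(n + 2):
            if i <= j <= n:       # first identity
                assert compose(delta(n + 1, i), delta(n, j)) == compose(delta(n + 1, j + 1), delta(n, i))
            if i <= j <= n - 2:   # second identity
                assert compose(sigma(n - 1, j), sigma(n, i)) == compose(sigma(n - 1, i), sigma(n, j + 1))
            if j <= n - 1 and i <= n:           # third, fourth and fifth identities
                lhs = compose(sigma(n, j), delta(n, i))
                if i < j:
                    assert lhs == compose(delta(n - 1, i), sigma(n - 1, j - 1))
                elif i in (j, j + 1):
                    assert lhs == identity(n)
                else:             # i > j + 1
                    assert lhs == compose(delta(n - 1, i - 1), sigma(n - 1, j))

print("all five simplicial identities hold for the small cases checked")
```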
What’s so special about those conditions? Well, notice that $\sigma^1_0:\mathbf{1}\otimes\mathbf{1}\rightarrow\mathbf{1}$ takes two copies of $\mathbf{1}$ to one copy, and that the second relation becomes the associativity condition for this morphism. Then also $\delta^0_0:\mathbf{0}\rightarrow\mathbf{1}$ takes zero copies to one copy, and the fourth relation becomes the left and right identity conditions. That is, $\mathbf{1}$ with these two morphisms is a monoid object in this category! Now we can verify all the other relations by using our diagrams rather than a lot of messy calculations!
We can also go back the other way, breaking any of our diagrams into basic pieces and translating each piece into one of the $\delta$ or $\sigma$ functions. The category of ordinal numbers not only contains a monoid object, it is actually isomorphic to the "theory of monoids" category $\mathrm{Th}(\mathbf{Mon})$ — it contains the "universal" monoid object.
So why bother with this new formulation at all? Well, for one thing it’s always nice to see the same structure instantiated in many different ways. Now we have it built from the ground up as $\mathrm{Th}(\mathbf{Mon})$, we have it implemented as a subcategory of $\mathcal{OTL}$, we have it as the category of ordinal numbers, and thus we also have it as a full subcategory of $\mathbf{Cat}$ — the category of all small categories (why?).
There’s another reason, though, which won’t really concern us for a while yet. The morphisms $\delta^n_i$ and $\sigma^n_i$ turn out to be very well-known to topologists as “face” and “degeneracy” maps when working with shapes they call “simplicial complexes”. Not only is this a wonderful oxymoron, it’s the source of the term “simplicial category”. If you know something about topology or homology, you can probably see how these different views start to tie together. If not, don’t worry — I’ll get back to this stuff.
Posted by John Armstrong | Category theory
## 5 Comments »
1. Yay!
Simplicial objects!
Quillen!
*hrm*
Maybe I should make myself stop talking in semi-coherent interjections. These things are So Coooool, and high on my list of “Need to understand this”. They’re also connected to what I hope to possibly be able to stand up and tell a bunch of undergrads before I leave Jena.
I had no idea that Delta (in my world, the simplicial category is the large Delta, not the small delta…) is isomorphic to Th(Monoid) though. I always used to see it topologically. That’s -cool-.
Comment by | July 28, 2007 | Reply
2. Oops, you’re right.. that should have been a capital letter up at the top.
Anyhow, yeah, they’re all the same thing. And thus simplicial objects and monoid objects are also “the same”… sort of.
If you want to jump ahead (and I know you do), try getting a monad from the underlying-set/free-group adjunction. Then it's a monoid object, and thus a simplicial object in some sense. Charge ahead blindly with that and see what sort of co/homology you get.
Comment by | July 28, 2007 | Reply
3. Leafing through my Weibel, it turns out that if you do this with the forgetful-functor from kG-Mod to Z-Mod (or, I assume, k-Mod), then you get ordinary group (co)homology out of it.
Is this what you’d expect from it too? It sounds like what it should be, but I’m not certain what that last bit of forgetfulness matters.
Comment by | July 30, 2007 | Reply
4. What is the geometric realisation of a simplicial small category??
Comment by Sharma | August 2, 2007 | Reply
5. I’m not sure if I quite understand the question, Sharma. Do you mean a simplicial object in the category of small categories?
Comment by | August 2, 2007 | Reply
http://mathoverflow.net/questions/tagged/weyl-law
## Tagged Questions
1 answer, 235 views
### Weyl law for SL(2,C)
Are there any estimates for the eigenvalues of the Laplace operator for $\Gamma \backslash SL(2, \mathbb{C})/SU(2)$ known beyond the main term? Here, $\Gamma$ should be congruence …
http://mathhelpforum.com/advanced-algebra/132363-solved-determine-if-line-plane-parallel.html
# Thread:
1. ## [SOLVED] Determine if a line and plane are parallel
x = -5 - 4t
y = 1 - t
z = 3 +2t
and x + 2y + 3z - 9 = 0
Now how do I approach a problem like this? I am capable of calculating dot products, cross products, finding vectors from two points.
What is involved in this? Just a little boost will help and I'll try to solve it. Thanks!
2. Originally Posted by thekrown
x = -5 - 4t
y = 1 - t
z = 3 +2t
and x + 2y + 3z - 9 = 0
Now how do I approach a problem like this? I am capable of calculating dot products, cross products, finding vectors from two points.
What is involved in this? Just a little boost will help and I'll try to solve it. Thanks!
Consider the normal to the plane and the vector in the direction of the line and take the dot product.
3. This is where I have trouble. I'm used to dealing with points, I have difficulty reading the way the information is displayed. I'm not really sure which one is the plane.
Okay, I believe x + 2y + 3z - 9 = 0 is the plane and the other info can be used to find a point. How is this read? I know it is a parametric equation of the line, so it should be
x = -5
y = 1
z = 3
If I am reading this correctly, I can use the distance between point and a plane formula and we get -3 / square root 14.
Now that we have the distance, I'm supposed determine if the line and plane are parallel. I could use some help on this part.
4. Originally Posted by thekrown
This is where I have trouble. I'm used to dealing with points, I have difficulty reading the way the information is displayed. I'm not really sure which one is the plane.
Okay, I believe x + 2y + 3z - 9 = 0 is the plane and the other info can be used to find a point. How is this read? I know it is a parametric equation of the line, so it should be
x = -5
y = 1
z = 3
If I am reading this correctly, I can use the distance between point and a plane formula and we get -3 / square root 14.
Now that we have the distance, I'm supposed determine if the line and plane are parallel. I could use some help on this part.
You need to extensively review your class notes. Below are things you are expected to know.
The cartesian equation of a plane can be written ax + by + cz = d and a normal vector to this plane is <a, b, c>.
The parametric equations of a line can be written $x = x_0 + \alpha t$, $y = y_0 + \beta t$, $z = z_0 + \gamma t$ and a vector in the direction of this line is $<\alpha, \, \beta, \, \gamma>$.
Now apply the definition you will undoubtedly have been given on establishing whether a line and plane are parallel.
5. Okay, I think I got this. My teacher and textbook use different notations, or there are different notations so the examples I have in class are not exactly as those in the textbook but I think I got it.
With
x = -5 - 4t
y = 1 - t
z = 3 +2t
and x + 2y + 3z - 9 = 0
I can sub the x,y,z into the equation. Then in order to find out if it is parallel I have this note:
at + b = 0
a = 0, then b = 0
If we get 0 = 0 then any t is a solution and is parallel.
If we get b = 0 or b =/= 0 then perpendicular.
In this problem, we get
(-5-4t) + 2(1-t) + 3(3+2t) - 9 = 0
-5 -4t + 2 - 2t + 9 + 6t - 9 = 0
Since t = 0 then a = 0 and so b is 0. So by the above notes we should get 0 = 0. I actually get -3=0.
Following my notes, I would say this is parallel. I just don't understand why I get -3 = 0.
6. Originally Posted by thekrown
Okay, I think I got this. My teacher and textbook use different notations, or there are different notations so the examples I have in class are not exactly as those in the textbook but I think I got it.
With
x = -5 - 4t
y = 1 - t
z = 3 +2t
and x + 2y + 3z - 9 = 0
I can sub the x,y,z into the equation. Then in order to find out if it is parallel I have this note:
at + b = 0
a = 0, then b = 0
If we get 0 = 0 then any t is a solution and is parallel.
If we get b = 0 or b =/= 0 then perpendicular.
In this problem, we get
(-5-4t) + 2(1-t) + 3(3+2t) - 9 = 0
-5 -4t + 2 - 2t + 9 + 6t - 9 = 0
Since t = 0 then a = 0 and so b is 0. So by the above notes we should get 0 = 0. I actually get -3=0.
Following my notes, I would say this is parallel. I just don't understand why I get -3 = 0.
I don't know what you're doing. Taking the dot product, as I suggested in my first post, the normal to the plane is obviously perpendicular to the line. It follows by definition that the line and plane are parallel.
7. I don't know either. My teacher speaks with a thick Russian accent and understanding the material is hard enough in itself. Please bear with me.
Anyways, I believe I finally got it. I'm learning more on this subject with google and this website than my actual class. Thank god for this forum.
So, again, we have
x = -5 - 4t
y = 1 - t
z = 3 +2t
and x + 2y + 3z - 9 = 0
A point on the line is (-5, 1, 3) and its direction vector is (-4, -1, 2).
The normal vector of the plane is (1, 2, 3).
I take the dot product of the direction vector and the normal; if it is 0 then they are parallel.
The dot product is -4 -2 +6 = 0 and so here we get the 0 = 0.
I'm going to have to clarify the notes with my teacher, chances are I'm just confusing two different things.
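For anyone following along, here is a short check of the whole computation (an editorial addition assuming numpy):

```python
import numpy as np

point_on_line = np.array([-5, 1, 3])    # set t = 0 in the parametric equations
direction     = np.array([-4, -1, 2])   # coefficients of t
normal        = np.array([1, 2, 3])     # from x + 2y + 3z - 9 = 0

# Direction perpendicular to the normal  =>  the line is parallel to the plane.
print(np.dot(direction, normal))        # -4 - 2 + 6 = 0

# Substituting the point gives -5 + 2 + 9 - 9 = -3, not 0, which is exactly the
# "-3 = 0" above: no value of t puts the line on the plane, so the line is
# parallel to the plane and does not lie in it.  Its distance is 3/sqrt(14).
print(abs(np.dot(normal, point_on_line) - 9) / np.linalg.norm(normal))
```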
http://mathoverflow.net/questions/90016/the-most-general-context-of-mathers-cube-theorems
## The most general context of Mather’s Cube Theorems
Quite simply, I'd like to know what is the broadest or most natural context in which either (or both) of Mather's cube theorems hold. If you like, this may mean any of
• What properties of $Top$ or $Top^*$ are essential to the proofs?
• (where) are model/homotopical categories verifying Mather's theorems studied as such in the literature?
• Are there more examples known verifying Mather's theorems?
I ask because Mather's proof strikes me as fairly gritty and seems to rely on explicit cellular constructions.
For reference, the cube theorems concern a cubical diagram whose faces commute up to homotopy in a coherent way, and assert
1. If one pair of opposite faces are homotopy push-outs and the two remaining faces adjacent to the source vertex are homotopy pull-backs, then the final two faces are also homotopy pull-backs.
2. If two pairs of opposite faces are homotopy pull-backs, and the remaining face adjacent to the target vertex is a homotopy push-out, then the remaining face is a homotopy push-out.
-
## 4 Answers
Let $\mathcal{X}$ be an $\infty$-category (i.e., a homotopy theory) which admits small homotopy colimits, a set of small generators, and has the property that homotopy colimits in $\mathcal{X}$ commute with homotopy pullback. Then $\mathcal{X}$ satisfies the Mather cube theorem if and only if $\mathcal{X}$ is an $\infty$-topos: that is, it can be described as a left exact localization of an $\infty$-category of presheaves of spaces. (I learned this from Charles Rezk). Such homotopy theories are studied extensively in my book "Higher Topos Theory" (see in particular Proposition 6.1.3.10 and the remark which follows it).
-
Jacob, I'm guessing that such things also have the following more general property: If $F$ and $G$ are functors $I\to \mathcal X$ such that for every $i$-morphism $i\to j$ you get a homotopy pullback square $(F(i)\to G(i))\to (F(j)\to G(j))$ then for every $i$-object $i$ you get a homotopy pullback square $(F(i)\to G(i))\to (hocolim F\to hocolim G)$. (The case of this when $I$ is the poset $a\leftarrow b\to c$ is the (first) Mather cube theorem.) – Tom Goodwillie Mar 3 2012 at 21:14
Or, Jacob, I suppose by "the Mather cube theorem" you mean statement 2 in the question. As I mentioned in my edit to my answer, statement 1 fails for the category of sets. – Tom Goodwillie Mar 3 2012 at 21:35
Hi Tom; I meant statement 1 (the category of sets is an ordinary topos, rather than an $\infty$-topos). Statement 2 would be a special case of the blanket hypothesis "homotopy colimits in X commute with homotopy pullback". The more general property you describe also holds in any $\infty$-topos. – Jacob Lurie Mar 4 2012 at 1:03
Oh my goodness wow! I'll have to keep studying these things. – some guy on the street Apr 16 2012 at 23:33
You might be interested in the work of Jean-Paul Doeraene: http://math.univ-lille1.fr/~doeraene/
In particular, on pages 8 and 9 of the paper Homotopy pull backs, push outs, and joins, he gives several examples of model categories satisfying the Cube Axiom, and identifies the cube maps in several which do not.
The motivation of much of this is to define and study Lusternik-Schnirelmann type invariants in model categories other than $Top$.
-
Thank you, that's nifty! – some guy on the street Mar 2 2012 at 15:02
Here's a sketch proof of 2, sort of in the same spirit as Jeff Strom's answer:
These statements have equivalent formulations involving strictly commutative squares.
Denote a typical square by $\mathcal X$, with last space $X$ and two spaces $X_1$ and $X_2$ mapping into $X$, and $X_{12}$ mapping into both. It's called a homotopy pushout square if the resulting map from the homotopy pushout of $X_1\leftarrow X_{12}\to X_2$ to $X$ is a weak equivalence, and likewise for pullback.
Use the fact that if $X$ is the union of open sets $X_1$ and $X_2$ with intersection $X_{12}$ then the resulting square is a homotopy pushout.
Also use this converse: Any homotopy pushout square admits an equivalence from such an "open triad square"--a map that is a weak equivalence in all four corners.
Now let $\mathcal X\to \mathcal Y$ be a cube, a map of squares. Suppose that $\mathcal Y$ is a homotopy pushout square and that the four side squares are homotopy pullback squares.
Wlog the map $X\to Y$ is a fibration and the side squares are pullback (not just homotopy pullback) squares. Make an open triad square $\mathcal Y'$ and an equivalence $\mathcal Y'\to \mathcal Y$. Pull back to get a new square $\mathcal X'$. This is now also of the open triad kind, so it's a homotopy pushout square. And its map to $\mathcal X$ is an equivalence, so the latter is also a homotopy pushout square.
The other theorem, 1, implies 2 (for Top) but I don't think you can go the other way. And when I try to prove 1 for Top I end up using quasifibrations.
EDIT: The first theorem really seems deeper than the second. In the category of sets (with equivalence=iso, holim=lim, hocolim=colim) the second is true and the first is false.
-
The second cube theorem (if base and sides are ok, then so is the top) is a straightforward consequence of the formally crazy fact that if $p:E\to B$ is a fibration, $i:A\to B$ is a cofibration, and $p_A:E_A\to A$ is the pullback of $p$ along $i$, then $i:E_A\to E$ is also a cofibration.
-
1
That's in the category of simplicial sets, or what? – Tom Goodwillie Mar 2 2012 at 14:29
ahah! This will bear some thinking about. Tom, it's at least in $Top$-like things, which is (again) one of the likenesses I'm trying to get at; in this case, it seems to be the fact that ordinary pull-backs are subspaces of products in a very clean way. – some guy on the street Mar 2 2012 at 15:07
http://en.wikisource.org/wiki/1911_Encyclop%C3%A6dia_Britannica/Calculating_Machines
# 1911 Encyclopædia Britannica/Calculating Machines
1911 Encyclopædia Britannica, Volume 4 — Calculating Machines
CALCULATING MACHINES. Instruments for the mechanical performance of numerical calculations have in modern times come into ever-increasing use, not merely for dealing with large masses of figures in banks, insurance offices, &c., but also, as cash registers, for use on the counters of retail shops. They may be classified as follows:—(i.) Addition machines; the first invented by Blaise Pascal (1642). (ii.) Addition machines modified to facilitate multiplication; the first by G.W. Leibnitz (1671). (iii.) True multiplication machines; Léon Bollée (1888), Steiger (1894). (iv.) Difference machines; Johann Helfrich von Müller (1786), Charles Babbage (1822). (v.) Analytical machines; Babbage (1834). The number of distinct machines of the first three kinds is remarkable and is being constantly added to, old machines being improved and new ones invented; Professor R. Mehmke has counted over eighty distinct machines of this type. The fullest published account of the subject is given by Mehmke in the Encyclopädie der mathematischen Wissenschaften, article "Numerisches Rechnen," vol. i., Heft 6 (1901). It contains historical notes and full references. Walther von Dyck's Catalogue also contains descriptions of various machines. We shall confine ourselves to explaining the principles of some leading types, without giving an exact description of any particular one.
FIG. 1.
_Addition machines._ Practically all calculating machines contain a "counting work," a series of "figure disks" consisting in the original form of horizontal circular disks (fig. 1), on which the figures 0, 1, 2, to 9 are marked. Each disk can turn about its vertical axis, and is covered by a fixed plate with a hole or "window" in it through which one figure can be seen. On turning the disk through one-tenth of a revolution this figure will be changed into the next higher or lower. Such turning may be called a "step," positive if the next higher and negative if the next lower figure appears. Each positive step therefore adds one unit to the figure under the window, while two steps add two, and so on. If a series, say six, of such figure disks be placed side by side, their windows lying in a row, then any number of six places can be made to appear, for instance 000373. In order to add 6425 to this number, the disks, counting from right to left, have to be turned 5, 2, 4 and 6 steps respectively. If this is done the sum 006798 will appear. In case the sum of the two figures at any disk is greater than 9, if for instance the last figure to be added is 8 instead of 5, the sum for this disk is 11 and the 1 only will appear. Hence an arrangement for "carrying" has to be introduced. This may be done as follows. The axis of a figure disk contains a wheel with ten teeth. Each figure disk has, besides, one long tooth which when its 0 passes the window turns the next wheel to the left, one tooth forward, and hence the figure disk one step. The actual mechanism is not quite so simple, because the long teeth as described would gear also into the wheel to the right, and besides would interfere with each other. They must therefore be replaced by a somewhat more complicated arrangement, which has been done in various ways not necessary to describe more fully. On the way in which this is done, however, depends to a great extent the durability and trustworthiness of any arithmometer; in fact, it is often its weakest point. If to the series of figure disks arrangements are added for turning each disk through a required number of steps, we have an addition machine, essentially of Pascal's type. In it each disk had to be turned by hand. This operation has been simplified in various ways by mechanical means. For pure addition machines key-boards have been added, say for each disk nine keys marked 1 to 9. On pressing the key marked 6 the disk turns six steps and so on. These have been introduced by Stettner (1882), Max Mayer (1887), and in the comptometer by Dorr Z. Felt of Chicago. In the comptograph by Felt and also in "Burrough's Registering Accountant" the result is printed.
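As a modern illustration of the counting work just described (an editorial addition, obviously not part of the 1911 article), the figure disks and the long carrying tooth can be modelled in a few lines of Python; reproducing the example above, adding 6425 to 000373 gives 006798:

```python
class CountingWork:
    """A row of figure disks, each showing 0-9, with carrying to the left."""
    def __init__(self, disks=6):
        self.digits = [0] * disks                # leftmost disk first

    def step(self, disk, steps):
        """Turn one figure disk forward by `steps`, carrying when it passes 0."""
        while disk >= 0 and steps:
            total = self.digits[disk] + steps
            self.digits[disk] = total % 10
            steps = total // 10                  # the long tooth: one step...
            disk -= 1                            # ...given to the next disk on the left

    def add(self, number):
        text = str(number).zfill(len(self.digits))
        for place, digit in enumerate(reversed(text)):
            self.step(len(self.digits) - 1 - place, int(digit))

    def __str__(self):
        return "".join(map(str, self.digits))

work = CountingWork(6)
work.add(373)
work.add(6425)
print(work)                                      # 006798, as in the example above
```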
_Modified addition machines._ These machines can be used for multiplication, as repeated addition, but the process is laborious, depending for rapid execution essentially on the skill of the operator.[1] To adapt an addition machine, as described, to rapid multiplication the turnings of the separate figure disks are replaced by one motion, commonly the turning of a handle. As, however, the different disks have to be turned through different steps, a contrivance has to be inserted which can be "set" in such a way that by one turn of the handle each disk is moved through a number of steps equal to the number of units which is to be added on that disk. This may be done by making each of the figure disks receive on its axis a ten-toothed wheel, called hereafter the A-wheel, which is acted on either directly or indirectly by another wheel (called the B-wheel) in which the number of teeth can be varied from 0 to 9. This variation of the teeth has been effected in different ways. Theoretically the simplest seems to be to have on the B-wheel nine teeth which can be drawn back into the body of the wheel, so that at will any number from 0 to 9 can be made to project. This idea, previously mentioned by Leibnitz, has been realized by Bohdner in the "Brunsviga." Another way, also due to Leibnitz, consists in inserting between the axis of the handle bar and the A-wheel a "stepped" cylinder. This may be considered as being made up of ten wheels large enough to contain about twenty teeth each; but most of these teeth are cut away so that these wheels retain in succession 9, 8, ... 1, 0 teeth. If these are made as one piece they form a cylinder with teeth of lengths from 9, 8 ... times the length of a tooth on a single wheel.
FIG. 2.
In the diagrammatic vertical section of such a machine (fig. 2) FF is a figure disk with a conical wheel A on its axis. In the covering plate HK is the window W. A stepped cylinder is shown at B. The axis Z, which runs along the whole machine, is turned by a handle, and itself turns the cylinder B by aid of conical wheels. Above this cylinder lies an axis EE with square section along which a wheel D can be moved. The same axis carries at E′ a pair of conical wheels C and C′, which can also slide on the axis so that either can be made to drive the A-wheel. The covering plate MK has a slot above the axis EE allowing a rod LL′ to be moved by aid of a button L, carrying the wheel D with it. Along the slot is a scale of numbers 0 1 2 ... 9 corresponding with the number of teeth on the cylinder B, with which the wheel D will gear in any given position. A series of such slots is shown in the top middle part of Steiger's machine (fig. 3). Let now the handle driving the axis Z be turned once round, the button being set to 4. Then four teeth of the B-wheel will turn D and with it the A-wheel, and consequently the figure disk will be moved four steps. These steps will be positive or forward if the wheel C gears in A, and consequently four will be added to the figure showing at the window W. But if the wheels CC′ are moved to the right, C′ will gear with A moving backwards, with the result that four is subtracted at the window. This motion of all the wheels C is done simultaneously by the push of a lever which appears at the top plate of the machine, its two positions being marked "addition" and "subtraction." The B-wheels are in fixed positions below the plate MK. Level with this, but separate, is the plate KH with the window. On it the figure disks are mounted.
This plate is hinged at the back at H and can be lifted up, thereby throwing the A-wheels out of gear. When thus raised the figure disks can be set to any figures; at the same time it can slide to and fro so that an A-wheel can be put in gear with any C-wheel forming with it one "element." The number of these varies with the size of the machine. Suppose there are six B-wheels and twelve figure disks. Let these be all set to zero with the exception of the last four to the right, these showing 1 4 3 2, and let these be placed opposite the last B-wheels to the right. If now the buttons belonging to the latter be set to 3 2 5 6, then on turning the B-wheels all once round the latter figures will be added to the former, thus showing 4 6 8 8 at the windows. By aid of the axis Z, this turning of the B-wheels is performed simultaneously by the movement of one handle. We have thus an addition machine. If it be required to multiply a number, say 725, by any number up to six figures, say 357, the buttons are set to the figures 725, the windows all showing zero. The handle is then turned, 725 appears at the windows, and successive turns add this number to the first. Hence seven turns show the product seven times 725. Now the plate with the A-wheels is lifted and moved one step to the right, then lowered and the handle turned five times, thus adding fifty times 725 to the product obtained. Finally, by moving the plate again, and turning the handle three times, the required product is obtained. If the machine has six B-wheels and twelve disks the product of two six-figure numbers can be obtained. Division is performed by repeated subtraction. The lever regulating the C-wheel is set to subtraction, producing negative steps at the disks. The dividend is set up at the windows and the divisor at the buttons. Each turn of the handle subtracts the divisor once. To count the number of turns of the handle a second set of windows is arranged with number disks below. These have no carrying arrangement, but one is turned one step for each turn of the handle. The machine described is essentially that of Thomas of Colmar, which was the first that came into practical use. Of earlier machines those of Leibnitz, Müller (1782), and Hahn (1809) deserve to be mentioned (see Dyck, Catalogue). Thomas's machine has had many imitations, both in England and on the Continent, with more or less important alterations. Joseph Edmondson of Halifax has given it a circular form, which has many advantages.
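The multiplication just described, repeated addition with the carriage shifted one decimal place for each new digit of the multiplier, can likewise be sketched in a few lines (again an editorial illustration, not part of the article):

```python
def thomas_multiply(multiplicand, multiplier):
    """Multiply by repeated addition, shifting one place per digit of the multiplier."""
    product = 0
    for shift, digit in enumerate(reversed(str(multiplier))):
        for _ in range(int(digit)):                 # each turn of the handle adds once
            product += multiplicand * 10 ** shift   # the shift plays the role of moving the plate
    return product

print(thomas_multiply(725, 357))                    # 258825, reached after 7 + 5 + 3 turns
```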
The accuracy and durability of any machine depend to a great extent on the manner in which the carrying mechanism is constructed. Besides, no wheel must be capable of moving in any other way than that required; hence every part must be locked and be released only when required to move. Further, any disk must carry to the next only after the carrying to itself has been completed. If all were to carry at the same time a considerable force would be required to turn the handle, and serious strains would be introduced. It is for this reason that the B-wheels or cylinders have the greater part of the circumference free from teeth. Again, the carrying acts generally as in the machine described, in one sense only, and this involves that the handle be turned always in the same direction. Subtraction therefore cannot be done by turning it in the opposite way, hence the two wheels C and C′ are introduced. These are moved all at once by one lever acting on a bar shown at R in section (fig. 2).
In the Brunsviga, the figure disks are all mounted on a common horizontal axis, the figures being placed on the rim. On the side of each disk and rigidly connected with it lies its A-wheel with which it can turn independent of the others. The B-wheels, all fixed on another horizontal axis, gear directly on the A-wheels. By an ingenious contrivance the teeth are made to appear from out of the rim to any desired number. The carrying mechanism, too, is different, and so arranged that the handle can be turned either way, no special setting being required for subtraction or division. It is extremely handy, taking up much less room than the others. Professor Eduard Selling of Würzburg has invented an altogether different machine, which has been made by Max Ott, of Munich. The B-wheels are replaced by lazy-tongs. To the joints of these the ends of racks are pinned; and as they are stretched out the racks are moved forward 0 to 9 steps, according to the joints they are pinned to. The racks gear directly in the A-wheels, and the figures are placed on cylinders as in the Brunsviga. The carrying is done continuously by a train of epicycloidal wheels. The working is thus rendered very smooth, without the jerks which the ordinary carrying tooth produces; but the arrangement has the disadvantage that the resulting figures do not appear in a straight line, a figure followed by a 5, for instance, being already carried half a step forward. This is not a serious matter in the hands of a mathematician or an operator using the machine constantly, but it is serious for casual work. Anyhow, it has prevented the machine from being a commercial success, and it is not any longer made. For ease and rapidity of working it surpasses all others. Since the lazy-tongs allow of an extension equivalent to five turnings of the handle, if the multiplier is 5 or under, one push forward will do the same as five (or less) turns of the handle, and more than two pushes are never required.
FIG. 3.
The Steiger-Egli machine is a multiplication machine, of which fig. 3 gives a picture as it appears to the manipulator. The lower Multiplication machines. part of the figure contains, under the covering plate, a carriage with two rows of windows for the figures marked ff and gg. On pressing down the button W the carriage can be moved to right or left. Under each window is a figure disk, as in the Thomas machine. The upper part has three sections. The one to the right contains the handle K for working the machine, and a button U for setting the machine for addition, multiplication, division, or subtraction. In the middle section a number of parallel slots are seen, with indices which can each be set to one of the numbers 0 to 9. Below each slot, and parallel to it, lies a shaft of square section on which a toothed wheel, the A-wheel, slides to and fro with the index in the slot. Below these wheels again lie 9 toothed racks at right angles to the slots. By setting the index in any slot the wheel below it comes into gear with one of these racks. On moving the rack, the wheels turn their shafts and the figure disks gg opposite to them. The dimensions are such that a motion of a rack through 1 cm. turns the figure disk through one "step" or adds 1 to the figure under the window. The racks are moved by an arrangement contained in the section to the left of the slots. There is a vertical plate called the multiplication table block, or more shortly, the block. From it project rows of horizontal rods of lengths varying from 0 to 9 centimetres. If one of these rows is brought opposite the row of racks and then pushed forward to the right through 9 cm., each rack will move and add to its figure disk a number of units equal to the number of centimetres of the rod which operates on it. The block has a square face divided into a hundred squares. Looking at its face from the right—i.e. from the side where the racks lie—suppose the horizontal rows of these squares numbered from 0 to 9, beginning at the top, and the columns numbered similarly, the 0 being to the right; then the multiplication table for numbers 0 to 9 can be placed on these squares. The row 7 will therefore contain the numbers 63, 56, ... 7, 0. Instead of these numbers, each square receives two "rods" perpendicular to the plate, which may be called the units-rod and the tens-rod. Instead of the number 63 we have thus a tens-rod 6 cm. and a units-rod 3 cm. long. By aid of a lever H the block can be raised or lowered so that any row of the block comes to the level of the racks, the units-rods being opposite the ends of the racks.
The action of the machine will be understood by considering an example. Let it be required to form the product 7 times 385. The indices of three consecutive slots are set to the numbers 3, 8, 5 respectively. Let the windows gg opposite these slots be called a, b, c. Then to the figures shown at these windows we have to add 21, 56, 35 respectively. This is the same thing as adding first the number 165, formed by the units of each place, and next 2530 corresponding to the tens; or again, as adding first 165, and then moving the carriage one step to the right, and adding 253. The first is done by moving the block with the units-rods opposite the racks forward. The racks are then put out of gear, and together with the block brought back to their normal position; the block is moved sideways to bring the tens-rods opposite the racks, and again moved forward, adding the tens, the carriage having also been moved forward as required. This complicated movement, together with the necessary carrying, is actually performed by one turn of the handle. During the first quarter-turn the block moves forward, the units-rods coming into operation. During the second quarter-turn the carriage is put out of gear, and moved one step to the right while the necessary carrying is performed; at the same time the block and the racks are moved back, and the block is shifted so as to bring the tens-rods opposite the racks. During the next two quarter-turns the process is repeated, the block ultimately returning to its original position. Multiplication by a number with more places is performed as in the Thomas. The advantage of this machine over the Thomas in saving time is obvious. Multiplying by 817 requires in the Thomas 16 turns of the handle, but in the Steiger-Egli only 3 turns, with 3 settings of the lever H. If the lever H is set to 1 we have a simple addition machine like the Thomas or the Brunsviga. The inventors state that the product of two 8-figure numbers can be got in 6-7 seconds, the quotient of a 6-figure number by one of 3 figures in the same time, while the square root to 5 places of a 9-figure number requires 18 seconds.
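The essential point is that each turn of the handle splits every digit-product of the multiplication table into its units and its tens, adds the units first, and then adds the tens after a shift of one place. A minimal Python sketch of that decomposition (the list layout and the function name are merely illustrative):

```python
def steiger_turn(setting_digits, multiplier_digit):
    """One turn of the handle: add the units-rods of the multiplication
    table, then shift one place and add the tens-rods.  The digit list is
    most-significant first; the layout is illustrative only."""
    units = sum((d * multiplier_digit) % 10 * 10 ** i
                for i, d in enumerate(reversed(setting_digits)))
    tens = sum((d * multiplier_digit) // 10 * 10 ** i
               for i, d in enumerate(reversed(setting_digits)))
    return units + 10 * tens        # the carriage shift supplies the factor 10

print(steiger_turn([3, 8, 5], 7))   # 165 + 10 * 253 = 2695 = 7 * 385
```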
Machines of far greater powers than the arithmometers mentioned have been invented by Babbage and by Scheutz. A description is impossible without elaborate drawings. The following account will afford some idea of the working of Babbage's difference machine. Imagine a number of striking clocks placed in a row, each with only an hour hand, and with only the striking apparatus retained. Let the hand of the first clock be turned. As it comes opposite a number on the dial the clock strikes that number of times. Let this clock be connected with the second in such a manner that by each stroke of the first the hand of the second is moved from one number to the next, but can only strike when the first comes to rest. If the second hand stands at 5 and the first strikes 3, then when this is done the second will strike 8; the second will act similarly on the third, and so on. Let there be four such clocks with hands set to the numbers 6, 6, 1, 0 respectively. Now set the third clock striking 1, this sets the hand of the fourth clock to 1; strike the second (6), this puts the third to 7 and the fourth to 8. Next strike the first (6); this moves the other hands to 12, 19, 27 respectively, and now repeat the striking of the first. The hand of the fourth clock will then give in succession the numbers 1, 8, 27, 64, &c., being the cubes of the natural numbers. The numbers thus obtained on the last dial will have the differences given by those shown in succession on the dial before it, their differences by the next, and so on till we come to the constant difference on the first dial. A function
y = a + bx + cx² + dx³ + ex⁴
gives, on increasing x always by unity, a set of values for which the fourth difference is constant. We can, by an arrangement like the above, with five clocks calculate y for x = 1, 2, 3, ... to any extent. This is the principle of Babbage's difference machine. The clock dials have to be replaced by a series of dials as in the arithmometers described, and an arrangement has to be made to drive the whole by turning one handle by hand or some other power. Imagine further that with the last clock is connected a kind of typewriter which prints the number, or, better, impresses the number in a soft substance from which a stereotype casting can be taken, and we have a machine which, when once set for a given formula like the above, will automatically print, or prepare stereotype plates for the printing of, tables of the function without any copying or typesetting, thus excluding all possibility of errors. Of this "Difference engine," as Babbage called it, a part was finished in 1834, the government having contributed £17,000 towards the cost. This great expense was chiefly due to the want of proper machine tools.
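The principle is that of tabulating a polynomial by finite differences, so that nothing but addition is ever required. A short Python sketch, assuming the registers are simply integers rather than clock dials, reproduces the cube example of the text:

```python
def difference_engine(registers, steps):
    """Tabulate a polynomial by repeated addition only, as in the
    difference engine.  'registers' holds the constant difference first
    and the function value last; each step adds every register into the
    one after it, working from the function value backwards so that each
    receives the old value of the difference above it."""
    regs = list(registers)
    values = []
    for _ in range(steps):
        for i in range(len(regs) - 1, 0, -1):
            regs[i] += regs[i - 1]       # one "clock" striking the next
        values.append(regs[-1])
    return values

# The four clocks of the text, set to 6, 6, 1, 0, yield the cubes in turn.
print(difference_engine([6, 6, 1, 0], 5))    # [1, 8, 27, 64, 125]
```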
Meanwhile Babbage had conceived the idea of a much more powerful machine, the "analytical engine," intended to perform any series of possible arithmetical operations. Each of these was to be communicated to the machine by aid of cards with holes punched in them into which levers could drop. It was long taken for granted that Babbage left complete plans; the committee of the British Association appointed to consider this question came, however, to the conclusion (Brit. Assoc. Report, 1878, pp. 92-102) that no detailed working drawings existed at all; that the drawings left were only diagrammatic and not nearly sufficient to put into the hands of a draughtsman for making working plans; and "that in the present state of the design it is not more than a theoretical possibility." A full account of the work done by Babbage in connexion with calculating machines, and much else published by others in connexion therewith, is contained in a work published by his son, General Babbage.
FIG. 4.
Slide rules are instruments for performing logarithmic calculations mechanically, and are extensively used, especially where Slide rules. only rough approximations are required. They are almost as old as logarithms themselves. Edmund Gunter drew a "logarithmic line" on his "Scales" as follows (fig. 4):—On a line AB lengths are set off to scale to represent the common logarithms of the numbers 1 2 3 ... 10, and the points thus obtained are marked with these numbers. As log 1 = 0, the beginning A has the number 1 and B the number 10, hence the unit of length is AB, as log 10 = 1. The same division is repeated from B to C. The distance 1,2 thus represents log 2, 1,3 gives log 3, the distance between 4 and 5 gives log 5 - log 4 = log 5/4, and so for others. In order to multiply two numbers, say 2 and 3, we have log 2 × 3 = log 2 + log 3. Hence, setting off the distance 1,2 from 3 forward by the aid of a pair of compasses will give the distance log 2 + log 3, and will bring us to 6 as the required product. Again, if it is required to find 4/5 of 7, set off the distance between 4 and 5 from 7 backwards, and the required number will be obtained. In the actual scales the spaces between the numbers are subdivided into 10 or even more parts, so that from two to three figures may be read. The numbers 2, 3 ... in the interval BC give the logarithms of 10 times the same numbers in the interval AB; hence, if the 2 in the latter means 2 or .2, then the 2 in the former means 20 or 2.
Soon after Gunter's publication (1620) of these "logarithmic lines," Edmund Wingate (1672) constructed the slide rule by repeating the logarithmic scale on a tongue or "slide," which could be moved along the first scale, thus avoiding the use of a pair of compasses. A clear idea of this device can be formed if the scale in fig. 4 be copied on the edge of a strip of paper placed against the line A C. If this is now moved to the right till its 1 comes opposite the 2 on the first scale, then the 3 of the second will be opposite 6 on the top scale, this being the product of 2 and 3; and in this position every number on the top scale will be twice that on the lower. For every position of the lower scale the ratio of the numbers on the two scales which coincide will be the same. Therefore multiplications, divisions, and simple proportions can be solved at once.
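Since the graduations stand at distances proportional to the logarithms, multiplication and division reduce to adding and subtracting lengths. A small Python sketch of the same arithmetic (the scale length chosen is arbitrary):

```python
import math

SCALE = 25.0    # length of one decade in centimetres (arbitrary)

def position(x):
    """Distance of the graduation for x from the left end of the scale."""
    return SCALE * math.log10(x)

def read(pos):
    """Number standing at a given distance along the scale."""
    return 10 ** (pos / SCALE)

# Multiplication 2 x 3: set the slide's 1 against 2 and read opposite 3.
print(round(read(position(2) + position(3)), 6))                  # 6.0
# Four-fifths of 7: step the distance from 4 to 5 backwards from 7.
print(round(read(position(7) - (position(5) - position(4))), 6))  # 5.6
```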
Dr John Perry added log log scales to the ordinary slide rule in order to facilitate the calculation of $a^x$ or $e^x$ according to the formula $\log \log a^x = \log \log a + \log x$. These rules are manufactured by A.G. Thornton of Manchester.
Many different forms of slide rules are now on the market. The handiest for general use is the Gravet rule made by Tavernier-Gravet in Paris, according to instructions of the mathematician V.M.A. Mannheim of the École Polytechnique in Paris. It contains at the back of the slide scales for the logarithms of sines and tangents so arranged that they can be worked with the scale on the front. An improved form is now made by Davis and Son of Derby, who engrave the scales on white celluloid instead of on box-wood, thus greatly facilitating the readings. These scales have the distance from one to ten about twice that in fig. 4. Tavernier-Gravet makes them of that size and longer, even ½ metre long. But they then become somewhat unwieldy, though they allow of reading to more figures. To get a handy long scale Professor G. Fuller has constructed a spiral slide rule drawn on a cylinder, which admits of reading to three and four figures. The handiest of all is perhaps the "Calculating Circle" by Boucher, made in the form of a watch. For various purposes special adaptations of the slide rules are met with—for instance, in various exposure meters for photographic purposes. General Strachey introduced slide rules into the Meteorological Office for performing special calculations. At some blast furnaces a slide rule has been used for determining the amount of coke and flux required for any weight of ore. Near the balance a large logarithmic scale is fixed with a slide which has three indices only. A load of ore is put on the scales, and the first index of the slide is put to the number giving the weight, when the second and third point to the weights of coke and flux required.
By placing a number of slides side by side, drawn if need be to different scales of length, more complicated calculations may be performed. It is then convenient to make the scales circular. A number of rings or disks are mounted side by side on a cylinder, each having on its rim a log-scale.
The "Callendar Cable Calculator," invented by Harold Hastings and manufactured by Robert W. Paul, is of this kind. In it a number of disks are mounted on a common shaft, on which each turns freely unless a button is pressed down whereby the disk is clamped to the shaft. Another disk is fixed to the shaft. In front of the disks lies a fixed zero line. Let all disks be set to zero and the shaft be turned, with the first disk clamped, till a desired number appears on the zero line; let then the first disk be released and the second clamped and so on; then the fixed disk will add up all the turnings and thus give the product of the numbers shown on the several disks. If the division on the disks is drawn to different scales, more or less complicated calculations may be rapidly performed. Thus if for some purpose the value of say ab³ √c is required for many different values of a, b, c, three movable disks would be needed with divisions drawn to scales of lengths in the proportion 1: 3: ½. The instrument now on sale contains six movable disks.
Continuous Calculating Machines or Integrators.—In order to measure the length of a curve, such as the road on a map, a Curvometers. wheel is rolled along it. For one revolution of the wheel the path described by its point of contact is equal to the circumference of the wheel. Thus, if a cyclist counts the number of revolutions of his front wheel he can calculate the distance ridden by multiplying that number by the circumference of the wheel. An ordinary cyclometer is nothing but an arrangement for counting these revolutions, but it is graduated in such a manner that it gives at once the distance in miles. On the same principle depend a number of instruments which, under various fancy names, serve to measure the length of any curve; they are in the shape of a small meter chiefly for the use of cyclists. They all have a small wheel which is rolled along the curve to be measured, and this sets a hand in motion which gives the reading on a dial. Their accuracy is not very great, because it is difficult to place the wheel so on the paper that the point of contact lies exactly over a given point; the beginning and end of the readings are therefore badly defined. Besides, it is not easy to guide the wheel along the curve to which it should always lie tangentially. To obviate this defect more complicated curvometers or kartometers have been devised. The handiest seems to be that of G. Coradi. He uses two wheels; the tracing-point, halfway between them, is guided along the curve, the line joining the wheels being kept normal to the curve. This is pretty easily done by eye; a constant deviation of 8° from this direction produces an error of only 1%. The sum of the two readings gives the length. E. Fleischhauer uses three, five or more wheels arranged symmetrically round a tracer whose point is guided along the curve; the planes of the wheels all pass through the tracer, and the wheels can only turn in one direction. The sum of the readings of all the wheels gives approximately the length of the curve, the approximation increasing with the number of the wheels used. It is stated that with three wheels practically useful results can be obtained, although in this case the error, if the instrument is consistently handled so as always to produce the greatest inaccuracy, may be as much as 5%.
FIG. 5.
Planimeters are instruments for the determination by mechanical means of the area of any figure. A pointer, generally called the "tracer," is guided round the boundary of the figure, and then the area is read off on the recording apparatus of the instrument. The simplest and most useful is Amsler's (fig. 5). It consists of two bars of metal OQ and QT, which are hinged together at Q. At O is a needle-point which is driven into the drawing-board, and at T is the tracer. As this is guided round the boundary of the figure a wheel W mounted on QT rolls on the paper, and the turning of this wheel measures, to some known scale, the area. We shall give the theory of this instrument fully in an elementary manner by aid of geometry. The theory of other planimeters can then be easily understood.
FIG. 6.
Consider the rod QT with the wheel W, without the arm OQ. Let it be placed with the wheel on the paper, and now moved perpendicular to itself from AC to BD (fig. 6). The rod sweeps over, or generates, the area of the rectangle ACDB = lp, where l denotes the length of the rod and p the distance AB through which it has been moved. This distance, as measured by the rolling of the wheel, which acts as a curvometer, will be called the "roll" of the wheel and be denoted by w. In this case p = w, and the area P is given by P = wl. Let the circumference of the wheel be divided into say a hundred equal parts u; then w registers the number of u's rolled over, and w therefore gives the number of areas lu contained in the rectangle. By suitably selecting the radius of the wheel and the length l, this area lu may be any convenient unit, say a square inch or square centimetre. By changing l the unit will be changed.
FIG. 7.
Again, suppose the rod to turn (fig. 7) about the end Q, then it will describe an arc of a circle, and the rod will generate an area ½l²θ, where θ is the angle AQB through which the rod has turned. The wheel will roll over an arc cθ, where c is the distance of the wheel from Q. The "roll" is now w = cθ; hence the area generated is
P = ½ (l²/c) w,
and is again determined by w.
FIG. 8.
Next let the rod be moved parallel to itself, but in a direction not perpendicular to itself (fig. 8). The wheel will now not simply roll. Consider a small motion of the rod from QT to Q′T′. This may be resolved into the motion to RR′ perpendicular to the rod, whereby the rectangle QTR′R is generated, and the sliding of the rod along itself from RR′ to Q′T′. During this second step no area will be generated. During the first step the roll of the wheel will be QR, whilst during the second step there will be no roll at all. The roll of the wheel will therefore measure the area of the rectangle which equals the parallelogram QTT′Q′. If the whole motion of the rod be considered as made up of a very great number of small steps, each resolved as stated, it will be seen that the roll again measures the area generated. But it has to be noticed that now the wheel does not only roll, but also slips, over the paper. This, as will be pointed out later, may introduce an error in the reading.
FIG. 9.
We can now investigate the most general motion of the rod. We again resolve the motion into a number of small steps. Let (fig. 9) AB be one position, CD the next after a step so small that the arcs AC and BD over which the ends have passed may be considered as straight lines. The area generated is ABDC. This motion we resolve into a step from AB to CB′, parallel to AB and a turning about C from CB′ to CD, steps such as have been investigated. During the first, the "roll" will be p the altitude of the parallelogram; during the second will be cθ. Therefore
w = p + cθ.
FIG. 10.
The area generated is lp + ½ l²θ, or, expressing p in terms of w, lw + (½l² - lc)θ. For a finite motion we get the area equal to the sum of the areas generated during the different steps. But the wheel will continue rolling, and give the whole roll as the sum of the rolls for the successive steps. Let then w denote the whole roll (in fig. 10), and let α denote the sum of all the small turnings θ; then the area is
P = lw + (½l² - lc)α . . . (1)
Here α is the angle which the last position of the rod makes with the first. In all applications of the planimeter the rod is brought back to its original position. Then the angle α is either zero, or it is 2π if the rod has been once turned quite round.
Hence in the first case we have
P = lw . . . (2a)
and w gives the area as in case of a rectangle.
In the other case
P = lw + lC . . . (2b)
where C = (½l-c)2π, if the rod has once turned round. The number C will be seen to be always the same, as it depends only on the dimensions of the instrument. Hence now again the area is determined by w if C is known.
FIG. 11.
Thus it is seen that the area generated by the motion of the rod can be measured by the roll of the wheel; it remains to show how any given area can be generated by the rod. Let the rod move in any manner but return to its original position. Q and T then describe closed curves. Such motion may be called cyclical. Here the theorem holds:—If a rod QT performs a cyclical motion, then the area generated equals the difference of the areas enclosed by the paths of T and Q respectively. The truth of this proposition will be seen from a figure. In fig. 11 different positions of the moving rod QT have been marked, and its motion can be easily followed. It will be seen that every part of the area TT′BB′ will be passed over once and always by a forward motion of the rod, whereby the wheel will increase its roll. The area AA′QQ′ will also be swept over once, but with a backward roll; it must therefore be counted as negative. The area between the curves is passed over twice, once with a forward and once with a backward roll; it therefore counts once positive and once negative; hence not at all. In more complicated figures it may happen that the area within one of the curves, say TT′BB′, is passed over several times, but then it will be passed over once more in the forward direction than in the backward one, and thus the theorem will still hold.
FIG. 12.
To use Amsler's planimeter, place the pole O on the paper outside the figure to be measured. Then the area generated by QT is that of the figure, because the point Q moves on an arc of a circle to and fro enclosing no area. At the same time the rod comes back without making a complete rotation. We have therefore in formula (1), α = 0; and hence
P = lw,
which is read off. But if the area is too large the pole O may be placed within the area. The rod describes the area between the boundary of the figure and the circle with radius r = OQ, whilst the rod turns once completely round, making α = 2π. The area measured by the wheel is by formula (1), lw + (½l²-lc) 2π.
To this the area of the circle πr² must be added, so that now
P = lw + (½l²-lc)2π + πr²,
or
P = lw + C,
where
C = (½l²-lc)2π + πr²,
is a constant, as it depends on the dimensions of the instrument alone. This constant is given with each instrument.
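The formulae (1) and (2) can be checked numerically by simulating the motion of the two arms and accumulating the roll of the wheel step by step. The sketch below does this for a pole placed outside the figure, so that P = lw; the dimensions, the choice of a circle as test figure, and the helper function are all illustrative, and the agreement is only as close as the step size allows.

```python
import math

def planimeter_roll(path, r, l, c):
    """Numerically imitate an Amsler planimeter: pole O at the origin,
    pole arm OQ of length r, tracer arm QT of length l, recording wheel
    at distance c from the hinge Q.  Returns the accumulated roll w of
    the wheel as the tracer T runs once round the closed 'path'.
    All dimensions are illustrative."""

    def hinge(T, Q_prev):
        # Q is an intersection of the circles |Q| = r and |Q - T| = l;
        # take the one nearer the previous hinge position, for continuity.
        d = math.hypot(T[0], T[1])
        a = (r * r - l * l + d * d) / (2 * d)
        h = math.sqrt(max(r * r - a * a, 0.0))
        mx, my = a * T[0] / d, a * T[1] / d
        cands = [(mx - h * T[1] / d, my + h * T[0] / d),
                 (mx + h * T[1] / d, my - h * T[0] / d)]
        return min(cands, key=lambda q: (q[0] - Q_prev[0]) ** 2 + (q[1] - Q_prev[1]) ** 2)

    w = 0.0
    T0 = path[0]
    Q0 = hinge(T0, (r, 0.0))
    for i in range(1, len(path) + 1):
        T1 = path[i % len(path)]
        Q1 = hinge(T1, Q0)
        # contact point of the wheel, at distance c from Q along the rod
        P0 = (Q0[0] + c * (T0[0] - Q0[0]) / l, Q0[1] + c * (T0[1] - Q0[1]) / l)
        P1 = (Q1[0] + c * (T1[0] - Q1[0]) / l, Q1[1] + c * (T1[1] - Q1[1]) / l)
        # the wheel records only the displacement perpendicular to the rod;
        # its sliding along the rod does not register
        ux, uy = (T0[0] - Q0[0]) / l, (T0[1] - Q0[1]) / l
        w += -(P1[0] - P0[0]) * uy + (P1[1] - P0[1]) * ux
        T0, Q0 = T1, Q1
    return w

# Unit circle centred at (6, 0), pole at the origin outside the figure: P = lw.
arm = 3.0
path = [(6 + math.cos(2 * math.pi * k / 4000), math.sin(2 * math.pi * k / 4000))
        for k in range(4000)]
# the sign of w depends on the sense in which the boundary is traced
print(abs(arm * planimeter_roll(path, r=5.0, l=arm, c=1.0)), math.pi)
```

The printed value agrees with π to within the discretization error of the steps.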
FIG. 14.
FIG. 13.
Amsler's planimeters are made either with a rod QT of fixed length, which gives the area therefore in terms of a fixed unit, say in square inches, or else the rod can be moved in a sleeve to which the arm OQ is hinged (fig. 13). This makes it possible to change the unit lu, which is proportional to l.
In the planimeters described the recording or integrating apparatus is a smooth wheel rolling on the paper or on some other surface. Amsler has described another recorder, viz. a wheel with a sharp edge. This will roll on the paper but not slip. Let the rod QT carry with it an arm CD perpendicular to it. Let there be mounted on it a wheel W, which can slip along and turn about it. If now QT is moved parallel to itself to Q′T′, then W will roll without slipping parallel to QT, and slip along CD. This amount of slipping will equal the perpendicular distance between QT and Q′T′, and therefore serve to measure the area swept over like the wheel in the machine already described. The turning of the rod will also produce slipping of the wheel, but it will be seen without difficulty that this will cancel during a cyclical motion of the rod, provided the rod does not perform a whole rotation.
FIG. 15.
The first planimeter was made on the following principles:—A frame FF (fig. 15) can move parallel to OX. It carries a rod TT Early forms. movable along its own length, hence the tracer T can be guided along any curve ATB. When the rod has been pushed back to Q′Q, the tracer moves along the axis OX. On the frame a cone VCC′ is mounted with its axis sloping so that its top edge is horizontal and parallel to TT′, whilst its vertex V is opposite Q′. As the frame moves it turns the cone. A wheel W is mounted on the rod at T′, or on an axis parallel to and rigidly connected with it. This wheel rests on the top edge of the cone. If now the tracer T, when pulled out through a distance y above Q, be moved parallel to OX through a distance dx, the frame moves through an equal distance, and the cone turns through an angle dθ proportional to dx. The wheel W rolls on the cone to an amount again proportional to dx, and also proportional to y, its distance from V. Hence the roll of the wheel is proportional to the area ydx described by the rod QT. As T is moved from A to B along the curve the roll of the wheel will therefore be proportional to the area AA′B′B. If the curve is closed, and the tracer moved round it, the roll will measure the area independent of the position of the axis OX, as will be seen by drawing a figure. The cone may with advantage be replaced by a horizontal disk, with its centre at V; this allows of y being negative. It may be noticed at once that the roll of the wheel gives at every moment the area A′ATQ. It will therefore allow of registering a set of values of $\textstyle\int_{a}^{x} y\, dx$ for any values of x, and thus of tabulating the values of any indefinite integral. In this it differs from Amsler's planimeter. Planimeters of this type were first invented in 1814 by the Bavarian engineer Hermann, who, however, published nothing. They were reinvented by Prof. Tito Gonnella of Florence in 1824, and by the Swiss engineer Oppikofer, and improved by Ernst in Paris, the astronomer Hansen in Gotha, and others (see Henrici, British Association Report, 1894). But all were driven out of the field by Amsler's simpler planimeter.
FIG. 16.
FIG. 17.
Altogether different from the planimeters described is the hatchet planimeter, invented by Captain Prytz, a Dane, and made by Herr Hatchet planimeters. Cornelius Knudson in Copenhagen. It consists of a single rigid piece like fig. 16. The one end T is the tracer, the other Q has a sharp hatchet-like edge. If this is placed with QT on the paper and T is moved along any curve, Q will follow, describing a "curve of pursuit." In consequence of the sharp edge, Q can only move in the direction of QT, but the whole can turn about Q. Any small step forward can therefore be considered as made up of a motion along QT, together with a turning about Q. The latter motion alone generates an area. If therefore a line OA = QT is turning about a fixed point O, always keeping parallel to QT, it will sweep over an area equal to that generated by the more general motion of QT. Let now (fig. 17) QT be placed on OA, and T be guided round the closed curve in the sense of the arrow. Q will describe a curve OSB. It may be made visible by putting a piece of "copying paper" under the hatchet. When T has returned to A the hatchet has the position BA. A line turning from OA about O kept parallel to QT will describe the circular sector OAC, which is equal in magnitude and sense to AOB. This therefore measures the area generated by the motion of QT. To make this motion cyclical, suppose the hatchet turned about A till Q comes from B to O. Hereby the sector AOB is again described, and again in the positive sense, if it is remembered that it turns about the tracer T fixed at A. The whole area now generated is therefore twice the area of this sector, or equal to OA. OB, where OB is measured along the arc. According to the theorem given above, this area also equals the area of the given curve less the area OSBO. To make this area disappear, a slight modification of the motion of QT is required. Let the tracer T be moved, both from the first position OA and the last BA of the rod, along some straight line AX. Q describes curves OF and BH respectively. Now begin the motion with T at some point R on AX, and move it along this line to A, round the curve and back to R. Q will describe the curve DOSBED, if the motion is again made cyclical by turning QT with T fixed at A. If R is properly selected, the path of Q will cut itself, and parts of the area will be positive, parts negative, as marked in the figure, and may therefore be made to vanish. When this is done the area of the curve will equal twice the area of the sector RDE. It is therefore equal to the arc DE multiplied by the length QT; if the latter equals 10 in., then 10 times the number of inches contained in the arc DE gives the number of square inches contained within the given figure. If the area is not too large, the arc DE may be replaced by the straight line DE.
To use this simple instrument as a planimeter requires the possibility of selecting the point R. The geometrical theory here given has so far failed to give any rule. In fact, every line through any point in the curve contains such a point. The analytical theory of the inventor, which is very similar to that given by F.W. Hill (Phil. Mag. 1894), is too complicated to repeat here. The integrals expressing the area generated by QT have to be expanded in a series. By retaining only the most important terms a result is obtained which comes to this, that if the mass-centre of the area be taken as R, then A may be any point on the curve. This is only approximate. Captain Prytz gives the following instructions:—Take a point R as near as you can guess to the mass-centre, put the tracer T on it, the knife-edge Q outside; make a mark on the paper by pressing the knife-edge into it; guide the tracer from R along a straight line to a point A on the boundary, round the boundary, and back from A to R; lastly, make again a mark with the knife-edge, and measure the distance c between the marks; then the area is nearly cl, where l = QT. A nearer approximation is obtained by repeating the operation after turning QT through 180° from the original position, and using the mean of the two values of c thus obtained. The greatest dimension of the area should not exceed ½l, otherwise the area must be divided into parts which are determined separately. This condition being fulfilled, the instrument gives very satisfactory results, especially if the figures to be measured, as in the case of indicator diagrams, are much of the same shape, for in this case the operator soon learns where to put the point R.
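The behaviour of the hatchet can likewise be imitated numerically: the knife edge is advanced along the momentary direction of the rod, which is exactly a discrete curve of pursuit, and the area is then estimated as l times the distance between the first and last positions of the knife edge. In the sketch below the figure, the rod length and the starting direction of the rod are all illustrative, and the estimate is, as the text says, only approximate.

```python
import math

def prytz_planimeter(path, l=10.0):
    """Simulate Prytz's hatchet planimeter.  The tracer T follows 'path';
    the knife edge Q, held at distance l behind T, can only advance along
    the rod QT, so it traces a curve of pursuit.  The area estimate is
    l times the distance between Q's first and last positions."""
    Q = (path[0][0] - l, path[0][1])      # lay the rod down pointing along +x
    Q_start = Q
    for T in path[1:]:
        dx, dy = T[0] - Q[0], T[1] - Q[1]
        d = math.hypot(dx, dy)
        Q = (T[0] - l * dx / d, T[1] - l * dy / d)   # knife edge advances along the rod
    c = math.hypot(Q[0] - Q_start[0], Q[1] - Q_start[1])
    return c * l

# Unit circle traced from its centre (the mass-centre), as Prytz directs:
# out to the rim, once round, and back to the centre.
out = [(k / 500, 0.0) for k in range(501)]
rim = [(math.cos(t), math.sin(t)) for t in
       (2 * math.pi * k / 4000 for k in range(1, 4001))]
back = [(1 - k / 500, 0.0) for k in range(1, 501)]
print(prytz_planimeter(out + rim + back), math.pi)   # agree to a few per cent
```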
Integrators serve to evaluate a definite integral $\textstyle\int_{a}^{b} f(x)\, dx$. If we plot out the curve whose equation is y = f(x), the integral ∫ydx between the proper limits represents the area of a figure bounded by the curve, the axis of x, and the ordinates at x=a, x=b. Hence if the curve is drawn, any planimeter may be used for finding the value of the integral. In this sense planimeters are integrators. In fact, a planimeter may often be used with advantage to solve problems more complicated than the determination of a mere area, by converting the one problem graphically into the other. We give an example:—
FIG. 18.
Let the problem be to determine for the figure ABG (fig. 18), not only the area, but also the first and second moment with regard to the axis XX. At a distance a draw a line, C′D′, parallel to XX. In the figure draw a number of lines parallel to AB. Let CD be one of them. Draw C and D vertically upwards to C′D′, join these points to some point O in XX, and mark the points C1D1 where OC′ and OD′ cut CD. Do this for a sufficient number of lines, and join the points C1D1 thus obtained. This gives a new curve, which may be called the first derived curve. By the same process get a new curve from this, the second derived curve. By aid of a planimeter determine the areas P, P1, P2, of these three curves. Then, if x is the distance of the mass-centre of the given area from XX; x1 the same quantity for the first derived figure, and I = Ak² the moment of inertia of the first figure, k its radius of gyration, with regard to XX as axis, the following relations are easily proved:—
Px = aP1; P1x1 = aP2; I = aP1x1 = a²P2; k² = xx1,
which determine P, x and I or k. Amsler has constructed an integrator which serves to determine these quantities by guiding a tracer once round the boundary of the given figure (see below). Again, it may be required to find the value of an integral ∫yφ(x)dx between given limits where φ(x) is a simple function like sin nx, and where y is given as the ordinate of a curve. The harmonic analysers described below are examples of instruments for evaluating such integrals.
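The stated relations are easily verified numerically for a figure whose moments about XX are known exactly; the sketch below uses a rectangle and purely illustrative dimensions, forming the derived chords w·y/a and w·y²/a² directly instead of by the geometrical construction.

```python
# A numerical check of the relations quoted above, for a rectangle of
# breadth b and height h whose lower edge lies at a distance y0 above the
# axis XX.  All dimensions, and the distance a of the auxiliary line, are
# illustrative.
a, b, h, y0 = 2.0, 3.0, 1.5, 1.0
n = 1000
dy = h / n
P = P1 = P2 = M = M1 = I = 0.0
for i in range(n):
    y = y0 + (i + 0.5) * dy      # height of one horizontal chord
    w = b                        # chord of the given rectangle
    w1 = w * y / a               # chord of the first derived figure
    w2 = w1 * y / a              # chord of the second derived figure
    P += w * dy
    P1 += w1 * dy
    P2 += w2 * dy
    M += w * y * dy              # first moment of the given figure
    M1 += w1 * y * dy            # first moment of the first derived figure
    I += w * y * y * dy          # moment of inertia about XX
x_bar, x1_bar = M / P, M1 / P1   # mass-centre distances from XX
print(abs(P * x_bar - a * P1) < 1e-9)        # P x  = a P1
print(abs(P1 * x1_bar - a * P2) < 1e-9)      # P1 x1 = a P2
print(abs(I - a * P1 * x1_bar) < 1e-9)       # I = a P1 x1 = a^2 P2
print(abs(x_bar * x1_bar - I / P) < 1e-9)    # k^2 = x x1
```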
FIG. 19.
FIG. 20.
Amsler has modified his planimeter in such a manner that instead of the area it gives the first or second moment of a figure about an axis in its plane. An instrument giving all three quantities simultaneously is known as Amsler's integrator or moment-planimeter. It has one tracer, but three recording wheels. It is mounted on a Amsler's Integrator. carriage which runs on a straight rail (fig. 19). This carries a horizontal disk A, movable about a vertical axis Q. Slightly more than half the circumference is circular with radius 2a, the other part with radius 3a. Against these gear two disks, B and C, with radii a; their axes are fixed in the carriage. From the disk A extends to the left a rod OT of length l, on which a recording wheel W is mounted. The disks B and C have also recording wheels, W1 and W2, the axis of W1 being perpendicular, that of W2 parallel to OT. If now T is guided round a figure F, O will move to and fro in a straight line. This part is therefore a simple planimeter, in which the one end of the arm moves in a straight line instead of in a circular arc. Consequently, the "roll" of W will record the area of the figure. Imagine now that the disks B and C also receive arms of length l from the centres of the disks to points T1 and T2, and in the direction of the axes of the wheels. Then these arms with their wheels will again be planimeters. As T is guided round the given figure F, these points T1 and T2 will describe closed curves, F1 and F2, and the "rolls" of W1 and W2 will give their areas A1 and A2. Let XX (fig. 20) denote the line, parallel to the rail, on which O moves; then when T lies on this line, the arm BT1 is perpendicular to XX, and CT2 parallel to it. If OT is turned through an angle θ, clockwise, BT1 will turn counter-clockwise through an angle 2θ, and CT2 through an angle 3θ, also counter-clockwise. If in this position T is moved through a distance x parallel to the axis XX, the points T1 and T2 will move parallel to it through an equal distance. If now the first arm is turned through a small angle dθ, moved back through a distance x, and lastly turned back through the angle dθ, the tracer T will have described the boundary of a small strip of area. We divide the given figure into such strips. Then to every such strip will correspond a strip of equal length x of the figures described by T1 and T2.
The distances of the points, T, T1, T2, from the axis XX may be called y, y1, y2. They have the values
y = l sin θ, y1 = l cos 2θ, y2 = -l sin 3θ,
from which
dy = l cos θ.dθ, dy1 = - 2l sin 2θ.dθ, dy2 = - 3l cos 3θ.dθ.
The areas of the three strips are respectively
dA = xdy, dA1 = xdy1, dA2 = xdy2.
Now dy1 can be written dy1 = - 4l sin θ cos θdθ = - 4 sin θdy; therefore
dA1 = - 4 sin θ.dA = - $\tfrac{4}{l}$ ydA;
whence
A1 = - $\tfrac{4}{l}$ ∫ydA = - $\tfrac{4}{l}$ Ay,
where A is the area of the given figure, and y the distance of its mass-centre from the axis XX. But A1 is the area of the second figure F1, which is proportional to the reading of W1. Hence we may say
Ay = C1w1,
where C1 is a constant depending on the dimensions of the instrument. The negative sign in the expression for A1 is got rid of by numbering the wheel W1 the other way round.
Again
dy2 = - 3l cos θ {4 cos² θ - 3} dθ = - 3 {4 cos² θ - 3} dy = - 3 {1 - $\tfrac{4}{l^2}$ y²} dy,
which gives
dA2 = $\tfrac{12}{l^2}$y²dA - 3dA,
and
A2 = $\tfrac{12}{l^2}$ ∫y²dA - 3A.
But the integral gives the moment of inertia I of the area A about the axis XX. As A2 is proportional to the roll of W2, A to that of W, we can write
I = Cw + C2 w2,
Ay = C1 w1,
A = Cc w.
If a line be drawn parallel to the axis XX at the distance y, it will pass through the mass-centre of the given figure. If this represents the section of a beam subject to bending, this line gives for a proper choice of XX the neutral fibre. The moment of inertia for it will be I - Ay². Thus the instrument gives at once all those quantities which are required for calculating the strength of the beam under bending. One chief use of this integrator is for the calculation of the displacement and stability of a ship from the drawings of a number of sections. It will be noticed that the length of the figure in the direction of XX is only limited by the length of the rail.
This integrator is also made in a simplified form without the wheel W2. It then gives the area and first moment of any figure.
While an integrator determines the value of a definite integral, hence a mere constant, an integraph gives the value of an indefinite integral, which is a function of x. Analytically if y is a given function f(x) of x and
Y = $\textstyle\int_{c}^{x} y\, dx$ or Y = ∫ydx + const.
the function Y has to be determined from the condition
$\tfrac{dY}{dx}$ = y.
Graphically y = f(x) is either given by a curve, or the graph of the equation is drawn: y, therefore, and similarly Y, is a length. But $\tfrac{dY}{dx}$ is in this case a mere number, and cannot equal a length y. Hence we introduce an arbitrary constant length a, the unit to which the integraph draws the curve, and write
$\tfrac{dY}{dx}$ = $\tfrac{y}{a}$ and aY = ∫ydx
Now for the Y-curve $\tfrac{dY}{dx}$ = tan φ, where φ is the angle between the tangent to the curve, and the axis of x. Our condition therefore becomes
tan φ = $\tfrac{y}{a}$.
FIG. 21.
This φ is easily constructed for any given point on the y-curve:—From the foot B′ (fig. 21) of the ordinate y = B′B set off, as in the figure, B′D = a, then angle BDB′ = φ. Let now DB′ with a perpendicular B′B move along the axis of x, whilst B follows the y-curve, then a pen P on B′B will describe the Y-curve provided it moves at every moment in a direction parallel to BD. The object of the integraph is to draw this new curve when the tracer of the instrument is guided along the y-curve.
The first to describe such instruments was Abdank-Abakanowicz, who in 1889 published a book in which a variety of mechanisms to obtain the object in question are described. Some years later G. Coradi, in Zürich, carried out his ideas. Before this was done, C.V. Boys, without knowing of Abdank-Abakanowicz's work, actually made an integraph which was exhibited at the Physical Society in 1881. Both make use of a sharp edge wheel. Such a wheel will not slip sideways; it will roll forwards along the line in which its plane intersects the plane of the paper, and while rolling will be able to turn gradually about its point of contact. If then the angle between its direction of rolling and the x-axis be always equal to φ, the wheel will roll along the Y-curve required. The axis of x is fixed only in direction; shifting it parallel to itself adds a constant to Y, and this gives the arbitrary constant of integration.
In fact, if Y shall vanish for x = c, or if
Y = $\textstyle\int_{c}^{x} y\, dx$,
then the axis of x has to be drawn through that point on the y-curve which corresponds to x = c.
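Numerically the rule tan φ = y/a amounts to advancing the ordinate of the integral curve by (y/a)dx at every step. A minimal sketch (the step size and test curve are illustrative):

```python
import math

def integraph(y_samples, dx, a=1.0, Y0=0.0):
    """Numerically imitate an integraph: at every step the pen advances
    in a direction whose slope is tan(phi) = y/a, so that a*Y becomes the
    integral of y dx.  The unit length a and the starting ordinate Y0
    (the constant of integration) are arbitrary."""
    Y = [Y0]
    for y in y_samples:
        Y.append(Y[-1] + (y / a) * dx)      # dY = tan(phi) dx
    return Y

# Example: the y-curve cos x should give the integral curve sin x (a = 1).
dx = 0.001
xs = [k * dx for k in range(1571)]          # 0 to about pi/2
Y = integraph([math.cos(x) for x in xs], dx)
print(Y[-1])                                # close to sin(pi/2) = 1
```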
FIG. 22.
In Coradi's integraph a rectangular frame F1F2F3F4 (fig. 22) rests with four rollers R on the drawing board, and can roll freely in the direction OX, which will be called the axis of the instrument. On the front edge F1F2 travels a carriage AA′ supported at A′ on another rail. A bar DB can turn about D, fixed to the frame in its axis, and slide through a point B fixed in the carriage AA′. Along it a block K can slide. On the back edge F3F4 of the frame another carriage C travels. It holds a vertical spindle with the knife-edge wheel at the bottom. At right angles to the plane of the wheel, the spindle has an arm GH, which is kept parallel to a similar arm attached to K perpendicular to DB. The plane of the knife-edge wheel r is therefore always parallel to DB. If now the point B is made to follow a curve whose y is measured from OX, we have in the triangle BDB′, with the angle φ at D,
tan φ = y/a,
where a = DB′ is the constant base to which the instrument works. The point of contact of the wheel r or any point of the carriage C will therefore always move in a direction making an angle φ with the axis of x, whilst it moves in the x-direction through the same distance as the point B on the y-curve—that is to say, it will trace out the integral curve required, and so will any point rigidly connected with the carriage C. A pen P attached to this carriage will therefore draw the integral curve. Instead of moving B along the y-curve, a tracer T fixed to the carriage A is guided along it. For using the instrument the carriage is placed on the drawing-board with the front edge parallel to the axis of y, the carriage A being clamped in the central position with A at E and B at B′ on the axis of x. The tracer is then placed on the x-axis of the y-curve and clamped to the carriage, and the instrument is ready for use. As it is convenient to have the integral curve placed directly opposite to the y-curve so that corresponding values of y or Y are drawn on the same line, a pen P′ is fixed to C in a line with the tracer.
Boys' integraph was invented during a sleepless night, and during the following days carried out as a working model, which gives highly satisfactory results. It is ingenious in its simplicity, and a direct realization as a mechanism of the principles explained in connexion with fig. 21. The line B′B is represented by the edge of an ordinary T-square sliding against the edge of a drawing-board. The points B and P are connected by two rods BE and EP, jointed at E. At B, E and P are small pulleys of equal diameters. Over these an endless string runs, ensuring that the pulleys at B and P always turn through equal angles. The pulley at B is fixed to a rod which passes through the point D, which itself is fixed in the T-square. The pulley at P carries the knife-edge wheel. If then B and P are kept on the edge of the T-square, and B is guided along the curve, the wheel at P will roll along the Y-curve, it having been originally set parallel to BD. To give the wheel at P sufficient grip on the paper, a small loaded three-wheeled carriage, the knife-edge wheel P being one of its wheels, is added. If a piece of copying paper is inserted between the wheel P and the drawing paper the Y-curve is drawn very sharply.
Integraphs have also been constructed, by aid of which ordinary differential equations, especially linear ones, can be solved, the solution being given as a curve. The first suggestion in this direction was made by Lord Kelvin. So far no really useful instrument has been made, although the ideas seem sufficiently developed to enable a skilful instrument-maker to produce one should there be sufficient demand for it. Sometimes a combination of graphical work with an integraph will serve the purpose. This is the case if the variables are separated, hence if the equation
Xdx + Ydy = 0
has to be integrated where X = p(x), Y = φ(y) are given as curves. If we write
au = ∫Xdx, av = ∫Ydy,
then u as a function of x, and v as a function of y can be graphically found by the integraph. The general solution is then
u + v = c
with the condition, for the determination for c, that y = y0, for x = x0. This determines c = u0 + v0, where u0 and v0 are known from the graphs of u and v. From this the solution as a curve giving y a function of x can be drawn:—For any x take u from its graph, and find the y for which v = c - u, plotting these y against their x gives the curve required. If a periodic function y of x is given by its graph for one period c, it can, according to the theory of Fourier's Series, be expanded in a series.
y = A0 + A1 cos θ + A2 cos 2θ + ... + An cos nθ + ...
+ B1 sin θ + B2 sin 2θ + ... + Bn sin nθ + ...
where θ = $\tfrac{2\pi x}{c}$.
The absolute term A0 equals the mean ordinate of the curve, and can therefore be determined by any planimeter. The other co-efficients are
An = $\textstyle\tfrac{1}{\pi}\int_{0}^{2\pi}$ y cos nθ.dθ;
Bn = $\textstyle\tfrac{1}{\pi}\int_{0}^{2\pi}$ y sin nθ.dθ.
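These are the integrals which any harmonic analyser evaluates mechanically; replacing them by sums over equally spaced ordinates gives the following sketch, in which the number of samples and the test curve are illustrative:

```python
import math

def fourier_coefficients(samples, n_max):
    """Evaluate the coefficients An and Bn of the text from equally spaced
    samples of one period of the curve - numerically, what a harmonic
    analyser does mechanically."""
    N = len(samples)
    A0 = sum(samples) / N            # the mean ordinate (a planimeter's task)
    A, B = [], []
    for n in range(1, n_max + 1):
        A.append(2.0 / N * sum(yk * math.cos(2 * math.pi * n * k / N)
                               for k, yk in enumerate(samples)))
        B.append(2.0 / N * sum(yk * math.sin(2 * math.pi * n * k / N)
                               for k, yk in enumerate(samples)))
    return A0, A, B

# Test curve with known coefficients: y = 1 + 3 cos(theta) - 2 sin(2 theta).
thetas = [2 * math.pi * k / 720 for k in range(720)]
curve = [1 + 3 * math.cos(t) - 2 * math.sin(2 * t) for t in thetas]
A0, A, B = fourier_coefficients(curve, 3)
print(A0, A, B)     # A0 ~ 1, A ~ [3, 0, 0], B ~ [0, -2, 0]
```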
A harmonic analyser is an instrument which determines these integrals, and is therefore an integrator. The first instrument of this kind is due to Lord Kelvin (Proc. Roy. Soc., vol. xxiv., 1876). Since then several others have been invented (see Dyck's Catalogue; Henrici, Phil. Mag., July 1894; Phys. Soc., 9th March; Sharp, Phil. Mag., July 1894; Phys. Soc., 13th April). In Lord Kelvin's instrument the curve to be analysed is drawn on a cylinder whose circumference equals the period c, and the sine and cosine terms of the integral are introduced by aid of simple harmonic motion. Sommerfeld and Wiechert, of Königsberg, avoid this motion by turning the cylinder about an axis perpendicular to that of the cylinder. Both these machines are large, and practically fixtures in the room where they are used. The first has done good work in the Meteorological Office in London in the analysis of meteorological curves. Quite different and simpler constructions can be used, if the integrals determining An and Bn be integrated by parts. This gives
nAn = - $\textstyle\tfrac{1}{\pi}\int_{0}^{2\pi}$ sin nθ.dy;
nBn = $\textstyle\tfrac{1}{\pi}\int_{0}^{2\pi}$ cos nθ.dy.
An analyser presently to be described, based on these forms, has been constructed by Coradi in Zurich (1894). Lastly, a most powerful analyser has been invented by Michelson and Stratton (U.S.A.) (Phil Mag., 1898), which will also be described.
FIG. 23.
The Henrici-Coradi analyser has to add up the values of dy.sin nθ and dy.cos nθ. But these are the components of dy in two directions perpendicular to each other, of which one makes an angle nθ with the axis of x or of θ. This decomposition can be performed by Amsler's registering wheels. Let two of these be mounted, perpendicular to each other, in one horizontal frame which can be turned about a vertical axis, the wheels resting on the paper on which the curve is drawn. When the tracer is placed on the curve at the point θ = 0 the one axis is parallel to the axis of θ. As the tracer follows the curve the frame is made to turn through an angle nθ. At the same time the frame moves with the tracer in the direction of y. For a small motion the two wheels will then register just the components required, and during the continued motion of the tracer along the curve the wheels will add these components, and thus give the values of nAn and nBn. The factors 1/π and -1/π are taken account of in the graduation of the wheels. The readings have then to be divided by n to give the coefficients required. Coradi's realization of this idea will be understood from fig. 23. The frame PP′ of the instrument rests on three rollers E, E′, and D. The first two drive an axis with a disk C on it. It is placed parallel to the axis of x of the curve. The tracer is attached to a carriage WW which runs on the rail P. As it follows the curve this carriage moves through a distance x whilst the whole instrument runs forward through a distance y. The wheel C turns through an angle proportional, during each small motion, to dy. On it rests a glass sphere which will therefore also turn about its horizontal axis proportionally, to dy. The registering frame is suspended by aid of a spindle S, having a disk H. It is turned by aid of a wire connected with the carriage WW, and turns n times round as the tracer describes the whole length of the curve. The registering wheels R, R′ rest against the glass sphere and give the values nAn and nBn. The value of n can be altered by changing the disk H into one of different diameter. It is also possible to mount on the same frame a number of spindles with registering wheels and glass spheres, each of the latter resting on a separate disk C. As many as five have been introduced. One guiding of the tracer over the curve gives then at once the ten coefficients An and Bn for n = 1 to 5.
All the calculating machines and integrators considered so far have been kinematic. We have now to describe a most remarkable instrument, the Michelson and Stratton analyser, based on the equilibrium of a rigid body under the action of springs. The body itself for rigidity's sake is made a hollow cylinder H, shown in fig. 24 in end view. It can turn about its axis, being supported on knife-edges O. To it springs are attached at the prolongation of a horizontal diameter; to the left a series of n small springs s, all alike, side by side at equal intervals at a distance a from the axis of the knife-edges; to the right a single spring S at distance b. These springs are supposed to follow Hooke's law. If the elongation beyond the natural length of a spring is λ, the force exerted by it is p = kλ. Let for the position of equilibrium l, L be respectively the elongation of a small and the large spring, k, K their constants, then
FIG. 24.
nkla = KLb.
The position now obtained will be called the normal one. Now let the top ends C of the small springs be raised through distances y1, y2, ... yn. Then the body H will turn; B will move down through a distance z and A up through a distance (a/b)z. The new forces thus introduced will be in equilibrium if
ak$\left(\textstyle\sum y - n \tfrac{a}{b} z\right)$ = bKz.
Or
z = $\frac{\textstyle\sum y}{n \tfrac{a}{b} + \tfrac{b}{a}\tfrac{K}{k} }$ = $\frac{\textstyle\sum y}{n \left(\tfrac{a}{b} + \tfrac{l}{L}\right)}$.
This shows that the displacement z of B is proportional to the sum of the displacements y of the tops of the small springs. The arrangement can therefore be used for the addition of a number of displacements. The instrument made has eighty small springs, and the authors state that from the experience gained there is no impossibility of increasing their number even to a thousand. The displacement z, which necessarily must be small, can be enlarged by aid of a lever OT′. To regulate the displacements y of the points C (fig. 24) each spring is attached to a lever EC, fulcrum E. To this again a long rod FG is fixed by aid of a joint at F. The lower end of this rod rests on another lever GP, fulcrum N, at a changeable distance y″ = NG from N. The elongation y of any spring s can thus be produced by a motion of P. If P be raised through a distance y′, then the displacement y of C will be proportional to y′y″; it is, say, equal to μy′y″ where μ is the same for all springs. Now let the points C, and with it the springs s, the levers, &c., be numbered C0, C1, C2 ... There will be a zero-position for the points P all in a straight horizontal line. When in this position the points C will also be in a line, and this we take as axis of x. On it the points C0, C1, C2 ... follow at equal distances, say each equal to h. The point Ck lies at the distance kh which gives the x of this point. Suppose now that the rods FG are all set at unit distance NG from N, and that the points P be raised so as to form points in a continuous curve y′ = φ(x), then the points C will lie in a curve y = μφ(x). The area of this curve is
μ $\textstyle\int_{0}^{c}$φ(x)dx.
Approximately this equals ∑hy = h∑y. Hence we have
$\textstyle\int_{0}^{c}$φ(x)dx = $\tfrac{h}{\mu}$ ∑y = $\tfrac{\lambda h}{\mu}$z, λ being the constant n$\left(\tfrac{a}{b} + \tfrac{l}{L}\right)$,
where z is the displacement of the point B which can be measured. The curve y′ = φ(x) may be supposed cut out as a templet. By putting this under the points P the area of the curve is thus determined—the instrument is a simple integrator.
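A short numerical sketch of this use of the spring adder, taking the scale constant μ as unity and purely illustrative ratios a/b and l/L: the displacement z is computed from the equilibrium condition, and h∑y is then recovered from it as the approximate integral.

```python
import math

def spring_adder(y, a_over_b, l_over_L):
    """Displacement z of the summing lever for given elongations y of the
    tops of the n small springs, from the equilibrium condition quoted
    above.  The ratios a/b and l/L are illustrative."""
    n = len(y)
    return sum(y) / (n * (a_over_b + l_over_L))

a_over_b, l_over_L = 1.0, 0.05
n, c = 80, math.pi                 # eighty springs, curve drawn over (0, pi)
h = c / n
# Templet curve phi(x) = sin x set under the points P (scale constant mu = 1).
y = [math.sin((k + 0.5) * h) for k in range(n)]
z = spring_adder(y, a_over_b, l_over_L)
area = h * n * (a_over_b + l_over_L) * z      # h * sum(y), recovered from z
print(area)                                   # close to 2, the integral of sin x over (0, pi)
```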
The integral can be made more general by varying the distances NG = y″. These can be set to form another curve y″ = f(x). We have now y = μy′y″ = μ f(x) φ(x), and get as before
$\textstyle\int_{0}^{c}$f(x) φ(x)dx = $\tfrac{\lambda h}{\mu}$z.
These integrals are obtained by the addition of ordinates, and therefore by an approximate method. But the ordinates are numerous, there being 79 of them, and the results are in consequence very accurate. The displacement z of B is small, but it can be magnified by taking the reading of a point T′ on the lever AB. The actual reading is done at point T connected with T′ by a long vertical rod. At T either a scale can be placed or a drawing-board, on which a pen at T marks the displacement.
If the points G are set so that the distances NG on the different levers are proportional to the terms of a numerical series
u0 + u1 + u2 + ...
and if all P be moved through the same distance, then z will be proportional to the sum of this series up to 80 terms. We get an Addition Machine.
The use of the machine can, however, be still further extended. Let a templet with a curve y′ = φ(ξ) be set under each point P at right angles to the axis of x hence parallel to the plane of the figure. Let these templets form sections of a continuous surface, then each section parallel to the axis of x will form a curve like the old y′ = φ(x), but with a variable parameter ξ, or y′ = φ(ξ, x). For each value of ξ the displacement of T will give the integral
$Y = \int_{0}^{c} f(x)\,\varphi(\xi, x)\,dx = F(\xi)$, . . . (1)
where Y equals the displacement of T to some scale dependent on the constants of the instrument.
If the whole block of templets be now pushed under the points P and if the drawing-board be moved at the same rate, then the pen T will draw the curve Y = F(ξ). The instrument now is an integraph giving the value of a definite integral as function of a variable parameter.
Having thus shown how the lever with its springs can be made to serve a variety of purposes, we return to the description of the actual instrument constructed. The machine serves first of all to sum up a series of harmonic motions or to draw the curve
$Y = a_1\cos x + a_2\cos 2x + a_3\cos 3x + \cdots$ . . . (2)
The motion of the points P1P2 ... is here made harmonic by aid of a series of excentric disks arranged so that for one revolution of the first the other disks complete 2, 3, ... revolutions. They are all driven by one handle. These disks take the place of the templets described before. The distances NG are made equal to the amplitudes a1, a2, a3, ... The drawing-board, moved forward by the turning of the handle, now receives a curve of which (2) is the equation. If all excentrics are turned through a right angle a sine-series can be added up.
It is a remarkable fact that the same machine can be used as a harmonic analyser of a given curve. Let the curve to be analysed be set off along the levers NG so that in the old notation it is
y″ = f(x),
whilst the curves y′ = φ(ξ, x) are replaced by the excentrics, hence ξ by the angle θ through which the first excentric is turned, so that y′_k = cos kθ. But kh = x and nh = π, n being the number of springs s, and π taking the place of c. This makes
$k\theta = \tfrac{n}{\pi}\,\theta x$.
Hence our instrument draws a curve which gives the integral (1) in the form
$y = \tfrac{2}{\pi}\int_{0}^{\pi} f(x)\cos\left(\tfrac{n}{\pi}\theta x\right) dx$
as a function of θ. But this integral becomes the coefficient am in the cosine expansion if we make
$\tfrac{\theta n}{\pi} = m$, or $\theta = \tfrac{m\pi}{n}$.
The ordinates of the curve at the values θ = $\tfrac{\pi}{n}$, $\tfrac{2\pi}{n}$, ... give therefore all coefficients up to m = 80. The curve shows at a glance which and how many of the coefficients are of importance.
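(Again a modern illustration rather than part of the article.) The analyser's reading at θ = mπ/n amounts to the following ordinate sum, sketched here in Python with an assumed test curve:

```python
import numpy as np

def cosine_coefficient(f, m, n=80):
    """Approximate the m-th cosine coefficient of f on [0, pi] by the
    ordinate sum the harmonic analyser performs at theta = m*pi/n."""
    h = np.pi / n
    k = np.arange(n)
    x = k * h                          # sample points, since kh = x
    theta = m * np.pi / n              # setting of the excentrics
    return (2.0 / np.pi) * h * np.sum(f(x) * np.cos(k * theta))

# Example: f(x) = 3*cos(2x) gives a_2 close to 3, and a_1 small
f = lambda x: 3.0 * np.cos(2.0 * x)
print(cosine_coefficient(f, 2), cosine_coefficient(f, 1))
```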
The instrument is described in Phil. Mag., vol. xlv., 1898. A number of curves drawn by it are given, and also examples of the analysis of curves for which the coefficients am are known. These indicate that a remarkable accuracy is obtained.
(O. H.)
### Endnotes
1. For a fuller description of the manner in which a mere addition machine can be used for multiplication and division, and even for the extraction of square roots, see an article by C.V. Boys in Nature, 11th July 1901.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 38, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9422574043273926, "perplexity_flag": "middle"}
|
http://scicomp.stackexchange.com/questions/4636/thermoplastic-equation-solving
|
# Thermoplastic Equation solving
I was given a problem by my professor as follows
Solve the system
$pV=S$
$pcT=kT+BS\frac{dG}{dt}$
$\frac{dS}{dt}=\mu(V-\frac{dG}{dt})$
$\frac{dG}{dt}=f(S,T)$
Where $p$, $c$, $B$, $\mu$ are constant and $f(S,T)$ is a function to be reasonably assumed
Where do I even begin? I know I have to use a numerical solution, but I'd love some help and tips if possible.
-
Is that all that was asked? No initial conditions? Are you supposed to give a numerical solution or an analytical one? – FrenchKheldar Nov 10 '12 at 5:51
## 1 Answer
There are a few issues I can see with the formulation of your problem. First, as a set of ODE's you need initial conditions to completely pose the problem. Another issue is that to solve it numerically you need to know $f(S,T)$ (or at the very least be able to tell your computer to evaluate it). Both of these issues can be ignored if the solution is supposed to be analytical and general.
Since this is homework and I am not sure exactly what you are being asked to do, I am not going to put any equations in this answer for the time being. I will tell you that it is possible to reduce your system to a single ODE with all of the unknowns eliminated except for $S$. The algebra to get there is not too difficult. Assuming stability, the ODE can be solved with a time-stepping scheme such as explicit Euler or a Runge-Kutta method. Once you have $S$ as a function of time, you can post-process for the rest of your unknowns.
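Not part of the original answer, but as a generic illustration of the time-stepping idea mentioned above: here is a minimal explicit-Euler loop in Python for an abstract scalar ODE dS/dt = g(S, t). The right-hand side g below is a placeholder, not the reduced equation for your system.

```python
import numpy as np

def explicit_euler(g, S0, t0, t_end, dt):
    """Integrate dS/dt = g(S, t) with the explicit Euler scheme."""
    ts = np.arange(t0, t_end + dt, dt)
    S = np.empty_like(ts)
    S[0] = S0
    for i in range(len(ts) - 1):
        S[i + 1] = S[i] + dt * g(S[i], ts[i])  # one Euler step
    return ts, S

# Placeholder right-hand side; substitute your reduced ODE for S here.
g = lambda S, t: -0.5 * S
ts, S = explicit_euler(g, S0=1.0, t0=0.0, t_end=10.0, dt=0.01)
```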
If you let us know what type of class you received this in and what tools/methods you have learned you may get a bit more information.
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 12, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9730772972106934, "perplexity_flag": "head"}
|
http://physics.stackexchange.com/questions/18430/2d-simulation-shooting-projectiles-that-inherit-the-guns-veocity-at-a-moving-ta
|
# 2d simulation shooting projectiles that inherit the gun's velocity at a moving target while also moving
Overview: I'm working on programming a simulation that requires 'shooting' projectile-type objects at other moving objects. How can I calculate the angle at which to shoot the object to hit?
Details:
Imagine you're holding some gun-type object that shoots projectiles at velocity $v_d$ in straight lines. You are moving with velocity $v_p$ at angle $\theta_p$, in a straight line. Another object, which we will name the target, is located a distance $R$ away from you at an angle $\theta_{tp}$, and is moving in a straight ine with velocity $v_t$ at angle $\theta_t$.
I need to calculate the angle $\theta_a$ I should aim the gun in order to hit the target. Note that there is no decision on the TIME to fire. I have to fire immediately, at $t=0$, so this adds a helpful constraint but also makes it so that some targets are not able to be hit based on the numerical values of the parameters mentioned, which is fine. The target is moving so you need to lead it with your shot to compensate. Also, the projectile will inherit your velocity on top of its own since you are also moving, which will further modify the angle you need to shoot at.
It seems to me that the strategy should be that I find a point in space which will be occupied by both the projectile and the target. I know all the points in space that the target will take along its linear path, and the times at which it will take all points. However, because the total velocity of the projectile varies based on the angle you shoot it (e.g. the magnitude of the velocity vector will be different depending on what Theta_a you decide to shoot at), I'm not sure how to figure out where to shoot it.
Any help would be appreciated!
In addition, if it is easy, a solution to a target moving in a circular path with radius $R$ (same $R$ as the distance between you and it, so it's making a perfect circle around you) would be appreciated!
-
How does the magnitude of the velocity vary with the angle? – David Zaslavsky♦ Dec 18 '11 at 11:14
– Gareth Rees Dec 18 '11 at 11:39
It varies because if you add two vectors of length A and length B, depending on the angle between them, the resultant vector has a different magnitude. Since when I change the angle I'm aiming at this changes the angle between the gun velocity vector and the projectile velocity vector, the resultant total magnitude of velocity of the projectile, which is the sum of both vectors, should change. Either that or I'm dense :P I'll check out the duplicate, seems relevant. – user6737 Dec 18 '11 at 20:27
The link was indeed relevant, and I was able to solve the problem. There, by using vector notation, it becomes clear how to deal with the 'varying velocity' issue, which is just incorporated into the components of the overall equation for t_intersect instead of using it as a scalar magnitude like I was doing when solving the problem for a stationary guy and then trying to use that approach for the moving gun incorrectly. What's the appropriate procedure for accepting an answer if it is found in a different post? – user6737 Dec 18 '11 at 22:11
Hi CHP and welcome to Physics Stack Exchange! You can post an answer here explaining how you were able to solve the problem and linking to the other answer that allowed you to do so. Make sure you don't just post a link, though; people should be able to understand your answer without following any links. – David Zaslavsky♦ Dec 19 '11 at 0:46
## 2 Answers
First of all, you should do the math in terms of vectors, i.e. position $\vec{x} = (x, y)$, velocity $\vec{v} = (v_x, v_y)$. Then for every object the position over time can be expressed as $\vec{x}(t) = \vec{x}_0 + t\vec{v}$, where $\vec{x}_0$ denotes the position at the time $t=0$, $\vec{v}$ the constant velocity, and $\vec{x}(t)$ the position at a given time $t$.
To simplify the picture, you can do the calculations in the co-moving reference frame of the gun. The pointing angle of the gun will not change by this (as long as you aren't considering relativistic velocities ;) ). The ejection velocity is defined in the gun's frame of reference anyway.
To do this, let's write the target time dependent position in the universe frame as:
$$\vec{x}_t(t) = \vec{x}_{t0} + t \vec{v}_t$$
and the gun's position as
$$\vec{x}_g(t) = \vec{x}_{g0} + t \vec{v}_g$$
To transform to the gun's reference frame, just subtract the gun's position equation; the ' denotes the new reference frame. Then we get:
$$\vec{x}_t'(t) = \vec{x}_{t0} - \vec{x}_{g0} + t (\vec{v}_t - \vec{v}_g),\quad \vec{x}_g'(t) = 0$$
Or, with $\vec{x}_{t0}' = \vec{x}_{t0} - \vec{x}_{g0}$, and $\vec{v}_t' = \vec{v}_t - \vec{v}_g$ you get
$$\vec{x}_t'(t) = \vec{x}_{t0}' + t \vec{v}_t'$$
Into this equation you can simply insert the time when you want to hit the target, and you get the position of the hit. As the gun is located at $0$, this also gives the direction to which you have to point your gun. The required velocity then is defined by the distance to the hitting point divided by the hitting time.
The angle can simply be calculated with the atan2 (a 360-degree complete version of the arctangent function available in most programming languages) of the required velocity vector for the projectile $\vec{v}_p = (v_{px}, v_{py})$ by `theta = atan2(vpy, vpx)`. This of course depends on your definition of theta, so you might need to adjust it.
If you shoot with a predefined velocity, you can write an equation with the hitting time as a variable and solve for that.
This should show the way to get your result; you might have to work out some more details, or ask again about something specific (e.g. vector arithmetic) if this isn't sufficient.
-
Hi Peter, and welcome to Physics Stack Exchange! Nice answer :-) We have MathJax available on this site for writing mathematical notation, so I went through and edited your math to make it look better. – David Zaslavsky♦ Dec 19 '11 at 0:57
great, thanks David! It has been moved from stackoverflow.com, so I just incidentally started posting on Physics Stack Exchange. As I'm professionally related to physics, it isn't a bad thing though ;) – Piotr99 Dec 21 '11 at 9:24
I found the solution posted by Gareth Rees to work, which was posted at this link:
http://stackoverflow.com/questions/4107403/ai-algorithm-to-shoot-at-a-target-in-a-2d-game
Notation: I write vectors in capital letters, scalars in lower case, and ∠V for the angle that the vector V makes with the x-axis. (Which you can compute with the function atan2 in many languages.)
The simplest case is a stationary shooter which can rotate instantly.
Let the target be at the position A and moving with velocity VA, and the shooter be stationary at the position B and can fire bullets with speed s. Let the shooter fire at time 0. The bullet hits at time t such that |A − B + t VA| = t s. This is a straightforward quadratic equation in t, which you should be easily able to solve (or determine that there is no solution). Having determined t, you can now work out the firing angle, which is just ∠(A − B + t VA).
Now suppose that the shooter is not stationary but has constant velocity VB. (I'm supposing Newtonian relativity here, i.e. the bullet velocity is added to the shooter's velocity.)
It's still a straightforward quadratic equation to work out the time to hit: |A − B + t(VA − VB)| = t s. In this case the firing angle is ∠(A − B + t (VA − VB)).
What if the shooter waits until time u before firing? Then the bullet hits the target when |A − B + t(VA − VB)| = (t − u) s. The firing angle is still ∠(A − B + t(VA − VB)).
Now for your problem. Suppose that the shooter can complete a half rotation in time r. Then it can certainly fire at time r. (Basically: work out the necessary firing angle, if any, for a shot at time r, as described above, rotate to that angle, stop, wait until time r, then fire.)
But you probably want to know the earliest time at which the shooter can fire. Here's where you probably want to use successive approximation to find it. (Sketch of algorithm: Can you fire at time 0? No. Can you fire at time r? Yes. Can you fire at time ½r? No. etc.)
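As a concrete (unofficial) illustration of the calculation described above — the function name and variables are my own — here is a short Python sketch. It solves |A − B + t(VA − VB)| = t·s for the earliest positive t and then takes the firing angle with atan2:

```python
import math

def firing_solution(A, VA, B, VB, s):
    """Return (t_hit, angle) for a bullet of speed s (relative to the shooter)
    fired at t = 0 from B (velocity VB) at a target at A (velocity VA),
    or None if no intercept exists. Vectors are (x, y) tuples."""
    dx, dy = A[0] - B[0], A[1] - B[1]          # relative position D
    vx, vy = VA[0] - VB[0], VA[1] - VB[1]      # relative velocity V
    # |D + t V|^2 = (t s)^2  ->  (|V|^2 - s^2) t^2 + 2 (D.V) t + |D|^2 = 0
    a = vx * vx + vy * vy - s * s
    b = 2.0 * (dx * vx + dy * vy)
    c = dx * dx + dy * dy
    if abs(a) < 1e-12:                         # degenerate case: linear equation
        if abs(b) < 1e-12:
            return None
        roots = [-c / b]
    else:
        disc = b * b - 4.0 * a * c
        if disc < 0.0:
            return None
        r = math.sqrt(disc)
        roots = [(-b - r) / (2.0 * a), (-b + r) / (2.0 * a)]
    ts = [t for t in roots if t > 0.0]
    if not ts:
        return None
    t = min(ts)                                # earliest positive hit time
    angle = math.atan2(dy + t * vy, dx + t * vx)
    return t, angle
```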
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 23, "mathjax_display_tex": 4, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9352148771286011, "perplexity_flag": "middle"}
|
http://physics.stackexchange.com/questions/tagged/mass?sort=votes&pagesize=15
|
# Tagged Questions
The property of an object that determines how much it responds to a force in Newtonian mechanics, and how much it interacts with gravity in the Newtonian framework. Mass also refers to the intrinsic energy of a particle in particle physics.
14answers
69k views
### How Does Mass Leave the Body When you Lose Weight?
When your body burns calories and you lose weight, obviously mass is leaving your body. In what form does it leave? In other words, what is the physical process by which the body loses weight when ...
9answers
7k views
### Don't heavier objects actually fall faster because they exert their own gravity?
The common understanding is that, setting air resistance aside, all objects dropped to Earth fall at the same rate. This is often demonstrated through the thought experiment of cutting a large object ...
7answers
2k views
### Why do we have an elementary charge but no elementary mass?
Why do we have an elementary charge $e$ in physics but no elementary mass? Is an elementary mass ruled out by experiment or is an elementary mass forbidden by some theoretical reason?
7answers
3k views
### Is a hard drive heavier when it is full?
Browsing Quora, I saw the following question with contradicting answers. For the highest voted answer: The bits are represented by certain orientations of magnetic fields which shouldn't have ...
8answers
13k views
### If photons have no mass, how can they have momentum?
As an explanation of why a large gravitational field (such as a black hole) can bend light, I have heard that light has momentum. This is given as a solution to the problem of only massive objects ...
5answers
2k views
### Do photons gain mass when they travel through glass?
Please correct me if I'm wrong, but I believe that photons slow down when travelling through glass. Does this mean they gain mass? Otherwise, what happens to extra kinetic energy? I understand now ...
7answers
10k views
### How can a photon have no mass and still travel at the speed of light?
I've read a number of the helpful Q&As on photons that mention the mass/mass-less issue. Do I understand correctly that the idea of mass-less (a rest mass of 0) may be just a convention to make ...
7answers
2k views
### Is mass quantized?
I learned today in class that photons and light are quantized. I also remember that electric charge is quantized as well. I was thinking about these implications, and I was wondering if mass was ...
9answers
1k views
### What is the difference between weight and mass?
My science teacher is always saying the words "weight of an object" and "mass of an object," but then my physics book (that I read on my own) tells me completely different definitions from the way ...
3answers
1k views
### If the universe were compressed into a super massive black hole, how big would it be?
I understand only a little of general relativity, but that's why I'm here! :) Consider the hypothetical situation of some extra-terrestrial intelligence pushing all the mass in the universe, every ...
3answers
3k views
### How do we determine the mass of a black hole?
Since by definition we cannot observe black holes directly, how do astronomers determine the mass of a black hole? What observational techniques are there that would allow us to determine a black ...
6answers
1k views
### What is the symmetry which is responsible for conservation of mass?
According to Noether's theorem, all conservation laws originate from invariance of a system to shifts in a certain space. For example conservation of energy stems from invariance to time translation. ...
3answers
467 views
### The interpretation of mass in quantum field theories
Consider a free theory with one real scalar field: $$\mathcal{L}:=-\frac{1}{2}\partial _\mu \phi \partial ^\mu \phi -\frac{1}{2}m^2\phi ^2.$$ We write this positive coefficient in front of $\phi ^2$ ...
4answers
864 views
### Why does a black hole have a finite mass?
I mean besides the obvious "it has to have finite mass or it would suck up the universe." A singularity is a dimensionless point in space with infinite density, if I'm not mistaken. If something is ...
10answers
3k views
### Does 'electricity' have mass? Is 'electricity' tangible?
Background: I'm in a legal academic discussion about the status of electronic 'goods' and whether they qualify as 'goods' in the same way a chair and a pen do. In this context (and specifically at the ...
1answer
556 views
### Does the mass of a battery's change when charged/discharged?
... and if so, how much? Is it possible to detect it, or is it beyond any measurement? I'd say there are two possible scenarios (depending on the battery type) and both seem interesting: The battery ...
4answers
967 views
### Acceleration of two falling objects with identical form and air drag but different masses
I have a theoretical question that has been bugging me and my peers for weeks now - and we have yet to settle on a concrete answer. Imagine two balloons, one is filled with air, one with concrete. ...
2answers
247 views
### Does relativistic mass have weight?
If an object was sliding on an infinitely long friction-less floor on Earth with relativistic speeds (ignoring air resistance), would it exert more vertical weight force on the floor than when it's at ...
2answers
280 views
### How does rest mass become energy?
I know that there's a difference between relativistic and rest mass. Relativistic mass is "acquired" when an object is moving at speeds comparable to the speed of light. Rest mass is the inherent mass that ...
3answers
762 views
### Decay of massless particles
We don't normally consider the possibility that massless particles could undergo radioactive decay. There are elementary arguments that make it sound implausible. (A bunch of the following is ...
3answers
1k views
### Where does matter come from?
I admit, it's been a few years since I've studied physics, but the following question came to me when I was listening to a talk by Lawrence Krauss. Is there any knowledge of from where matter that ...
2answers
244 views
### Do we have an idea about the amount of matter in the universe?
Do we consider the amount of matter in the universe to be "infinite"? Or do we have an idea about "how much" there is?
2answers
585 views
### Does a photon exert a gravitational pull?
I know a photon has zero rest mass, but it does have plenty of energy. Since energy and mass are equivalent does this mean that a photon (or more practically, a light beam) exerts a gravitational pull ...
4answers
2k views
### Why don’t photons interact with the Higgs field?
Why don’t photons interact with the Higgs field and hence remain massless?
3answers
3k views
### What's the difference between the five masses: inertial mass, gravitational mass, rest mass, invariant mass and relativistic mass?
I have learned in my physics classes about five different types of masses and I am confused about the differences between them. What's the difference between the five masses: inertial mass, ...
2answers
696 views
### Does $E = mc^2$ apply to photons?
Photons are massless, but if $m = 0$ and $E = mc^2$ then $E = 0c^2 = 0$. This would say that photons have no energy, which is not true. However, given the formula $E = ℎf$, a photon does have energy ...
1answer
150 views
### How the inverse square law in electrodynamics is related to photon mass?
I have read somewhere that one of the tests of the inverse square law is to assume a nonzero mass for the photon and then, by finding a maximum limit for it, determine a maximum possible error in ...
9answers
6k views
### Why does the mass of an object increase when its speed approaches that of light?
I'm reading Nano: The Essentials by T. Pradeep and I came upon this statement in the section explaining the basics of scanning electron microscopy. However, the equation breaks down when the ...
2answers
1k views
### Why do we need Higgs field to re-explain mass, but not charge?
We already had definition of mass based on gravitational interactions since before Higgs. It's similar to charge which is defined based on electromagnetic interactions of particles. Why did Higgs ...
3answers
445 views
### What is the exact gravitational force between two masses including relativistic effects?
I was wondering if there is a closed-form formula for the force between two masses $m_1$ and $m_2$ if relativistic effects are included. My understanding is that the classic formula $G \frac{m_1 ...
2answers
163 views
### Equivalence of definitions of ADM Mass
ADM Mass is a useful measure of a system. It is often defined (Wald 293) $$M_{ADM}=\frac{1}{16\pi} \lim_{r \to \infty} \oint_{s_r} (h_{\mu\nu,\mu}-h_{\mu\mu,\nu})N^{\nu} dA$$ Where $s_r$ is two ...
2answers
923 views
### 'Density' of a proton
I was doing some exercises the other day, when I came across this question in my book: A proton weighs about 1.66 x 10-24 g and has a diameter of about 10-15 m. What is its density in g/cm3? ...
3answers
2k views
### Why is Higgs Boson given the name “The God Particle”?
Higgs Boson (messenger particle of Higgs field) accounts for inertial mass, not gravitational mass. So, how could it account for formation of universe as we know it today? I think, gravity accounts ...
3answers
505 views
### Special Relativity and $E = mc^2$
I read somewhere that $E=mc^2$ shows that if something was to travel faster than the speed of light then they would have infinite mass and would have used infinite energy. How does the equation show ...
1answer
435 views
### How do we know the masses of single stars?
I have recently read that we can only know the masses of stars in binary systems, because we use Kepler's third law to indirectly measure the mass. However, it is not hard to find measurements for the ...
3answers
260 views
### What is the massless limit of massive electromagnetism?
Consider electromagnetism, an abelian gauge theory, with a massive photon. Is the massless limit equal to electromagnetism? What does it happen at the quantum level with the extra degree of freedom? ...
3answers
412 views
### How are the masses of unstable elementary particles measured?
I am interested in knowing how (Q1) the particle's masses are experimentally determined from accelerator observations. What kind of particles? They must be as far as we know elementary and unstable ...
2answers
186 views
### Why did this glass start popping?
I remember a while ago my father dropped a glass lid and it smashed. It looked something like this. When that happened, for about 5 minutes afterwards, the glass parts were splitting, kind of like ...
1answer
70 views
### Higgs boson mass and electroweak energy scale
Is it a coincidence that the mass of the Higgs boson is exactly half the electroweak energy scale?
1answer
456 views
### What is the difference between 'running' and 'current' quark mass?
When looking at the PDG, there is a difference between the 'running' and the 'current' quark masses. Does anyone know which is the difference between these two?
4answers
1k views
### Finding the volume of this irregular shape I have
I have an approximately basketball-sized non-hollow piece of aluminum sitting in my house that is of irregular shape. I need to find the volume of it for a very legitimate yet irrelevant reason. ...
3answers
1k views
### What defines the mass of elementary particle?
The electron is particle. The mass of electron is $9.10938215(45)\times 10^{−31}\, {\rm kg}$. But why is the mass exactly what it is? What in physics defines the mass of elementary particle?
4answers
813 views
### Atomic mass of Copper-63?
This URL lists the mass of Copper-63 as 62.9295975(6) and this other URL lists the mass as 62.939598. These values differ by almost exactly 0.01 which seems hard to explain by experimental error. ...
4answers
225 views
### Can you create mass with $E=mc^2$?
If you use the equation $E=mc^2$ could you make matter by dividing the $c^2$? I'm sorry if this is a really stupid sounding question or if it shouldn't be asked here.
2answers
161 views
### Why would a particle in an extra dimension appear not as one particle, but a set of particles?
I was reading an article in this months issue of Physics World magazine on the three main theories of extra dimensions and stumbled across something I didn't quite understand when the author began ...
3answers
371 views
### What is the mass of a wave?
The slide called "QUANTA" here says that "One Quantum has a definite mass" and the picture shows a wave. So, What is meant by the mass of a wave?
3answers
286 views
### bound states of massless fields?
Question: are they mathematically possible at all? physically? with finite mass systems, usually the binding energy contributes to the rest-mass of the system. It would seem that even if you could ...
3answers
327 views
### What is the difference between impulse and momentum?
What is the difference between impulse and momentum? The question says it all...I know the second of of them is mass * velocity, but what is the first one for, and when is it used? Also, what are its ...
3answers
105 views
### Is it possible to have a singularity with zero mass?
A singularity, by the definition I know, is a point in space with infinite of a property such as density. Density is Mass/Volume. Since the volume of a singularity is 0, then the density will thus ...
2answers
683 views
### The contribution to mass from the dynamical breaking of chiral symmetry
The claim is often made that the discovery of the Higgs boson will give us information about the origin of mass. However, the bare masses of the up and down quarks are only around 5 MeV, quite a bit ...
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 16, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9523885846138, "perplexity_flag": "middle"}
|
http://www.physicsforums.com/showthread.php?t=87066
|
Physics Forums
## confused about fourth spatial dimension
Let's consider a sphere, and let there be a two-dimensional man who, as far as he is concerned, is living in a two-dimensional place. We are observing it from the third dimension.
Now let this two-dimensional man be Fred.
One day Fred decides to make a circle using a rope. Now start imagining: he is on a sphere, and he takes a rope, sticks one end of the rope to the surface and holds the other end in his hands. Now he makes a big circle on this sphere, and he considers the length of the rope as the radius and tries to find the circumference, and he gets some value. Remember he is on a sphere, so he is getting the wrong circumference if he uses the length of the rope as the radius. But now he somehow, using a measuring tape or the formula s = vt, finds the real circumference and hence the real radius. Since he is in a two-dimensional place he knows Pythagoras' theorem, and he puts the value of the real radius and the wrong radius into the formula:
real radius = r
wrong radius = length of the rope = R
and third dimension = height = h
and R^2 = r^2 + h^2
Voila, he finds the height of the third dimension.
Correct me if I'm wrong.
You are assuming your conclusion, is how it seems to me. If Fred is truly a 2-D creature, how does he have any idea of what the radius in 3-D would be? At best he can make a guess at how the curvature is distributed as a random variable, I think.
Quote by EnumaElish You are assuming your conclusion, is how it seems to me. If Fred is truly a 2-D creature, how does he have any idea of what the radius in 3-D would be? At best he can make a guess at how the curvature is distributed as a random variable, I think.
You're wrong, Enuma. Two-dimensional Fred can indeed conclude he lives on a sphere (or at least on a manifold that is curved locally) and compute its radius from geometrical measurements like this. This is what is meant by "intrinsic geometry", invented by Gauss for surfaces and extended by Riemann to manifolds of any dimension.
Hmmm. What are our 3 dimensions curved into, then? Is there a measurable, spatial 4th dimension?
Quote by EnumaElish What are our 3 dimensions curved into, then?
Our 3 spacial dimensions are Euclidean (not curved), except where there are strong gravitational fields.
Quote by EnumaElish Is there a measurable, spatial 4th dimension?
No.
You guys are not getting it right. What I'm trying to say is that Fred finds this because of the circumference: the circumference he is getting does not match the length of the rope taken as the radius.
Quote by εllipse Our 3 spacial dimensions are Euclidean (not curved), except where there are strong gravitational fields.
As far as I know, nobody has yet measured the curved spacetime or curved space predicted by general relativity.
In fact, Einstein's idea that gravity is curvature is easily shown to be wrong. Take a Schwarzschild metric in the limit c → ∞: the curvature is exactly zero, (1 − 2GM/Rc²) → 1, but gravity is not, because there is still a Newtonian potential.
There exists a well-known principle of epistemology saying that if A is the cause of B, then the elimination of A eliminates B.
If there is gravity (B) without curvature (A) then curvature (A) cannot be the cause of gravity (B).
Quote by sandesh trivedi you guys are not getting it right. what i m trying to say is that fred finds this bcoz of circumference. the circumference he is getting is not satisfying the length of the rope as the radius
We get it. At least the part about the curvature, we get. It appears you may have some question? I don't understand what question you are asking; so far you have simply stated some facts, which are correct.
Fred has found how to measure "Gaussian" curvature, which is an intrinsic sort of curvature first defined/discovered by (you guessed it) Carl Gauss.
See for instance the Wikipedia article at
http://en.wikipedia.org/wiki/Curvature
An intrinsic definition of the Gaussian curvature at a point P is the following: imagine an ant which is tied to P with a short thread of length r. She runs around P while the thread is completely stretched and measures the length C(r) of one complete trip around P. If the surface were flat, she would find C(r) = $2\, \pi \, r$. On curved surfaces, the formula for C(r) will be different, and the Gaussian curvature K at the point P can be computed as $$K = \lim_{r \to 0} \left(2 \pi r - C(r)\right) \cdot \frac{3}{\pi r^3}.$$
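(An added numerical check, not part of the original post.) On a sphere of radius R a geodesic circle of radius r has circumference C(r) = 2πR sin(r/R), so the limit above should return K = 1/R²; a short Python sketch confirms this:

```python
import math

def gauss_curvature_estimate(C, r):
    """Estimate K from the circumference function C at a small radius r."""
    return (2.0 * math.pi * r - C(r)) * 3.0 / (math.pi * r ** 3)

R = 2.0                                              # sphere radius
C_sphere = lambda r: 2.0 * math.pi * R * math.sin(r / R)
print(gauss_curvature_estimate(C_sphere, r=1e-3))    # ~ 0.25 = 1/R**2
```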
Quote by pervect We get it. At least the part about the curvature, we get. it appears you may have some question? I don't understand what question you are asking, so far you have simply stated some facts, which are correct. Fred has found how to measure "Gaussian" curvature, which is an intrinsic sort of curvature first defined/discoverd by (you guessed it) Carl Gauss. See for instance the Wikipedia article at http://en.wikipedia.org/wiki/Curvature
Well, I'm in high school right now and I don't know much of the mathematics you are talking about. I was just trying to find the relation between the third and fourth dimension. Pervect, do you have some Yahoo messenger where I can ask you a few questions regarding this stuff? But I explained this ant thing a little bit using simple geometry, and I'm happy that I'm being original, not some bookworm. I know one has to be original, especially in physics and mathematics (I mean theoretical physics), and excuse me if I'm getting very much excited.
Now I have this new story about Fred, of course written by me.
Now Fred makes some beautiful hemisphere on this spherical surface using some marker. For Fred he is making a big circle, and for us 3D people it is a hemisphere. He finds the real radius, as you already know, and finds the surface area.
Here,
let
real radius = r
length of rope = wrong radius = R
height of third dimension = h = r, as it is a hemisphere
(Please let me know how I can post a diagram along with this.)
Now
R^2 = r^2 + h^2
R^2 = r^2 + r^2
R = r*sqrt(2) ............ eqn (1)
Now, when Fred tries to find the area, he purposely uses his wrong radius R.
For Fred, area of circle = pi*R^2.
Using eqn (1),
we get area = 2*pi*r^2, the area calculated by us for the hemisphere made by Fred.
Hence the circumference in 2D and 3D is different, while the area between 2D and 3D remains the same.
Next we compare 3D and 4D, where I hope we shall get the areas to be different and the volume to be the same.
What you are doing when you calculate the height is known as finding an "embedding diagram". In this particular case there is a unique embedding diagram that describes the geometry, but that turns out not to be the case in general. Sorry, I don't have any private messenger programs; I just read the forums here when I'm in the mood. You might be somewhat interested in http://www.sff.net/people/Geoffrey.L...lackholes.html which pushes embedding diagrams via paper models about as far as they can go to explain certain aspects of relativity. If you really want to understand relativity, though, at some point you'll have to tackle the math.
I know I've got to learn a lot of maths for special and general relativity. What I'm trying to explain is this: draw a right-angled triangle where R = length of the rope, r = real radius, h = height of the third dimension; then R^2 = r^2 + h^2. For simplicity I don't consider a small circle drawn by Fred on the sphere but a large circle, which from the third dimension is a hemisphere. So the equation becomes simple and we get R = r*sqrt(2) (eqn 1). Now, according to Fred, the area of the circle made by him = pi*R^2, but according to us it's a hemisphere, so area = 2*pi*r^2. But when we substitute the value of R from eqn (1) into pi*R^2 we get area = pi*(r*sqrt(2))^2 = 2*pi*r^2. So when we compare 2D and 3D we get the circumference to be different and the area to be the same, i.e. equal. But when we compare 3D and 4D, can we say we will get the areas to be different and the volume to be the same? Help me with this.
Sandesh, basically you've got a good solution, but your approach won't generalize. Here is an example. Try to imagine an infinite plane surface of constant curvature. By constant curvature, I mean the "Gauss" curvature that was previously mentioned. A sphere has a constant curvature, but it is not infinite in the sense I mean. Your approach (embedding the plane in a 3-d space) won't work for this problem. You'd need to embed the plane in at least a 4-d space. If you are still undeterred and start to apply your approach to the embedding of the plane in a 4-d space, you will find that you have 1 equation with 2 unknowns.
I'm not getting what is wrong with my solution. I know it's my inability to understand that much maths. But can I get the relation between areas and volumes in different dimensions as I'm expecting? For example, areas are the same but circumference is different between 3D and 4D, and areas are different but volumes are the same between 4D and 3D. I mean, there are some things which I'm not able to explain; this computer stuff, you also must be experiencing that. Do you live in India (Bombay)?
There is a raging debate going on (and probably will go on forever) about whether the fourth dimension is time, or whether it is a 4th spatial dimension. Your question is not to claim that the fourth dimension is spatial instead of being time. The point is to speculate about what a fourth *spatial* dimension would be like, beyond our three spatial dimensions. To humans going about their everyday lives, time is fundamentally different from the three spatial dimensions, since only one direction is possible with time. Some have argued that this is only because of our limited perspective. Indeed, time actually behaves like a spatial dimension when you consider it in special relativity. You can get into some quite complex, interesting, and paradoxical discussions about weird things happening with time as a dimension, one of which is what you've mentioned in your question, Sanjeev!!
Quote by fahd There is a raging debate going on (and probably will go on forever) about whether the fourth dimension is time, or whether it is a 4th spatial dimension. Your question is not to claim that the fourth dimension is spatial instead of being time. The point is to speculate about what a fourth *spatial* dimension would be like, beyond our three spatial dimensions. To humans going about their everyday lives, time is fundamentally different than the three spatial dimensions, since only one direction is possible with time. Some have argued that this is only because of our limited perspective. Indeed, time actually behaves like a spatial dimension when you consider it in special relativity. You can get into some quite complex, interesting, and paradoxical discussions about weird things happening with time as a dimension, one of which is what uv mentioned in ur question sanjeev!!
Hey! Time is not a spatial dimension in SR. That is precisely why one talks about spacetime, not about four-space. Time is different from space, as already seen in the metric (-1, 1, 1, 1).
Of course, some authors have misunderstood the concept of time. For example Hawking.
Hawking loves the Euclidean approach and proposes the use of his imaginary time. Then tau = i·t and the metric becomes (1, 1, 1, 1),
ds² = dx² + dy² + dz² + d(tau)²,
that is, a pure 4D spatial formulation: time is geometrised. Hawking claims that real time is tau. Perhaps!
But Hawking's subsequent analysis is incorrect. It is true that tau is indistinguishable from the other space-like dimensions in the new metric, but if you take Newton's equation you may do tau = i·t:
dx/dt = -grad(potential)
i dx/d(tau) = -grad(potential)
and again time is different from space, because the space dimension is real and the time dimension is imaginary. Hawking's geometrisation of time is ineffective. Time is different from space.
Quote by sandesh trivedi i m not gettin it what is wrong with my solution. i know its my incapability to understand that much maths. but can i get the relation between areas volume indifferent dimensions as i m expecting for example areas are same but circumference is different between 3d and 4d and areas are different but volume are same between 4d and 3d i mean there are somethings which i m not able to explain this computer stuff u also must be experiencing that.do u live in india (bombay)
Let me clarify something. There's nothing technically wrong with the problem solution you posted; it's just that the problem and its solution don't have much relevance to General Relativity and how it deals with curvature.
GR deals with curvature not from the viewpoint of someone looking at our universe from outside, but from the viewpoint of someone looking at it from inside. This is known as intrinsic curvature. That's why I stressed that viewpoint, because it is the one that GR uses.
Imagine a 2d surface of constant curvature - you probably imagine the surface of a sphere.
But a sphere requires periodic space and time coordinates. To use a technical term, it's a compact manifold.
Try to imagine wrapping a piece of paper around a sphere, a piece of paper that's infinite in both directions (a plane). You can't do it in three dimensions.
You can wrap a narrow strip of paper (elastic paper) that's infinitely long but finitely wide around the sphere with no problem. But as you try to make the paper wider and wider, eventually you find that the paper has to pass through itself, something that you can't do. (Unless you add extra dimensions.)
GR has to deal with non-compact manifolds all the time. You can't simply represent the curved 4-d space of GR as the surface of a 5 dimensional manifold. You need more than 5 dimensions to get the right sort of curvature, just as you need more than 3 dimensions to construct a plane that has a constant curvature everywhere.
This also makes the embedding non-unique. When you embed an n-dimensional manifold in n+1 dimensions, you can solve the equations and find a unique solution, but when you embed an n-dimensional manifold in an (n+2)- or even higher-dimensional space, you find the embedding is not unique.
Too many assumptions make for a bad result. It's correct to treat time as a dimension, but not OK to introduce arbitrary factors that restrict the freedom of choice in choosing spatial coordinates. I disagree with Juan; time is clearly a free parameter, but not privileged. Space without time is meaningless.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9476443529129028, "perplexity_flag": "middle"}
|
http://nrich.maths.org/7289/index?nomenu=1
|
What is the sum of: $$6 + 66 + 666 + 6666 + \cdots + 666666666\cdots6$$ where there are $n$ sixes in the last term?
Many functions, including the trigonometric and exponential functions that you meet in school, can be approximated by infinite power series and good approximations can be found using a finite number of terms. If the series is centred at zero then it can be written in the form $\Sigma_{n=0}^\infty a_nx^n$ where the coefficients depend on the derivative of the function at the origin. The infinite geometric series $1 + x + x^2 + \cdots$ which converges for $|x| < 1$ is the power series for the function $(1 - x )^{-1}$.
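(A small worked note, added here as an illustration rather than as part of the original problem statement.) The claim about the geometric series can be checked from its finite partial sums: for $|x| < 1$, $$\sum_{k=0}^{N} x^k = \frac{1 - x^{N+1}}{1 - x} \longrightarrow \frac{1}{1-x} \quad\text{as } N \to \infty,$$ since $x^{N+1} \to 0$ when $|x| < 1$.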
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 5, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9364811182022095, "perplexity_flag": "head"}
|
http://physics.aps.org/synopsis-for/print/10.1103/PhysRevB.81.100501
|
# Synopsis:
Keep it local
#### Local measurement of the penetration depth in the pnictide superconductor Ba(Fe0.95Co0.05)2As2
Lan Luan, Ophir M. Auslaender, Thomas M. Lippman, Clifford W. Hicks, Beena Kalisky, Jiun-Haw Chu, James G. Analytis, Ian R. Fisher, John R. Kirtley, and Kathryn A. Moler
Published March 2, 2010
The typical distance that an applied magnetic field penetrates a superconductor—called the penetration depth, $\lambda$—is an important measure that can be directly related to the superfluid density. Absolute values of $\lambda$ are notoriously difficult to ascertain. Most experiments therefore measure the difference, $\Delta\lambda = \lambda(T) - \lambda(0)$, which has the same temperature dependence as the superfluid density at low temperatures. A power-law dependence in $\Delta\lambda$ suggests there are nodes in the superconductivity gap, while an exponential dependence implies a fully open gap.
Recent measurements of $\Delta\lambda$ in iron pnictide superconductors of the Ba-122 family show conflicting behavior: when the superconductor is doped with cobalt, there is a power-law dependence, suggesting nodes in the gap, but when doped with potassium, different experiments do not agree on the absence or presence of nodes in the gap. It turns out that to resolve this conflict, it may be necessary to measure the absolute value of $\lambda$.
In a Rapid Communication appearing in Physical Review B, Lan Luan and collaborators from Stanford University in the US have found a new way of measuring both the absolute value of the penetration depth and its spatial homogeneity using magnetic force microscopy (MFM) and SQUID susceptometry. Lan Luan et al. observe that for the Ba-122 compound doped with cobalt, the superfluid density has a temperature dependence that is consistent with a fully gapped two-band model, similar to the potassium-doped material. Further, $\lambda$ is found to be spatially homogeneous at the submicron scale, and absolute values of $\lambda$ suggest that phase fluctuations are not as important for iron pnictides as for the underdoped cuprates. The ability of the new tool to obtain the absolute values of the penetration depth and to map its spatial variation down to the submicron scale is likely to be extremely useful. – Sarma Kancharla
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 17, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8866282105445862, "perplexity_flag": "middle"}
|
http://nrich.maths.org/6481&part=
|
# Tangled Trig Graphs
##### Stage: 5 Challenge Level:
Here is a pattern I made with some graphs of trigonometric functions. You can find a copy to print here.
• The purple line is the graph $y=\sin x$. Can you identify the coordinates of the points where it crosses the axes and where it reaches its maximum and its minimum values?
• How could I make the red graph from the purple graph? Can you work out the equation of the red graph?
• The green graph has equation $y=\sin 2x$. Can you describe how to make the green graph from the purple graph? How does the transformation of the graph relate to the way the equation has changed?
• Using these ideas, can you work out the equations of the other graphs I have drawn?
Imagine you had a graphical calculator but the sine button is broken. Can you draw the same patterns using the cosine function instead? Explain how you can transform a cosine graph into a sine graph.
Why not create some trig patterns of your own using graphical calculators or graphing software, and send them to us.
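If you prefer a computer to a graphical calculator, a short Python/matplotlib sketch (an addition of ours, not part of the original activity) draws $y=\sin x$ together with a couple of transformed copies:

```python
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(0, 2 * np.pi, 500)
plt.plot(x, np.sin(x), label="y = sin x")        # the basic sine curve
plt.plot(x, np.sin(2 * x), label="y = sin 2x")   # squashed horizontally by a factor of 2
plt.plot(x, -np.sin(x), label="y = -sin x")      # reflected in the x-axis
plt.axhline(0, color="gray", linewidth=0.5)
plt.legend()
plt.show()
```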
This problem is also available in French: Trigo tricoté
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9131200909614563, "perplexity_flag": "middle"}
|
http://unapologetic.wordpress.com/2008/03/21/integrability-over-subintervals/?like=1&source=post_flair&_wpnonce=bbf4ba6a50
|
# The Unapologetic Mathematician
## Integrability over subintervals
As I noted when I first motivated bounded variation, we’re often trying to hold down Riemann-Stieltjes sums to help them converge. In a sense, we’re sampling both the integrand $f$ and the variation of the integrator $\alpha$, and together they’re not big enough to make the Riemann-Stieltjes sums blow up as we take more and more samples. And it seems reasonable that if these sums don’t blow up over the whole interval, then they’ll not blow up over a subinterval.
More specifically, I assert that if $\alpha$ is a function of bounded variation, $f$ is integrable with respect to $\alpha$ on $\left[a,b\right]$, and $c$ is a point between $a$ and $b$, then $f$ is integrable with respect to $\alpha$ on $\left[a,c\right]$.
Then, in the equation expressing “linearity” in the interval
$\displaystyle\int\limits_{\left[a,b\right]}fd\alpha=\int\limits_{\left[a,c\right]}fd\alpha+\int\limits_{\left[c,b\right]}fd\alpha$
we have two of these integrals known to exist. Therefore the third one does as well, and the equation is true. If we have a subinterval $\left[c,d\right]\subseteq\left[a,b\right]$, then this theorem states that $f$ is integrable over $\left[c,b\right]$, and another invocation of the theorem shows that $f$ is integrable over $\left[c,d\right]$. So being integrable over an interval implies that the function is integrable over any subinterval.
As we said earlier, we can handle all integrators of bounded variation by just considering increasing integrators. And then we can use Riemann’s condition. So, given an $\epsilon>0$ there is a partition $x_\epsilon$ of $\left[a,b\right]$ so that $U_{\alpha,x}(f)-L_{\alpha,x}(f)<\epsilon$ for any partition $x$ finer than $x_\epsilon$.
We may as well assume that $c$ is a partition point of $x_\epsilon$, since we can just throw it in if it isn’t already. Then the partition points up to $c$ form a partition $x_\epsilon'$ of $\left[a,c\right]$. Further, any refinement $x'$ of $x_\epsilon'$ is similarly part of a refinement $x$ of $x_\epsilon$. So by assumption we know that $U_{\alpha,x}(f)-L_{\alpha,x}(f)<\epsilon$, and we get down to $x'$ by throwing away terms in the sum for partition points above $c$. Each of these terms is nonnegative, and so we see that
$U_{\alpha,x'}(f)-L_{\alpha,x'}(f)<U_{\alpha,x}(f)-L_{\alpha,x}(f)<\epsilon$
That is, $f$ satisfies Riemann’s condition with respect to $\alpha$ on $\left[a,c\right]$, and so it’s integrable.
Posted by John Armstrong | Analysis, Calculus
http://physics.stackexchange.com/questions/tagged/astronomy
# Tagged Questions
The science dealing with objects and phenomena located beyond Earth. In particular, this applies to observations and data. At its core, astronomy is the physically informed cataloging and classifying of the contents of the universe in order to better understand what is out there.
1answer
49 views
### Do all known planets and moons have magnetic field?
In this Wikipedia article it is stated, that magnetic field of Earth is caused by currents in her core. The same origin is for Jupiter magnetic field. For Moon (article) there is a magnetic field, ...
3answers
64 views
### Parallax, obliquity, precession, and Orion?
Today, the obliquity of the earth is about 23.4°. 6500 years ago, it was about 24.1° Imagine the blue square is the constellation of Orion, and the yellow star is the sun. Viewpoint B is you, on ...
3answers
107 views
### Distance away from earth to see it as a full disk [duplicate]
This question is more space-related than physics-related, but here goes... How far away the earth would I have to be in order to see the earth as a full disk? What I'm looking for is a distance in ...
0answers
19 views
### What's an equation for two astronomical entities both of 4000 tonnes in weight, colliding? [duplicate]
I have next to no knowledge of any physics, but would be happy if you could answer my question... I want to know an equation for two astronomical entities such as the star Sirius (2.02 solar mass) ...
0answers
20 views
### Aligning images of starfields
I have two images taken within 30 minutes of each other in the same part of the sky. They are very similar but are slightly offset due to the Earth's rotation and other factors. I know: the X, Y ...
2answers
85 views
### Is it possible that universe might not be speeding up expansion?
I'm not sure but I was thinking of galaxies shrinking with time while still moving apart from each other at almost a constant speed or less (i.e: uniform/slightly decelerating expansion). This may ...
1answer
40 views
### What planets are visible to the naked eye from Mars?
Here on Earth we are blessed with being able to see some other planets, Mars & Venus etc, with the naked eye on a fairly regular basis thanks to the distance between the planets. What about from ...
2answers
84 views
### Curiosity Rover (MSL): current coordinates
I'm looking for information on the current coordinates of Mars Science Laboratory's Curiosity Rover. I've only found the landing site coordinates 4.5895°S ...
1answer
114 views
### What happens to the electron companions of cosmic ray protons?
If primary cosmic rays are made mostly of protons, where are the electrons lost, and does this mean that the Earth is positively charged? Does the sun eject protons and electrons in equal number?
3answers
81 views
### How do we measure the range of distant objects despite relativistic effects?
When we observe astronomical objects like distant galaxies there are several complicating factors for estimating the distance: Relativistic speed result in length contraction Relativistic speed ...
0answers
115 views
### Optimal telescope size?
Consider a diffraction-limited telescope with unobstructed aperture $D$. Such a scope is capable of yielding an angular resolution $\alpha$ that scales as $\lambda/D$, with $\lambda$ denoting the ...
0answers
288 views
### Strange things about new moon [duplicate]
I have some strange and infantile questions about new moon. I want to know how is it possible that the Moon is not visible at night and also at day it is not Sun eclipse? I will explain the problem in ...
3answers
105 views
### Why wasn't the moon visible during the day a few decades ago?
I was born in 1949. When I was young we played outside and watched the clouds and the sky a lot, and I don't remember ever seeing the moon during the day. Is the sun closer to us now so we see it more ...
2answers
48 views
### Has anyone studied a statistical scaling law for the universe? [closed]
How do named objects in the universe scale? Is there a predictable curve for an ordered list, say {atom, animal, planet, solar system, galaxy, etc}? Can you then use the analysis to predict when the ...
0answers
27 views
### Information-theoretic limits in observational astronomy
It seems to me that with ever larger and better telescopes and powerful statistical methods, humans are gleaning surprising amounts of information from observations of distant stars. I am especially ...
1answer
112 views
### Find temperature of surface (Blackbody Radiation)
An astronomer is trying to estimate the surface temperature of a star with a radius of $5 \times 10^8\ m$ by modeling it as an ideal blackbody. The astronomer has measured the intensity of ...
1answer
74 views
### I want the Saturn's position in terms of Declination and Right Ascension?
I want the Saturn's position in terms of Declination and Right Ascension for a couple of month in the interval of 1 hour in a text file to do a simulation. Which site can provide me these data? Or ...
1answer
56 views
### Where can I search for high quality telescope images of Earth's moon?
I am developing a sensor calibration capability that compares a telescope lunar observation to a physics-based radiometric model. I'd like to find some high quality lunar images to test our ...
0answers
45 views
### What is the observable Earth we can see? [duplicate]
Yes I know we can see the whole earth, but how far can we see left to right waving out the limitations of sight. Because it's impossible to see the whole earth right? because it's spherical? So is ...
0answers
70 views
### Doubts about NASA's announcement of collision between Milky Way and Andromeda [closed]
Andromeda is one of the nearest big gallaxies out there. We can estimate the distance to the Andromeda Galaxy measuring the apparent brightness of Cepheid variable stars; its distance is currently ...
3answers
132 views
### How old is SUN ☉?
How do we know/calculate the exact age of sun ☉ ? ie. 4.57 billion years. What is the way to calculate it?
2answers
96 views
### Is there anyone calculate the probability of extrasolar planets?
After reading an recent news "Stargazers capture first picture of a planet with two suns – just like Luke Skywalker’s home planet of Tatooine in Star Wars", I am thinking that: can we calculate the ...
1answer
44 views
### What information about a meteor's trajectory, size, or height can be derived from a single location?
If one sees a meteor, is there any way to get even a rough approximation of its height, entry angle, size, or other characteristic without triangulation from another position? If it appeared as a ...
1answer
144 views
### Telescope size to view saturn
What is the properties (size, etc) of required lenses for minimal telescope to see the Saturn rings clearly?
1answer
113 views
### What is the frequency of occurrence of stellar classifications off the HR main-sequence?
An alternative version of this question would be: "if was to pick a star from the $10^{11}$ or so in our galaxy at random, what are the probabilities of it being various kinds of star?" (and I do mean ...
1answer
80 views
### How is celestial navigation done on a low-level?
When we send a probe off to Jupiter or Saturn, or even Earth orbit, how are the rocket firings timed and coordinated? For instance, when I want to drive to another city I pull onto the highway and ...
1answer
40 views
### Initial separation of neutron star/black hole binaries?
How would I go about finding the distribution of initial separations (i.e. the lengths between the centres of mass) of stars that make up binary systems. I am interested in neutron stars and stellar ...
1answer
50 views
### Have we observed another supernova explosion since SN 2008D?
I read the wikipedia article about SN 2008D which says: "Now that it is known what X-ray pattern to look for, the next generation of X-ray satellites is expected to find hundreds of supernovae every ...
0answers
24 views
### Saturn's angular position with respect to major axis?
Would someone please help me by giving me Saturn's angular position with respect to its orbit's major axis when the Earth is at Vernal Equinox Or at Perihelion? If not possible please mention some ...
0answers
25 views
### Planets motion around the sun [duplicate]
Why planets of solar system move almost in the same plane?
3answers
350 views
### Why planets are rotating only in one plane? [duplicate]
Since gravity is three dimensional why planets are rotating only in one plane around sun.
1answer
101 views
### Very large absorption lines in stellar spectrum
I was puzzled by the wide absorption lines in a stellar spectrum I found. The following is what I expect absorption lines to look like - thin, crisp lines: However, I found this stellar spectrum, ...
1answer
55 views
### How is Doppler redshift of distant galaxies established?
Doppler redshift of distant galaxies gave first hint that the universe is expanding. I am curious to know how this redshift is actually measured and interpreted from observation. Suppose I observe ...
0answers
27 views
### What is the formula for calculating the length of any given day (sunrise to sunset)? [duplicate]
In a specific date what law gives us perfect measurements and how will we measure if latitude is given?
2answers
92 views
### How/why can the cosmic background radiation measurements tell us anything about the curvature of the universe?
So I've read the Wikipedia articles on WMAP and CMB in an attempt to try to understand how scientists are able to deduce the curvature of the universe from the measurements of the CMB. The Wiki ...
1answer
55 views
### How “big” objects can WISE and NEOWISE detect?
I mean WISE is monitoring near-Earth objects, but cannot see the latest Russian meteor and others. Why can it not detect small objects? What is the limit of it's infrared detectors?
2answers
123 views
### Could we survive if the sun were a black hole?
Ignoring the impossibility of the sun suddenly collapsing, and the energy release (which would kill us): If the sun suddenly turned into a black hole, could we survive if we had sufficient energy for ...
2answers
182 views
### Which way does a black hole spin?
As far as I understand, and from what I have been shown in renderings of black holes, they spin (like water going down a drain). My question is, firstly, does the matter being pulled into a black ...
1answer
101 views
### How is it the Voyagers are a few seconds closer to Earth than earlier?
The Voyager 2 tweet of March 01, 2013 put it's distance at 14 hrs 04 mins 23 secs of light-travel time from Earth. A more recent (earlier today) tweet says it is 14 hrs 04 mins 22 secs of light-travel ...
3answers
303 views
### What made us think that Earth moves around the Sun?
Trying to observe the night sky for a few weeks, the motion of the Sun and the stars pretty much fits into the Geocentric Theory i.e. All of them move around the Earth. What then, which particular ...
1answer
42 views
### How to track the visual path of a LEO satellite as seen from the ground
I have been struggling with this problem for a while so I decided to ask. I'm new here and I'm not sure where this type of question belongs, so forgive me if this isn't the right section. I am ...
2answers
208 views
### EM waves: How do they travel for billions of km without damping
If a star is 1 billion light years away, it means that the light we see from the star is emmitted billions of years ago. How does this light not undergo a frequency change or get damped inspite of ...
3answers
65 views
### Length of day of a gas giant
How can the rotational speed, or the length of a day be determined or estimated in a planet which is composed entirely of non homogeneous fluids? There must be internal forces (pressure gradients, ...
1answer
95 views
### What are we all falling towards?
One meteorite fell on the ground in Russia, last week. In different circumstances, it could have orbited the earth, or perhaps pass close to the earth and then disappear into the space. It seems that ...
1answer
67 views
### Dividing two star spectra
I am doing some work that involves dividing two stellar spectra from the same star. Those stellar spectra are constructed by summing random samples of multiple spectra from the same star to improve ...
1answer
222 views
### What dark matter can AMS currently find (or exclude)?
The rumor mill is running again, this time it's about the AMS experiment (Alpha Magnetic Spectrometer) that's going to make a major announcement soon. I suppose they are looking for peaks in gamma ...
2answers
142 views
### Why do astronomers never put a scale on their photographs?
Why do astronomers never put a scale on their photographs? I have been looking at images of the Bird nebula, a collision of three galaxies, but in none of the dozen or so that I have found, nor in the ...
0answers
151 views
### Mirrors and light beam divergence technology limits
There are many applications for orbital space mirrors in astronomy (better telescopes) and space propulsion (solar power for deep space probes), but this is limited by the minimum beam divergence ...
2answers
80 views
### Is there a map of the particles in outer space?
Since outer space is not quite a vacuum, and the distribution of various heavenly bodies is locally inhomogeneous, it seems reasonable to expect that the density and variety of particles ...
1answer
136 views
### Relationship between Mars and Earth rotation
Is it by pure random chance that Mars and the Earth have nearly the same day duration (Mars day is barely 40 minutes longer, which is just 3% difference), or there is some causal relationship between ...
http://mathoverflow.net/questions/48888/what-is-the-divisibility-theory-for-bezout-domains
## What is the divisibility theory for Bezout Domains?
There are many facts about integer gcds which can be proved by appealing to unique prime factorization (up to sign), for example $\gcd(a^2,b^2)=\gcd(a,b)^2$. One way to get the machinery (if that is not too strong a word) about primes and unique factorization is to start with the fact that for all integers $a,b$ there is a linear combination $d=as+bt$ which divides both $a$ and $b$. That is, to show that the integers are a Bezout Domain and then follow the familiar series of results we all learned and perhaps forgot (but then relearned when we taught it). An important tool is that the cofactors $s,t$ can be computed via the Euclidean Algorithm. There are other domains with an integer norm where the same path can be followed: $\mathbb{Z}[\sqrt{k}]$ for certain $k$ such as $2$ and $-1$, for example.

However, many of the facts about divisibility can be derived without all that machinery. In a recent answer I pointed out that $as+bt=1$ implies, after cubing and simplifying, that $a^2(as+3bt)s^2+b^2(3as+bt)t^2=1$, so that $\gcd(a,b)=1$ implies $\gcd(a^2,b^2)=1$. That can be fixed up to a proof that $\gcd(a^2,b^2)=\gcd(a,b)^2$.

My question is how far one can get assuming only that a commutative ring (with 1) is a Bezout Domain. Lest I be accused of not having a question I'll ask the following (but feel free to tell more).
Is there any nice treatment of the facts about divisibility which follow only from the assumption that we are in a Bezout Domain? And are there any nice Bezout Domains which show some facts which don't follow?
I suppose all the premises would be equations, as would the conclusions. Can we say anything about the increase in complexity? In the equation $a^2(as+3bt)s^2+b^2(3as+bt)t^2=1$ we can use $as+bt=1$ to replace $as+3bt$ with $2bt+1$ or $3-2as$, although this is still the same value. Must the cofactors $s',t'$ in $a^2s'+b^2t'=1$ be at least quartic in $a,b,s,t$, or is there a nicer equation? Over the integers the $s'$ and $t'$ I mention are usually much bigger than they need to be. For example $a,b=11,18$ gives $as+bt=11\cdot 5+18\cdot (-3)=1$, which yields $11^2\cdot (-2675)+18^2\cdot 999=1$. But we prefer $11^2\cdot (-83)+18^2\cdot 31=1$ or at least $11^2\cdot 241+18^2\cdot (-90)=1$.
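For concreteness, here is a small Python check of the cubing identity and of the $11,18$ example (added purely as an illustration; the variable names are ad hoc, and `pow(x, -1, m)` needs Python 3.8+):

```python
# Numeric check of the cubing trick and the 11/18 example above (illustration only).
from math import gcd

a, b, s, t = 11, 18, 5, -3
assert a*s + b*t == 1

# cofactors obtained by expanding (a*s + b*t)**3 = 1 and regrouping
s2 = (a*s + 3*b*t) * s**2      # -2675
t2 = (3*a*s + b*t) * t**2      #   999
assert a**2 * s2 + b**2 * t2 == 1
assert gcd(a**2, b**2) == gcd(a, b)**2 == 1

# the much smaller cofactors mentioned above, via a modular inverse
s_min = pow(a**2, -1, b**2)              # 241
t_min = (1 - a**2 * s_min) // b**2       # -90
assert a**2 * s_min + b**2 * t_min == 1
```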
-
## 2 Answers
I find your question interesting but a little vague. Let me give you a couple of references: perhaps you can use them to sharpen your question. (Or perhaps they're not what you're looking for at all: we'll see...)
First, there are certainly plenty of treatments of factorization from the perspective of Bezout domains (or GCD-domains, etc.): see for instance the Chapters on Factorization, Bezout Domains and Valuation Domains (currently labelled 12,13,14, but beware: this is subject to change!) in
http://math.uga.edu/~pete/integral.pdf
As for properties not shared by all Bezout domains: an integral domain $R$ is called an elementary divisor domain if Smith normal form exists over $R$: that is if every (not necessarily square) matrix with $R$-coefficients can be diagonalized by performing elementary row and column operations. Famously, any PID is an elementary divisor domain.
It is an open question whether every Bezout domain is an elementary divisor domain. Apparently the expected answer among the experts is no, so this gives at least a conjectural answer to your question. For some information about this problem and other facts which may or may not hold for arbitrary Bezout domains, please see the recent paper by my colleague D.J. Lorenzini:
http://www.math.uga.edu/~lorenz/BezoutMay9.pdf
-
Thanks, I like your notes. I realize that my question is somewhat different so I will re-ask it. – Aaron Meyerowitz Dec 11 2010 at 6:16
Based upon your examples, you may find it more natural to work with the slightly more general class of Prüfer domains (vs. Bezout domains) - i.e. finitely generated ideals $\ne 0$ are invertible (vs. principal). Prüfer domains are non-Noetherian generalizations of Dedekind domains. Their ubiquity stems from a remarkable confluence of interesting characterizations, e.g. CRT, or Gauss's Lemma for content ideals, or for ideals $\rm\ A\cap (B + C) = A\cap B + A\cap C\:,\:$ or $\rm\ (A + B)\ (A \cap B) = A\ B\ $ etc. It's been estimated that there are close to 100 such characterizations known, e.g. see my sci.math post for 30 of them.
As a simple example I'll give the natural Prüfer domain proof of a generalization of your example, viz. the ideal-theoretic $\:$ Freshman's Dream $\rm\ \ (A + B)^n = A^n + B^n\:.\:$ This identity is true for both arithmetic of $\:$ GCDs $\:$ and invertible ideals simply because, in both cases, multiplication is cancellative and addition is idempotent, i.e. $\rm\ A + A = A\$ for ideals and $\rm\ (A,A) = A\$ for GCDs. Combining these properties with the associative, commutative, distributive laws of addition and multiplication we obtain an extremely elementary high-school-level proof of the Freshman's Dream - which is best illustrated for $\rm\: n = 2\:,\:$ viz.
$\rm\quad\quad (A + B)^4 \ =\ A^4 + A^3 B + A^2 B^2 + AB^3 + B^4$
$\rm\quad\quad\phantom{(A + B)^4 }\ =\ A^2\ (A^2 + AB + B^2) + (A^2 + AB + B^2)\ B^2$
$\rm\quad\quad\phantom{(A + B)^4 }\ =\ (A^2 + B^2)\ \:(A + B)^2$
So $\rm\ {(A + B)^2 }\ =\ \ A^2 + B^2\$ if $\rm\ A+B\$ is cancellative, e.g. if $\rm\ A+B = 1\$ or if it's invertible.
The same proof generalizes for all $\rm\:n\:$ since, as above
$\rm\quad\quad (A + B)^{2n}\ =\ A^n\ (A^n + \cdots+ B^n) + (A^n +\cdots+ B^n)\ B^n$
$\rm\quad\quad\phantom{(A + B)^{2n}}\ =\ (A^n + B^n)\ (A + B)^n$
In the GCD case $\rm\ A+B\ := (A,B) = \gcd(A,B)\$ for $\rm\:A,B\:$ in a GCD-domain, i.e. a domain where $\rm\: \gcd(A,B)\:$ exists for all $\rm\:A,B \ne 0,0\:$. Here too the Dream is true since $\rm\:(A,B)\:$ is cancellable, being nonzero in a domain. (Note: one can unify the GCD and ideal cases by employing Divisor Theory).
In fact this yields yet another characterization: a domain is Prufer iff it satisfies the Freshman's Dream for all finitely generated ideals. See said sci.math post for further discussion.
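For the special case of integer gcds the Dream is easy to test numerically; here is a small brute-force check (an illustration only, not part of the argument above):

```python
# Freshman's Dream for integer gcds: gcd(a, b)**n == gcd(a**n, b**n).
from math import gcd
from itertools import product

assert all(gcd(a, b) ** n == gcd(a ** n, b ** n)
           for a, b, n in product(range(1, 40), range(1, 40), range(1, 5)))
```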
-
http://en.wikipedia.org/wiki/Summation
# Summation
"Sum" and "Summation" redirect here. For other uses, see Sum (disambiguation) and Summation (disambiguation).
Calculation results

| Operation | Terminology |
|---|---|
| Addition (+) | addend + addend = sum |
| Subtraction (−) | minuend − subtrahend = difference |
| Multiplication (×) | multiplicand × multiplier = product |
| Division (÷) | dividend ÷ divisor = quotient |
| Exponentiation | base^exponent = power |
| nth root (√) | degree√radicand = root |
| Logarithm | log_base(power) = exponent |
Summation is the operation of adding a sequence of numbers; the result is their sum or total. If numbers are added sequentially from left to right, any intermediate result is a partial sum, prefix sum, or running total of the summation. The numbers to be summed (called addends, or sometimes summands) may be integers, rational numbers, real numbers, or complex numbers. Besides numbers, other types of values can be added as well: vectors, matrices, polynomials and, in general, elements of any additive group (or even monoid). For finite sequences of such elements, summation always produces a well-defined sum (possibly by virtue of the convention for empty sums).
Summation of an infinite sequence of values is not always possible, and when a value can be given for an infinite summation, this involves more than just the addition operation, namely also the notion of a limit. Such infinite summations are known as series. Another notion involving limits of finite sums is integration. The term summation has a special meaning related to extrapolation in the context of divergent series.
The summation of the sequence [1, 2, 4, 2] is an expression whose value is the sum of each of the members of the sequence. In the example, 1 + 2 + 4 + 2 = 9. Since addition is associative the value does not depend on how the additions are grouped, for instance (1 + 2) + (4 + 2) and 1 + ((2 + 4) + 2) both have the value 9; therefore, parentheses are usually omitted in repeated additions. Addition is also commutative, so permuting the terms of a finite sequence does not change its sum (for infinite summations this property may fail; see absolute convergence for conditions under which it still holds).
There is no special notation for the summation of such explicit sequences, as the corresponding repeated addition expression will do. There is only a slight difficulty if the sequence has fewer than two elements: the summation of a sequence of one term involves no plus sign (it is indistinguishable from the term itself) and the summation of the empty sequence cannot even be written down (but one can write its value "0" in its place). If, however, the terms of the sequence are given by a regular pattern, possibly of variable length, then a summation operator may be useful or even essential. For the summation of the sequence of consecutive integers from 1 to 100 one could use an addition expression involving an ellipsis to indicate the missing terms: 1 + 2 + 3 + 4 + ... + 99 + 100. In this case the reader easily guesses the pattern; however, for more complicated patterns, one needs to be precise about the rule used to find successive terms, which can be achieved by using the summation operator "Σ". Using this sigma notation the above summation is written as:
$\sum_{i=1}^{100}i.$
The value of this summation is 5050. It can be found without performing 99 additions, since it can be shown (for instance by mathematical induction) that
$\sum_{i=1}^ni = \frac{n(n+1)}{2}$
for all natural numbers n. More generally, formulae exist for many summations of terms following a regular pattern.
The term "indefinite summation" refers to the search for an inverse image of a given infinite sequence s of values for the forward difference operator, in other words for a sequence, called antidifference of s, whose finite differences are given by s. By contrast, summation as discussed in this article is called "definite summation".
## Notation
### Capital-sigma notation
Mathematical notation uses a symbol that compactly represents summation of many similar terms: the summation symbol, ∑, an enlarged form of the upright capital Greek letter Sigma. This is defined as:
$\sum_{i=m}^n a_i = a_m + a_{m+1} + a_{m+2} +\cdots+ a_{n-1} + a_n.$
Here, i represents the index of summation; $a_i$ is an indexed variable representing each successive term in the series; m is the lower bound of summation, and n is the upper bound of summation. The "i = m" under the summation symbol means that the index i starts out equal to m. The index, i, is incremented by 1 for each successive term, stopping when i = n.
Here is an example showing the summation of exponential terms (all terms to the power of 2):
$\sum_{i=3}^6 i^2 = 3^2+4^2+5^2+6^2 = 86.$
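In a programming language such as Python the same example can be written with the built-in `sum`; this is only an illustrative sketch (note that the upper bound in sigma notation is inclusive, while Python's `range` excludes its endpoint):

```python
# Sigma notation in code: the sum of i**2 for i = 3, 4, 5, 6.
total = sum(i**2 for i in range(3, 7))   # range(3, 7) yields 3, 4, 5, 6
assert total == 86
```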
Informal writing sometimes omits the definition of the index and bounds of summation when these are clear from context, as in:
$\sum a_i^2 = \sum_{i=1}^n a_i^2.$
One often sees generalizations of this notation in which an arbitrary logical condition is supplied, and the sum is intended to be taken over all values satisfying the condition. For example:
$\sum_{0\le k< 100} f(k)$
is the sum of f(k) over all (integers) k in the specified range,
$\sum_{x\in S} f(x)$
is the sum of f(x) over all elements x in the set S, and
$\sum_{d|n}\;\mu(d)$
is the sum of μ(d) over all positive integers d dividing n.[1]
There are also ways to generalize the use of many sigma signs. For example,
$\sum_{\ell,\ell'}$
is the same as
$\sum_\ell\sum_{\ell'}.$
A similar notation is applied when it comes to denoting the product of a sequence, which is similar to its summation, but which uses the multiplication operation instead of addition (and gives 1 for an empty sequence instead of 0). The same basic structure is used, with ∏, an enlarged form of the Greek capital letter Pi, replacing the ∑.
### Special cases
It is possible to sum fewer than 2 numbers:
• If the summation has one summand x, then the evaluated sum is x.
• If the summation has no summands, then the evaluated sum is zero, because zero is the identity for addition. This is known as the empty sum.
These degenerate cases are usually only used when the summation notation gives a degenerate result in a special case. For example, if n = m in the definition above, then there is only one term in the sum; if n = m − 1, then there is none.
## Formal definition
If the iterated function notation is defined e.g. $f^2(x) \equiv f(f(x))$ and is considered a more primitive notation, then summation can be defined in terms of iterated functions as:
$\left\{b+1,\sum_{i=a}^b g(i)\right\} \equiv \left( \{i,x\} \rightarrow \{ i+1 ,x+g(i) \}\right)^{b-a+1} \{a,0\}$
Here the curly braces define a 2-tuple and the right arrow is a function definition taking a 2-tuple to a 2-tuple. The function is applied b − a + 1 times to the tuple {a, 0}.
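A direct transcription of this definition into Python might look as follows (an illustrative sketch; the function name `summation` is chosen here and is not standard):

```python
# Summation by iterating {i, x} -> {i + 1, x + g(i)} from {a, 0}, b - a + 1 times.
def summation(g, a, b):
    i, x = a, 0
    for _ in range(b - a + 1):
        i, x = i + 1, x + g(i)
    return x

assert summation(lambda i: i, 1, 100) == 5050    # the example from the lead
assert summation(lambda i: i**2, 3, 6) == 86
```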
## Measure theory notation
In the notation of measure and integration theory, a sum can be expressed as a definite integral,
$\sum_{k=a}^b f(k) = \int_{[a,b]} f\,d\mu$
where [a,b] is the subset of the integers from a to b, and where μ is the counting measure.
## Fundamental theorem of discrete calculus
Indefinite sums can be used to calculate definite sums with the formula:[2]
$\sum_{k=a}^b f(k)=\Delta^{-1}f(b+1)-\Delta^{-1}f(a)$
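For example, with f(k) = k an antidifference is F(k) = k(k − 1)/2, and the formula reproduces the sum of the first 100 positive integers; a short illustrative check in Python:

```python
# With f(k) = k, an antidifference is F(k) = k*(k-1)/2, so the formula above gives
# sum_{k=a}^{b} k = F(b+1) - F(a).  (Illustration only.)
F = lambda k: k * (k - 1) // 2
a, b = 1, 100
assert sum(range(a, b + 1)) == F(b + 1) - F(a) == 5050
```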
## Approximation by definite integrals
Many such approximations can be obtained by the following connection between sums and integrals, which holds for any:
increasing function f:
$\int_{s=a-1}^{b} f(s)\ ds \le \sum_{i=a}^{b} f(i) \le \int_{s=a}^{b+1} f(s)\ ds.$
decreasing function f:
$\int_{s=a}^{b+1} f(s)\ ds \le \sum_{i=a}^{b} f(i) \le \int_{s=a-1}^{b} f(s)\ ds.$
For more general approximations, see the Euler–Maclaurin formula.
For summations in which the summand is given (or can be interpolated) by an integrable function of the index, the summation can be interpreted as a Riemann sum occurring in the definition of the corresponding definite integral. One can therefore expect that for instance
$\frac{b-a}{n}\sum_{i=0}^{n-1} f\left(a+i\frac{b-a}n\right) \approx \int_a^b f(x)\ dx,$
since the right hand side is by definition the limit for $n\to\infty$ of the left hand side. However for a given summation n is fixed, and little can be said about the error in the above approximation without additional assumptions about f: it is clear that for wildly oscillating functions the Riemann sum can be arbitrarily far from the Riemann integral.
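The two-sided bound for an increasing f is easy to verify numerically in a concrete case such as f(x) = x²; an illustrative Python check:

```python
# Check of  integral_{a-1}^{b} f  <=  sum_{i=a}^{b} f(i)  <=  integral_{a}^{b+1} f
# for the increasing function f(x) = x**2 (illustration only).
f = lambda x: x**2
F = lambda x: x**3 / 3          # an antiderivative of f
a, b = 1, 100
s = sum(f(i) for i in range(a, b + 1))
assert F(b) - F(a - 1) <= s <= F(b + 1) - F(a)
```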
## Identities
The formulae below involve finite sums; for infinite summations or finite summations of expressions involving trigonometric functions or other transcendental functions, see list of mathematical series
### General manipulations
$\sum_{n=s}^t C\cdot f(n) = C\cdot \sum_{n=s}^t f(n)$, where C is a constant
$\sum_{n=s}^t f(n) + \sum_{n=s}^{t} g(n) = \sum_{n=s}^t \left[f(n) + g(n)\right]$
$\sum_{n=s}^t f(n) - \sum_{n=s}^{t} g(n) = \sum_{n=s}^t \left[f(n) - g(n)\right]$
$\sum_{n=s}^t f(n) = \sum_{n=s+p}^{t+p} f(n-p)$
$\sum_{n=s}^j f(n) + \sum_{n=j+1}^t f(n) = \sum_{n=s}^t f(n)$
$\sum_{n=s}^t f(n) = \sum_{n=t}^s f(n)$, for finite s and t.
$\sum_{n\in A} f(n) = \sum_{n\in \sigma(A)} f(n)$, for a finite set A (Where σ(A) is a permutation of A).
$\sum_{i=k_0}^{k_1}\sum_{j=l_0}^{l_1} a_{i,j} = \sum_{j=l_0}^{l_1}\sum_{i=k_0}^{k_1} a_{i,j}$
$\sum_{n=0}^t f(2n) + \sum_{n=0}^t f(2n+1) = \sum_{n=0}^{2t+1} f(n)$
$\sum_{n=0}^t \sum_{i=0}^{z-1} f(z\cdot n+i) = \sum_{n=0}^{z\cdot t+z-1} f(n)$
$\sum_{n=s}^t \ln f(n) = \ln \prod_{n=s}^t f(n)$
$c^{\left[\sum_{n=s}^t f(n) \right]} = \prod_{n=s}^t c^{f(n)}$
### Some summations of polynomial expressions
$\sum_{i=m}^n 1 = n+1-m$
$\sum_{i=1}^n \frac{1}{i} = H_n$ (See Harmonic number)
$\sum_{i=1}^n \frac{1}{i^k} = H^k_n$ (See Generalized harmonic number)
$\sum_{i=m}^n i = \frac{(n+1-m)(n+m)}{2}$ (see arithmetic series)
$\sum_{i=0}^n i = \sum_{i=1}^n i = \frac{n(n+1)}{2}$ (Special case of the arithmetic series)
$\sum_{i=0}^n i^2 = \frac{n(n+1)(2n+1)}{6} = \frac{n^3}{3} + \frac{n^2}{2} + \frac{n}{6}$
$\sum_{i=0}^n i^3 = \left(\frac{n(n+1)}{2}\right)^2 = \frac{n^4}{4} + \frac{n^3}{2} + \frac{n^2}{4} = \left[\sum_{i=1}^n i\right]^2$
$\sum_{i=0}^n i^4 = \frac{n(n+1)(2n+1)(3n^2+3n-1)}{30} = \frac{n^5}{5} + \frac{n^4}{2} + \frac{n^3}{3} - \frac{n}{30}$
$\sum_{i=0}^n i^p = \frac{(n+1)^{p+1}}{p+1} + \sum_{k=1}^p\frac{B_k}{p-k+1}{p\choose k}(n+1)^{p-k+1}$ where $B_k$ denotes a Bernoulli number
The following formulae are manipulations of $\sum_{i=0}^n i^3 = \left(\sum_{i=0}^n i\right)^2$ generalized to begin a series at any natural number value (i.e., $m \in \mathbb{N}$ ):
$\left(\sum_{i=m}^n i\right)^2 = \sum_{i=m}^n ( i^3 - im(m-1) )$
$\sum_{i=m}^n i^3 = \left(\sum_{i=m}^n i\right)^2 + m(m-1)\sum_{i=m}^n i$
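The closed forms above can be spot-checked by brute force; an illustrative Python sketch:

```python
# Brute-force verification of several of the closed forms above (illustration only).
for n in range(60):
    assert sum(i    for i in range(n + 1)) == n * (n + 1) // 2
    assert sum(i**2 for i in range(n + 1)) == n * (n + 1) * (2*n + 1) // 6
    assert sum(i**3 for i in range(n + 1)) == (n * (n + 1) // 2) ** 2
    assert sum(i**4 for i in range(n + 1)) == n*(n+1)*(2*n+1)*(3*n**2 + 3*n - 1) // 30
```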
### Some summations involving exponential terms
In the summations below a is a constant not equal to 1
$\sum_{i=m}^{n-1} a^i = \frac{a^m-a^n}{1-a}$ (m < n; see geometric series)
$\sum_{i=0}^{n-1} a^i = \frac{1-a^n}{1-a}$ (geometric series starting at 0)
$\sum_{i=0}^{n-1} i a^i = \frac{a-na^n+(n-1)a^{n+1}}{(1-a)^2}$
$\sum_{i=0}^{n-1} i 2^i = 2+(n-2)2^{n}$ (special case when a = 2)
$\sum_{i=0}^{n-1} \frac{i}{2^i} = 2-\frac{n+1}{2^{n-1}}$ (special case when a = 1/2)
### Some summations involving binomial coefficients and factorials
There exist enormously many summation identities involving binomial coefficients (a whole chapter of Concrete Mathematics is devoted to just the basic techniques). Some of the most basic ones are the following.
$\sum_{i=0}^n {n \choose i} = 2^n$
$\sum_{i=1}^{n} i{n \choose i} = n2^{n-1}$
$\sum_{i=0}^{n} i!\cdot{n \choose i} = \sum_{i=0}^{n} {}_{n}P_{i} = \lfloor n!\cdot e \rfloor$
$\sum_{i=0}^{n-1} {i \choose k} = {n \choose k+1}$
$\sum_{i=0}^n {n \choose i}a^{(n-i)} b^i=(a + b)^n$, the binomial theorem
$\sum_{i=0}^n i\cdot i! = (n+1)! - 1$
$\sum_{i=1}^n {}_{i+k}P_{k+1} = \sum_{i=1}^n \prod_{j=0}^k (i+j) = \frac{(n+k+1)!}{(n-1)!(k+2)}$
$\sum_{i=0}^n {m+i-1 \choose i} = {m+n \choose n}$
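A few of these identities can likewise be spot-checked numerically; an illustrative Python sketch (the floating-point value of e is adequate for small n):

```python
# Spot checks of some of the binomial/factorial identities above (illustration only).
from math import comb, factorial, e, floor

n, k = 10, 3
assert sum(comb(n, i) for i in range(n + 1)) == 2**n
assert sum(i * comb(n, i) for i in range(1, n + 1)) == n * 2**(n - 1)
assert sum(factorial(i) * comb(n, i) for i in range(n + 1)) == floor(factorial(n) * e)
assert sum(comb(i, k) for i in range(n)) == comb(n, k + 1)    # hockey-stick identity
```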
## Growth rates
The following are useful approximations (using theta notation):
$\sum_{i=1}^n i^c = \Theta(n^{c+1})$ for real c greater than −1
$\sum_{i=1}^n \frac{1}{i} = \Theta(\log n)$ (See Harmonic number)
$\sum_{i=1}^n c^i = \Theta(c^n)$ for real c greater than 1
$\sum_{i=1}^n \log(i)^c = \Theta(n \cdot \log(n)^{c})$ for non-negative real c
$\sum_{i=1}^n \log(i)^c \cdot i^d = \Theta(n^{d+1} \cdot \log(n)^{c})$ for non-negative real c, d
$\sum_{i=1}^n \log(i)^c \cdot i^d \cdot b^i = \Theta (n^d \cdot \log(n)^c \cdot b^n)$ for non-negative real b > 1, c, d
## Notes
1. Although the name of the dummy variable does not matter (by definition), one usually uses letters from the middle of the alphabet (i through q) to denote integers, if there is a risk of confusion. For example, even if there should be no doubt about the interpretation, it could look slightly confusing to many mathematicians to see x instead of k in the above formulae involving k. See also typographical conventions in mathematical formulae.
## Further reading
• Nicholas J. Higham, "The accuracy of floating point summation", SIAM J. Scientific Computing 14 (4), 783–799 (1993).
http://inperc.com/wiki/index.php?title=Projection
# Projection
### From Intelligent Perception
Given two sets $X$ and $Y$, the product $X \times Y$ is the set of all pairs $(a,b)$ of elements of $X$ and $Y$ respectively. The functions $p_X: X \times Y \to X$ and $p_Y: X \times Y \to Y$ given by $p_X(a,b) = a$ and $p_Y(a,b) = b$ are called the projections.
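In code, the projections are just the component extractors of a pair; a small illustrative Python sketch (the names mirror the notation above):

```python
# The projections p_X and p_Y of a product X x Y, written for Python pairs.
def p_X(pair): return pair[0]
def p_Y(pair): return pair[1]

assert p_X((3, "b")) == 3
assert p_Y((3, "b")) == "b"
```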
http://unapologetic.wordpress.com/2012/01/04/the-biot-savart-law/?like=1&source=post_flair&_wpnonce=3fdfac1636
# The Unapologetic Mathematician
## The Biot-Savart Law
What I’m going to present is slightly different from what usually gets called the Biot-Savart law, but I think it’s the most natural parallel to the Coulomb law. As far as I can tell, it doesn’t get stressed all that much in modern coverage; in the first course I took on electromagnetism way back in the summer of 1994 I didn’t even see the name written down and parsed what I heard as “bee-ohs of r”. So at least you know how it’s supposed to be pronounced.
If we have two charged particles that are both moving, then they also feel a different force than the electric one. We call the excess the magnetic force. In magnitude it’s proportional to both the magnitudes of the charges and their speeds, and inversely proportional to the square of the distance between them, and then it gets complicated. It’s probably going to be easier to write this down as a formula first.
$\displaystyle F=\frac{\mu_0}{4\pi}\frac{qv\times(q'v'\times\hat{r})}{\lvert r\rvert^2}=\frac{\mu_0}{4\pi}\frac{qv\times(q'v'\times r)}{\lvert r\rvert^3}$
So many cross products! Like last time, this is the force exerted on the first particle by the second; $r$ is the vector pointing from the second particle to the first, and $\hat{r}=\frac{r}{\lvert r\rvert}$ is the unit vector pointing in the same direction. From this formula, we see that the force is perpendicular to the direction the first particle is moving — the magnetic force can only turn a particle, not speed it up or slow it down — and in the plane spanned by the direction the second particle is moving and the displacement vector between them.
Again, we’ve written the constant of proportionality in a weird way. The “magnetic constant” $\mu_0$ now appears in the numerator, so its units are almost like the inverse of those on the electric constant. But we’ve also got two velocities to contend with; these lead to a factor of time squared over area, resulting in mass times distance over charge squared. In the SI system we have another convenience unit called the “henry”, with symbol $H$, defined by
$\displaystyle\mathrm{H}=\frac{\mathrm{m}^2\cdot\mathrm{kg}}{\mathrm{C}^2}$
which lets us write $\mu_0$ in units of henries per meter. Specifically, the SI units give it a value of $\mu_0=4\pi\times10^{-7}\frac{\mathrm{H}}{\mathrm{m}}\approx1.2566370614\times10^{-6}\frac{\mathrm{H}}{\mathrm{m}}$. Yes, I know that this makes the proportionality constant look that much weirder, since it just works out to $10^{-7}$, but still.
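If you want to play with the numbers, here is a small Python sketch of the force law as written above (purely illustrative; the function name is mine). For two parallel 1-coulomb charges one meter apart, each moving at one meter per second, it gives an attractive force of magnitude $10^{-7}\,\mathrm{N}$, just as the constant suggests.

```python
# Evaluate F = (mu_0 / 4 pi) * q v x (q' v' x r) / |r|**3 numerically (illustration).
import numpy as np

mu0 = 4e-7 * np.pi                      # the magnetic constant, in H/m

def magnetic_force(q, v, q2, v2, r):
    """Magnetic force on particle 1 (charge q, velocity v) due to particle 2
    (charge q2, velocity v2); r is the vector from particle 2 to particle 1."""
    v, v2, r = map(np.asarray, (v, v2, r))
    return (mu0 / (4 * np.pi)) * np.cross(q * v, np.cross(q2 * v2, r)) / np.linalg.norm(r)**3

# two 1 C charges 1 m apart, both moving at 1 m/s in the +y direction
F = magnetic_force(1.0, [0, 1, 0], 1.0, [0, 1, 0], [1, 0, 0])
print(F)   # roughly [-1e-7, 0, 0]: particle 1 is pulled toward particle 2
```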
## 7 Comments »
1. I read all your posts. Thanks for starting posts on electro-magnetic applications. I am looking forward to seeing how generalized Stokes’ theorem is applied to them.
Comment by Soma Murthy | January 4, 2012 | Reply
7. we can not find in this the other name of biot savart law.
Comment by vipul wani | August 5, 2012 | Reply
http://mathoverflow.net/questions/44839?sort=votes
## Wikipedia’s definition of ‘locally free sheaf’
Let $R$ be a, say, noetherian ring and $M$ an $R$-module. The Wikipedia article on 'locally free sheaf' tells me that the following two statements are equivalent:
1. The module $M$ is locally free (Edit: this means there is an open cover $\{U_i\}$ of $\operatorname{Spec} R$ such that every $\tilde{M}|_{U_i}$ is free as an $\mathcal{O}_{\operatorname{Spec} R}|_{U_i}$-module.)
2. $M_p$ is a free $R_p$-module for every prime ideal $p$ of $R$.
I see that these two things are equivalent if $M$ is finitely generated but I cannot see this in general, even if $R$ is noetherian. Am I missing something or is there a mistake on Wikipedia?
If the latter is the case, does anybody have an example of a (non-finitely generated) $R$-module $M$ over a noetherian $R$ such that $M_p=(R_p)^{n_p}$ for every prime ideal $p$ of $R$ and such that $M$ is not locally free?
-
There couldn't be a mistake in Wikipedia. – Tom Goodwillie Nov 4 2010 at 16:29
The current version of the Wikipedia article is really wrong. There are examples below with quasi-coherent sheaves. One can also consider the example $X$ equal to the spectrum of a discrete valuation ring $R$, ${\mathcal F}(X)={\mathcal O}_X(X)$ and ${\mathcal F}(U)={\mathcal O_X}(U)^2$ where $U$ is the complement of the closed point of $X$. Then $\mathcal F$ is not quasi-coherent (so cannot be locally free in the correct definition), but its stalks are free. – Qing Liu Nov 4 2010 at 21:45
## 2 Answers
Dear roger123, let $R$ be a commutative ring and $M$ an $R$-module ( which I do not suppose finitely generated). In order to minimize the risk of misunderstandings, allow me to introduce the following terminology:
Locfree The module $M$ is locally free if for every $P \in Spec (R)$ there is an element $f \in R$ such that $f \notin P$ and that $M_f$ is a free $R_f$-module.
Punctfree The module $M$ is punctually free if for every $P \in Spec (R)$ the $R_{P}$ - module $M_P$ is free.
Fact 1 Every locally free module is punctually free. Clear.
Fact 2 Despite Wikipedia's claim, it is false that a punctually free module is locally free.
Fact 3 However if the punctually free $R$- module $M$ is also finitely presented, then it is indeed locally free.
Fact 4 A finitely generated module is locally free if and only it is projective.
Fact 5 A projective module over a local ring is free. This was proved by Kaplansky and is remarkable in that, let me repeat it, the module $M$ is not supposed to be finitely generated.

A family of counterexamples to support Fact 2 Let $R$ be a Von Neumann regular ring. This means that every $r\in R$ can be written $r=r^2s$ for some $s\in R$. For example, every Boolean ring is Von Neumann regular. Take a non-principal ideal $I \subset R$. Then the $R$-module $R/I$ is finitely generated (by one generator: the class of $1$!), all its localizations are free, but it is not locally free because it is not projective (cf. Fact 4). The standard way of manufacturing this kind of example is to take for $R$ an infinite product of fields $\prod \limits_{j \in J}K_j$ and for $I$ the set of families $(a_j)_{j\in J}$ with $a_j =0$ except for finitely many $j$'s.

Final irony In the above section on counterexamples I claimed that the $R$-module $R/I$ is not projective. This is because in all generality a quotient $R/I$ of a ring $R$ by an ideal $I$ can only be $R$-projective if $I$ is principal. And I learned this fact in... Wikipedia!
-
Thank you for your answer Georges! I think that for a finitely presented $R$ module Locfree is the same as Punctfree. If $R$ is noetherian, finitely presented is equivalent to finitely generated. So I think your example only works with a non-noetherian Ring, right? – roger123 Nov 5 2010 at 7:43
Dear roger123: Yes, everything you write is absolutely correct. For finitely presented modules, the equivalence of Locfree and Punctfree follows from Facts 1 and 3. Moreover your remark about the example is extremely interesting. Indeed, R/I can only be non finitely presented if I is non finitely generated. But I only assumed $I$ non-principal. So we are inexorably led to the conclusion that in a Von Neumann regular ring, every finitely generated ideal is principal. I did not know this so I checked five minutes ago in Rotman's book on Homological Algebra (3d ed.): it is stated in Lemma 4.8 ! – Georges Elencwajg Nov 5 2010 at 8:46
I'm a bit confused about your question since an $R$-module $M$ is defined to be locally free if $M_{\mathfrak{p}}$ is a free $R_\mathfrak{p}$-module for all primes $\mathfrak{p}$. I'll assume that you are asking for any example where $M_{\mathfrak{p}}$ is a free $R_\mathfrak{p}$-module for all primes $\mathfrak{p}$ but $M$ is not projective. If this is not what you are after, just ignore this answer.
Let $R=\mathbb{Z}$ and let $M$ be the submodule of $\mathbb{Q}$ generated by all $\frac{1}{p}$ where $p$ is a prime number. Then $M$ is not projective/free, but it is locally free since `$ M_{(p)}=\mathbb{Z}_{(p)}\frac{1}{p}=R_{(p)} $` and `$M_{(0)}=\mathbb{Q}=R_{(0)}$`.
-
Dear Michael: your defn of "locally free" for finitely generated modules is wrong in the sense that it isn't too useful when $R$ is not noetherian. The right defn for f.gentd modules is local freeness for Zariski topology (i.e., over Zariski-open covering, acquires a basis), and this is equivalent to the stalk condition in the noetherian case. But not otherwise. A counterexample is $M=R/I$ for $R=\prod_{n \ge 0} \mathbf{F}_2$ and $I$ an ideal that isn't finitely generated. This $M$ is not loc. free but each $R_P$ is $\mathbf{F}_2$ (needs some thought) and $M_P$ is 0 or 1-diml. – BCnrd Nov 4 2010 at 17:55
Thank you for the comment. In my case $R$ is noetherian and $M$ is unfortunately not finitely generated. – roger123 Nov 4 2010 at 18:36
Dear roger123: The example in my comment, viewed as an $R$-algebra, is a counterexample to the sufficiency aspect in EGA IV$_4$, 18.4.12(ii) (the mistake is assuming locally finite type rather than locally finite presentation, and the error in the proof occurs when they say "donc $j$ est une immersion ouverte"). This bogus sufficiency claim is invoked in the proof of 18.4.14, but that proof can be easily made OK by verifying the locally finite presentation condition holds in the relevant cases for that argument. – BCnrd Nov 6 2010 at 3:21
http://math.stackexchange.com/questions/102648/amortized-analysis-for-2-5-tree
# Amortized Analysis for (2,5)-Tree
I need some help with the following problem
Definition: A (2,5)-tree is an external search tree, where all leaves have the same depth. Each inner node in a (2,5)-tree has at least 2, and at most 5 children.
An insert-operation consists of two parts. In the first part, we go down a suitable path from the tree's root to a leaf and the additional node is inserted as a child of the parent of this leaf. However, in doing this the condition on the degree ($\deg \le 5$) of the parent may be violated. So in a second part of the insert-operation, we must make up for this. (By splitting the parent into two nodes, and assigning to each node half the children)
The costs of this second part will be called rebalancing costs in the following. (The costs are given by how many times a node changes its parent)
A delete-operation works similarly: localise the node in a first step, then cut it off and rectify possible violations of the condition on the degree. (By either adopting an additional child from a neighbour or by combining the parent and a neighbour into a single node)
Problem: Using amortized analysis, prove the following:
• Starting with an empty (2,5)-tree, inserting $n$ Elements has a total cost of no more than $2n$.
• Starting with an empty (2,5)-tree, $n$ insert-/delete-operations (in arbitrary order) have a cost of no more than $2n$.
(This is my try at a translation from German, so I apologize if it is hard/impossible to follow; if you feel you can improve readability, please do so)
I think I can get an upper bound of $5n$ for the first part: Let $\phi(i)$ be the number of nodes with $5$ children after $i$ insert-operations. Then
$$\phi(i) - \phi(i-1) = \begin{cases} 0 & \text{if the inserted node has $<4$ siblings} \\ 1 & \text{if the inserted node has $4$ siblings} \\ -m & \text{if we have to rebalance $m$ times} \end{cases}$$
And the time-costs are (after inserting the node: to rebalance once, we must create a new parent and assign $3$ children to the new parent - giving a rebalancing-cost of 4)
$$t(i)- t(i-1) = \begin{cases} 1 & \text{if the inserted node has $<4$ siblings} \\ 1 & \text{if the inserted node has $4$ siblings} \\ 1+4m & \text{if we have to rebalance $m$ times} \end{cases}$$
Therefore $[t(i)+4\phi(i)]- [t(i-1)+4\phi(i-1)] \le 5$, which implies
$$t(n) \le t(n) + 4\phi(n) \le 5n + t(0) + 4\phi(0) = 5n$$
How could I improve this estimate? What is the right potential here? Should I maybe use a different method?
Any suggestions would be very much appreciated! Thanks for reading.
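In case it is useful for experimenting, here is a toy Python simulation I put together (not part of the exercise) that performs rightmost inserts and counts parent changes with the "cost 4 per split" accounting from above, so candidate potentials can be compared against empirical costs:

```python
# Toy simulation of (2,5)-tree inserts (my own sketch, not part of the exercise).
# It always inserts at the rightmost leaf and counts "parent changes": each split
# moves 3 children to a new node and attaches that node to a parent, i.e. cost 4.

class Node:
    def __init__(self, children=None):
        self.children = children if children is not None else []   # [] => leaf

def insert_rightmost(root):
    """Attach one new leaf on the rightmost path; return (new_root, rebalancing_cost)."""
    cost = 0
    path = [root]
    while path[-1].children and path[-1].children[-1].children:
        path.append(path[-1].children[-1])           # descend to a parent of leaves
    path[-1].children.append(Node())                  # part 1: attach the new leaf
    for depth in range(len(path) - 1, -1, -1):        # part 2: split overfull nodes
        node = path[depth]
        if len(node.children) <= 5:
            break
        sibling = Node(node.children[3:])             # new node adopts half the children
        node.children = node.children[:3]
        cost += len(sibling.children) + 1             # 3 moved children + the new node
        if depth == 0:
            root = Node([node, sibling])              # a root split grows the tree
        else:
            path[depth - 1].children.append(sibling)
    return root, cost

root, total, n = Node(), 0, 100_000
for _ in range(n):
    root, c = insert_rightmost(root)
    total += c
print(f"total rebalancing cost {total} for {n} inserts: {total / n:.3f} per insert")
```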
-
If I remember correctly, this can be done using the Accounting method. – Aryabhata Jan 26 '12 at 20:27
This is what is meant by "amortised analysis". Sam, I don't see how `insert` as described by you ensures that all leaves are on the same level. You should give precise formulations of the investigated operations, in particular rebalancing. – Raphael Jan 27 '12 at 9:16
I think your $t(i) - t(i-1)$ is wrong. In particular, why should the difference be one in the trivial cases? The inserted nodes might end up on the same level and thusly cause the same cost. – Raphael Jan 27 '12 at 9:32
Yea, I think "cost" should be read in a way that makes it dependent on elemnt depth, otherwise it does not make much sense imho. Can you ask the person who gave you the problem? Note that what you describe does not "re*balance*" the tree; in general, a linear list may be constructed. – Raphael Jan 27 '12 at 11:25
@Raphael: I think you misunderstood the rebalancing-operation. Let me try to describe it better: Say we insert a note $x_0$ into some part of the tree. If the parent $x_1$ of $x_0$ now has more than $5$ children, then we create a new node $x_1'$ and assign half the children of $x_1$ to $x_1'$. Let $x_2$ be the common parent of $x_1$ and $x_1'$ (I think this is where I was unclear in my description - we create the new node as a child of $x_1$'s parent, not as a child of $x_1$). $x_2$ now has an additional child. If it has more than 5, create a new node $x_2'$, assign half the children of... – Sam Jan 27 '12 at 13:18
http://mathoverflow.net/questions/22941/is-an-infinite-compositions-of-arrows-meaningful
## Is an “infinite compositions of arrows” meaningful?
For example, deciding whether or not the following is a category seems to depend on the above question (from Awodey's Category Theory, pg. 6):
"What if we take sets as objects and as arrows, those $f : A \rightarrow B$ such that for all $b \in B$, the subset $f^{-1}(b) \subseteq A$ is finite?"
Define for each $n \in N$ the function $f_n: N \rightarrow N$ where $f_n(x) = \max(0, x - n)$.
Then any $f_n$ or any finite composition thereof has finite inverse images. Yet the "infinite composition" $\cdots f_2 f_1 f_0$ has an infinite inverse image for $0$, and so the above does not meet the definition of a category.
If this "infinite composition" is legit, does it follow from the basic definition of a category, or must the definition be made more flexible or precise to accommodate it?
For reference, this is Awodey's definition concerning composition:
"Given arrows $f : A \rightarrow B$ and $g : B \rightarrow C$, [...] there is given an arrow: $g \circ f : A \rightarrow C$ called the composite of $f$ and $g$."
-
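To see the failure concretely, here is a small numeric illustration (the finite range $0,\dots,999$ is an arbitrary stand-in for $N$): every finite composite of the $f_n$ has a finite preimage of $0$, but those preimages grow without bound, which is exactly why the would-be infinite composite cannot be an arrow of this category.

```
# f_n(x) = max(0, x - n); finite composites have finite, but growing, preimages of 0.
def f(n):
    return lambda x: max(0, x - n)

def preimage_of_zero(g, domain):
    return [x for x in domain if g(x) == 0]

domain = range(1000)                       # stand-in for the natural numbers

g = lambda x: f(3)(f(2)(f(1)(x)))          # the finite composite f_3 f_2 f_1
print(len(preimage_of_zero(g, domain)))    # 7

def composite(k):                          # f_k ... f_2 f_1
    def g(x):
        for n in range(1, k + 1):
            x = max(0, x - n)
        return x
    return g

for k in (5, 10, 45):
    print(k, len(preimage_of_zero(composite(k), domain)))   # 16, 56, 1000
```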
Infinite composition can't possibly be well-defined in general; consider the case where f : A to B, g : B to A, and you want to define ...fgfg. Presumably one needs some additional structure on the domains and codomains involved. – Qiaochu Yuan Apr 29 2010 at 5:20
In the usual definition of categories, infinite compositions make no sense. So, in fact, infinite compositions do not follow from the usual definition. Now, «must the definition be made more flexible?» I would say that that is a rather unanswerable question... – Mariano Suárez-Alvarez Apr 29 2010 at 5:27
On the other hand, you could ask for ways to modify the usual definition of categories so as to be able to attach some sense to (some, at least) infinite compositions, as that is a question which has answers. One straightforward way is to consider categories whose End-sets are topological spaces (for example, categories enriched over topological spaces), where you can make sense of what the limit of a sequence of endomorphisms of a fixed object is. – Mariano Suárez-Alvarez Apr 29 2010 at 5:38
Essentially a follow-up on Qiaochu's comment: remember that by Mazur's swindle 0 = 1-1+1-... = 1, it is impossible to have all three of (a) countable compositions, (b) arbitrary associativity, and (c) inverses. – Theo Johnson-Freyd Apr 29 2010 at 16:16
While it's certainly interesting to ask when infinite composition is meaningful, I think it's important to point out that "deciding whether or not something is a category" doesn't depend on it at all! The definition of a category (as usually given, and in Awodey's book) asks for identities and binary composition, end of story; so if something's got those, satisfying the appropriate axioms, then it's a category. We can then go looking to see if there are some kind infinite composites too, but we don't need to worry about that in checking if something's a category! – Peter LeFanu Lumsdaine Apr 30 2010 at 1:10
## 2 Answers
This is probably not what the OP is looking for, but there is a notion of "infinite composition of arrows" which often appears for example in categorical homotopy theory:
If $f_0 : X_0 \to X_1$, $f_1 : X_1 \to X_2$, $\ldots$ are morphisms in a category $C$ and the colimit of the diagram $X_0 \to X_1 \to X_2 \to \cdots$ exists (call it $X$) then $X$ is equipped in particular with a canonical map $X_0 \to X$ which is called the transfinite composition of the maps $f_i$. Of course, technically this morphism of $C$ is specified only up to canonical isomorphism because $X$ may be replaced by a (uniquely isomorphic) different colimit of the $X_i$. More generally given an ordinal $\alpha$ which we may view as a category (poset) and a colimit-preserving functor $X_\cdot : \alpha \to C$ (so that for each limit ordinal $\beta \in \alpha$, we have `$X_\beta = \operatorname{colim}_{\gamma < \beta} X_\gamma$`), we may form the transfinite composition `$X_0 \to \operatorname{colim}_\alpha X_\alpha$`. One is often concerned with questions such as whether a given class of maps is closed under transfinite compositions (for example, the class of cofibrations in a model category has this property).
-
I apologize for the silly question, but in your first example of a colimit of a functor $\omega\to C$, is the transfinite composition just the 0-th arrow of the colimiting cone? Does this specialize in some sense to the usual composition if $\omega$ is replaced by some finite $n$? And if not, why is it called transfinite composition ? – unknown (google) Apr 29 2010 at 6:36
Yes. If $\alpha = n$ is a finite ordinal, then for the colimit we may take the last object $X_{n-1}$ and the structural map of the colimit cone is the composition of the maps $X_0 \to X_1 \to \cdots \to X_{n-1}$. – Reid Barton Apr 29 2010 at 6:49
Ah, of course. Thank you. – unknown (google) Apr 29 2010 at 6:50
I've never heard the term transfinite composition, but it seems to be another word for the perfectly ordinary direct/inverse limit, right? If so, it certainly shows up absolutely everywhere in algebra, not just in categorical homotopy theory... consider the p-adic numbers or taking stalks of sheaves. – Dan Petersen Apr 29 2010 at 8:04
I haven't learned about limits and colimits in categories yet. Perhaps that will shed some light on the subject. – Matthew Willis Apr 29 2010 at 11:39
(This was slightly too long to fit as a comment.)
As with Qiaochu's and Mariano's comments above, the answer as to whether infinite compositions are a priori meaningful in a category is simply no.
I don't know what it would mean to make this definition more precise -- it seems already, like any acceptable definition, to be completely precise. Moreover, in order to entertain a notion of infinite composition, it seems that one rather needs a less flexible -- i.e., more restrictive -- definition of composition of morphisms in a category. Since as Qiaochu points out, already infinite composition is not always meaningful in the category of sets and functions, such a definition would have to be very restrictive indeed.
It does not seem completely unreasonable to define some sort of category-like structure in which infinite composition is meaningful. Off the top of my head, it seems that some kind of "topology" on the class of objects of the category would be helpful, so that we could speak of a convergent sequence of objects. But it would be much more interesting and fruitful to do this in the context of some particular instance in which one would like to formalize infinite composition, e.g. for certain elements of the symmetric group on an infinite set.
-
The example I gave is problematic in any category; the infinite composition ...fgfg doesn't have a well-defined codomain (maybe I should say target). – Qiaochu Yuan Apr 29 2010 at 5:51
Presumably, that example wouldn't be "convergent" in this "topology". – Steve D Apr 29 2010 at 6:02
@Qiaochu: It's okay -- or rather not problematic for the reason you give -- in any category with a single object (i.e., a monoid). Seriously, that's a reasonable special case. But I certainly do take your point: we need to either very much restrict the categories in question, or cook up some extra structure that keeps track of convergence of objects (or do something else equally drastic). – Pete L. Clark Apr 29 2010 at 6:04
@Qiaochu: this problem of the target being not always well defined could perhaps be solved by requiring the infinite set of arrows one wants to "compose" to be linearly ordered (w.r.t. "source precedes target" order, let's say) and having a minimum (the source of the "infinite composite") and a maximum (the target of the "infinite composite"). – Qfwfq Apr 29 2010 at 6:10
@Pete: This makes the question even more striking in my view. For example, let's say we have the monoid {0,1} with operation xor and try to count its arrows. Should it have two arrows corresponding to the two elements of the monoid? What about the infinite composition ..0101.. (or the source-target-friendly 1..0101..0), which is neither equal to 0 nor to 1? I think this is the essence of my original question. – Matthew Willis Apr 29 2010 at 11:36
|
http://unapologetic.wordpress.com/2008/12/15/the-category-of-representations-is-abelian/?like=1&source=post_flair&_wpnonce=f636fc8316
|
# The Unapologetic Mathematician
## The Category of Representations is Abelian
We’ve been considering the category of representations of an algebra $A$, and we’re just about done showing that $\mathbf{Rep}_\mathbb{F}(A)$ is abelian.
First of all, the intertwiners between any two representations form a vector space, which is really an abelian group plus stuff. Since the composition of intertwiners is bilinear, this makes $\mathbf{Rep}_\mathbb{F}(A)$ into an $\mathbf{Ab}$-category. Secondly, we can take direct sums of representations, which is a categorical biproduct. Thirdly, every intertwiner has a kernel and a cokernel.
The only thing we’re missing is that every monomorphism and every epimorphism be normal. That is, every monomorphism should actually be the kernel of some intertwiner, and every epimorphism should actually be the cokernel of some intertwiner. So, given representations $\rho:A\rightarrow\mathrm{End}(V)$ and $\sigma:A\rightarrow\mathrm{End}(W)$, let’s consider a monomorphic intertwiner $f:\rho\rightarrow\sigma$.
As for linear maps, it’s straightforward to show that $f$ is monomorphic if and only if its kernel is trivial. Specifically, we can consider the inclusion $\iota:\mathrm{Ker}(f)\rightarrow V$ and the zero map $0:\mathrm{Ker}(f)\rightarrow V$. It’s easy to see that $f\circ\iota=f\circ0$, and so the left-cancellation property shows that $\iota=0$, which is only possible if $\mathrm{Ker}(f)=\mathbf{0}$. So a monomorphism has a trivial kernel. Thus the underlying linear map $f$ is an isomorphism of $V$ onto the image $\mathrm{Im}(f)\subseteq W$. Then this subrepresentation is exactly the kernel of the quotient map $W\rightarrow W/\mathrm{Im}(f)$. And so the monomorphism $f$ is the kernel of some map. The proof that any epimorphism is normal is dual.
And so we have established that the category of representations of the algebra $A$ is abelian. This allows us to bring in all the machinery of homological algebra, if we should so choose. In particular, we can talk about exact sequences, which can be useful from time to time.
Posted by John Armstrong | Algebra, Representation Theory
## 1 Comment »
1. [...] Short Exact Sequences of Representations Split? We’ve seen that the category of representations is abelian, so we have all we need to talk about exact sequences. And we know that some of the most important [...]
Pingback by | December 17, 2008 | Reply
|
http://math.stackexchange.com/questions/144535/subgroup-structure-of-s-4
|
# subgroup structure of $S_4$
In the list of Young subgroups of $S_4,$ we find $\langle(12)\rangle, \langle(13)\rangle, \langle(14)\rangle, \langle(23)\rangle, \langle(24)\rangle, \langle(34)\rangle,$ but we don't find $\langle(12)(34)\rangle, \langle(13)(24)\rangle,\langle(14)(23)\rangle,$ while they are all isomorphic to $S_2.$ I'm confused.
-
What is a young subgroup? – Chris Eagle May 13 '12 at 10:10
## 1 Answer
A Young subgroup is the direct product of the symmetric groups on the components of the partition. While all these groups are abstractly isomorphic to $S_2$, only the first batch you list is actually $S_2$ on a two-element subset of $\{1,2,3,4\}$, whereas e.g. in the first example of the second batch the group would also have to include $(12)$ and $(34)$ separately in order to be the Young subgroup for the partition $\{1,2,3,4\}=\{1,2\}\cup\{3,4\}$. You can't write down a partition such that $\langle(12)(34)\rangle$ contains all combinations of all permutations on all subsets forming the partition.
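The distinction is easy to check with sympy (an illustrative sketch; permutations below act on $\{0,1,2,3\}$ in place of $\{1,2,3,4\}$):

```
# Young subgroup for the partition {1,2} u {3,4} versus the group <(12)(34)>.
from sympy.combinatorics import Permutation, PermutationGroup

young = PermutationGroup(Permutation([[0, 1]], size=4),
                         Permutation([[2, 3]], size=4))
print(young.order())                 # 4: contains (12), (34) and (12)(34)

double = PermutationGroup(Permutation([[0, 1], [2, 3]], size=4))
print(double.order())                # 2: abstractly S_2, but not a Young subgroup
print(double.is_subgroup(young))     # True: a proper subgroup of the Young subgroup
```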
-
|
http://physics.stackexchange.com/questions/48025/how-is-quantum-mechanics-compatible-with-the-speed-of-light-limit?answertab=oldest
|
# How is quantum mechanics compatible with the speed of light limit?
Consider a free electron in space. Let us suppose we measure its position to be at point A with a high degree of accuracy at time 0. If I recall my QM correctly, as time passes the wave function spreads out, and there is a small but finite chance of finding it pretty much anywhere in the universe. Suppose it's measured one second later by a different observer more than one light second away and, although extremely unlikely, this observer discovers that electron. I.e. the electron appears to have traversed the intervening distance faster than light speed. What's going on here?
I can think of several, not necessarily contradictory, possibilities:
1. I'm misremembering how wave functions work, and in particular the wave function has zero (not just very small) amplitude beyond the light speed cone.
2. Since we can't control this travel, no information is transmitted and therefore special relativity is preserved (similar to how non-local correlations from EPR type experiments don't transmit information)
3. Although the difference between positions is greater than could have been traversed by the electron traveling at c, had we measured the momentum instead, we would have always found it to be less than $m_e c$ and it's really the instantaneous momentum that special relativity restricts; not the distance divided by time.
4. My question is ill-posed, and somehow meaningless.
Would anyone care to explain how this issue is resolved?
-
This is one reason why we need quantum field theory. – leongz Dec 31 '12 at 19:58
Welcome Elliotte, good question. I don't know the answer to it, I hope someone with better knowledge in QM will be able to help you. I have a small correction for you about the momentum. In special relativity, the momentum is $p=\gamma m v$, where m is the rest mass, $\gamma = \frac{1}{\sqrt{1-(\frac{v}{c})^2}}$, and v is the velocity. As v tends to c, $\gamma$ tends to infinity, so the momentum can actually be much larger than $mc$. – Andrey B Dec 31 '12 at 20:07
## 1 Answer
Excellent question. You are correct about wavepacket spreading, and in fact you do get superluminal propagation in non-relativistic QM - which is rubbish. You need a relativistic theory.
You should read the first part of Sidney Coleman's lecture notes on quantum field theory where he discusses this exact problem: http://arxiv.org/abs/1110.5013
The short answer is that you need antiparticles. There is no way to tell the difference between an electron propagating from A to B, with A and B spacelike separated, and a positron propagating from B to A. When you add in the amplitude for the latter process the effects of superluminal transmission cancel out.
The way to guarantee that it all works properly is to go to a relativistic quantum field theory. These theories are explicitly constructed so that all observables at spacelike separation commute with each other, so no measurement at A could affect things at B if A and B are spacelike. This causality condition severely constrains the type of objects that can appear in the theory. It is the reason why every particle needs an antiparticle with the same mass, spin and opposite charge, and is partially responsible for the spin-statistics theorem (integer spin particles are bosons and half-integer spin particles are fermions) and the CPT theorem (the combined operation of charge reversal, mirror reflection and time reversal is an exact symmetry of nature).
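To put a number on the non-relativistic pathology, here is a small sketch (arbitrary units with $\hbar=m=1$ and a made-up value $c=10$; the spreading formula is the standard free Gaussian result): a sharply localized packet evolved with the free Schrödinger equation puts a clearly nonzero probability outside the light cone.

```
# Free non-relativistic spreading of a Gaussian packet vs. the light cone |x| <= c*t.
import numpy as np

hbar = m = 1.0
c = 10.0
sigma0 = 0.05                                    # initial position uncertainty
t = 1.0
x = np.linspace(-60, 60, 6001)
dx = x[1] - x[0]

sigma_t = sigma0 * np.sqrt(1 + (hbar * t / (2 * m * sigma0**2))**2)
prob = np.exp(-x**2 / (2 * sigma_t**2))
prob /= prob.sum() * dx                          # normalized |psi(x,t)|^2

inside = np.abs(x) <= c * t
print("probability outside the light cone:", 1.0 - prob[inside].sum() * dx)
# roughly 0.3 with these numbers -- obviously unacceptable in a causal theory
```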
-
So, is it correct to say that if one releases an electron at r=0 at t=0, and waits, then the probability of measuring the electron outside the light cone will be zero, but this is due to the field of a positron which cancels out the propagation of electron outside the light cone? Can one then measure a positron anywhere inside the light cone? – Alexey Bobrick Jan 1 at 22:30
Another comment is that one does not need relativistic quantum field theory for this problem. Dirac theory describes a propagating particle-electron (non field) well enough. – Alexey Bobrick Jan 1 at 22:31
Thanks for the question, Elliotte! In my QFT class, we briefly touched on how the antiparticle field cancels out the superluminal effects of the particle field. But what I don't understand is that the particle still can travel faster than the speed of light. Is there no way one can observe only that? I'm sorry if this is a silly question, I've taken just one semester of QFT..Thanks! – user34801 Jan 2 at 7:19
Good questions, and not at all silly! @Alexey Bobrick: You are correct, there is no amplitude to measure an electron outside the light cone. There is also no amplitude to measure a positron inside the lightcone (if you start with an electron state rather than a positron state!). – Michael Brown Jan 2 at 10:59
@user34801: Your question and Alexey's other question are answered by the same discussion: The "electron field" $\psi$ really is the sum of two terms: a term that annihilates an electron (the convention is backwards - blame Heisenberg) and a term that creates a positron. The conjugate field $\bar{\psi}$ does the reverse. (The opposite action on electron vs. positron states makes it an operator with definite electric charge.) Any operator that acts on electron or positron states must be built up out of these combinations to preserve causality. This is the restriction I mentioned before. – Michael Brown Jan 2 at 11:00
|
http://physics.aps.org/articles/large_image/f1/10.1103/Physics.2.22
|
Illustration: Courtesy of V. V. Moshchalkov
Figure 1: Schematic of the spatial distribution of the superconducting order parameter $|ψ(x)|$ and the field profile $B(x)$ for two neighboring fluxons in (top) type-I, (middle) type-II, and (bottom) type-1.5 superconductors. The bottom sketch in each panel shows $|ψ(x)|$ in a color plot with darker regions indicating the smaller order parameter, and the stray field emanating from the sample surface.
|
http://en.wikipedia.org/wiki/Larmor_frequency
|
# Larmor precession
(Redirected from Larmor frequency)
In physics, Larmor precession (named after Joseph Larmor) is the precession of the magnetic moments of electrons, atomic nuclei, and atoms about an external magnetic field. The magnetic field exerts a torque on the magnetic moment,
$\vec{\Gamma} = \vec{\mu}\times\vec{B}= \gamma\vec{J}\times\vec{B}$
where $\vec{\Gamma}$ is the torque, $\vec{\mu}$ is the magnetic dipole moment, $\vec{J}$ is the angular momentum vector, $\vec{B}$ is the external magnetic field, $\times$ symbolizes the cross product, and $\ \gamma$ is the gyromagnetic ratio which gives the proportionality constant between the magnetic moment and the angular momentum.
## Larmor frequency
The angular momentum vector $\vec{J}$ precesses about the external field axis with an angular frequency known as the Larmor frequency,
$\omega = -\gamma B$
where $\omega$ is the angular frequency,[1] $\gamma=\frac{-e g}{2m}$ is the gyromagnetic ratio, and $B$ is the magnitude of the magnetic field[2] and $g$ is the g-factor (normally 1, except in quantum physics).
Simplified, this becomes:
$\omega = \frac{egB}{2m}$
where $\omega$ is the Larmor frequency, m is mass, e is charge, and B is applied field. For a given nucleus, the g-factor includes the effects of the spin of the nucleons as well as their orbital angular momentum and the coupling between the two. Because the nucleus is so complicated, g factors are very difficult to calculate, but they have been measured to high precision for most nuclei. Each nuclear isotope has a unique Larmor frequency for NMR spectroscopy, which is tabulated here.
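As a quick numeric example (the gyromagnetic ratios below are rounded standard values, not taken from the article), the precession frequency $f = \omega/2\pi = \gamma B/2\pi$ in a 1 T field:

```
# Larmor precession frequencies f = gamma * B / (2*pi) in a 1 tesla field.
import math

B = 1.0                          # tesla
gammas = {
    "proton":   2.675e8,         # rad s^-1 T^-1 (rounded)
    "electron": 1.761e11,        # rad s^-1 T^-1 (rounded)
}
for name, gamma in gammas.items():
    print(name, gamma * B / (2 * math.pi), "Hz")   # about 42.6 MHz and 28 GHz
```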
## Including Thomas precession
The above equation is the one that is used in most applications. However, a full treatment must include the effects of Thomas precession, yielding the equation (in CGS units):
$\omega_s = \frac{geB}{2mc} + (1-\gamma)\frac{eB}{mc\gamma}$
where $\gamma$ is the relativistic gamma factor (not to be confused with the gyromagnetic ratio above). Notably, for the electron g is very close to 2 (2.002..), so if one sets g=2, one arrives at
$\omega_{s(g=2)} = \frac{eB}{mc\gamma}$
## Bargmann–Michel–Telegdi equation
The spin precession of an electron in an external electromagnetic field is described by the Bargmann–Michel–Telegdi (BMT) equation [3]
$\frac{da^{\tau}}{ds} = \frac{e}{m} u^{\tau}u_{\sigma}F^{\sigma \lambda}a_{\lambda} + 2\mu (F^{\tau \lambda} - u^{\tau} u_{\sigma} F^{\sigma \lambda})a_{\lambda},$
where $a^{\tau}$, $e$, $m$, and $\mu$ are polarization four-vector, charge, mass, and magnetic moment, $u^{\tau}$ is four-velocity of electron, $a^{\tau}a_{\tau} = -u^{\tau}u_{\tau} = -1$, $u^{\tau} a_{\tau}=0$, and $F^{\tau \sigma}$ is electromagnetic field-strength tensor. Using equations of motion,
$m\frac{du^{\tau}}{ds} = e F^{\tau \sigma}u_{\sigma},$
one can rewrite the first term in the right side of the BMT equation as $(- u^{\tau}w^{\lambda} + u^{\lambda}w^{\tau})a_{\lambda}$, where $w^{\tau} = du^{\tau}/ds$ is four-acceleration. This term describes Fermi–Walker transport and leads to Thomas precession. The second term is associated with Larmor precession.
When electromagnetic fields are uniform in space, or when gradient forces like $\nabla({\boldsymbol\mu}\cdot{\boldsymbol B})$ can be neglected, the particle's translational motion is described by
${du^\alpha\over d\tau}={e\over m}F^{\alpha\beta}u_\beta\;.$
The BMT equation is then written as [4]
${\;\,dS^\alpha\over d\tau}={e\over m}\bigg[{g\over2}F^{\alpha\beta}S_\beta+\left({g\over2}-1\right)u^\alpha\left(S_\lambda F^{\lambda\mu}U_\mu\right)\bigg]\;,$
The beam-optical version of the Thomas-BMT equation, from the Quantum Theory of Charged-Particle Beam Optics, is applicable in accelerator optics. [5] [6]
## Applications
A 1935 paper published by Lev Landau and Evgeny Lifshitz predicted the existence of ferromagnetic resonance of the Larmor precession, which was independently verified in experiments by J. H. E. Griffiths (UK) and E. K. Zavoiskij (USSR) in 1946.
Larmor precession is important in nuclear magnetic resonance, electron paramagnetic resonance and muon spin resonance. It is also important for the alignment of cosmic dust grains, which is a cause of the polarization of starlight.
To calculate the spin of a particle in a magnetic field, one must also take into account Thomas precession.
## Notes
1. Spin Dynamics, Malcolm H. Levitt, Wiley, 2001
2. Louis N. Hand and Janet D. Finch. (1998). Analytical mechanics. Cambridge, England: Cambridge University Press. p. 192. ISBN 978-0-521-57572-0.
3. V. Bargmann, L. Michel, and V. L. Telegdi, Precession of the Polarization of Particles Moving in a Homogeneous Electromagnetic Field, Phys. Rev. Lett. 2, 435 (1959).
4. Jackson, J. D., Classical Electrodynamics, 3rd edition, Wiley, 1999, p. 563.
5. M. Conte, R. Jagannathan, S. A. Khan and M. Pusterla, Beam optics of the Dirac particle with anomalous magnetic moment, Particle Accelerators, 56, 99-126 (1996); (Preprint: IMSc/96/03/07, INFN/AE-96/08).
6. Khan, S. A. (1997). Quantum Theory of Charged-Particle Beam Optics, Ph.D Thesis, University of Madras, Chennai, India. (complete thesis available from Dspace of IMSc Library, The Institute of Mathematical Sciences, where the doctoral research was done).
|
http://mathoverflow.net/questions/99186/is-this-graph-known
|
## Is this graph known?
I came across the graph $G= (V, E)$, where
$V = \{\, (i, j) \mid 1 \leq i, j \leq n \,\}$
$E = \{\, ((i, j), (k,l)) \mid i \ne l \text{ and } j \ne k \,\}$
Does this graph have a name? Is it well studied?
I would very much like to see some of their properties.
-
It's the complement of the Cartesian square of the complete graph on $n$ vertices. It does not seem to be a suitable class of graphs for a research project of any kind. – Chris Godsil Jun 9 at 18:59
@Chris: "It does not seem to be a suitable class of graphs for a research project of any kind": that's a bit pretentious... – André Henriques Jun 9 at 19:07
@hbm: Your question is not motivated... Without more background, I fear that nobody will be able to help you. It would help us if you could tell us what you are looking for? For example, how did you get interested in this particular graph? – André Henriques Jun 9 at 19:11
I am interested in finding the chromatic number of this graph. – hbm Jun 9 at 19:34
@André: As the question is stated I do not think it is a suitable question for this site. There are an awful lot of graphs, and we could pose the same question for many families. Asking about the chromatic number is more focussed, but still some reason for picking on this family should be offered - why should we care? – Chris Godsil Jun 9 at 21:45
## 1 Answer
I think you just have the complement of a line graph here.
Start with $K_n$, the complete directed graph on $n$ vertices (including self-edges). That is, the vertex set is $\lbrace 1,\ldots,n \rbrace$ and you have all possible directed edges $(i,j)$ for vertices $i$ and $j$ including when $i=j$.
The line graph $LK_n$ of $K_n$ is the undirected graph which has a vertex $v_{ij}$ for each edge $(i,j)$ in $K_n$. Since $K_n$ is complete, you get precisely the vertex set of the graph $G$ from the question. Now as for the edges: there is an edge in $LK_n$ between $v_{ij}$ and $v_{kl}$ if and only if $j=k$.
Your graph $G$ is the complement of $LK_n$; that is, it has the same vertex set but an edge between $v_{ij}$ and $v_{kl}$ if and only if there is no edge between these vertices in $LK_n$.
As for properties, there is a large amount of information about complete directed graphs, line graphs and graph complements a google search away. Perhaps if you provide context and motivation for how your graph arose and what properties you would be interested in, someone here will point you to appropriate literature.
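The identification is also easy to confirm by brute force for small $n$ (a throwaway check; both edge sets are built directly from their definitions, with loops excluded):

```
# Question's edge set vs. complement of the (undirected) line graph of the
# complete directed graph with self-loops, for small n.
from itertools import product

def edges_from_definition(n):
    V = list(product(range(1, n + 1), repeat=2))
    return {frozenset({u, v}) for u in V for v in V
            if u != v and u[0] != v[1] and u[1] != v[0]}

def edges_from_line_graph_complement(n):
    V = list(product(range(1, n + 1), repeat=2))
    line = {frozenset({u, v}) for u in V for v in V
            if u != v and (u[1] == v[0] or v[1] == u[0])}   # head-to-tail adjacency
    allpairs = {frozenset({u, v}) for u in V for v in V if u != v}
    return allpairs - line

for n in (2, 3, 4):
    print(n, edges_from_definition(n) == edges_from_line_graph_complement(n))  # True
```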
-
I've corrected the link. – Martin Brandenburg Jun 9 at 19:41
The graph has $n^2$ vertices, its the complement of the line graph of the complete bipartite graph $K_{n,n}$, not of the complete graph. – Chris Godsil Jun 9 at 21:49
As I said: complete DIRECTED graph plus self edges... But yes, the undirected bipartite graph works also. – Vidit Nanda Jun 10 at 2:15
|
http://mathhelpforum.com/number-theory/89665-p-adic-expansion.html
|
# Thread:
1. ## p-adic expansion
I just started learning about p-adic numbers so I am having trouble with a relatively easy question:
Compute the 2-adic expansion of 2/3 and then verify your answer.
Help would be appreciated. Thanks.
2. Originally Posted by curiousmuch
I just started learning about p-adic numbers so I am having trouble with a relatively easy question:
Compute the 2-adic expansion of 2/3 and then verify your answer.
Help would be appreciated. Thanks.
Hi curiousmuch.
Note that $1+2^2+\left(2^2\right)^2+\cdots$ converges to $\frac1{1-2^2}=-\frac13$ in the 2-adic norm.
Hence $\frac23=1-\frac13=2+2^2+2^4+\cdots$ so the 2-adic expansion of $\frac23$ is
$\frac23=\cdot01101010\ldots$
3. Wow that's really helpful, but I would really appreciate it if you could explain the last step. How did you generate the 2 adic decimal expansion from the infinite series. Thanks.
4. Originally Posted by curiousmuch
How did you generate the 2 adic decimal expansion from the infinite series. Thanks.
They are the coefficients of the powers of 2 in the 2-adic power series.
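A quick way to verify such an expansion (a small check, not part of the original thread) is to compute $2\cdot 3^{-1} \bmod 2^k$ and read off the binary digits:

```
# 2/3 in Z/2^k and its first k binary digits, which are the first k 2-adic digits.
k = 16
x = (2 * pow(3, -1, 2**k)) % 2**k
print([(x >> i) & 1 for i in range(k)])   # [0, 1, 1, 0, 1, 0, 1, 0, ...]
print((3 * x) % 2**k == 2)                # sanity check: 3*x = 2 (mod 2^k)
```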
|
http://mathoverflow.net/questions/52995/complex-structure-on-l2-mathbb-r-generalizing-the-hilbert-transform
|
Complex structure on $L^2(\mathbb R)$ generalizing the Hilbert transform.
The Hilbert transform on the real Hilbert space $L^2(\mathbb R)$ is the singular integral operator $$\mathcal H(f)(x) := \frac{1}{\pi} \int_{-\infty}^\infty \frac{1}{x-y} f(y) dy.$$
It satisfies $\mathcal H^2=-Id_{L^2(\mathbb R)}$, and in that sense, it is a complex structure on the Hilbert space $L^2(\mathbb R)$ of real-valued, square integrable functions on the real line.
I am wondering if there are other operators $\tilde {\mathcal H}:L^2(\mathbb R)\to L^2(\mathbb R)$ with similar properties.
Question: Does there exist a function $K:\mathbb R^2\to \mathbb R$ with the following properties:
• The function $K(x,y)$ looks like $\frac{1}{x-y}$ in a neighborhood of the diagonal $x=y$
(here, by "looks like", I mean for instance as "$K(x,y) = \frac{1}{x-y} +$ smooth function").
• The singular integral operator $$\tilde {\mathcal H}(f)(x) := \frac{1}{\pi} \int_{-\infty}^\infty K(x,y) f(y) dy.$$ satisfies $\tilde {\mathcal H}^2=-Id_{L^2(\mathbb R)}$, and thus defines a complex structure on $L^2(\mathbb R)$.
• The function $K$ goes to zero faster than $\frac{1}{x-y}$ along the antidiagonals.
Namely, it satisfies $$\forall x\in \mathbb R,\qquad\qquad \lim_{t\to \infty}\;\;\;\; t\cdot K(t,x-t) = 0.$$
Variant: In case it turns out difficult to produce an example of an operator $\tilde {\mathcal H}:L^2(\mathbb R)\to L^2(\mathbb R)$ as above, I would be happy to replace $L^2(\mathbb R)$ by $L^2(\mathbb R;\mathbb R^n)$, the Hilbert space of $\mathbb R^n$-valued $L^2$ functions on the real line.
In that case, I would be looking for an integral kernel $$K:\mathbb R^2\to \mathit{Mat}_{n\times n}(\mathbb R)$$ with all the properties listed above.
Right now, I actually believe that such an integral kernel does not exist, but this is purely a gut feeling...
If someone has any ideas about how to prove the non-existence of $\tilde {\mathcal H}:L^2(\mathbb R)\to L^2(\mathbb R)$ with the above properties, then I would be very interested to hear them.
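As a numerical sanity check of the relation $\mathcal H^2=-Id_{L^2(\mathbb R)}$ for the Hilbert transform itself (not a construction of the sought-after $\tilde{\mathcal H}$), one can realize $\mathcal H$ as the Fourier multiplier $-i\,\mathrm{sign}(\xi)$ on a periodic grid with an arbitrary, essentially mean-zero test function:

```
# Discretized Hilbert transform as the multiplier -i*sign(xi); applying it twice
# to a real, essentially mean-zero function returns minus the function.
import numpy as np

n = 2048
x = np.linspace(-30, 30, n, endpoint=False)
f = np.exp(-x**2) * np.sin(3 * x)

def hilbert(g):
    xi = np.fft.fftfreq(g.size)
    return np.real(np.fft.ifft(-1j * np.sign(xi) * np.fft.fft(g)))

print(np.max(np.abs(hilbert(hilbert(f)) + f)))   # essentially zero (float error only)
```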
-
Could you explain the third condition? E.g. when $K=1/(x-y)$ one gets $t \cdot K(t,x-t) = t/(2t-x)$... – Piero D'Ancona Apr 29 2011 at 21:53
@:Piero. That's right. And $lim_{t\to\infty}t/(2t-x)$ is not zero. But if you have something like $K=1/(x-y)^\alpha$ with $\alpha>1$, then you get $lim_{t\to\infty}t/(2t-x)^\alpha=0$. – André Henriques Apr 30 2011 at 2:23
Well, if K=K(x-y) then the operator is translation invariant, then it is a constant coefficient pseudodifferential operator with a symbol $a(\xi)$, and by your second condition $a(\xi)$ must take only the values $\pm i$. Thus you have jump singularities in the symbol which should always produce a decay of order $\sim t^{-1}$. I guess. – Piero D'Ancona Apr 30 2011 at 13:54
@Piero: I do not require translation invariance. But I agree with you that it looks like it's not possible to achieve my decay condition. – André Henriques Apr 30 2011 at 14:31
2 Answers
EDIT: This solution does not satisfy the third condition, which rules out the Hilbert transform itself. So, this is an answer to different question. I do not delete it in hope it may be useful for someone.
Let $\phi(x)$ be a smooth monotone function such that $x-\phi(x)$ has compact support. This is a diffeomorphism of the real line and the pullback `$\phi^*$` is a linear operator acting on $L^2(\mathbb R)$. The singular integral operator `$$(\phi^*)^{-1}{\mathcal H}\phi^*$$` has all the properties you need.
-
It is also possible to reason as follows. Too fast a decay could force the operator to be compact, if not on $L_2$ then on some suitably defined spaces. And this cannot be, since its square is minus the identity, and that operator is compact only on finite-dimensional spaces.
-
|
http://mathoverflow.net/questions/75150/sheaf-with-free-stalks/75161
|
Sheaf with free stalks
Say we are given a complex manifold $X$ and an $\mathcal{O}_X$-module $\mathcal{F}$. Assume that for any point $P\in X$ the stalk $\mathcal{F}_P$ is a free $(\mathcal{O}_X)_P$-module of finite rank. Does it imply that $\mathcal{F}$ is locally free? If not, what do you need to know additionally about $\mathcal{F}$ to make it true?
Note that if we were looking at the case of schemes then it would be wrong in general. Mathoverflow answer to a related question is here
Remark: As it was pointed out by Francesco Polizzi, this is true if $\mathcal{F}$ is coherent. What if we do not know it apriori?
-
This is exercise II.5.7 in Hartshorne. – J.C. Ottem Sep 11 2011 at 19:12
Hartshorne's exercise is about coherent sheaves. As the remark above says, we do not assume this apriori. – maxim Sep 11 2011 at 21:01
3 Answers
Just looking at stalks is not enough:
Suppose that $X$ is a nontrivial complex manifold. Let $i_x:x\to X$ denote the inclusion, and set $$\mathcal{F} =\bigoplus_{x\in X} i_{x*}\mathcal{O}_x$$ Notice that it is naturally an $\mathcal{O}_X$-module with $\mathcal{F}_x\cong \mathcal{O}_x$, and yet it is certainly not locally free.
Notes: Rather than editing, I'll keep the original form of my answer intact and add a few footnotes.
1. Of course, this $\mathcal{F}$ is not coherent.
2. (Re: UG's first comment.) I probably should have included the proof that $\mathcal{F}_x\cong \mathcal{O}_x$. Here it is. The left is the direct limit `$$\varinjlim\bigoplus_{y\in U} \mathcal{O}_y$$` as $U$ shrinks to $x$. There is a projection $p$ to $\mathcal{O}_x$ which is surjective since it has a section. Suppose that $f=\sum f_y$ lies in the kernel of $p$. Shrink $U$ to avoid the support of $f$ (which excludes $x$). Then we see that the class of $f$ in the direct limit must be zero. (There is a reason I took the sum and not the product.)
3. (Re: Laurent's comment.) By $i_{x*}\mathcal{O}_x$, I meant the skyscraper sheaf associated to $\mathcal{O}_x$.
-
The stalk of your sheaf $\mathcal F$ at $x$ is much larger than $\mathcal O_x$. Rather, it's all germs around $x$ of "functions" associating to a point $y$ an element of $\mathcal O_y$. – unknown (google) Sep 11 2011 at 22:07
Ah, yes, sorry. I was thinking direct product. – unknown (google) Sep 11 2011 at 23:11
No problem. I had my doubts about it too. – Donu Arapura Sep 11 2011 at 23:30
I guess the sheaf you have in mind in the direct sum is not $i_*\mathcal{O}_x$, but the skyscraper sheaf at $x$ with stalk the free $\mathcal{O}_x$-module of rank one. If by $i$ you mean the inclusion of the point $x$, as a morphism of ringed spaces, then $i_* \mathcal{O}_x$ is the same skyscraper sheaf (of abelian groups), but as an $\mathcal{O}$-module it is killed by the maximal ideal of $x$. – Laurent Moret-Bailly Sep 12 2011 at 5:31
Yes, I meant the skyscraper the sheaf. I'll fix it in a bit. – Donu Arapura Sep 12 2011 at 12:31
This is a small modification of Donu's answer.
Let $\mathcal F=(i_x)_\ast\mathcal O_x$ (the skyscraper sheaf of $\mathcal O_x$ over $x$) for some $x\in X$. Then $\mathcal F$ is locally free of rank $1$ at $x$, and is locally free of rank $0$ everywhere else. Clearly $\mathcal F$ is not locally free near $x$, since it doesn't even have locally constant rank.
-
The answer is yes, at least when $\mathcal{F}$ is a coherent sheaf.
This actually holds for any complex space. See [Grauert-Remmert, Coherent Analytic Sheaves, p. 90].
-
Thank you for the answer! Right, this is because the support of a coherent sheaf is closed. What if the sheaf is not coherent? – maxim Sep 11 2011 at 17:57
|
http://stats.stackexchange.com/questions/34805/confidence-intervals-when-using-bayes-theorem
|
# Confidence intervals when using Bayes' theorem
I'm computing some conditional probabilities, and associated 95% confidence intervals. For many of my cases, I have straightforward counts of `x` successes out of `n` trials (from a contingency table), so I can use a Binomial confidence interval, such as is provided by `binom.confint(x, n, method='exact')` in `R`.
In other cases though, I don't have such data, so I use Bayes' theorem to compute from information I do have. For example, given events $a$ and $b$:
$$P(a|b) = \frac{P(b|a) \cdot P(a)}{P(b)}$$
I can compute a 95% confidence interval around $P(b|a)$ using $\textrm{binom.confint}(\#(b\cap a), \#(a))$, and I compute the ratio $P(a)/P(b)$ as their frequency ratio $\#(a)/\#(b)$. Is it possible to derive a confidence interval around $P(a|b)$ using this information?
Thanks.
-
$a$ and $b$ are events. In my case, $a$ is a system failure (which is quite rare, so relatively hard to find "in the wild"), and $b$ is a pre-failure alarm, so I'm measuring the probability of failure given an alarm. – Ken Williams Aug 21 '12 at 18:56
The above comment was in response to someone who asked for more background on what $a$ and $b$ were, but seems to have deleted that comment. – Ken Williams Aug 21 '12 at 19:35
Well you can't just take the confidence interval for p(b|a) and scale it by p(a)/p(b) because of the uncertainty in the estimate of that ratio. If you can construct a 100(1-α)% confidence interval for p(a)/p(b) call it [A, B] then take the lower bound for a 100(1-α)% confidence interval for p(b|a) and multiply it by A and take the upper bound for p(b|a) and multiply it by B. That should give an interval that has at least a 100(1-α)$^2$% confidence level for p(a|b). – Michael Chernick Aug 21 '12 at 19:37
Could work... getting a confidence interval for $P(a)/P(b)$ isn't obvious to me though - do you feel like moving this into the "Answer" area? I promise at least one upvote. =) – Ken Williams Aug 21 '12 at 21:46
Don't you want a Bayesian credible interval instead? That is directly computable from the posterior distribution of $a$. – whuber♦ Aug 21 '12 at 22:44
## 1 Answer
Well, you can't just take the confidence interval for $p(b|a)$ and scale it by $p(a)/p(b)$ because of the uncertainty in the estimate of that ratio. If you can construct a $100(1-\alpha)\%$ confidence interval $[A, B]$ for $p(a)/p(b)$, then take the lower bound for a $100(1-\alpha)\%$ confidence interval for $p(b|a)$ and multiply it by $A$ and take the upper bound for $p(b|a)$ and multiply it by $B$. That should give an interval that has at least a $100(1-\alpha)^2\%$ confidence level for $p(a|b)$.
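A rough numerical version of this recipe might look as follows (the counts are made up, and the Clopper–Pearson interval for $p(b|a)$ together with a log-scale normal interval for the count ratio are illustrative choices, not prescribed by the answer):

```
# Conservative interval for p(a|b): exact binomial interval for p(b|a) times a
# delta-method interval for the ratio #(a)/#(b).
import numpy as np
from scipy.stats import beta, norm

alpha = 0.05
x, n_a = 8, 20        # hypothetical: #(b and a) out of #(a)
n_b, N = 50, 1000     # hypothetical: #(b) and total sample size

lo = beta.ppf(alpha / 2, x, n_a - x + 1) if x > 0 else 0.0        # Clopper-Pearson
hi = beta.ppf(1 - alpha / 2, x + 1, n_a - x) if x < n_a else 1.0

ratio = n_a / n_b
se = np.sqrt(1 / n_a - 1 / N + 1 / n_b - 1 / N)                   # log-scale s.e.
z = norm.ppf(1 - alpha / 2)
A, B = ratio * np.exp(-z * se), ratio * np.exp(z * se)

print("interval for p(a|b), coverage at least (1-alpha)^2:", lo * A, hi * B)
```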
-
That seems workable at least as a first stab. But I'm not aware of a method for deriving confidence intervals for the ratio of two Bernoulli probabilities $P(a)/P(b)$ given observed counts of $a$ and $b$ in a sample population. – Ken Williams Aug 21 '12 at 22:33
If I haven't erred, here's a function to do that ratio-of-Bernoullis interval estimation:
```
binrat.confint <- function(x, y, n=Inf, m=n, p=0.95) {
  # delta-method variance of log(x/y) for counts x, y out of samples of size n, m
  s2 <- 1/x - 1/n + 1/y - 1/m
  # point estimate and lower/upper confidence limits for the ratio x/y
  x/y * exp(c(-1:1) * qnorm((1+p)/2) * sqrt(s2))
}
```
– Ken Williams Aug 24 '12 at 19:05
|
http://math.stackexchange.com/questions/61779/problem-in-understanding-p-implies-q
|
# Problem in understanding p implies q
I am trying to understand what “$p$ implies $q$” means. I read that $p$ is a sufficient condition for $q$, and $q$ is a necessary condition for $p$. Further from Wikipedia,
A necessary condition of a statement must be satisfied for the statement to be true. Formally, a statement $P$ is a necessary condition of a statement $Q$ if $Q$ implies $P,\quad (Q \Rightarrow P)$.
A sufficient condition is one that, if satisfied, assures the statement's truth. Formally, a statement $P$ is a sufficient condition of a statement $Q$ if $P$ implies $Q,\quad (P \Rightarrow Q)$.
Now what I am stuck with is that if $P$ is not satisfied will the condition still always be true?
-
## 1 Answer
This is a simple matter answered by the truth table of $\Rightarrow$:
$$\begin{array}{ c | c || c | } P & Q & P\Rightarrow Q \\ \hline \text T & \text T & \text T \\ \text T & \text F & \text F \\ \text F & \text T & \text T \\ \text F & \text F & \text T \end{array}$$
This shows that when $P$ is false, the implication is true. Note that this is the definition of the table, there is no need to prove it. This is how $\Rightarrow$ is defined to work.
As an example, here is one:
$$\textbf{If it is raining then there are clouds in the sky}$$
In this case $P=$It is raining, and $Q=$There are clouds in the sky. Note that $P$ is sufficient to conclude $Q$, and $Q$ is necessary for $P$. There is no rain without clouds, and if there are no clouds then there cannot be any rain.
However, note that $Q$ is not necessary for $P$. There could be light clouds without any rain, and there could be clouds of snow and blizzard (which is technically not rain).
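Since the conditional is just $\neg P \lor Q$, the whole table above can be reproduced in a couple of lines (a trivial illustration):

```
# P => Q is (not P) or Q: false only when P is true and Q is false.
for P in (True, False):
    for Q in (True, False):
        print(P, Q, (not P) or Q)
```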
-
Thanks for your answer. In your answer the statement P is false, but as I read that p is not neccessary but sufficient for q then shouldn't it be true as q is true? – Fahad Uddin Sep 4 '11 at 9:41
@fahad: In my answer, $Q$ is necessary for $P$, and $P$ is sufficient for $Q$. – Asaf Karagila Sep 4 '11 at 9:46
@fahad: No; that $p$ is not necessary for $q$ means that $p$ need not be true for $q$ to be true. It is a sufficient but not necessary condition for you to get rich that you win the lottery. It may well be true that you get rich ($q$) even though you don't win the lottery ($\neg p$). – joriki Sep 4 '11 at 9:54
Note that there exist 15 other possibilities for the last column where each entry belongs to {T, F}. None of the other ones fit with the meaning of implication as well as this one. – Doug Spoonwood Sep 4 '11 at 15:13
|
http://mathoverflow.net/questions/100085?sort=oldest
|
## RS to RSK correspondence
The RS correspondence is a correspondence which associates to each permutation a pair of standard Young tableaux of the same shape.
The RSK correspondence associates to each integer matrix (with non-negative entries) a pair of semistandard Young tableaux of the same shape.
Given an integer matrix, replace it by a permutation matrix whose rows and columns, when partitioned according to the row and column sums of the original matrix, have block sums equal to the entries of the original matrix. There is a unique such permutation matrix with the property that there are no descents within any of the blocks (each block is a partial permutation).
For example, if
$$A=\begin{pmatrix} 2 & 1\\ 1 & 0\end{pmatrix}$$
then the corresponding permutation matrix is
$$\tilde A =\begin{pmatrix} 1 & 0 & 0 & 0\\ 0 & 1 & 0 & 0\\ 0 & 0 & 0 & 1\\ 0 & 0 & 1 & 0\end{pmatrix}$$
Here the row and column partitions are both $(3,1)$.
It seems to be well-known (for example, it is implicit in Fulton's matrix ball construction) that to obtain the SSYT's for $A$, one may substitute for each entry in the SYT's for $\tilde A$ the integers corresponding to the blocks to which the rows and columns corresponding to these entries belong.
In the above example, the SYT's associated to $\tilde A$ are
$P = Q = \begin{array}{ccc} 1 & 2 & 3 \\ 4 & & \end{array}$
into which we would substitute $1$ for $1,2,3$ and $2$ for $4$ to get the SSYT's for $A$:
$P = Q = \begin{array}{ccc} 1 & 1 & 1 \\ 2 & & \end{array}$.
Is there a nice reference for this result?
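Not an answer to the reference request, but here is a short row-insertion implementation of the standard RSK algorithm (a quick sketch, only meant to reproduce the tableaux claimed above from the biword of $A$):

```
# Row-insertion RSK on the biword of A = [[2,1],[1,0]], whose columns in
# lexicographic order are (1,1), (1,1), (1,2), (2,1).
from bisect import bisect_right

def rsk(biword):
    P, Q = [], []
    for top, bottom in biword:
        row = 0
        while True:
            if row == len(P):                 # start a new row
                P.append([bottom]); Q.append([top])
                break
            r = P[row]
            j = bisect_right(r, bottom)       # leftmost entry strictly larger
            if j == len(r):                   # nothing to bump: append
                r.append(bottom); Q[row].append(top)
                break
            r[j], bottom = bottom, r[j]       # bump and continue one row down
            row += 1
    return P, Q

print(rsk([(1, 1), (1, 1), (1, 2), (2, 1)]))  # ([[1, 1, 1], [2]], [[1, 1, 1], [2]])
```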
-
I think the growth diagram construction is clearer than the matrix ball construction. However I don't know of an account of growth diagrams for the RSK correspondence for integer matrices. I would be interested to hear of one. – Bruce Westbury Jun 20 at 8:35
Dear Bruce, this was essentially done by S. Fomin himself in "Schur operators and Knuth correspondences". – Philippe Nadeau Jun 20 at 8:54
Dear Philippe, thanks. I could do with some help in understanding that paper in terms of growth diagrams. – Bruce Westbury Jun 20 at 10:32
Stanley's book Enumerative Combinatorics, Volume 2, discusses Fomin's viewpoint and is written with students in mind. Look at Section 7.13, Symmetry of the RSK Algorithm. – Patricia Hersh Jun 20 at 10:59
Thanks Patricia. What I am missing is growth diagrams for the two Knuth correspondences. It is clear what to do for $(0,1)$-matrices. It is not clear to me what to do for integer matrices. I realise this is not an issue for experts. The point I am not clear on is how one arrives at growth diagrams starting from the paper by Fomin that Philippe refers to. – Bruce Westbury Jun 20 at 12:15
## 3 Answers
I would look at chapter 7 in Enumerative Combinatorics, Volume 2, by Richard Stanley. A second place that can also be helpful for getting a good understanding of RS is Bruce Sagan's book called The Symmetric Group.
-
It is Lemma 7.11.6 in Stanley. – Amritanshu Prasad Jun 20 at 8:04
Warning: I know nothing about combinatorial problems, so what I say now might be completely wrong. However: Rota, in his talk at the Birkhoff memorial conference (The many lives of lattice theory, easily available online), has, in the section about semiprimary lattices, something which seems strongly related.
Edit: citing from the article: each of the two chains is associated with a standard Young tableau, hence we obtain the statement and proof of the Schensted algorithm, which precisely associates a pair of standard Young tableaux to every permutation.
-
I found an article with that title in the Notices. It did not mention the RSK correspondence. – Amritanshu Prasad Jun 20 at 8:07
[Responding particularly to Bruce...] You may want to take a look at my thesis, which was the first place that the Knuth versions of RSK were "Fominized". There are lots of examples, which others have told me they've found helpful in understanding this material. (I'm sure Fomin already understood that this could be done, but it doesn't appear in his papers before 1991.) I put a scan on the web at:
http://www.math.uconn.edu/~troby/research.html
Scroll down to:
Applications and Extensions of Fomin's Generalization of the Robinson-Schensted Correspondence to Differential Posets, Ph.D. Thesis, Massachusetts Institute of Technology, 1991.
The key idea is just that RS commutes with "standardization" of words or SSYT, where one adds subscripts from Left to Right in the word and corresponding tableaux. See EC2, Lemma 7.11.6.
Thanks to Tricia Hersh for mentioning this thread at the SIAM DM conference. This is my first posting to MathOverflow, so I'm not allowed to comment.
Hope this helps!
Tom
-
Thanks Tom; for the reply, and also for making your thesis available. – Amritanshu Prasad Jun 21 at 5:34
http://math.stackexchange.com/questions/139388/past-coin-tosses-affect-the-latest-one-if-you-know-about-them/139444
# Past coin tosses affect the latest one if you know about them?
Suppose Mark and Paul are sitting at a table, and Mark starts tossing an unbiased and fair coin. He tosses it 99 times and gets 99 consecutive tails. At this point Mark asks Paul: "let's bet \$100 on the next toss, do you want to pick tails or heads?"
Question: In order to maximize his expected return, should Paul pick tails, heads, or does it not matter?
-
– Qiaochu Yuan May 2 '12 at 15:10
## 4 Answers
If, as you said, it's an unbiased and fair coin, then it has an equal chance of coming up heads as tails on the 100th flip, by definition, because that's what an unbiased fair coin is: it is a coin that has an equal chance to come up heads or tails on any flip.
On the other hand, if you see a coin come up tails 99 times in a row you had best revisit your assumption that it is an unbiased fair coin. It would be foolish to bet on it coming up heads after that.
Addendum: Here's the reference you asked for: Gambler's fallacy
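A quick simulation makes the same point empirically. A run of 99 tails is far too rare to sample directly, so the sketch below (the run length and number of flips are arbitrary choices of mine) conditions on runs of 10 tails instead and looks at the very next flip of a genuinely fair coin:
````
import random

random.seed(0)
RUN = 10                      # condition on 10 consecutive tails (99 is unreachable)
streak, next_flips = 0, []
for _ in range(5_000_000):
    tails = random.random() < 0.5
    if streak >= RUN:         # the previous RUN flips were all tails
        next_flips.append(tails)
    streak = streak + 1 if tails else 0
print(len(next_flips), sum(next_flips) / len(next_flips))   # frequency ≈ 0.5
````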
-
+1: Very well put. – Nate Eldredge May 1 '12 at 13:41
@DanielS No, it is not. The coin has no memory. It does not look back on its life and decide to come up heads because it has come up tails many times in a row. It cannot think. The expected result over 100 flips is 50 heads and 50 tails only if you have no other information about it. If you know that the first 99 flips were tails, then the expected result after 100 flips is $99\frac12$ tails and $\frac12$ head. – MJD May 1 '12 at 13:43
@DanielS Don't update the question. Someone has answered your question. Ask a new question. (It makes it hard to keep questions and answers in sync if you keep changing your question.) – Thomas Andrews May 1 '12 at 13:46
– MJD May 1 '12 at 13:56
@DanielS: in response to your first question, the concept you need to understand is conditional probability (en.wikipedia.org/wiki/Conditional_probability). You're not asking for the probability that the coin comes up tails $100$ times; you're asking for the probability that the coin comes up tails $100$ times conditional on it already having come up $99$ times, which is $\frac{1}{2}$. – Qiaochu Yuan May 2 '12 at 15:15
The restriction is that the coin is fair. There does not seem to be a restriction that the flipping is fair.
With practice I used to be able to get the same result from coin flips - mostly heads or mostly tails. One needs to start with the coin in the same head-up or tail-up position, but it is not difficult to make the flipping unfair.
-
– Qiaochu Yuan May 2 '12 at 15:17
To elaborate on Mark's answer. Let's say you don't necessarily believe that the coin is unbiased, but instead believe it has probability $p$ of coming up heads.
Your degree of belief in the value of $p$ can be specified by providing the parameters $a$ and $b$ of a beta distribution. Intuitively, they correspond to the number of times you've seen heads and tails come up already. A typical 'default belief' is given by $a=b=1$, which essentially expresses maximum ignorance - you believe that the true value of $p$ is distributed uniformly between 0 and 1.
The benefit of doing this is that there is a simple method of updating your belief about the distribution of $p$ whenever you see a new coin toss - you simply increment $a$ by 1 whenever you see a head, and increment $b$ by one whenever you see a tail.
The mean of the distribution is simply $a/(a+b)$ and its variance is $ab/[(a+b)^2(a+b+1)]$. If you see 99 tails in a row, then your new values of $a$ and $b$ are $a=1$ and $b=100$, giving an expected value for $p$ of
$$E(p) \approx 0.01$$
$$\mathrm{StDev}(p) \approx 0.01$$
So with a very high degree of certainty, you believe that the true value of $p$ is 0.01, and certainly isn't much more than 0.04. You would therefore be very naive to bet on heads coming up on the next toss (unless you were given very favorable odds!)
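For concreteness, these numbers follow directly from the formulas above; a short Python check (starting from the $a=b=1$ prior and the 99 observed tails) reproduces them:
````
a, b = 1 + 0, 1 + 99                       # Beta(1,1) prior + 0 heads, 99 tails
mean = a / (a + b)                         # posterior mean a/(a+b)
std = (a * b / ((a + b) ** 2 * (a + b + 1))) ** 0.5
print(round(mean, 4), round(std, 4))       # 0.0099 and 0.0098, both about 0.01
````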
-
Due to the law of large numbers, the proportion of heads to tails seen with an unbiased coin will converge to $\frac12$ as the number of flips tends towards infinity. If you were to make the bet before 99 coins have been tossed, then it would be apparent that the probability of seeing all tails is $(\frac12)^{99}$. If you have already achieved this then the probability of seeing a subsequent tail is $\left(\frac12\right)^{100}-\left(\frac12\right)^{99}$; it is not about the coin having a 'memory' but rather the probability of observing 99 events involving the coin with identical values. It would be the same if you decided to flip the coin 100,000 times and chose to observe it 0.1% of the time. Even if the average proportion of tails to heads of the 100,000 were 0.5, the probability of observing 99 consecutive tails would still be $(\frac12)^{100}-(\frac12)^{99}$. The same would also be true if you selected a new coin every time.
-
http://psychology.wikia.com/wiki/Variance?interlang=all
Variance
In probability theory and statistics, the variance of a random variable is a measure of its statistical dispersion, indicating how far from the expected value its values typically are. The variance of a real-valued random variable is its second central moment, and it also happens to be its second cumulant. The variance of a random variable is the square of its standard deviation.
Definition
If μ = E(X) is the expected value (mean) of the random variable X, then the variance is
$\operatorname{var}(X) = \operatorname{E}( ( X - \mu ) ^ 2 ).$
That is, it is the expected value of the square of the deviation of X from its own mean. In plain language, it can be expressed as "The average of the square of the distance of each data point from the mean". It is thus the mean squared deviation. The variance of random variable X is typically designated as $\operatorname{var}(X)$, $\sigma_X^2$, or simply $\sigma^2$.
Note that the above definition can be used for both discrete and continuous random variables.
Many distributions, such as the Cauchy distribution, do not have a variance because the relevant integral diverges. In particular, if a distribution does not have expected value, it does not have variance either. The opposite is not true: there are distributions for which expected value exists, but variance does not.
Properties
If the variance is defined, we can conclude that it is never negative because the squares are positive or zero. The unit of variance is the square of the unit of observation. For example, the variance of a set of heights measured in centimeters will be given in square centimeters. This fact is inconvenient and has motivated many statisticians to instead use the square root of the variance, known as the standard deviation, as a summary of dispersion.
It can be proven easily from the definition that the variance does not depend on the mean value $\mu$. That is, if the variable is "displaced" by an amount b by taking X+b, the variance of the resulting random variable is left untouched. By contrast, if the variable is multiplied by a scaling factor a, the variance is multiplied by $a^2$. More formally, if a and b are real constants and X is a random variable whose variance is defined,
$\operatorname{var}(aX+b)=a^2\operatorname{var}(X)$
Another formula for the variance that follows in a straightforward manner from the linearity of expected values and the above definition is:
$\operatorname{var}(X)= \operatorname{E}(X^2 - 2\,X\,\operatorname{E}(X) + (\operatorname{E}(X))^2 ) = \operatorname{E}(X^2) - 2(\operatorname{E}(X))^2 + (\operatorname{E}(X))^2 = \operatorname{E}(X^2) - (\operatorname{E}(X))^2.$
This is often used to calculate the variance in practice.
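For example, a few lines of Python (on an arbitrary small data set) confirm that the defining formula and this computational formula give the same value:
````
xs = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]
n = len(xs)
mu = sum(xs) / n
var_def = sum((x - mu) ** 2 for x in xs) / n       # E((X - mu)^2)
var_alt = sum(x * x for x in xs) / n - mu ** 2     # E(X^2) - (E(X))^2
print(mu, var_def, var_alt)                        # 5.0 4.0 4.0
````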
One reason for the use of the variance in preference to other measures of dispersion is that the variance of the sum (or the difference) of independent random variables is the sum of their variances. A weaker condition than independence, called uncorrelatedness also suffices. In general,
$\operatorname{var}(aX+bY) =a^2 \operatorname{var}(X) + b^2 \operatorname{var}(Y) + 2ab \operatorname{cov}(X, Y).$
Here $\operatorname{cov}$ is the covariance, which is zero for independent random variables (if it exists).
Approximating the variance of a function
The Delta method uses second-order Taylor expansions to approximate the variance of a function of one or more random variables. For example, the approximate variance of a function of one variable is given by
$\operatorname{var}\left[f(X)\right]\approx \left(f'(\operatorname{E}\left[X\right])\right)^2\operatorname{var}\left[X\right]$
provided that $f(\cdot)$ is twice differentiable and that the mean and variance of $X$ are finite.
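As a rough illustration (the function $f(x)=x^3$ and the mean and standard deviation below are arbitrary choices), the delta-method value can be compared against a Monte Carlo estimate:
````
import random

random.seed(1)
mu, sigma = 2.0, 0.1
f = lambda x: x ** 3
fprime = lambda x: 3 * x ** 2

delta_var = fprime(mu) ** 2 * sigma ** 2                     # 1.44
samples = [f(random.gauss(mu, sigma)) for _ in range(200_000)]
m = sum(samples) / len(samples)
mc_var = sum((s - m) ** 2 for s in samples) / (len(samples) - 1)
print(delta_var, round(mc_var, 2))                           # both close to 1.44
````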
Population variance and sample variance
In general, the population variance of a finite population is given by
$\sigma^2 = \sum_{i=1}^N \left(x_i - \overline{x} \right)^ 2 \, \Pr(x_i),$
where $\overline{x}$ is the population mean. This is merely a special case of the general definition of variance introduced above, but restricted to finite populations.
In many practical situations, the true variance of a population is not known a priori and must be computed somehow. When dealing with large finite populations, it is almost never possible to find the exact value of the population variance, due to time, cost, and other resource constraints. When dealing with infinite populations, this is generally impossible.
A common method of estimating the variance of large (finite or infinite) populations is sampling. We start with a finite sample of values taken from the overall population. Suppose that our sample is the sequence $(y_1,\dots,y_N)$. There are two distinct things we can do with this sample: first, we can treat it as a finite population and describe its variance; second, we can estimate the underlying population variance from this sample.
The variance of the sample $(y_1,\dots,y_N)$, viewed as a finite population, is
$\sigma^2 = \frac{1}{N} \sum_{i=1}^N \left(y_i - \overline{y} \right)^ 2,$
where $\overline{y}$ is the sample mean. This is sometimes known as the sample variance; however, that term is ambiguous. Some electronic calculators can calculate $\sigma^2$ at the press of a button, in which case that button is usually labelled "$\sigma^2$".
When using the sample $(y_1,\dots,y_N)$ to estimate the variance of the underlying larger population the sample was drawn from, it may be tempting to equate the population variance with $\sigma^2$. However, $\sigma^2$ is a biased estimator of the population variance. The following is an unbiased estimator:
$s^2 = \frac{1}{N-1} \sum_{i=1}^N \left(y_i - \overline{y} \right)^ 2,$
where $\overline{y}$ is the sample mean. Note that the term $N-1$ in the denominator above contrasts with the equation for $\sigma^2$, which has $N$ in the denominator. Note that $s^2$ is generally not identical to the true population variance; it is merely an estimate, though perhaps a very good one if $N$ is large. Because $s^2$ is a variance estimate and is based on a finite sample, it too is sometimes referred to as the sample variance.
One common source of confusion is that the term sample variance may refer to either the unbiased estimator $s^2$ of the population variance, or to the variance $\sigma^2$ of the sample viewed as a finite population. Both can be used to estimate the true population variance, but $s^2$ is unbiased. Intuitively, computing the variance by dividing by $N$ instead of $N-1$ underestimates the population variance. This is because we are using the sample mean $\overline{y}$ as an estimate of the unknown population mean $\mu$, and the raw counts of repeated elements in the sample instead of the unknown true probabilities.
In practice, for large $N$, the distinction is often a minor one. In the course of statistical measurements, sample sizes so small as to warrant the use of the unbiased variance virtually never occur. In this context Press et al. commented that "if the difference between n and n−1 ever matters to you, then you are probably up to no good anyway - e.g., trying to substantiate a questionable hypothesis with marginal data."
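A small simulation (with an assumed normal population of variance $4$ and many samples of size $5$) shows the effect directly: the $N$-denominator estimate is low by the factor $(N-1)/N$ on average, while the $N-1$ version is not.
````
import random

random.seed(2)
n, trials, biased, unbiased = 5, 100_000, 0.0, 0.0
for _ in range(trials):
    ys = [random.gauss(0.0, 2.0) for _ in range(n)]   # population variance is 4
    m = sum(ys) / n
    ss = sum((y - m) ** 2 for y in ys)
    biased += ss / n
    unbiased += ss / (n - 1)
print(biased / trials, unbiased / trials)             # about 3.2 versus about 4.0
````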
An unbiased estimator
We will demonstrate why $s^2$ is an unbiased estimator of the population variance. An estimator $\hat{\theta}$ for a parameter $\theta$ is unbiased if $\operatorname{E}\{ \hat{\theta}\} = \theta$. Therefore, to prove that $s^2$ is unbiased, we will show that $\operatorname{E}\{ s^2\} = \sigma^2$. As an assumption, the population which the $x_i$ are drawn from has mean $\mu$ and variance $\sigma^2$.
$\operatorname{E} \{ s^2 \} = \operatorname{E} \left\{ \frac{1}{n-1} \sum_{i=1}^n \left( x_i - \overline{x} \right) ^ 2 \right\}$
$= \frac{1}{n-1} \sum_{i=1}^n \operatorname{E} \left\{ \left( x_i - \overline{x} \right) ^ 2 \right\}$
$= \frac{1}{n-1} \sum_{i=1}^n \operatorname{E} \left\{ \left( (x_i - \mu) - (\overline{x} - \mu) \right) ^ 2 \right\}$
$= \frac{1}{n-1} \sum_{i=1}^n \left[ \operatorname{E} \left\{ (x_i - \mu)^2 \right\} - 2 \operatorname{E} \left\{ (x_i - \mu) (\overline{x} - \mu) \right\} + \operatorname{E} \left\{ (\overline{x} - \mu) ^ 2 \right\} \right]$
$= \frac{1}{n-1} \sum_{i=1}^n \left[ \sigma^2 - 2 \left( \frac{1}{n} \sum_{j=1}^n \operatorname{E} \left\{ (x_i - \mu) (x_j - \mu) \right\} \right) + \frac{1}{n^2} \sum_{j=1}^n \sum_{k=1}^n \operatorname{E} \left\{ (x_j - \mu) (x_k - \mu) \right\} \right]$
$= \frac{1}{n-1} \sum_{i=1}^n \left[ \sigma^2 - \frac{2 \sigma^2}{n} + \frac{\sigma^2}{n} \right]$
$= \frac{1}{n-1} \sum_{i=1}^n \frac{(n-1)\sigma^2}{n}$
$= \frac{(n-1)\sigma^2}{n-1} = \sigma^2$
See also algorithms for calculating variance.
Alternate proof
$E\left[ \sum_{i=1}^n {(X_i-\overline{X})^2}\right] =E\left[ \sum_{i=1}^n {X_i^2}\right] - nE[ \overline{X}^2]$
$=nE[X_i^2] - \frac{1}{n} E\left[\left(\sum_{i=1}^n X_i\right)^2\right]$
$=n(\operatorname{var}[X_i] + (E[X_i])^2) - \frac{1}{n} E\left[\left(\sum_{i=1}^n X_i\right)^2\right]$
$=n\sigma^2 + \frac{1}{n}(nE[X_i])^2 - \frac{1}{n}E\left[\left(\sum_{i=1}^n X_i\right)^2\right]$
$=n\sigma^2 - \frac{1}{n}\left( E\left[\left(\sum_{i=1}^n X_i\right)^2\right] - \left(E\left[\sum_{i=1}^n X_i\right]\right)^2\right)$
$=n\sigma^2 - \frac{1}{n}\left(\operatorname{var}\left[\sum_{i=1}^n X_i\right]\right) =n\sigma^2 - \frac{1}{n}(n\sigma^2) =(n-1)\sigma^2.$
Confidence intervals based on the sample variance
A confidence interval $T$ for the population variance can be formed as[1]
$T=\left[ \frac{n-1}{z_{2}}s^{2},\frac{n-1}{z_{1}}s^{2}\right]$
where $z_{1}$ and $z_{2}$ are strictly positive constants and $z_{1}<z_{2}$. Its coverage probability is
$P\left( \sigma^{2}\in T\right) =P\left( z_{1}\leq \chi^2_{n-1}\leq z_{2}\right)$
where $\chi^2_{n-1}$ is a chi-square random variable with $n-1$ degrees of freedom.
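For instance, assuming SciPy is available, the quantiles $z_1<z_2$ can be read off the chi-square distribution with $n-1$ degrees of freedom; the sample size and $s^2$ below are made-up values:
````
from scipy.stats import chi2

n, s2 = 20, 4.0
z1 = chi2.ppf(0.025, n - 1)                 # lower 2.5% quantile
z2 = chi2.ppf(0.975, n - 1)                 # upper 2.5% quantile
T = ((n - 1) * s2 / z2, (n - 1) * s2 / z1)  # coverage probability 0.95
print(T)                                    # roughly (2.3, 8.5)
````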
Generalizations
If X is a vector-valued random variable, with values in Rn, and thought of as a column vector, then the natural generalization of variance is E[(X − μ)(X − μ)T], where μ = E(X) and XT is the transpose of X, and so is a row vector. This variance is a nonnegative-definite square matrix, commonly referred to as the covariance matrix.
If X is a complex-valued random variable, then its variance is E[(X − μ)(X − μ)*], where X* is the complex conjugate of X. This variance is a nonnegative real number.
History
The term variance was first introduced by Ronald Fisher in his 1918 paper The Correlation Between Relatives on the Supposition of Mendelian Inheritance.
Moment of inertia
The variance of a probability distribution is analogous to the moment of inertia in classical mechanics of a corresponding linear mass distribution, with respect to rotation about its center of mass. It is because of this analogy that such things as the variance are called moments of probability distributions.
References
1. Marco Taboga (2010). Lectures on Probability and Statistics: Set estimation of the variance.
2. Press, W. H., Teukolsky, S. A., Vetterling, W. T. & Flannery, B. P. (1986). Numerical Recipes: The Art of Scientific Computing. Cambridge: Cambridge University Press.
This page uses Creative Commons Licensed content from Wikipedia (view authors).
http://crypto.stackexchange.com/questions/5355/which-risks-are-associated-with-deriving-multiple-keys-from-the-same-dh-secret-z?answertab=oldest
# Which risks are associated with deriving multiple keys from the same DH secret Z?
NIST recommends Krawczyk's HMAC-based key derivation function (HKDF) in SP-800-56C (PDF). HKDF shall e.g. be used to create keys from shared secrets after Diffie Hellman key establishment.
NIST states in the same doc:
Each call to the randomness extraction step requires a freshly computed shared secret $Z$, and this shared secret shall be zeroized immediately following its use in the extraction process.
Why is it not recommended to derive multiple keys from the same $Z$, e.g. why not derive a key for data encryption and a key for authentication from the same $Z$ using different info strings? If I still do it, what weaknesses might the derived keys be facing? Is it simply bad practice (without deep explanation) to do so since the derived keys obviously are related? Krawczyk seems to think that multiple derivations from the secret are expected use (see section 3.2 in RFC 5869, my interpretation).
Does salting the HKDF with a good random salt change my risks when deriving multiple keys from a single $Z$?
-
## 2 Answers
A key derivation function is intuitively "purifying" the entropy in the group element Z into uniformly random (looking) bits that can used as a key for other purposes. It is not designed to produce "multiple keys" from the same Z, and one should definitely not call the KDF on the same Z twice (even with different salts) and expect to get two independent keys.
If you want more key material beyond the output of one call to the KDF, the high level approach should be to apply the KDF to Z only once to get a single key, and then apply a pseudorandom generator (e.g., AES in counter mode) to that key to get multiple keys. This approach will be sound in the sense that it can proven secure in an appropriate model, assuming Z is pseudorandom, the KDF meets a natural notion of "purifying," and the pseudorandom generator is secure.
-
So HKDF (not being any old KDF but e.g. HMAC-SHA-256/512 based) is still not providing the step of applying a pseudorandom generator? – NotACryptographer Nov 13 '12 at 14:53
The output of K = HKDF(Z,r) will look like a random key when Z is a random group element and r is a random salt. But K1 = HKDF(Z,r) and K2 = HKDF(Z,r') will not look like two independent random keys, even when the salts r,r' are independent (at least this is the case with the concept of KDFs I am familiar with). However, if we take (K1,K2) = PRG(HKDF(Z,r)), then they will look random and independent. – David Cash Nov 13 '12 at 16:48
There are two answers: the "engineering" answer, and the "principled" answer.
The engineering answer is that, in practice, if you generate two keys using two different info strings, I suspect you'd probably get away with it without problems. If we model the hash as a random oracle (admittedly a very strong "assumption"), then I suspect it might be possible to demonstrate that what you propose is OK. Disclaimers: I haven't analyzed this, and I'm certainly not going to give you any guarantees -- if you do what you propose, cryptographers will wag their finger and say "tsk, tsk", and rightly so. I suspect you'd probably get away with it (it's not the worst sin you could make), but if something does go wrong, cryptographers aren't going to take the blame -- it's all on you.
The more principled answer is that if you do what you propose, you are misusing the HKDF primitive. The HKDF is only intended to be applied to a single $Z$ once. It is intended to turn an unguessable value into something that looks uniformly random. It has been analyzed for that use. It was not designed to derive multiple keys from the same $Z$: it hasn't been analyzed for that kind of use case, since that's not what it was designed for. So, you're throwing away the benefits you could get from the public analysis of HKDF if you use it in a way that it wasn't designed for.
Consequently, given that it is so easy to apply a PRG or PRF to the output of HKDF (using HKDF to get a uniform-random key, and using a PRG or PRF for key separation, i.e., to derive two different keys), you should probably do that, instead of what you proposed. Given that it is so easy to do the principled things, you might as well do the principled thing, and use the HKDF only once on any given $Z$. Even though you could probably get away with cutting corners and doing what you proposed, I see no reason to take the risk (even if the risk is miniscule).
So, stick to using HKDF in the way that its specification tells you to. Don't cut corners. In this case, there's not really any compelling reason to deviate from standard cryptographic practice, so you might as well stick with what the specification and cryptographers recommend.
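To make this concrete, here is a minimal Python sketch of the recommended pattern (one extraction from $Z$, one expansion, and the output split into two keys, in the spirit of the variable-length output mentioned in the comments below). The HKDF steps are written out directly from RFC 5869; the salt, the info string, the key lengths, and the random stand-in for $Z$ are all illustrative assumptions.
````
import hmac, hashlib, os

def hkdf_extract(salt: bytes, ikm: bytes) -> bytes:
    # RFC 5869: PRK = HMAC-Hash(salt, IKM)
    return hmac.new(salt, ikm, hashlib.sha256).digest()

def hkdf_expand(prk: bytes, info: bytes, length: int) -> bytes:
    # RFC 5869: OKM = first `length` bytes of T(1) | T(2) | ...
    okm, t, counter = b"", b"", 1
    while len(okm) < length:
        t = hmac.new(prk, t + info + bytes([counter]), hashlib.sha256).digest()
        okm += t
        counter += 1
    return okm[:length]

Z = os.urandom(32)                      # stand-in for the DH shared secret
salt = os.urandom(32)
okm = hkdf_expand(hkdf_extract(salt, Z), b"example: enc+mac keys", 64)
enc_key, mac_key = okm[:32], okm[32:]   # Z was fed to the extractor only once
````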
-
As HKDF has a variable-length output, couldn't we just produce enough output for both keys and then split it in two, instead of using another PRF after HKDF? – Paŭlo Ebermann♦ Nov 15 '12 at 23:14
@PaŭloEbermann, good point, yes, I think that would be OK too. – D.W. Nov 15 '12 at 23:52
http://math.stackexchange.com/questions/238753/evaluation-of-the-limit-lim-limits-n-to-infty-frac1-sqrt-n-left1-f/238761
# Evaluation of the limit $\lim\limits_{n \to \infty } \frac1{\sqrt n}\left(1 + \frac1{\sqrt 2 }+\frac1{\sqrt 3 }+\cdots+\frac1{\sqrt n } \right)$
Evaluate the limit : $$\lim_{n \to \infty } {1 \over {\sqrt n }}\left( {1 + {1 \over {\sqrt 2 }} + {1 \over {\sqrt 3 }} + \cdots + {1 \over {\sqrt n }}} \right)$$
I can use the sandwich principle, certain convergence criteria, Cesaro mean theorem, limit arithmetic.. things around this area.
Any help would be greatly appreciated, thanks! Sorry for not elaborating more at the beginning, rookie first-post mistake I suppose. :)
-
Thanks for your kind help everyone, but is there any way to show this with simple basic tools that a calculus newbie like myself would recognize? – Adar Hefer Nov 16 '12 at 16:49
## 7 Answers
To use as little machinery as possible, observe that $\left(\sqrt n+\frac1{2\sqrt n}\right)^2=n+1+\frac1{4n}>n+1$, hence $$\tag1\sqrt{n+1}<\sqrt n + \frac1{2\sqrt n}.$$ By induction, we see therefore that $$\tag22\sqrt {n+1}<2+\sum_{k=1}^n \frac1{\sqrt k}\quad\text{for all }n\in\mathbb N.$$ On the other hand, if $q>2$, then for $n$ sufficiently large, we have $\left(\sqrt n+\frac1{q\sqrt n}\right)^2=n+\frac2q+\frac1{q^2n}<n+1$ and hence $$\tag3\sqrt{n+1}>\sqrt n + \frac1{q\sqrt n}.$$ Again by induction, we therefore find $$\tag4q\sqrt {n+1}>C+\sum_{k=1}^n \frac1{\sqrt k}\quad\text{for all }n\in\mathbb N,$$ where $C$ is a (negative) constant depending on $q$ (needed to cover the fact that $(3)$ hold only for $n$ sufficiently large). This gives us $$\frac{2\sqrt{n+1}-2}{\sqrt n}<\frac1{\sqrt n}\left(1+\frac1{\sqrt 2}+\cdots+\frac1{\sqrt n}\right)<\frac{q\sqrt{n+1}-C}{\sqrt n}$$ for almost all $n$. The left and right estimate converge to $2$ and $q$, respectively, as $n\to\infty$. Since $q$ was any number $>2$, we conclude that $$\lim_{n\to\infty}\frac1{\sqrt n}\left(1+\frac1{\sqrt 2}+\cdots+\frac1{\sqrt n}\right)=2.$$
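Numerically the squeeze is easy to watch (the values of $n$ below are arbitrary): the lower bound from the last display and the full expression both creep up to $2$.
````
from math import sqrt

for n in (10**3, 10**5, 10**7):
    s = sum(1 / sqrt(k) for k in range(1, n + 1))
    lower = (2 * sqrt(n + 1) - 2) / sqrt(n)
    print(n, round(lower, 5), round(s / sqrt(n), 5))   # both tend to 2
````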
-
After reading Phira's answer, I suggest you replace the upper bound with something based on $\sqrt{n+1}>\sqrt n+\frac1{2\sqrt {n+1}}$ for all $n$ rather than my $\sqrt{n+1}>\sqrt n+\frac1{q\sqrt {n}}$ for almost all $n$. – Hagen von Eitzen Nov 16 '12 at 17:09
I don;t get why $\left(\sqrt n+\frac1{q\sqrt n}\right)^2=n+\frac q2+\frac1{q^2n}<n+1$ for $q>2$ – Norbert Dec 4 '12 at 15:09
My writing $\frac q2$ instead of $\frac2q$ was a TeX accident. With the correct formula, $\left(\sqrt n +\frac 1{q\sqrt n}\right)^2 = n+2\cdot\sqrt n\cdot \frac1{q\sqrt n}+\frac1{q^2n}=n+\frac 2q+\frac 1{q^2n}$ and $\frac2q<1$. For sufficiently large $n$, we still have $\frac2q+\frac1{q^2n}<1$. – Hagen von Eitzen Dec 4 '12 at 17:38
Note that $$2(\sqrt{k+1}-\sqrt{k})=\frac{2}{\sqrt{k+1}+\sqrt{k}}\leq\frac{1}{\sqrt{k}}\leq \frac{2}{\sqrt{k}+\sqrt{k-1}}=2(\sqrt{k}-\sqrt{k-1})$$ Hence $$2(\sqrt{n+1}-1)=\sum\limits_{k=1}^n 2(\sqrt{k+1}-\sqrt{k})\leq\sum\limits_{k=1}^n\frac{1}{\sqrt{k}}\leq \sum\limits_{k=1}^n 2(\sqrt{k}-\sqrt{k-1})=2\sqrt{n}$$ so $$\frac{2\sqrt{1 + n}-2}{\sqrt{n}}\leq\frac{1}{\sqrt{n}}\sum\limits_{k=1}^{n}\frac{1}{\sqrt{k}}\leq2$$ The rest is clear.
-
Thanks though, you're brilliant. I didn't get this far in my studies though, and I don't know how to interpret your answer.. – Adar Hefer Nov 16 '12 at 16:40
Norbert, you're the man!! – Adar Hefer Nov 16 '12 at 16:59
Rewrite as $$\frac 1n \left(\frac 1 {\sqrt{\frac 1n}}+\frac 1 {\sqrt{\frac 2n}}+\dots +\frac 1 {\sqrt{\frac nn}} \right)$$ and interpret this as a Riemann sum for the function $$\frac 1{\sqrt x}.$$
-
Thanks buddy, but I didn't get to differential and integral mathematics yet. Still battling my way through sequences. – Adar Hefer Nov 16 '12 at 16:45
You would get more useful answers if you gave more context to your question. What tools do you have at your disposal? And write the answer to this question in your original question where it belongs. – Phira Nov 16 '12 at 16:48
Sandwich principle, Cesaro mean theorem, arithmetics with limits.. I'd probably elaborate further but I would need to look up the English terms. Do you figure these are enough, though? – Adar Hefer Nov 16 '12 at 16:53
Does interpreting the sequence as a Riemann sum guarantee convergence to the integral in the general case? I'm not sure it does, since the idea that the integral exists means that we can find a partition that means that the sum is close to the value for the integral. I suppose that in this case we could find n large enough so that there is an $\sqrt{\frac{i}{n}}$ in each subinterval of the given partition and use convexity of the function, but that seems like a lot of hassle, and I'm not sure we can claim it's true for the general case. Am I missing something? – Tom Oldfield Nov 16 '12 at 17:14
@TomOldfield This is a continuous function, thus (Riemann)integrable, thus the Riemann sum converges. What you call "hassle" is part of the general proof that any fine partition works. – Phira Nov 16 '12 at 17:30
Write your expression as ${1\over n}$ times $\biggl(\ldots\biggr)$ and you will see that it can be interpreted as a Riemann sum belonging to a certain integral.
-
Thanks for the suggestion, but I need a solution using the sandwich principle, Cesaro means, not a whole lot more advanced than this... I've only just started 4 weeks ago. :/ – Adar Hefer Nov 16 '12 at 16:42
You can also use the identity $$\sqrt {x +1} - \sqrt x = \dfrac 1 {\sqrt{x+1} + \sqrt x }$$
which is between $\dfrac 1{2\sqrt {x+1} }$ and $\dfrac 1 {2\sqrt {x}}$.
-
It'd be clearer to add this to your first answer. It is also considered rude, or at least not-so-nice, to write more than one answer to the same question in the same thread. – DonAntonio Nov 16 '12 at 19:30
I suggest that you own your opinion and say that you consider it rude. Furthermore, I suggest that you do so at the recent meta thread dealing with this very topic. – Phira Nov 16 '12 at 22:22
Have it as you want: you can read other cases like this. I was trying to be useful, my bad. – DonAntonio Nov 16 '12 at 22:40
A couple of other methods, which use more machinery than the earlier ones, but are worth looking into for their generality.
Bounding by integrals
Comparing the integrals by constant bounding function over unit intervals, we get $$\frac1{\sqrt{n}}\left(\int_1^{n+1}\frac{\mathrm{d}x}{\sqrt{x}}\right) \le\frac1{\sqrt{n}}\sum_{k=1}^n\frac1{\sqrt{k}} \le\frac1{\sqrt{n}}\left(1+\int_1^n\frac{\mathrm{d}x}{\sqrt{x}}\right)$$ which yields $$\color{#C00000}{\frac1{\sqrt{n}}\left(2\sqrt{n+1}-2\right)} \le\frac1{\sqrt{n}}\sum_{k=1}^n\frac1{\sqrt{k}} \le\color{#C00000}{\frac1{\sqrt{n}}\left(2\sqrt{n}-1\right)}$$ As $n\to\infty$, both bounding terms (in red) tend to $2$. Therefore, by the Squeeze Theorem, we get $$\lim_{n\to\infty}\frac1{\sqrt{n}}\sum_{k=1}^n\frac1{\sqrt{k}}=2$$
Euler-Maclaurin Sum Formula
The Euler-Maclaurin Sum Formula says that for some constant $C$, we have $$\sum_{k=1}^n\frac1{\sqrt k}=2\sqrt{n}+\frac1{2\sqrt n}+C+O\left(n^{-3/2}\right)$$ Dividing by $\sqrt n$ yields $$\begin{align} \frac1{\sqrt n}\sum_{k=1}^n\frac1{\sqrt k} &=2+\frac C{\sqrt n}+\frac1{2n}+O\left(n^{-2}\right)\\ &\to2 \end{align}$$
-
This is a standard Stolz-Cesàro problem:
$$\lim_{n \to \infty } {1 \over {\sqrt n }}\left( {1 + {1 \over {\sqrt 2 }} + {1 \over {\sqrt 3 }} + \cdots + {1 \over {\sqrt n }}} \right)=\lim_{n \to \infty } \frac{\left( {1 + {1 \over {\sqrt 2 }} + {1 \over {\sqrt 3 }} + \cdots + {1 \over {\sqrt n }}} \right)}{\sqrt{n}}=$$ $$=\lim_{n}\frac{\frac{1}{\sqrt{n+1}}}{\sqrt{n+1}-\sqrt{n}}$$
rationalize the denominator you get
$$\lim_{n \to \infty } {1 \over {\sqrt n }}\left( {1 + {1 \over {\sqrt 2 }} + {1 \over {\sqrt 3 }} + \cdots + {1 \over {\sqrt n }}} \right)=\lim_n \frac{\sqrt{n+1}+\sqrt{n}}{\sqrt{n+1}}=2$$
-
http://www.reference.com/browse/Geosynchronous+orbit+derivation
# Geosynchronous orbit
A geosynchronous orbit is an orbit around the Earth with an orbital period matching the Earth's sidereal rotation period. This synchronization means that for an observer at a fixed location on Earth, a satellite in a geosynchronous orbit returns to exactly the same place in the sky at exactly the same time each day. In principle, any orbit with a period equal to the Earth's rotational period is technically geosynchronous, however, the term is almost always used to refer to the special case of a geosynchronous orbit that is circular (or nearly circular) and at zero (or nearly zero) inclination, that is, directly above the equator. This is sometimes called a geostationary orbit.
A semisynchronous orbit has an orbital period of 0.5 sidereal days, i.e. 11 h 58 min. Relative to the Earth's surface it has twice this period, and hence appears to go around the Earth once every day. Examples include the Molniya orbit and the orbits of the satellites in the Global Positioning System.
## Orbital characteristics
All geosynchronous orbits have a semi-major axis of 42,164 km. In fact, orbits with the same period share the same semi-major axis: $a=\sqrt[3]{\mu\left(\frac{P}{2\pi}\right)^2}$ where $a$ = semi-major axis, $P$ = orbital period, and $\mu$ = geocentric gravitational constant.
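Plugging standard values into that formula (the gravitational constant and sidereal-day length below are assumed, not taken from this article) reproduces that semi-major axis:
````
from math import pi

mu = 3.986004418e14          # geocentric gravitational constant, m^3/s^2 (assumed)
P = 86164.0905               # one sidereal day in seconds (assumed)
a = (mu * (P / (2 * pi)) ** 2) ** (1 / 3)
print(round(a / 1000))       # about 42164 km
````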
In the special case of a geostationary orbit, the ground track of a satellite is the equator. In the general case of a geosynchronous orbit with a non-zero inclination or eccentricity, the ground track is a more or less distorted figure-eight, returning to the same places once per solar day.
## Geostationary orbit
A circular geosynchronous orbit in the plane of the Earth's equator has a radius of approximately 42,164 km (from the center of the Earth). A satellite in such an orbit is at an altitude of approximately 35,786 kilometers above mean sea level. It will maintain the same position relative to the Earth's surface. If one could see a satellite in geostationary orbit, it would appear to hover at the same point in the sky, i.e., not exhibit diurnal motion, while one would see the Sun, Moon, and stars traverse the heavens behind it. This is sometimes called a Clarke orbit. Such orbits are useful for telecommunications satellites.
A perfect stable geostationary orbit is an ideal that can only be approximated. In practice the satellite will drift out of this orbit (because of perturbations such as the solar wind, radiation pressure, variations in the Earth's gravitational field, and the gravitational effect of the Moon and Sun), and thrusters are used to maintain the orbit in a process known as station-keeping.
## Synchronous orbits around general astronomical objects
Synchronous orbits exist around all moons, planets, stars and black holes — unless they rotate so slowly that the orbit would be outside their Hill sphere or so fast that such an orbit would be inside the body. Most inner moons of planets have synchronous rotation, so their synchronous orbits are, in practice, limited to their leading and trailing (L4 and L5) Lagrange points, as well as the L1 and L2 Lagrange points, assuming they don't fall within the body of the moon. Objects with chaotic rotations (such as Hyperion) are also problematic, as their synchronous orbits keep changing unpredictably.
## Other geosynchronous orbits
Elliptical geosynchronous orbits can be and are designed for communications satellites that keep the satellite within view of its assigned ground stations or receivers. A satellite in an elliptical geosynchronous orbit will appear to oscillate in the sky from the viewpoint of a ground station, tracing an analemma in the sky. Satellites in highly elliptical orbits must be tracked by steerable ground stations.
Theoretically an active geosynchronous orbit can be maintained if forces other than gravity are also used to maintain the orbit, such as a solar sail. Such a statite can be geosynchronous in an orbit different (higher, lower, more or less elliptical, or some other path) from the conic section orbit formed by a gravitational body.
Surveillance satellites use active geosynchronous orbits to maintain position and track above a fixed point on the Earth's surface. They are directed by controllers on the ground.
A further form of geosynchronous orbit is obtained by the theoretical space elevator in which one end of the structure is tethered to the ground, maintaining a longer orbital period than by gravity alone if under tension.
## Other definitions of geosynchronous orbit
• Geosynchronous orbit (GEO): a circular orbit, 35786 km above Earth's surface
The following orbits are special orbits that are also used to categorize orbits:
• Geostationary orbit (GSO): zero inclination geosynchronous orbit
• Supersynchronous orbit - a disposal / storage orbit above GSO/GEO. Satellites will drift in a westerly direction.
• Subsynchronous orbit - a drift orbit close to but below GSO/GEO. Used for satellites undergoing station changes in an eastern direction.
• Graveyard orbit - a supersynchronous orbit where spacecraft are intentionally placed at the end of their operational life.
## History
Author Arthur C. Clarke is credited with proposing the notion of using a geostationary orbit for communications satellites. The orbit is also known as the Clarke Orbit. Together, the collection of artificial satellites in these orbits is known as the Clarke Belt.
The first communications satellite placed in a geosynchronous orbit was Syncom 2, launched in 1963. Geosynchronous orbits have been in common use ever since, in particular for satellite television.
Geostationary satellites also carry international telephone traffic but they are being replaced by fiber optic cables in heavily populated areas and along the coasts of less developed regions, because of the greater bandwidth available and lower latency, due to the inherent disconcerting delay in communicating via a satellite in such a high orbit. It takes electromagnetic waves about a quarter of a second to travel from one end to the other of the link, thus two parties talking via satellite will be subject to about a half second delay in round-trip response.
Although many populated land locations on the planet now have terrestrial communications facilities (microwave, fiber-optic), even undersea, with more than sufficient capacity, satellite telephony and Internet access is still the only service available for many places in Africa, Latin America, and Asia, as well as isolated locations that have no terrestrial facilities, such as Canada's Arctic islands, Antarctica, the far reaches of Alaska and Greenland, and ships at sea.
http://mathoverflow.net/questions/112069/a-fibrant-objects-structure-on-top
## A fibrant-objects structure on Top
(Sorry for the crossposting, but I'm really interested in this question).
One can define (Paragraph 1.5, page 10) a fibrant-object structure on a suitable cartesian closed category of topological spaces $\bf Top$, called the $\pi_0$-fibrant structure:
1. A $\pi_0$-equivalence is a map inducing a bijection at the level of $\pi_0$
2. A $\pi_0$-fibration is a continuous map $p\colon E\to B$ having the RLP with respect to the map $\{0\}\to [0,1]$ including the 0: [I'm not able to reproduce the diagram, the TeX engine seems not to accept the "array" environment]
Every property defining a fibrant structure can be easily shown in the way you see.
Now I'm interested in extending this. The natural definition for a $\pi_n$-equivalence is a map $A\to B$ inducing isomorphisms $\pi_i(A)\to \pi_i(B)$ for all $0\le i\le n$.
What should a $\pi_n$-fibration be in order to define a fibrant structure $\pi_n\text{-}\bf Top$ for all $n\in\mathbb N$?
What if we "go to the limit" (and can it be done?) $\varinjlim_n \big(\pi_n\text{-}\bf Top\big)$ of these fibrant structures? Do we recover a known fibrant structure, obtained forgetting cofibrations and mutual lifting properties of a suitable model structure, on $\bf Top$?
-
## 2 Answers
Your question was addressed in the following paper:
Carmen Elvira-Donazar and Luis-Javier Hernandez-Patricio. Closed model categories for the $n$-type of spaces and simplicial sets. Math. Proc. Camb. Phil. Soc. (1995), 118, 93.
Allow me to define an $n$-fibration by quoting from the introduction: Let $I^p$ be the $p$-dimensional unit cube, $V^{p-1}$ be the union of all faces of $I^p$ except for $I^p\times {1}$ and $\partial I^p$ the boundary of $I^p$. A map $f$ is an $n$-fibration if it has the right lifting property with respect to $V^{p-1}\to I^p$ (for $0 < p \leq n+1$) and with respect to $V^{n+1}\to \partial I^{n+2}$.
With this definition, and your notion of an $n$-equivalence they prove that $Top$ (meaning a suitable cartesian-closed version) is a model category. So you can forget all mention of cofibrations and get the fibrant-object structure you wanted. The proof proceeds by way of Simplicial Sets, so if you read that paper you'll probably learn loads more about $n$-fibrations. For instance, Corollary 2.1 says trivial $n$-fibrations are exactly maps which have the RLP with respect to $\partial I^p\to I^p$ for $0\leq p\leq n+1$.
It is not difficult to see from the description of $n$-equivalences and $n$-fibrations that in the limit as $n\to \infty$ you get the usual model structure on $Top$. I should mention that this paper of Golasinski and Goncalves credits this model structure to Tim Porter and J.L. Hernandez via *Categorical models of $n$-types for procrossed complexes and $\mathcal{J}_n$-prospaces* from the 1990 Barcelona Conference on Algebraic Topology. But I couldn't find an online copy of that, so I went with the reference above instead.
Note that the dual question to your question (declaring $X\to Y$ to be an $n$-equivalence if $\pi_k(X)\to \pi_k(Y)$ is an isomorphism for all $k>n$) has also been answered, and again there is a model structure. Here is a reference:
J. Ignacio Extremiana Aldana, L. Javier Hernández Paricio, and M. Teresa Rivas Rodríguez. A closed model category for ($n-1$)-connected spaces. Proc. Amer. Math. Soc. 124 (1996), 3545-3553
EDIT (April 1, 2013):
I recently learned of another paper in this vein by the same authors, thanks to a comment of Fernando Muro over at this MO thread. Here is the reference:
J. Ignacio Extremiana Aldana , L. Javier Hernández Paricio , M. Teresa Rivas Rodríguez. Closed Model Categories For [n,m]-Types (1997)
This combines the two types of truncation mentioned above to get a model structure on [n,m]-types (truncated by $n$ below and $m$ above) whose homotopy category is equivalent to the category of $n$-reduced CW complexes with dimension $\leq m+1$ and $m$-homotopy classes of maps. It actually does even more, because it gives a different model structure whose homotopy category is equivalent to the homotopy category of $(n-1)$-connected, $(m+1)$-coconnected CW complexes.
-
maybe you mean "$V^{p-1}$ = the union of all faces of $I^p$ except for $I^{p-1}\times 1$"? – tetrapharmakon Dec 17 at 19:51
...I'm totally uncomfortable with the definition of $V^{p-1}$. Can you write explicitly $V^{-1}$ (if it can be defined), $V^0,V^1, V^2$? – tetrapharmakon Dec 17 at 20:01
Did you look at the linked paper? Maybe I made an error transcribing that description. The paper goes into more detail on this construction. Perhaps it's supposed to be "except for $I^{p-1}\times 1$." – David White Dec 17 at 20:17
It's obviously $I^{p-1}$; I checked and the error is repeated verbatim in the version of the paper you linked me. – tetrapharmakon Dec 18 at 22:26
(which is the submitted version, but it's strange, isn't it?) – tetrapharmakon Dec 18 at 22:27
This might be a naive answer but here is a suggestion for the definition of $\pi_n$-fibrations: maps having the RLP with respect to the the map $\Delta^k\to\Delta^k\times I$ for any $k\leq n$.
In the limit you will get the "obvious" fibrant-object structure on $Top$ which comes from its usual model structure (recall that the full subcategory of fibrant objects in any model category is a category of fibrant objects.. and that all objects are fibrants in $Top$).
-
http://math.stackexchange.com/questions/267140/why-is-f-v-lambda-true-for-a-standing-wave?answertab=votes
# Why is $f=v/\lambda$ true for a standing wave? [closed]
If a wave travels at speed $v$ and has wavelength $\lambda$, it's obvious that any given place in the medium through which the wave is travelling will undergo one full oscillation every $\lambda/v$ units of time, and so the frequency $f$ of the wave must be given by $f=v/\lambda$.
But I can't see why a standing wave should obey that equation. What does it even mean to talk about the "speed" of a standing wave? I'd love an explanation.
(I'm asking this because I'm trying to understand the harmonic series of a plucked or struck vibrating string with fixed ends. Assume the length of the string is $1/2$. Then I understand that the string can only have waves whose wavelength is an element of $\{1,\ 1/2,\ 1/3, ...\}$. Assume that the wave with wavelength $1$ vibrates at frequency $f_1$. Then we can supposedly use the equation $f=v/\lambda$ to find that the wave with wavelength $1/2$ must vibrate at frequency $f_2=2*f_1$, the one with wavelength $1/3$ with frequency $f_3=3*f_1$ and so on, giving the harmonic series $f_1,f_2,f_3,...$, where $f_n=n*f_1$; but as you can see from the above, I can't see why the $f_n$ should have the values they do.)
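That last step is just arithmetic and is easy to spot-check (the wave speed below is an arbitrary placeholder): with allowed wavelengths $1/n$, the relation $f=v/\lambda$ forces $f_n=n\,f_1$, which is the harmonic series in question.
````
v = 343.0                                   # placeholder wave speed
wavelengths = [1 / n for n in range(1, 6)]  # 1, 1/2, 1/3, ...
freqs = [v / lam for lam in wavelengths]
print([round(f / freqs[0], 9) for f in freqs])   # [1.0, 2.0, 3.0, 4.0, 5.0]
````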
-
A standing wave is just a superposition of two waves travelling in opposite directions. The relation applies to the two superimposed waves individually. – Julien Dec 29 '12 at 16:50
## closed as off topic by Nameless, Henry T. Horton, Davide Giraudo, Alexander Gruber, TMM Dec 29 '12 at 18:04
## 1 Answer
The fact that the higher modes of a string have frequencies that are integer multiples of the fundamental is not automatic in the properties of higher modes. It comes from the calculation of potential and kinetic energy in the string. Once you calculate these, the spacing of the modes comes out. In other oscillators, such as 2D drum heads or the semi-classical hydrogen atom, the frequencies have a different spacing.
-
http://math.stackexchange.com/questions/193673/find-the-line-equations-of-the-sides-of-a-triangle-finding-slopes
# Find the line equations of the sides of a triangle. (finding slopes)
````
             A (10,15)
              /\
             /  \
            /    \
           /      \
          /        \
C (x, y) /__________\ B (16, 10)
````
Given : $\angle BAC = 85^\circ$ ; $\angle ABC = 70^\circ$ ;
To find: Equations of $\overline{AC}$ and $\overline{BC}$ (or their slopes)
What I have done:
1. By distance formula I found $AB = \sqrt{61}$
2. By law of sines, I found the other values. $AC = 18.4103$ and $BC = 17.3661$
3. Found slope of $AB$. $m = -5/6$. Since slope is negative $m = -(-5/6) = 5/6$
4. $\arctan(5/6) = 39.80^\circ$. $\angle B$ is $70^\circ$. But it makes ($70^\circ - 39.80^\circ = 30.20^\circ$) with $x$-axis. $BC$ makes $30.2^\circ$ with $x$-axis.
5. Hence slope of $BC = \tan(30.2^\circ) = 0.5820$
6. $\Rightarrow (y - 10) = 0.582 (x - 16) \Rightarrow 0.582x - y = - 0.688$ (Equation of $\overline{BC}$)
7. $AC$ makes $55.2^\circ$ with $x$-axis. So slope is $\tan(55.2^\circ) = 1.4388$
The slope of $AC$ is wrong. The coordinates of $C$ are $(4,3)$. So if I cross-check, the slope of $BC$ is correct but the slope of $AC$ is not. Can anyone help me with this? How do I find the line equation of $\overline{AC}$? What will be its slope?
-
if h=hypotenuse,b=adjacent,p=opposite then slope of h=6:1 ,slope of b=1.5% calculate the length of b. – asmat khan Apr 7 at 10:49
## 1 Answer
Your calculations are fine; there's a problem with your assumptions. For example, in step 2 you correctly find the lengths of AC and BC from the angles BAC and ABC. But if you find those lengths from the assumption $C = (4, 3)$ with the Pythagorean theorem, you'll get different values around 13. The angles are not consistent with that position for $C$.
I suggest you go back to where you got this data and find the mistake there.
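A short numerical check (using the coordinates from the question together with the book's $C=(4,3)$) makes the inconsistency explicit: the angle at $B$ does come out near $70^\circ$, which is why the slope of $BC$ checked out, but the angle at $A$ is about $76.8^\circ$ rather than $85^\circ$.
````
from math import acos, degrees, hypot

A, B, C = (10, 15), (16, 10), (4, 3)

def angle(P, Q, R):
    """Angle at vertex P of triangle PQR, in degrees."""
    u = (Q[0] - P[0], Q[1] - P[1])
    v = (R[0] - P[0], R[1] - P[1])
    cos = (u[0] * v[0] + u[1] * v[1]) / (hypot(*u) * hypot(*v))
    return degrees(acos(cos))

print(round(angle(A, B, C), 1), round(angle(B, A, C), 1))   # 76.8 and 70.1, not 85 and 70
````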
-
No. I didn't use the Pythagorean theorem here, nor did (4,3) influence me. I was going by the steps. I used the law of sines to find the sides: a/sin A = b/sin B = c/sin C. I used (4,3) only to check if my line equation was right or not. Now I am unable to find the line equation of AC since I need the slope, which is tan(some angle)! How do I find this angle? – Krat Sep 10 '12 at 19:24
You found that angle in step 7, right? You say that value is wrong, but why do you think so? – Hew Wolff Sep 10 '12 at 19:31
See step 5: slope of BC = 0.58. This can also be calculated by m = (y2-y1)/(x2-x1) = (10-3)/(16-4) = 0.58. This is where I am using the coordinates of C to check if the slope I found is right. It is right for BC, but not right for AC. The slope of AC would be (15-3)/(10-4) = 2. I should get 2 as the slope, but I'm getting 1.43. ?!?!?!?!?! – Krat Sep 10 '12 at 19:49
It sounds like you constructed $C$ to be on the line $BC$ and now you're finding that it's not on the line $AC$. But that's not surprising; pretty much any point on the line $BC$ will not be on the line $AC$. The only point on $BC$ that will also be on $AC$ will be the actual position of $C$, and we don't know that yet. Does that make sense? – Hew Wolff Sep 10 '12 at 19:56
This is actually a question to find the third point (x,y) of a triangle. The book says "The third coordinate is (4,3)". I am unable to derive at it. There are five problems in the book. I've done the first two. I'm now stuck with this problem. For the first two questions, I found line equations of the two sides the triangles and then solved them to find their point of intersection. (4,3) is not any point on the line. It is the point of intersection C. – Krat Sep 10 '12 at 21:46
http://math.stackexchange.com/questions/116694/the-embedding-of-smooth-manifold
# The embedding of smooth manifold
I have run into a problem in my differential geometry book.
Let $M$ be a smooth manifold and $F={C^\infty }(M,\mathbb R)$. Define a mapping $i:M \to {\mathbb R^F}$ by ${i_f}(x) = f(x)$ for $x \in M,f \in F$, then $i$ is an embedding. ($\mathbb R^F$ has product topology and $i_f$ means the component)
The injectivity and continuity are not hard, but I cannot prove the mapping to be an embedding.
-
What book is it? – magma Mar 6 '12 at 10:44
Characteristic Classes from Milnor – Hezudao Mar 6 '12 at 14:25
## 1 Answer
Let $K\subset M$ be closed. Let $K_f = \overline{f(K)}$ the closure of $f(K)$. Define the set $L \subset \mathbb{R}^F$ by $$L = \cap_{f\in F} p_f^{-1} K_f$$ where $p_f$ is the projection map on the coordinate $f\in F$ from $\mathbb{R}^F\to\mathbb{R}$. Continuity of the projection ensures that $L$ is closed. It suffices to show that $i^{-1}(L) = K$, since obviously $i(K) \subset L$.
Observe that $p_f L \subset K_f$.
Let $y\in M\setminus K$. Since a topological manifold is Tychonoff, there exists a continuous function $h$ such that $h(K) = \{0\}$ and $h(y) = 1$. Going through the construction for bump functions one sees that on a smooth manifold you can take $h$ to be smooth (hence $h\in F$). (Take a coordinate chart around $y$, find an open $U$ disjoint from $K$ whose closure is inside the chart. Make a bump function relative to the compact $\{y\}$ and open $U$, and extend by 0 outside the chart.) Hence $h(y) \not\in K_h$, hence $i(y) \not\in L$.
So $L\cap i(M) = i(K)$ is closed, so $i^{-1}$ is continuous, and so $i$ is a homeomorphism to its image.
-
Tychonoffness gives you continuous functions, and you do need to work a little harder to turn it into a smooth one :) – Mariano Suárez-Alvarez♦ Mar 5 '12 at 16:04
@Mariano: fair enough. I'll edit. But I assume that the OP knows how to construct bump functions. – Willie Wong♦ Mar 5 '12 at 16:13
I know: it is just me being annoying :) – Mariano Suárez-Alvarez♦ Mar 5 '12 at 16:31
|
http://math.stackexchange.com/questions/292515/bootstrap-argument-to-prove-regularity-of-a-special-solution
|
# Bootstrap argument to prove regularity of a special solution
Rabinowitz proves (using the Mountain Pass Theorem) that for a bounded smooth domain $\Omega \subset \mathbb{R}^n$, and $f(x,\xi)\in C(\bar{\Omega}\times \mathbb{R},\mathbb{R})$ satisfying the growth condition $$f(x,\xi) \leq A + B|\xi|^s,\ \ \text{where}\ \ s+1<\frac{2n}{n-2}$$ then there is a weak solution $u \in H_0^1(\Omega)$ to the problem $$-\Delta u = f(x,u), \ x\in \Omega$$ $$u = 0, \ \ x \in \partial \Omega$$ He then makes the remark that if in addition $f$ is locally Lipschitz, the solution is classical. The proof is given by referencing a paper by Agmon, here. Unfortunately, this was written before $\LaTeX$ so it is quite difficult to read, as in it actually hurts my eyes. So, I thought I would try to use some arguments presented in Gilbarg-Trudinger to solve a particular case that came up in Evans' presentation of the mountain pass theorem. Let $f = |u|^{s-1}u$, where $s$ is as above. Here, maybe we need this paper, maybe we don't.
My argument goes like this: Since $u \in H_0^1$, by the Sobolev embedding theorem, we have $u \in L^{2^{*}} = L^{\frac{2n}{n-2}}$, and so $f \in L^{\frac{2^{*}}{s}}$. Using the global $L^p$ estimates (GT thm 9.13), $u \in W^{2,\frac{2^{*}}{s}}$. Now use the Sobolev theorem again to get a better estimate for $u$. Continue this process a finite number of times until we hit the critical value for the Sobolev theorem, in which case we get $u \in C^{0,\alpha_1}$ for some $\alpha_1$ (and thus $f \in C^{0,\alpha_2}$ for some $\alpha_2$). Now use the Dirichlet theory for elliptic operators (GT 6.11) to conclude that the solution is $C^{2,\alpha_2}$, by uniqueness.
Problems :
I know this argument can't be correct for at least two reasons :
1. The theory for the Dirichlet problem says the solution is unique, and it can be proved that there are at least 2 solutions if $f$ is locally Lipschitz continuous (Rabinowitz : Minimax Methods p 11)
2. The $L^p$ theory used in the "proof" above requires that we know a priori that $u$ is a strong solution, i.e. $u\in W^{2,p}$. The normal regularity result that I would use to get this, GT theorem 8.12, requires that $f \in L^2$. Our $f$ seems to always be in $L^r$, $r<2$.
Edit : I have found the result in a book "Qualitative Analysis of Nonlinear Elliptic Partial Differential Equations", but I am still having trouble understanding the proof. At least it has some of the general ideas that I had. For convenience, the argument from the book is given below, noting they use different letters for exponents (they consider the problem $-\Delta u = u^p$)
We know until now that $u\in H_0^1(\Omega)\subset L^{2^{*}}(\Omega)$.In a general framework, assuming that $u \in L^q$, it follows that $u^p \in L^{\frac{q}{p}}$, that is, by Schauder regularity and Sobolev embeddings, $u \in W^{2,\frac{q}{p}} \subset L^s$, where $\frac{1}{s} = \frac{p}{q} − \frac{2}{N}$. So, assuming that $q_1 > \frac{(p−1)N}{2}$, we have $u \in L^{q_2}$, where $\frac{1}{q_2} = \frac{p}{q_1} − \frac{2}{N}$. In particular, $q_2 > q_1$. Let $(q_n)$ be the increasing sequence we may construct in this manner and set $q_{\infty} = \lim_{n\rightarrow \infty}{q_n}$. Assuming, by contradiction, that $q_n < \frac{Np}{2}$, we obtain, passing at the limit as $n \rightarrow \infty$, that $q_{\infty} = \frac{N(p − 1)}{2} < q_1$, contradiction. This shows that there exists $r > \frac{N}{2}$ such that $u \in L^r(\Omega)$ which implies $u \in W^{2,r}(\Omega) \subset L^{\infty}(\Omega)$. Therefore, $u \in W^{2,r}(\Omega) \subset C^k(\Omega)$, where $k$ denotes the integer part of $2 − \frac{N}{r}$. Now, by Holder continuity, $u \in C^2(\Omega)$.
I still don't understand how we get $u \in W^{2,\frac{q}{p}}$. Also, the last statement by Holder continuity, $u \in C^2(\Omega)$. Why is this?
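Just to make the exponent bookkeeping in the quoted argument concrete, here is the recursion run on illustrative numbers (my own sample choice, not from the book, picked so that $p+1<\frac{2N}{N-2}$): take $N=5$ and $p=2.2$, so $q_1 = 2^* = \frac{10}{3} > \frac{(p-1)N}{2} = 3$. Then $\frac{1}{q_2} = \frac{p}{q_1} - \frac{2}{N} = 0.66 - 0.4 = 0.26$, so $q_2 \approx 3.85$; likewise $q_3 \approx 5.81$; and at the next step $\frac{p}{q_3} - \frac{2}{N} < 0$, so the exponents leave the finite range after three steps. At that point $q_3/p \approx 2.64 > \frac{N}{2}$, i.e. $u^p \in L^{r}(\Omega)$ for some $r > \frac{N}{2}$, which is exactly where the $W^{2,r} \subset L^{\infty}$ step takes over.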
-
Just a comment on the title (since unfortunately I cannot help with the PDEs): You probably want to add something about PDEs and the problem at hand to the title. I think you'd be more likely to get hits that way. – mck Feb 2 at 2:40
Comment has been noted. Thanks. – Euler....IS_ALIVE Feb 2 at 6:59
## 1 Answer
The theory for the Dirichlet problem says the solution is unique, and it can be proved that there are at least 2 solutions if $f$ is locally Lipschitz continuous.
No contradiction here. The theory to which you refer (GT Chapter 6) is for the Poisson equation $\Delta u=g$ with given $g$. The equation $\Delta u=f(u)$ may well have multiple solutions, since the right-hand side is different for different $u$.
The normal regularity result that I would use to get this, GT-theorem 8.12, requires that $f\in L^2$.
Indeed, GT do not prove a version of Theorem 8.12 for $1<p<\infty$. But it is true: if $u\in H_0^1(\Omega)$ and $\Delta u=g\in L^p(\Omega)$, then $u\in W^{2,p}(\Omega)$. The proof goes like this:
1. extend $u$ and $g$ by zero to $\mathbb R^n$.
2. Let $v=\mathcal{N}g$, the Newtonian potential of $g$. GT consider it in Chapter 4, but give only Hölder estimates. Sobolev estimates are based on the theory of singular integrals: $D_{ij}v$ is obtained from $g$ via Riesz transforms which are bounded on $L^p$ for $1<p<\infty$. See Theorem 3.2 here, which gives $v\in W^{2,p}(\mathbb R^n)$.
3. Both $u$ and $v$ vanish at infinity, and $u-v$ is weakly harmonic, hence harmonic, hence identically zero.
Also, the last statement by Holder continuity, $u\in C^2(\Omega)$. Why is this?
That book is strangely written. Taking "the integer part of $2-N/r$" is not helpful because this part may well be equal to zero. The point is that the Morrey-Sobolev embedding gives $u\in C^{\alpha}(\overline \Omega)$ for some $\alpha>0$, which implies $g:=u^p$ is in $C^\alpha$. By Corollary 4.14 of GT, $\Delta u=g$ has a solution in $C^{2,\alpha}(\overline \Omega)$. Since the classical solution is also a weak solution, and the weak solution is unique, we conclude that $u \in C^{2,\alpha}(\overline \Omega)$.
-
Thank you for your response. Indeed, result 2 is proved in GT. Can you expand a bit more please? I'm very surprised that this theorem isn't in GT, since you need it for all the estimates. How can you conclude that $u-v$ is weakly harmonic? We only know that $u$ is in $W^{1,p}$. So, $\int{(u-v)\Delta \phi}$ = $\int{D(u-v)\cdot D\phi}$. To get this integral is $0$, I presume we need to be able to integrate by parts again. Also, how do you know the weak solution is unique? My first thought was Lax-Milgram, but this requires $L^2$ again. – Euler....IS_ALIVE Feb 4 at 6:53
– user53153 Feb 4 at 15:40
@Euler... There is at most one $u\in H^1_0(\Omega)$ that satisfies $\Delta u=g$ in the weak sense; this follows from Corollary 8.2 in GT. – user53153 Feb 4 at 15:54
This is a good enough explanation for the bounty... although I wonder; Should we always think of the Laplacian as the distributional Laplacian? – Euler....IS_ALIVE Feb 5 at 4:47
@Euler... You don't have to, although I can't think of an instance when it can hurt (as long as Laplacian is a linear operator). Distributions are a convenient common denominator for function spaces. In this situation there is $u\in H_0^1$ which satisfies $\Delta u=g$ in the weak sense, and $v\in W^{2,p}$ which satisfies $\Delta v=g$ in the strong sense. To save myself the headache of thinking of function spaces, I say: whatever, they both have Laplacian $g$ in the sense $\int (u\text{ or }v)\Delta \phi=\int g \phi$ for Schwartz functions $\phi$. This is all Weyl's lemma needs, anyway. – user53153 Feb 5 at 4:57
show 1 more comment
|
http://mathhelpforum.com/pre-calculus/52243-perpenicular-bisector.html
|
# Thread:
1. ## perpenicular bisector...?
find the equation of the perpendicular bisector of (2,-5) and (6,9), expressing your answer in the form: ax + by + c = 0
thanks for any help, I can't quite seem to get this one...
and it's in for tomorrow!! gah!
2. Originally Posted by pop_91
find the equation of the perpendicular bisector of (2,-5) and (6,9), expressing your answer in the form: ax + by + c = 0
thanks for any help, I can't quite seem to get this one...
and it's in for tomorrow!! gah!
Find the midpoint of the line segment through (2, -5) and (6, 9):
$Midpoint=\left(\frac{2+6}{2} \ \ , \ \ \frac{9-5}{2}\right)=(4, 2)$
Find the slope of the line through (2, -5) and (6, 9).
slope = $\frac{9-(-5)}{6-2}=\frac{14}{4}=\frac{7}{2}$
The slope of the perpendicular bisector is the negative reciprocal of that slope, so it would be $-\frac{2}{7}$ and pass through the midpoint (4, 2).
Using y=mx+b and m = $-\frac{2}{7}$ through (4, 2).
$2=-\frac{2}{7}(4)+b$
$\frac{22}{7}=b$
$y=-\frac{2}{7}x+\frac{22}{7}$
$7y=-2x+22$
$2x+7y-22=0$
3. thank you soo much!!!!
4. Hello, pop_91!
All the necessary information is there ... just dig it up.
Find the equation of the perpendicular bisector of $A(2,-5)$ and $B(6,9),$
expressing your answer in the form: $ax + by + c \:=\: 0$
The perpendicular bisector of $AB$ goes through the midpoint of $AB$
. . and is perpendicular to $AB.$
The midpoint of $AB$ is: . $\left(\frac{2+6}{2},\;\frac{\text{-}5+9}{2}\right) \;=\;(4,\:2)$
The slope of $AB$ is: . $m_1 \;=\;\frac{9-(\text{-}5)}{6-2} \:=\:\frac{14}{4} \:=\:\frac{7}{2}$
The perpendicular slope is: . $m_2 \:=\:\text{-}\frac{2}{7}$
The line through $(4,\:2)$ with slope $\text{-}\frac{2}{7}$ is:
. . $y - 2 \;=\;\text{-}\frac{2}{7}(x - 4) \quad\Rightarrow\quad y - 2 \;=\;\text{-}\frac{2}{7}x + \frac{8}{7}$
Multiply by 7: . $7y - 14 \;=\;\text{-}2x + 8$
. . Therefore: . ${\color{blue}2x + 7y - 22 \;=\;0}$
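A quick check of the final answer (added for verification, not part of the original solutions): the midpoint $(4,\:2)$ satisfies $2(4)+7(2)-22 \;=\; 8+14-22 \;=\; 0$, and the product of the two slopes is $\frac{7}{2}\cdot\left(-\frac{2}{7}\right) \;=\; -1$, confirming that the line $2x+7y-22=0$ passes through the midpoint of $AB$ and is perpendicular to $AB.$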
|
http://mathhelpforum.com/advanced-statistics/90907-mean-lognormal-base-2-a.html
|
# Thread:
1. ## Mean of lognormal with base 2
Hi,
I am trying to figure out what the mean of a lognormal distribution that is in base-2 rather than in base-e. Could someone help with the derivation or at least with the formula itself?
The reason I ask is that I need to generate values with a known mean for a multiplicative process. Therefore, I can't use a simple transform to figure out the mean.
I can generate values for natural logarithms using the mean value of
$\mu - \sigma^2/2$
but this doesn't work for base-2 logs.
Thank you very much.
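For reference, the base-2 analogue can be read off by rewriting $2^X$ as $e^{X\ln 2}$ (here I am assuming $X \sim N(\mu,\sigma^2)$ denotes the base-2 logarithm of the generated values):
$E\left[2^X\right] = E\left[e^{X\ln 2}\right] = e^{\mu\ln 2 + \frac{1}{2}\sigma^2\ln^2 2} = 2^{\mu + \frac{1}{2}\sigma^2\ln 2}$
so to obtain a prescribed mean $2^m$ for the multiplicative process one takes the base-2 log-mean to be $m - \frac{1}{2}\sigma^2\ln 2$, which plays the role that $\mu - \sigma^2/2$ plays in the natural-log case; in particular $\mu = -\frac{1}{2}\sigma^2\ln 2$ gives mean one.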
|
http://unapologetic.wordpress.com/2007/05/29/special-kinds-of-morphisms-subobjects-and-quotient-objects/?like=1&source=post_flair&_wpnonce=c1ce40f6e1
|
# The Unapologetic Mathematician
## Special kinds of morphisms, subobjects, and quotient objects
Last week I was using the word “invertible” as if it was perfectly clear. Well, it should be, more or less, since categories are pretty similar to monoids, but I should be a bit more explicit. First, though, there’s a few other kinds of morphisms we should know.
We want to motivate these definitions from what we know of sets, but the catch is that sets are actually pretty special. Some properties turn out to be the same when applied to sets, though they can be different in other categories.
First of all, let’s look at injective functions. Remember that these are functions $f:X\rightarrow Y$ where $f(x_1)=f(x_2)$ implies $x_1=x_2$. That is, distinct inputs produce distinct outputs. Now we can build a function $g:Y\rightarrow X$ as follows: if $y=f(x)$ for some $x\in X$ we define $g(y)=x$. This is well-defined because at most one $x$ can work, by injectivity. Then for all the other elements of $Y$ we just assign them to random elements of $X$. Now the composition $g\circ f$ is the identity function on $X$ because $g(f(x))=x$ for all $x\in X$. We say that the function $f$ has a (non-unique) “left inverse”.
Now since $f$ has a left inverse $g$ there’s something else that happens: if we have two functions $h_1$ and $h_2$ both from $Y$ to $X$, and if $f\circ h_1=f\circ h_2$ then $h_1=g\circ f\circ h_1=g\circ f\circ h_2=h_2$. That is, $f$ is “left cancellable”.
Now in any category $\mathcal{C}$ we say a morphism $f$ is a “monomorphism” (or “a mono”, or “$f$ is monic”) if it is left cancellable, whether or not the cancellation comes from a left inverse as above. If $f$ has a left inverse we say $f$ is “injective” or that it is “an injection”. By the same argument as above, every injection is monic, but in general not all monos are injective. In $\mathbf{Set}$ the two concepts are the same.
Similarly, a surjective function $f$ has a right inverse $g$, and is thus right cancellable. We say in general that a right cancellable morphism is an “epimorphism” (or “an epi”, or “$f$ is epic”). If the right cancellation comes from a right inverse, we say that $f$ is “surjective”, or that it is “a surjection”. Again, every surjection is epic, but not all epis are surjective. In $\mathbf{Set}$ the two concepts are again the same.
If a morphism is both monic and epic then we call it a “bimorphism”, and it can be cancelled from either side. If it is both injective and surjective we call it an “isomorphism”. All isomorphisms are bimorphisms, but not all bimorphisms are isomorphisms. If $f$ is an isomorphism, then we can show (try it) that the left and right inverses are not only unique, but are the same, and we call the (left and right) inverse $f^{-1}$. When I said “invertible” last week I meant that such an inverse exists.
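(For the “try it”: if $g\circ f=1_X$ and $f\circ h=1_Y$, then $g = g\circ 1_Y = g\circ(f\circ h) = (g\circ f)\circ h = 1_X\circ h = h$, so any left inverse equals any right inverse, and in particular each is unique.)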
We’ve already seen these terms in other categories. In groups and rings we have monomorphisms and epimorphisms, which are monos and epis in the categories $\mathbf{Grp}$ and $\mathbf{Ring}$.
Now recall that any subset $T$ of a set $S$ comes with an injective function $T\rightarrow S$ “including” $T$ into $S$. Similarly, subgroups and subrings come with “inclusion” monomorphisms. We generalize this concept and define a “subobject” of an object $C$ in a category $\mathcal{C}$ to be a monomorphism $S\rightarrow C$. In the same way we generalize quotient groups and quotient rings by defining a “quotient object” of $C$ to be an epimorphism $C\rightarrow Q$.
Notice that we define a subobject to be an arrow, and we allow any monomorphism. Consider the function $f:\{a,b\}\rightarrow\{1,2,3\}$ defined by $f(a)=1$ and $f(b)=3$. It seems odd at first, but we say that this is a subobject of $\{1,2,3\}$. The important thing here is that we don’t define these concepts in terms of elements of sets, but in terms of arrows and their relations to each other. We “can’t tell the difference” between $\{a,b\}$ and $\{1,3\}$ since they are isomorphic as sets. If we just look at the arrow $f$ and the usual inclusion arrow of $\{1,3\}\subseteq\{1,2,3\}$, they pick out the same subset of $\{1,2,3\}$ so we may as well consider them to be the same subset.
Let’s be a little more general here. Let $f_1:S_1\rightarrow C$ and $f_2:S_2\rightarrow C$ be two subobjects of $C$. We say that $f_1$ “factors through” $f_2$ if there is an arrow $g:S_1\rightarrow S_2$ so that $f_1=f_2\circ g$. If we take the class of all subobjects of $C$ (all monomorphisms into $C$) we can give it the structure of a preorder by saying $f_1\leq f_2$ if $f_1$ factors through $f_2$. It should be straightforward to verify that this is a preorder.
Now we can turn this preorder into a partial order as usual by identifying any two subobjects which factor through each other. If $f_1=f_2\circ g_2$ and $f_2=f_1\circ g_1$ then $f_1=f_1\circ g_1\circ g_2$. Since $f_1$ is monic we can cancel it from the left and find that $1_{S_1}=g_1\circ g_2$. similarly we find that $1_{S_2}=g_2\circ g_1$. That is, $g_1$ and $g_2$ are inverses of each other, and so $S_1$ and $S_2$ are isomorphic as subobjects of $C$. Conversely, if $S_1$ and $S_2$ are isomorphic subobjects then $f_1$ and $f_2$ factor through each other by an isomorphism $g:S_1\rightarrow S_2$. This gives us a partial order on (equivalence classes of) subobjects of $C$. If the class of equivalence classes of subobjects is in fact a proper set for every object $C$ we say that our category is “well-powered”.
The preceding two paragraphs can be restated in terms of quotient objects. Just switch the directions of all the arrows and the orders of all the compositions. We get a partial order on (equivalence classes of) quotient objects of $C$. If the class of equivalence classes is a proper set for each object $C$ then we say that the category is “co-well-powered”.
It should be noted that even though isomorphic subobjects come with an isomorphism between their objects, just having an isomorphism between the objects is not enough. One toy example is given in the comments below. Another is to consider two distinct one-element subsets of a given set. Clearly the object for each is a singleton, and all singletons are isomorphic, but the two subsets are not isomorphic as subobjects.
As an exercise, consider the category $\mathbf{CRing}$ of commutative rings with unit and determine the partial order on the set of quotient objects of $\mathbb{Z}$.
Posted by John Armstrong | Category theory
## 20 Comments »
1. [...] a “dual” definition, which we get by reversing all the arrows like this. For example, monos and epis are dual notions, as are subobjects and quotient objects. Just write down one definition in terms [...]
Pingback by | May 31, 2007 | Reply
2. [...] and Faithful Functors We could try to adapt the definitions of epics and monics to functors, but it turns out they’re not really the most useful. Really what we’re [...]
Pingback by | June 5, 2007 | Reply
3. [...] off, we know that subsets are subobjects, which are monomorphisms. More to the point, we can look at this subset and take its inclusion [...]
Pingback by | June 12, 2007 | Reply
4. Sorry if I bother you with this old post, but I didn’t catch one thing!
You wrote: “If $S_1$ and $S_2$ are isomorphic, then $f_1$ and $f_2$ will factor through each other”.
Could you prove it? In particular, I’m sure it works if $f_1,f_2$ are injective, but if not…?
Comment by | November 25, 2007 | Reply
5. That’s all you need. Notice that I defined a subobject to be a monic arrow. If you’re looking at a category like $\mathbf{Set}$ or $\mathbf{Grp}$, then monics are injective functions.
Comment by | November 25, 2007 | Reply
6. So there is nothing we can say if f_1,f_2 are monic but not injective? Even if they have the same domain, it seems that they might not factor through each other. (for example: take a category with two objects, two identities and two arrows from the first object to the second, these arrows are monic but they don’t factor)
Comment by | November 25, 2007 | Reply
7. You know, I think I was less than clear in the difference between isomorphic objects and isomorphic subobjects.. I suppose I should clear that section up.
Comment by | November 25, 2007 | Reply
8. [...] and Quotient Representations Today we consider subobjects and quotient objects in the category of representations of an algebra . Since the objects are representations we call [...]
Pingback by | December 5, 2008 | Reply
9. Can you show that there is an equivalence of categories between PreSet (the category of preordered sets) and PoSet (the category of partialy ordered sets) ?
Comment by Lucia | February 1, 2009 | Reply
10. You say “g\circ f is the identity function on X because g(f(x))=x for all x\in X.” but it seems to me this is only true if f and g are surjective (i.e. f and g are bijections). Alternatively, we construct an f’ whose codomain is the image of f, and whose domain is the preimage of g.
(also minor correction: “h_1 and h_2 both from Y to *X*” [not Y to Z]).
Comment by Alaistair | July 23, 2009 | Reply
11. sorry I meant construct an f’ whose codomain is the image of f, and a g’ whose codomain is the preimage of f.
Comment by Alaistair | July 23, 2009 | Reply
12. Alaistair, you seem very confused. The function $g$ is not given in advance. It’s what I just said I’m constructing. That is, it is the $f'$ you talk about constructing.
Comment by | July 23, 2009 | Reply
13. Thanks for the quick response.
Yes, sorry. What I said for $f'$ is essentially how you defined $g$.
What I was thinking is that: $g \circ f$ is not the identity on $X$, it is just the identity on the subset of X that is the preimage of f. But I just realised: you are assuming $f$ is total here, thus the preimage of f is $X$. Apologies- I have a proclivity for thinking of functions as partial (too much functional programming!)
Comment by Alaistair | July 23, 2009 | Reply
14. Sorry, but I do have some nits to pick:
In Set it’s not quite true that monics coincide with functions that have a left inverse; consider what happens when the domain is empty.
You have to be on your guard when you use the term “quotient” for a class of epis. In the category of commutative rings, it happens that the inclusion of the integers in the rationals, for instance, is an epi, but normally one does not consider that the rationals form a quotient ring of the integers (people usually reserve “quotient ring” for a surjective homomorphism $R \to R/I$). I’m not saying it’s wrong to define a quotient object in a category as a class of epis (people do that), but watch out about speaking of “quotient objects” in the category of rings in the same breath as “quotient rings” — there is a bad clash of terminology there.
I’ve never seen the terms “injective” and “surjective” used to describe morphisms in a general category which have left, resp., right inverses. This would definitely clash with standard usage; consider what people mean when they say “surjective continuous map”. Not all surjective continuous maps have sections. (By the way, some standard terms to introduce are “section” for a right inverse, and “retraction” for a left inverse.) Similarly for surjective homomorphisms in the category of groups.
Finally, not a nit but an observation: it’s probably a good idea to point out here that the statement that all epis in Set have a right inverse (a section) is equivalent to the axiom of choice.
Comment by | July 23, 2009 | Reply
15. Todd, a lot of the problem here is that there are at least three different naming conventions going on here. If I recall, when I wrote this I was taking Herrlich and Strecker as my source. And I specifically avoided “sect” and “retract” because those really hinge on certain topological references I’m not ready to introduce yet.
Comment by | July 23, 2009 | Reply
16. “Section” and “retraction” do not hinge on topological references. They are perfectly standard categorical terms. For example, “section” is used all the time when referring to splitting an exact sequence of groups (among other things).
I’ll check Herrlich and Strecker — that’s interesting if true. But I think you will agree that what people mean by surjective homomorphism or surjective continuous map doesn’t match the usage given here, and that it would be a good idea to point that out.
Please don’t be on the defensive — normally when I write, I do so in a spirit of trying to be of help!
Comment by | July 23, 2009 | Reply
17. Yes, the terms are used on their own, but you surely can’t deny that they’re meant to suggest the topological meanings. A section would never have been called a section without the idea of a section of a bundle, and a retraction would never have been called a retraction without the idea of a topological retraction. Why introduce a term when I can’t give some sort of motivation behind the name?
Comment by | July 23, 2009 | Reply
18. I’ll agree that section and retraction were (very probably, but I’m not an expert on the history) originally used in a topological context, but since then the meanings have expanded, and the way they are used today does not presuppose or hinge on topology. Anyway, I offered those terms as some useful current parlance, but if you don’t want to use them, that’s fine. They’re there for others to use if they want.
Comment by | July 23, 2009 | Reply
19. [...] principle, we know what a submanifold should be: a subobject in the category of smooth manifolds. That is, a submanifold of a manifold should be another [...]
Pingback by | March 7, 2011 | Reply
20. [...] can obviously define subalgebras and quotient algebras. Subalgebras are a bit more obvious than quotient algebras, though, being just subspaces that are [...]
Pingback by | August 6, 2012 | Reply
|
http://dwave.wordpress.com/2008/10/23/the-128-qubit-rainier-processor-ii-graph-embedding/
|
The 128-qubit Rainier chip: II. graph embedding
Posted on October 23, 2008 by
In the previous post I introduced the concept of an allowed edge set in hardware, and presented the edge set we chose for our current processor designs. I mentioned that a reduction in edges reduces the capability of the processor. In this post I will explain why this is and how it affects the performance of the system.
To begin, note that in the design we're building, it is useful to refer to the physical qubits as vertices $V$ and physical couplers as edges $E$ in a graph $G$. Call $G=(V,E)$ the hardware graph. The hardware graphs for 8, 32 and 128-qubit systems were explicitly shown in the previous post, although in principle many other types of hardware graph (say for example the one we used for the 28-qubit Leda or 16-qubit Cerberus chips) are feasible. Internally here we call graphs formed by tiling the plane with $K_{4,4}$ blocks Chimera graphs.
Assume a hardware graph $G$ has been defined. For the following example, we’ll use the 32-qubit version for simplicity. Here it is for reference:
We now want to use this hardware graph for solving problems. Assume that we wish to use the chip to run a quantum adiabatic algorithm to solve for the global minimum of
${\cal E}(s_1, ..., s_{32})=\sum_{j=1}^{32} h_j s_j + \sum_{i<j=1}^{32} J_{ij} s_i s_j$
ie, a fully connected (all $J_{ij}$ non-zero) graph on 32 variables. Can we do this? The answer is no. To see why, assign to this problem a graph $G_p=(V_p, E_p)$, with vertices $V_p$ identified with variables and edges $E_p$ drawn whenever a non-zero $J_{ij}$ is required. For this example, this graph is the fully connected (all $J_{ij}$ non-zero) graph on 32 vertices. Call $G_p$ the problem graph.
A problem graph can be solved natively in the hardware if $G_p$ can be embedded in the hardware graph $G$. A graph embedding is in general a one to many map between vertices and edges in $G_p$ to vertices and edges in $G$. In the example above, $G_p$ cannot be embedded in $G$, as they have the same number of vertices, while $G_p$ is fully connected and $G$ is not.
Let’s try another problem graph. Let’s say we want to solve for the global minimum of
${\cal E}(s_1, ..., s_N)=\sum_{j=1}^N h_j s_j + \sum_{(i,j) \in E_p} J_{ij} s_i s_j$
where $N$ and the problem edge set $E_p$ are arbitrary. What can we say about whether or not we can embed the problem graph into a hardware graph? Here are some things that we know:
1. You can always embed a fully connected graph on $N$ variables into a Chimera graph with $N^2$ variables.
2. You can never embed a problem graph that has either more vertices or more edges than a hardware graph.
3. Any instance that is in the middle needs to be determined by solving what is in general a hard problem in its own right, that is does $G_p$ embed in $G$. In practice if you wanted to operate in this regime, you’d probably have a fast heuristic embedder to see if a problem natively embeds.
Based on these observations, there are three obvious ways these chips can be operated.
1. Use the chip as a solver for complete graphs, where the algorithm calling the chip has a hard-coded embedding scheme for any problem edge set. This option has the advantage that it is the most flexible for inclusion in the master algorithm (no restrictions on problem edge set), and there is no runtime cost for embedding. Its key disadvantage is that the number of variables of problems you can solve in this mode is upper bounded by roughly the square root of the number of qubits in hardware, which is a significant cost with the current chips & their low qubit count.
2. Use the chip as a solver for problem graphs that exactly match the hardware graph by making this a hard constraint in the master algorithm. In other words, as an algorithm designer you are constrained to only pose problems to the hardware that have problem graphs that exactly match the hardware graph. This has the advantage of having trivial hard-coded embedding and maximally using the resources of the hardware. Its key disadvantage is a severe loss of flexibility in algorithms design possibilities.
3. Use the chip for arbitrary problem sizes by running an embedding heuristic each time a possibly embeddable graph is generated by the master algorithm. This has the advantage of a significant gain in flexibility for algorithm designers. The main disadvantage is the increased runtime burden of having to compute embeddings on the fly.
My own strong preference is for #2 above. The loss of flexibility in algorithms design is a big problem, but we’ve been able to find ways to build algorithms respecting fixed hardware graphs already so I think that the advantages of #2 carry the day, at least in the short term.
So how does embedding work in practice? I will defer delving too deeply into the technical details here, leaving this to this publication. Here is a picture from that paper. The problem graph $G_p$ is on the left, and a simple hardware graph $G$ (not the one we use) is on the right. This is an example of a successful embedding.
One very interesting subject that Vicky’s paper discusses is that the embedding details actually affect the runtime of the quantum adiabatic algorithm for solving the problem, not just whether the problem can be natively solved. In other words, two different embeddings which are both completely valid can have vastly differing adiabatic runtimes. The reason for this is that the runtime is a function of the minimum gap between ground and first excited states, and the matrix element between them. Both of these are functions of the actual embedding and embedding parameters used. In fact, it is likely that different embedding strategies could change the asymptotic scaling of quantum adiabatic algorithms. As far as I know, no-one is currently investigating this aspect of the scaling of these algorithms–this is a feature of operation in a regime where the AQC does not natively support full inter-qubit connection (ie. any real machine).
To finish this introduction, note that the 128-qubit Rainier chip can embed two complete graphs of size $K_{16}$ and $K_{12}$ as shown here:
Here the blue and red circles are couplers, the black lines are qubits (a 4×4 array of 8-qubit unit cells), the yellow circles are individual qubit biases (the $h_j$). The dashed lines show regions where fully connected graphs can be embedded. If an algorithm designer wanted to use this chip in mode #1 above (a complete graph solver), you’d architect the system to feed this chip 16 and/or 12 variable problems. This illustrates the significant hit using the chips as complete solvers gives. You go from having 128 variables to 16. This is the primary reason why I think using this scale of system as a fully connected graph solver is not the best idea.
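To make the second of the three embeddability observations above concrete, here is a small sketch of the trivial pre-check a master algorithm might run before bothering a heuristic embedder (the struct and the hardware edge count below are illustrative placeholders, not D-Wave's actual API or specs):
``` #include <stdbool.h>
#include <stdio.h>

/* Hypothetical summary of a graph: just its vertex and edge counts. */
struct graph_size {
    int vertices;
    int edges;
};

/* Necessary (not sufficient) condition for G_p to embed in G:
   a problem graph with more vertices or more edges than the
   hardware graph can never be embedded. */
static bool passes_trivial_test(struct graph_size problem, struct graph_size hardware)
{
    return problem.vertices <= hardware.vertices
        && problem.edges <= hardware.edges;
}

int main(void)
{
    /* A 4x4 tiling of K_{4,4} unit cells: 128 vertices; the edge
       count shown is illustrative (ideal Chimera graph). */
    struct graph_size hardware = { 128, 352 };
    /* Fully connected problem graph on 32 variables: 32*31/2 edges. */
    struct graph_size k32 = { 32, 32 * 31 / 2 };

    printf("K_32 passes the trivial test? %s\n",
           passes_trivial_test(k32, hardware) ? "yes" : "no");
    return 0;
}
```
Of course passing this test says nothing by itself; deciding whether $G_p$ actually embeds is the hard problem mentioned in the third observation.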
This entry was posted in D-Wave Science & Technology, Superconducting Electronics, Superconducting Processors by Geordie. Bookmark the permalink.
About Geordie
I'm the chief technology officer of D-Wave and 2010 NAGA Brazilian jiu-jitsu light heavyweight world champion.
3 thoughts on “The 128-qubit Rainier chip: II. graph embedding”
1. JP on October 24, 2008 at 2:07 am said:
i see why now 2K qubits is where operating at number 1 start to take off. you would have K 256 and K 192.
so 2 at first, then when qubits increase you can use 2 and 1.
when does 3 become interesting / worth it?
2. Vasil on October 26, 2008 at 2:57 am said:
JP, I think 3 becomes necessary when you can’t do 1 and 2: you may not have enough qubits to do 1, and the particular problem that you’re trying to solve may be getting hurt too much when directly using the hardware graph, so then 2 doesn’t work either.
3. Pingback: Sparse coding on D-Wave hardware: things that don’t work | Hack The Multiverse
|
http://mathoverflow.net/questions/27471?sort=newest
|
## on the computation of decomposition groups
Let $L/K$ be a finite Galois extension of function fields, with Galois groups $G$. I want to look at the ramification of primes in the extension, i.e. to get $e_p$ and $f_p$ for a prime $p$ in the base field $K$ (since the extension is Galois, the ramification index and inertia degree are independent of the choice of the prime lying above $p$). From Serre's 'Local Fields', it is clear that if we fix a prime $q$ in $L$ which lies over $p$, then we can look at the decomposition group associated to $q$, say $G_q$, and its inertia group, say $(G_q)_0$ (please forgive me for the notation :P), and an immediate result is that $e_p = \left|{(G_q)_0} \right|$ and $f_p = \left| {G_q/(G_q)_0} \right|$.
And here is my problem. Is there any nice way to compute the decomposition groups, inertia groups or just the cardinalities? If not, can we do something in some special cases? For example, when $G$ is cyclic?
-
## 2 Answers
Henri Cohen's A Course in Computational Algebraic Number Theory contains quite a bit of information. Chapters 4.8, 6.2 and 6.3 combined result in algorithms that compute decomposition groups. Note that if you want to relate different primes you will have to first compute the galois group (6.3) and fix a presentation.
-
I checked that, but it seems that Henri Cohen assumed in his book that all those were built on number fields. What I really want to do is to compute all these in function fields, and I don't think the algorithms in number fields can be just used on function fields :( – Yujia Qiu Jun 11 2010 at 9:07
Dedekind's theorem, which is the basis for most of Cohen's algorithms in these chapters, is still valid. The same is true for the resolvents in 6.3. For example, computing the maximal order becomes much easier since extracting the square free part of the discriminant is polynomial (gcd with derivative). In fact, most of the algorithms, after a suitable change to function fields, usually become easier. On a bigger scale than the original question, the ray class groups also become easier to calculate (Kedlaya/Satoh's algorithms), so you can actually compute using Artin symbols. – Dror Speiser Jun 11 2010 at 10:38
Thanks a lot, Dror – Yujia Qiu Jun 11 2010 at 13:40
A particularly well-studied case is that of cyclic extensions $L/K$, where $K=k(x)$ is a rational function field, the degree $n:=[L:K]$ is not divisible by the characteristic of $k$ and $k$ contains the $n$-th roots of unity. In this case there exists a kind of normal form $L=K(y)$ for generating $L$: $y^n=f(x)$, where the prime factorization of $f\in k[x]$ satisfies some requirements. One can then compute the ramification indices and inertia degrees using only the multiplicities and degrees of the prime factors of $f$. You can find the result in an article by Helmut Hasse: "Theorie der relativ-zyklischen algebraischen Funktionenkörper, insbesondere bei endlichem Konstantenkörper.", Journal für die reine und angewandte Mathematik 172 (1935).
Similar things can be done for Artin-Schreier-extensions of a rational function field.
Hagen.
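To give one concrete instance of the kind of statement that comes out of this normal form (this is the standard tame Kummer computation, stated from memory rather than quoted from Hasse's article): if $L=K(y)$ with $y^n=f(x)$, the characteristic of $k$ does not divide $n$ and $k$ contains the $n$-th roots of unity, then for a place $p$ of $K=k(x)$ one has $$e_p=\frac{n}{\gcd(n,\,v_p(f))},$$ where $v_p(f)$ is the multiplicity of the corresponding prime in $f$ (and $v_\infty(f)=-\deg f$ at the place at infinity). In particular $p$ ramifies exactly when $n \nmid v_p(f)$, which is why only the multiplicities and degrees of the prime factors of $f$ enter.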
-
Thanks Hagen. The extension you mentioned is basically Kummer extension, right? I found that the article is in German, is there any English translation of this? I am afraid that my limited German is not enough to read an article :P – Yujia Qiu Jun 8 2010 at 15:20
Yes the starting point is Kummer theory, but then one has to discuss specific transformations of generators of $L/k(x)$ to bring the polynomial $f(x)$ into a certain form. As for an english translation of Hasse's article: I do not know of any such translation - sorry. Hagen – Hagen Jun 9 2010 at 8:15
|
http://en.m.wikibooks.org/wiki/C_Programming/Control
|
C Programming/Control
Very few programs follow exactly one control path and have each instruction stated explicitly. In order to program effectively, it is necessary to understand how one can alter the steps taken by a program due to user input or other conditions, how some steps can be executed many times with few lines of code, and how programs can appear to demonstrate a rudimentary grasp of logic. C constructs known as conditionals and loops grant this power.
From this point forward, it is necessary to understand what is usually meant by the word block. A block is a group of code statements that are associated and intended to be executed as a unit. In C, the beginning of a block of code is denoted with { (left curly), and the end of a block is denoted with }. It is not necessary to place a semicolon after the end of a block. Blocks can be empty, as in {}. Blocks can also be nested; i.e. there can be blocks of code within larger blocks.
Conditionals
There is likely no meaningful program written in which a computer does not demonstrate basic decision-making skills. It can actually be argued that there is no meaningful human activity in which some sort of decision-making, instinctual or otherwise, does not take place. For example, when driving a car and approaching a traffic light, one does not think, "I will continue driving through the intersection." Rather, one thinks, "I will stop if the light is red, go if the light is green, and if yellow go only if I am traveling at a certain speed a certain distance from the intersection." These kinds of processes can be simulated in C using conditionals.
A conditional is a statement that instructs the computer to execute a certain block of code or alter certain data only if a specific condition has been met. The most common conditional is the If-Else statement, with conditional expressions and Switch-Case statements typically used as more shorthanded methods.
Before one can understand conditional statements, it is first necessary to understand how C expresses logical relations. C treats logic as being arithmetic. The value 0 (zero) represents false, and all other values represent true. If you choose some particular value to represent true and then compare values against it, sooner or later your code will fail when your assumed value (often 1) turns out to be incorrect. Code written by people uncomfortable with the C language can often be identified by the usage of #define to make a "TRUE" value. [1]
Because logic is arithmetic in C, arithmetic operators and logical operators are one and the same. Nevertheless, there are a number of operators that are typically associated with logic:
Relational and Equivalence Expressions:
a < b
1 if a is less than b, 0 otherwise.
a > b
1 if a is greater than b, 0 otherwise.
a <= b
1 if a is less than or equal to b, 0 otherwise.
a >= b
1 if a is greater than or equal to b, 0 otherwise.
a == b
1 if a is equal to b, 0 otherwise.
a != b
1 if a is not equal to b, 0 otherwise
New programmers should take special note of the fact that the "equal to" operator is ==, not =. This is the cause of numerous coding mistakes and is often a difficult-to-find bug, as the expression `(a = b)` sets `a` equal to `b` and subsequently evaluates to `b`; but the expression `(a == b)`, which is usually intended, checks if `a` is equal to `b`. It needs to be pointed out that, if you confuse = with ==, your mistake will often not be brought to your attention by the compiler. A statement such as `if ( c = 20) {}` is considered perfectly valid by the language, but will always assign 20 to `c` and evaluate as true. A simple technique to avoid this kind of bug (in many, not all cases) is to put the constant first. This will cause the compiler to issue an error, if == got misspelled with =.
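To illustrate the constant-first technique (a snippet added here for illustration, not part of the original text):
``` int c = 5;
if (20 == c) { /* a comparison, as intended */ }
/* if (20 = c) would be rejected by the compiler, */
/* whereas if (c = 20) compiles (often without complaint) and always behaves as true. */
```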
Note that C does not have a dedicated boolean type as many other languages do. 0 means false and anything else true. So the following are equivalent:
``` if (foo()) {
//do something
}
```
and
``` if (foo() != 0) {
//do something
}
```
Often `#define TRUE 1` and `#define FALSE 0` are used to work around the lack of a boolean type. This is bad practice, since it makes assumptions that do not hold. It is a better idea to indicate what you are actually expecting as a result from a function call, as there are many different ways of indicating error conditions, depending on the situation.
``` if (strstr(bar, "foo") != NULL) {
//bar contains "foo"
}
```
Here, `strstr` returns a pointer to the first occurrence of "foo" within bar, and NULL if it is not found. Note that comparing the result against the `TRUE` definition mentioned in the previous paragraph would not work; the "success" value here is any non-NULL pointer, which is exactly what the explicit `!= NULL` comparison expresses.
One other thing to note is that the relational expressions do not evaluate as they would in mathematical texts. That is, an expression `myMin < value < myMax` does not evaluate as you probably think it might. Mathematically, this would test whether or not value is between myMin and myMax. But in C, what happens is that value is first compared with myMin. This produces either a 0 or a 1. It is this value that is compared against myMax. Example:
``` int value = 20;
/* ... */
if ( 0 < value < 10) { // don't do this! it always evaluates to "true"!
/* do some stuff */
}
```
Because value is greater than 0, the first comparison produces a value of 1. Now 1 is compared to be less than 10, which is true, so the statements in the if are executed. This probably is not what the programmer expected. The appropriate code would be
``` int value = 20;
/* ... */
if ( 0 < value && value < 10) { // the && means "and"
/* do some stuff */
}
```
Logical Expressions
a || b
when EITHER a or b is true (or both), the result is 1, otherwise the result is 0.
a && b
when BOTH a and b are true, the result is 1, otherwise the result is 0.
!a
when a is true, the result is 0, when a is 0, the result is 1.
Here's an example of a larger logical expression. In the statement:
``` e = ((a && b) || (c > d));
```
e is set equal to 1 if a and b are non-zero, or if c is greater than d. In all other cases, e is set to 0.
C uses short circuit evaluation of logical expressions. That is to say, once it is able to determine the truth of a logical expression, it does no further evaluation. This is often useful as in the following:
```int myArray[12];
....
if ( i < 12 && myArray[i] > 3) {
....
```
In the snippet of code, the comparison of i with 12 is done first. If it evaluates to 0 (false), i would be out of bounds as an index to myArray. In this case, the program never attempts to access myArray[i] since the truth of the expression is known to be false. Hence we need not worry here about trying to access an out-of-bounds array element if it is already known that i is greater than or equal to zero. A similar thing happens with expressions involving the or || operator.
```while( doThis() || doThat()) ...
```
doThat() is never called if doThis() returns a non-zero (true) value.
Bitwise Boolean Expressions
The bitwise operators work bit by bit on the operands. The operands must be of integral type (one of the types used for integers). The six bitwise operators are & (AND), | (OR), ^ (exclusive OR, commonly called XOR), ~ (NOT, which changes 1 to 0 and 0 to 1), << (shift left), and >> (shift right). The negation operator is a unary operator which precedes the operand. The others are binary operators which lie between the two operands. The precedence of these operators is lower than that of the relational and equivalence operators; it is often required to parenthesize expressions involving bitwise operators.
For this section, recall that a number starting with 0x is hexadecimal, or hex for short. Unlike the normal decimal system using powers of 10 and digits 0123456789, hex uses powers of 16 and digits 0123456789abcdef. Hexadecimal is commonly used in C programs because a programmer can quickly convert it to or from binary (powers of 2 and digits 01). C does not directly support binary notation, which would be really verbose anyway.
a & b
bitwise boolean and of a and b
0xc & 0xa produces the value 0x8 (in binary, 1100 & 1010 produces 1000)
a | b
bitwise boolean or of a and b
0xc | 0xa produces the value 0xe (in binary, 1100 | 1010 produces 1110)
a ^ b
bitwise xor of a and b
0xc ^ 0xa produces the value 0x6 (in binary, 1100 ^ 1010 produces 0110)
~a
bitwise complement of a.
~0xc produces the value -1-0xc (in binary, ~1100 produces ...11110011 where "..." may be many more 1 bits)
a << b
shift a left by b (multiply a by $2^b$)
0xc << 1 produces the value 0x18 (in binary, 1100 << 1 produces the value 11000)
a >> b
shift a right by b (divide a by $2^b$)
0xc >> 1 produces the value 0x6 (in binary, 1100 >> 1 produces the value 110)
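A short example of how these operators are typically combined to test, set, clear, and toggle individual bits with masks (added for illustration):
``` unsigned int flags = 0x0;   /* no bits set */
flags |= 0x4;               /* set bit 2: flags is now 0x4 */
flags &= ~0x1;              /* clear bit 0: flags is still 0x4 */
flags ^= 0x8;               /* toggle bit 3: flags is now 0xc */
if (flags & 0x4) {
    /* bit 2 is set, so this block executes */
}
flags = flags << 1;         /* multiply by 2: flags is now 0x18 */
```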
The If-Else statement
If-Else provides a way to instruct the computer to execute a block of code only if certain conditions have been met. The syntax of an If-Else construct is:
``` if (/* condition goes here */) {
/* if the condition is non-zero (true), this code will execute */
} else {
/* if the condition is 0 (false), this code will execute */
}
```
The first block of code executes if the condition in parentheses directly after the if evaluates to non-zero (true); otherwise, the second block executes.
The else and following block of code are completely optional. If there is no need to execute code if a condition is not true, leave it out.
Also, keep in mind that an if can directly follow an else statement. While this can occasionally be useful, chaining more than two or three if-elses in this fashion is considered bad programming practice. We can get around this with the Switch-Case construct described later.
Two other general syntax notes need to be made that you will also see in other control constructs: First, note that there is no semicolon after if or else. There could be, but the block (code enclosed in { and }) takes the place of that. Second, if you only intend to execute one statement as a result of an if or else, curly braces are not needed. However, many programmers believe that inserting curly braces anyway in this case is good coding practice.
The following code sets a variable c equal to the greater of two variables a and b, or 0 if a and b are equal.
``` if(a > b) {
c = a;
} else if(b > a) {
c = b;
} else {
c = 0;
}
```
Consider this question: why can't you just forget about else and write the code like:
``` if(a > b) {
c = a;
}
if(a < b) {
c = b;
}
if(a == b) {
c = 0;
}
```
There are several answers to this. Most importantly, if your conditionals are not mutually exclusive, two cases could execute instead of only one. What if the value of a or b changed somehow (e.g. you reset the lesser of a and b to 0 after the comparison) during one of the blocks? You could end up with multiple if statements being invoked, which is not your intent. Also, evaluating if conditionals takes processor time. If you use else to handle these situations, then in the case above, assuming (a > b) is non-zero (true), the program is spared the expense of evaluating the additional if statements. The bottom line is that it is usually best to insert an else clause for all cases in which a conditional will not evaluate to non-zero (true).
The conditional expression
A conditional expression is a way to set values conditionally in a more shorthand fashion than If-Else. The syntax is:
```(/* logical expression goes here */) ? (/* if non-zero (true) */) : (/* if 0 (false) */)
```
The logical expression is evaluated. If it is non-zero (true), the overall conditional expression evaluates to the expression placed between the ? and :, otherwise, it evaluates to the expression after the :. Therefore, the above example (changing its function slightly such that c is set to b when a and b are equal) becomes:
```c = (a > b) ? a : b;
```
Conditional expressions can sometimes clarify the intent of the code. Nesting the conditional operator should usually be avoided. It's best to use conditional expressions only when the expressions for a and b are simple. Also, contrary to a common beginner belief, conditional expressions do not make for faster code. As tempting as it is to assume that fewer lines of code result in faster execution times, there is no such correlation.
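As a small usage example (not from the original text), a conditional expression often reads naturally as an argument to another function:
``` int score = 75;
printf("%s\n", (score >= 60) ? "pass" : "fail");   /* prints "pass" */
```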
The Switch-Case statement
Say you write a program where the user inputs a number 1-5 (corresponding to the student grades A, represented as 1, through D as 4, and F as 5), which is stored in a variable grade, and the program responds by printing the associated letter grade to the screen. If you implemented this using If-Else, your code would look something like this:
``` if(grade == 1) {
printf("A\n");
} else if(grade == 2) {
printf("B\n");
} else if /* etc. etc. */
```
Having a long chain of if-else-if-else-if-else can be a pain, both for the programmer and anyone reading the code. Fortunately, there's a solution: the Switch-Case construct, of which the basic syntax is:
``` switch(/* integer or enum goes here */) {
case /* potential value of the aforementioned int or enum */:
/* code */
case /* a different potential value */:
/* different code */
/* insert additional cases as needed */
default:
/* more code */
}
```
The Switch-Case construct takes a variable, usually an int or an enum, placed after switch, and compares it to the value following the case keyword. If the variable is equal to the value specified after case, the construct "activates", or begins executing the code after the case statement. Once the construct has "activated", there will be no further evaluation of cases.
Switch-Case is syntactically "weird" in that no braces are required for code associated with a case.
Very important: Typically, the last statement for each case is a break statement. This causes program execution to jump to the statement following the closing bracket of the switch statement, which is what one would normally want to happen. However if the break statement is omitted, program execution continues with the first line of the next case, if any. This is called a fall-through. When a programmer desires this action, a comment should be placed at the end of the block of statements indicating the desire to fall through. Otherwise another programmer maintaining the code could consider the omission of the 'break' to be an error, and inadvertently 'correct' the problem. Here's an example:
``` switch ( someVariable ) {
case 1:
printf("This code handles case 1\n");
break;
case 2:
printf("This prints when someVariable is 2, along with...\n");
/* FALL THROUGH */
case 3:
printf("This prints when someVariable is either 2 or 3.\n" );
break;
}
```
If a default case is specified, the associated statements are executed if none of the other cases match. A default case is optional.
Back to our example above. Here is what it would look like as Switch-Case:
``` switch (grade) {
case 1:
printf("A\n");
break;
case 2:
printf("B\n");
break;
case 3:
printf("C\n");
break;
case 4:
printf("D\n");
break;
default:
printf("F\n");
break;
}
```
A set of statements to execute can be grouped with more than one value of the variable as in the following example. (the fall-through comment is not necessary here because the intended behavior is obvious)
``` switch (something) {
case 2:
case 3:
case 4:
/* some statements to execute for 2, 3 or 4 */
break;
case 1:
default:
/* some statements to execute for 1 or other than 2,3,and 4 */
break;
}
```
Switch-Case constructs are particularly useful when used in conjunction with user defined enum data types. Some compilers are capable of warning about an unhandled enum value, which may be helpful for avoiding bugs.
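Below is a minimal sketch of that combination; the enum and the function are invented for this example and are not part of the text above. With every enum value handled explicitly, some compilers can warn if a case is ever missed:
```
#include <stdio.h>

enum direction { NORTH, EAST, SOUTH, WEST };   /* hypothetical enum for illustration */

void print_direction(enum direction d)
{
    switch (d) {
    case NORTH:
        printf("north\n");
        break;
    case EAST:
        printf("east\n");
        break;
    case SOUTH:
        printf("south\n");
        break;
    case WEST:
        printf("west\n");
        break;
    }
}

int main(void)
{
    print_direction(EAST);
    return 0;
}
```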
Loops
Often in computer programming, it is necessary to perform a certain action a certain number of times or until a certain condition is met. It is impractical and tedious to simply type a certain statement or group of statements a large number of times, not to mention that this approach is too inflexible and unintuitive to be counted on to stop when a certain event has happened. As a real-world analogy, someone asks a dishwasher at a restaurant what he did all night. He will respond, "I washed dishes all night long." He is not likely to respond, "I washed a dish, then washed a dish, then washed a dish, then...". The constructs that enable computers to perform certain repetitive tasks are called loops.
While loops
A while loop is the most basic type of loop. It will run as long as the condition is non-zero (true). For example, if you try the following, the program will appear to lock up and you will have to manually close the program down. A situation where the conditions for exiting the loop will never become true is called an infinite loop.
``` int a=1;
while(42) {
a = a*2;
}
```
Here is another example of a while loop. It prints out all the powers of two less than 100.
``` int a=1;
while(a<100) {
printf("a is %d \n",a);
a = a*2;
}
```
The flow of all loops can also be controlled by break and continue statements. A break statement will immediately exit the enclosing loop. A continue statement will skip the remainder of the block and start at the controlling conditional statement again. For example:
``` int a=1;
while (42) { // loops until the break statement in the loop is executed
printf("a is %d ",a);
a = a*2;
if(a>100) {
break;
} else if(a==64) {
continue; // Immediately restarts at while, skips next step
}
printf("a is not 64\n");
}
```
In this example, the computer prints the value of a as usual, and then prints the notice that a is not 64, unless that printf was skipped by the continue statement (when a equals 64) or by the break statement (once a exceeds 100).
Similar to If above, braces for the block of code associated with a While loop can be omitted if the code consists of only one statement, for example:
``` int a=1;
while(a < 100) a = a*2;
```
This will merely increase a until a is not less than 100.
When the computer reaches the end of the while loop, it always goes back to the while statement at the top of the loop, where it re-evaluates the controlling condition. If that condition is "true" at that instant -- even if it was temporarily 0 for a few statements inside the loop -- then the computer begins executing the statements inside the loop again; otherwise the computer exits the loop. The computer does not "continuously check" the controlling condition of a while loop during the execution of that loop. It only "peeks" at the controlling condition each time it reaches the `while` at the top of the loop.
It is very important to note that once the controlling condition of a While loop becomes 0 (false) partway through the body, the loop does not terminate immediately; the rest of the block still runs, and the loop only exits when the condition is reevaluated at the top. If you need to terminate a While loop immediately upon reaching a certain condition, consider using break.
A common idiom is to write:
``` int i = 5;
while(i--) {
printf("java and c# can't do this\n");
}
```
This executes the code in the while loop 5 times, with i having values that range from 4 down to 0 (inside the loop). Conveniently, these are the values needed to access every item of an array containing 5 elements.
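A short sketch of that idiom applied to an array (the array contents are made up for illustration):
```
#include <stdio.h>

int main(void)
{
    int values[5] = { 10, 20, 30, 40, 50 };
    int i = 5;
    while (i--) {                  /* inside the loop, i runs 4, 3, 2, 1, 0 */
        printf("values[%d] = %d\n", i, values[i]);
    }
    return 0;
}
```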
For loops
For loops generally look something like this:
```for(initialization; test; increment) {
/* code */
}
```
The initialization statement is executed exactly once - before the first evaluation of the test condition. Typically, it is used to assign an initial value to some variable, although this is not strictly necessary. The initialization statement can also be used to declare and initialize variables used in the loop.
The test expression is evaluated each time before the code in the for loop executes. If this expression evaluates as 0 (false) when it is checked (i.e. if the expression is not true), the loop is not (re)entered and execution continues normally at the code immediately following the FOR-loop. If the expression is non-zero (true), the code within the braces of the loop is executed.
After each iteration of the loop, the increment statement is executed. This often is used to increment the loop index for the loop, the variable initialized in the initialization expression and tested in the test expression. Following this statement execution, control returns to the top of the loop, where the test action occurs. If a continue statement is executed within the for loop, the increment statement would be the next one executed.
Each of these parts of the for statement is optional and may be omitted. Because of the free-form nature of the for statement, some fairly fancy things can be done with it. Often a for loop is used to loop through items in an array, processing each item at a time.
``` int myArray[12];
int ix;
for (ix = 0; ix<12; ix++) {
myArray[ix] = 5 * ix + 3;
}
```
The above for loop initializes each of the 12 elements of myArray. The loop index can start from any value. In the following case it starts from 1.
``` for(ix = 1; ix <= 10; ix++) {
printf("%d ", ix);
}
```
which will print
```1 2 3 4 5 6 7 8 9 10
```
You will most often use loop indexes that start from 0, since arrays are indexed at zero, but you will sometimes use other values to initialize a loop index as well.
The increment action can do other things, such as decrement. So this kind of loop is common:
``` for (i = 5; i > 0; i--) {
printf("%d ",i);
}
```
which yields
```5 4 3 2 1
```
Here's an example where the test condition is simply a variable. If the variable has a value of 0 or NULL, the loop exits, otherwise the statements in the body of the loop are executed.
``` for (t = list_head; t; t = NextItem(t) ) {
/*body of loop */
}
```
A WHILE loop can be used to do the same thing as a FOR loop; however, a FOR loop is a more condensed way to perform a set number of repetitions, since all of the necessary information is in a single statement.
A FOR loop can also be given no conditions, for example:
``` for(;;) {
/* block of statements */
}
```
This is called an infinite loop since it will loop forever unless there is a break statement within the statements of the for loop. The empty test condition effectively evaluates as true.
It is also common to use the comma operator in for loops to execute multiple statements.
``` int i, j, n = 10;
for(i = 0, j = 0; i <= n; i++,j+=2) {
printf("i = %d , j = %d \n",i,j);
}
```
Special care should be taken when designing or refactoring the conditional part: whether to use < or <=, whether the start and stop values need to be adjusted by 1, and whether a prefix or postfix increment/decrement is intended. (The classic fencepost issue: on a 100-yard promenade with a tree every 10 yards there are 11 trees.)
``` int i, n = 10;
for(i = 0; i < n; i++) printf("%d ",i); // processed n times => 0 1 2 3 ... (n-1)
printf("\n");
for(i = 0; i <= n; i++) printf("%d ",i); // processed (n+1) times => 0 1 2 3 ... n
printf("\n");
for(i = n; i--;) printf("%d ",i); // processed n times => (n-1) ...3 2 1 0
printf("\n");
for(i = n; --i;) printf("%d ",i); // processed (n-1) times => (n-1) ...4 3 2 1
printf("\n");
```
Do-While loops
A DO-WHILE loop is a post-check while loop, which means that it checks the condition after each pass through the body. As a result, the body always runs at least once, even if the condition is 0 (false) from the start. It follows the form:
``` do {
/* do stuff */
} while (condition);
```
Note the terminating semicolon. This is required for correct syntax. Since this is also a type of while loop, break and continue statements within the loop function accordingly. A continue statement causes a jump to the test of the condition and a break statement exits the loop.
It is worth noting that Do-While and While are functionally almost identical, with one important difference: Do-While loops are always guaranteed to execute at least once, but While loops will not execute at all if their condition is 0 (false) on the first evaluation.
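Here is a small, hedged example of where the run-at-least-once behaviour is exactly what you want: re-prompting until the input is in range (the prompt wording and the bounds are invented for this sketch):
```
#include <stdio.h>

int main(void)
{
    int choice;

    /* the prompt must be shown at least once, so Do-While fits naturally */
    do {
        printf("Enter a number between 1 and 5: ");
        if (scanf("%d", &choice) != 1) {
            return 1;              /* give up if the input is not a number */
        }
    } while (choice < 1 || choice > 5);

    printf("You picked %d\n", choice);
    return 0;
}
```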
One last thing: goto
goto is a very simple and traditional control mechanism. It is a statement used to immediately and unconditionally jump to another line of code. To use goto, you must place a label at a point in your program. A label consists of a name followed by a colon (:) on a line by itself. Then, you can type "goto label;" at the desired point in your program. The code will then continue executing beginning with label. This looks like:
``` MyLabel:
/* some code */
goto MyLabel;
```
The ability to transfer the flow of control enabled by gotos is so powerful that, in addition to the simple if, all other control constructs can be written using gotos instead. Here, we can let "S" and "T" be any arbitrary statements:
``` if (cond) {
S;
} else {
T;
}
/* ... */
```
The same statement could be accomplished using two gotos and two labels:
``` if (cond) goto Label1;
T;
goto Label2;
Label1:
S;
Label2:
/* ... */
```
Here, the first goto is conditional on the value of "cond". The second goto is unconditional. We can perform the same translation on a loop:
``` while (cond1) {
S;
if (cond2) break;
T;
}
/* ... */
```
Which can be written as:
``` Start:
if (!cond1) goto End;
S;
if (cond2) goto End;
T;
goto Start;
End:
/* ... */
```
As these cases demonstrate, the structure of what your program is doing can usually be expressed without using gotos. Undisciplined use of gotos creates unreadable, unmaintainable code when more idiomatic alternatives (such as if-else or for loops) express the structure better. Theoretically, the goto construct never has to be used, but there are cases where it can increase readability, avoid code duplication, or make control variables unnecessary. Consider mastering the idiomatic solutions first, and use goto only when necessary. Keep in mind that many, if not most, C style guidelines strictly forbid the use of goto, with the only common exceptions being the following examples.
One use of goto is to break out of a deeply nested loop. Since break will not work (it can only escape one loop), goto can be used to jump completely outside the loop. Breaking outside of deeply nested loops without the use of the goto is always possible, but often involves the creation and testing of extra variables that may make the resulting code far less readable than it would be with goto. The use of goto makes it easy to undo actions in an orderly fashion, typically to avoid failing to free memory that had been allocated.
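A minimal sketch of the nested-loop case (the data and the label name are invented for illustration); a plain break here would only leave the inner loop:
```
#include <stdio.h>

int main(void)
{
    int grid[3][4] = {
        { 1, 2,  3,  4 },
        { 5, 6,  7,  8 },
        { 9, 10, 11, 12 }
    };
    int target = 7;
    int row, col;

    for (row = 0; row < 3; row++) {
        for (col = 0; col < 4; col++) {
            if (grid[row][col] == target) {
                goto found;        /* jump out of both loops at once */
            }
        }
    }
    printf("%d not found\n", target);
    return 0;

found:
    printf("%d found at row %d, column %d\n", target, row, col);
    return 0;
}
```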
Another accepted use is the creation of a state machine. This is a fairly advanced topic though, and not commonly needed.
Examples
```#include <errno.h>
#include <stdio.h>
#include <stdlib.h>
int main(void)
{
int years;
printf("Enter your age in years : ");
fflush(stdout); /* make sure the prompt appears before we wait for input */
errno = 0;
if(scanf("%d", &years) != 1 || errno) /* reject malformed input or a read error */
return EXIT_FAILURE;
printf("Your age in days is %d\n", years * 365);
return 0;
}
```
http://physics.stackexchange.com/tags/bose-einstein-condensate/hot?filter=year
Hot answers tagged bose-einstein-condensate
11
What prevents bosons from occupying the same location?
Er ... nothing prevents this. That's what a Bose-Einstein condensate is: lots of bosons in the same place and quantum state. You are observing that the state is not perfectly localized, but that is a consequence of the state not being exactly zero momentum. Ultimately the Heisenberg principle puts a lower limit on how localized they could be. If the bosons ...
6
Can bosons that are composed of several fermions occupy the same state?
This is a nice puzzle--- but the answer is simple: the composite bosons can occupy the same state when the state is spatially delocalized on a scale larger than the scale of the wavefunction of the fermions inside, but they feel a repulsive force which prevents them from being at the same spatial point, so that they cannot sit at the same point at the same ...
4
Why do Photons want to be together?
I think what you are talking about is the stimulated emission of radiation. This is part of the process that occurs in a LASER and, in fact, gave the LASER its name - Light Amplification by Stimulated Emission of Radiation. When an atom is in an excited state it can spontaneously decay to a lower energy state with the emission of a photon of a ...
4
Bose-Einstein condensate for general interacting systems
Bogoliubov proved long, long ago that the condensate is stable against weak interactions. The interactions scatter some fraction of bosons out of the lowest-energy single-particle state ("depleting" the condensate), but off-diagonal long range order remains. For a nice introduction to Bogoliubov's theory see Ben Simon's lectures ...
4
What prevents bosons from occupying the same location?
This is really just a comment to dmckee's answer, but it got a bit long for a comment. The problem with your question, "what keeps bosons from occupying the same location?", is that no particle has a precisely defined position. Remember that when we get down to the size of atoms etc., particles don't have a position. They are described by a wavefunction ...
3
Gross-Pitaevskii equation in Bose-Einstein condensates
1) Some of the assumptions of the Gross-Pitaevskii equation (GPE) are: all atoms are in the same condensate wave function, the condensate is at $T=0$, collisions between atoms are sufficiently low energy that the interactions can be well described by the $s$-wave scattering length, so that the interaction can be written ...
3
Can bosons that are composed of several fermions occupy the same state?
Yes, they can; an experimental example of that is a Bose-Einstein condensate of fermions. And that is possible because they will actually have the same wave function, in the sense that nature is no longer capable of making any distinction between them. Regarding everyday life, saying that it is bosonic is actually just a formal statement, in the sense that Pauli exclusion ...
3
Bose-Einstein condensation in systems with a degenerate ground state
In reality you almost always find that the particles prefer to go to the same state because of some tiny energy shifts in the system. For example, a ferromagnetic condensate can have many degenerate states, but there is an energy cost for particles that disagree. These systems will break symmetry by having all the particles choose the same (arbitrary) state. ...
3
Why water is not superfluid?
You refer to the Landau criterion for superfluidity (there is a separate question whether this is really the best way to think about superfluids, and whether the Landau criterion is necessary and/or sufficient). In a superfluid the low energy excitations are phonons, the dispersion relation is linear $E_p\sim c p$, and the critical velocity is non-zero. In ...
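For reference, a compact way to state the criterion being used here: the Landau critical velocity is $$v_c=\min_{p}\frac{E_p}{p},$$ so a linear phonon dispersion $E_p\simeq c\,p$ gives $v_c\simeq c>0$, while a free-particle dispersion $E_p=p^2/2m$ gives $v_c=0$.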
2
Are all bose-einstein condensates superfluid?
You can have superfluids that are not BECs and BECs that are not superfluid. Let me quote a text, "Bose-Einstein Condensation in Dilute Gases", Pethick & Smith, 2nd edition (2008), chapter 10: Historically, the connection between superfluidity and the existence of a condensate, a macroscopically occupied quantum state, dates back to Fritz ...
2
Why is the BCS trial function valid across the BEC-BCS crossover?
You can gain some intuition from looking at the density distribution function in momentum space which for the $|BCS\rangle$ is given by $n_k=v^{2}_k$. In the BCS limit one finds approximately the filled Fermi sphere, while in the BEC limit $n_k\sim 1/(1+[ka]^2)^2$ which is proportional to the square of the Fourier transform of the dimer wave function. For ...
2
Why do Photons want to be together?
Any particle in a system (like a photon or an electron) can be in many different energy states. You may be familiar with the energy states of atoms; these are the states occupied by electrons orbiting the nucleus. But electrons are fermions and they don't want to be together (in fact they can't). In order to minimize the energy of the atom any additional ...
1
First order coherence function in terms of momentum distribution function
I believe that the derivation is wrong... If you assume a translationally invariant state, such that $G^1(r, r') = G^1(r - r')$ then you can get the result. Rewrite the exponential as $p r - p' r' = p( r- r') + r'(p - p')$. Since, in this case, the left-hand side of Eq. (2.27) can only depend on $r - r'$ it must be such that $p = p'$ from the second term. ...
1
Bose-Einstein condensation in systems with a degenerate ground state
The particles are simply distributed evenly between the degenerate lowest energy states. This is the case for a ideal spinor BEC without Zeeman effects, for example: "Because there are three internal states, the condensation temperature of an ideal spin-1 gas at $p = q = 0$ is reduced to $T_c^{\mathrm{spinor}} = (1/3)^{2/3}T_0$" where "$T_0$ is the ...
1
Bose-Einstein condensate in 1D
It is necessary to clarify that a uniform, non interacting Bose gas (considered to be confined in a periodic box) in thermal equilibrium does not have a macroscopic occupation of the zero momentum mode if $d<3$. This is not quite accurate for $d=2$ as macroscopic occupation is achieved at T=0, or rather the critical temperature tends to zero in the limit ...
1
Looking for a complete review of the BEC-BCS crossover
I recently started reading this book: http://www.amazon.com/BCS-BEC-Crossover-Unitary-Lecture-Physics/dp/3642219772 So far I like the organization and pace. But judging by the table of contents it appears to be very detailed and thorough. It is, however, a monograph. But the style is pretty close to a textbook. Plus it has around 150 references at the end ...
1
Why water is not superfluid?
Because water is liquid at much too high a temperature. Helium is only superfluid near absolute zero. To have a superfluid, you need the quantum wavelength of the atoms given the environmental decoherence to be longer than the separation between the atoms, so they can coherently come together.
http://mathhelpforum.com/calculus/69854-converting-parametric-equations-into-cartesian-form.html
# Thread:
1. ## converting parametric equations into cartesian form
show that x = sin t and y = sin (t + pi/6)
can be written in the form $y = ax + b\sqrt{1-x^2}$, stating the values of a and b
im really stuck
2. Originally Posted by sharp357
show that x = sin t and y = sin (t + pi/6)
can be written in the form $y = ax + b\sqrt{1-x^2}$, stating the values of a and b
im really stuck
$y = \sin \left( t + \frac {\pi}6 \right)$
$= \sin t \cos \frac {\pi}6 + \sin \frac {\pi}6 \cos t$ ........by the addition formula for sine
$= \frac {\sqrt{3}}2 \sin t + \frac 12 \cos t$
$= \frac {\sqrt{3}}2 \sin t + \frac 12 \sqrt{ 1 - \sin^2 t }$
can you finish? in particular, pay attention to the $\sin t$'s you see in the above expression. what do you know about $\sin t$ here?
3. hmmm, i think i can. thanks for the help
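For completeness, a sketch of the remaining step (taking $\cos t\ge 0$ so that $\sqrt{1-\sin^2 t}=\cos t$): since $x=\sin t$, the expression above becomes $y=\frac{\sqrt{3}}{2}x+\frac{1}{2}\sqrt{1-x^2}$, so $a=\frac{\sqrt{3}}{2}$ and $b=\frac{1}{2}$.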
http://physics.aps.org/authors/kimitoshi_kono
# Kimitoshi Kono
Kimitoshi Kono is the Chief Scientist of the Low Temperature Physics Laboratory, RIKEN. He received his Doctor of Science degree from the Department of Physics, University of Tokyo in 1982. He was an associate professor at the Hyogo University of Teacher Education, Tsukuba University, and the ISSP at the University of Tokyo before moving to RIKEN in 2000. He was a Humboldt Research Fellow at the University of Mainz (1988–1989) and at the University of Konstanz (1991). An experimentalist, his research interests include the properties of superfluid helium-$3$, strong correlation effects in single-electron transport on the surface of liquid helium, electron qubits on liquid He, the quantum transport associated with spins and supersolidity of helium-$4$.
Viewpoint
## Electrons Take Their Places on a Liquid Helium Grid
Published December 19, 2011
quantum-info | mesoscopics
A device that efficiently transfers single electrons from one grid point to another on the surface of liquid helium could provide a scalable way to make arrays of electron qubits.
http://physics.stackexchange.com/questions/tagged/mathematics
# Tagged Questions
DO NOT USE THIS TAG just because your question involves math! If your question is on simplification of a mathematical expression, please ask it at math.stackexchange.com The mathematics tag covers non-applied pure mathematical disciplines that are traditionally not part of the mathematical physics ...
0answers
27 views
### Problem with Discrete Parseval's Theorem [migrated]
I think I must be missing something obvious, but I can't for the life of me see what it is. The discrete version of Parseval's theorem can be written like this: \$\sum_{n=0}^{N-1} |x[n]|^2 = ...
2answers
59 views
### Expanding two-variable function $f(x,y)$ over the complete sets $\{ g_{i}(x) \}$ and $\{ h_{j}(y) \}$
Quite often (see, for example, this PDF, 50 KB) when discussing the Born-Oppenheimer approximation the following assertion is made: any well-behaved function of two independent variables $f(x,y)$ can ...
2answers
105 views
### Why is $r'/r^2 = -1/r$? [closed]
If $r=r(t)$, why is $\frac{r'(t)}{(r(t))^2}$ = $\frac{1}{r(t)}$ where $'$ denotes the derivative? I saw it in a lecture. Can you please explain?
2answers
77 views
### Is there any phenomenon in physics which is sensitive to irrational numbers?
We can measure only rational numbers with our scales. Here is an example where irrational numbers do make sense. If so, then this question may have some theoretical importance. Are irrational numbers ...
1answer
113 views
### Does nature tetrate?
We see addition, multiplication and exponentiation in the natural formulae that make up physics. However, do we ever see tetration (repeated exponentiation) or higher hyper-operators in nature? ...
2answers
144 views
### Integer physics
Are there interesting (aspects of) problems in modern physics that can be expressed solely in terms of integer numbers? Bonus points for quantum mechanics.
1answer
77 views
### Spin(n) group SO(n) relation
Is it correct to state that the elements of Spin(n) fulfill a Clifford algebra and that the Lie group generators of Spin(n) is given by the commutator of the elements? If not, then what is the ...
2answers
113 views
### Suggestions on a particular arXiv publication on math needed for theoretical physicists [closed]
I'm going to start my PhD in a year and I'll be taking a gap year doing other stuff. But I also wanted to fill in the gaps in my math knowledge and I came across an arXiv publication called ...
2answers
59 views
### Equation describing magnetic hysteresis
So when you're looking at B-H curves for ferromagnetic substances, you often see these magnetic hysteresis curves, which occur, I gather, largely because of domain formation which has some reversible ...
1answer
139 views
### Universal Sequence and relationship of mathematics and reality [closed]
In "The Special and General Theory of Relativity" Einstein says: How can it be that mathematics, being after all a product of human thought which is independent of experience, is so admirably ...
2answers
126 views
### Studying the logical structure of physics as a mathematical object per se? [closed]
I was wondering is there a branch of mathematical physics which studies the underlying logical structure of physics as a mathematical object per se? Let me explain what I mean by that. I'm ...
1answer
82 views
### Math for Thermodynamics Basics
I am studying Statistical Mechanics and Thermodynamics from a book that i am not sure who has written it, because of its cover is not present. There is a section that i can not understand: ...
1answer
181 views
### How deep can my knowledge of particle physics go without the maths?
Successfully just got my first question answered on here, and now time for the second. So I recently gained interest in particle physics and was wondering. By no means do I have the mathematical ...
1answer
71 views
### Taylor expansion of an integral in spherical co-ordinates
I've some difficulty deriving this equation from jackson electrodynamics (The equation after 1.30) \$\nabla^2 \Phi_a\left({\textbf{x}}\right)=-\frac{1}{\epsilon_0}\int_{0}^{R} ...
2answers
68 views
### continuity of the electric potential due to a surface charge
The Electric potential due to a charge distribution on a surface is : $\Phi \left ( x \right )=\int \frac{\sigma \left ( {x^{}}' \right )dx{}'}{\left \| x-x{}' \right \|}da$ I want to show that it's ...
3answers
215 views
### What is a dual / cotangent space?
Dual spaces are home to bras in quantum mechanics; cotangent spaces are home to linear maps in the tensor formalism of general relativity. After taking courses in these two subjects, I've still never ...
2answers
159 views
### How much pure math should a physics/microelectronics person know [duplicate]
I do condensed matter physics modeling in my phd and I was struck up learning quite an amount of physics. But while having done lot of physics courses, I see that if I learn pure math I would ...
0answers
107 views
### Is there a physical motivation to study finite fields?
Clearly finite groups are of immense value in physics and these are also substructures of fields. However I never came across any computations involving finite fields at university and so I never ...
1answer
104 views
### Topology for physicists [duplicate]
Which are the best introductory books for topology, algebraic geometry, manifolds etc, needed for string theory?
0answers
37 views
### How much math do I need to know to learn about quantum mechanics? [duplicate]
I am not good at math, so I needed to know if quantum mechanics involves a lot of math like, astrophysics for example, if it does, is there any book that can teach me this level of math?
11answers
749 views
### Is it possible for a physical object to have a irrational length?
Suppose I have a caliper that is infinitely precise. Also suppose that this caliper returns not a number, but rather whether the precise length is rational or irrational. If I were to use this ...
1answer
93 views
### Calculation of spherical Bessel functions - meaning of $\left(\frac{1}{x}\frac{d}{dx}\right)^{l}$
I'm trying to understand the calculation of spherical Bessel functions in chapter four of Griffiths' Introduction to Quantum Mechanics (2nd ed, p142). He gives ...
2answers
289 views
### Level of calculus required for physics [closed]
First time for me here so kindly let me know if I violate the rules - especially if this is a duplicate. After reading the page how to become a good theoretical phycist, I started a serious revision ...
0answers
192 views
### Interesting Math Topics Useful for Physics [closed]
What are some interesting, but less popular, math topics that are useful for physics that can be self-studied? Specifically, topics that might ultimately be useful in high energy theory (even if it is ...
1answer
157 views
### Impact of LHC on other science and technologies, in particular on mathematics?
The Large Hadron Collider (LHC) "remains one of the largest and most complex experimental facilities ever built" (Wikipedia); it may even be the most complex project in humankind's history (?). Such ...
2answers
385 views
### How should a theoretical physicist study maths? [duplicate]
Possible Duplicate: How should a physics student study mathematics? If some-one wants to do research in string theory for example, Would the Nakahara Topology, geometry and physics book and ...
2answers
262 views
### (Co)homology of the universe
In this post let $U$ be the universe considered as a manifold. From what I gather we don't really have any firm evidence whether the universe is closed or open. The evidence seems to point towards it ...
1answer
140 views
### Potential for charge distribution, finiteness
Consider a potential for charge distribution: $$v_H(\mathbf{r}) ~=~ \int \frac{\rho(\mathbf{r'})}{|\mathbf{r}-\mathbf{r'}|}d\mathbf{r'}$$ where $\rho(\mathbf{r'})$ is the charge density. This ...
0answers
79 views
### Division algebras $(\mathbb{R,C,H,O})$ and discrete symmetry [closed]
I once saw a statement about the relation between division algebra(which means you can define a division in this algebra, there is a theorem saying we only have 4 kinds of division algebra, real R, ...
5answers
418 views
### Is physics rigorous in the mathematical sense?
I am a student studying Mathematics with no prior knowledge of Physics whatsoever except for very simple equations. I would like to ask, due to my experience with Mathematics: Is there a set of ...
3answers
338 views
### Does the axiom of choice appear to be “true” in the context of physics?
I have been wondering about the axiom of choice and how it relates to physics. In particular, I was wondering how many (if any) experimentally-verified physical theories require axiom of choice (or ...
1answer
137 views
### The use of Hall algebras in physics
I asked the same question in mo. I think maybe here there are more physics guys to help me. I once read a statement (not memorized precisely) that a certain physics quantity between two states of ...
1answer
274 views
### Reference for mathematics of string theory [closed]
I have a great interest in the area of string theory, but since I am more focused on mathematics, I was wondering if there is any book out there that covers mathematical aspects of string theory. I ...
1answer
218 views
### Mathematical definitions in string theory
Does anyone know of a book that has mathematical definitions of a string, a $p$-brane, a $D$-brane and other related topics. All the books I have looked at don't have a precise definition and this is ...
0answers
164 views
### Integrals given by Landau [closed]
Discussion about Landau's "Theoretical Minimum" has already been posted here. Unfortunately I couldn't find much about some examples of questions he gave to students. There are three questions in the ...
2answers
130 views
### Quantum Mechanics in terms of *-algebras
I'm currently trying to find my way into the geometric description of Quantum Mechanics. I therefor started reading: Geometry of state spaces. In: Entanglement and Decoherence (A. Buchleitner et ...
3answers
278 views
### Mathematics for Quantum Mechanics [duplicate]
What math should I study if I want to get a basic understanding of quantum mechanics and especially to be able to use the Schrodinger's equation.
4answers
545 views
### How do you do an integral involving the derivative of a delta function?
I got an integral in solving Schrodinger equation with delta function potential. It looks like $$\int \frac{y(x)}{x} \frac{\mathrm{d}\delta(x-x_0)}{\mathrm{d}x}$$ I'm trying to solve this by ...
1answer
180 views
### Why is physical space equivalent to $\mathbb{R}^3$?
Why is physical space equivalent to $\mathbb{R}^3$, as opposed to e.g. $\mathbb{Q}^3$? I am trying to understand what would be the logical reasons behind our assumption that our physical space is ...
5answers
306 views
### What is the meaning of following expresion $C=\frac{\delta Q}{dT}$ mathematicly
Our professor raised the following question during our lecture in Statistical Physics (even so it's related to Thermodynamics): Many text books (even wikipedia) writes wrong expressions (from ...
2answers
244 views
### Sum total distance of electrons on a spherical surface
What is the sum total distance between every possible pair of point charges when there are n point charges on a spherical surface? All point charges can only and are located on the infinitesimal ...
2answers
185 views
### Quantum mechanics on Cantor set?
Has quantum mechanics been studied on highly singular and/or discrete spaces? The particular space that I have in mind is (usual) Cantor set. What is the right way to formulate QM of a particle on a ...
0answers
54 views
### Is it possible to use the properties of quantum mechnics to develop a computer that develop mathmatical theory? [closed]
Is it possible to use the properties of quantum mechnics to develop a computer that develop mathmatical theory? I want some reference please, because i want to get into very detailed. Thanks in ...
7answers
691 views
### Why are radians more natural than any other angle unit?
I'm convinced that radians are, at the very least, the most convenient unit for angles in mathematics and physics. In addition to this I suspect that they are the most fundamentally natural unit for ...
2answers
58 views
### What is the minimal set of expectation values I need in a statistical model?
At least if $\vec v$ is really only a one dimensional parameter, measuring all the moments $\langle v^n \rangle_f$ seems to give me all the information to compute $\langle A \rangle_f$ with $A(v)$ ...
2answers
146 views
### Mathematical problems with impact on physics [closed]
Are there any purely mathematical, unsolved questions, whose resolution would have (great, or concrete) impact on physics? Eg. it could almost surely tell us whether particle x exist or not, assuming ...
0answers
115 views
### How is the poincare conjecture(and perelman proof) helpful in studying the properties of the universe?
Can someone tell me how the poincare's famous conjecture or its proof by perelmen can be helpful in deciding some properties like the shape of the universe?
4answers
477 views
### Topology needed for Differential Geometry [duplicate]
I am a physics undergrad, and need to study differential geometry ASAP to supplement my studies on solitons and instantons. How much topology do I need to know. I know some basic concepts reading from ...
3answers
316 views
### shifting from mathematics to physics
I am a postgraduate in mathematics. I studied physics during my B.Sc.studies.I want to go for further studies in physics particularly in theoretical physics. I am in a job and cant afford regular ...
2answers
342 views
### Book covering Topology required for physics and applications
I am a physics undergrad, and interested to learn Topology so far as it has use in Physics. Currently I am trying to study Topological solitons but bogged down by some topological concepts. I am not ...
http://mathoverflow.net/questions/87633?sort=newest
## construct the elliptic fibration of elliptic k3 surface
Hi all,
As we know, every elliptic K3 surface admits an elliptic fibration over $P^1$, but in general how do we construct this fibration? For example, how do we get such a fibration for the Fermat quartic?
Moreover, since all (elliptic) K3 surfaces are diffeomorphic to each other, does this mean that topologically the elliptic fibration we get in each case is the same, namely the torus fibration over $S^2$ with 24 nodal singular fibers? Or is the total space the same, while different complex data (structures) provide different ways or "directions" of projecting onto $S^2$, and thus induce different types of fibrations?
Thanks!
-
One elliptic fibration of the Fermat quartic that was known (in all but name) to Euler is described on pages 12-13 of the lecture notes at math.harvard.edu/~elkies/euler_11c.pdf, which also describe a few other ways to work with elliptic fibrations of K3's. – Noam D. Elkies Feb 6 2012 at 5:41
1
Certainly the topological fibrations of elliptically fibered K3 surfaces are not all topologically equivalent. The number of singular fibers, as well as the types of singularities, can vary. The "weighted sum" of the singularities of fibers always equals 24, but that leaves a lot of room for variation. – Jason Starr Feb 6 2012 at 11:20
Some more questions in this direction: -- @ Jason Starr: What is the weight you are mentioning? -- So the number of singular fibers (which I think is the degree of the j-map to $\mathbb{P}^1$) can be different in different examples? -- Are there examples where the singular fiber is "not" normal-crossing (or semi-stable)? – Mohammad F.Tehrani Mar 27 at 2:07
When talking about j-map above, I am assuming there is a section. – Mohammad F.Tehrani Mar 27 at 2:52
This looks fun: grdb.lboro.ac.uk/search/ellk3 – Mohammad F.Tehrani Mar 27 at 2:57
## 1 Answer
Let $S$ be a smooth projective $K3$ surface, say over the complex numbers, and suppose that $S$ admits a (non-constant) fibration $\pi\colon S\to C$ over a curve $C$.
By the universal property of the normalization, we can suppose that $C$ is normal, hence smooth. Now, this curve $C$ must be $\mathbb P^1$, since otherwise you would have (by pulling-back) some non-trivial global holomorphic $1$-form on $S$, contradicting $h^{1,0}(S)=0$.
Next, this fibration is clearly given by the linear system $|\pi^*H^0(\mathbb P^1,\mathcal O(1))|\subset |L|$ of the pull-back $L:=\pi^*\mathcal O(1)$, which is spanned by two independent sections, say $\sigma$ and $\tau$. Now, take a general fiber: it is a smooth curve $F\subset S$ which is a divisor in the above-mentioned linear system, of the form $\{\lambda\sigma+\mu\tau=0\}$ for some $[\lambda:\mu]\in\mathbb P^1$. In particular $\mathcal O_S(F)\simeq L$. Moreover, by definition, $\mathcal O_F(F)\simeq L|_F=\pi^*\mathcal O(1)|_F$ which is trivial.
By adjunction, $K_F\simeq (K_S\otimes\mathcal O_S(F))|_F\simeq\mathcal O_F$ is trivial, so that $F$ is an elliptic curve.
Thus, any fibration of a smooth projective $K3$ surface is an elliptic fibration over $\mathbb P^1$, obtained as above.
Now, let's consider the more specific case of the Fermat's quartic $S:\{x^4-y^4-z^4+t^4=0\}$ in $\mathbb P^3$ (it is the standard Fermat's quartic up to multiplying $y$ and $z$ by a $4$th root of $-1$). Then, we can factorize it in the following way $$(x^2+y^2)(x^2-y^2)-(z^2+t^2)(z^2-t^2)=0.$$ This shows that, for $[\lambda:\mu]\in\mathbb P^1$, the complete intersection given by $$C_{[\lambda:\mu]}:=\begin{cases} \lambda(x^2-y^2)=\mu(z^2+t^2) \\ \mu(x^2+y^2)=\lambda(z^2-t^2) \end{cases}$$ is contained in $S$. For generic $[\lambda:\mu]\in\mathbb P^1$, this is a smooth elliptic curve, since its tangent bundle fits in the following short exact sequence $$0\to T_{C_{[\lambda:\mu]}}\to T_{\mathbb P^3}|_{C_{[\lambda:\mu]}}\to\mathcal O_{C_{[\lambda:\mu]}}(2)\oplus\mathcal O_{C_{[\lambda:\mu]}}(2)\to 0.$$ The function $[\lambda:\mu]$ defines a map from $S$ onto $\mathbb P^1$, which is the elliptic pencil on $S$ you were looking for.
Note that, for $\lambda/\mu=0,\pm 1,\pm i,\infty$, $C_{[\lambda:\mu]}$ degenerates into a cycle of four lines. This gives you the 24 singularities.
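A quick way to see why this count gives 24: for an elliptic fibration of a K3 surface the smooth fibers have zero Euler number, so $$\chi_{\mathrm{top}}(S)=\sum_{\text{singular fibers}}e(F_s),$$ and a cycle of four lines (a fiber of type $I_4$) has Euler number $4$; the six degenerate values of $\lambda/\mu$ therefore contribute $6\times 4=24=\chi_{\mathrm{top}}(S)$.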
-
Thanks a lot! It is quite helpful! – Jay Feb 7 2012 at 5:35
Good! So is this answer satisfactory or you wanted to know more specific things? – diverietti Feb 7 2012 at 9:40
that is great~ thanks! by the way, do you know anything about the existence of sections? maybe just a topological section... – Jay Feb 8 2012 at 5:11
http://mathoverflow.net/questions/42981?sort=newest
## Partial word orders on groups
This is a followup question related to this question. Recall that a left-invariant partial order on a finitely generated group $G$ is called a partial word order if for every $a\le b\le c$ we have $|b|\le C(|a|+|c|)$ for some constant $C$ (where $|x|$ is the word length of $x$). For example, the following partial order on ${\mathbb Z}^2$: $(m,n)\le (k,l)$ iff $m\le k, n\le l$ is a word partial order. Certainly the empty (trivial) partial order ($a\le b$ iff $a=b$) is always a word order.
Question 1. Is there a canonical way to construct non-trivial partial word orders on groups?
Update 2 A sub-question: is there a general algebraic property of a group that guarantees existence of such non-trivial partial orders? (Being free Abelian of finite rank is such a property, but I am looking for non-trivial answers.)
Update 1 One possible generalization. I call a partial order on a group quasi-invariant if for every $g,a,b$ with $a\lt b$, there exist two elements $c,d$ such that $gac\lt gbd$ and $|c|,|d|\lt C$ for some constant $C$.
Question 2. Did anybody study quasi-invariant partial orders on groups?
Motivation The reason I want to study such things is to introduce an extra structure on the asymptotic cones of groups. If one tries to define a partial order on an asymptotic cone, one would need a quasi-(left)-invariant word partial order on the group.
-
## 1 Answer
If G is a torsion group, then no two distinct elements can be comparable, so the only left-invariant order is the trivial one. That seems to rule out a canonical construction.
-
I did not say that the canonical construction should always be non-trivial. If the group is trivial, all orders on it are also trivial. But your remark does rule out lots of groups. – Mark Sapir Nov 1 2010 at 8:20
In general, question 1 is formulated in such a way that a negative answer seems impossible. How do you prove that a canonical way does not exist? The problem is that "canonical" is not precisely defined. When you see it, you can say that it is, but proving that it does not exist is not possible. I have reformulated the question to make it more concrete. Still "no" is not possible, but there are more variations for "yes". – Mark Sapir Nov 1 2010 at 8:52
http://quant.stackexchange.com/questions/7156/pricing-in-hjm-framework/7161
# Pricing in HJM framework
As mentioned in an earlier question, I am a math student who attended a course on interest rate theory. However, I have some questions about how these things actually work in reality.
So assume we are working with the Heath-Jarrow-Morton (HJM) framework. We assume that the forward curve is given by $$f(t,T)=f(0,T) +\int_0^t\alpha(s,T)ds+\int_0^t\sigma(s,T)dW(s)$$ We know that under the drift condition, we have absence of arbitrage. Hence under the risk neutral measure $Q$, with $Q$ Brownian Motion $B$, we have $$f(t,T)=f(0,T)+\int_0^t\Big[\sigma(s,T)\int_s^T\sigma(s,u)du\Big]ds+\int_0^t\sigma(s,T)dB(s)\tag{1}$$ So when modelling in the HJM case, do you start at $(1)$ and try to choose $\sigma(s,T)$ to fit the forward curve from the market? For me, that seems quite hard, since your $\sigma$ depends on two parameters. Or do you fix one of these and try to fit the curve with the other?
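For reference, the no-arbitrage (HJM) drift condition that $(1)$ encodes is usually written as $$\alpha(t,T)=\sigma(t,T)\int_t^T\sigma(t,u)\,du$$ under the risk-neutral measure, so that only the volatility $\sigma(t,T)$ (together with the observed initial curve $f(0,T)$) needs to be specified.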
So, it would be appreciated if someone could explain how you actually do a "martingale modelling" in the HJM framework.
If you use a short rate model, i.e. write down the dynamics of your short rate depending on a family of parameters, you would do the following:
1. calculate bond prices under this parametric family
2. choose parameter such that bond prices match with market prices
3. do pricing
Is there a similar procedure here? What is the main advantage of the HJM framework, besides getting a perfect fit for the initial forward curve and modelling the entire forward curve? Is this framework used in reality?
-
## 3 Answers
I have hardly ever seen HJM in action - I have to admit this. Short rate models and LIBOR market models are more widespread in my experience.
But let me share the following thoughts:
1. In my mind the drift condition is the central statement of HJM. It says what, given today's term structure and the volatilities, the only arbitrage-free drift is.
2. A very good paper on the continuous-time version and, for pedagogical reasons (and applications too), the discrete-time version can be found here: The Heath-Jarrow-Morton Framework by Martin Haugh. You also find remarks on specifying $\sigma(s,t)$ there on page 7.
3. An application of the HJM framework to yield-curve prediction can be found here. There we see the drift condition again: Consistent Long-Term Yield Curve Prediction by Josef Teichmann, Mario V. Wüthrich
-
The HJM model should rather be called the HJM framework, because for specific choices of the forward rate's instantaneous volatility function $\sigma(t,T)$ you obtain other models from the HJM. The HJM is a super-set.
• The rate curve is calibrated via the specification of $f(0,T)$.
• The drift $\alpha(t,T)$ is - as always - confined by risk neutrality, i.e., it is given by the assumption that you cannot create arbitrage from a bond-portfolio. The $Q$-average relative performance of a bond w.r.t. the bank account numéraire is zero. (Note that for short rate models the drift is also restricted via this assumption, but since in a short rate model $f(0,T)$ appears in the drift, its drift is to some extent a free parameter (used to calibrate the rate curve). In that sense short rate models are a bit special. Due to this I wouldn't start a lecture on interest rate models with short rate models. I would start with LMM or HJM and then consider short rate models (going from general to special)).
• The dynamics of the rate curve are calibrated via $\sigma(t,T)$.
For example, you obtain the Hull-White short rate model by choosing $\sigma(t,T) = \sigma_r(t) \exp(-a (T-t))$, where $\sigma_r(t)$ is the short rate volatility and $a$ is the short rate mean reversion. For another choice of $\sigma(t,T)$ you obtain the smaller family of Cheyette models (Ritchken-Sankarasubramanian framework). There is also a special choice of the HJM volatility $\sigma(t,T)$ which results in the LIBOR Market Model (see Section 24.2 of http://www.amazon.com/dp/0470047224 (Amazon Reader allows to look inside)).
The LIBOR Market Model framework can also be seen as a discrete version of the HJM, where you choose $\sigma(t,T)$ piecewise constant. You may then calibrate this covariance matrix to products like swaptions etc. I believe that this is the closest candidate to what "calibrating HJM" could mean, because for the unrestricted calibration of $\sigma(t,T)$ you would need a continuum of financial products, which you don't have. Thus you have to restrict $\sigma$ by either
• a functional form (e.g. getting a short rate model)
• a discretization (e.g. getting a LIBOR market model)
See also " Derive a short rate model from HJM "
-
Thank you for your comment. I totally agree with calling it the HJM framework (as I did) instead of the HJM model. I really like your interpretation of a super-set. From a historical point of view, why did Heath, Jarrow and Morton develop this new framework? Did they just want to find a "generalization" of short rate models, or did they really want to develop a new framework? As you said, as soon as you want to apply this framework, you will end up either in the LIBOR market model or a short rate model. – hulik Feb 1 at 8:28
They just derived the condition for the drift of $f = f(t,T)$ under the equivalent martingale measure w.r.t. $B(t) = \exp( \int f(t,t) \mathrm{d}t)$. So from HJM you now have two options: 1. Choose your model (like LMM) and derive the risk neutral drift for it (in its own formalism) - or - 2. Choose your model by setting $\sigma$ and derive the drift from the HJM drift. - For me, the important role of HJM is that we see how things are linked together. For example: I know how to choose the parameters in LMM to get an LMM dynamic similar to Hull-While ("calibrate LMM to Hull-White") - if needed. – Christian Fries Feb 1 at 11:05
@ChristianFries Thank you for your answer with further insights. – Richard Feb 1 at 14:11
1
@ChristianFries Also I totally agree with :$\textit{ Due to this I wouldn't start a lecture on interest rates models with short rate models }$ However in D. Filipovic's book about Interest rate theory they first introduce short rate models, then HJM and later LMM. – hulik Feb 2 at 15:29
I had to go and dig in one of the books I worked with in my grad studies which is particularly useful for Fixed Income: Term Structure Models by Filipovic.
Chapter 6 is dedicated to HJM models, and the most important theorem states the equation you mentioned in $(1)$ and that the discounted bond price goes as follows:
$$\frac{P(t,T)}{B(t)} = P(0,T) \mathcal{E}_t(v(\cdot,T) \bullet W^*)$$
Where
• $B(t)$ is the discount of a risk-free bond
• $W^*$ is a $\mathbb{Q}$-brownian motion
• $(h \bullet X)_t=\int_0^th(s)dX_s$
• $\mathcal{E}_t = e^{X_t - \frac{1}{2} \langle X,X \rangle_t}$ is the stochastic exponential
• $v(s,T) = -\int_s^T \sigma(s,u)du$
So, the price of the bond (computed with the risk-neutral measure $\mathbb{Q}$) does not depend on the drift of the real world forward curve $\alpha(\cdot,T)$ at all: it only depends on the volatility of the forward curve $\sigma(\cdot,T)$.
From what I remember and could quickly read again, this is the main advantage of the HJM framework.
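To make the stochastic-exponential formula above concrete, here is a small Monte-Carlo sketch (Python/numpy). Everything specific in it is an assumption for illustration: a flat 3% initial discount curve, the deterministic exponential volatility from the Hull-White special case, and the grid/path counts. Since $v(s,T)$ is deterministic here, the discounted bond price is lognormal, and the printout checks the martingale property $E\big[P(t,T)/B(t)\big] = P(0,T)$ up to Monte-Carlo error.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed, illustrative inputs (not calibrated)
sigma_r, a = 0.01, 0.1                 # sigma(s,u) = sigma_r * exp(-a (u - s))
P0 = lambda T: np.exp(-0.03 * T)       # assumed flat 3% initial discount curve P(0,T)

def v(s, T):
    # v(s,T) = -int_s^T sigma(s,u) du, in closed form for the exponential volatility
    return -sigma_r * (1.0 - np.exp(-a * (T - s))) / a

T, t = 10.0, 2.0
n_steps, n_paths = 200, 100_000
dt = t / n_steps
grid = np.linspace(0.0, t, n_steps + 1)[:-1]    # left endpoints of the time grid

# X_t = int_0^t v(s,T) dW*_s, approximated on the grid (Gaussian, since v is deterministic)
dW = rng.standard_normal((n_paths, n_steps)) * np.sqrt(dt)
X = dW @ v(grid, T)
qv = np.sum(v(grid, T) ** 2) * dt               # quadratic variation <X, X>_t on the same grid

discounted_bond = P0(T) * np.exp(X - 0.5 * qv)  # P(t,T)/B(t) = P(0,T) * stochastic exponential
print("MC mean of P(t,T)/B(t):", discounted_bond.mean())
print("P(0,T)                :", P0(T))
```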
-
It is exactly this book I'm reading at the moment. The dependence just on the volatility matrix $\sigma(t,T)$ is due to the HJM drift condition, which excludes arbitrage. The problem is, if you do not assume further regularity properties on $\sigma$ you will not know what the distribution of $\mathcal{E}_t(v(\cdot,T)\bullet W^*)$ is. – hulik Jan 31 at 8:08
@SRKX thanks for pointing this out - I think you hit the point +1 from my side ;) – Richard Jan 31 at 8:37
@hulik Indeed. But at least you have a close-form solution, and you do not have to model the drift of the forward curve. That's quite cool already. – SRKX♦ Jan 31 at 9:06
1
@hulik but you will not be able to have all curve shapes with Vasicek, Ho-Lee or CIR - you need the Hull-White model to get the curve right and you need the generalized Hull-White model if you want to model time-dependent volatilities -> then we are already close to HJM concerning complexity. – Richard Jan 31 at 11:39
1
My point above was that in case one wants to do real curve modelling (shape and volas) one has to use something advanced - advanced short rate models or HJM. For example for the generalized Hull White model one could calibrate it to traded Swaptions with the terms and maturities matching the terms that you want to model in the volatilities. Reading the paper Nr.2 will hopefully give you some more insight - it is the best thing I've ever seen on HJM. Nr. 3 offers a nice application step by step. – Richard Jan 31 at 12:49
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 38, "mathjax_display_tex": 3, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9355626106262207, "perplexity_flag": "middle"}
|
http://mathhelpforum.com/algebra/16622-help-finding-roots.html
|
# Thread: Help finding roots
1. ## Help finding roots
Hi, I need to solve for x in the following two equations:
(x-1/x)^2 - 77/12 (x-1/x) + 10 = 0
[I expanded out to get (x^2 + 1/x^2 - 77x/12 + 77/12x + 8). Then I used the factor theorem to plug in possible x's, and found that (x-4) is a factor. However, I don't know where to go from there, or if what I've done so far is correct.]
and
(3x-5)(3x+1)^2(3x+7) + 68 = 0
Thank you for any help!
2. Originally Posted by starswept
Hi, I need to solve for x in the following two equations:
(x-1/x)^2 - 77/12 (x-1/x) + 10 = 0
[I expanded out to get (x^2 + 1/x^2 - 77x/12 + 77/12x + 8). Then I used the factor theorem to plug in possible x's, and found that (x-4) is a factor. However, I don't know where to go from there, or if what I've done so far is correct.]
and
(3x-5)(3x+1)^2(3x+7) + 68 = 0
Thank you for any help!
don't expand! it is quadratic in (x - 1)/x
replace (x - 1)/x with another variable, say y, and you will see that
y^2 - (77/12)y + 10 = 0
solve for the roots of that equation, when you get them, replace y with (x - 1)/x and solve for x
EDIT: Oh, it was x - 1/x not (x - 1)/x. that does not change the process though
EDIT 2: by the way, you expanded incorrectly
EDIT 3: I must be tired. The expansion was correct, so scratch out EDIT 2. I still think my original plan is best though. Yes, you have to do the quadratic formula 2 or 3 times (2 if you use Plato's method), but to me that's easier than working with the quartic equation you would get if you continued down the path you were going
3. Originally Posted by janvdl
Let's use Jhevon's method.
$y = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}$
$y = \frac{(77/12) \pm \sqrt{(-77/12)^2 - 4(1)(10)}}{2}$
$y = \frac{(77/12) \pm \sqrt{1,1736}}{2}$
$y = \frac{15}{4} \ or \ y = \frac{8}{3}$
incomplete, remember we want the x-values. so assuming what you did is correct (i have no time to check it) you now replace each y with x - 1/x and solve for x to find the corresponding x-values
4. Originally Posted by janvdl
Let's use Jhevon's method... and the mathematician's best friend, the quadratic formula
$y = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}$
$y = \frac{(77/12) \pm \sqrt{(-77/12)^2 - 4(1)(10)}}{2}$
$y = \frac{(77/12) \pm \sqrt{1,1736}}{2}$
$y = \frac{15}{4} \ or \ y = \frac{8}{3}$
$x - \frac{1}{x} = \frac{15}{4}$
$x^2 - \frac{15}{4}x - 1 = 0$
Use the quadratic formula again.
Then you'll see that x = 4 or
x = -1/4
$x - \frac{1}{x} = \frac{8}{3}$
$x^2 - \frac{8}{3}x - 1 = 0$
Use the quadratic formula AGAIN.
Then you'll get x = 3
x = -1/3
5. $y = \left( {x - \frac{1}{x}} \right)\quad \Rightarrow \quad 12y^2 - 77y + 120 = 0\quad \Rightarrow \quad \left( {3y - 8} \right)\left( {4y - 15} \right) = 0$
Now you solve for x:
$\left( {x - \frac{1}{x}} \right) = \frac{8}{3}\quad \& \quad \left( {x - \frac{1}{x}} \right) = \frac{{15}}{4}$
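As a quick sanity check of the exact values that come out of this factorisation, here is a short computer-algebra snippet (Python, assuming sympy is available); it confirms that the roots of the first equation are $x = 4,\ -\tfrac14,\ 3,\ -\tfrac13$, consistent with the original poster's observation that $(x-4)$ is a factor.

```python
from sympy import symbols, Rational, solve

x = symbols('x')
y = x - 1/x
expr = y**2 - Rational(77, 12)*y + 10

roots = solve(expr, x)
print(roots)   # expected, in some order: [-1/3, -1/4, 3, 4]
```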
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 14, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9390483498573303, "perplexity_flag": "middle"}
|
http://www.physicsforums.com/showpost.php?p=3785236&postcount=10
|
Quote by elfmotat When solving this equation, he found that EM waves travel at a speed given by: $v=\frac{1}{\sqrt{\mu_0 \epsilon _0}}$ , where μ0 and ε0 are the magnetic and electric constants.
Correct me if I'm wrong, but as I understand it, μ0 and ε0 are entities of the EM field only. This does not leave much room for confidence that other fields would propagate at a maximum speed of light.
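For what it's worth, the quoted relation is easy to check numerically (Python, assuming scipy is available): plugging the tabulated constants into $v = 1/\sqrt{\mu_0 \epsilon_0}$ reproduces the speed of light.

```python
from scipy.constants import mu_0, epsilon_0, c

v = (mu_0 * epsilon_0) ** -0.5
print(v)   # ~2.998e8 m/s
print(c)   # defined speed of light, for comparison
```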
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9603528380393982, "perplexity_flag": "middle"}
|
http://www.physicsforums.com/showthread.php?p=4238089
|
## Question about SO(N) group generators
Hi all. I have a question about the properties of the generators of the SO(N) group.
What kind of commutation relations do they satisfy? Is it true that the generators λ are such that:
$$\lambda^T=-\lambda$$ ??
Thank you very much
The commutators are complicated, in general--or too complicated for me. Yes, the Lie algebra of SO(n) is the skew-symmetric matrices, which is the condition you wrote. That comes from differentiating a path of orthogonal matrices at the identity, or rather differentiating the equation that defines an orthogonal matrix.
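If a concrete basis helps: a standard choice is $(M_{ab})_{ij} = \delta_{ai}\delta_{bj} - \delta_{aj}\delta_{bi}$, which is manifestly antisymmetric, and these satisfy $[M_{ab}, M_{cd}] = \delta_{bc}M_{ad} - \delta_{ac}M_{bd} - \delta_{bd}M_{ac} + \delta_{ad}M_{bc}$. (Sign and factor-of-$i$ conventions vary between references, so treat this normalisation as one possible choice.) The short Python/numpy sketch below builds this basis and brute-force checks the relation.

```python
import numpy as np
from itertools import combinations

n = 4  # works for any n >= 2

def M(a, b):
    """(M_ab)_{ij} = delta_{ai} delta_{bj} - delta_{aj} delta_{bi}; antisymmetric, zero if a == b."""
    g = np.zeros((n, n))
    g[a, b] += 1.0
    g[b, a] -= 1.0
    return g

def comm(X, Y):
    return X @ Y - Y @ X

delta = lambda i, j: float(i == j)

# brute-force check of [M_ab, M_ce] = d_bc M_ae - d_ac M_be - d_be M_ac + d_ae M_bc
for (a, b) in combinations(range(n), 2):
    for (c, e) in combinations(range(n), 2):
        lhs = comm(M(a, b), M(c, e))
        rhs = (delta(b, c) * M(a, e) - delta(a, c) * M(b, e)
               - delta(b, e) * M(a, c) + delta(a, e) * M(b, c))
        assert np.allclose(lhs, rhs)

print("relation verified; dim so(%d) = %d" % (n, n * (n - 1) // 2))
```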
Notice that the dimension of SO(n), acting on $\mathbb{R}^n$, is a triangular number, $n(n-1)/2$; hopefully this can help you figure out a reason why. Also I set a link to a video I think that might be able to help. Link: http://www.youtube.com/watch?v=-W6JWck4__Y Edit: Also may I ask why do you need to know this thing about the lie commutators in SO(n)?
Quote by homeomorphic Yes, the Lie algebra of SO(n) is the skew-symmetric matrices, which is the condition you wrote. That comes from differentiating a path of orthogonal matrices at the identity, or rather differentiating the equation that defines an orthogonal matrix.
Thank you very much! That solves some problems!
Quote by Tenshou Edit: Also may I ask why do you need to know this thing about the lie commutators in SO(n)?
I am working on the SO(N) symmetry of a $\lambda \phi^4$ theory in QFT and I need the exact expression of the commutator of two conserved charges, so I need to know the commutator of the generators.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9133378267288208, "perplexity_flag": "middle"}
|
http://math.stackexchange.com/questions/154197/proof-alpha-1-alpha-2-in-a-triangle/154200
|
# Proof: $\alpha_1>\alpha_2$ in a triangle
I have to prove that $\alpha_1>\alpha_2$ holds in a triangle. The inner point $P$ (from which I draw the smaller triangle) is chosen arbitrarily.
Does anyone have a suggestion where I should start?
Greetings
-
1
You're probably missing one piece of vital information: The random point $P$ is supposed to be inside the original triangle? – mrf Jun 5 '12 at 11:26
Yes, of course you're right – ulead86 Jun 5 '12 at 11:30
## 3 Answers
Given that the origin triangle is $\triangle ABC$, and the inner point is $P$, where $\alpha_1 = \angle BPC$, $\alpha_2 = \angle BAC$. Supposing that $D$ is the intersection of $AP$ and $BC$, we have $\angle BPC = \angle BPD + \angle DPC = \angle BAP + \angle ABP + \angle CAP + \angle ACP$ $= \angle BAC + \angle ABP + \angle ACP > \angle BAC$, Q.E.D.
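Not a substitute for the proof above, but here is a quick numerical check (Python/numpy; the particular triangle is an arbitrary choice) that samples random interior points via barycentric coordinates and verifies $\angle BPC > \angle BAC$ every time.

```python
import numpy as np

rng = np.random.default_rng(1)

def angle(at, p, q):
    """Angle at vertex `at` in the triangle (at, p, q), in radians."""
    u, v = p - at, q - at
    cosang = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.arccos(np.clip(cosang, -1.0, 1.0))

A, B, C = np.array([0.0, 0.0]), np.array([4.0, 0.0]), np.array([1.0, 3.0])

for _ in range(10_000):
    w = rng.dirichlet([1.0, 1.0, 1.0])       # random barycentric coordinates -> interior point
    P = w[0] * A + w[1] * B + w[2] * C
    assert angle(P, B, C) > angle(A, B, C)   # angle BPC > angle BAC

print("angle BPC > angle BAC held for 10,000 random interior points")
```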
-
I thought about this answer for a few days but I can't find an explanation for $\angle BPD + \angle DPC = \angle BAP + \angle ABP + \angle CAP + \angle ACP$; perhaps you can give a hint? – ulead86 Jun 8 '12 at 6:45
– Frank Science Jun 8 '12 at 9:53
Each of $\,\alpha_1\,,\,\alpha_2\,$ equals $\,180 - (\beta_1+\beta_2)\,$, with $\beta_1,\beta_2$ being the other two angles of its triangle (the little red one for $\alpha_1$, the big one for $\alpha_2$). As the other two angles belonging to $\alpha_1$ are each less than the corresponding angles belonging to $\alpha_2$, we get what we want.
-
Is there nothing more to do? I thought I had to argue why the angles are smaller... – ulead86 Jun 5 '12 at 11:27
@Daniels Of course you have! The other two angles of $\alpha_1$ are smaller than the corresponding ones of $\alpha_2$ since they are parts of them... or in other words: each of the other angles corres. to $\alpha_2$ equals the corres. angle of $\alpha_1$ plus a little more... you can mark this in your diagram, to make it clearer. – DonAntonio Jun 5 '12 at 11:43
HINT: Draw a line that passes through the vertices of the two angles and the opposite side.
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 22, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9390844106674194, "perplexity_flag": "head"}
|
http://math.stackexchange.com/questions/174693/decomposition-of-l-in-a-subfield-of-a-cyclotomic-number-field-of-an-odd-prime
|
# Decomposition of $l$ in a subfield of a cyclotomic number field of an odd prime order $l$
Let $l$ be an odd prime number and $\zeta$ be a primitive $l$-th root of unity in $\mathbb{C}$. Let $K = \mathbb{Q}(\zeta)$. Let $A$ be the ring of algebraic integers in $K$. Let $G$ be the Galois group of $\mathbb{Q}(\zeta)/\mathbb{Q}$. $G$ is isomorphic to $(\mathbb{Z}/l\mathbb{Z})^*$. Hence $G$ is a cyclic group of order $l - 1$. Let $f$ be a positive divisor of $l - 1$. Let $e = (l - 1)/f$. There exists a unique subgroup $G_f$ of $G$ whose order is $f$. Let $K_f$ be the fixed subfield of $K$ by $G_f$. Let $A_f$ be the ring of algebraic integers in $K_f$. Let $\mathfrak{l} = (1 - \zeta)A$. $\mathfrak{l}$ is a prime ideal lying over $l$. Let $\mathfrak{l}_f = \mathfrak{l} \cap A_f$.
My question: Is the following proposition true? If yes, how would you prove this?
Proposition
(1) $lA = \mathfrak{l}^{l-1}$.
(2) $lA_f = \mathfrak{l}_f^e$.
(3) $\mathfrak{l}_fA = \mathfrak{l}^f$
-
## 1 Answer
Note: I am going to use $p$ everywhere instead of $l$.
Yes, and it all follows from the fact that $p$ is totally ramified. To see this, show that $N_{K/\mathbb{Q}}(1-\zeta) = p = [A : \mathfrak{p}]$ and notice that $p = \prod_{k \in (\mathbb{Z}/p\mathbb{Z})^\times} (1 - \zeta^k) = \epsilon (1 - \zeta)^{p-1}$ for some cyclotomic unit $\epsilon$. Hence $pA = \mathfrak{p}^{p-1}$.
$(2)$ and $(3)$ are immediate consequences of $(1)$.
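For a concrete check of the identity $\prod_{k=1}^{p-1}(1-\zeta^k)=p$ behind $(1)$ (it is just $\Phi_p(1)=p$), here is a small sympy snippet for the sample prime $p=7$; the choice of prime is arbitrary.

```python
from functools import reduce
from operator import mul
from sympy import cyclotomic_poly, exp, I, pi, N

p = 7  # any odd prime works

# Phi_p(1) = p, and Phi_p(x) = prod_{k=1}^{p-1} (x - zeta^k), so prod (1 - zeta^k) = p
print(cyclotomic_poly(p, 1))      # -> 7

# numerical sanity check of the same product
zeta = exp(2 * pi * I / p)
product = reduce(mul, (1 - zeta**k for k in range(1, p)))
print(N(product))                 # -> 7.0 (up to rounding, with negligible imaginary part)
```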
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 43, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9013872742652893, "perplexity_flag": "head"}
|
http://physics.stackexchange.com/questions/54452/is-momentum-still-conserved-in-non-phase-matched-nonlinear-optical-processes
|
# Is momentum still conserved in non-phase-matched nonlinear optical processes?
To be efficient, a phase-matching condition has to be fulfilled in many nonlinear optical processes. For instance, the phase-matching requirement for second-harmonic generation is
$k_{2\omega}=2k_{\omega}$ or $\Delta k = k_{2\omega}-2k_{\omega}=0$
It is often said that this is equivalent to momentum conservation. However, even if $\Delta k \neq 0$, the process still takes place - although with lower efficiency and a finite coherence length $L = \frac{\pi}{\Delta k}$.
How can the conversion process still occur while momentum is not conserved? Is there momentum transfer to the medium? I guess not, because in many nonlinear processes only virtual photons participate. Do the photons 'borrow' momentum to make the jump? In other words, how does this work?
-
## 1 Answer
There is typically considered to be an uncertainty which softens the matching condition. In the case of momentum, the momentum state is only as well defined as the spatial extent of the interaction allows it to be. If the interaction length is given by $L$, which we can take to be an approximate measure of the position uncertainty $\Delta x$, then the corresponding momentum uncertainty is \begin{equation} \Delta p \ge \frac{\hbar}{\Delta x} \end{equation} so that there is a corresponding uncertainty in $\Delta k = \Delta p / \hbar$, which gives (up to some factors) $\Delta k \sim (\Delta x)^{-1}$. The finitude of the system size, either in time or space, means that the determination of the Fourier coefficients has a certain wiggle room (in $\omega$ or $\mathbf{k}$, respectively).
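A concrete way to see the finite coherence length at work is the standard undepleted-pump result for SHG, $I_{2\omega} \propto L^2\,\mathrm{sinc}^2(\Delta k\, L/2)$: conversion grows for $L \lesssim L_c = \pi/\Delta k$ and then cycles back. The sketch below (Python/numpy) just evaluates that expression; the value of $\Delta k$ is made up and not tied to any particular crystal.

```python
import numpy as np

delta_k = 1.0e5                # assumed phase mismatch in 1/m (illustrative only)
L_c = np.pi / delta_k          # coherence length

L = np.linspace(0.0, 6.0 * L_c, 601)

# Undepleted-pump SHG: I_2w ~ L^2 * sinc^2(dk L / 2).
# np.sinc(x) = sin(pi x) / (pi x), hence the division by pi in the argument.
I_shg = (L * np.sinc(delta_k * L / (2.0 * np.pi))) ** 2

print("coherence length L_c = %.2e m" % L_c)
print("I at L = L_c  :", I_shg[np.argmin(np.abs(L - L_c))])
print("I at L = 2 L_c:", I_shg[np.argmin(np.abs(L - 2 * L_c))])   # back to ~0
```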
-
Thanks, but I have some difficulties with this. Since these processes happen on the macroscopic scale, there should be a classical way of stating that momentum is still conserved. Then we would need to associate momentum with each of the waves and derive somehow that it is conserved even if there is no phase-matching. Put differently, the light in most nonlinear optics experiments can be described purely classically. So I have a hard time accepting you would need quantum mechanics to explain how momentum is conserved in a non-matched process. Any ideas? – RVL Feb 20 at 7:51
1
@RVL I don't think you do need quantum mechanics to make this argument work. An uncertainty relation also holds for classical wavepackets, while the momentum carried by the wave is related to the wavevector in classical EM. If you work through the classical formalism you should get exactly the same physics out. If you want to see it even quicker than that, note that the momentum $p = \hbar k$, so in the uncertainty relation $\hbar \Delta k \geq \hbar/\Delta x$ the $\hbar$ cancels and everything still holds in the classical limit $\hbar \rightarrow 0$. – Mark Mitchison Feb 20 at 11:59
@Mark Good point. Could you remind me, I thought the momentum of a classical EM wave was the Poynting vector/c^2, but I lost the connection to the momentum $p$ you describe for the moment. Thanks to both of you! – RVL Feb 20 at 13:00
The difference is between the momentum per photon ($\mathbf{p}=\hbar \mathbf{k}$) and the momentum density ($\mathbf{p}=c^{-2}\int_A \mathbf{S}$, where the subscripted $A$ means the integral over some area), the difference between quantum and continuous treatments of the fields. – KDN Feb 20 at 15:37
1
@RVL The Poynting vector $\mathbf{S}$ is related to $\mathbf{E} \times \mathbf{B}$. (I will not bother to write all the constants.) Using Faraday's equation, you should be able to show that for plane waves, $\mathbf{k}\times\mathbf{E} = \omega\mathbf{B}$. Plug this into the expression for the Poynting vector, remembering that EM waves are transverse, and you'll get something like $\mathbf{S} \sim \mathbf{k}E_0^2$, where $E_0$ is the amplitude of the electric field. – Mark Mitchison Feb 20 at 16:08
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 23, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9306235909461975, "perplexity_flag": "head"}
|
http://mathoverflow.net/questions/61999/iteration-of-a-nonlinear-map
|
## Iteration of a nonlinear map
Let $T\colon R^n\to R^n$ be a linear map. If we want to study the behavior of $T^kx$ for some $x\in R^n$ as integer $k$ grows, we usually look at the eigenstructure of $T$.
Now let $S\colon R^n\to R^n$ be a linear map plus a nonlinear perturbation. And I want to study the behavior of $S^kx$ for some $x\in R^n$ as integer $k$ grows. I am wondering if there exists a theory that discusses this kind of problem.
-
## 2 Answers
Yes, there is such a theory, and you already gave yourself the answer in the tag! For a particular shape of your map $S$, i.e. a linear hyperbolic map $T$ (meaning: the spectrum is disjoint from the unit circle) plus a small (in a sense to be made precise) Lipschitz perturbation, the Hartman-Grobman theorem tells you that $S$ is conjugate to $T$ by a Hölder continuous homeomorphism. The proof is very elementary and follows from the contraction principle; see e.g. M. Shub's book, Global Stability of Dynamical Systems.
-
Suppose there is a linear $T$ with $\| Sx - Tx \| = o(\|x\|)$ as $x \to 0$, and all eigenvalues of $T$ have absolute value $< 1$. Then there exist positive integer $N$ and positive reals $\delta$ and $\epsilon$ such that for $\|x\| < \delta$, $\|S^N x\| \le (1 - \epsilon) \|x\|$. It follows that for $\|x\|$ sufficiently small, $S^k x \to 0$ as $k \to \infty$. Was that the sort of result you were looking for?
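Here is a toy numerical illustration of that local statement (Python/numpy; the specific linear part and quadratic perturbation are made up): a small starting point is attracted to $0$, while a large one escapes.

```python
import numpy as np

# Linear part with spectral radius < 1, plus a quadratic (hence o(||x||)) perturbation
T = np.array([[0.5, 0.2],
              [0.0, 0.8]])

def S(x):
    return T @ x + np.array([x[0] ** 2, x[0] * x[1]])

for x0 in (np.array([0.1, 0.1]), np.array([3.0, 3.0])):
    x = x0.copy()
    for _ in range(100):
        x = S(x)
        if np.linalg.norm(x) > 1e6:   # clearly escaping; stop iterating
            break
    print("start", x0, "-> after iteration:", x)
```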
-
The nonlinear map $S$ that I am interested in is equal to a linear map $T$ plus a polynomial in $x$. I want to know for what kind of $x$ $S^kx$ will converge as $k$ grows and where it will converge to. – silkrain Apr 17 2011 at 16:04
The general theory (and in particular the Hartman-Grobman theorem, as mentioned by Pietro) can tell you about what happens in a neighbourhood of 0. So if $T$ has $k$ eigenvalues (counted by algebraic multiplicity) with absolute value $< 1$ and none with absolute value $=1$, there will be a $k$-dimensional manifold near 0 on which the iterates will converge to 0. If there are fixed points other than 0, the linearizations around those fixed points will tell you about convergence to them. – Robert Israel Apr 17 2011 at 18:09
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 37, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9338149428367615, "perplexity_flag": "head"}
|
http://mathoverflow.net/questions/32938/surfaces-in-mathbbp3-with-isolated-singularities/33238
|
## Surfaces in $\mathbb{P}^3$ with isolated singularities
It is classically known that every smooth, complex, projective surface $S$ is birational to a surface $S' \subset \mathbb{P}^3$ having only ordinary singularities, i.e. a curve $C$ of double points, containing a finite number of pinch points and a finite number of triple points, which are triple also for $S'$. The proof is obtained by embedding $S$ in $\mathbb{P}^5$ and by taking a projection
$\pi_{L} \colon S \to \mathbb{P}^3$,
where $L \subset \mathbb{P}^5$ is a general line. This is the method originally used by M. Noether in order to prove his famous formula
$\chi(\mathcal{O}_S)=\frac{1}{12}(K_S^2+c_2(S))$,
see Griffiths-Harris "Principles of Algebraic Geometry", p. 600.
My question is now the following:
is it also true that every smooth, complex, projective surface $S$ is birational to a surface $S' \subset \mathbb{P}^3$ having only isolated singularities? And if not, is there any counterexample?
-
## 5 Answers
My instinct tells me no. Here's a pseudo-proof. Take $X$ to be a surface with nontrivial fundamental group e.g. a product of two curves $X=C_1\times C_2$ where say $C_2$ has positive genus. Suppose it's birational to surface $Y\subset \mathbb{P}^3$ with isolated singularities at $p_1,\ldots p_N$. Let `$U=\mathbb{P}^3-\{p_1,\ldots p_N\}$`. Then $\pi_1(U)=\pi_1(\mathbb{P}^3)$ is trivial. Then some version of Zariski-Lefschetz should give $\pi_1(S\cap U)=\pi_1(U)=1$ (this is the "pseudo" part, since I'm too lazy to track this down). We have a diagram $$U\leftarrow U''\to U'\subset X$$ where the arrows are blow ups of points, and the inclusion is as an open set. Then $$\pi_1(U)\cong \pi_1(U'')\cong \pi_1(U')$$ is trivial. However $\pi_1(U')$ would surject onto $\pi_1(X)$ QED.
Remark: It suffices to use $H_1$ in the place of $\pi_1$. I think this would be easier to justify.
Remark 2: As is clear from remarks below, this argument is insufficient even with $H_1$, but perhaps there is a germ of a correct idea.
Some hours later: I no longer feel that this approach is viable. Nevertheless, I believe for whatever irrational reason that there must be a counterexample. One thing is certainly clear, and that is that this is a damn good problem.
-
2
That's what I was also thinking, but maybe it would be safer to assume that both curves have positive genus, to rule out the possibility of cones over plane curves. In fact, just abelian surfaces might work. – damiano Jul 22 2010 at 11:54
Good point. The example of a a cone shows that this nonsense as written. – Donu Arapura Jul 22 2010 at 12:00
I think that there must be a counterexample, too. But, as it is clear from your interesting answers, finding it could be rather difficult. – Francesco Polizzi Jul 23 2010 at 13:09
To the best of my knowledge this is a long-standing open problem. I cannot recall a reference, as this is something I studied in the 1980's, but I recall this being phrased as an unsolved problem from the 19th century Italian school. The conjecture is that no normal surface in P^3 is birational to a smooth surface which has two-dimensional image in its Albanese. One specific case of this that has been studied more extensively are Zariski surfaces: z^n = f(x,y) where f is a polynomial of degree n with only cusps and nodes as singularities. There is a lot of information about when such a surface is irregular, but beyond that not much is known. I believe that even if f is a sextic polynomial it is unknown whether or not the resulting surface can have two-dimensional image in its Albanese. I have heard Catanese ask about the case where S is an abelian surface.
-
@aginensky Can you please add some references to your answer? Where is it phrased as an unsolved problem? Who made the conjecture? – JME Jul 26 2010 at 12:44
@JME I will try and get some references. As I said, it has been a while since I was looking at these things. – aginensky Jul 29 2010 at 0:30
Suppose $S$ is a smooth surface birational to an abelian surface, and $f:S\to S'\subset\mathbb{P}^3$ is a birational morphism. Then $S'$ cannot have isolated singularities.
If it did, one could find a smooth hyperplane section $C\subset S'$ missing the singular points. Since $V=\mathbb{P}^3- S'$ is smooth and affine, $H^i_c(V)=0$ for $i<3$ and so $H^1(S')=H^1(\mathbb{P}^3)=0$ by the exact sequence for $H_c$. On the other hand, let $U=S'-C\simeq S-D$ where $D=f^{-1}(C)$. Consider the long exact cohomology sequences for the pairs $(S,D)$ and $(S',C)$: $$0 \to H^1(S) \to H^1(U) \to H^2_D(S) \to\dots$$ and $$0 \to H^1(S') \to H^1(U) \to H^2_C(S') \to \dots$$ As $H^1(S')=0$, these imply that $H^1(S)$ injects into $H^2_C(S')$. But $C$ is irreducible, and contained in the smooth part of $S'$, so $H^2_C(S')$ is 1-dimensional. (Or you can argue with weights).
Remark: as indicated below this argument is false (and any cohomological argument along the same lines runs into the same problem).
-
Oops, this is also nonsense as written, because $S'-C$ and $S-D$ aren't isomorphic. There is a proof along these lines but I am unable to recall it now. – Tony Scholl Jul 22 2010 at 13:21
I like the argument in principle, but I do have one concern: why is $S'-C\cong S-D$? It seems that the argument could be applied when $S'$ is a cone, leading to a false conclusion. – Donu Arapura Jul 22 2010 at 13:29
OK, I guess my comment was redundant. – Donu Arapura Jul 22 2010 at 13:30
3
It seems this problem is quite hard: in Nakamura and Umezu, Tokyo J Math (18) 1995, the authors show there is no normal quintic surface in $P^3$ birational to an abelian surface. They conjecture the result in arbitrary degree. – Tony Scholl Jul 22 2010 at 21:22
@Tony I did not know this reference, thank you for pointing it out. – Francesco Polizzi Jul 23 2010 at 13:02
This answer is completely rewritten. This is not an actual answer but a thought related to the question. I decided to leave it hear since it is short.
Note first that if there is a regular map from a surface $X$ to $\mathbb P^3$ whose image has only isolated singularities, then $X$ has curves with negative self-intersection. In particular, if $X$ has no such curves then its image in $\mathbb P^3$ is smooth.
Now, suppose we have a surface $X$ with isolated singularities in $\mathbb CP^3$, say of general type and consider the question:
Question. Let $X'$ be the minimal resolution of singularities on $X$. Can we say something about $X$ if $X'$ contains rational $-1$ curves?
-
1
If you blow up the original surface S, you will always have curves with negative self-intersections. The question is asking for BIRATIONAL! – CX Dec 13 at 19:57
I think there may be a confusion between two meanings of "minimal" here: surfaces do admit minimal resolutions (i.e., resolutions through which all other factor) but those needn't be minimal (i.e., they may contain $-1$-curves). – algori Dec 13 at 20:59
1
@Dmitri: the question was not whether there exists a birational morphism to $\mathbb P^3$ whose image has the desired properties, but whether any surface is birational to such a surface in $\mathbb P^3$. The point being that your proof shows that for certain surfaces there is no such morphism, but as CX points out, there still could be a rational map that is not everywhere defined. – Sándor Kovács Dec 14 at 0:46
ps: a minimal surface by any definition may have a curve with negative self-intersection. For instance a (smooth) K3 surface may contain a $(-2)$-curve, yet it is minimal in any which way you like to define minimal. (In particular, it is the minimal resolution of the surface you get when you blow down that $(-2)$-curve). – Sándor Kovács Dec 14 at 0:50
Sandor, sure I understand what you say. I will rewrite this "answer" so there are no doubts in this. – Dmitri Dec 14 at 10:07
Let $S' \subset \mathbb{P}^3$ be the birational projection of a smooth surface $S \subset \mathbb{P}^4$. The general projection theorem of Gruson-Peskine (http://arxiv.org/abs/1010.2399v1) tells you that $S'$ is either smooth or has a curve of double points.
For instance if $S$ is the Severi surface in $\mathbb{P}^4$, then its projection on $\mathbb{P}^3$ (the Steiner surface) has a curve of double points.
So the answer to your question is "never true", unless your surface naturally lives in $\mathbb{P}^3$.
Edit Ok, after some time, I realize that you are interested in a smooth surface (say $S \subset \mathbb{P}^5$) which maps birationally to a surface $S' \subset \mathbb{P}^3$ but for which the map does not come from the ambient projective space. So my answer is useless...
-
2
why should the desired birational map factor through an imbedding into $\mathbb{P}^4$? – Vivek Shende May 10 at 22:25
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 75, "mathjax_display_tex": 4, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9555857181549072, "perplexity_flag": "head"}
|
http://mathoverflow.net/questions/83419?sort=oldest
|
## What is a canonical set of representatives in $GL(n,F)$ for the vertices in the Bruhat Tits building?
$F$ is a non-archimedean field here. To be more precise, I would actually prefer a set of representatives in $B(F)$ for the discrete space $B(F) / B(o)Z(F)$.
This can also be phrased as a question about lattices in $F^n$, but I would prefer to stay on the group level.
-
B denotes the Borel subgroup of GL(N). – Marc Palm Dec 14 2011 at 12:58
1
To clarify further: I assume $B$ is a Borel subgroup, $B(F)$ its $F$ rational points, and I assume $o$ is supposed to be the ring of integers $\mathcal{O}$ of $F$, and $Z(F)$ the center (i.e. diagonal matrices). Correct? – Max Dec 14 2011 at 14:37
correct! ... – Marc Palm Dec 15 2011 at 9:20
## 2 Answers
You somehow want to parametrise the vertices of the building of $G={\rm GL}(n,F)$ : $$G/F^\times K = BK/F^\times K= B(F)/B({\mathfrak o})Z(F)$$ (by Iwasawa decomposition).
For $n=2$ you can easily find representatives, but for $n>2$, it's going to be tricky!
I just give some hints. Write $N$ for the unipotent radical of $B$ and $T$ for the diagonal torus so that $B=T\ltimes N$.
-- If $n,n'\in N$ and $t, t'\in T$, then if $nt\sim n't'$ mod $B(O)Z(F)$, one has $t\sim t'$ mod $Z(F)T(O)$. So you may assume that $t$ is of the form
$$t= {\rm diag}(\varpi^{k_1}, ...,\varpi^{k_n})$$ where $(k_1 ,...,k_n )$ is well defined modulo the diagonal action of $\mathbb Z$ on ${\mathbb Z}^n$.
-- You have $nt \sim n't$ mod $Z(F)B(O)$ iff $n\sim n'$ mod $tN(O)t^{-1}$.
So for each $t$ as above, you need to find a system of representatives of $$N(F)/tN(O)t^{-1}$$ For $n=2$, this is easy. For $n>2$, this seems tricky. I've never tried ...
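Just to spell out the easy case as a sanity check (for $n=2$, normalising $k_2=0$ using the centre): with $t = \mathrm{diag}(\varpi^{k},1)$, $k\in\mathbb{Z}$, one has
$$tN(O)t^{-1} = \left\{ \begin{pmatrix} 1 & \varpi^{k}u \\ 0 & 1 \end{pmatrix} : u \in O \right\},$$
so a system of representatives of $N(F)/tN(O)t^{-1}$ is given by the classes $x \in F/\varpi^{k}O$, e.g. the finite sums $x = \sum_{j<k} a_j \varpi^{j}$ with the $a_j$ running over residue representatives. The vertices of the tree are then parametrised by the pairs $(k,\, x \bmod \varpi^{k}O)$, which recovers the usual description of homothety classes of lattices in $F^2$.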
-
Here $O={\mathfrak o}$ is the ring of integers, and $\varpi$ a uniformizer. – Paul Broussous Dec 14 2011 at 20:14
That was my initial idea too. I have now succeeded by iterating your argument also for the unipotent radical, seen as an iterated semidirect product. Thanks. – Marc Palm Dec 15 2011 at 9:18
Nevertheless, it is certainly solved somewhere. The final partition looks like $$\coprod\limits_{\alpha_1 \in T(F) / T(o)} \coprod\limits_{\alpha_2 \in E_1(F) / E_1(o)^{\alpha_{1}} } \cdots \coprod\limits_{E_n(F) / E_n(o)^{\alpha_1 \alpha_2 \cdots \alpha_{n-1}}},$$ where $E_k$ is the set of upper diagonal matrices with only $k-th$ column nonzero and $H^x = x^{-1} H x$. – Marc Palm Dec 15 2011 at 9:35
The comments together with Paul's answer emphasize the importance of formulating the question more precisely and with some context (what you've read on the subject, for instance). Though I'm not at all a specialist in buildings, I know some of the complicated history of the subject as it evolved into long and highly sophisticated papers by Bruhat-Tits and others. But your question about general (or equally well special) linear groups, which are split over the prime field, goes back to the foundational papers such as the 1965 Publ. Math. IHES paper by Iwahori and Matsumoto, freely available by a quick author search here, followed by the detailed exercises in Chapter IV of Bourbaki's 1968 treatise Groupes et algebres de Lie where the BN-pair structure (or Tits system) is developed into the basic theory of buildings.
In this early work there is a treatment of the special subgroup structure present in a split (Chevalley) group over a standard `$p$`-adic field: fixing a Borel subgroup over the finite residue class field, one can lift it to the `$p$`-adic integers where it becomes an Iwahori subgroup. Such groups are determined up to conjugacy in the ambient algebraic group. Along with a copy of the affine Weyl group, an Iwahori subgroup determines a BN-pair structure and Bruhat decomposition. In turn there are finitely many maximal (proper) "parahoric" subgroups. Their cosets in the big group become the vertices for the resulting building. If you regard the original Borel subgroup as "canonical", this pathway should lead to canonical vertices of the building. As Paul observes, in the case `$n=2$` all of this is fairly easy to write down; here the building is just an infinite tree.
Once you get beyond split groups and ordinary `$p$`-adic extensions of the rationals, a lot more machinery has to be developed in order to work effectively with buildings and subgroup actions on them. But the general linear group, especially in semisimple rank 1, is the natural starting point for combining group theory and combinatorial geometry in a visualizable way.
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 41, "mathjax_display_tex": 4, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9317362308502197, "perplexity_flag": "head"}
|